Blog

  • In rude health

    Reaching 60 is often associated with people slowing down and feeling their age; it is, for example, the age at which Britons become entitled to a free bus pass.

    But Britain’s universal health-care system, the National Health Service (NHS), which celebrates its 60th birthday on July 5, is widely considered to be in better shape now than it was in its fifties – even as baby boomers turn grey and new medicines grow ever more expensive. It would be unthinkable to abolish it and switch to private provision.

    Bashing the NHS has become a British national pastime, and in one of the world’s five biggest employers, shortcomings are inevitable. But those who use the service regularly tend to rate it more positively than those who mainly talk about it. British newspapers are often full of reports of poor management, yet a recent survey showed that 91% of 17 million hospitalized patients rated their care as good, very good or excellent.

    And although the NHS sometimes does poorly in surveys that focus on how well specific diseases are treated, perhaps the most thorough recent assessment – covering equity, efficiency, quality, access, and long and productive lives, by the independent foundation the Commonwealth Fund – ranked it above the health-care systems of Australia, Canada, Germany, New Zealand and the United States.

    Scientific research has been a stated goal of the NHS since its foundation – but one can be forgiven for not knowing it. Over the years, funding for research has been dispersed among regional health-care providers in a system that might have been designed to hinder collaboration with universities and pharmaceutical companies.

    All credit, then, to the NHS’s director-general of research and development, Sally Davies. With the creation of the National Institute for Health Research (NIHR), a virtual body within the NHS, Davies has pulled research funding into broad daylight. By 2011, these funds are expected to amount to approximately £1 billion (US$2 billion).

    With its attention to researchers’ careers, networks of collaboration, and transparent indicators of achievement, Davies’ Best Research for Best Health program is helping to transform the research landscape.

    Plans include virtual organizations to connect universities, hospitals and industry; ten ‘academic health centres’ resembling American university hospitals, which allow researchers to study patients more easily; and somewhere between 15 and 50 ‘health innovation and education clusters’, which the government hopes will spur procedural innovation and which promise better funding for the academics involved. Meanwhile, the NIHR’s collaboration with the Medical Research Council, after a poor start, is beginning to make progress.

    Perhaps the most important step for biomedical science in the NHS lies in opening up the ocean of patient data that the organization has collected over decades.

    Public consultations starting now, Davies says, will lead to ways by which researchers can more easily find suitable patients for studies and clinical trials, and access data that are anonymized but subject to patient permission. In particular, the national breadth and depth of those data will provide researchers in academia and industry with a globally unique resource for highly targeted studies and clinical trials – a key element of translational medicine.

    Such an information system will inevitably raise concerns about privacy. Those concerns will become more acute as genetic testing becomes more predictively powerful. Yet, at the same time, that era will highlight the risk-pooling benefits of universal health care.

    As long as people are not required to share genetic data with private insurance companies – the case in the United Kingdom until at least 2014 – those who suspect they are at risk of ill health would do well to buy generous insurance cover. The genetically fortunate, meanwhile, can save their money and rely on the state. This will squeeze private insurers, suggesting that the NHS’s golden period is yet to come.

  • Materials science: Share corrosion data

    In November 2013, an oil pipeline exploded in the Chinese city of Qingdao, killing 62 people and injuring 136. Eight months later, a similar explosion in Kaohsiung killed 32 people and injured 321.

    The pipelines were made of steel of the same specification and failed after two decades of use in the same environment. The cause was corrosion – the degradation of a material by a chemical or electrochemical reaction with its environment.

    Such disasters are common: every square kilometer of any Chinese city contains more than 30 kilometers of buried pipes, creating tangled networks of oil and gas lines, water mains, and electrical and telecommunications cables. Corrosion is also expensive.

    According to a US survey, corrosion costs six cents for every dollar of GDP in the United States. Globally, this amounts to more than US$4 trillion per year – the equivalent of the damage from 40 Hurricane Katrinas. Half of that cost goes on corrosion prevention and control, the other half on damage and lost productivity.

    Lack of knowledge hinders our ability to prevent failures. Corrosion of underground pipes is influenced, for example, by the composition, microstructure and design of the materials, as well as by a raft of environmental conditions such as soil oxygen level, humidity, salinity, pH, temperature and biological organisms.

    Many industries, including oil, gas, marine and nuclear, collect corrosion data to identify risks, predict the service life of components, and control corrosion. Most of this data is proprietary, and best practices are rarely shared. Oil spills, bridge collapses and other disasters keep on coming.

    The demand for knowledge about corrosion is increasing with the increasing use of advanced materials in medical devices, biosensors, fuel cells, batteries, solar panels and microelectronics. Corrosion is the main restriction on many nanotechnology applications.

    Efforts to make materials data accessible, such as the Materials Genome Initiative (MGI), focus on the ‘birth’ rather than the ‘death’ of materials. An online platform for sharing corrosion data is desperately needed. Access to large volumes and diverse types of corrosion information, which researchers could investigate with data-mining and modelling tools, would improve the forecasting of corrosion failures and anticorrosion design.
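    To illustrate what such a shared platform could enable, here is a minimal sketch of the kind of data-mining workflow envisioned above: pooled corrosion records with environmental factors feeding a regression model that estimates corrosion rates. The dataset, column names and model choice are hypothetical illustrations, not part of any existing platform; the value in practice would come from the scale and diversity of the shared records, not from the model itself.

    ```python
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # Hypothetical pooled corrosion records: environmental factors (see the list
    # above) plus a measured corrosion rate. A real shared platform would hold
    # many thousands of such records from diverse sites and industries.
    records = pd.DataFrame({
        "soil_oxygen_pct":  [2.1, 5.4, 1.8, 6.2, 3.3, 4.7, 2.9, 5.9],
        "humidity_pct":     [88, 45, 92, 30, 75, 55, 81, 40],
        "salinity_g_per_l": [12.0, 0.5, 18.0, 0.2, 8.5, 1.1, 15.0, 0.8],
        "pH":               [6.2, 7.8, 5.9, 8.1, 6.8, 7.4, 6.0, 7.9],
        "temperature_C":    [28, 12, 31, 9, 24, 15, 29, 11],
        "corrosion_rate_mm_per_yr": [0.45, 0.06, 0.61, 0.03, 0.30, 0.09, 0.52, 0.05],
    })

    X = records.drop(columns="corrosion_rate_mm_per_yr")
    y = records["corrosion_rate_mm_per_yr"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    # Fit a simple model and predict the corrosion rate for a new site.
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
    new_site = pd.DataFrame([{"soil_oxygen_pct": 3.0, "humidity_pct": 85,
                              "salinity_g_per_l": 10.0, "pH": 6.5, "temperature_C": 27}])
    print(f"predicted corrosion rate: {model.predict(new_site)[0]:.2f} mm/yr")
    ```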

    Complex processes

    The biggest challenge in corrosion research is accurately predicting how a material will corrode in a given environment. This requires a thorough knowledge of all relevant factors and their interactions.

    Yet precise models for the mechanism are lacking. It is impossible to forecast problems without historical data about material failures under various circumstances. And field performance cannot be assessed in laboratories when environmental parameters are unknown.

    Corrosion data is hard to collect. Damage can take years or even decades to accumulate and any project tracks only a few contributing factors. Data sets need to be combined.

    For example, early models of marine corrosion (such as that occurring at oil-drilling platforms) were unreliable because they considered only the physicochemical properties of seawater (including pH, dissolved oxygen and temperature), not the effects of living organisms. The models have since been improved by the inclusion of genomic data.

    Corrosion depends on local conditions. Steel structures that last for decades in arid parts of inland China fail within months in the moist and salty coastal regions of Southeast Asia.

    Protective polymer coatings that have worked for years at northern latitudes can wear down in weeks near the equator, where higher doses of heat and ultraviolet radiation break chemical bonds more quickly.

    Deriving general corrosion knowledge – such as how particular steels are affected by moisture, salt or air pollution – requires combining studies from many diverse environments. For example, a worldwide survey of weathering steel reviewed 22 years of exposure test results from 108 sites in 22 countries.

    With the increase in global trade, the oil and gas, construction, car, electronics and other industries have called for the sharing of corrosion data between countries to ensure the quality and safety of their products. Millions of cars around the world have been recalled over the years due to unforeseen corrosion problems arising in destination countries.

    China’s 2013 ‘Belt and Road’ initiative, which promotes industrial ties with countries along the Silk Road economic belt between China and the West, poses unprecedented challenges.

    Rapid corrosion assessment, material selection and design will be required as billions of dollars in construction, transportation, energy and telecommunications projects begin in Asia, Africa and Europe.

    Advanced materials introduce entirely new corrosion problems. For example, the electrochemical stability of noble metals such as platinum and gold rapidly declines as their dimensions are reduced to the nanometer scale.

  • Global climate agreement: After the talks

    After years of failure to draft global agreements on climate change, the upcoming UN Paris climate conference may be turning a corner. Diplomats have drafted a workable text that is likely to be adopted. Business and environmental groups are engaged in the process in an unprecedented way.

    Governments, development banks and foundations are raising money to help the poorest countries pay for emissions cuts and prepare for a changing climate – the main sticking point in 2009, when the last major climate conference, in Copenhagen, ended in disarray.

    The United Nations and the French hosts have a sophisticated agenda to bring all these efforts together. Even religious leaders have spoken out loudly about the dangers of uncontrolled climate change.

    The good news from the Paris meetings will build confidence, which is a vital component of effective international cooperation. Governments and firms will invest in a lower-emissions future if they think others will do the same.

    The agreement will demonstrate the feasibility of a new, flexible ‘bottom-up’ mode for climate diplomacy – based on national pledges accommodating different priorities and capabilities. In contrast, the Kyoto Protocol’s rigid goals and timetable attracted few of the world’s emitters.

    Yet a dose of moderation is also needed. Agreements are only possible now because diplomats are postponing the toughest problems, such as how to hold nations accountable.

    Business engagement can prove short-lived when the spotlight shifts. The good news on climate finance has been possible only because the mix of public funding (which is difficult to mobilize and spend effectively) and private funding (which is plentiful but not always aligned with global goals) remains unclear.

    Whether the Paris Conference will be successful or not depends on what happens afterwards. Diplomats will have a lot to do until 2020, when the main agreements take full effect. Civil society – businesses in particular – must move from making bold promises to actually cutting emissions.

    Governments and businesses must build and invest in review and accountability mechanisms to ensure that they are delivering on their promises – an area in which non-governmental organizations (NGOs) have an important role to play. And scientists must conduct research that is directly relevant to policy making, as well as assessing the underlying causes and effects of climate change.

    Engage business

    The most important challenge will be getting business on board. It’s easy for companies to make commitments when the world’s media and political leaders are watching. Changes are difficult to implement when fierce competition makes it risky to invest in more expensive but less polluting technologies and practices.

    The most striking example of business engagement is the pledge made by several firms and governments to cut deforestation.

    In 2010, the Consumer Goods Forum (which includes the largest retailers and consumer-products companies) announced that its members would eliminate deforestation from their supply chains, particularly for palm oil, soy, beef, and timber and pulp. More than 300 companies have followed suit (see www.supply-change.org). Major producers and traders of palm oil in Indonesia – which accounts for half of the world’s supply – have pledged to stop converting forest or peatland.

    Palm oil is a main culprit in fires that have spread a suffocating haze across the region since August, afflicting more than 40 million people and often causing daily emissions of greenhouse gases that exceed those of the United States.

    It is not certain whether these pledges will result in permanent changes in complex supply chains – from how land is managed, to how oil is produced, and finally to consumer products.

    There are already signs of trouble. Most businesses pledge to become more sustainable following pressure from NGOs. (One of us, JPL, led WWF International for nine years, during which time the organization was centrally involved in many such efforts.) Firms fear consumer backlash if their products are tied to environmental destruction. (see go.nature.com/5l8yjm).

    Following the Paris meetings, CEOs will be required to activate changes through the ranks of their organizations and suppliers; NGOs will need both to keep up with the pressure for action and to work with companies to secure comprehensive reforms in major producing countries.

    Shifting entire industries to more sustainable modes of production requires collaboration between government, business, and civil society. Economic incentives must be reimagined so that no firm can profit, for example, by continuing to destroy the forest.

    Solutions will vary by country and region, but common threads include better governance – laws, financial systems, property rights and public governance – and investments in helping countries, communities and small producers to transition to sustainability.

  • A ‘perfect’ agreement in Paris is not essential

    So here we go again. Nations are meeting in Paris for their twenty-first attempt to agree decisive action to avoid what the United Nations defines as dangerous climate change.

    Climate negotiations have pegged this threat at 1.5–2 °C of global warming above pre-industrial levels. With such a guard rail in place, the essential components of a ‘successful’ climate deal are more or less set. A fair chance of staying within 2 °C translates into a limited global carbon budget of about 900 gigatonnes of carbon dioxide from 2015 onwards that must be shared fairly among all countries.

    Can the Paris talks lead to an agreement that binds all countries to this outcome? The last time the world gathered to seek a decisive global agreement on climate change, in Copenhagen in 2009, the message was that, yes, world leaders needed to do nothing less than agree a global, legally binding deal that met the scientific goal of a safe future below 2 °C.

    But since Copenhagen, the global discourse has changed. In 2009, it was possible to show only that we needed to address the climate challenge; it was not easy to show that doing so was feasible. Today the need is clearer than ever. And, more importantly, there is ample evidence that economically competitive, clean-energy solutions can be scaled up.

    Before Copenhagen, economists generally thought that a higher oil price was the best way to enable the transition to a decarbonized future. The surprising reality is that low oil prices appear to be the most effective way to make the transition away from fossil fuels.

    Renewable energy systems can now compete even at low oil prices, which in turn closes the door on the exploitation of unconventional, expensive oil, such as offshore reserves and those in difficult environments such as the Arctic. It also opens a unique window for introducing a global price on carbon – clearly the most effective policy measure to accelerate the transition to fossil-fuel-free energy.

    Experience in industrial sectors shows that new solutions become mainstream in the market and in society only once they have penetrated at least 15–20% of that market. For renewable energy, this penetration has been achieved in enough countries only in the last three to four years.

    In this new situation, is it possible to envisage a transition to a carbon-free world by 2050, even if Paris does not reach the ‘perfect’ agreement? The answer is yes. To get there, the threshold for success in Paris should be set not at ‘solving the climate problem’ through incremental change, but at ‘reassuring the world that we are serious about change’.

    We need a deal that is decisive enough to move the world rapidly towards decarbonisation. A new treaty need not force nations to comply, but should build trust and send the right signal to investors, businesses and societies – that global political leadership is irreversibly turning towards a new sustainable era.

    How ambitious must the Paris agreement be to decisively support such a trajectory? To meet the 2 °C limit, the world would have to cut carbon emissions by about 6% per year. The national pledges laid on the table in Paris do not come anywhere near this.
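    As a rough, illustrative back-of-the-envelope check (the emissions figure and the simple exponential-decline model are our assumptions, not the article’s): if emissions fall at a constant annual rate r from a current level E0, the total ever emitted is about E0/r, so a finite budget pins down how steep the cuts must be and how quickly the required rate grows if action is delayed.

    ```python
    # Illustrative back-of-the-envelope calculation (not from the article):
    # if emissions start at E0 and fall exponentially at a constant rate r,
    # cumulative emissions over all time are E0 / r. Solving for r shows how
    # fast the world must decarbonize to stay within a fixed carbon budget.

    BUDGET_GT = 900.0    # remaining CO2 budget from 2015 for ~2 degC (GtCO2), per the article
    E0_GT_PER_YR = 40.0  # assumed current global CO2 emissions (GtCO2 per year)

    def required_decline_rate(budget_gt, e0, delay_years=0.0):
        """Annual fractional cut needed if cuts start after `delay_years`
        of constant emissions (continuous-decay approximation)."""
        remaining = budget_gt - e0 * delay_years
        if remaining <= 0:
            raise ValueError("budget exhausted before cuts begin")
        return e0 / remaining

    for delay in (0, 5, 10):
        r = required_decline_rate(BUDGET_GT, E0_GT_PER_YR, delay)
        print(f"start cuts after {delay:2d} yr: ~{100 * r:.1f}% reduction per year")
    # Immediate start gives ~4.4% per year; a delay of 5-10 years pushes the
    # figure towards the ~6% per year cited in the text.
    ```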

    From experience, we know that emissions reductions in the 0–2% per annum range are within the scope of incremental policy measures. A range of 2–3% requires ambitious policy measures. Once levels exceed 3–4%, experience indicates that radical measures are needed, such as a carbon tax and phasing out coal power.

    These are the kinds of changes needed to decarbonize the world economy and, above all, to send a clear signal of a shift from incremental to transformational change. Thus success in Paris should be judged by whether the agreement matches a pace of emissions reductions of over 3–4% per year starting in the 2015–20 window.

    This, in turn, suggests that Paris should deliver around 80% of the national pledges required to stay within the 2 °C guard rail, with at least 20% of countries committing to cuts averaging at least 4% per year, in order to build a substantial critical mass of nations committed to decarbonization and able to shift the global argument (see go.nature.com/1uxlyn). Achieving this goal is ambitious but realistic.

    And there is a good chance that, once nations realize the benefits of decarbonisation, they will step up their pledges. Therefore, it is important that the Paris agreement allows pledges to be revisited at least every three to five years.

    It would be dangerous to allow ‘success’ to be reduced to a low level of political achievement so that the world continues along an incremental policy path that has no chance of supporting the transition to decarbonisation.

    GE Additive partners with Indiana EDC to advance binder jetting technology

    Industrial 3D printer manufacturer GE Additive has announced a Binder Jet public-private partnership with Indiana Economic Development Corporation (IEDC), Indiana’s premier economic development agency.

    The contract is ultimately intended to help commercialize GE’s newly developed H2 binder jetting system, all while furthering the state’s extensive manufacturing capabilities. In particular, the pair have agreed to co-invest in R&D for the technology, including factory automation, software development and manufacturing readiness for the region.

    Christine Furstoss, CTO of GE Additive, says: “We are excited by the opportunity IEDC has presented to us. Binder jet is one of the most dynamic areas of additive manufacturing today, and one that the automotive and mobility industry is watching particularly closely.”

    Binder Jet Beta Partner Program

    With GE’s Binder Jet Beta Partner Program gaining rapid traction, the company has now partnered with six major organizations in the technology and automotive sectors. The program aims to rapidly develop the company’s binder jetting technology, which will begin next year with the commercial launch of its prototype H2 system.

    The state of Indiana is of particular interest to GE because it is a U.S. focal point for manufacturing, with 8,500 manufacturing facilities and the highest concentration of manufacturing jobs in the country. Looking at automotive in particular, Indiana is home to more than 500 suppliers and five major OEMs, and generates the country’s second-largest automotive GDP.

    Says Furstoss: “Given Indiana’s strong automotive manufacturing focus, we are very hopeful that this partnership will harness its abundant spirit of innovation and foster new forward-thinking applications – particularly in the fields of automation and software development.”

    As an extension of the Beta Partner Program, the R&D partnership with IEDC will provide a new test bed for working with future partners, customers and SMEs in Indiana to further innovate binder jetting.

    Emerging Manufacturing Collaboration Center

    As part of the Economic Activity Stabilization and Promotion Initiative, IEDC recently set aside $3M to set up a new Emerging Manufacturing Collaboration Center (EMC2) at the 16 Tech Innovation District by the summer of 2021.

    The center will be used to train employees on advanced manufacturing equipment such as GE’s H2; the space will also be used to undertake contract manufacturing as well as to promote the new systems to OEMs.

    GE and IEDC will also run a Virtual Industry Day on 8 December to further broaden the partnership and its potential sub-projects. The partners will discuss broader technology and economic benefits to Indiana, and how EMC2 will improve the state’s manufacturing competitiveness.

    Readers wishing to participate in the event can register to hear from binder jetting experts, see demos of the H2 system, and take part in technical workshops.

    Earlier this year, binder jetting OEM ExOne also announced the start of five similar projects with universities in Pennsylvania to pursue various aspects of binder jet 3D printing. The R&D projects are being funded through the Manufacturing PA Innovation Program established by the Department of Community and Economic Development (DCED).

    Elsewhere, simulation software developer Simufact recently announced the launch of a simulation tool for metal binder jetting in its Simufact Additive program. Users of the software will be able to predict and prevent – in the design phase – distortion effects that can occur on binder jetted parts during post-processing.

    Additive manufacturing contributes to Wabtec’s sustainability achievements

    Global rail and transit manufacturer Wabtec has released its 2020 Sustainability Report detailing the company’s commitment to environmental and social responsibilities.

    The report outlines a series of activities the company has undertaken to date to improve its global environmental performance, including how additive manufacturing has contributed to the firm’s sustainability objectives.

    Rafael Santana, President and CEO of Wabtec, stated, “Wabtec’s position as a global transportation leader gives us a unique perspective on the trends that are affecting our customers and other stakeholders, namely: climate change, automation and digitization, and urbanization.”

    “Our 2020 Sustainability Report outlines a series of aggressive goals to address those trends, improve our performance on global environmental, social and governance matters, and drive a better future for the people and the planet.”

    Wabtec and 3D printing

    Wabtec is a global leader in the provision of equipment, systems and digital solutions for the freight and transit rail sectors. The firm’s portfolio includes highly engineered metal components and systems that it provides to most major rail transit systems around the world.

    Last year, Wabtec became one of the first customers to receive GE Additive’s H2 binder jet metal 3D printer, driven by the aim of increasing the use of additive manufacturing in the transportation industry.

    Recently, the company announced that it had acquired an 11,000-square-foot plot at Neighborhood 91, Pittsburgh’s additive manufacturing hub. Expected to be completed by spring next year, the new facility will be used by Wabtec to produce lighter parts for its transit customers while cutting lead times by 80 percent.

    Improving sustainability through AM

    Currently, Wabtec uses 3D printing in its manufacturing processes to reduce material and energy waste associated with the manufacture of complex assemblies and parts. Integrating additive manufacturing can reduce production waste by 70-80 percent, while significantly reducing time to market by up to 90 percent.

    Wabtec produced over 1,250 3D printed prototypes during 2019, becoming the first rail supplier to incorporate metal 3D printed parts into production on its North American rolling stock. Looking ahead, the company intends to produce more than 25,000 additive manufactured parts by 2025.

    The firm is also employing remanufacturing processes to keep its products in circulation for as long as possible, reducing waste, extending the life of equipment and increasing cost savings.

    According to Wabtec, approximately £296 million worth of end-of-life materials is brought back to its global manufacturing facilities, where it is later reused or recycled with less than one percent going to waste.

    Santana continued, “On almost every continent we are demonstrating the power of Wabtec when we work together to achieve a common objective.” “By focusing on sustainability and accountability, and with an incredible team behind us, I believe we will achieve our goals and create a bigger, stronger Wabtec to move and improve the world.”

    Sustainability efforts in 3D printing sector

    Within industries around the world, sustainability has become an important consideration for all levels of the supply chain. Reducing waste, improving efficiency of operations, integrating additive manufacturing and digitization are all ways in which companies are trying to improve their processes to suit their environmental and social responsibilities.

    Last year, Marie Langer, CEO of German 3D printer OEM EOS, announced that the company would do more with the “positive environmental and social benefits” of 3D printing, while UK-based post-processing specialist Additive Manufacturing Technologies (AMT) outlined four pillars through which it will promote sustainability and safety: no waste, better chemicals, less energy, and less labor and consumables.

    Elsewhere, 3D printing is being used to improve the environmental footprint of spare-part manufacturing, replacing time- and material-intensive traditional methods.

    German engineering group ThyssenKrupp recently partnered with Wilhelmsen Ships Services to deliver 3D printed spare parts for the maritime sector, while petrochemical firm Braskem adopted the DigiPart software of start-up Spare Parts 3D to help optimize its inventory supply chain and ultimately hold less stock.

    Additive manufacturing is also helping firms get closer to the circular economy concept, a notion that seeks to make optimal use of resources to avoid waste. Recent projects in this direction include the production of biobased materials for 3D printing from waste food, and the production of high-performance metal powders from scrap sources by building closed-loop supply chains.

    Materialise reports nearly 20% revenue drop in Q3 2020 financial results

    Belgian software and 3D printing services provider Materialise has reported a revenue drop of nearly twenty percent in its Q3 2020 financial results.

    Materialise’s revenue fell from €50.4 million in the third quarter of 2019 to €40.7 million in the third quarter of 2020, representing a decrease of 19.2 percent. During Q3 2020, the firm’s medical segment grew, but not enough to offset the significant revenue cuts seen within its manufacturing and software divisions.

    The company’s software business proved resilient to the impact of the pandemic in H1 2020, and other industry software developers such as Autodesk also reported revenue growth during Q2 2020. In Q3 2020, however, Materialise’s software revenue declined by 12.7 percent as customers tightened spending amid the economic uncertainty surrounding the outbreak.

    In a call with analysts and investors, executive chairman Peter Leys said that despite the firm’s revenue decline in Q3 2020, there were reasons to be optimistic about Q4. “Given the challenging environment, Materialise has done well this quarter thanks to the continued hard work and inspiring contributions of our entire workforce,” Leys said.

    “While our manufacturing revenue and, to a lesser extent, the software segment declined in the midst of the pandemic, our medical segment grew its revenue by an impressive 11 percent,” he said.

    Materialise Q3 2020 Financial Results

    Materialise reports revenue in three segments: software, medical and manufacturing. Only the company’s medical division showed revenue growth during the third quarter, rising from €15.4 million in Q3 2019 to €17.1 million in Q3 2020, an increase of 10.8 percent.

    The firm’s medical growth was primarily driven by a 14.5 percent increase in revenue derived from its devices and services, while income from its medical software grew 3.1 percent. Materialise’s overall software segment, meanwhile, saw revenue fall from the €10.8 million reported in Q3 2019 to €9.4 million in Q3 2020, representing a decrease of 12.7 percent.

    During Q3 2020, the company’s recurring software revenue increased by 15.9 percent, but this was not reflected in the segment’s performance, indicating that prospective customers are cutting spending. Manufacturing was the worst-performing division of Materialise’s business, producing €14.1 million in Q3 2020, a 41.3 percent drop on the €24.1 million generated in Q3 2019.

    According to the company, the pandemic was the primary reason behind the decrease in its manufacturing revenue, with one of its core business lines showing a revenue shortfall of as much as 60 percent. Materialise reported a drop in demand within its major automotive and aerospace businesses, sectors that typically account for more than half of its industrial revenue.

    Materialise invests in future growth

    Despite the company’s overall decline in revenue during Q3 2020, it has opted to increase its spending on R&D by 4.2 percent and invest in new opportunities for future growth. Earlier this month, Materialise announced a strategic investment in customized eyewear firm Ditto, and Leys explained the rationale behind the $9 million outlay.

    Leys explained, “The alliance aims to strengthen Ditto’s state-of-the-art virtual try-on platform by optimizing materials and adding 3D printing functionalities.” “The sale of Ditto’s digital eyewear solutions will generate a royalty stream for Materialise, and we believe the collaboration will help promote other pioneering initiatives.”

    Alongside its Q3 financials, the firm also announced the acquisition of insole manufacturer Superfeet’s share in the dynamic foot measurement technology of RSscan. Materialise aims to continue working with Superfeet to develop a more comprehensive personalized insole product line and to meet the footcare needs of customers in the medical industry.

    Fried Vancraen, CEO and founder of Materialise, was keen to highlight the importance of the companies’ ongoing partnership. “3D printing and design technology has great potential to help both consumers and health professionals improve comfort, health, and performance through personalized footcare,” Vancraen said.

    “I am particularly excited that we have been able to strengthen our strategic partnership with Superfeet in such a way that both partners will be able to focus solely on their personal strengths,” he said.

    A Strong End of the Year for Materialise?

    In his closing remarks on the earnings call, Leys stated that although the company’s industrial activities are slowly recovering from the effects of the pandemic, this will not be immediately apparent in its financials.

    Materialise’s Q3 2020 figures still benefited from backorders, and as demand decreased over the course of the year, that is likely to be reflected in its future results.

  • ‘Tantalizing’ results of 2 experiments defy physics rulebook

    Preliminary results from two experiments suggest that something might be wrong with the way physicists think the universe works, a possibility that has both astonished and thrilled the field of particle physics.

    In two separate long-running experiments in the United States and Europe, tiny particles called muons are not quite behaving the way they are expected to. The confounding results – if proven to be true – reveal major problems with the rulebook physicists use to describe and understand how the universe works at the subatomic level.

    “We think we might be swimming in a sea of background particles all the time that just haven’t been directly discovered,” Fermilab experiment co-chief scientist Chris Polly said at a news conference. “There might be monsters we haven’t yet imagined that are emerging from the vacuum, interacting with our muons, and this gives us a window into seeing them.”

    The rulebook, called the Standard Model, was developed about 50 years ago. Experiments performed over decades affirmed, again and again, that its descriptions of the particles and the forces that make up and govern the universe were pretty much on the mark. Until now.

    “New particles, new physics may be beyond our research,” said Wayne State University particle physicist Alexey Petrov. “It’s tantalizing.”

    The United States Energy Department’s Fermilab announced on Wednesday the results of 8.2 billion races along a track outside Chicago. That may sound ho-hum to most people, but to physicists it is a big deal: the muons’ magnetic behaviour is not what the Standard Model says it should be. This comes on top of new results published last month from the Large Hadron Collider at the European Center for Nuclear Research, which found a surprising imbalance in the proportions of particles produced in the aftermath of high-speed collisions.

    If confirmed, the U.S. results would be the biggest finding in the bizarre world of subatomic particles in nearly 10 years, since the discovery of the Higgs boson, often called the “God particle”, said Aida El-Khadra of the University of Illinois, who works on theoretical physics for the Fermilab experiment.

    The point of the experiments, explains theoretical physicist David Kaplan of Johns Hopkins University, is to pick apart particles and find out whether there is “something strange going on” with both the particles and the seemingly empty space between them.

    “The secrets don’t just live in matter. They live in something that seems to fill in all of space and time. These are quantum fields,” Kaplan said. “We’re putting energy into the vacuum and seeing what comes out.”

    Both sets of results involve the strange, fleeting particles called muons. The muon is a heavier cousin of the electron that orbits an atom’s centre. But the muon is not part of the atom; it is unstable and normally exists for only about two microseconds. After it was discovered in cosmic rays in 1936, it confounded scientists so much that a famous physicist asked, “Who ordered that?”

    “From the very beginning it had physicists scratching their heads,” said Graziano Venanzoni, an experimental physicist at an Italian national laboratory, who is one of the top scientists on the U.S. Fermilab experiment, called Muon g-2.

    The experiment sends muons around a magnetized track that keeps the particles in existence long enough for researchers to get a closer look at them. Preliminary results suggest that the muons’ magnetic “spin” is off by 0.1% from what the Standard Model predicts. That may not sound like much, but to particle physicists it is huge – more than enough to upend current understanding.

    Researchers need another year or two to finish analysing the results of all the laps around the 50-ft (14-m) track. If the results hold, it will count as a major discovery, Venanzoni said.

    At CERN, home of the world’s largest atom smasher, physicists have been crashing protons into one another. One of the many separate experiments there measures what happens when particles called beauty, or bottom, quarks collide.

    The Standard Model predicts that these beauty quark crashes should result in equal numbers of electrons and muons. It’s like flipping a coin 1,000 times and expecting roughly equal numbers of heads and tails, said the head of the Large Hadron Collider beauty (LHCb) experiment.

    But that did not happen.

    Researchers pored over several years of data covering a few thousand such collisions and found a 15% difference, with significantly more electrons than muons, said experiment researcher Sheldon Stone of Syracuse University.
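    To see why a 15% imbalance is striking when equal numbers are expected, consider the coin-flip analogy above in rough numbers. The sketch below is purely illustrative and is not the experiment’s actual statistical analysis; the flip count and imbalance are made-up stand-ins.

    ```python
    import math

    # Coin-flip analogy (illustrative only, not the experiment's real analysis):
    # flip a fair coin n times; heads and tails should come out roughly equal.
    n = 1000                             # number of flips (stand-in for a few thousand decays)
    p = 0.5                              # fair-coin probability of heads
    expected = n * p                     # 500 heads expected
    sigma = math.sqrt(n * p * (1 - p))   # binomial standard deviation, ~15.8

    # A 15% imbalance corresponds to roughly 575 heads versus 425 tails.
    observed = 575
    z = (observed - expected) / sigma
    print(f"expected {expected:.0f} +/- {sigma:.1f} heads; observed {observed}")
    print(f"that is about {z:.1f} standard deviations from a fair coin")
    # ~4.7 sigma: far too large to dismiss as luck for a fair coin, which is why
    # a persistent 15% electron/muon imbalance would be so intriguing.
    ```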

    Neither experiment’s result is being called an official discovery yet, because there is still a small chance that the outcomes are statistical quirks. The researchers said that running the experiments more times – which should happen in both cases within a year or two – could show whether the results meet the incredibly stringent statistical requirements physics demands.

    Field guides: Scientists bolster evidence of new physics in the Muon g-2 experiment

    Scientists are testing our fundamental understanding of the universe, and there is much to discover.

    What do touch screens, radiation therapy and shrink wrap have in common? They were all made possible by particle physics research. Discovering how the universe works at the smallest scales often leads to enormous advances in the technology we use every day.

    Scientists at the US Department of Energy’s (DOE) Argonne National Laboratory and Fermi National Accelerator Laboratory, along with collaborators from 46 other institutions and seven countries, are conducting an experiment to put our current understanding of the universe to the test.

    Its first result points to the existence of undiscovered particles or forces. This new physics could help explain long-standing scientific mysteries, and the new insight adds to a store of information that scientists can tap when modeling our universe and developing new technologies.

    The experiment, Muon g-2 (pronounced “muon g minus two”), builds on one that began at DOE’s Brookhaven National Laboratory in the ’90s, in which scientists measured a magnetic property of a fundamental particle called the muon.

    The Brookhaven experiment yielded a result that differed from the value predicted by the Standard Model, scientists’ best description yet of the makeup and behavior of the universe. The new experiment is a recreation of Brookhaven’s, designed to challenge or confirm the discrepancy with higher precision.

    The Standard Model predicts the muon’s g-factor very precisely – a value that tells scientists how this particle behaves in a magnetic field. The g-factor is known to be close to two, and the experiments measure its deviation from two, hence the name Muon g-2.
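    In symbols, the quantity the experiments chase is usually written as the muon’s anomalous magnetic moment. The definition below is standard textbook background rather than something stated in the article:

    ```latex
    % Anomalous magnetic moment of the muon: the fractional deviation of the
    % g-factor from the Dirac value of exactly 2.
    a_\mu \;=\; \frac{g_\mu - 2}{2}
    % The Standard Model predicts a_mu \approx 0.0011659, so the experiments are
    % sensitive to shifts in g of less than one part per million.
    ```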

    The experiment at Brookhaven indicated that the g-factor differed from the theoretical prediction by a few parts per million. This minuscule difference hinted at the existence of unknown interactions between the muon and the magnetic field – interactions that may involve new particles or forces.

    The first result from the new experiment strongly agrees with Brookhaven, reinforcing the evidence that there is new physics to discover.

    The combined results from Fermilab and Brookhaven show a difference from the Standard Model at a significance of 4.2 sigma (standard deviations), slightly short of the 5 sigma that scientists require to claim a discovery, but still compelling evidence of new physics. The chance that the results are a statistical fluctuation is approximately 1 in 40,000.
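    As a rough cross-check of those odds (an illustrative calculation under a simple Gaussian assumption, not part of the collaboration’s analysis), the tail probability at 4.2 standard deviations can be computed directly:

    ```python
    from scipy.stats import norm

    # Probability of a statistical fluctuation at least as large as the observed
    # discrepancy, assuming Gaussian statistics (illustrative cross-check only).
    sigma = 4.2
    p_one_sided = norm.sf(sigma)          # upper-tail probability
    print(f"one-sided p-value at {sigma} sigma: {p_one_sided:.2e}")
    print(f"roughly 1 in {1 / p_one_sided:,.0f}")
    # ~1.3e-5, i.e. about 1 in 75,000 one-sided (about 1 in 37,000 two-sided),
    # in the same ballpark as the 'approximately 1 in 40,000' quoted above.
    # The 5-sigma discovery threshold corresponds to about 1 in 3.5 million.
    print(f"5-sigma threshold: 1 in {1 / norm.sf(5):,.0f}")
    ```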

    Physics beyond the Standard Model could help explain esoteric phenomena in particle physics, such as the nature of dark matter, a mysterious and pervasive substance that physicists know exists but have yet to detect directly.

    “This is an incredibly exciting result,” said Argonne’s Ran Hong, a postdoctoral appointee who has worked on the Muon g-2 experiment for over four years. “These findings may have major implications for future particle physics experiments and may lead to a stronger understanding of how the universe works.”

    The Argonne team of scientists contributed significantly to the success of the experiment. The original team, assembled and led by physicist Peter Winter, includes Argonne’s Hong and Simon Corrodi as well as Suvarna Ramachandran and Joe Grange, who have since left Argonne.

    “This team has an impressive and unique skill set, with deep expertise in hardware, operational planning and data analysis,” said Winter, who leads the Muon g-2 contributions from Argonne. “They contributed significantly to the experiment, and we could not have achieved these results without their work.”

    To measure the muon’s true g-factor, Fermilab scientists produce a beam of muons that travel in a circle through a large, hollow ring in the presence of a strong magnetic field. That field keeps the muons in the ring and causes the direction of each muon’s spin to rotate. The rotation, called precession, is similar to the precession of the Earth’s axis, only very, very much faster.

    To calculate the g-factor to the desired accuracy, scientists need to measure two values with great certainty. One is the rate of the muon’s spin precession as it traverses the ring. The second is the strength of the magnetic field surrounding the muon, which affects its precession. That is where Argonne comes in.

    Local tour

    Although the muons travel through an impressively uniform magnetic field, changes in ambient temperature and effects from the experiment’s hardware cause slight variations around the ring. Even these small changes in field strength, if not accounted for, can significantly affect the accuracy of the g-factor calculation.

    To correct for the field variations, the scientists constantly monitor the drifting field using hundreds of probes mounted on the walls of the ring. In addition, every three days they send a trolley around the ring to measure the field strength where the muon beam actually passes.
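    The way such field maps typically feed into an analysis can be sketched in a few lines: the field sampled at many points around the ring is averaged, weighted by where the muons actually spend their time. The numbers and code below are made-up illustrations of that general idea, not the collaboration’s analysis.

    ```python
    import numpy as np

    # Schematic illustration (made-up numbers, not the experiment's real data):
    # combine a trolley field map with the muon beam's distribution around the
    # ring to estimate the field the muons actually experience.

    azimuth = np.linspace(0, 2 * np.pi, 360, endpoint=False)  # positions around the ring

    # Hypothetical trolley map: a nominal 1.45 T field with tiny azimuthal variations.
    field_map_tesla = 1.45 + 5e-6 * np.cos(azimuth) + 2e-6 * np.sin(3 * azimuth)

    # Hypothetical muon beam intensity versus azimuth (slightly non-uniform).
    beam_weight = 1.0 + 0.05 * np.cos(azimuth)

    # What matters is the field averaged over where the muons are,
    # not a simple unweighted mean of the map.
    b_weighted = np.average(field_map_tesla, weights=beam_weight)
    b_unweighted = field_map_tesla.mean()
    print(f"unweighted mean field : {b_unweighted:.9f} T")
    print(f"beam-weighted field   : {b_weighted:.9f} T")
    print(f"difference            : {(b_weighted - b_unweighted) * 1e9:.1f} nT")
    ```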

  • The spintronics technology revolution could be just a hopfion away

    A decade ago, the discovery of quasiparticles called magnetic skyrmions provided important new clues to how exotic spin textures could enable spintronics, a new class of electronics that uses the orientation of an electron’s spin rather than its charge to encode data.

    But although scientists have made great progress in this very young field, they still do not fully understand how to design spintronic materials that would allow for ultrasmall, ultrafast, low-power devices.

    Skyrmions may look promising, but scientists have long treated them as merely 2D objects. Recent studies, however, have suggested that 2D skyrmions may actually be the genesis of a 3D spin pattern known as a hopfion. But no one had been able to prove experimentally that magnetic hopfions exist at the nanoscale.

    Now, a team of researchers co-led by Berkeley Lab has reported, in Nature Communications, the first demonstration and observation of 3D hopfions emerging from skyrmions at the nanoscale (billionths of a metre) in a magnetic system. The researchers say their discovery is a major step forward in realizing high-density, high-speed, low-power, yet ultrastable magnetic memory devices that exploit the intrinsic power of electron spin.

    Peter Fischer, a senior scientist in Berkeley Lab’s Materials Sciences Division and senior author on the study, said: “Not only did we prove that complex spin textures like 3D hopfions exist — we also showed how to study, and therefore use, them.”

    Fischer, who also holds a position in physics at UC Santa Cruz, added: “To understand how hopfions really work, we need to know how to make them and study them. This work was possible only because we have these amazing tools at Berkeley Lab and our collaborative partnerships with scientists around the world.”

    According to previous studies, hopfions, unlike skyrmions, do not drift when they move through a device, making them excellent candidates for data technologies. Furthermore, theory collaborators in the United Kingdom had predicted that hopfions could emerge from a multilayer 2D magnetic system.

    The current study is the first to put those predictions to the test, Fischer said.

    Noah Kent, a Ph.D. student in physics at UC Santa Cruz and a member of Fischer’s group at Berkeley Lab, worked with staff at Berkeley Lab’s Molecular Foundry to fabricate magnetic nanopillars from layers of iridium, cobalt and platinum.

    The multilayer materials were produced by UC Berkeley postdoctoral scholar Neil Reynolds under the supervision of co-senior author Frances Hellman, who holds the titles of senior faculty scientist in Berkeley Lab’s Materials Sciences Division and professor of physics and of materials science and engineering at UC Berkeley. She also leads the Department of Energy’s Non-Equilibrium Magnetic Materials (NEMM) program, which supported this study.

    Hopfions and skyrmions are known to coexist in magnetic materials, but they have characteristically different spin patterns in three dimensions.

    Therefore, to distinguish them, the researchers used a combination of two advanced magnetic X-ray microscopy techniques — X-PEEM (X-ray photoemission electron microscopy) at Berkeley Lab’s synchrotron user facility, the Advanced Light Source, and magnetic soft X-ray transmission microscopy (MTXM) at ALBA, a synchrotron light facility in Barcelona, Spain — to image the distinct spin patterns of hopfions and skyrmions.

    To corroborate their observations, the researchers then performed detailed simulations of how 2D skyrmions evolve into 3D hopfions in carefully designed multilayer structures inside a magnetic device, and of how these would appear when imaged with polarized X-ray light.

    “Simulations are a vital part of this process, enabling us to understand the experimental images and to design structures that would support hopfions, skyrmions, or other designed 3D spin structures,” Hellman said.

    To understand how hopfions will ultimately function in a device, the researchers plan to employ Berkeley Lab’s unique capabilities and world-class research facilities – which Fischer describes as “essential for carrying out such interdisciplinary work” – to further study the dynamic behavior of these exotic quasiparticles.

    “We have long known that spin textures are nearly always three-dimensional, even in relatively thin films, but direct imaging has been experimentally challenging,” Hellman said. “The evidence here is exciting, and it opens doors to discovering and exploring even more exotic and potentially significant 3D spin structures.”