
Physicists discuss the future of machine learning and artificial intelligence

12 November 2025 at 15:00
Looking ahead to the future of machine learning: (clockwise from top left) Jay Lee, Jimeng Sun, Pierre Gentine and Kyle Cranmer.

IOP Publishing’s Machine Learning series is the world’s first open-access journal series dedicated to the application and development of machine learning (ML) and artificial intelligence (AI) for the sciences.

Part of the series is Machine Learning: Science and Technology, launched in 2019, which bridges applications of and advances in machine learning across the sciences. Machine Learning: Earth is dedicated to the application of ML and AI across all areas of Earth, environmental and climate sciences, while Machine Learning: Health covers healthcare, medical, biological, clinical and health sciences, and Machine Learning: Engineering focuses on applying AI and non-traditional machine learning to the most complex engineering challenges.

Here, the editors-in-chief (EiC) of the four journals discuss the growing importance of machine learning and their plans for the future.

Kyle Cranmer is a particle physicist and data scientist at the University of Wisconsin-Madison and is EiC of Machine Learning: Science and Technology (MLST). Pierre Gentine is a geophysicist at Columbia University and is EiC of Machine Learning: Earth. Jimeng Sun is a biophysicist at the University of Illinois at Urbana-Champaign and is EiC of Machine Learning: Health. Mechanical engineer Jay Lee is from the University of Maryland and is EiC of Machine Learning: Engineering.

To what do you attribute the huge growth over the past decade in research into and use of machine learning?

Kyle Cranmer (KC): It is due to a convergence of multiple factors. The initial success of deep learning was driven largely by benchmark datasets, advances in computing with graphics processing units, and some clever algorithmic tricks. Since then, we’ve seen a huge investment in powerful, easy-to-use tools that have dramatically lowered the barrier to entry and driven extraordinary progress.

Pierre Gentine (PG): Machine learning has been transforming many fields of physics, as it can accelerate physics simulations, better handle diverse sources of data (multimodality) and help us make better predictions.

Jimeng Sun (JS): Over the past decade, we have seen machine learning models consistently reach — and in some cases surpass — human-level performance on real-world tasks. This is not just in benchmark datasets, but in areas that directly impact operational efficiency and accuracy, such as medical imaging interpretation, clinical documentation, and speech recognition. Once ML proved it could perform reliably at human levels, many domains recognized its potential to transform labour-intensive processes.

Jay Lee (JL): Traditionally, ML growth is based on the development of three elements: algorithms, big data and computing. The past decade’s growth in ML research is due to the perfect storm of abundant data, powerful computing, open tools, commercial incentives and groundbreaking discoveries, all occurring in a highly interconnected global ecosystem.

What areas of machine learning excite you the most and why?

KC: The advances in generative AI and self-supervised learning are very exciting. By generative AI, I don’t mean Large Language Models — though those are exciting too — but probabilistic ML models that can be useful in a huge number of scientific applications. The advances in self-supervised learning also allow us to imagine potential uses of ML beyond well-understood supervised learning tasks.

PG: I am very interested in the use of ML for climate simulations and fluid dynamics simulations.

JS: The emergence of agentic systems in healthcare — AI systems that can reason, plan, and interact with humans to accomplish complex goals. A compelling example is in clinical trial workflow optimization. An agentic AI could help coordinate protocol development, automatically identify eligible patients, monitor recruitment progress, and even suggest adaptive changes to trial design based on interim data. This isn’t about replacing human judgment — it’s about creating intelligent collaborators that amplify expertise, improve efficiency, and ultimately accelerate the path from research to patient benefit.

JL: One exciting area is generative and multimodal ML — integrating text, images, video and more — which is transforming human–AI interaction, robotics and autonomous systems. Equally exciting is applying ML to non-traditional domains like semiconductor fabs, smart grids and electric vehicles, where complex engineering systems demand new kinds of intelligence.

What vision do you have for your journal in the coming years?

KC: The need for a venue to propagate advances in AI/ML in the sciences is clear. The large AI conferences are under stress, and their review system is designed to be a filter, not a mechanism to ensure quality, improve clarity and disseminate progress. The large AI conferences also aren’t very welcoming to user-inspired research, often casting that work as purely applied. Similarly, innovation in AI/ML often takes a back seat in physics journals, which slows the propagation of those ideas to other fields. My vision for MLST is to fill this gap and nurture the community that embraces AI/ML research inspired by the physical sciences.

PG: I hope we can demonstrate that machine learning is more than a nice tool – that it can play a fundamental role in physics and Earth sciences, especially when it comes to better simulating and understanding the world.

JS: I see Machine Learning: Health becoming the premier venue for rigorous ML–health research — a place where technical novelty and genuine clinical impact go hand in hand. We want to publish work that not only advances algorithms but also demonstrates clear value in improving health outcomes and healthcare delivery. Equally important, we aim to champion open and reproducible science. That means encouraging authors to share code, data, and benchmarks whenever possible, and setting high standards for transparency in methods and reporting. By doing so, we can accelerate the pace of discovery, foster trust in AI systems, and ensure that our field’s breakthroughs are accessible to — and verifiable by — the global community.

JL:  Machine Learning: Engineering envisions becoming the global platform where ML meets engineering. By fostering collaboration, ensuring rigour and interpretability, and focusing on real-world impact, we aim to redefine how AI addresses humanity’s most complex engineering challenges.

Playing games by the quantum rulebook expends less energy

12 November 2025 at 09:00

Games played under the laws of quantum mechanics dissipate less energy than their classical equivalents. This is the finding of researchers at Singapore’s Nanyang Technological University (NTU), who worked with colleagues in the UK, Austria and the US to apply the mathematics of game theory to quantum information. The researchers also found that for more complex game strategies, the quantum-classical energy difference can increase without bound, raising the possibility of a “quantum advantage” in energy dissipation.

Game theory is the field of mathematics that aims to formally understand the payoff or gains that a person or other entity (usually called an agent) will get from following a certain strategy. Concepts from game theory are often applied to studies of quantum information, especially when trying to understand whether agents who can use the laws of quantum physics can achieve a better payoff in the game.

In the latest work, which is published in Physical Review Letters, Jayne Thompson, Mile Gu and colleagues approached the problem from a different direction. Rather than focusing on differences in payoffs, they asked how much energy must be dissipated to achieve identical payoffs for games played under the laws of classical versus quantum physics. In doing so, they were guided by Landauer’s principle, an important concept in thermodynamics and information theory that states that there is a minimum energy cost to erasing a piece of information.

This Landauer minimum is known to hold for both classical and quantum systems. However, in practice systems will spend more than the minimum energy erasing memory to make space for new information, and this energy will be dissipated as heat. What the NTU team showed is that this extra heat dissipation can be reduced in the quantum system compared to the classical one.
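To get a sense of the scales involved, the Landauer bound can be evaluated directly. The short calculation below is an illustrative aside, not part of the NTU analysis:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit(temperature_kelvin: float) -> float:
    """Minimum heat (in joules) dissipated when erasing one bit of information."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature the bound is about 2.9e-21 J per bit. Real devices
# dissipate far more than this minimum, and it is that excess dissipation
# which the NTU team showed can be smaller for quantum agents.
print(f"{landauer_limit(300):.2e} J per bit at 300 K")
```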

Planning for future contingencies

To understand why, consider that when a classical agent creates a strategy, it must plan for all possible future contingencies. This means it stores possibilities that never occur, wasting resources. Thompson explains this with a simple analogy. Suppose you are packing to go on a day out. Because you are not sure what the weather is going to be, you must pack items to cover all possible weather outcomes. If it’s sunny, you’d like sunglasses. If it rains, you’ll need your umbrella. But if you only end up using one of these items, you’ll have wasted space in your bag.

“It turns out that the same principle applies to information,” explains Thompson. “Depending on future outcomes, some stored information may turn out to be unnecessary – yet an agent must still maintain it to stay ready for any contingency.”

For a classical system, this can be a very wasteful process. Quantum systems, however, can use superposition to store past information more efficiently. When systems in a quantum superposition are measured, they probabilistically reveal an outcome associated with only one of the states in the superposition. Hence, while superposition can be used to store several possible pasts at once, upon measurement all excess information is automatically erased, “almost as if they had never stored this information at all,” Thompson explains.

The upshot is that because information erasure has close ties to energy dissipation, this gives quantum systems an energetic advantage. “This is a fantastic result focusing on the physical aspect that many other approaches neglect,” says Vlatko Vedral, a physicist at the University of Oxford, UK, who was not involved in the research.

Implications of the research

Gu and Thompson say their result could have implications for the large language models (LLMs) behind popular AI tools such as ChatGPT, as it suggests there might be theoretical advantages, from an energy consumption point of view, in using quantum computers to run them.

Another, more foundational question they hope to understand regarding LLMs is the inherent asymmetry in their behaviour. “It is likely a lot more difficult for an LLM to write a book from back cover to front cover, as opposed to in the more conventional temporal order,” Thompson notes. When considered from an information-theoretic point of view, the two tasks are equivalent, making this asymmetry somewhat surprising.

In Thompson and Gu’s view, taking waste into consideration could shed light on this asymmetry. “It is likely we have to waste more information to go in one direction over the other,” Thompson says, “and we have some tools here which could be used to analyse this”.

For Vedral, the result also has philosophical implications. If quantum agents are more optimal, he says, it “surely is telling us that the most coherent picture of the universe is the one where the agents are also quantum and not just the underlying processes that they observe”.

Quantum computing and AI join forces for particle physics

23 October 2025 at 13:57

This episode of the Physics World Weekly podcast explores how quantum computing and artificial intelligence can be combined to help physicists search for rare interactions in data from an upgraded Large Hadron Collider.

My guest is Javier Toledo-Marín, and we spoke at the Perimeter Institute in Waterloo, Canada. As well as having an appointment at Perimeter, Toledo-Marín is also associated with the TRIUMF accelerator centre in Vancouver.

Toledo-Marín and colleagues have recently published a paper called “Conditioned quantum-assisted deep generative surrogate for particle–calorimeter interactions”.

This podcast is supported by Delft Circuits.

As gate-based quantum computing continues to scale, Delft Circuits provides the I/O solutions that make it possible.

How to solve the ‘future of physics’ problem

22 October 2025 at 10:00

I hugely enjoyed physics when I was a youngster. I had the opportunity both at home and school to create my own projects, which saw me make electronic circuits, crazy flying models like delta-wings and autogiros, and even a gas chromatograph with a home-made chart recorder. Eventually, this experience made me good enough to repair TV sets, and work in an R&D lab in the holidays devising new electronic flow controls.

That enjoyment continued beyond school. I ended up doing a physics degree at the University of Oxford before working on the discovery of the gluon at the DESY lab in Hamburg for my PhD. Since then I have used physics in industry – first with British Oxygen/Linde and later with Air Products & Chemicals – to solve all sorts of different problems, build innovative devices and file patents.

While some students have a similarly positive school experience and subsequent career path, not enough do. Quite simply, physics at school is the key to so many important, useful developments, both within and beyond physics. But we have a physics education problem, or to put it another way – a “future of physics” problem.

There are just not enough school students enjoying and learning physics. On top of that there are not enough teachers enjoying physics and not enough students doing practical physics. The education problem is bad for physics and for many other subjects that draw on physics. Alas, it’s not a new problem but one that has been developing for years.

Problem solving

Many good points about the future of physics learning were made by the Institute of Physics in its 2024 report Fundamentals of 11 to 19 Physics. The report called for more physics lessons to have a practical element and encouraged more 16-year-old students in England, Wales and Northern Ireland to take AS-level physics at 17 so that they carry their GCSE learning at least one step further.

Doing so would furnish students who are aiming to study another science or a technical subject with the necessary skills and give them the option to take physics A-level. Another recommendation is to link physics more closely to T-levels – two-year vocational courses in England for 16–19 year olds that are equivalent to A-levels – so that students following that path get a background in key aspects of physics, for example in engineering, construction, design and health.

But do all these suggestions solve the problem? I don’t think they are enough and we need to go further. The key change to fix the problem, I believe, is to have student groups invent, build and test their own projects. Ideally this should happen before GCSE level so that students have the enthusiasm and background knowledge to carry them happily forward into A-level physics. They will benefit from “pull learning” – pulling in knowledge and active learning that they will remember for life. And they will acquire wider life skills too.

Developing skillsets

During my time in industry, I did outreach work with schools every few weeks and gave talks with demonstrations at the Royal Institution and the Franklin Institute. For many years I also ran a Saturday Science club in Guildford, Surrey, for pupils aged 8–15.

Based on this, I wrote four Saturday Science books about the many playful and original demonstrations and projects that came out of it. Then at the University of Surrey, as a visiting professor, I had small teams of final-year students who devised extraordinary engineering – designing superguns for space launches, 3D printers for full-size buildings and volcanic power plants inter alia. A bonus was that other staff working with the students got more adventurous too.

But that was working with students already committed to a scientific path. So lately I’ve been working with teachers to get students to devise and build their own innovative projects. We’ve had 14–15-year-old state-school students in groups of three or four, brainstorming projects, sketching possible designs and gathering background information. We help them, and get A-level students to help too (they gain teaching experience in the process). Students not only learn physics better but also pick up important life skills like brainstorming, team-working, practical work, analysis and presentations.

We’ve seen lots of ingenuity and some great projects, such as an ultrasonic scanner to sense the wetness of cloth; a system to teach guitar by lighting up LEDs along the guitar neck; and a device that measures breathing via light passing through a band of Lycra worn below the ribs. We’ve also seen the value of failure, both from mistakes and from genuine technical problems.

Best of all, we’ve also noticed what might be dubbed the “combination bonus” – students having to think about how they combine their knowledge of one area of physics with another. A project involving a sensor, for example, will often involve electronics as well as the physics of the sensor, and so students’ knowledge of both areas is enhanced.

Some teachers may question how you mark such projects. The answer is don’t mark them! Project work and especially group work is difficult to mark fairly and accurately, and the enthusiasm and increased learning by students working on innovative projects will feed through into standard school exam results.

Not trying to grade such projects will mean more students go on to study physics further, potentially to do a physics-related extended project qualification – equivalent to half an A-level where students research a topic to university level – and do it well. Long term, more students will take physics with them into the world of work, from physics to engineering or medicine, from research to design or teaching.

Such projects are often fun for students and teachers. Teachers are often intrigued and amazed by students’ ideas and ingenuity. So, let’s choose to do student-invented project work at school and let’s finally solve the future of physics problem.

A recipe for quantum chaos

22 October 2025 at 09:44

The control of large, strongly coupled, multi-component quantum systems with complex dynamics is a challenging task.

It is, however, an essential prerequisite for the design of quantum computing platforms and for the benchmarking of quantum simulators.

A key concept here is that of quantum ergodicity. This is because quantum ergodic dynamics can be harnessed to generate highly entangled quantum states.

In classical statistical mechanics, an ergodic system evolving over time will explore all possible microstates uniformly. Mathematically, this means that a sufficiently large collection of random samples from an ergodic process can represent the average statistical properties of the entire process.

Quantum ergodicity is simply the extension of this concept to the quantum realm.

Closely related to this is the idea of chaos. A chaotic system is one that is very sensitive to its initial conditions. Small changes can be amplified over time, causing large changes in the future.

The ideas of chaos and ergodicity are intrinsically linked as chaotic dynamics often enable ergodicity.
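Both properties can be illustrated with a classical toy model. The sketch below uses the fully chaotic logistic map as a stand-in; it is purely illustrative and has nothing to do with the quantum Bose–Hubbard system studied in the paper:

```python
def step(x: float) -> float:
    """Fully chaotic logistic map x -> 4x(1 - x)."""
    return 4.0 * x * (1.0 - x)

# Chaos: two trajectories that start 1e-10 apart diverge to order-one
# separation within a few dozen steps.
a, b = 0.2, 0.2 + 1e-10
for n in range(1, 41):
    a, b = step(a), step(b)
    if n % 10 == 0:
        print(f"step {n:2d}: separation = {abs(a - b):.3e}")

# Ergodicity: the time average along one long trajectory approaches the
# average over the map's invariant distribution (exactly 0.5 for this map).
x, total, steps = 0.2345, 0.0, 100_000
for _ in range(steps):
    x = step(x)
    total += x
print(f"time average = {total / steps:.4f}  (expected: 0.5)")
```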

Until now, it has been very challenging to predict which experimentally preparable initial states will trigger quantum chaos and ergodic dynamics over a reasonable time scale.

In a new paper published in Reports on Progress in Physics, a team of researchers have proposed an ingenious solution to this problem using the Bose–Hubbard Hamiltonian.

They took as an example ultracold atoms in an optical lattice (a typical choice for experiments in this field) to benchmark their method.

The results show that there are certain tangible threshold values which must be crossed in order to ensure the onset of quantum chaos.

These results will be invaluable for experimentalists working across a wide range of quantum sciences.

This jumping roundworm uses static electricity to attach to flying insects

17 October 2025 at 14:30

Researchers in the US have discovered that a tiny jumping worm uses static electricity to increase the chances of attaching to its unsuspecting prey.

The parasitic roundworm Steinernema carpocapsae, which lives in soil, is already known to leap some 25 times its body length into the air. It does this by curling into a loop and springing upwards, rotating hundreds of times a second.

If the nematode lands successfully, it releases bacteria that kill the insect within a couple of days, after which the worm feasts on the carcass and lays its eggs. If it fails to attach to a host, however, it faces death itself.

While static electricity plays a role in how some non-parasitic nematodes detach from large insects, little is known about whether static helps their parasitic counterparts attach to an insect.

To investigate, researchers at Emory University and the University of California, Berkeley, conducted a series of experiments in which they used high-speed microscopy techniques to film the worms as they leapt onto a fruit fly.

They did this by tethering a fly with a copper wire that was connected to a high-voltage power supply.

They found that a charge of a few hundred volts – similar to that generated in the wild by an insect’s wings rubbing against ions in the air – induces a negative charge on the worm, creating an attractive force with the positively charged fly.

Carrying out simulations of the worm jumps, they found that without any electrostatics, only 1 in 19 worm trajectories successfully reached their target. The greater the voltage, however, the greater the chance of landing. For 880 V, for example, the probability was 80%.
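To see how such trajectory simulations can connect an attraction strength to a landing probability, here is a deliberately crude Monte Carlo sketch. The geometry, parameters and force law are all invented for illustration; the study’s actual simulations are far more detailed:

```python
import math
import random

def hit_probability(attraction: float, trials: int = 5_000) -> float:
    """Fraction of random jumps that pass within a small 'attachment' radius
    of the target. `attraction` mimics the electrostatic pull; 0 means none."""
    target_x, target_y, radius, dt, g = 1.0, 0.5, 0.05, 0.01, 1.0  # toy units
    hits = 0
    for _ in range(trials):
        x = y = 0.0
        angle = random.uniform(0.2, 1.4)            # random launch angle, rad
        speed = random.uniform(1.0, 2.0)            # random launch speed
        vx, vy = speed * math.cos(angle), speed * math.sin(angle)
        for _ in range(300):
            dx, dy = target_x - x, target_y - y
            r = math.hypot(dx, dy)
            if r < radius:                          # close enough to attach
                hits += 1
                break
            vx += attraction * dx / r**3 * dt       # inverse-square pull
            vy += attraction * dy / r**3 * dt
            vy -= g * dt                            # gravity (toy units)
            x, y = x + vx * dt, y + vy * dt
    return hits / trials

# Stronger attraction should raise the landing probability, echoing the
# voltage trend the researchers report.
for strength in (0.0, 0.5, 2.0):
    print(f"attraction {strength}: P(hit) ~ {hit_probability(strength):.2f}")
```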

The team also carried out experiments using a wind tunnel, finding that the presence of wind helped the nematodes drift and this also increased their chances of attaching to the insect.

“Using physics, we learned something new and interesting about an adaptive strategy in an organism,” notes Emory physicist Ranjiangshang Ran. “We’re helping to pioneer the emerging field of electrostatic ecology.”

Wearable UVA sensor warns about overexposure to sunlight

17 October 2025 at 08:09
Transparent healthcare Illustration of the fully transparent sensor that reacts to sunlight and allows real-time monitoring of UVA exposure on the skin. The device could be integrated into wearable items, such as glasses or patches. (Courtesy: Jnnovation Studio)

A flexible and wearable sensor that allows the user to monitor their exposure to ultraviolet (UV) radiation has been unveiled by researchers in South Korea. Based on a heterostructure of four different oxide semiconductors, the sensor’s flexible, transparent design could vastly improve the real-time monitoring of skin health.

UV light in the A band has wavelengths of 315–400 nm and comprises about 95% of the UV radiation that reaches the surface of the Earth. Because of its relatively long wavelength, UVA can penetrate deep into the skin. There it can alter biological molecules, damaging tissue and even causing cancer.
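A quick back-of-envelope calculation (an aside, not from the paper) shows why wide-bandgap oxides suit this job, since the photon energy E = hc/λ at the UVA band edges sits just above the visible range:

```python
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

# Photon energy E = h*c / wavelength at the edges of the UVA band.
for nm in (315, 400):
    energy_ev = H * C / (nm * 1e-9) / EV
    print(f"{nm} nm -> {energy_ev:.2f} eV")
# Output: ~3.94 eV and ~3.10 eV. Oxides with bandgaps above ~3.1 eV can
# therefore absorb UVA while transmitting lower-energy visible light.
```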

While covering up with clothing and using sunscreen are effective at reducing UVA exposure, researchers are keen on developing wearable sensors that can monitor UVA levels in real time. These can alert users when their UVA exposure reaches a certain level. So far, the most promising advances towards these designs have come from oxide semiconductors.

Many challenges

“For the past two decades, these materials have been widely explored for displays and thin-film transistors because of their high mobility and optical transparency,” explains Seong Jun Kang at Kyung Hee University, who led the research. “However, their application to transparent ultraviolet photodetectors has been limited by high persistent photocurrent, poor UV–visible discrimination, and instability under sunlight.”

While these problems can be avoided in more traditional UV-sensing materials, such as gallium nitride and zinc oxide, those materials are opaque and rigid – making them completely unsuitable for use in wearable sensors.

In their study, Kang’s team addressed these challenges by introducing a multi-junction heterostructure, made by stacking multiple ultrathin layers of different oxide semiconductors. The four semiconductors they selected each had wide bandgaps, which made them more transparent in the visible spectrum but responsive to UV light.

The structure included zinc and tin oxide layers as n-type semiconductors (doped with electron-donating atoms), and cobalt and hafnium oxide layers as p-type semiconductors (doped with electron-accepting atoms that create positively charged holes). Within the heterostructure, this selection created three types of interface: p–n junctions between hafnium and tin oxide; n–n junctions between tin and zinc oxide; and p–p junctions between cobalt and hafnium oxide.

Efficient transport

When the team illuminated their heterostructure with UVA photons, the electron–hole charge separation was enhanced by the p–n junction, while the n–n and p–p junctions allowed for more efficient transport of electrons and holes respectively, improving the design’s response speed. When the illumination was removed, the electron–hole pairs could quickly decay, avoiding any false detections.

To test their design’s performance, the researchers integrated their heterostructure into a wearable detector. “In collaboration with UVision Lab, we developed an integrated Bluetooth circuit and smartphone application, enabling real-time display of UVA intensity and warning alerts when an individual’s exposure reaches the skin-type-specific minimal erythema dose (MED),” Kang describes. “When connected to the Bluetooth circuit and smartphone application, it successfully tracked real-time UVA variations and issued alerts corresponding to MED limits for various skin types.”
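As a rough illustration of the alert logic such an app might implement (with placeholder numbers, since the actual thresholds and units are not given here), consider this sketch:

```python
from dataclasses import dataclass

@dataclass
class UVADoseTracker:
    """Integrates UVA irradiance readings into a cumulative dose and raises
    an alert at a skin-type-specific threshold. All numbers are placeholders."""
    med_j_per_m2: float           # assumed minimal erythema dose threshold
    cumulative_dose: float = 0.0  # J/m^2 accumulated so far
    alerted: bool = False

    def update(self, irradiance_w_per_m2: float, seconds: float) -> None:
        self.cumulative_dose += irradiance_w_per_m2 * seconds
        if not self.alerted and self.cumulative_dose >= self.med_j_per_m2:
            self.alerted = True
            print("ALERT: UVA exposure has reached your MED")

tracker = UVADoseTracker(med_j_per_m2=200.0)  # placeholder threshold
for reading in (30.0, 35.0, 40.0):            # example sensor values, W/m^2
    tracker.update(reading, seconds=2.0)      # one reading every two seconds
```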

As well as maintaining over 80% transparency, the sensor proved highly stable and responsive, even in direct outdoor sunlight and across repeated exposure cycles. Based on this performance, the team is now confident that their design could push the capabilities of oxide semiconductors beyond their typical use in displays and into the fast-growing field of smart personal health monitoring.

“The proposed architecture establishes a design principle for high-performance transparent optoelectronics, and the integrated UVA-alert system paves the way for next-generation wearable and Internet-of-things-based environmental sensors,” Kang predicts.

The research is described in Science Advances.

Ask me anything: Scott Bolton – ‘It’s exciting to be part of a team that’s seeing how nature works for the first time’

29 September 2025 at 10:00

What skills do you use every day in your job?

As a planetary scientist, I use mathematics, physics, geology and atmospheric science. But as the principal investigator of Juno, I also have to manage the Juno team, and interface with politicians, people at NASA headquarters and other administrators. In that capacity, I need to be able to talk about topics at various technical levels, because many of the people I’m speaking with are not actively researching planetary science. I need a broad range of skills, but one of the most important is to be able to recognize when I don’t have the right expertise and need to find someone who can help.

Pretty amazing Hurricane-like spiral wind patterns near Jupiter’s north pole as seen by NASA’s Juno mission, of which Scott Bolton is principal investigator. (Courtesy: NASA/JPL-Caltech/SwRI/MSSS/Gerald Eichstädt/Seán Doran)

What do you like best and least about your job?

I really love being part of a mission that’s discovering new information and new ideas about how the universe works. It’s exciting to be at the edge of something, where you are part of a team that’s seeing an image or an aspect of how nature works for the first time. The discovery element is truly inspirational. I also love seeing how a mixture of scientists with different expertise, skills and backgrounds can come together to understand something new. Watching that process unfold is very exciting to me.

Some tasks I like least are related to budget exercises, administrative tasks and documentation. Some government rules and regulations can be quite taxing and require a lot of time to ensure forms and documents are completed correctly. Occasionally, an urgent action item will appear that requires an immediate response, forcing me to drop current work to fit in the new task. As a result, my normal work gets delayed, and this can be frustrating. I consider one of my main jobs to be sheltering the team from these extraneous tasks so they can get their work done.

What do you know today that you wish you’d known at the start of your career?

The most important thing I know now is that if you really believe in something, you should stick to it. You should not give up. You should keep trying, keep working at it, and find people who can collaborate with you to make it happen. Early on, I didn’t realize how important it was to combine forces with people who complemented my skills in order to achieve goals.

The other thing I wish I had known is that taking time to figure out the best way to approach a challenge, question or problem is beneficial to achieving one’s goals. That was a very valuable lesson to learn. We should resist the temptation to rush into finding the answer – instead, it’s worthwhile to take the time to think about the question and develop an approach.

Discovery of the Higgs boson at CERN inspires new stained-glass artwork

25 September 2025 at 14:02

London-based artist Oksana Kondratyeva has created a new stained-glass artwork – entitled Discovery – that is inspired by the detection of the Higgs boson at CERN’s Large Hadron Collider (LHC) in 2012.

Born in Ukraine, Kondratyeva has a PhD in the theory of architecture and has an artist residency at the Romont Glass Museum (Vitromusée Romont) in Switzerland, where Discovery is currently exhibited.

In 2023 Kondratyeva travelled to visit the LHC at CERN, which she notes represents “more than a laboratory [but] a gateway to the unknown”.

“Discovery draws inspiration from the awe I felt standing at the frontier of human knowledge, where particles collide at unimaginable energies and new forms of matter are revealed,” Kondratyeva told Physics World.

Kondratyeva says that the focal point of the artwork – a circle structured with geometric precision – represents the collision of two high-energy protons.

The surrounding lead lines in the panel trace the trajectories of particle decays as they move through a magnetic field: right-curved lines represent positively charged particles, left-curved lines indicate negatively charged ones, while straight lines signify neutral particles unaffected by the magnetic field.
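The physics behind this visual encoding is the Lorentz force: a particle of charge q and momentum p crossing a magnetic field B bends into a circle of radius r = p/(|q|B), with the sign of q setting the direction and q = 0 giving a straight line. A short aside with arbitrary example numbers:

```python
P = 1.0e-18          # transverse momentum, kg*m/s (about 1.9 GeV/c)
Q = 1.602176634e-19  # elementary charge, C
B = 2.0              # magnetic field, T (detector solenoids are a few tesla)

# Bending radius r = p / (|q| B); opposite charges curve in opposite senses.
print(f"gyroradius: {P / (Q * B):.2f} m")
```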

The geometric composition within the central circle reflects the hidden symmetries of physical laws – patterns that only emerge when studying the behaviour of particle interactions.

Kondratyeva says that the use of mouth-blown flashed glass adds further depth to the piece, with colours and subtle shades moving from hot and luminous at the centre to cooler, more subdued tones toward the edges.

“Through glass, light and colour I sought to express the invisible forces and delicate symmetries that define our universe – ideas born in the realm of physics, yet deeply resonant in artistic expression,” notes Kondratyeva. “The work also continues a long tradition of stained glass as a medium of storytelling, reflecting the deep symmetries of nature and the human drive to find order in chaos.”

In 2022 Kondratyeva teamed up with Rigetti Computing to create a piece of art inspired by the packaging for a quantum chip. Entitled Per scientiam ad astra (through science to the stars), the artwork was displayed at the 2024 British Glass Biennale at the Ruskin Glass Centre in Stourbridge, UK.

Imagining alien worlds: we explore the science and fiction of exoplanets

25 September 2025 at 11:00

In the past three decades astronomers have discovered more than 6000 exoplanets – planets that orbit stars other than the Sun. Many of these exoplanets are very unlike the eight planets of the solar system, making it clear that the cosmos contains a rich and varied array of alien worlds.

Weird and wonderful planets are also firmly entrenched in the world of science fiction, and the interplay between imagined and real planets is explored in the new book Amazing Worlds of Science Fiction and Science Fact. Its author Keith Cooper is my guest in this episode of the Physics World Weekly podcast and our conversation ranges from the amazing science of “hot Jupiter” exoplanets to how the plot of a popular Star Trek episode could inform our understanding of how life could exist on distant exoplanets.

Gyroscopic backpack improves balance for people with movement disorder

25 September 2025 at 08:00

A robotic backpack equipped with gyroscopes can enhance stability for people with severe balance issues and may eventually remove the need for mobility walkers. Designed to dampen unintended torso motion and improve balance, the backpack employs similar gyroscopic technology to that used by satellites and space stations to maintain orientation. Individuals with the movement disorder ataxia put the latest iteration of the device – the GyroPack – through its paces in a series of standing, walking and body motion exercises.

In development for over a decade, GyroPack is the brainchild of a team of neurologists, biomechanical engineers and rehabilitation specialists at the Radboud University Medical Centre, Delft University of Technology (TU Delft) and Erasmus Medical Centre. The first tests of its ability to improve balance performance with ataxia-impacted adults, described in npj Robotics, produced encouraging enough results to continue the GyroPack’s development as a portable robotic wearable for individuals with neurological conditions.

Degenerative ataxias, a variety of diseases of the nervous system, cause progressive cerebellar dysfunction manifesting as symptoms including lack of coordination, imbalance when standing and difficulty walking. Ataxia can afflict people of all ages, including young children. Managing the progressive symptoms may require lifelong use of cumbersome, heavily weighted walkers as mobility aids and to prevent falling.

GyroPack design

The 6 kg version of the GyroPack tested in this study contains two control moment gyroscopes (CMGs): attitude-control devices that maintain orientation relative to a specific inertial frame of reference. Each CMG consists of a flywheel and a gimbal, which together generate the change in angular momentum that is exerted on the wearer to resist unintended torso rotations. Each CMG also contains an inertial measurement unit to determine its orientation and angular rate of change.

The backpack also holds two independent, 1.5 kg miniaturized actuators designed by the team that convert energy into motion. The system is controlled by a laptop and powered through a separate power box that filters and electrically separates electrical signals for safety. All activities can be immediately terminated when an emergency stop button is pushed.

Lead researcher Jorik Nonnekes of Radboud UMC describes how the system works: “The change of orientation imposed by the gimbal motor, combined with the angular momentum of the flywheels, causes a free moment, or torque, that is exerted onto the system the CMG is attached to – which in this study is the human upper body,” he explains. “A cascaded control scheme reliably deals with actuator limitations without causing undesired disturbances on the user. The gimbals are controlled in such a way that the torque exerted on the trunk is proportional and opposite to the trunk’s angular velocity, which effectively lets the system damp rotational motion of the wearer. This damping has been shown to make balancing easier for unimpaired subjects and individuals post-stroke.”
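Stripped of the CMG hardware details, the damping law described here can be sketched in a few lines of code. This is a minimal illustration with invented gains and limits, not the GyroPack’s actual cascaded controller:

```python
def damping_torque(trunk_angular_velocity: float,
                   damping_gain: float = 5.0,            # N*m per rad/s, assumed
                   torque_limit: float = 10.0) -> float:  # N*m, assumed
    """Torque command proportional and opposite to trunk rotation, clipped
    to what the actuators can deliver."""
    command = -damping_gain * trunk_angular_velocity
    return max(-torque_limit, min(torque_limit, command))

# Toy simulation: a trunk modelled as a bare rotating inertia with an initial
# wobble is brought to rest by the damping torque.
inertia, omega, dt = 2.0, 1.0, 0.01  # kg*m^2, rad/s, s
for _ in range(500):
    omega += damping_torque(omega) / inertia * dt
print(f"angular velocity after 5 s: {omega:.2e} rad/s")
```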

Performance assessment

Exercise study A participant wearing the GyroPack. (Courtesy: npj Robot. 10.1038/s44182-025-00041-4)

For the study, 14 recruits diagnosed with degenerative ataxia performed five tasks: standing still with feet together and arms crossed for up to 30 s; walking on a treadmill for 2 min without using the handrail; making a clockwise and a counterclockwise 360° turn-in-place; performing a tandem stance with the heel of one foot touching the toes of the other for up to 30 s; and testing reactive balance by applying two forward and two backward treadmill perturbations.

The participants performed these tasks under three conditions, two whilst wearing the backpack and one without as a baseline. In one scenario, the backpack was operated in assistive mode to investigate its damping power and torque profiles. In the other, the backpack was in “sham mode”, without assistive control but with sound and motor vibrations indistinguishable from normal operation.

The researchers report that when fully operational, the GyroPack increased the user’s average standing time compared with not wearing the backpack at all. When used during walking, it reduced the variability of trunk angular velocity and the extrapolated centre-of-mass, two common indicators of gait stability. The trunk angular velocity variability also showed a significant reduction when comparing assistive to sham GyroPack modes. However, the performance of turn-in-place and perturbation recovery tasks were similar for all three scenarios.
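For readers unfamiliar with the second metric: the extrapolated centre of mass adds a velocity-dependent term to the CoM position, following Hof’s inverted-pendulum formulation. A minimal sketch with example values:

```python
import math

def extrapolated_com(com_position_m: float, com_velocity_m_s: float,
                     leg_length_m: float, g: float = 9.81) -> float:
    """Hof's extrapolated centre of mass: x_ext = x + v / omega_0,
    where omega_0 = sqrt(g / l) is the inverted-pendulum eigenfrequency."""
    omega_0 = math.sqrt(g / leg_length_m)
    return com_position_m + com_velocity_m_s / omega_0

# Example: a CoM moving forward at 0.3 m/s over a 0.9 m 'leg' counts as
# roughly 0.09 m further forward than its instantaneous position. Stability
# requires the base of support to keep (re)capturing this extrapolated point.
print(f"XCoM = {extrapolated_com(0.0, 0.3, 0.9):.3f} m")
```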

Interestingly, wearing the backpack in the sham scenario improved walking tasks compared with not wearing a backpack at all. The researchers attributed this either to the extra weight in the torso area improving body stabilization or to a placebo effect.

Next, the team plans to redesign the device to make it lighter and quieter. “It’s not yet suitable for everyday use,” says Nonnekes in a press statement. “But in the future, it could help people with ataxia participate more freely in daily life, like attending social events without needing a walker, which many find bulky and inconvenient. This could greatly enhance their mobility and overall quality of life.”

The pros and cons of reinforcement learning in physical science

17 September 2025 at 10:30

Today’s artificial intelligence (AI) systems are built on data generated by humans. They’re trained on huge repositories of writing, images and videos, most of which have been scraped from the Internet without the knowledge or consent of their creators. It’s a vast and sometimes ill-gotten treasure trove of information – but for machine-learning pioneer David Silver, it’s nowhere near enough.

“I think if you provide the knowledge that humans already have, it doesn’t really answer the deepest question for AI, which is how it can learn for itself to solve problems,” Silver told an audience at the 12th Heidelberg Laureate Forum (HLF) in Heidelberg, Germany, on Monday.

Silver’s proposed solution is to move from the “era of human data”, in which AI passively ingests information like a student cramming for an exam, into what he calls the “era of experience” in which it learns like a baby exploring its world. In his HLF talk on Monday, Silver played a sped-up video of a baby repeatedly picking up toys, manipulating them and putting them down while crawling and rolling around a room. To murmurs of appreciation from the audience, he declared, “I think that provides a different perspective of how a system might learn.”

Silver, a computer scientist at University College London, UK, has been instrumental in making this experiential learning happen in the virtual worlds of computer science and mathematics. As head of reinforcement learning at Google DeepMind, he helped develop AlphaZero, an AI system that taught itself to play the ancient stones-and-grid game of Go. It did this via a so-called “reward function” that pushed it to improve over many iterations, without ever being taught the game’s rules or strategy.

More recently, Silver coordinated a follow-up project called AlphaProof that treats formal mathematics as a game. In this case, AlphaZero’s reward is based on getting correct proofs. While it isn’t yet outperforming the best human mathematicians, in 2024 it achieved silver-medal standard on problems at the International Mathematical Olympiad.

Learning in the physics playroom

Could a similar experiential learning approach work in the physical sciences? At an HLF panel discussion on Tuesday afternoon, particle physicist Thea Klaeboe Åarrestad began by outlining one possible application. Whenever CERN’s Large Hadron Collider (LHC) is running, Åarrestad explained, she and her colleagues in the CMS experiment must control the magnets that keep protons on the right path as they zoom around the collider. Currently, this task is performed by a person, working in real time.

Up for discussion A panel discussion on machine learning in physical sciences at the Heidelberg Laureate Forum, l-r: moderator George Musser, Kyle Cranmer, Thea Klaeboe Åarrestad, David Silver and Maia Fraser. (Courtesy: Bernhard Kreutzer/HLFF)

In principle, Åarrestad continued, a reinforcement-learning AI could take over that job after learning by experience what works and what doesn’t. There’s just one problem: if it got anything wrong, the protons would smash into a wall and melt the beam pipe. “You don’t really want to do that mistake twice,” Åarrestad deadpanned.

For Åarrestad’s fellow panellist Kyle Cranmer, a particle physicist who works on data science and machine learning at the University of Wisconsin-Madison, US, this nightmare scenario symbolizes the challenge with using reinforcement learning in physical sciences. In situations where you’re able to do many experiments very quickly and essentially for free – as is the case with AlphaGo and its descendants – you can expect reinforcement learning to work well, Cranmer explained. But once you’re interacting with a real, physical system, even non-destructive experiments require finite amounts of time and money.

Another challenge, Cranmer continued, is that particle physics already has good theories that predict some quantities to multiple decimal places. “It’s not low-hanging fruit for getting an AI to come up with a replacement framework de novo,” Cranmer said. A better option, he suggested, might be to put AI to work on modelling atmospheric fluid dynamics, which are emergent phenomena without first-principles descriptions. “Those are super-exciting places to use ideas from machine learning,” he said.

Not for nuclear arsenals

Silver, who was also on Tuesday’s panel, agreed that reinforcement learning isn’t always the right solution. “We should do this in areas where mistakes are small and it can learn from those small mistakes to avoid making big mistakes,” he said. To general laughter, he added that he would not recommend “letting an AI loose on nuclear arsenals”, either.

Reinforcement learning aside, both Åarrestad and Cranmer are highly enthusiastic about AI. For Cranmer, one of the most exciting aspects of the technology is the way it gets scientists from different disciplines talking to each other. The HLF, which aims to connect early-career researchers with senior figures in mathematics and computer science, is itself a good example, with many talks in the weeklong schedule devoted to AI in one form or another.

For Åarrestad, though, AI’s most exciting possibility relates to physics itself. Because the LHC produces far more data than humans and present-day algorithms can handle, Åarrestad explained, much of it is currently discarded. The idea that, as a result, she and her colleagues could be throwing away major discoveries sometimes keeps her up at night. “Is there new physics below 1 TeV?” Åarrestad wondered.

Someday, maybe, an AI might be able to tell us.

Are we heading for a future of superintelligent AI mathematicians?

16 September 2025 at 19:54

When researchers at Microsoft released a list of the 40 jobs most likely to be affected by generative artificial intelligence (gen AI), few outsiders would have expected to see “mathematician” among them. Yet according to speakers at this year’s Heidelberg Laureate Forum (HLF), which connects early-career researchers with distinguished figures in mathematics and computer science, computers are already taking over many tasks formerly performed by human mathematicians – and the humans have mixed feelings about it.

One of those expressing disquiet is Yang-Hui He, a mathematical physicist at the London Institute for Mathematical Sciences. In general, He is extremely keen on AI. He’s written a textbook about the use of AI in mathematics, and he told the audience at an HLF panel discussion that he’s been peddling machine-learning techniques to his mathematical physics colleagues since 2017.

More recently, though, He has developed concerns about gen AI specifically. “It is doing mathematics so well without any understanding of mathematics,” he said, a note of wonder creeping into his voice. Then, more plaintively, he added, “Where is our place?”

AI advantages

Some of the things that make today’s gen AI so good at mathematics are the same as the ones that made Google’s DeepMind so good at the game of Go. As the theoretical computer scientist Sanjeev Arora pointed out in his HLF talk, “The reason it’s better than humans is that it’s basically tireless.” Put another way, if the 20th-century mathematician Alfréd Rényi once described his colleagues as “machines for turning coffee into theorems”, one advantage of 21st-century AI is that it does away with the coffee.

Arora, however, sees even greater benefits. In his view, AI’s ability to use feedback to improve its own performance – a technique known as reinforcement learning – is particularly well-suited to mathematics.

In the standard version of reinforcement learning, Arora explains, the AI model is given a large bank of questions, asked to generate many solutions and told to use the most correct ones (as labelled by humans) to refine its model. But because mathematics is so formalized, with answers that are so verifiably true or false, Arora thinks it will soon be possible to replace human correctness checkers with AI “proof assistants”. Indeed, he’s developing one such assistant with his colleagues at Princeton University in the US, building on the Lean theorem prover.
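The generate-and-verify loop this describes can be caricatured in a few lines. In the toy version below, the “proof assistant” is a stub that checks simple arithmetic and the “model” is a lookup table plus random guessing; it is a schematic of the reward structure only, not of any real system:

```python
import random

def checker(question: int, answer: int) -> bool:
    """Stub 'proof assistant': verifies that answer**2 == question."""
    return answer * answer == question

def propose(question: int, learned: dict) -> int:
    """Toy 'model': returns a memorized answer if it has one, else guesses."""
    return learned.get(question, random.randint(1, 40))

learned: dict = {}
questions = [n * n for n in range(2, 30)]     # bank of verifiable questions

for _ in range(200):                          # generate-and-verify loop
    for q in random.sample(questions, 10):
        attempt = propose(q, learned)
        if checker(q, attempt):               # binary, machine-checkable reward
            learned[q] = attempt              # reinforce the correct solution

print(f"solved {len(learned)} of {len(questions)} questions")
```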

Humans in the loop?

But why stop there? Why not use AI to generate mathematical questions as well as producing and checking their solutions? Indeed, why not get it to write a paper, peer review it and publish it for its fellow AI mathematicians – which are, presumably, busy combing the literature for information to help them define new questions?

Arora clearly thinks that’s where things are heading, and many of his colleagues seem to agree, at least in part. His fellow HLF panellist Javier Gómez-Serrano, a mathematician at Brown University in the US, noted that AI is already generating results in a day or two that would previously have taken a human mathematician months. “Progress has been quite quick,” he said.

The panel’s final member, Maia Fraser of the University of Ottawa, Canada, likewise paid tribute to the “incredible things that are possible with AI now”. But Fraser, who works on mathematical problems related to neuroscience, also sounded a note of caution. “My concern is the speed of the changes,” she told the HLF audience.

The risk, Fraser continued, is that some of these changes may end up happening by default, without first considering whether humans want or need them. While we can’t un-invent AI, “we do have agency” over what we want, she said.

So, do we want a world in which AI mathematicians take humans “out of the loop” entirely? For He, the benefits may outweigh the disadvantages. “I really want to see a proof of the Riemann hypothesis,” he said, to ripples of laughter. If that means that human mathematicians “become priests to oracles”, He added, so be it.

Space–time crystal emerges in a liquid crystal

16 September 2025 at 14:27

A new type of “space–time crystal” has been created in the US by Hanqing Zhao and Ivan Smalyukh at the University of Colorado Boulder. The system is patterned in both space and time and comprises a rigid lattice of topological solitons that are sustained by steady oscillations in the orientations of liquid crystal molecules.

In an ordinary crystal, atomic or molecular structures repeat at periodic intervals in space. In 2012, however, Frank Wilczek suggested that systems might also exist with quantum states that repeat at perfectly periodic intervals in time – even as they remain in their lowest-energy state.

First observed experimentally in 2017, these time crystals are puzzling to physicists because they spontaneously break time–translation symmetry, which states that the laws of physics are the same no matter when you observe them. In contrast, a time crystal continuously oscillates over time, without consuming energy.

A space–time crystal is even more bizarre. In addition to breaking time–translation symmetry, such a system would also break spatial symmetry, just like the repeating molecular patterns of an ordinary crystal. Until now, however, a space–time crystal had not been observed directly.

Rod-like molecules

In their study, Zhao and Smalyukh created a space–time crystal in the nematic phase of a liquid crystal. In this phase the crystal’s rod-like molecules align parallel to each other and also flow like a liquid. Building on computer simulations, they confined the liquid crystal between two glass plates coated with a light-sensitive dye.

“We exploited strong light–matter interactions between dye-coated, light-reconfigurable surfaces, and the optical properties of the liquid crystal,” Smalyukh explains.

When the researchers illuminate the top plate with linearly polarized light at constant intensity, the dye molecules rotate to align perpendicular to the direction of polarization. This reorients nearby liquid crystal molecules, and the effect propagates deeper into the bulk. However, the influence weakens with depth, so that molecules farther from the top plate are progressively less aligned.

As light travels through this gradually twisting structure, its linear polarization is transformed, becoming elliptically polarized by the time it reaches the bottom plate. The dye molecules there become aligned with this new polarization, altering the liquid crystal alignment near the bottom plate. These changes propagate back upward, influencing molecules near the top plate again.
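The polarization transformation at the heart of this loop can be illustrated with Jones calculus, treating the twisted liquid crystal as a stack of thin, gradually rotated waveplates. The twist and retardance values below are arbitrary illustration numbers, not fitted to the experiment:

```python
import numpy as np

def waveplate(retardance: float, axis_angle: float) -> np.ndarray:
    """Jones matrix of a thin waveplate with its fast axis at axis_angle."""
    c, s = np.cos(axis_angle), np.sin(axis_angle)
    rot = np.array([[c, -s], [s, c]])
    plate = np.diag([np.exp(-1j * retardance / 2), np.exp(1j * retardance / 2)])
    return rot @ plate @ rot.T

# Model the twisted nematic as 100 thin slices whose optic axis rotates
# steadily with depth (total twist 60 degrees, total retardance 2 rad).
n_slices, total_twist, total_retardance = 100, np.pi / 3, 2.0
jones = np.eye(2, dtype=complex)
for k in range(n_slices):
    axis = total_twist * (k + 0.5) / n_slices
    jones = waveplate(total_retardance / n_slices, axis) @ jones

e_out = jones @ np.array([1.0, 0.0])          # x-polarized light in
phase = np.angle(e_out[1]) - np.angle(e_out[0])
print(f"output amplitudes: |Ex| = {abs(e_out[0]):.3f}, |Ey| = {abs(e_out[1]):.3f}")
print(f"relative phase: {phase:.3f} rad (0 or pi would mean still linear)")
```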

Feedback loop

This is a feedback loop, with the top and bottom plates continuously influencing each other via the polarized light passing through the liquid crystal.

“These light-powered dynamics in confined liquid crystals leads to the emergence of particle-like topological solitons and the space–time crystallinity,” Smalyukh says.

In this environment, particle-like topological solitons emerge as stable, localized twists in the liquid crystal’s orientation that do not decay over time. Like particles, the solitons move and interact with each other while remaining intact.

Once the feedback loop is established, these solitons emerge in a repeating lattice-like pattern. This arrangement not only persists as the feedback loop continues but is sustained by it. This is a clear sign that the system exhibits crystalline order in time and space simultaneously.

Accessible system

Having confirmed their conclusions with simulations, Zhao and Smalyukh are confident this is the first experimental demonstration of a space–time crystal. The discovery that such an exotic state can exist in a classical, room-temperature system may have important implications.

“This is the first time that such a phenomenon is observed emerging in a liquid crystalline soft matter system,” says Smalyukh. “Our study calls for a re-examining of various time-periodic phenomena to check if they meet the criteria of time-crystalline behaviour.”

Building on these results, the duo hope to broaden the scope of time crystal research beyond a purely theoretical and experimental curiosity. “This may help expand technological utility of liquid crystals, as well as expand the currently mostly fundamental focus of studies of time crystals to more applied aspects,” Smalyukh adds.

The research is described in Nature Materials.

Physicists set to decide location for next-generation Einstein Telescope

10 September 2025 at 09:30

A decade ago, on 14 September 2015, the twin detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO) in Hanford, Washington, and Livingston, Louisiana, finally detected a gravitational wave. The LIGO detectors – two L-shaped laser interferometers with 4 km-long arms – had measured tiny differences in laser beams bouncing off mirrors at the end of each arm. The variations in the length of the arms, caused by the presence of a gravitational wave, were converted into the now famous audible “chirp signal”, which indicated the final approach between two merging black holes.

Since that historic detection, which led to the 2017 Nobel Prize for Physics, the LIGO detectors, together with VIRGO in Italy, have measured several hundred gravitational waves – from mergers of black holes to neutron-star collisions. More recently, they have been joined by the KAGRA detector in Japan, which is located some 200 m underground, shielding it from vibrations and environmental noise.

Yet the current number of gravitational waves could be dwarfed by what the planned Einstein Telescope (ET) would measure. This European-led, third-generation gravitational-wave detector would be built several hundred metres underground and be at least 10 times more sensitive than its second-generation counterparts, including KAGRA. Capable of “listening” to a volume of the universe a thousand times larger, the new detector would be able to spot many more sources of gravitational waves. In fact, the ET will be able to gather in a day what it took LIGO and VIRGO a decade to collect.

The ET is designed to operate in two frequency domains. The low-frequency regime – 2–40 Hz – is below current detectors’ capabilities and will let the ET pick up waves from more massive black holes. The high-frequency domain, on the other hand, would operate from 40 Hz to 10 kHz and detect a wide variety of astrophysical sources, including merging black holes and other high-energy events. The detected signals would also be much longer with the ET, lasting for hours. This would allow physicists to “tune in” much earlier as black holes or neutron stars approach each other.
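The link between lower frequencies and heavier black holes follows from a rough scaling: the gravitational-wave frequency at a binary’s last stable orbit falls inversely with the total mass, roughly f ≈ 4400 Hz divided by the mass in solar masses. A quick illustrative estimate (not from the article):

```python
# Gravitational-wave frequency at the innermost stable circular orbit scales
# as roughly f ~ 4400 Hz / (M / M_sun), with M the system's total mass.
for mass_in_suns in (10, 100, 1000):
    print(f"~{mass_in_suns:5d} solar masses -> f ~ {4400 / mass_in_suns:6.1f} Hz")
# A thousand-solar-mass merger peaks near 4 Hz: inside the ET's 2-40 Hz band
# but out of reach of today's detectors.
```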

Location, location, location

But all that is still a pipe dream, because the ET, which has a price tag of €2bn, is not yet fully funded and is unlikely to be ready until 2035 at the earliest. The precise costs will depend on the final location of the experiment, which is still up for grabs.

Three regions are vying to host the facility: the Italian island of Sardinia, the Belgian-German-Dutch border region and the German state of Saxony. Each candidate is currently investigating the suitability of its preferred site (see box below), the results of which will be published in a “bid book” by the end of 2026. The winning site will be picked in 2027 with construction beginning shortly after.

Other factors that will dictate where the ET is built include logistics in the host region, the presence of companies and research institutes (to build and exploit the facility) and government support. With the ET offering high-quality jobs, economic return, scientific appeal and prestige, the Belgian-German-Dutch candidacy could have the edge, given that the three nations could share the cost.

Another major factor is the design of the ET. One proposal is to build it as an equilateral triangle with each side being 10 km. The other is a twin L-shaped design in which each detector has 15 km-long arms and the two detectors are located far from each other. The latter design is similar to the two LIGO over-ground detectors, which are 3000 km apart. If the “2L design” is chosen, the detector would be built at two of the three competing sites.

The 2L design is being investigated by all three sites, but those behind the Sardinia proposal strongly favour this approach. “With the detectors properly oriented relative to each other, this design could outperform the triangular design across all key scientific objectives,” claims Domenico D’Urso, scientific director of the Italian candidacy. He points to a study by the ET collaboration in 2023 that investigated the impact of the ET design on its scientific goals. “The 2L design enables, for example, more precise localization of gravitational wave sources, enhancing sky-position reconstruction,” he says. “And it provides superior overall sensitivity.”

Where could the next-generation Einstein Telescope be built?

Three sites are vying to host the Einstein Telescope (ET), with each offering various geological advantages. Lausitz in Saxony benefits from being a former coal-mining area. “Because of this mining past, the subsurface was mapped in great detail decades ago,” says Günther Hasinger, founding director of the German Center for Astrophysics, which is currently being built in Lausitz and would house the ET if picked. The granite formation in Lausitz is also suitable for a tunnel complex because the rock is relatively dry. Not much water would need to be pumped away, causing less vibration.

Thanks to the former lead, zinc and silver mine of Sos Enattos, meanwhile, the subsurface near Nuoro in Sardinia – another potential location for the ET – is also well known. The island is on a very stable, tectonic microplate, making it seismically quiet. Above ground, the area is undeveloped and sparsely populated, further shielding the experiment from noise.

The third ET candidate, lying near the point where Belgium, Germany and the Netherlands meet, also has a hard subsurface, which is needed for the tunnels. It is topped by a softer, clay-like layer that would dampen vibrations from traffic and industry. “We are busy investigating the suitability of the subsurface and the damping capacity of the top layer,” says Wim Walk of the Dutch Center for Subatomic Physics (Nikhef), which is co-ordinating the candidacy for this location. “That research requires a lot of work, because the subsurface here has not yet been properly mapped.”

Localization is important for multimessenger astronomy. In other words, if a gravitational-wave source can be located quickly and precisely in the sky, other telescopes can be pointed towards it to observe any accompanying light or other electromagnetic (EM) signals. This is what happened after LIGO detected a gravitational wave on 17 August 2017, originating from a neutron-star collision. Dozens of ground- and space-based telescopes were able to pick up a gamma-ray burst and the subsequent EM afterglow.

The triangle design, however, is favoured by the Belgian-German-Dutch consortium. It would be the Earth-based equivalent of the European Space Agency’s planned LISA space-based gravitational-wave detector, which will consist of three spacecraft in a triangular configuration and is set for launch in 2035, the same year that the ET could open. LISA would detect gravitational waves of much lower frequencies, coming, for example, from mergers of supermassive black holes.

While the Earth-based triangle design would not be able to locate the source as precisely, it would – unlike the 2L design – be able to do “null stream” measurements. These would yield a clearer picture of the noise from the environment and the detector itself, including “glitches”, which are bursts of noise that overlap with gravitational-wave signals. “With a non-stop influx of gravitational waves but also of noise and glitches, we need some form of automatic clean-up of the data,” says Jan Harms, a physicist at the Gran Sasso Science Institute in Italy and member of the scientific ET collaboration. “The null stream could provide that.”
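The idea behind the null stream can be stated compactly. For three interferometers arranged in a closed triangle, the responses to any gravitational wave sum to zero, so adding the three data streams cancels the signal and leaves only noise – a standard result for triangular detectors, sketched here rather than drawn from the article:

$$ n(t) \;=\; \sum_{i=1}^{3} d_i(t) \;=\; \sum_{i=1}^{3} \bigl[\, h_i(t) + n_i(t) \,\bigr] \;=\; \sum_{i=1}^{3} n_i(t), \qquad \text{since} \quad \sum_{i=1}^{3} h_i(t) = 0 . $$

Anything that survives in $n(t)$ – detector noise, environmental disturbances, glitches – can then be characterized without being confused with astrophysical signals.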

However, it is not clear if that null stream would be a fundamental advantage for data analysis, with Harms and colleagues thinking more work is needed. “For example, different forms of noise could be connected to each other, which would compromise the null stream,” he says. A further problem is that a detector with a null stream has not yet been realized – and that applies to the triangle design in general. “The 2L design is well established in the scientific community,” adds D’Urso.

Backers of the triangle design see the ET as being part of a wider, global network of third-generation detectors, where the localization argument no longer matters. Indeed, the US already has plans for an above-ground successor to LIGO. Known as the Cosmic Explorer, it would feature two L-shaped detectors with arm lengths of up to 40 km. But with US politics in turmoil, it is questionable how realistic these plans are.

Matthew Evans, a physicist at the Massachusetts Institute of Technology and member of the LIGO collaboration, recognizes the “network argument”. “I think that the global gravitational waves community are double counting in some sense,” he says. Yet for Evans it is all about the exciting discoveries that could be made with a next-generation gravitational-wave detector. “The best science will be done with ET as 2Ls,” he says.

The post Physicists set to decide location for next-generation Einstein Telescope appeared first on Physics World.

Garbage in, garbage out: why the success of AI depends on good data

1 September 2025 at 12:00

Artificial intelligence (AI) is fast becoming the new “Marmite”. Like the salty spread that polarizes taste-buds, you either love AI or you hate it. To some, AI is miraculous, to others it’s threatening or scary. But one thing is for sure – AI is here to stay, so we had better get used to it.

In many respects, AI is very similar to other data-analytics solutions in that how it works depends on two things. One is the quality of the input data. The other is the integrity of the user to ensure that the outputs are fit for purpose.

Previously a niche tool for specialists, AI is now widely available for general-purpose use, in particular through Generative AI (GenAI) tools. Many of these are built on Large Language Models (LLMs) and can be accessed through, for example, OpenAI’s ChatGPT, Microsoft Copilot, Anthropic’s Claude, Adobe Firefly or Google Gemini.

GenAI has become possible thanks to the availability of vast quantities of digitized data and significant advances in computing power. Neural-network models of this size would in fact have been impossible without these two fundamental ingredients.

GenAI is incredibly powerful when it comes to searching and summarizing large volumes of unstructured text. It exploits unfathomable amounts of data and is getting better all the time, offering users significant benefits in terms of efficiency and labour saving.

Many people now use it routinely for writing meeting minutes, composing letters and e-mails, and summarizing the content of multiple documents. AI can also tackle complex problems that would be difficult for humans to solve, such as climate modelling, drug discovery and protein-structure prediction.

I’d also like to give a shout out to tools such as Microsoft Live Captions and Google Translate, which help people from different locations and cultures to communicate. But like all shiny new things, AI comes with caveats, which we should bear in mind when using such tools.

User beware

LLMs, by their very nature, have been trained on historical data. They can’t therefore tell you exactly what may happen in the future, or indeed what may have happened since the model was originally trained. Models can also be constrained in their answers.

Take the Chinese AI app DeepSeek. When the BBC asked it what had happened at Tiananmen Square in Beijing on 4 June 1989 – when Chinese troops cracked down on protestors – the chatbot’s answer was suppressed. Now, this is a very obvious piece of information control, but subtler instances of censorship will be harder to spot.

We also need to be conscious of model bias. At least some of the training data will probably come from social media and public chat forums such as X, Facebook and Reddit. Trouble is, we can’t know all the nuances of the data that models have been trained on – or the inherent biases that may arise from this.

One example of unfair gender bias came when Amazon developed an AI recruiting tool. Trained on 10 years’ worth of CVs – mostly from men – the tool was found to favour men. Thankfully, Amazon ditched it. But then there was Apple’s gender-biased credit-card algorithm, which led to men being given higher credit limits than women with similar credit ratings.

Another problem with AI is that it sometimes acts as a black box, making it hard for us to understand how, why or on what grounds it arrived at a certain decision. Think about those online Captcha tests we have to take when accessing online accounts. They often present us with a street scene and ask us to select those parts of the image containing a traffic light.

The tests are designed to distinguish between humans and computers or bots – the expectation being that AI can’t consistently recognize traffic lights. However, AI-based advanced driver assist systems (ADAS) presumably perform this function seamlessly on our roads. If not, surely drivers are being put at risk?

A colleague of mine, who drives an electric car that happens to share its name with a well-known physicist, confided that the ADAS in his car becomes unresponsive, especially when at traffic lights with filter arrows or multiple sets of traffic lights. So what exactly is going on with ADAS? Does anyone know?

Caution needed

My message when it comes to AI is simple: be careful what you ask for. Many GenAI applications will store user prompts and conversation histories and will likely use this data for training future models. Once you enter your data, there’s no guarantee it’ll ever be deleted. So think carefully before sharing any personal data, such as medical or financial information. It also pays to keep prompts non-specific (avoiding using your name or date of birth) so that they cannot be traced directly to you.

Democratization of AI is a great enabler and it’s easy for people to apply it without an in-depth understanding of what’s going on under the hood. But we should be checking AI-generated output before we use it to make important decisions and we should be careful of the personal information we divulge.

It’s easy to become complacent when we are not doing all the legwork. We are reminded under the terms of use that “AI can make mistakes”, but I wonder what will happen if models start consuming AI-generated erroneous data. Just as with other data-analytics problems, AI suffers from the old adage of “garbage in, garbage out”.

But sometimes I fear it’s even worse than that. We’ll need a collective vigilance to avoid AI being turned into “garbage in, garbage squared”.

The post Garbage in, garbage out: why the success of AI depends on good data appeared first on Physics World.

Nano-engineered flyers could soon explore Earth’s mesosphere

21 August 2025 at 11:00

Small levitating platforms that can stay airborne indefinitely at very high altitudes have been developed by researchers in the US and Brazil. Using photophoresis, the devices could be adapted to carry small payloads in the mesosphere, where flight is notoriously difficult. They could even be used in the atmospheres of moons and other planets.

Photophoresis occurs when light illuminates one side of a particle, heating it slightly more than the other. The resulting temperature difference in the surrounding gas means that molecules rebound with more energy on the warmer side than the cooler side – producing a tiny but measurable push.

For most of the time since its discovery in the 1870s, the effect was little more than a curiosity. But with more recent advances in nanotechnology, researchers have begun to explore how photophoresis could be put to practical use.

“In 2010, my graduate advisor, David Keith, had previously written a paper that described photophoresis as a way of flying microscopic devices in the atmosphere, and we wanted to see if larger devices could carry useful payloads,” explains Ben Schafer at Harvard University, who led the research. “At the same time, [Igor Bargatin’s group at the University of Pennsylvania] was doing fascinating work on larger devices that generated photophoretic forces.”

Carrying payloads

These studies considered a wide variety of designs: from artificial aerosols, to thin disks with surfaces engineered to boost the effect. Building on this earlier work, Schafer’s team investigated how lightweight photophoretic devices could be optimized to carry payloads in the mesosphere: the atmospheric layer at about 50–80 km above Earth’s surface, where the sparsity of air creates notoriously difficult flight conditions for conventional aircraft or balloons.

“We used these results to fabricate structures that can fly in near-space conditions, namely, under less than the illumination intensity of sunlight and at the same pressures as the mesosphere,” Schafer explains.

The team’s design consists of two alumina membranes – each 100 nm thick, and perforated with nanoscale holes. The membranes are positioned a short distance apart and connected by ligaments. In addition, the bottom membrane is coated with a light-absorbing chromium layer, causing it to heat the surrounding air more than the top layer as it absorbs incoming sunlight.

As a result, air molecules move preferentially from the cooler top side toward the warmer bottom side through the membranes’ perforations: a photophoretic process known as thermal transpiration. This one-directional flow creates a pressure imbalance across the device, generating upward thrust. If this force exceeds the device’s weight, it can levitate and even carry a payload. The team also suggests that the devices could be kept aloft at night using the infrared radiation emitted by Earth into space.

Simulations and experiments

Through a combination of simulations and experiments, Schafer and his colleagues examined how factors such as device size, hole density, and ligament distribution could be tuned to maximize thrust at different mesospheric altitudes – where both pressure and temperature can vary dramatically. They showed that platforms 10 cm in radius could feasibly remain aloft throughout the mesosphere, powered by sunlight at intensities lower than those actually present there.

Based on these results, the team created a feasible design for a photophoretic flyer with a 3 cm radius, capable of carrying a 10 mg payload indefinitely at an altitude of 75 km. With an optimized design, they predict payloads as large as 100 mg could be supported during daylight.
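A rough weight budget shows the scale of thrust such a flyer must generate. The geometry and payload figures are from the article; the alumina density is a handbook value and the mass-reducing perforations are ignored, so treat this as an illustrative sketch:

```python
import math

# Illustrative levitation budget for the 3 cm photophoretic flyer.
# Membrane geometry and payload are from the article; the alumina
# density is a handbook value and perforations are ignored.
rho_alumina = 3950        # kg/m^3, typical bulk alumina
t_membrane = 100e-9       # m, thickness of each of the two membranes
radius = 0.03             # m, 3 cm flyer radius
m_payload = 10e-6         # kg, the 10 mg payload quoted
g = 9.81                  # m/s^2

area = math.pi * radius**2
m_device = 2 * rho_alumina * t_membrane * area    # two solid membranes

weight = (m_device + m_payload) * g
print(f"Device mass:   {m_device*1e6:.1f} mg")    # ~2 mg
print(f"Thrust needed: {weight*1e6:.0f} microN")  # ~120 microN to levitate
```

On these numbers the payload, not the platform itself, dominates the weight – which is why the nanoscale membranes can afford to carry anything at all.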

“These payloads could support a lightweight communications payload that could transmit data directly to the ground from the mesosphere,” Schafer explains. “Small structures without payloads could fly for weeks or months without falling out of the mesosphere.”

With this proof of concept, the researchers are now eager to see photophoretic flight tested in real mesospheric conditions. “Because there’s nothing else that can sustainably fly in the mesosphere, we could use these devices to collect ground-breaking atmospheric data to benefit meteorology, perform telecommunications, and predict space weather,” Schafer says.

Requiring no fuel, batteries, or solar panels, the devices would be completely sustainable. And the team’s ambitions go beyond Earth: with the ability to stay aloft in any low-pressure atmosphere with sufficient light, photophoretic flight could also provide a valuable new approach to exploring the atmosphere of Mars.

The research is described in Nature.

The post Nano-engineered flyers could soon explore Earth’s mesosphere appeared first on Physics World.

Deep-blue LEDs get a super-bright, non-toxic boost

21 August 2025 at 08:00

A team led by researchers at Rutgers University in the US has discovered a new semiconductor that emits bright, deep-blue light. The hybrid copper iodide material is stable, non-toxic, can be processed in solution and has already been integrated into a light-emitting diode (LED). According to its developers, it could find applications in solid-state lighting and display technologies.

Creating white light for solid-state lighting and full-colour displays requires bright, pure sources of red, green and blue light. While stable materials that efficiently emit red or green light are relatively easy to produce, those that generate blue light (especially deep-blue light) are much more challenging. Existing blue-light emitters based on organic materials are unstable, meaning they lose their colour quality over time. Alternatives based on lead-halide perovskites or cadmium-containing colloidal quantum dots are more stable, but also toxic for humans and the environment.

Hybrid copper-halide-based emitters promise the best of both worlds, being both non-toxic and stable. They are also inexpensive, with tuneable optical properties and a high luminescence efficiency, meaning they are good at converting power into visible light.

Researchers have already used a pure inorganic copper iodide material, Cs3Cu2I5, to make deep-blue LEDs. This material emits light at the ideal wavelength of 445 nm, is robust to heat and moisture, and re-emits 87–95% of the photons it absorbs as luminescence, giving it a high photoluminescence quantum yield (PLQY).

However, the maximum ratio of photon output to electron input (known as the maximum external quantum efficiency, EQEmax) for this material is very low, at just 1.02%.
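The gap between a near-perfect PLQY and a percent-level EQE is usually understood through the textbook LED efficiency chain (a standard relation, not one stated in the paper):

$$ \mathrm{EQE} \;=\; \gamma \,\times\, \eta_{\mathrm{exc}} \,\times\, \mathrm{PLQY} \,\times\, \eta_{\mathrm{out}}, $$

where $\gamma$ is the charge-balance factor, $\eta_{\mathrm{exc}}$ the fraction of excitons that can decay radiatively and $\eta_{\mathrm{out}}$ the light-outcoupling efficiency. Even with $\mathrm{PLQY}\approx 1$, poor charge injection at the interfaces or weak outcoupling can hold the EQE to a few percent – which is why the interface engineering described below matters.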

Strong deep-blue photoluminescence

In the new work, a team led by Rutgers materials chemist Jing Li developed a hybrid copper iodide with the chemical formula 1D-Cu4I8(Hdabco)4, abbreviated CuI(Hda), where Hdabco is 1,4-diazabicyclo[2.2.2]octane-1-ium. This material emits strong deep-blue light at 449 nm with a PLQY near unity (99.6%).

Li and colleagues opted to use CuI(Hda) as the sole light-emitting layer and built a thin-film LED out of it using a solution process. The new device has an EQEmax of 12.6% with colour coordinates (0.147, 0.087) and a peak brightness of around 4000 cd m⁻². It is also relatively stable, with an operational half-lifetime (T50) of approximately 204 hours under ambient conditions. These figures mean that its performance rivals the best existing solution-processed deep-blue LEDs, Li says. The team also fabricated a large-area device measuring 4 cm² to demonstrate that the material could be used in real-world applications.

Interfacial hydrogen-bond passivation strategy

The low EQE of previous such devices is partly due to the fact that charge carriers (electrons and holes) in these materials rapidly recombine in a non-radiative way, typically at surface and bulk defects, or traps. The charge carriers also have a low radiative recombination rate, which is associated with a small exciton (electron–hole pair) binding energy.

Li and colleagues overcame this problem in their new device thanks to a dual interfacial hydrogen-bond passivation (DIHP) strategy that involves introducing hydrogen bonds via an ultrathin sheet of poly(methyl methacrylate) (PMMA) and a carbazole-phosphonic acid-based self-assembled monolayer (Ac2PACz) at the two interfaces of the CuI(Hda) emissive layer. This effectively passivates both heterojunctions of the copper-iodide hybrid light-emitting layer and optimizes exciton binding energies. “Such a synergistic surface modification dramatically boosts the performance of the deep-blue LED fourfold,” explains Li.

According to Li, the study suggests a promising route for developing blue emitters that are both energy-efficient and environmentally benign, without compromising on performance. “Through the fabrication of blue LEDs using a low cost, stable and nontoxic material capable of delivering efficient deep-blue light, we address major energy and ecological limitations found in other types of solution-processable emitters,” she tells Physics World.

Li adds that the hydrogen-bonding passivation technique is not limited to the material studied in this work. It could also be applied to minimize interfacial energy losses in a wide range of other solution-based, light-emitting optoelectronic systems.

The team is now pursuing strategies for developing other solution-processable, high-performance hybrid copper iodide-based emitter materials similar to CuI(Hda). “Our goal is to further enhance the efficiency and extend the operational lifetime of LEDs utilizing these next-generation materials,” says Li.

The present work is detailed in Nature.

The post Deep-blue LEDs get a super-bright, non-toxic boost appeared first on Physics World.

NASA launches TRACERS mission to study Earth’s ‘magnetic shield’

13 August 2025 at 11:02

NASA has successfully launched a mission to explore the interactions between the Sun’s and Earth’s magnetic fields. The Tandem Reconnection and Cusp Electrodynamics Reconnaissance Satellites (TRACERS) craft was sent into low-Earth orbit on 23 July from Vandenberg Space Force Base in California by a SpaceX Falcon 9 rocket. Following a month of calibration, the twin-satellite mission is expected to operate for a year.

The spacecraft will observe particles and electromagnetic fields in the Earth’s northern magnetic “cusp region”, which encircles the North Pole where the Earth’s magnetic field lines curve down toward Earth.

This unique vantage point allows researchers to study how magnetic reconnection — when field lines connect and explosively reconfigure — affects the space environment. Such observations will help researchers understand how processes change over both space and time.

The two satellites will collect data from over 3000 cusp crossings during the one-year mission with the information being used to understand space-weather phenomena that can disrupt satellite operations, communications and power grids on Earth.

Each nearly identical octagonal satellite – weighing less than 200 kg – features six instruments, including magnetometers, electric-field instruments and devices to measure the energy of ions and electrons in the plasma around the spacecraft.

The satellites will operate in a Sun-synchronous orbit about 590 km above the ground, following one behind the other in close separation and passing through the same regions of space at least 10 seconds apart.
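For a sense of scale, a circular-orbit estimate turns that 10-second gap into an along-track distance (standard orbital mechanics, not figures from NASA):

```python
import math

# Along-track separation of the two TRACERS satellites (rough estimate).
# Circular orbit at the article's 590 km altitude; mu and Earth's radius
# are standard values.
mu = 3.986e14             # m^3/s^2, Earth's gravitational parameter
R_earth = 6.371e6         # m, mean Earth radius
h = 590e3                 # m, orbital altitude

v = math.sqrt(mu / (R_earth + h))    # circular orbital speed
gap = 10 * v                         # distance covered in 10 s
print(f"Orbital speed: {v/1e3:.2f} km/s")   # ~7.57 km/s
print(f"10 s gap:      {gap/1e3:.0f} km")   # ~76 km along track
```

A 10-second stagger thus corresponds to roughly 75 km of separation, letting the pair sample how the cusp changes over that distance and time.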

“TRACERS is an exciting mission,” says Stephen Fuselier from the Southwest Research Institute in Texas, who is the mission’s deputy principal investigator. “The data from that single pass through the cusp were amazing. We can’t wait to get the data from thousands of cusp passes.”

The post NASA launches TRACERS mission to study Earth’s ‘magnetic shield’ appeared first on Physics World.

Jet stream study set to improve future climate predictions

13 August 2025 at 09:35
Driven by global warming: the researchers identified which factors influence the jet stream in the southern hemisphere. (Courtesy: Leipzig University/Office for University Communications)

An international team of meteorologists has found that half of the recently observed shifts in the southern hemisphere’s jet stream are directly attributable to global warming – and pioneered a novel statistical method to pave the way for better climate predictions in the future.

Prompted by recent changes in the behaviour of the southern hemisphere’s summertime eddy-driven jet (EDJ) – a band of strong westerly winds located at a latitude of between 30°S and 60°S – the Leipzig University-led team sifted through historical measurement data to show that wind speeds in the EDJ have increased, while the wind belt has moved consistently toward the South Pole. They then used a range of innovative methods to demonstrate that 50% of these shifts are directly attributable to global warming, with the remainder triggered by other climate-related changes, including warming of the tropical Pacific and the upper tropical atmosphere, and the strengthening of winds in the stratosphere.

“We found that human fingerprints on the EDJ are already showing,” says lead author Julia Mindlin, research fellow at Leipzig University’s Institute for Meteorology. “Global warming, springtime changes in stratospheric winds linked to ozone depletion, and tropical ocean warming are all influencing the jet’s strength and position.”

“Interestingly, the response isn’t uniform, it varies depending on where you look, and climate models are underestimating how strong the jet is becoming. That opens up new questions about what’s missing in our models and where we need to dig deeper,” she adds.

Storyline approach

Rather than collecting new data, the researchers used existing, high-quality observational and reanalysis datasets – including the long-running HadCRUT5 surface temperature data, produced by the UK Met Office and the University of East Anglia, and a variety of sea surface temperature (SST) products including HadISST, ERSSTv5 and COBE.

“We also relied on something called reanalysis data, which is a very robust ‘best guess’ of what the atmosphere was doing at any given time. It is produced by blending real observations with physics-based models to reconstruct a detailed picture of the atmosphere, going back decades,” says Mindlin.

To interpret the data, the team – which also included researchers at the University of Reading, the University of Buenos Aires and the Jülich Supercomputing Centre – used a statistical approach called causal inference to help isolate the effects of specific climate drivers. They also employed “storyline” techniques to explore multiple plausible futures rather than simply averaging qualitatively different climate responses.

“These tools offer a way to incorporate physical understanding while accounting for uncertainty, making the analysis both rigorous and policy-relevant,” says Mindlin.
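As a toy illustration of the attribution idea – emphatically not the authors’ actual method, and with synthetic data and invented coefficients – one can regress a jet index onto candidate drivers and read off each driver’s contribution:

```python
import numpy as np

# Toy attribution: regress a synthetic jet index onto candidate drivers.
# Illustrative only; the real study uses causal inference and storyline
# techniques, and these driver series and coefficients are made up.
rng = np.random.default_rng(42)
n_years = 60

warming = np.linspace(0.0, 1.0, n_years)          # global-warming index (trend)
sst = rng.normal(0, 1, n_years).cumsum() / 10     # tropical SST variability
strat = rng.normal(0, 1, n_years)                 # stratospheric wind index

# Synthetic "observed" jet index built from the drivers plus noise
jet = 0.8*warming + 0.3*sst + 0.2*strat + rng.normal(0, 0.2, n_years)

# A least-squares fit with an intercept recovers the driver contributions
X = np.column_stack([np.ones(n_years), warming, sst, strat])
coef, *_ = np.linalg.lstsq(X, jet, rcond=None)
for name, c in zip(["intercept", "warming", "sst", "strat"], coef):
    print(f"{name:>9}: {c:+.2f}")
```

In the real analysis, physical reasoning constrains which regressions are meaningful – that is the “causal” part – but the bookkeeping of splitting an observed trend into driver contributions follows the same logic.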

Future blueprint

For Mindlin, these findings are important for several reasons. First, they demonstrate “that the changes predicted by theory and climate models in response to human activity are already observable”. Second, she notes that they “help us better understand the physical mechanisms that drive climate change, especially the role of atmospheric circulation”.

“Third, our methodology provides a blueprint for future studies, both in the southern hemisphere and in other regions where eddy-driven jets play a role in shaping climate and weather patterns,” she says. “By identifying where and why models diverge from observations, our work also contributes to improving future projections and enhances our ability to design more targeted model experiments or theoretical frameworks.”

The team is now focused on improving understanding of how extreme weather events, like droughts, heatwaves and floods, are likely to change in a warming world. Since these events are closely linked to atmospheric circulation, Mindlin stresses that it is critical to understand how circulation itself is evolving under different climate drivers.

One of the team’s current areas of focus is drought in South America. Mindlin notes that this is especially challenging due to the short and sparse observational record in the region, and the fact that drought is a complex phenomenon that operates across multiple timescales.

“Studying climate change is inherently difficult – we have only one Earth, and future outcomes depend heavily on human choices,” she says. “That’s why we employ ‘storylines’ as a methodology, allowing us to explore multiple physically plausible futures in a way that respects uncertainty while supporting actionable insight.”

The results are reported in the Proceedings of the National Academy of Sciences.

The post Jet stream study set to improve future climate predictions appeared first on Physics World.
