
Physicists discuss the future of machine learning and artificial intelligence

12 November 2025 at 15:00
Pierre Gentine, Jimeng Sun, Jay Lee and Kyle Cranmer
Looking ahead to the future of machine learning: (clockwise from top left) Jay Lee, Jimeng Sun, Pierre Gentine and Kyle Cranmer.

IOP Publishing’s Machine Learning series is the world’s first open-access journal series dedicated to the application and development of machine learning (ML) and artificial intelligence (AI) for the sciences.

Part of the series is Machine Learning: Science and Technology, launched in 2019, which bridges advances in machine learning and their application across the sciences. Machine Learning: Earth is dedicated to the application of ML and AI across all areas of Earth, environmental and climate sciences, while Machine Learning: Health covers the healthcare, medical, biological, clinical and health sciences, and Machine Learning: Engineering focuses on applying AI and non-traditional machine learning to the most complex engineering challenges.

Here, the editors-in-chief (EiC) of the four journals discuss the growing importance of machine learning and their plans for the future.

Kyle Cranmer is a particle physicist and data scientist at the University of Wisconsin-Madison and is EiC of Machine Learning: Science and Technology (MLST). Pierre Gentine is a geophysicist at Columbia University and is EiC of Machine Learning: Earth. Jimeng Sun is a biophysicist at the University of Illinois at Urbana-Champaign and is EiC of Machine Learning: Health. Mechanical engineer Jay Lee is from the University of Maryland and is EiC of Machine Learning: Engineering.

To what do you attribute the huge growth over the past decade in research into, and applications of, machine learning?

Kyle Cranmer (KC): It is due to a convergence of multiple factors. The initial success of deep learning was driven largely by benchmark datasets, advances in computing with graphics processing units, and some clever algorithmic tricks. Since then, we’ve seen a huge investment in powerful, easy-to-use tools that have dramatically lowered the barrier to entry and driven extraordinary progress.

Pierre Gentine (PG): Machine learning has been transforming many fields of physics: it can accelerate physics simulations, better handle diverse sources of data (multimodality), and help us make better predictions.

Jimeng Sun (JS): Over the past decade, we have seen machine learning models consistently reach — and in some cases surpass — human-level performance on real-world tasks. This is not just in benchmark datasets, but in areas that directly impact operational efficiency and accuracy, such as medical imaging interpretation, clinical documentation, and speech recognition. Once ML proved it could perform reliably at human levels, many domains recognized its potential to transform labour-intensive processes.

Jay Lee (JL): Traditionally, ML growth is based on the development of three elements: algorithms, big data, and computing. The past decade’s growth in ML research is due to the perfect storm of abundant data, powerful computing, open tools, commercial incentives, and groundbreaking discoveries—all occurring in a highly interconnected global ecosystem.

What areas of machine learning excite you the most and why?

KC: The advances in generative AI and self-supervised learning are very exciting. By generative AI, I don’t mean Large Language Models — though those are exciting too — but probabilistic ML models that can be useful in a huge number of scientific applications. The advances in self-supervised learning also allow us to imagine potential uses of ML beyond well-understood supervised learning tasks.

PG: I am very interested in the use of ML for climate simulations and fluid dynamics simulations.

JS: The emergence of agentic systems in healthcare — AI systems that can reason, plan, and interact with humans to accomplish complex goals. A compelling example is in clinical trial workflow optimization. An agentic AI could help coordinate protocol development, automatically identify eligible patients, monitor recruitment progress, and even suggest adaptive changes to trial design based on interim data. This isn’t about replacing human judgment — it’s about creating intelligent collaborators that amplify expertise, improve efficiency, and ultimately accelerate the path from research to patient benefit.

JL: One is generative and multimodal ML — integrating text, images, video, and more — which is transforming human–AI interaction, robotics, and autonomous systems. Equally exciting is applying ML to nontraditional domains like semiconductor fabs, smart grids, and electric vehicles, where complex engineering systems demand new kinds of intelligence.

What vision do you have for your journal in the coming years?

KC: The need for a venue to propagate advances in AI/ML in the sciences is clear. The large AI conferences are under stress, and their review system is designed to be a filter, not a mechanism to ensure quality, improve clarity and disseminate progress. The large AI conferences also aren’t very welcoming to user-inspired research, often casting that work as purely applied. Similarly, innovation in AI/ML often takes a back seat in physics journals, which slows the propagation of those ideas to other fields. My vision for MLST is to fill this gap and nurture the community that embraces AI/ML research inspired by the physical sciences.

PG: I hope we can demonstrate that machine learning is more than just a convenient tool, and that it can play a fundamental role in physics and Earth sciences, especially when it comes to better simulating and understanding the world.

JS: I see Machine Learning: Health becoming the premier venue for rigorous ML–health research — a place where technical novelty and genuine clinical impact go hand in hand. We want to publish work that not only advances algorithms but also demonstrates clear value in improving health outcomes and healthcare delivery. Equally important, we aim to champion open and reproducible science. That means encouraging authors to share code, data, and benchmarks whenever possible, and setting high standards for transparency in methods and reporting. By doing so, we can accelerate the pace of discovery, foster trust in AI systems, and ensure that our field’s breakthroughs are accessible to — and verifiable by — the global community.

JL:  Machine Learning: Engineering envisions becoming the global platform where ML meets engineering. By fostering collaboration, ensuring rigour and interpretability, and focusing on real-world impact, we aim to redefine how AI addresses humanity’s most complex engineering challenges.

The post Physicists discuss the future of machine learning and artificial intelligence appeared first on Physics World.

Playing games by the quantum rulebook expends less energy

12 November 2025 at 09:00

Games played under the laws of quantum mechanics dissipate less energy than their classical equivalents. This is the finding of researchers at Singapore’s Nanyang Technological University (NTU), who worked with colleagues in the UK, Austria and the US to apply the mathematics of game theory to quantum information. The researchers also found that for more complex game strategies, the quantum-classical energy difference can increase without bound, raising the possibility of a “quantum advantage” in energy dissipation.

Game theory is the field of mathematics that aims to formally understand the payoff or gains that a person or other entity (usually called an agent) will get from following a certain strategy. Concepts from game theory are often applied to studies of quantum information, especially when trying to understand whether agents who can use the laws of quantum physics can achieve a better payoff in the game.

In the latest work, which is published in Physical Review Letters, Jayne Thompson, Mile Gu and colleagues approached the problem from a different direction. Rather than focusing on differences in payoffs, they asked how much energy must be dissipated to achieve identical payoffs for games played under the laws of classical versus quantum physics. In doing so, they were guided by Landauer’s principle, an important concept in thermodynamics and information theory that states that there is a minimum energy cost to erasing a piece of information.

This Landauer minimum is known to hold for both classical and quantum systems. However, in practice systems will spend more than the minimum energy erasing memory to make space for new information, and this energy will be dissipated as heat. What the NTU team showed is that this extra heat dissipation can be reduced in the quantum system compared to the classical one.
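To put this erasure cost in perspective: Landauer’s bound is k_B·T·ln 2 joules per bit erased. A quick back-of-the-envelope calculation (an illustrative sketch using standard physical constants, not a figure from the paper itself) shows how tiny the floor is — real devices dissipate far more, and it is that avoidable excess the researchers examined:

```python
import math

# Landauer bound: the minimum heat that must be dissipated
# to erase one bit of information at temperature T.
#   E_min = k_B * T * ln(2)
k_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
T = 300.0            # roughly room temperature, in kelvin

e_min = k_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {e_min:.3e} J per bit erased")
```

At room temperature the bound works out to roughly 3 × 10⁻²¹ J per bit, many orders of magnitude below what practical electronics dissipate; the quantum–classical difference discussed here concerns the excess above this floor.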

Planning for future contingencies

To understand why, consider that when a classical agent creates a strategy, it must plan for all possible future contingencies. This means it stores possibilities that never occur, wasting resources. Thompson explains this with a simple analogy. Suppose you are packing to go on a day out. Because you are not sure what the weather is going to be, you must pack items to cover all possible weather outcomes. If it’s sunny, you’d like sunglasses. If it rains, you’ll need your umbrella. But if you only end up using one of these items, you’ll have wasted space in your bag.

“It turns out that the same principle applies to information,” explains Thompson. “Depending on future outcomes, some stored information may turn out to be unnecessary – yet an agent must still maintain it to stay ready for any contingency.”

For a classical system, this can be a very wasteful process. Quantum systems, however, can use superposition to store past information more efficiently. When systems in a quantum superposition are measured, they probabilistically reveal an outcome associated with only one of the states in the superposition. Hence, while superposition can be used to store both possible pasts, upon measurement all excess information is automatically erased, “almost as if they had never stored this information at all,” Thompson explains.

The upshot is that because information erasure has close ties to energy dissipation, this gives quantum systems an energetic advantage. “This is a fantastic result focusing on the physical aspect that many other approaches neglect,” says Vlatko Vedral, a physicist at the University of Oxford, UK, who was not involved in the research.

Implications of the research

Gu and Thompson say their result could have implications for the large language models (LLMs) behind popular AI tools such as ChatGPT, as it suggests there might be theoretical advantages, from an energy consumption point of view, in using quantum computers to run them.

Another, more foundational question they hope to understand regarding LLMs is the inherent asymmetry in their behaviour. “It is likely a lot more difficult for an LLM to write a book from back cover to front cover, as opposed to in the more conventional temporal order,” Thompson notes. When considered from an information-theoretic point of view, the two tasks are equivalent, making this asymmetry somewhat surprising.

In Thompson and Gu’s view, taking waste into consideration could shed light on this asymmetry. “It is likely we have to waste more information to go in one direction over the other,” Thompson says, “and we have some tools here which could be used to analyse this”.

For Vedral, the result also has philosophical implications. If quantum agents are more optimal, he says, this “surely is telling us that the most coherent picture of the universe is the one where the agents are also quantum and not just the underlying processes that they observe”.

The post Playing games by the quantum rulebook expends less energy appeared first on Physics World.

The Download: how AI really works, and phasing out animal testing

14 November 2025 at 13:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

OpenAI’s new LLM exposes the secrets of how AI really works

The news: ChatGPT maker OpenAI has built an experimental large language model that is far easier to understand than typical models.

Why it matters: It’s a big deal, because today’s LLMs are black boxes: Nobody fully understands how they do what they do. Building a model that is more transparent sheds light on how LLMs work in general, helping researchers figure out why models hallucinate, why they go off the rails, and just how far we should trust them with critical tasks. Read the full story.

—Will Douglas Heaven

Google DeepMind is using Gemini to train agents inside Goat Simulator 3

Google DeepMind has built a new video-game-playing agent called SIMA 2 that can navigate and solve problems in 3D virtual worlds. The company claims it’s a big step toward more general-purpose agents and better real-world robots.   

The company first demoed SIMA (which stands for “scalable instructable multiworld agent”) last year. But this new version has been built on top of Gemini, the firm’s flagship large language model, which gives the agent a huge boost in capability. Read the full story.

—Will Douglas Heaven

These technologies could help put a stop to animal testing

Earlier this week, the UK’s science minister announced an ambitious plan: to phase out animal testing.

Testing potential skin irritants on animals will be stopped by the end of next year. By 2027, researchers are “expected to end” tests of the strength of Botox on mice. And drug tests in dogs and nonhuman primates will be reduced by 2030.

It’s good news for activists and scientists who don’t want to test on animals. And it’s timely too: In recent decades, we’ve seen dramatic advances in technologies that offer new ways to model the human body and test the effects of potential therapies, without experimenting on animals. Read the full story.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Chinese hackers used Anthropic’s AI to conduct an espionage campaign   
It automated a number of attacks on corporations and governments in September. (WSJ $)
+ The AI was able to handle the majority of the hacking workload itself. (NYT $)
+ Cyberattacks by AI agents are coming. (MIT Technology Review)

2 Blue Origin successfully launched and landed its New Glenn rocket
It managed to deploy two NASA satellites into space without a hitch. (CNN)
+ The New Glenn is the company’s largest reusable rocket. (FT $)
+ The launch had been delayed twice before. (WP $)

3 Brace yourself for flu season
It started five weeks earlier than usual in the UK, and the US is next. (Ars Technica)
+ Here’s why we don’t have a cold vaccine. Yet. (MIT Technology Review)

4 Google is hosting a Border Protection facial recognition app    
The app alerts officials whether to contact ICE about identified immigrants. (404 Media)
+ Another effort to track ICE raids was just taken offline. (MIT Technology Review)

5 OpenAI is trialling group chats in ChatGPT
It’d essentially make AI a participant in a conversation of up to 20 people. (Engadget)

6 A TikTok stunt sparked debate over how charitable America’s churches really are
Content creator Nikalie Monroe asked churches for help feeding her baby. Very few stepped up. (WP $)

7 Indian startups are attempting to tackle air pollution
But their solutions are far beyond the means of the average Indian household. (NYT $)
+ OpenAI is huge in India. Its models are steeped in caste bias. (MIT Technology Review)

8 An AI tool could help reduce wasted efforts to transplant organs
It predicts how likely the would-be recipient is to die during the brief transplantation window. (The Guardian)
+ Putin says organ transplants could grant immortality. Not quite. (MIT Technology Review)

9 3D-printing isn’t making prosthetics more affordable
It turns out that plastic prostheses are often really uncomfortable. (IEEE Spectrum)
+ These prosthetics break the mold with third thumbs, spikes, and superhero skins. (MIT Technology Review)

10 What happens when relationships with AI fall apart
Can you really file for divorce from an LLM? (Wired $)
+ It’s surprisingly easy to stumble into a relationship with an AI chatbot. (MIT Technology Review)

Quote of the day

“It’s a funky time.”

—Aileen Lee, founder and managing partner of Cowboy Ventures, tells TechCrunch the AI boom has torn up the traditional investment rulebook.

One more thing

Restoring an ancient lake from the rubble of an unfinished airport in Mexico City

Weeks after Mexican President Andrés Manuel López Obrador took office in 2018, he controversially canceled ambitious plans to build an airport on the deserted site of the former Lake Texcoco—despite the fact it was already around a third complete.

Instead, he tasked Iñaki Echeverria, a Mexican architect and landscape designer, with turning it into a vast urban park, an artificial wetland that aims to transform the future of the entire Valley region.

But as López Obrador’s presidential team nears its end, the plans for Lake Texcoco’s rebirth could yet vanish. Read the full story.

—Matthew Ponsford

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Maybe Gen Z is onto something when it comes to vibe dating.
+ Trust AC/DC to give the fans what they want, performing Jailbreak for the first time since 1991.
+ Nieves González, the artist behind Lily Allen’s new album cover, has an eye for detail.
+ Here’s what AI determines is a catchy tune.

The Hardest Part of Creating Conscious AI Might Be Convincing Ourselves It’s Real

31 October 2025 at 14:00

What would a machine actually have to do to persuade us it’s conscious?

As far back as 1980, the American philosopher John Searle distinguished between strong and weak AI. Weak AIs are merely useful machines or programs that help us solve problems, whereas strong AIs would have genuine intelligence. A strong AI would be conscious.

Searle was skeptical of the very possibility of strong AI, but not everyone shares his pessimism. Most optimistic are those who endorse functionalism, a popular theory of mind that takes conscious mental states to be determined solely by their function. For a functionalist, the task of producing a strong AI is merely a technical challenge. If we can create a system that functions like us, we can be confident it is conscious like us.

Recently, we have reached the tipping point. Generative AIs such as ChatGPT are now so advanced that their responses are often indistinguishable from those of a real human—see this exchange between ChatGPT and Richard Dawkins, for instance.

This issue of whether a machine can fool us into thinking it is human is the subject of a well-known test devised by English computer scientist Alan Turing in 1950. Turing claimed that if a machine could pass the test, we ought to conclude it was genuinely intelligent.

Back in 1950 this was pure speculation, but according to a pre-print study from earlier this year—that’s a study that hasn’t been peer-reviewed yet—the Turing test has now been passed. ChatGPT convinced 73 percent of participants that it was human.

What’s interesting is that nobody is buying it. Experts are not only denying that ChatGPT is conscious but seemingly not even taking the idea seriously. I have to admit, I’m with them. It just doesn’t seem plausible.

The key question is: What would a machine actually have to do in order to convince us?

Experts have tended to focus on the technical side of this question. That is, to discern what technical features a machine or program would need in order to satisfy our best theories of consciousness. A 2023 article, for instance, as reported in The Conversation, compiled a list of fourteen technical criteria or “consciousness indicators,” such as learning from feedback (ChatGPT didn’t make the grade).

But creating a strong AI is as much a psychological challenge as a technical one. It is one thing to produce a machine that satisfies the various technical criteria that we set out in our theories, but it is quite another to suppose that, when we are finally confronted with such a thing, we will believe it is conscious.

The success of ChatGPT has already demonstrated this problem. For many, the Turing test was the benchmark of machine intelligence. But if it has been passed, as the pre-print study suggests, the goalposts have shifted. They might well keep shifting as technology improves.

Myna Difficulties

This is where we get into the murky realm of an age-old philosophical quandary: the problem of other minds. Ultimately, one can never know for sure whether anything other than oneself is conscious. In the case of human beings, the problem is little more than idle skepticism. None of us can seriously entertain the possibility that other humans are unthinking automata, but in the case of machines it seems to go the other way. It’s hard to accept that they could be anything but.

A particular problem with AIs like ChatGPT is that they seem like mere mimicry machines. They’re like the myna bird who learns to vocalize words with no idea of what it is doing or what the words mean.

This doesn’t mean we will never make a conscious machine, of course, but it does suggest that we might find it difficult to accept it if we did. And that might be the ultimate irony: succeeding in our quest to create a conscious machine, yet refusing to believe we had done so. Who knows, it might have already happened.

So what would a machine need to do to convince us? One tentative suggestion is that it might need to exhibit the kind of autonomy we observe in many living organisms.

Current AIs like ChatGPT are purely responsive. Keep your fingers off the keyboard, and they’re as quiet as the grave. Animals are not like this, at least not the ones we commonly take to be conscious, like chimps, dolphins, cats, and dogs. They have their own impulses and inclinations (or at least appear to), along with the desires to pursue them. They initiate their own actions on their own terms, for their own reasons.

Perhaps if we could create a machine that displayed this type of autonomy—the kind of autonomy that would take it beyond a mere mimicry machine—we really would accept it was conscious?

It’s hard to know for sure. Maybe we should ask ChatGPT.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post The Hardest Part of Creating Conscious AI Might Be Convincing Ourselves It’s Real appeared first on SingularityHub.

A 3D Printed 16mm Movie Camera

27 October 2025 at 18:30

The basic principles of a motion picture film camera should be well understood by most readers — after all, it’s been well over a hundred years since the Lumière brothers wowed 19th century Paris with their first films. But making one yourself is another matter entirely, as they are surprisingly complex and high-precision devices. This hasn’t stopped [Henry Kidman] from giving it a go though, and what makes his camera more remarkable is that it’s 3D printed.

The problem facing a 16mm movie camera designer lies in precisely advancing the film by one frame at the correct rate while filming, something done in the past with a small metal claw that grabs each successive sprocket hole. His design eschews that for a sprocket driven by a stepper motor under the control of an Arduino. His rotary shutter is driven by another stepper motor, and he has the basis of a good camera.

The tests show promise, but he encounters a stability problem, because as it turns out, it’s difficult to print a 16mm sprocket in plastic without it warping. He solves this by aligning frames in post-processing. After fixing a range of small problems though, he has a camera that delivers a very good picture quality, and that makes us envious.

Sadly, those of us who ply our film-hacking craft in 8mm don’t have the luxury of enough space for a sprocket to replace the claw.

An Introduction to Access Technology for the Faith Community

24 October 2025 at 20:22

Access technology offers people with disabilities adaptive tools to use computers, smart phones and other devices. I am a blind Catholic professional with experience in academic political science. I also have training and program management experience in access technology, helping other blind and low-vision users solve difficulties with their devices. As I have engaged with AI and Faith, I have noticed that the community has few current links with the conversation around accessibility, and I hope this article will begin to change that.

I will be focusing on the types of access technology I know best: screen readers and dictation. Screen readers (also known as text-to-speech) allow the user to hear the device talking to them. Screen readers usually require the user to have some level of comfort with keyboarding or use of a touch screen. Dictation (also known as speech-to-text) allows the user to talk to the device. Users can optionally receive a vocal response during the dictation process. To better understand the difference, note that on an iPhone, Siri is entirely voice activated while VoiceOver requires the use of a touch screen by the user. Smart phones have built-in settings which allow more seamless integration of screen readers and dictation than is present on computers, for those who have a high comfort level with them (VoiceOver for Apple, and TalkBack for Android). Blind users often use a combination of screen readers and dictation when using AI. Because AI applications often have their own dictation abilities which also offer voice feedback, there are more options for those less comfortable with screen readers.

I hope that future articles by others more qualified will delve into access technology issues around other disabilities: adaptations for those who cannot use their arms, closed captioning for the deaf and hard of hearing, magnification and contrast for those with low vision, and bioethics issues around artificial body enhancements and Neuralink.

History of faith and accessibility

One of my reasons for interest in the conversation between faith and accessibility is that faith has already played a major part in uplifting people with disabilities; in particular, advancement for the blind. Technology originally for the blind has greatly impacted technology for all, as detailed in a great chapter of Andrew Leland’s book Country of the Blind. Louis Braille (1809-1852), the blind inventor of the braille reading code, was a devout Catholic who used his invention to create a larger library of sheet music for blind church organists. Religious groups took a leading role in producing and distributing braille books throughout the twentieth century, including the Xavier Society (of which I am a board member), the American Bible Society, and the Theosophical Society. Braille’s ingenuity and his attempts to develop an early version of the typewriter tested the boundaries of language and technology. Audio books, which were initially produced for the blind, are now used by many sighted readers, and many early audio books were religious.

Image description and faith

One of the lesser-known uses of AI is its ability to describe images. You can share a picture or an inaccessible file with an AI application and it will provide information about what is in the image, including any discernible text, along with the ability to ask further questions and share the image with another application or another person. The more common AI tools can describe images, but many of us in the blind community prefer to use apps built for the blind, including Microsoft’s Seeing AI and the blind-founded Be My Eyes. These apps predate the development of what most people think of as AI — Be My Eyes started off as an app to call human remote volunteers, while Seeing AI initially focused on reading labels — but they both received major updates in 2023.

The uses of image description to benefit people of faith are numerous: from gaining a practical orientation of a sacred space, to providing a better understanding of religious art than blind people have had before. In my experience, AI applications can correctly identify the names of religious items, but continued collaboration is necessary to make sure models do not contribute to subtle misinterpretations.

Research and writing tools

Accessible AI tools allow blind users to research questions about religious doctrine, scripture, history, prayers, and current events, whether for personal study or professional work. The most common AI tools like ChatGPT and Gemini have accessibility teams which follow WCAG and ARIA accessibility standards. One example is the use of headings, especially for computer users. If I press the “H” key on my computer, I can move between my prompt and the various sections of the AI’s response. Buttons to copy, share, or download a file are also relatively easy to find.

I have used AI to shorten the process of finding traditional Latin mass propers that I sing in my Church choir. As for writing, I have found ChatGPT’s ability to generate a prayer plan based on a particular faith to be helpful. Of course, like anyone else, screen reader users need to avoid the pitfalls of AI-driven research, from asking the wrong questions to hallucinations.

One project that needs further work is making sure smaller apps designed for a particular religious viewpoint are accessible. Many of them, in my limited experience, are mostly navigable but could use improvements for better user experience, especially making certain elements more clearly labeled.

Where do we go from here? Bridging Ethics and Accessibility

I will conclude by noting that like any other group, blind people (and the smaller group of blind people who identify with a religious faith) will have a variety of opinions about AI. Some of these are influenced by our life as blind people, but also come from our other deeply held personal and intellectual commitments. As a young father, I want to limit my children’s exposure to AI at an early age, primarily because it contributes to a preexisting problem of too much time spent in the virtual world. I am concerned with over-reliance on AI among students and others who need to continue developing their skills in critical thinking and various content areas. I think we should encourage our religious leaders to avoid using AI to write sermons; rather, it should be used for background research only.

Accessible AI has opened the world of information to blind people, in some ways building on the successes of search engines and human curated projects like Wikipedia (which I was an admin for when I was a teenager). I do not want accessibility to be the reason that someone does not use AI, even if it is for a purpose I personally disapprove of.

I look forward to continuing the conversation; I’m happy to receive emails (covich7@gmail.com) and LinkedIn messages with any thoughts, especially about improving religious apps.


Views and opinions expressed by authors and editors are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.

The post An Introduction to Access Technology for the Faith Community appeared first on AI and Faith.

Can AI and Faith-Based Hope Coexist in a Modern World?

23 October 2025 at 20:55

“… hope does not disappoint, because the love of God has been poured out within our hearts through the Holy Spirit who was given to us.” Romans 5:5 NASB

Artificial intelligence isn’t a guest visiting for a season; it has moved in and set up shop. It lives in our phones, churches, hospitals, and homes. It curates our playlists, predicts our spending, suggests our prayers, and sometimes even writes our sermons. Coexistence, then, is not optional. The question is whether we can coexist in a spiritually healthy manner, one that deepens our humanity rather than dilutes it.

To coexist faithfully means to let neither fear nor fascination rule us. Fear convinces us that AI will replace us; fascination tempts us to let it. Both miss the point. People of faith are called to live alongside technology with discernment and humility, resisting both the illusion of control and the despair of irrelevance.

For all its predictive brilliance, AI cannot pray, weep, or wonder. It can mimic compassion, but not surrender. It can analyze human emotion, but not experience it. The Franciscan imagination reminds us that creation, including the human-made world of code and circuitry, is still part of God’s world. But only humanity bears the capacity for soul, for longing, for love that suffers and redeems.

Coexistence, then, is not a negotiation with machines; it is a spiritual practice among humans about how we use them.

1. Hope as Surrender, Not Optimism

Faith-based hope is not the same as optimism. Optimism is a weather forecast; hope is a covenant. Optimism predicts outcomes; hope surrenders them.

In the Franciscan tradition, hope emerges not from certainty but from trust, trust that divine love continues to work even in confusion and disruption. As St. Francis taught, we find God not in control but in relinquishment. Hope, for Francis, was not a rosy confidence that things would turn out fine, but the willingness to walk barefoot into the unknown, trusting that God’s presence would meet him there.

When we mistake AI’s forecasts for faith’s hope, we confuse data confidence with spiritual trust. An algorithm might predict recovery rates for the sick or estimate climate outcomes for the planet. These forecasts can be useful, even inspiring, but they can’t teach us how to sit with grief, how to pray through uncertainty, or how to love what we may lose.

Hope begins where prediction ends. It is born when we choose faithfulness over control, willingness over willfulness. The AI age tempts us to measure everything, to optimize, to manage risk, to secure results. But the Franciscan path teaches that surrender is not passivity; it’s the deepest form of participation. It is the art of letting God’s grace do what our grasping cannot.

2. Solidarity as the Face of Hope

Hope in the Christian imagination is never solitary. It is, as the prophets declared, born in community. Hope is sustained not by certainty but by companionship. The Franciscan way calls this being with rather than doing for.

Solidarity is where hope breathes. It is incarnational, embodied in listening, touch, and shared presence. In this light, AI can make hope more accessible and actionable by connecting communities across distance, revealing hidden needs, or amplifying marginalized voices. It can process massive amounts of data to show us who is being left behind. It can remind us, through pattern and prediction, that our neighbor is closer than we thought.

But solidarity must remain human. A chatbot can send comforting words, but it cannot keep vigil at a bedside or shed tears that sanctify suffering. Yet it can free human caregivers from administrative burdens so that they can show up in love. When technology serves relationships rather than replacing them, it becomes a partner in the work of hope.

Francis of Assisi would recognize this: the holiness of proximity. To “be with” creation and each other is the heart of hope. Even the best-designed algorithm cannot incarnate presence. It can only point toward it. And perhaps that is its highest ethical calling, to remind us of what only we can do.

3. Prophetic Hope in Disruption

The Hebrew prophets – Isaiah, Jeremiah, and Amos – offered hope not in comfort but in collapse. They dared to believe that God’s newness could rise from ruins. Walter Brueggemann calls this “the horror of the old collapsing and the hope of the new emerging.”

Our era’s disruptions – climate change, displacement, and digital isolation – find a mirror in the age of AI. The prophetic task is not to resist technology outright but to reclaim its direction. Faith communities have a prophetic imperative to ensure that AI serves justice, mercy, and shared flourishing.

AI can go beyond prediction when it feeds real hope: when it exposes injustice, reveals truth, or helps imagine new economies of care. Imagine algorithms that prioritize the hungry over the profitable, or systems that help restore ecological balance rather than exploit it. Prophetic hope transforms technology from a mirror of power into a window of possibility.

Yet prophecy always begins with lament. We must name the pain of our age, the loneliness, the disconnection, the temptation to substitute simulation for presence. In naming it, we keep it human. The prophets of Israel didn’t offer quick solutions; they offered faithful witness. Likewise, our hope for AI is not that it will save us, but that through it, we might rediscover what needs saving: our compassion, our humility, and our sense of shared destiny.

4. A Future Worth Coexisting With

To coexist with AI faithfully is to remember that intelligence is not wisdom, and power is not love. AI may analyze vast datasets, but faith invites us into mystery, the space where surrender becomes strength and community becomes salvation.

A spiritually healthy coexistence doesn’t idolize AI nor exile it. Instead, it consecrates the tools of our age for the service of God’s reconciling work. Technology, like fire or language, can both heal and harm. Our task is to keep it lit with compassion, humility, and justice.

This is not nostalgia for a pre-digital past; it is a call for moral imagination. Coexistence means insisting that progress must serve presence, that algorithms must bend toward mercy, and that the ultimate measure of intelligence is love.

The Franciscan tradition, with its emphasis on humility and relationality, offers an antidote to the empire of efficiency. It invites us to see AI not as a rival intelligence but as a mirror reflecting what we value. The question is not, “Can AI love?” but “Can we?”

Conclusion: The Stubborn, Sacred Hope

Artificial intelligence can calculate probabilities, but it cannot kindle hope. Hope is the province of the soul, the stubborn, sacred belief that life can be renewed even when the data says otherwise.

If we approach AI with humility, we may yet find that it sharpens our awareness of what is uniquely human: our vulnerability, our longing for connection, our capacity for grace.

In the end, coexistence with AI is less about technological control and more about spiritual formation. The future worth coexisting with will be one where our tools amplify love rather than efficiency, justice rather than profit, and wonder rather than fear.

Machines may forecast the future, but only people of faith can hope their way into it.


Views and opinions expressed by authors and editors are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.

The post Can AI and Faith-Based Hope Coexist in a Modern World appeared first on AI and Faith.

Quantum computing and AI join forces for particle physics

23 October 2025 at 13:57

This episode of the Physics World Weekly podcast explores how quantum computing and artificial intelligence can be combined to help physicists search for rare interactions in data from an upgraded Large Hadron Collider.

My guest is Javier Toledo-Marín, and we spoke at the Perimeter Institute in Waterloo, Canada. As well as having an appointment at Perimeter, Toledo-Marín is also associated with the TRIUMF accelerator centre in Vancouver.

Toledo-Marín and colleagues have recently published a paper called “Conditioned quantum-assisted deep generative surrogate for particle–calorimeter interactions”.


This podcast is supported by Delft Circuits.

As gate-based quantum computing continues to scale, Delft Circuits provides the i/o solutions that make it possible.

The post Quantum computing and AI join forces for particle physics appeared first on Physics World.

4 Weird Things You Can Turn into a Supercapacitor

22 October 2025 at 16:00


What do water bottles, eggs, hemp, and cement have in common? They can be engineered into strange, but functional, energy-storage devices called supercapacitors.

As their name suggests, supercapacitors are like capacitors with greater capacity. Like batteries, they can store a lot of energy, but like capacitors they can also charge and discharge quickly. They’re usually found where a lot of power is needed quickly and for a limited time, such as providing nearly instantaneous backup electricity for a factory or data center.

Typically, supercapacitors are made up of two activated carbon or graphene electrodes, electrolytes to introduce ions to the system, and a porous sheet of polymer or glass fiber to physically separate the electrodes. When a supercapacitor is fully charged, all of the positive ions gather on one side of the separating sheet, while all of the negative ions are on the other. When it’s discharged, the ions are randomly distributed, and it can switch between these states much faster than batteries can.

Some scientists believe that supercapacitors could become more super. They think there’s potential to make these devices more sustainable, lower-cost, and maybe even better performing if they’re built from better materials.

And maybe they’re right. Last month, a group from Michigan Technological University reported making supercapacitors from plastic water bottles that had a higher capacitance than commercial ones.

Does this finding mean recycled plastic supercapacitors will soon be everywhere? The history of similar supercapacitor sustainability experiments suggests not.

About 15 years ago, it seemed like supercapacitors were going to be in high demand. Then, because of huge investments in lithium-ion technology, batteries became tough competition, explains Yury Gogotsi, who studies materials for energy-storage devices at Drexel University, in Philadelphia. “They became so much cheaper and so much faster in delivering energy that for supercapacitors, the range of application became more limited,” he says. “Basically, the trend went from making them cheaper and available to making them perform where lithium-ion batteries cannot.”

Still, some researchers remain hopeful that environmentally friendly devices have a place in the market. Yun Hang Hu, a materials scientist on the Michigan Technological University team, sees “a promising path to commercialization [for the water-bottle-derived supercapacitor] once collection and processing challenges are addressed,” he says.

Here’s how scientists make supercapacitors with strange, unexpected materials:

Water Bottles

It turns out your old Poland Spring bottle could one day store energy instead of water. Last month in the journal Energy & Fuels, the Michigan Technological University team published a new method for converting polyethylene terephthalate (PET), the material that makes up single-use plastic water bottles, into both electrodes and separators.

As odd as it may seem, this process is “a practical blueprint for circular energy storage that can ride the existing PET supply chain,” says Hu.

To make the electrodes, the researchers first shredded bottles into 2-millimeter grains and then added powdered calcium hydroxide. They heated the mixture to 700 °C in a vacuum for 3 hours and were left with an electrically conductive carbon powder. After removing residual calcium and activating the carbon (increasing its surface area), they could shape the powder into a thin layer and use it as an electrode.

The process to produce the separators was much less intensive—the team cut bottles into squares about the size of a U.S. quarter or a 1-euro coin and used hot needles to poke holes in them. They optimized the pattern of the holes for the passage of current using specialized software. PET is a good material for a separator because of its “excellent mechanical strength, high thermal stability, and excellent insulation,” Hu says.

Filled with an electrolyte solution, the resulting supercapacitor not only demonstrated potential for eco- and finance-friendly material usage, but also slightly outperformed traditional materials on one metric. The PET device had a capacitance of 197.2 farads per gram, while an analogous device with a glass-fiber separator had a capacitance of 190.3 farads per gram.
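As a rough sense of what that capacitance difference means, the energy stored per gram of electrode material scales as E = ½CV². The sketch below uses the two capacitance figures from the study; the 1.0 V cell voltage is an assumed value for illustration, not a number from the paper:

```python
# Back-of-the-envelope comparison of the PET and glass-fiber separators.
# E = 1/2 * C * V^2 gives stored energy per gram; the 1.0 V cell
# voltage is an assumption for illustration, not from the study.
def energy_per_gram(capacitance_f_per_g, voltage_v=1.0):
    return 0.5 * capacitance_f_per_g * voltage_v ** 2  # joules per gram

pet = energy_per_gram(197.2)    # PET separator: 197.2 F/g
glass = energy_per_gram(190.3)  # glass-fiber separator: 190.3 F/g
print(f"PET: {pet:.1f} J/g, glass fiber: {glass:.1f} J/g")
print(f"Relative gain: {100 * (pet - glass) / glass:.1f}%")
```

At any fixed voltage the energy gain tracks the capacitance gain, a little under 4 percent here.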

Eggs

Wait, don’t make your breakfast sandwich just yet! You could engineer a supercapacitor from one of your ingredients instead. In 2019, a University of Virginia team showed that electrodes, electrolytes, and separators could all be made from parts of a single object—an egg.

First, the group purchased grocery store chicken eggs and sorted their parts into eggshells, eggshell membranes, and the whites and yolks.

They ground the shells into a powder and mixed them with the egg whites and yolks. The slurry was freeze-dried and brought up to 950 °C for an hour to decompose. After a cleaning process to remove calcium, the team performed heat and potassium treatments to activate the remaining carbon. They then smoothed the egg-derived activated carbon into a film to be used as electrodes. Finally, by mixing egg whites and yolks with potassium hydroxide and letting it dry for several hours, they formed a kind of gel electrolyte.

To make separators, the group simply cleaned the eggshell membranes. Because the membranes naturally have interlaced micrometer-size fibers, their inherent structure allows ions to move across them just as they would through a manufactured separator.

Interestingly, the resulting fully egg-based supercapacitor was flexible, with its capacitance staying steady even when the device was twisted or bent. After 5,000 cycles, the supercapacitor retained 80 percent of its original capacitance—low compared to commercial supercapacitors, but fairly on par for others made from natural materials.

Hemp

Some people may value cannabis for its medicinal uses, but it has potential in energy storage, too. In 2024, a group from Ondokuz Mayıs University in Türkiye used pomegranate hemp plants to produce activated carbon for an electrode.

They started by drying stems of the hemp plants in a 110 °C oven for a day and then ground the stems into a powder. Next, they added sulfuric acid and heat to create a biochar, and, finally, activated the char by saturating it with potassium hydroxide and heating it again.

After 2,000 cycles, the supercapacitor with hemp-derived electrodes still retained 98 percent of its original capacitance, which is, astoundingly, in range of those made from nonbiological materials. The carbon itself had an energy density of 65 watt-hours per kilogram, also in line with commercial supercapacitors.

Cement

It may have a hold over the construction industry, but is cement coming for the energy sector, too? In 2023, a group from MIT shared how they designed electrodes from water, nearly pure carbon, and cement. Using these materials, they say, creates a “synergy” between the hydrophilic cement and hydrophobic carbon that aids the electrodes’ ability to hold layers of ions when the supercapacitor is charged.

To test the hypothesis, the team built eight electrodes using slightly different proportions of the three ingredients, different types of carbon, and different electrode thicknesses. The electrodes were saturated with potassium chloride—an electrolyte—and capacitance measurements began.

Impressively, the cement supercapacitors were able to maintain capacitance with little loss even after 10,000 cycles. The researchers also calculated that one of their supercapacitors could store around 10 kilowatt-hours—enough to serve about one third of an average American’s daily energy use—though the number is only theoretical.
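The “one third” figure checks out against typical consumption numbers. The roughly 30 kWh/day household figure below is an assumed, commonly cited US average used only for illustration, not a number from the MIT work:

```python
# Quick check of the "one third of daily energy use" claim.
# The ~30 kWh/day figure is an assumed US household average,
# used here only for illustration.
storage_kwh = 10.0       # theoretical storage of one cement supercapacitor
daily_use_kwh = 30.0     # assumed average daily household consumption
fraction = storage_kwh / daily_use_kwh
print(f"The cement device would cover {fraction:.0%} of a day's use")
```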

Handheld PC Build Is Pleasantly Chunky

22 October 2025 at 20:00

The cool thing about building your own computer is that you don’t have to adhere to industry norms of form and function. You can build whatever chunky, awesome thing your heart desires, and that’s precisely what [Rahmanshaber] did with the MutantC cyberdeck.

The build is based around a Raspberry Pi Compute Module 4. If you’re unfamiliar with the Compute Module, it’s basically a Raspberry Pi that has been designed specifically for easy integration into a larger carrier PCB. In this case, the carrier PCB interfaces all the other necessary gear to make this a fully functional computer. The PCB is installed inside a vaguely-rectangular 3D-printed enclosure, with a 5-inch TFT LCD on a sliding mount. Push the screen up, and it reveals a small-format keyboard for text entry. There’s also a hall-effect joystick and a couple of buttons for mouse control to boot. [Rahmanshaber] has designed the computer to run off a couple of different battery packs—you can use a pair of 18650 cells if you like, or switch to larger 21700 cells if you want greater capacity for longer running time.

If you want a portable Raspberry Pi cyberdeck, you might find this to be a great inspiration. We’ve featured many other designs in this vein before, too. Video after the break.

How to solve the ‘future of physics’ problem

22 October 2025 at 10:00

I hugely enjoyed physics when I was a youngster. I had the opportunity both at home and school to create my own projects, which saw me make electronic circuits, crazy flying models like delta-wings and autogiros, and even a gas chromatograph with a home-made chart recorder. Eventually, this experience made me good enough to repair TV sets, and work in an R&D lab in the holidays devising new electronic flow controls.

That enjoyment continued beyond school. I ended up doing a physics degree at the University of Oxford before working on the discovery of the gluon at the DESY lab in Hamburg for my PhD. Since then I have used physics in industry – first with British Oxygen/Linde and later with Air Products & Chemicals – to solve all sorts of different problems, build innovative devices and file patents.

While some students have a similarly positive school experience and subsequent career path, not enough do. Quite simply, physics at school is the key to so many important, useful developments, both within and beyond physics. But we have a physics education problem, or to put it another way – a “future of physics” problem.

There are just not enough school students enjoying and learning physics. On top of that there are not enough teachers enjoying physics and not enough students doing practical physics. The education problem is bad for physics and for many other subjects that draw on physics. Alas, it’s not a new problem but one that has been developing for years.

Problem solving

Many good points about the future of physics learning were made by the Institute of Physics in its 2024 report Fundamentals of 11 to 19 Physics. The report called for more physics lessons to have a practical element and encouraged more 16-year-old students in England, Wales and Northern Ireland to take AS-level physics at 17 so that they carry their GCSE learning at least one step further.

Doing so would furnish students who are aiming to study another science or a technical subject with the necessary skills and give them the option to take physics A-level. Another recommendation is to link physics more closely to T-levels – two-year vocational courses in England for 16–19 year olds that are equivalent to A-levels – so that students following that path get a background in key aspects of physics, for example in engineering, construction, design and health.

But do all these suggestions solve the problem? I don’t think they are enough and we need to go further. The key change to fix the problem, I believe, is to have student groups invent, build and test their own projects. Ideally this should happen before GCSE level so that students have the enthusiasm and background knowledge to carry them happily forward into A-level physics. They will benefit from “pull learning” – pulling in knowledge and active learning that they will remember for life. And they will acquire wider life skills too.

Developing skillsets

During my time in industry, I did outreach work with schools every few weeks and gave talks with demonstrations at the Royal Institution and the Franklin Institute. For many years I also ran a Saturday Science club in Guildford, Surrey, for pupils aged 8–15.

Based on this, I wrote four Saturday Science books about the many playful and original demonstrations and projects that came out of it. Then at the University of Surrey, as a visiting professor, I had small teams of final-year students who devised extraordinary engineering – designing superguns for space launches, 3D printers for full-size buildings and volcanic power plants inter alia. A bonus was that other staff working with the students got more adventurous too.

But that was working with students already committed to a scientific path. So lately I’ve been working with teachers to get students to devise and build their own innovative projects. We’ve had 14–15-year-old state-school students in groups of three or four, brainstorming projects, sketching possible designs, and gathering background information. We help them and get A-level students to help too (who gain teaching experience in the process). Students not only learn physics better but also pick up important life skills like brainstorming, team-working, practical work, analysis and presentations.

We’ve seen lots of ingenuity and some great projects such as an ultrasonic scanner to sense wetness of cloth; a system to teach guitar by lighting up LEDs along the guitar neck; and measuring breathing using light passing through a band of Lycra around the patient below the ribs. We’ve seen the value of failure, both mistakes and genuine technical problems.

Best of all, we’ve also noticed what might be dubbed the “combination bonus” – students having to think about how they combine their knowledge of one area of physics with another. A project involving a sensor, for example, will often involve electronics as well as the physics of the sensor, and so student knowledge of both areas is enhanced.

Some teachers may question how you mark such projects. The answer is don’t mark them! Project work and especially group work is difficult to mark fairly and accurately, and the enthusiasm and increased learning by students working on innovative projects will feed through into standard school exam results.

Not trying to grade such projects will mean more students go on to study physics further, potentially to do a physics-related extended project qualification – equivalent to half an A-level where students research a topic to university level – and do it well. Long term, more students will take physics with them into the world of work, from physics to engineering or medicine, from research to design or teaching.

Such projects are often fun for students and teachers. Teachers are often intrigued and amazed by students’ ideas and ingenuity. So, let’s choose to do student-invented project work at school and let’s finally solve the future of physics problem.

The post How to solve the ‘future of physics’ problem appeared first on Physics World.

A recipe for quantum chaos

22 October 2025 at 09:44

The control of large, strongly coupled, multi-component quantum systems with complex dynamics is a challenging task.

It is, however, an essential prerequisite for the design of quantum computing platforms and for the benchmarking of quantum simulators.

A key concept here is that of quantum ergodicity. This is because quantum ergodic dynamics can be harnessed to generate highly entangled quantum states.

In classical statistical mechanics, an ergodic system evolving over time will explore all possible microstates uniformly. Mathematically, this means that a sufficiently large collection of random samples from an ergodic process can represent the average statistical properties of the entire process.

Quantum ergodicity is simply the extension of this concept to the quantum realm.
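A toy classical example makes the idea concrete: for an ergodic process, the time average along one long run converges to the ensemble average. The fair die below is a deliberately simple stand-in, not a model of the quantum systems discussed in the paper:

```python
import random

# Toy illustration of ergodicity: for an ergodic process, the time
# average along one long trajectory matches the ensemble average.
# A fair six-sided die is the simplest such process.
random.seed(42)
n = 100_000
time_average = sum(random.randint(1, 6) for _ in range(n)) / n
ensemble_average = sum(range(1, 7)) / 6  # exactly 3.5
print(f"time avg: {time_average:.3f}, ensemble avg: {ensemble_average}")
```

With 100,000 rolls the two averages agree to within a few hundredths; a non-ergodic process (say, a die glued to one face) would show no such convergence.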

Closely related to this is the idea of chaos. A chaotic system is one that is very sensitive to its initial conditions. Small changes can be amplified over time, causing large changes in the future.

The ideas of chaos and ergodicity are intrinsically linked as chaotic dynamics often enable ergodicity.

Until now, it has been very challenging to predict which experimentally preparable initial states will trigger quantum chaos and ergodic dynamics over a reasonable time scale.

In a new paper published in Reports on Progress in Physics, a team of researchers have proposed an ingenious solution to this problem using the Bose–Hubbard Hamiltonian.

They took as an example ultracold atoms in an optical lattice (a typical choice for experiments in this field) to benchmark their method.

The results show that there are certain tangible threshold values which must be crossed in order to ensure the onset of quantum chaos.

These results will be invaluable for experimentalists working across a wide range of quantum sciences.

The post A recipe for quantum chaos appeared first on Physics World.

This jumping roundworm uses static electricity to attach to flying insects

17 October 2025 at 14:30

Researchers in the US have discovered that a tiny jumping worm uses static electricity to increase the chances of attaching to its unsuspecting prey.

The parasitic roundworm Steinernema carpocapsae, which lives in soil, is already known to leap some 25 times its body length into the air. It does this by curling into a loop and springing upwards, rotating hundreds of times a second.

If the nematode lands successfully, it releases bacteria that kill the insect within a couple of days, after which the worm feasts on the carcass and lays its eggs. If it fails to attach to a host, however, it faces death itself.

While static electricity plays a role in how some non-parasitic nematodes detach from large insects, little is known about whether static helps their parasitic counterparts attach to an insect.

To investigate, researchers at Emory University and the University of California, Berkeley, conducted a series of experiments in which they used high-speed microscopy techniques to film the worms as they leapt onto a fruit fly.

They did this by tethering a fly with a copper wire that was connected to a high-voltage power supply.

They found that a charge of a few hundred volts – similar to that generated in the wild by an insect’s wings rubbing against ions in the air – fosters a negative charge on the worm, creating an attractive force with the positively charged fly.

Carrying out simulations of the worm jumps, they found that without any electrostatics, only 1 in 19 worm trajectories successfully reached their target. The greater the voltage, however, the greater the chance of landing. For 880 V, for example, the probability was 80%.
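Restating the simulation results as probabilities makes the size of the electrostatic boost explicit; both numbers come straight from the study:

```python
# The study's simulated attachment odds, restated as probabilities.
baseline = 1 / 19      # no electrostatic force: about 5% success
at_880_v = 0.80        # with 880 V on the tethered fly
print(f"baseline: {baseline:.1%}, at 880 V: {at_880_v:.0%}")
print(f"electrostatics boosts the odds roughly {at_880_v / baseline:.0f}-fold")
```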

The team also carried out experiments using a wind tunnel, finding that the presence of wind helped the nematodes drift and this also increased their chances of attaching to the insect.

“Using physics, we learned something new and interesting about an adaptive strategy in an organism,” notes Emory physicist Ranjiangshang Ran. “We’re helping to pioneer the emerging field of electrostatic ecology.”

The post This jumping roundworm uses static electricity to attach to flying insects appeared first on Physics World.
