
As machine-learning algorithms grow more sophisticated, artificial intelligence seems poised to revolutionize the practice of science itself. In part, that revolution will come from software that helps scientists work more effectively. But some advocates are hoping for a more fundamental transformation in the process of science. The Nobel Turing Challenge, issued in 2021 by noted computer scientist Hiroaki Kitano, tasked the scientific community with producing, by 2050, a computer program capable of making a discovery worthy of a Nobel Prize.

Part of the work of scientists is to uncover laws of nature: basic principles that distill the fundamental workings of our Universe. Many of them, like Newton’s laws of motion or the law of conservation of mass in chemical reactions, are expressed in a rigorous mathematical form. Others, like the law of natural selection or Mendel’s laws of genetic inheritance, are more conceptual.

The scientific community consists of theorists, data analysts, and experimentalists who collaborate to uncover these laws. The dream behind the Nobel Turing Challenge is to offload the tasks of all three onto artificial intelligence.

Outsourcing (some) science

Outsourcing the work of scientists to machines is not a new idea. As far back as the 1970s, Patrick Langley, then a researcher at Carnegie Mellon University, developed a program he called BACON, after Francis Bacon, who pioneered the use of empirical reasoning in science. BACON could take a dataset and combine its variables in different ways until it found a combination that stayed constant across the data, the hallmark of an underlying physical law. Given the right measurements, BACON rediscovered Kepler’s third law of planetary motion, which relates the size of a planet’s orbit around the Sun to the time it takes to complete it. However, the limited computing power of the era kept BACON from taking on more complex tasks.
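
To make that concrete, here is a minimal sketch of BACON’s core heuristic in Python (an illustration written for this article, not Langley’s original program): try small integer powers of the observed variables and check whether any combination stays constant across the data. The orbital distances (in astronomical units) and periods (in years) are approximate values for six planets.

```python
import itertools

# Approximate data: (mean orbital distance in AU, orbital period in years)
data = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.86),
    "Saturn":  (9.537, 29.45),
}

def is_constant(values, tol=0.02):
    """True if all values agree to within a relative tolerance."""
    mean = sum(values) / len(values)
    return all(abs(v - mean) / mean < tol for v in values)

# BACON-style search: form combinations D**a * T**b of distance D and
# period T, and report any that are invariant across the planets.
for a, b in itertools.product(range(-3, 4), repeat=2):
    if a == 0 and b == 0:
        continue
    values = [d**a * t**b for d, t in data.values()]
    if is_constant(values):
        print(f"Invariant found: D^{a} * T^{b} = constant")
```

Run on these six planets, the search reports that D^3 * T^-2 is constant (along with its reciprocal), which is exactly Kepler’s third law.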

In the 1990s, with more computing power at their fingertips, scientists developed automated tools that could search through candidate formulas until they found one that fit a given dataset. This technique, called symbolic regression, breeds formulas as if they were a population of organisms: expressions inherit pieces from their parents and mutate at random, and only the ones that fit the data best survive to the next generation. Symbolic regression and its variants spurred a new era of AI scientists, many with similarly referential names, like Eureqa and AI Feynman.
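
A toy version of that evolutionary loop fits in a few dozen lines of Python. The sketch below illustrates the general idea, not the actual code behind Eureqa or AI Feynman; the hidden law y = x² + x, the operator set, and the population parameters are all assumptions chosen for the demo.

```python
import operator
import random

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def random_tree(depth=3):
    """Grow a random expression tree over x and small integer constants."""
    if depth <= 0 or random.random() < 0.3:
        return "x" if random.random() < 0.7 else random.randint(1, 3)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Compute the value of an expression tree at a given x."""
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def mutate(tree, depth=3):
    """Replace a randomly chosen subtree with a fresh random one."""
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left, depth - 1), right)
    return (op, left, mutate(right, depth - 1))

def fitness(tree, data):
    """Sum of squared errors against the dataset (lower is better)."""
    return sum((evaluate(tree, x) - y) ** 2 for x, y in data)

# Synthetic observations of the hidden law y = x**2 + x.
data = [(x, x**2 + x) for x in range(-5, 6)]

population = [random_tree() for _ in range(200)]
for generation in range(50):
    population.sort(key=lambda t: fitness(t, data))
    survivors = population[:50]                        # the fittest survive
    offspring = [mutate(random.choice(survivors)) for _ in range(150)]
    population = survivors + offspring                 # inheritance + mutation

best = min(population, key=lambda t: fitness(t, data))
print("best formula:", best, "| error:", fitness(best, data))
```

Most runs converge on a tree equivalent to y = x² + x with zero error. Real symbolic-regression systems add refinements such as crossover between parent formulas, numerical fitting of constants, and penalties for overly complex expressions.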

These sophisticated algorithms can extract new formulas, which may describe scientific laws, from vast datasets. Present them with enough raw information, and they can identify and quantify the underlying relationships, spitting out plausible hypotheses and candidate equations. They play the role of the data analyst, but experts say this approach isn’t about replacing human scientists altogether.

“The biggest roadblock is knowledge representation,” says Ross King, a machine-learning researcher at the University of Cambridge. “Because if you look at big breakthroughs, like Einstein’s theory of special relativity, it came from a philosophical question about magnetism. And it’s a reformulation of our knowledge. We’re nowhere near a computer being able to do that.”