In our efforts to understand the Universe, we’re getting greedy, making more observations than we know what to do with. Satellites beam down hundreds of terabytes of information each year, and one telescope under construction in Chile will produce 15 terabytes of pictures of space every night. It’s impossible for humans to sift through it all. As astronomer Carlo Enrico Petrillo told The Verge: “Looking at images of galaxies is the most romantic part of our job. The problem is staying focused.” That’s why Petrillo trained an AI program to do the looking for him.
Petrillo and his colleagues were searching for a phenomenon that’s basically a space telescope. When a massive object (a galaxy or a black hole) comes between a distant light source and an observer on Earth, it bends the space and light around it, creating a lens that gives astronomers a closer look at incredibly old, distant parts of the Universe that should be blocked from view. This is called a gravitational lens, and these lenses are key to understanding what the Universe is made of. So far, though, finding them has been slow and tedious work.
That’s where artificial intelligence comes in, and finding gravitational lenses is just the start. As Stanford professor Andrew Ng once put it, AI can automate anything “a typical person can do […] with less than one second of thought.” Less than a second doesn’t sound like much room for thinking, but when it comes to sifting through the vast amounts of data created by contemporary astronomy, it’s a godsend.
This new wave of AI-assisted astronomers isn’t just interested in how the technology can sort data. They’re exploring what could be an entirely new mode of scientific discovery, one where artificial intelligence maps out parts of the Universe we’ve never even seen.
But first: gravitational lenses. The phenomenon is a consequence of Einstein’s theory of general relativity, and Einstein himself worked out the math for lensing all the way back in the 1930s, but the first example wasn’t found until 1979. Why? Well, space is very, very big, and it takes a long time for humans to look at it, especially without today’s telescopes. That’s made the hunt for gravitational lenses a piecemeal affair so far.
“The lenses we have right now have been found through all sorts of ways,” Liliya Williams, a professor in astrophysics at the University of Minnesota, tells The Verge. “Some have been discovered by accident, by people looking for something completely different. There were some found by people looking for them, through two or three surveys. But the rest were found serendipitously.”
Looking at images is exactly the kind of thing an AI is good at. So Petrillo and colleagues at the universities of Bonn, Naples, and Groningen turned to an AI tool beloved by Silicon Valley: a type of computer program made up of digital “neurons,” modeled after those in the brain, that fire in response to input. Feed these programs (called neural networks) lots of data and they’ll begin to recognize patterns. They’re particularly good at dealing with visual information, and are used to power all sorts of machine vision systems — from cameras in self-driving cars to Facebook’s picture-tagging facial recognition.
As described in a paper published last month, applying this tech to the hunt for gravitational lenses was surprisingly straightforward. First, the scientists made a dataset to train the neural network with, which meant generating 6 million fake images showing what gravitational lenses do and do not look like. Then, they turned the neural network loose on the data, leaving it to slowly identify patterns. A bit of fine-tuning later, and they had a program that recognized gravitational lenses in the blink of an eye.
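The pipeline described above can be sketched in miniature. The toy below is not the team’s actual convolutional network or their simulation code; it uses a hypothetical 8×8 “sky patch” where a lens shows up as a faint ring of light, and a simple logistic-regression classifier stands in for the neural network. But the workflow is the same: generate fake lens and non-lens images, train on them, then check the classifier on fresh simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(has_lens):
    """Toy 8x8 'sky patch': noise, plus a faint ring of light if a lens is present."""
    img = rng.normal(0, 0.3, (8, 8))
    if has_lens:
        yy, xx = np.mgrid[0:8, 0:8]
        r = np.hypot(yy - 3.5, xx - 3.5)
        img += np.exp(-((r - 2.5) ** 2))  # ring around the foreground deflector
    return img.ravel()

# Step 1: build a simulated training set (the team generated millions of these)
X = np.array([make_image(i % 2 == 0) for i in range(2000)])
y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(2000)])

# Step 2: train a tiny classifier by gradient descent on the logistic loss
w, b = np.zeros(64), 0.0
for _ in range(300):
    p = 1 / (1 + np.exp(-(X @ w + b)))  # predicted probability of a lens
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

# Step 3: evaluate on freshly simulated images it has never seen
X_test = np.array([make_image(i % 2 == 0) for i in range(400)])
y_test = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(400)])
acc = (((1 / (1 + np.exp(-(X_test @ w + b)))) > 0.5) == y_test).mean()
print(f"test accuracy: {acc:.2f}")
```

The ring here is deliberately easy to spot; real survey images are far messier, which is why the team needed a deep network and millions of examples rather than this two-line model.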
“An extremely good human classifier would classify images at a pace of about one thousand per hour,” says Petrillo. With the sort of data his team was using, he estimates that one would find a lens every 30,000 galaxies. So a human classifier working without sleep or rest for a week would expect to find only five or six lenses. The neural network, by comparison, ripped through a database of 21,789 images in just 20 minutes. And that, says Petrillo, was with a single ancient computer processor. “This time can be shortened by a great amount,” he says.
The neural network wasn’t as accurate as a human, though. In order to avoid overlooking any lenses, its parameters were kept generous. It produced 761 possible candidates, which humans examined and whittled down to a selection of 56. Further observations will be needed to confirm these are legitimate finds, but Petrillo guesses that around a third will turn out to be the real deal. That works out to roughly one lens spotted per minute, compared to the hundred or so the entire scientific community has found over the past few decades. It’s an incredible speed-up, and a perfect example of how AI can help astronomy.
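The numbers quoted above are easy to check as back-of-envelope arithmetic: an expert classifying 1,000 images an hour, with one lens per 30,000 galaxies, really does net only five or six lenses in a sleepless week, while the network’s 21,789 images in 20 minutes is a rate dozens of times faster.

```python
# Human classifier, using the figures Petrillo quotes
human_rate = 1000            # images classified per hour by an expert
lens_frequency = 1 / 30000   # roughly one lens per 30,000 galaxies
hours_in_week = 24 * 7       # a week with no sleep or rest

human_lenses_per_week = human_rate * hours_in_week * lens_frequency
print(human_lenses_per_week)  # matches the "five or six lenses" above

# The neural network processed 21,789 images in 20 minutes
machine_rate = 21789 / (20 / 60)      # images per hour
speedup = machine_rate / human_rate   # how many times faster than a human
print(round(speedup))
```

And that speed-up is the floor, not the ceiling, given Petrillo’s note that the run used a single aging processor.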
Finding these lenses is essential to understanding one of the grand mysteries of astronomy: what is the Universe actually made of? The matter we’re familiar with (planets, stars, asteroids, and so on) is thought to make up only 5 percent of all physical stuff; the other 95 percent is dark energy and a hypothetical substance known as dark matter, which we’ve never directly observed. Instead, we study the gravitational effects dark matter has on the rest of the Universe, with gravitational lenses serving as one of the key indicators.
So what else can AI do? Researchers are working on a number of new tools. Some, like Petrillo’s, are taking on the job of identification: classifying galaxies, for example. Others are helping comb through data streams for interesting signals, like a neural network that removes human-made interference from radio telescopes to help scientists home in on potentially exciting signals. Still more have been used to identify pulsars, locate unusual exoplanets, or sharpen up low-res telescope imagery. In short, there’s a bonanza of potential applications.
This explosion is partly because of the larger hardware trends that have enabled the wider field of AI, like an abundance of cheap computing power. But it’s also because of the changing nature of astronomy. Astronomers no longer keep lonely vigils on cloudless nights, tracking the movement of individual planets; instead, they use sophisticated machinery that guzzles up portions of the sky in gulps of data unimaginable to early scientists. Better telescopes and better data storage mean there’s more data than ever to analyze, says Williams.
Analyzing great swaths of data is exactly what artificial intelligence is great at. We can teach it to recognize patterns, and then set it to work like a tireless assistant: never blinking, and always consistent.
Does it worry astronomers that they’re placing trust in a machine that might lack the human insight needed to spot something sensational? Petrillo says he’s not bothered. “In general, humans are more biased, less efficient, and more prone to mistakes than machines.” Williams agrees: “Computers may miss certain things, but they’ll miss them in a systematic way.” As long as we know what it is they don’t know, we can deploy automated systems without too much risk.
For some astronomers, the potential for AI goes beyond mere data sorting. They think artificial intelligence could be used to create information, filling in blind spots in our observations of the Universe.
Astronomer Kevin Schawinski and his team, who specialize in galaxy and black hole astrophysics, used AI to sharpen the resolution of blurry telescope pictures. To do this, they deployed a type of neural network that excels at generating variations of the data it studies, like a well-trained forger that can imitate a famous painter’s style. These networks, called generative adversarial networks, or GANs, have been used to create fake faces based on pictures of celebrities; fake audio dialogue that mimics individuals’ voices; and a range of other data types. They’re one of the richest seams of contemporary AI research, and for Schawinski, they meant getting information that wasn’t there before.
The paper published by Schawinski and his team earlier this year showed how GANs could be used to improve the quality of pictures of space. They lowered the image quality of a bunch of pictures of galaxies, adding noise and blurring, then used a GAN trained on telescope imagery to up their resolution, comparing these to the originals. The results were strikingly accurate: good enough to convince Schawinski that there’s potential for AI to improve all sorts of datasets in astronomy. He says he and his team have a “lot of cool results in the pipeline,” but they can’t reveal anything before they’re published.
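The shape of that experiment, degrade an image, recover it, then score the recovery against the pristine original, can be sketched without any machine learning at all. In the toy below, a hypothetical 2-D Gaussian blob stands in for a galaxy image, and a simple 3×3 mean filter stands in for the trained GAN; the point is the evaluation loop, not the recovery method.

```python
import numpy as np

rng = np.random.default_rng(1)

# A smooth synthetic "galaxy": a 2-D Gaussian blob of light
yy, xx = np.mgrid[0:32, 0:32]
clean = np.exp(-((yy - 16) ** 2 + (xx - 16) ** 2) / 50.0)

# Degrade it, as the experiment did, by adding noise
noisy = clean + rng.normal(0, 0.2, clean.shape)

# Stand-in recovery step: a 3x3 mean filter instead of a trained GAN
padded = np.pad(noisy, 1, mode="edge")
recovered = sum(
    padded[i:i + 32, j:j + 32] for i in range(3) for j in range(3)
) / 9.0

# Score both against the held-back original, as the paper's comparison does
mse = lambda a: ((a - clean) ** 2).mean()
print(f"noisy MSE: {mse(noisy):.4f}, recovered MSE: {mse(recovered):.4f}")
```

Because the original is held back during recovery, the comparison is honest: a good recovery method drives the error back toward zero, and a GAN trained on telescope imagery can do far better on real galaxies than a blind smoothing filter ever could.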
Schawinski is cautious about the project. After all, it sounds like it goes against core principles of science: that you can only learn about the Universe by observing it directly. “This is a dangerous tool precisely for this reason,” he says, and one that should only ever be used where we a) have ample, accurate training data, and b) can check the results. So, you might train a GAN to generate data about black holes, then set it loose on a part of the sky that hasn’t been observed in much detail before. Then, if it suggests there is a black hole there, astronomers would confirm this finding first-hand — just like with the gravitational lenses. Schawinski says that, as with all scientific tools, there needs to be rigorous and patient testing to make sure the results you’re getting aren’t “leading you astray.”
If these methods prove fruitful, they could become a completely new method of exploration, one Schawinski places alongside classical computer simulations and good, old-fashioned observation. It’s very early days, but the pay-off could be huge. “If you have this tool,” says Schawinski, “you can go to all the existing data that sits in archives, and maybe improve some of it slightly, and extract more scientific value.” Value that wasn’t there before. AI would be performing a sort of scientific alchemy, helping us turn old knowledge into new. And we’d be able to explore space like never before, without even leaving Earth.