AI gets closer to reading minds in new ‘brain decoder’ study
Think of a number between 1 and 10. Odds are we can guess it. (Was it 7?)
Now, what if, instead, you think of a story? As you can imagine, it would be a lot harder for us to guess that—unless, of course, we could read your mind.
Sci-fi posturing aside, medical AI may be bringing us a step closer to something like mind reading with the technology we’re examining today: a “brain decoder.”
A team of researchers from the University of Texas at Austin has published a paper in Nature Neuroscience detailing the use of this new brain decoder, powered by GPT-1.
Three participants in the proof-of-concept study listened to audio stories, including NPR's Modern Love and The Moth Radio Hour, while in an fMRI machine. The technology then took the data from their scans and was able to broadly describe what happened in the stories the subjects heard.
How does the brain decoder work?
As the participants listened to the stories, neurons in their brains fired, consuming oxygen carried in the blood. The fMRI machine tracked the resulting changes in blood flow and oxygenation across the brain, revealing which regions were active moment to moment.
The study's language model was then trained on 16 hours of data showing which parts of the brain were activated according to these scans. By digesting this training data with the help of GPT-1, a predecessor of the technology behind ChatGPT, the model was able to describe the gist of the stories participants heard.
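The process described above can be sketched as a toy beam search: a language model proposes candidate word sequences, an encoding model predicts the brain response each candidate should produce, and the decoder keeps whichever candidates best match the actual scan. Everything below, from the vocabulary to the feature function to the function names, is a simplified illustration of that idea, not the study's actual code.

```python
import numpy as np

VOCAB = ["the", "dog", "ran", "home", "cat", "sat"]

def propose_next_words(prefix):
    # Stand-in for the GPT-style language model: in the real system,
    # this would rank plausible continuations of the prefix.
    return VOCAB

def word_features(word):
    # Deterministic toy "semantic" features derived from the word itself.
    seed = sum(ord(c) for c in word)
    return np.random.default_rng(seed).standard_normal(8)

def predict_brain_response(words):
    # Stand-in for the encoding model trained on hours of scans:
    # maps a candidate word sequence to a predicted brain response.
    return np.mean([word_features(w) for w in words], axis=0)

def decode(observed, n_words=3, beam_width=3):
    # Beam search: keep the word sequences whose predicted brain
    # responses are closest to the observed scan.
    beams = [([], 0.0)]
    for _ in range(n_words):
        candidates = []
        for prefix, _ in beams:
            for w in propose_next_words(prefix):
                seq = prefix + [w]
                err = float(np.sum((predict_brain_response(seq) - observed) ** 2))
                candidates.append((seq, err))
        candidates.sort(key=lambda c: c[1])
        beams = candidates[:beam_width]
    return beams[0][0]

# Simulate a "scan" produced by a known sentence, then decode it.
true_words = ["the", "dog", "ran"]
observed = predict_brain_response(true_words)
print(decode(observed))
```

The key design point mirrors the study: the system never reads words out of the brain directly. It guesses many possible word sequences and checks which guess best explains the recorded brain activity.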
Study co-author Alexander Huth, an assistant professor of neuroscience and computer science, underwent the scan himself, and he hesitated to say that "measuring thoughts" is quite what he and his team are up to.
“I think we are decoding something that is deeper than language,” he said in an interview with STAT.
Is this technology new?
Sort of. You may even remember we discussed a similar technology—a mood decoder to fight depression—back in January.
Earlier brain decoders focused on the brain regions that process speech. With this advancement, the brain decoder was even used to output descriptions of videos participants watched with no spoken words, such as Pixar short films.
National Institute of Mental Health scientist Francisco Pereira, who has worked on brain decoders for over a decade, said that with the technology available in 2015, researchers would have been "lucky if we would get two words together that made sense," nowhere close to the performance achieved here.
But it turns out the stories may be the key. Unlike previous studies, in which individual words ("dog" or "cat") flashed on a screen, the researchers were interested in processing continuous language.
"The Moth stories have been great. I've cried after listening to them. I've laughed really hard," an anonymous study participant said. That laughter, unfortunately, also made it hard not to move in the fMRI machine, which can compromise the data. "It's a double-edged sword."
However, this risk was worth it because the engaging nature of the stories produced good data. If a subject stopped paying attention or let their mind wander, the researchers wouldn't be able to train the model as they needed.
A sign of medical AI’s advancement
As the generative AI arms race in Big Tech heats up, and Google makes serious inroads in the medical AI space, there are signs left and right (and above and below) that AI technology is advancing unbelievably fast.
For many, that is a concern. With regulation only just beginning to catch up with the runaway advancement, medical AI developers and researchers are forced to set ethical ground rules themselves. For this brain decoder's researchers, that meant actively working with participants to devise ways to protect their privacy.
But there’s also an optimistic angle. While this technology is not completely new, it’s eons ahead of where its predecessors were. It’s a perfect example of how the acceleration of AI development is leading to research advancements that would’ve seemed impossible just a few years ago.
We can’t wait to see what brilliant, AI-curious medical researchers come up with next.