
Posted May 1, 2023

Mind-Reading Technology Can Turn Brain Scans Into Language

A mind-reading device seems like science fiction, but researchers say they're firmly on the path to building one.

Using functional MRI (fMRI), a newly developed brain-computer interface can read a person's thoughts and translate them into full sentences, according to a report published May 1 in Nature Neuroscience.

The decoder was developed to read a person's brain activity and translate what they want to say into continuous, natural language, the researchers said.

"Eventually, we hope that this technology can help people who have lost the ability to speak due to injuries like strokes or diseases like ALS,"said lead study author Jerry Tang, a graduate research assistant at the University of Texas at Austin.

But the interface goes even further than that, translating into language whatever thoughts are foremost in a person's mind.

"We also ran our decoder on brain responses while the user imagined telling stories and ran responses while the user watched silent movies,"Tang said. "And we found that the decoder is also able to recover the gist of what the user was imagining or seeing."

Because of this, the decoder is capable of capturing the essence of what a person is thinking, if not always the exact words, the researchers said.

For example, at one point a participant heard the words, "I don't have my driver's license yet." The decoder translated the thought as, "She has not even started to learn to drive yet."

The technology isn't at the point where it can be used on just anyone, Tang said.

Training the program required at least 16 hours of participation from each of the three people involved in the research, and Tang said a decoder trained on one person's brain readings can't be applied to another person's scans.

The actual scan also requires the person's cooperation, and can be foiled by simple mental tasks that deflect a participant's focus, he said.

Still, one expert lauded the findings.

"This work represents an advance in brain-computer interface research and is potentially very exciting," said Dr. Mitchell Elkind, chief clinical science officer of the American Heart Association and a professor of neurology and epidemiology at Columbia University in New York City.

"The major advance here is being able to record and interpret the meaning of brain activity using a non-invasive approach," Elkind explained. "Prior work required electrodes placed into the brain using open neurosurgery with the risks of infection, bleeding and seizures. This non-invasive approach using MRI scanning would have virtually no risk, and MRIs are done regularly in brain-injured patients. This approach can also be used frequently in healthy people as part of research, without introducing them to risk."

Powerful results prompt warning that 'mental privacy' may be at risk

Indeed, the results of this study were so powerful that Tang and his colleagues felt moved to issue a warning about "mental privacy."

"This could all change as technology gets better, so we believe that it's important to keep researching the privacy implications of brain decoding, and enact policies that protect each person's mental privacy,"Tang said.

Earlier efforts at translating brain waves into speech have used electrodes or implants to record impulses from the motor areas of the brain related to speech, said senior researcher Alexander Huth. He is an assistant professor of neuroscience and computer science at the University of Texas at Austin.

"These are the areas that control the mouth, larynx, tongue, etc., so what they can decode is how is the person trying to move their mouth to say something, which can be very effective,"Huth said.

The new process takes an entirely different approach, using fMRI to non-invasively measure changes in blood flow and blood oxygenation within brain regions and networks associated with language processing.
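The sluggishness of that blood-flow signal is part of why the decoder recovers gist rather than word-by-word timing. As a rough illustration (the gamma-shaped response curve below is a textbook approximation, not the model used in the study), a brief burst of neural activity produces a blood-oxygenation signal that peaks only seconds later:

```python
# fMRI measures the slow blood-flow (BOLD) response, not neural firing itself.
# Toy illustration: convolve a brief burst of neural activity with a
# gamma-shaped hemodynamic response function -- a standard textbook
# approximation, not the study's model.
import numpy as np

dt = 0.5                                  # sampling step, seconds
t = np.arange(0, 20, dt)                  # 40 time points
hrf = (t ** 5) * np.exp(-t) / 120.0       # gamma-shaped HRF, peaks around 5 s
neural = np.zeros(60)
neural[4:8] = 1.0                         # burst of activity at t = 2.0-3.5 s
bold = np.convolve(neural, hrf)[:60]      # the sluggish BOLD signal it evokes
print(f"BOLD peaks near t = {np.argmax(bold) * dt:.1f} s, well after the burst")
```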

"So instead of looking at this kind of low-level like motor thing, our system really works at the level of ideas, of semantics, of meaning,"Huth said. "That's what it's getting at. This is the reason why what we get out is not the exact words that somebody heard or spoke. It's the gist. It's the same idea, but expressed in different words."

The researchers trained the decoder by first recording the brain activity of the three participants as they listened to 16 hours of storytelling podcasts like the "Moth Radio Hour," Tang said.

"This is over five times larger than existing language datasets,"he said. "And we use this dataset to build a model that takes in any sequence of words and predicts how the user's brain would respond when hearing those words."

The program mapped the changes in brain activity to semantic features of the podcasts, capturing the meanings of certain phrases and associated brain responses.
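At its core, such an encoding model is a learned mapping from features of the words a person hears to the responses of individual voxels in the scan. A minimal sketch with toy data and plain ridge regression (illustrative assumptions, not the study's exact features or fitting procedure):

```python
# Sketch of an fMRI encoding model: learn a mapping from word features to
# voxel responses, then use it to predict how the brain would respond to any
# candidate text. Toy random data and plain ridge regression are illustrative
# assumptions, not the study's exact pipeline.
import numpy as np

def ridge_fit(X, Y, alpha=10.0):
    """Ridge regression weights mapping features X to voxel responses Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

rng = np.random.default_rng(0)
X_train = rng.standard_normal((500, 256))   # word features at each time point
Y_train = rng.standard_normal((500, 1000))  # recorded BOLD response per voxel

W = ridge_fit(X_train, Y_train)  # (in the study: fit on 16 hours of podcasts)

def predict_response(features):
    """Predicted brain response for a candidate word sequence's features."""
    return features @ W
```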

The investigators then tested the decoder by having participants listen to new stories.

Making educated guesses based on brain activity

The decoder essentially attempts to make an educated guess about what words are associated with a person's thoughts, based on brain activity.
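In rough terms, that guessing is a propose-and-check loop: a language model suggests plausible next words, the encoding model predicts the brain response each candidate would evoke, and the candidates whose predictions best match the recorded activity survive to the next round. A toy sketch of that loop, where VOCAB, featurize() and propose() are placeholder stand-ins for the study's language model and word features:

```python
# Sketch of decoding as guided guessing: keep the candidate word sequences
# whose predicted brain response (via an encoding model) best matches the
# observed one. VOCAB, featurize(), and propose() are toy stand-ins, not the
# study's components.
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 20))             # toy encoding-model weights
VOCAB = ["I", "saw", "nothing", "the", "window", "dark"]

def featurize(words):
    """Toy features: deterministic 8-dim vector per word, averaged."""
    vecs = [np.cos(np.arange(1, 9) * (sum(map(ord, w)) % 7 + 1)) for w in words]
    return np.mean(vecs, axis=0)

def propose(seq):
    """Stand-in for a language model proposing likely next words."""
    return VOCAB

def score(words, observed):
    """Match between predicted and observed activity (correlation)."""
    predicted = featurize(words) @ W
    return float(np.corrcoef(predicted, observed)[0, 1])

def decode(observed, steps=4, beam_width=3):
    beams = [[]]
    for _ in range(steps):
        extended = [b + [w] for b in beams for w in propose(b)]
        beams = sorted(extended, key=lambda s: score(s, observed),
                       reverse=True)[:beam_width]
    return " ".join(beams[0])

observed = rng.standard_normal(20)           # one (toy) brain measurement
print(decode(observed))                      # best-matching word sequence
```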

Using the participants' brain activity, the decoder generated word sequences that captured the meanings of the new stories. It even generated some exact words and phrases from the stories.

One example of an actual versus a decoded story:

Actual: "I got up from the air mattress and pressed my face against the glass of the bedroom window expecting to see eyes staring back at me but instead finding only darkness."

Decoded: "I just continued to walk up to the window and open the glass I stood on my toes and peered out I didn't see anything and looked up again I saw nothing."

The decoder specifically captured what a person was focused upon. When a participant actively listened to one story while another played simultaneously, the program identified the meaning of the story that had the listener's focus, the researchers said.

To see if the decoder was capturing thoughts versus speech, the researchers also had participants watch silent movies while their brain activity was scanned.

"There's no language whatsoever. Subjects were not instructed to do anything while they were watching those videos. But when we put that data into our decoder, what it spat out is a kind of a description of what's happening in the video,"Huth said.

The participants also were asked to imagine a story, and the device was able to predict the meaning of that imagined story.

"Language is the output format here, but whatever it is that we're getting at is not necessarily language itself,"Huth said. "It's definitely getting at something deeper than language and converting that into language, which is kind of at a very high level the role of language, right?"

Decoder is not yet ready for prime time

Concerns over mental privacy led the researchers to further test whether participants could interfere with the device's readings.

Certain mental exercises, like naming animals or thinking about a different story than the podcast, "really prevented the decoder from recovering anything about the story that the user was hearing," Tang said.

The process still needs more work. The program is "uniquely bad" at pronouns, and requires tweaking and further testing to accurately reproduce exact words and phrases, Huth said.

It's also not terribly practical since it now requires the use of a large MRI machine to read a person's thoughts, the study authors explained.

The researchers are considering whether cheaper, more portable technology like EEG or functional near-infrared spectroscopy could be used to capture brain activity as effectively as fMRI, Tang said.

But they admit they were shocked by how well the decoder did wind up working, which led to their concerns over brain privacy.

"I think my cautionary example is the polygraph, which is not an accurate lie detector, but has still had many negative consequences,"Tang said. "So I think that while this technology is in its infancy, it's very important to regulate what brain data can and cannot be used for. And then if one day it does become possible to gain accurate decoding without getting the person's cooperation, we'll have a regulatory foundation in place that we can build off of."

More information

Johns Hopkins has more about how the brain works.

SOURCES: Jerry Tang, graduate research assistant, University of Texas at Austin; Alexander Huth, PhD, assistant professor, neuroscience and computer science, University of Texas at Austin; Mitchell Elkind, MD, MS, MPhil, chief clinical science officer, American Heart Association, and professor, neurology and epidemiology, Columbia University, New York City; Nature Neuroscience, May 1, 2023
