
Breakthrough: Semantic decoder technology can read minds

A groundbreaking study led by researchers at The University of Texas at Austin has successfully developed an artificial intelligence system that can translate a person’s brain activity into a continuous stream of text. 

Named a “semantic decoder,” this system could potentially revolutionize communication for individuals who are conscious but physically unable to speak, such as those debilitated by strokes.

The research, published in the journal Nature Neuroscience, was carried out by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin. Their innovative work relies in part on a transformer model, a technology similar to the one employed by OpenAI’s ChatGPT and Google’s Bard.

What sets this semantic decoder apart from other language decoding systems is its noninvasive nature. The process does not require subjects to undergo surgical implantation, making it more accessible and less risky for potential users. Additionally, participants are not limited to using only words from a prescribed list.

How a semantic decoder works

To achieve this feat, researchers trained the decoder using brain activity data collected from an individual listening to hours of podcasts inside an fMRI scanner. 

Once trained, the machine is able to generate corresponding text from the participant’s brain activity while they listen to a new story or imagine telling one themselves.

Huth explained the significance of their findings, stating: “For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences. We’re getting the model to decode continuous language for extended periods of time with complicated ideas.”

It is important to note that the resulting text is not a perfect word-for-word transcript. Instead, the system is designed to capture the overall essence of what is being said or thought. The decoder produces text that closely (and sometimes precisely) matches the intended meanings of the original words about 50% of the time.

In experimental trials, for instance, when a participant listened to the phrase “I don’t have my driver’s license yet,” their thoughts were translated as, “She has not even started to learn to drive yet.” 

Another example involved the participant listening to the words, “I didn’t know whether to scream, cry or run away. Instead, I said, ‘Leave me alone!’” The decoder translated this as, “Started to scream and cry, and then she just said, ‘I told you to leave me alone.’”
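The kind of graded, inexact matching seen in these examples can be quantified. The toy sketch below uses a crude word-overlap cosine similarity as a hypothetical stand-in for the study's actual similarity metrics; it only counts shared words, not true meaning, but it illustrates the idea of scoring a decoded sentence against the original on a sliding scale rather than pass/fail:

```python
from collections import Counter
import math

# Toy illustration: score how closely two sentences overlap.
# A bag-of-words cosine is far cruder than the metrics used in the
# actual study, but it shows graded, non-exact matching.
def cosine_similarity(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

heard = "i don't have my driver's license yet"
decoded = "she has not even started to learn to drive yet"
unrelated = "the stock market closed higher on friday"

# The decoded paraphrase shares at least one word with the original,
# so it scores above the unrelated sentence without matching exactly.
print(cosine_similarity(heard, decoded) > cosine_similarity(heard, unrelated))
```

In practice, meaning-aware metrics (for example, similarity between sentence embeddings) would score the paraphrase far higher than word overlap alone suggests.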

Concerns about misuse of semantic decoder technology

The researchers addressed concerns about the potential misuse of their semantic decoder. A system that translates brain activity into a continuous stream of text inevitably raises questions about unauthorized access to individuals’ thoughts.

However, the researchers stressed that decoding was only effective with cooperative participants who willingly participated in training the decoder. When tested on untrained individuals or those who actively resisted by thinking other thoughts, the results were unintelligible and unusable.

Jerry Tang, one of the researchers, expressed the team’s commitment to addressing these concerns, saying: “We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that. We want to make sure people only use these types of technologies when they want to and that it helps them.”

As part of the research, subjects were asked to watch four short, silent videos while inside an fMRI scanner. The semantic decoder was able to use the brain activity data to accurately describe certain events from the videos, further demonstrating the potential of the technology.

Use of semantic decoder technology outside the lab

However, the system is not yet practical for use outside of a laboratory setting due to its reliance on time-consuming fMRI machines. The researchers believe that their work could be adapted to more portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).

Alex Huth, another researcher, explained the potential for fNIRS integration: “fNIRS measures where there’s more or less blood flow in the brain at different points in time, which, it turns out, is exactly the same kind of signal that fMRI is measuring. So, our exact kind of approach should translate to fNIRS.” However, Huth noted that the resolution with fNIRS would be lower than with fMRI.

The study was supported by the Whitehall Foundation, the Alfred P. Sloan Foundation, and the Burroughs Wellcome Fund. Other co-authors of the study include Amanda LeBel, a former research assistant in the Huth lab, and Shailee Jain, a computer science graduate student at UT Austin.

In anticipation of the potential applications of this technology, Huth and Tang have filed a PCT patent application related to their work. As the researchers continue to refine the system and explore its possibilities, the hope is that it will eventually be adapted for everyday use, empowering individuals who face communication challenges due to physical limitations.

This breakthrough in AI-assisted communication could change countless lives. As the technology continues to advance, it may one day become a vital tool for enabling clear and efficient communication for those who would otherwise struggle to express themselves.

Frequently asked questions about semantic decoder technology

Could this technology be used on someone without them knowing, say by an authoritarian regime interrogating political prisoners or an employer spying on employees?

No. The system has to be extensively trained on a willing subject in a facility with large, expensive equipment. “A person needs to spend up to 15 hours lying in an MRI scanner, being perfectly still, and paying good attention to stories that they’re listening to before this really works well on them,” said Huth.

Could training be skipped altogether?

No. The researchers tested the system on people whom it hadn’t been trained on and found that the results were unintelligible.

Are there ways someone can defend against having their thoughts decoded?

Yes. The researchers tested whether a person who had previously participated in training could actively resist subsequent attempts at brain decoding. Tactics such as thinking of animals or quietly imagining telling their own story allowed participants to easily and completely prevent the system from recovering the speech they were hearing.

What if technology and related research evolved to one day overcome these obstacles or defenses?

“I think right now, while the technology is in such an early state, it’s important to be proactive by enacting policies that protect people and their privacy,” Tang said. “Regulating what these devices can be used for is also very important.”

More about semantic decoder technology

Semantic decoders are artificial intelligence systems designed to interpret and translate brain activity into meaningful text or language. These decoders analyze the neural patterns associated with the processing of language and ideas in the brain and attempt to convert them into coherent, intelligible output.

The development of semantic decoders involves several key steps:

Data Collection

Researchers use neuroimaging techniques, such as functional magnetic resonance imaging (fMRI) or functional near-infrared spectroscopy (fNIRS), to collect brain activity data from participants. These techniques measure blood flow changes in the brain, which reflect neural activity associated with language processing and cognitive tasks.
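The blood-flow signal these scanners record is commonly modeled as underlying neural activity smeared out by a slow hemodynamic response. The sketch below is a hypothetical toy, with entirely synthetic data and a crude gamma-shaped response function, meant only to illustrate why the measured signal peaks several seconds after the neural event:

```python
import numpy as np

# Toy illustration: fMRI/fNIRS measure blood flow, often modeled as
# neural activity convolved with a slow hemodynamic response function
# (HRF). All data here is synthetic.
t = np.arange(0, 20, 1.0)            # seconds
hrf = t ** 5 * np.exp(-t)            # crude gamma-like HRF shape
hrf /= hrf.sum()                     # normalize to unit area

neural = np.zeros(60)
neural[[5, 25, 45]] = 1.0            # three brief bursts of activity

bold = np.convolve(neural, hrf)[:60] # simulated blood-flow signal

# The measured peak lags the first neural event (at t = 5 s).
print(int(np.argmax(bold[:25])))
```

This lag and smoothing are part of why decoding from blood-flow signals recovers the gist of continuous language rather than a word-for-word transcript.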

Model Training

To train a semantic decoder, researchers expose participants to various stimuli, such as listening to stories, watching videos, or imagining telling stories. The brain activity data collected during these tasks is used to develop a model that can map neural patterns to semantic information.

Decoding

Once the model is trained, it can be used to decode brain activity from a participant engaged in a similar task. The decoder translates the observed neural patterns into meaningful language or text, capturing the essence of the participant’s thoughts, albeit imperfectly.
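The training and decoding steps described above can be sketched in miniature. The code below is a hypothetical simplification using synthetic data: it fits a ridge-regression encoding model from stimulus features to simulated brain responses, then decodes by asking which candidate stimulus best explains an observed response. The published system is far more elaborate (it uses a language model to propose candidate word sequences), but the core idea of picking the candidate whose predicted brain response best matches the observed one is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all data is synthetic, for illustration only): each
# stimulus is represented by a semantic embedding, and the encoding
# model learns to predict fMRI voxel responses from embeddings.
n_train, n_dims, n_voxels = 200, 16, 50
embeddings = rng.standard_normal((n_train, n_dims))      # stimulus features
true_weights = rng.standard_normal((n_dims, n_voxels))   # hidden mapping
brain = embeddings @ true_weights \
        + 0.1 * rng.standard_normal((n_train, n_voxels))

# Step 1: fit a ridge-regression encoding model (embedding -> voxels).
alpha = 1.0
W = np.linalg.solve(embeddings.T @ embeddings + alpha * np.eye(n_dims),
                    embeddings.T @ brain)

# Step 2: decode by scoring candidate stimuli -- the candidate whose
# predicted brain response best matches the observed response wins.
def score(candidate_embedding, observed_response):
    predicted = candidate_embedding @ W
    return -np.sum((predicted - observed_response) ** 2)

true_candidate = rng.standard_normal(n_dims)
decoy = rng.standard_normal(n_dims)
observed = true_candidate @ W   # response evoked by the true stimulus

best = max([("true", true_candidate), ("decoy", decoy)],
           key=lambda c: score(c[1], observed))
print(best[0])
```

In the real system, the candidates are whole word sequences generated by a language model, and the best-scoring sequence is extended word by word, which is how the decoder produces continuous text rather than isolated words.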

Semantic decoders have significant potential in various applications, particularly in assisting individuals with communication impairments. For instance, they could help people who are mentally conscious but unable to physically speak due to stroke, brain injury, or neurodegenerative diseases to communicate more effectively.

However, there are also ethical concerns associated with semantic decoders, such as the potential for misuse or invasion of privacy. Researchers developing these systems are working to address these concerns by ensuring that the technology is only effective when used with cooperative participants who willingly take part in the training process.

While semantic decoders are still in the early stages of development, their potential to transform communication for individuals with physical limitations is promising.

