A recent study published in Nature Neuroscience demonstrated how AI can take a first step toward “mind reading.” Scientists have developed a technique that combines brain scans with GPT-based language modeling to reconstruct the gist of the thoughts inside a person’s mind.
The Path to Mind Reading
Scientists have essentially built a language decoder based on GPT (Generative Pre-trained Transformer) models. A GPT is a deep learning model that, after being pre-trained on a sizable text dataset, can carry out a variety of natural-language-processing tasks. The most famous example is ChatGPT by OpenAI.
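At its core, a GPT-style model is autoregressive: given the words so far, it assigns a probability to each possible next word. The toy sketch below illustrates only that principle, not the real architecture — it uses made-up bigram counts in place of a transformer’s learned weights.

```python
from collections import defaultdict

# Toy "language model": bigram counts stand in for the learned
# parameters of a real transformer.
corpus = "the brain reads the story and the brain hears the story".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word and its probability."""
    followers = counts[word]
    if not followers:
        return None
    total = sum(followers.values())
    # Pick the argmax; a real GPT samples from a softmax over ~50k tokens.
    best = max(followers, key=followers.get)
    return best, followers[best] / total

print(predict_next("the"))  # → ('brain', 0.5)
```

A real model conditions on the entire preceding context rather than one word, which is what lets it propose fluent candidate sentences for the decoder described below.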
Previous research has shown that a brain implant can enable people who cannot speak or type to spell words or even complete sentences. According to the research team, the language decoder’s primary goal is to help people who have lost the ability to communicate, like patients with locked-in syndrome or consciousness disorders.
The focus of these “brain-computer interfaces” is on the region of the brain controlling the mouth’s movement when it tries to form words. The language decoder developed by Alexander Huth’s team “works at the level of ideas, of semantics, of meaning,” according to Huth, a neuroscientist at the University of Texas at Austin who also co-authored the new study.
The research, published in the journal Nature Neuroscience, describes the first system that can reconstruct continuous language without an invasive brain implant, which is what makes it so significant.
Transcribing thoughts to text
Three participants spent a total of 16 hours inside an fMRI machine listening to spoken narrative stories, mostly podcasts, in order to map out how words, phrases, and meanings elicited responses in the language-processing regions of the brain.
They fed this data into a neural network language model based on GPT-1, training the system to predict how each listener’s brain would respond to speech, then used a filtering process to whittle down candidate word sequences until it identified the most likely one.
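That predict-and-filter loop can be sketched as a beam search: a language model proposes candidate continuations, an encoding model predicts the brain response each candidate would evoke, and only the candidates whose predictions best match the measured scan survive to the next step. Everything below is an illustrative assumption — the toy proposal list, the fake “encoding model,” and the similarity score are stand-ins, not the study’s actual code.

```python
import heapq

def propose_continuations(prefix):
    # Stand-in for GPT-1 proposals; a real system would sample
    # likely next words from the language model.
    return [prefix + [w] for w in ("drive", "license", "car")]

def predicted_response(candidate):
    # Stand-in encoding model: maps a word sequence to a predicted
    # "fMRI response" vector (toy character-sum features here).
    return [sum(ord(c) for c in w) % 7 for w in candidate]

def similarity(pred, measured):
    # Toy score: negative summed absolute difference (higher is better).
    return -sum(abs(p - m) for p, m in zip(pred, measured))

def decode_step(beams, measured, beam_width=2):
    """Expand each beam, score it against the measured scan, keep the best."""
    scored = []
    for prefix in beams:
        for cand in propose_continuations(prefix):
            scored.append((similarity(predicted_response(cand), measured), cand))
    return [cand for _, cand in heapq.nlargest(beam_width, scored, key=lambda s: s[0])]
```

Repeating `decode_step` word by word, always pruning to the candidates most consistent with the scan, is what lets the decoder converge on a sentence that matches the meaning of the measured brain activity rather than its exact wording.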
Next, the model’s accuracy was assessed: each participant listened to a different story during a new fMRI scan.
The researchers acknowledged that the decoder had trouble with personal pronouns like “I” or “she.” Even so, according to the participants, the decoder still captured the “gist,” even when they made up their own stories or watched silent movies. For instance, when a participant thought, “I don’t have my driver’s license yet,” the model produced, “She has not even started to learn to drive yet.”
According to Huth, fMRI is too slow to record individual words; instead, it gathers an aggregate of activity over a few seconds, which let the researchers watch the idea behind the speech develop.
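As a rough illustration of why word-level decoding is out of reach, the sketch below averages a fast, per-word signal into windows spanning several words, the way the slow hemodynamic response effectively blurs rapid events. The numbers and window size are invented for the example.

```python
def blur_to_fmri_rate(word_signal, window=4):
    """Average a fast per-word signal into slow, window-sized samples,
    mimicking how fMRI pools activity over a few seconds."""
    return [
        sum(word_signal[i:i + window]) / len(word_signal[i:i + window])
        for i in range(0, len(word_signal), window)
    ]

# Eight "words" of activity collapse into two fMRI-rate samples:
print(blur_to_fmri_rate([1, 3, 1, 3, 8, 6, 8, 6]))  # → [2.0, 7.0]
```

The individual word-level values are gone after averaging, but the broad difference between the two halves survives — which is analogous to why the decoder recovers meaning rather than exact wording.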
The results, according to the scientists, were accurate up to 82 percent of the time. The model’s accuracy was between 72 and 82 percent when decoding perceived speech, ranged from 41 to 74 percent for imagined speech, and fell to between 21 and 45 percent for interpretations of silent films.
Privacy and ethical concerns
The significance of the study lies in its demonstration that AI can passively decode thoughts without any explicit instructions or feedback from the participant, bringing us closer to a future in which machines can actually read minds. The immediate concern is whether the technology could be used against people’s will.
Tests performed by the scientific team revealed that the decoder did not work on a person whose unique brain activity had not already been used to train it. In addition, the three participants were able to “sabotage” the decoder by counting by sevens, naming and visualizing animals, or telling a different story in their heads while listening to one of the podcasts.
The research team now wants to speed up the process so that brain scans can be decoded in real time.
In response to the study’s ethical issues, the researchers called for legislation to safeguard mental privacy. They cautioned that the technology could be put to nefarious uses such as governmental or commercial surveillance, invasion of people’s privacy, or manipulation of their thoughts and actions.
Further research is required to fully comprehend the capabilities and limitations of this technology, ensuring the deployment of appropriate ethical safeguards.