
Wednesday, 16 August 2023

UT Austin’s AI ‘brain decoder’ can read minds. But how good is it?

 Scientists at the University of Texas at Austin have created a “semantic brain decoder” to guess someone’s thoughts based on brain activity.

During tests, it captured the gist of what someone was thinking, rather than a literal translation. And if participants resisted, it produced gibberish.

The decoder, described in the journal Nature Neuroscience in May, is novel, said Edmund Lalor, an associate professor of neuroscience at the University of Rochester. But its threat to privacy is minimal.

“We’re very, very far away from just very quickly being able to mind-read anybody without their knowing,” said Lalor, who was not involved with UT Austin’s research.

Brain scanners and podcasts

Creating the decoder involved listening to 16 hours' worth of podcasts.

Study co-author Alexander Huth, an assistant neuroscience and computer science professor at UT Austin, and two other participants lay in an MRI brain scanner while listening to the podcasts. Using the MRI data, the researchers taught the decoder which language patterns correspond to different kinds of brain activity.

They then asked participants to listen to podcasts or imagine themselves telling a story. The decoder made short “guesses” of what each participant was thinking and ranked them based on how well they corresponded to the person’s brain activity.

After eliminating the bad guesses, the decoder expanded on the good ones using an earlier version of the AI chatbot ChatGPT, which answers questions and responds to prompts by predicting the next word.

The decoder repeated the whole process until it returned a full prediction to the scientists, who compared it to the podcast the participant was hearing or a transcript of the story they imagined telling.
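In algorithmic terms, the article is describing a beam-search-style loop: propose short candidate phrases, score them by how well their predicted brain activity matches the measured scan, keep the best ones, and extend them with a language model before scoring again. The sketch below illustrates only that loop; every helper in it (predict_brain_response, similarity, language_model_continuations) is a hypothetical toy stub, not the researchers' actual models or code.

```python
# A minimal, illustrative sketch of the guess-rank-expand loop described above.
# All helper functions are toy stand-ins, not the study's real encoding model or GPT.

import random

def predict_brain_response(text):
    """Toy stand-in for an encoding model that predicts MRI activity for a phrase."""
    random.seed(hash(text) % (2**32))
    return [random.random() for _ in range(16)]

def similarity(predicted, measured):
    """Toy score: higher means the predicted activity matches the scan better."""
    return -sum((p - m) ** 2 for p, m in zip(predicted, measured))

def language_model_continuations(text, k=3):
    """Toy stand-in for a language model proposing likely next words."""
    return [f"word{i}" for i in range(k)]

def decode(measured_activity, seed_guesses, steps=5, beam_width=3):
    """Repeatedly rank short guesses against brain activity and expand the good ones."""
    beam = list(seed_guesses)
    for _ in range(steps):
        # Rank candidates by how well their predicted activity matches the scan.
        beam.sort(key=lambda g: similarity(predict_brain_response(g), measured_activity),
                  reverse=True)
        survivors = beam[:beam_width]  # eliminate the bad guesses
        # Expand each surviving guess with plausible next words from the language model.
        beam = [g + " " + w for g in survivors for w in language_model_continuations(g)]
    # Return the single best full prediction.
    return max(beam, key=lambda g: similarity(predict_brain_response(g), measured_activity))

if __name__ == "__main__":
    fake_scan = [0.5] * 16  # placeholder for real MRI data
    print(decode(fake_scan, ["i went", "she said", "we were"]))
```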

Did it work?

The decoder performed better than a randomly generated translation, and its predictions preserved the general meaning of participants’ thoughts.

“These people went into an MRI scanner knowingly for many, many, many hours in order to produce results that are quite imperfect, but work a bit,” Lalor said.

“I don’t have my driver’s license yet,” for example, was translated to “She has not even started to learn to drive.”

And the decoder translated “That night I went upstairs to what had been our bedroom” to “We got back to my dorm room.”


Not a lie detector

Huth and his team recognized the fear that similar technology could someday be used to detect lies or to read people’s memories against their will.

“When we first got this model working, our first response was, oh my God, this is wonderful, this is amazing,” Huth said. “And the second response was, it’s kind of scary. We need to figure out what’s really going on here.”

So they ran tests to figure out what the decoder couldn’t do.

In one experiment, participants tried to “defeat” the decoder by silently naming as many animals as possible while listening to a story. The decoder, in turn, produced nonsense. Another experiment attempted to decode the thoughts of someone who had not first sat through hours of podcasts. This too was unsuccessful.

“People can definitely disrupt this, they can turn it off if they think something else,” Huth said, “which we thought was kind of a bonus here.”

AI models are trained on data that can carry human biases. Some models, for example, associate doctors with men and nurses with women. Could the brain decoder therefore make skewed predictions?

Huth said participants listened to podcasters of diverse genders, ethnicities and sexualities. But the researchers had less control over data used to train the GPT models. Lalor said this might make the decoder’s predictions slightly less accurate but not necessarily biased.

Thoughts, not memories

The main goal of the brain decoder, Huth said, is to help people — specifically those who cannot talk after a stroke or some other medical emergency.

But it currently needs an expensive, immobile MRI machine to function. And processing its predictions can take hours.

So Huth and his lab are exploring portable ways to measure brain activity, including functional near-infrared spectroscopy, or fNIRS, which measures signals similar to an MRI machine's but from a helmet.

While the brain decoder appears to have tapped into how humans interpret the world through language, it can’t scan memories. Associating “hamburger,” “french fries” and “milkshake” — even relating them to a specific pattern of brain activity — is only part of how we experience the world.

Human cognition goes deeper, linking french fries to laughs shared with friends in a crowded diner booth to a salty taste lingering on the tongue. AI can’t grasp those sensations by predicting the next word. At least, not yet.
