Scientists Decipher Brain Waves to Eavesdrop on What We Hear

by Nancy Needhima on Feb 3 2012 10:55 PM

Neuroscientists may one day be able to listen in on the constant internal monologue that runs through our minds, or hear the imagined speech of a stroke or locked-in patient who is unable to speak.

Neuroscientists may one day be able to listen in on the constant internal monologue that runs through our minds, or hear the imagined speech of a stroke or locked-in patient who is unable to speak, say researchers at the University of California, Berkeley. The work, conducted in the labs of Robert Knight at Berkeley and Edward Chang at UCSF, is reported January 31 in the open-access journal PLoS Biology.
The scientists have succeeded in decoding electrical activity in a region of the human auditory system called the superior temporal gyrus (STG). By analyzing the pattern of STG activity, they were able to reconstruct words that subjects listened to in normal conversation.

"This is huge for patients who have damage to their speech mechanisms because of a stroke or Lou Gehrig's disease and can't speak," said Knight, Professor of Psychology and Neuroscience at UC Berkeley. "If you could eventually reconstruct imagined conversations from brain activity, thousands of people could benefit."

"This research is based on sounds a person actually hears, but to use this for a prosthetic device, these principles would have to apply to someone who is imagining speech," cautioned first author Brian N. Pasley, a post-doctoral researcher at UC Berkeley. "There is some evidence that perception and imagery may be pretty similar in the brain. If you can understand the relationship well enough between the brain recordings and sound, you could either synthesize the actual sound a person is thinking, or just write out the words with a type of interface device."

Pasley tested two computational models for matching spoken sounds to the pattern of activity recorded at the electrodes. The patients then heard a single word, and Pasley used each model to predict the word from the electrode recordings alone. The better of the two reproduced a sound close enough to the original that he and fellow researchers could guess the word at better-than-chance accuracy.
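For readers curious how this kind of decoding works in principle, here is a minimal, purely illustrative sketch in Python. It is not the authors' actual model: it uses synthetic data and a simple ridge-regression mapping from electrode activity to a speech spectrogram, then identifies a "heard" word by nearest match among candidate spectrograms, loosely analogous to the better-than-chance word identification described above. All variable names and word choices are hypothetical.

```python
# Hypothetical sketch of spectrogram-based neural decoding (not the study's code).
# Assumption: neural activity X (time x electrodes) was recorded while speech with
# spectrogram S (time x frequency bins) was heard; a linear ridge regression is one
# simple stand-in for the "computational models" described in the article.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy training data: 2000 time points, 64 electrodes, 32 spectrogram frequency bins.
n_time, n_elec, n_freq = 2000, 64, 32
true_weights = rng.normal(size=(n_elec, n_freq))           # unknown brain-to-sound mapping
X_train = rng.normal(size=(n_time, n_elec))                 # simulated electrode activity
S_train = X_train @ true_weights + 0.5 * rng.normal(size=(n_time, n_freq))  # heard spectrogram

# Fit a linear decoder from electrode activity to the speech spectrogram.
decoder = Ridge(alpha=1.0).fit(X_train, S_train)

# Single-trial test: reconstruct the spectrogram of one new "word" and
# identify it by nearest match among candidate word spectrograms.
n_word = 50  # time points in one word
candidates = {w: rng.normal(size=(n_word, n_freq)) for w in ["waldo", "structure", "doubt"]}
heard_word = "structure"
X_test = candidates[heard_word] @ np.linalg.pinv(true_weights) \
         + 0.3 * rng.normal(size=(n_word, n_elec))          # electrode activity for that word

S_hat = decoder.predict(X_test)  # reconstructed spectrogram from a single trial
best = min(candidates, key=lambda w: np.linalg.norm(S_hat - candidates[w]))
print("decoded word:", best)
```

On this toy data the nearest-spectrogram match recovers the correct word; the real study worked from invasive recordings over the superior temporal gyrus and far richer acoustic models, so this sketch only conveys the general decode-then-compare idea.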

"We think we would be more accurate with an hour of listening and recording and then repeating the word many times," Pasley said. But because any realistic device would need to accurately identify words the first time heard, he decided to test the models using only a single trial.

"I didn't think it could possibly work, but Brian did it," Knight said. "His computational model can reproduce the sound the patient heard and you can actually recognize the word, although not at a perfect level."

The ultimate goal of the study was to explore how the human brain encodes speech and determine which aspects of speech are most important for understanding.

"At some point, the brain has to extract away all that auditory information and just map it onto a word, since we can understand speech and words regardless of how they sound," Pasley said. "The big question is, what is the most meaningful unit of speech? A syllable, a phone, a phoneme? We can test these hypotheses using the data we get from these recordings."

In the accompanying podcast, PLoS Biology editor Ruchir Shah sits down with Brian Pasley and Robert Knight to discuss their main findings, the potential applications for neural prosthetics, and the ethical implications of "mind-reading".

Source: EurekAlert

