Research shows that the human brain needs three different processing stages to identify sounds such as speech, quite similar to the pattern observed in non-human primates. According to neuroscientists at Georgetown University Medical Center (GUMC), both human and non-human primates process speech along two parallel pathways, each of which runs from lower- to higher-functioning neural regions.
These pathways are dubbed the 'what' and 'where' streams and are roughly analogous to the streams the brain uses to process sight, although they occupy different regions. The 'what' pathway identifies a sound, while the 'where' stream localizes it.
Both pathways begin with the processing of signals in the auditory cortex, located inside a deep fissure on the side of the brain underneath the temples, in the so-called temporal lobe.
Information processed by the 'what' pathway then flows forward along the outside of the temporal lobe; this pathway's job is to recognize complex auditory signals, including communication sounds and their meaning (semantics).
The 'where' pathway is mostly in the parietal lobe, above the temporal lobe, and it processes spatial aspects of a sound - its location and its motion in space - but is also involved in providing feedback during the act of speaking.
The study helps shed light on the complex and extraordinarily elegant workings of the 'auditory' human brain, according to Josef Rauschecker, a professor in the departments of physiology/biophysics and neuroscience.
"These sounds, such as speech, are vitally important to humans, and it is critical that we understand how they are processed in the human brain," he added.