Scientists from Brown University and the University of Cincinnati found that a portion of the brain that handles decision-making also helps decipher different sounds. Details are in the July issue of the journal Psychological Science.

PROVIDENCE, R.I. [Brown University] — A front portion of the brain that handles tasks like decision-making also helps decipher different phonetic sounds, according to new Brown University research.

This section of the brain — the left inferior frontal sulcus — treats different pronunciations of the same speech sound (such as a ‘d’ sound) the same way.

In pinpointing this region, the scientists have answered a long-standing question about where the brain sorts variable pronunciations into stable speech-sound categories.

[Image: speech waveforms] Subtle differences: MRI studies showed that test subjects reacted to different sounds (ta and da, for example) but appeared to recognize the same sound even when pronounced with slight variations. These five sounds are the same, but the fifth (right) has a slightly different pronunciation.
“No two pronunciations of the same speech sound are exactly alike. Listeners have to figure out whether these two different pronunciations are the same speech sound such as a ‘d’ or two different sounds such as a ‘d’ sound and a ‘t’ sound,” said Emily Myers, assistant professor (research) of cognitive and linguistic sciences at Brown University and lead author of the paper. “No one has shown before what areas of the brain are involved in these decisions.”
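The idea described above, that listeners map acoustically different tokens onto the same discrete speech sound, can be illustrated with a minimal sketch. This is not the study's code; it assumes voice-onset time (VOT) as the acoustic cue and a roughly 30 ms category boundary, a commonly cited approximation for English alveolar stops, purely for illustration.

```python
# Illustrative sketch of categorical perception: continuously varying
# acoustic tokens are mapped onto discrete phoneme categories.
# The 30 ms boundary is an assumed, approximate parameter.

DA_TA_BOUNDARY_MS = 30  # assumed /d/-/t/ category boundary

def categorize(vot_ms: float) -> str:
    """Map a token's voice-onset time to a phoneme category."""
    return "d" if vot_ms < DA_TA_BOUNDARY_MS else "t"

# Two acoustically different pronunciations land in the same category...
print(categorize(5), categorize(20))   # d d
# ...while a token past the boundary is heard as a different sound.
print(categorize(45))                  # t
```

In this toy model, tokens at 5 ms and 20 ms differ acoustically but are "heard" as the same 'd', mirroring the within-category equivalence the researchers describe.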

Sheila Blumstein, the study’s principal investigator, said the findings provide a window into how the brain processes speech.

“As human beings we spend much of our lives categorizing the world, and it appears as though we use the same brain areas for language that we use for categorizing non-language things like objects,” said Blumstein, the Albert D. Mead Professor of Cognitive and Linguistic Sciences at Brown.

Emily Myers: Assistant professor (research) of cognitive and linguistic sciences
Researchers from Brown University’s Department of Neuroscience and from the Department of Psychiatry at the University of Cincinnati also took part in the study. Details will be published in the July issue of the journal Psychological Science.

To conduct the research, scientists studied 13 women and five men, ages 19 to 29. All were brought into an MRI scanner at Brown University’s Magnetic Resonance Facility. An MRI machine, with its powerful magnet, allows technicians to measure blood flow in response to different types of stimuli.

Subjects were asked to listen to sequences of syllables as they lay in the scanner. The sounds were derived from recorded, synthesized speech. Each sequence began with four identical “dah” or “tah” sounds in a row, which reduced brain activity because of the repetition. The fifth sound could be the same or a different sound.

Researchers found that the brain signal in the left inferior frontal sulcus changed when the final sound was a different one. But if the final sound was only a different pronunciation of the same sound, the brain’s response remained steady.

Myers and Blumstein said the study advances efforts to understand language and speech, and in particular how the brain recognizes specific sounds and pronunciations.

“What these results suggest is that [the left inferior frontal sulcus] is a shared resource used for both language and non-language categorization,” Blumstein said.

Financial support for the study came from the National Institute on Deafness and Other Communication Disorders (NIDCD), an Institute of the National Institutes of Health, and the Ittleson Foundation.