Researchers expand study on area of brain causing speech deficits in stroke victims
- November 13, 2015
- Renewed funding from NIH brings total to $12.1 million
Researchers from UCI, Arizona State University, the Medical College of Wisconsin, and the
University of South Carolina have received a five-year, $2,851,000 grant from the
National Institutes of Health to expand research on a region of the brain linked to
autism, schizophrenia, and stroke-induced aphasia. Called the planum temporale (PT),
the region is found in each hemisphere of the human brain; the left-hemisphere PT tends
to be slightly larger than the right. The region supports spatial hearing, auditory-motor
control, and aspects of musical ability.
Leading the research team is Gregory Hickok, cognitive sciences professor and director
of UCI’s Center for Language Science. The study builds upon 15 years of work he’s
done on this specific region using fMRI to map its organization.
“We learned what we could from fMRI and uncovered the internal organization of this
region: it’s a patchwork of different functions rather than a catchall hub as others
had proposed,” he says. The new grant brings his total funded work on the region to
more than $12 million as the researchers home in on how stroke impacts three specific
functions that occur within and around the PT. Below, Hickok explains the new research
trajectory and how his team’s past findings are guiding them toward results with
implications for both basic science and clinical translation.
Additional researchers on the project are Kourosh Saberi, cognitive sciences professor,
UCI; Fan-Gang Zeng, otolaryngology professor, UCI; John Middlebrooks, otolaryngology
professor, UCI; Corianne Rogalsky, speech and hearing science assistant professor,
Arizona State University; Jeff Binder, neurology professor, Medical College of Wisconsin;
and Julius Fridriksson, communication sciences and disorders professor, University of
South Carolina.
Function 1: Hearing speech in noisy situations, like a busy restaurant
Hickok: “The auditory system has a mechanism for separating different streams of acoustic
signals, allowing us to attend to the person we are talking to while partially filtering
out all the other noises and voices in a room. It does this, in part, using spatial
cues: sounds coming from different locations are segregated by the brain into different
streams. The PT region appears to be involved in this, according to our fMRI findings.
With the new grant, we will study the effects of damage to the PT on spatial hearing,
auditory stream segregation, and understanding speech in noisy environments. The work
will tell us how much of an individual’s difficulty hearing speech in noise, a common
complaint, stems from a more basic spatial hearing deficit. This knowledge could lead
to new approaches to alleviating difficulties with hearing speech in noise.”
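
To make the spatial-cue idea concrete, here is a minimal Python sketch, an illustration
only and not part of the funded study, of the interaural time difference (ITD), one of
the location cues the auditory system uses to segregate sound streams. The spherical-head
(“Woodworth”) approximation and the head-radius value are standard textbook assumptions.

```python
import math

def interaural_time_difference(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth's spherical-head approximation of the ITD: the extra time a
    sound takes to reach the far ear. azimuth_deg is the source angle from
    straight ahead (0 = front, 90 = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source directly to the side arrives at the far ear roughly 0.65 ms
# later -- a delay the intact auditory system resolves easily.
for az in (0, 30, 60, 90):
    print(f"azimuth {az:2d} deg -> ITD ~ {interaural_time_difference(az) * 1e6:.0f} microseconds")
```

Even these sub-millisecond differences are enough for the brain to assign sounds to
separate locations, which is why damage to the circuitry that reads them out could
plausibly degrade speech understanding in noise.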
Function 2: Audiovisual speech integration
Hickok: “Another source of information that the brain uses to segregate speech streams
and aid speech processing in noisy environments comes from visual cues when people
are talking. Our speech perception can improve dramatically in noisy environments
if we can watch the talker’s mouth. In some cases, observed lip movements can even
change what we hear when listening to speech. We know from functional imaging that
a region near the PT is important for putting together auditory and visual speech
signals. No one, however, has systematically studied the effects of stroke on audiovisual
integration. Interestingly, in our preliminary data we have found that some people
with brain injuries actually have MORE trouble hearing speech when they are looking
at the talkers’ lips. This may be because the brain damage has affected the timing
of the AV speech integration process, effectively misaligning the signals and leading
to perceptual interference. If this turns out to be true, we may be able to improve
some people’s speech perception ability simply by instructing them NOT to look at
the lips when listening to speech.”
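
The timing hypothesis can be illustrated with the notion of a temporal binding window, a
common concept in the audiovisual-integration literature. The toy Python sketch below is
an assumption-laden illustration, not the team’s method; the ±200 ms window is a round
number chosen for exposition.

```python
# Toy model: auditory and visual speech events fuse into one percept only
# if their onsets fall within a temporal binding window (here +/-200 ms,
# an illustrative value, not a measured one).
def fuses(audio_onset_ms, visual_onset_ms, window_ms=200.0):
    return abs(audio_onset_ms - visual_onset_ms) <= window_ms

# Intact timing: the lips lead the voice by 80 ms -> the signals fuse.
print(fuses(audio_onset_ms=80, visual_onset_ms=0))   # True
# Hypothetical post-stroke delay of 300 ms in one pathway -> the signals
# no longer fuse, so vision may interfere with hearing rather than help.
print(fuses(audio_onset_ms=380, visual_onset_ms=0))  # False
```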
Function 3: Integration of auditory and motor systems during speech production
Hickok: “Most people don’t realize that our auditory systems play an important role
in speech production, not just in speech perception. This has been a focus of my
research for 15 years. Just as visual information about an object’s location, size,
and shape is necessary to reach out and grasp that object, so, too, the motor speech
system needs access to (previously learned and stored) memories of the sound targets
of the words we speak. Our past work has identified the brain circuit involved,
including the area we discovered (area Spt), and now we are assessing the effects of
damage to that circuit. Stroke often affects people’s ability to speak, and we have
developed computer models that allow us to pinpoint the processing or computational
source(s) of deficiency based on an analysis of errors in a simple picture naming
task. In this project, we will validate this diagnostic procedure and map the
computational operations involved in speech production onto their corresponding brain
circuits. This knowledge could lead to simple diagnostic tools for clinicians or even
to new treatments using neural prostheses.”
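
For a sense of what such error-based diagnosis might look like, here is a hypothetical
Python sketch. The error categories (semantic, phonological, neologism) are standard in
aphasia research, but the inference rule is a toy stand-in, not the team’s actual
computational model.

```python
from collections import Counter

def summarize_errors(responses):
    """responses: list of (target, produced, error_type) tuples from a
    picture-naming task, where error_type is 'correct', 'semantic'
    (e.g. 'cat' for 'dog'), 'phonological' (e.g. 'dag' for 'dog'),
    or 'neologism'. Returns the proportion of each response type."""
    counts = Counter(error_type for _, _, error_type in responses)
    total = sum(counts.values())
    return {kind: n / total for kind, n in counts.items()}

def suggest_deficit_locus(rates):
    """Toy heuristic (illustrative only): sound-based errors point toward
    the auditory-motor (sound-target) circuit, while meaning-based errors
    point toward lexical-semantic processing."""
    sound_based = rates.get("phonological", 0) + rates.get("neologism", 0)
    if sound_based > rates.get("semantic", 0):
        return "candidate locus: phonological / auditory-motor mapping"
    return "candidate locus: lexical-semantic selection"

responses = [
    ("dog", "dog", "correct"),
    ("dog", "dag", "phonological"),
    ("cat", "dog", "semantic"),
    ("pen", "pem", "phonological"),
]
rates = summarize_errors(responses)
print(rates)
print(suggest_deficit_locus(rates))
```

The team’s actual models are far richer, but the core logic is the same: different
patterns of naming errors implicate different computational stages, which can then be
mapped onto the damaged brain circuits.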
Learn more about Hickok’s previous funded work:
- Studying health disorders at the neural level
- Hickok receives grant to grow his brain research team
- UCI researchers study brain region with links to autism and musical ability