The brain and stroke
- May 19, 2016
- Language model developed by UCI cognitive scientist may soon help clinicians better diagnose and treat stroke survivors
Each year, nearly 800,000 people in the U.S. experience a stroke – that’s one person every 40 seconds. Stroke is the fifth leading cause of death, killing almost 130,000 Americans each year – that’s one person every four minutes. But far more survive: current CDC estimates place the number at around 7 million, making treatment of the lasting disabilities stroke leaves in its wake ever more important.
Enter Greg Hickok, UCI cognitive sciences professor. Over the last 15 years, he’s received $16 million in funding from the National Institutes of Health – $4 million in the last year alone – to support research on how neural abnormalities affect speech and language in an area of the brain tied to stroke-induced aphasia.
“About one million people in the U.S. suffer from aphasia, caused most often by stroke,” Hickok says. “A stroke can cause damage to networks in the brain that enable language, which – from a scientific standpoint – is the system that translates thought into speech and speech into thought.”
The result can be devastating.
“Our ability to communicate is fundamental to being human. Imagine not being able to speak or write, not being able to understand a conversation or the evening news, not being able to text, email or Tweet. Communication is the foundation of our relationships and society; aphasia can take it all away.”
In March, Hickok and researchers at the University of South Carolina and Johns Hopkins University received a Clinical Research Center Grant (P50) from the National Institutes of Health to better understand the nature of the various forms of aphasia, the prognosis for recovery, and how best to treat it.
The work relies on Hickok’s dual-stream model, which posits that speech is processed in the brain along two different neural pathways. One, called the ‘ventral pathway,’ relates acoustic speech information to meaning (conceptual knowledge/memory) and is used for understanding speech; the other, called the ‘dorsal pathway,’ relates acoustic speech information to the motor/action system and is used for producing speech, he explains.
“The existence of a sensorimotor stream is easy to imagine for a visuomotor task like reaching for a cup, where we use visual information about its shape and location to guide our reach,” he says. “It’s less obvious in language, but studies have shown that in the same way, a word’s sound guides our speech production.”
Hickok, director of UCI’s Center for Language Science, first began seeing this in action at the neural level while using fMRI to study brain processes related to speech production. He noticed that, in addition to the expected motor regions, auditory areas of the brain “lit up,” or activated, when participants named pictures – even if they only thought about, and didn’t actually vocalize, the words.
“Stroke-based research found that these activations reflected the critical involvement of auditory areas in speaking. When these regions are damaged, patients tend to struggle to come up with words, and when they do speak, they make a lot of errors,” says Hickok.
He has since been using fMRI and stroke-based methods to zero in on the planum temporale (PT) and, in particular, the Sylvian parietal-temporal (SPT) region of the brain – a region he discovered – where regulation of auditory-motor processes occurs.
Researchers from the University of South Carolina will be using Hickok’s dual-stream model to test whether measures of proportional damage to the two pathways lead to better diagnosis and better predictions of aphasia treatment response, beyond biographical and cognitive/linguistic factors.
The collaborative study is directly tied to clinical practice; by its end, the researchers aim to understand why some patients respond better than others to aphasia treatment. They’ll also be using treatment approaches that are routinely employed in clinical practice, allowing the findings to be translated directly into patient management.
At the same time, Hickok is working on a study with the University of Texas at Houston using electrocorticography (ECoG) – which takes direct cortical recordings in neurosurgical patients – to understand the organization and dynamics of the dorsal stream in great detail.
“fMRI and stroke-based methods can help us map the location of regions, but they tell us little about the millisecond-by-millisecond dynamics of how the brain actually carries out a given task,” he says. “ECoG provides very high temporal resolution, recording brain signals with millisecond precision, and excellent spatial resolution, measuring these signals in patches of cortex a few millimeters in diameter.”
He also received renewal funding this year from the NIH to continue his five-year, multisite fMRI study of the subdivisions of the planum temporale region of the brain, including SPT. The previous funding period yielded 40 publications on the region’s functional organization in healthy young people. Now, Hickok and his team – which includes UCI faculty Kourosh Saberi, John Middlebrooks, and Fan-Gang Zeng – will look at stroke and hearing-impaired patients to study what happens when various portions of this system are damaged.
“We’re focusing on speech production, audiovisual integration, and spatial hearing,” he says. “This work will both refine our understanding of these circuits and the PT region and help us understand the sources of speech and language difficulties following stroke.”
A potential application of this deeper understanding, he says, is that neural prostheses – brain implants – could one day be used to compensate for lost function in some aphasia cases. The idea may seem far-fetched, but given the advances made in aphasia research over the past decade and a half, and given that neural prostheses are becoming a reality for some neurological disorders, the possibility is welcome news for the millions who live with stroke-induced language deficits.
-Heather Ashbach, UCI School of Social Sciences