New technology could help people with paralysis to speak again
Recent research led by Northwestern University in Evanston, IL, finds that the brain generates speech sounds in a similar way to how it controls hand and arm movements.
The finding brings closer the day when people who are paralyzed — such as individuals with “locked-in syndrome” — will be able to speak through a “brain-machine interface” by just trying to say words.
A paper on the work now appears in the Journal of Neuroscience.
The team envisions technology that would use the brain’s own encoding of speech sounds, together with the commands it sends to the muscles of the lips, tongue, palate, and voice box, to produce speech.
More ‘intuitive’ than Hawking’s technology
Such a system, the authors explain, would be more “intuitive” than that used by renowned physicist Stephen Hawking, who died earlier this year at the age of 76.
Hawking had a rare disease called amyotrophic lateral sclerosis that left him paralyzed and unable to speak naturally for most of his life.
However, thanks to a computer interface that he could control by moving his cheek, he could write words and sentences that a speech synthesizer then read out.
Although the method does the job, it is slow and laborious. It does not draw on the speech that the brain itself encodes and sends to the muscles that produce the sounds.
Instead, it requires that the person go through a process that is more akin to writing; they have to think, for instance, about the written form of the words and sentences they wish to articulate, not just their sounds.
‘Phonemes and articulatory gestures’
The study pursues a model of speech production with two parts: the formulation of phonemes, and “articulatory gestures.”
The first is the hierarchical process of breaking sentences down into phrases, words, syllables, and finally individual sounds, or phonemes. The second is producing those sounds by controlling the muscles that articulate the vocal tract. Until this work, it was not known how the brain actually plans and represents these two processes.
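To make the two-part model concrete, here is a minimal sketch in Python of how a word might first be broken into phonemes and each phoneme then expanded into the articulatory gestures that produce it. The phoneme symbols, the gesture descriptions, and the word-to-phoneme lookup are simplified, hypothetical examples for illustration, not the representations measured in the study.

```python
# Toy illustration of the two-stage model: word -> phonemes -> gestures.
# All inventories below are hypothetical and simplified.

# Stage 1 input: a small, hypothetical lexicon mapping words to phoneme sequences.
PHONEMES = {
    "speak": ["S", "P", "IY", "K"],
    "again": ["AH", "G", "EH", "N"],
}

# Stage 2: a hypothetical mapping from each phoneme to the gestures
# (lip, tongue, palate, and voice-box movements) that articulate it.
GESTURES = {
    "S":  ["tongue tip near alveolar ridge", "voicing off"],
    "P":  ["lip closure", "voicing off"],
    "IY": ["tongue high and front", "voicing on"],
    "K":  ["tongue back closed at velum", "voicing off"],
    "AH": ["tongue low and central", "voicing on"],
    "G":  ["tongue back closed at velum", "voicing on"],
    "EH": ["tongue mid and front", "voicing on"],
    "N":  ["tongue tip closure", "velum lowered", "voicing on"],
}

def plan_speech(word: str) -> list[tuple[str, list[str]]]:
    """Break a word into phonemes, then expand each phoneme into gestures."""
    return [(phoneme, GESTURES[phoneme]) for phoneme in PHONEMES[word]]

if __name__ == "__main__":
    for phoneme, gestures in plan_speech("speak"):
        print(phoneme, "->", ", ".join(gestures))
```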
“We hypothesized,” notes senior study author Dr. Marc W. Slutzky, an associate professor of neurology and of physiology, “speech motor areas of the brain would have a similar organization to arm motor areas of the brain.”
He goes on to explain that they identified two brain areas that are involved in speech production, reporting, “The precentral cortex represented gestures to a greater extent than phonemes. The inferior frontal cortex, which is a higher-level speech area, represented both phonemes and gestures.”
He and his colleagues made the discoveries by recording brain activity through electrodes implanted in patients who were undergoing surgery to remove brain tumors. The patients had to remain conscious during the surgery so that they could read words aloud from a screen.
The authors explain:
“These findings suggest that speech production shares a similar critical organizational structure with movement of other body parts.”
“This has important implications,” they conclude, “both for our understanding of speech production and for the design of brain-machine interfaces to restore communication to people who cannot speak.”
Based on their results, they now plan to build a brain-machine interface algorithm that not only decodes gestures but also combines them to form words.
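The following is a rough sketch, in Python, of the kind of pipeline such an interface could use: classify each frame of neural activity as an articulatory gesture, then look the decoded gesture sequence up as a word. The classifier, the feature dimensions, the gesture labels, and the gesture-to-word dictionary are all illustrative assumptions, not the team’s actual algorithm.

```python
# Hypothetical decode-then-assemble pipeline: neural features -> gestures -> word.
import numpy as np
from sklearn.linear_model import LogisticRegression

GESTURE_LABELS = ["lip closure", "tongue tip", "tongue back", "open vowel"]

# Placeholder training data standing in for labeled neural recordings:
# 200 trials, each a 32-dimensional electrode feature vector.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 32))
y_train = rng.integers(0, len(GESTURE_LABELS), size=200)

# A simple classifier as a stand-in for whatever decoder is ultimately used.
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Hypothetical dictionary mapping gesture sequences to words.
WORDS = {
    ("lip closure", "open vowel", "tongue back"): "back",
    ("tongue tip", "open vowel", "lip closure"): "top",
}

def decode_word(neural_frames: np.ndarray) -> str:
    """Classify each frame as a gesture, then combine the gestures into a word."""
    gestures = tuple(GESTURE_LABELS[i] for i in decoder.predict(neural_frames))
    return WORDS.get(gestures, "<unknown>")

print(decode_word(rng.normal(size=(3, 32))))  # 3 frames of (fake) neural features
```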