Novel Brain Implants Help Paralyzed People Speak Faster, More Accurately

High-performance brain-computer interfaces (BCIs) decoded brain activity into speech faster, more accurately, and with a bigger vocabulary than existing technologies, two early trials in Nature showed.

In the BrainGate2 study, a speech-to-text BCI that recorded activity from intracortical microelectrode arrays (MEAs) decoded the speech of a woman with amyotrophic lateral sclerosis (ALS) at 62 words per minute, more than 3 times faster than the previous record, reported Francis Willett, PhD, of Stanford University in California, and colleagues.

In the BRAVO study, Edward Chang, MD, of the University of California San Francisco, and co-authors reported success in three modalities — text, speech audio, and facial-avatar animation — using high-density surface recordings of the speech cortex. Decoding the text of a woman with a brainstem stroke reached a median rate of 78 words per minute, and the participant was able to “speak” through a digital avatar with software that simulated facial movements.

The findings signal a turning point in BCI technology to restore communication for people with severe paralysis, observed Nick Ramsey, PhD, of University Medical Center Utrecht in the Netherlands, and Nathan Crone, MD, of Johns Hopkins University School of Medicine in Baltimore, in an accompanying editorial.

Current systems to help paralyzed people communicate are slow, often achieving just a few words per minute, compared with normal speech, which runs about 150 words a minute, Ramsey and Crone noted. But the two high-performance BCIs reported in Nature “represent a great advance in neuroscientific and neuroengineering research, and show great promise in boosting the quality of life of individuals who have lost their voice as a result of paralyzing neurological injuries and diseases,” they wrote.

BrainGate2

In BrainGate2, Willett and colleagues recorded neural activity from four intracortical MEAs while a study participant with ALS attempted to make orofacial movements or speak. The recordings allowed her to communicate with a 9.1% word error rate on a 50-word vocabulary, and a 23.8% word error rate on a 125,000-word vocabulary.
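For readers unfamiliar with the metric, word error rate is the standard word-level edit distance (substitutions, deletions, and insertions) between the decoded sentence and the sentence the participant intended, divided by the number of words in the intended sentence. The minimal Python sketch below illustrates that conventional calculation; it is not code from the study, and the example sentences are hypothetical.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by the length of the reference sentence."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for the Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # delete every remaining reference word
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # insert every remaining hypothesis word
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical example: one substituted word in a five-word sentence gives 20% WER.
print(word_error_rate("i would like some water", "i would like some music"))  # 0.2
```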

The system was trained using recordings of neural activity collected when the participant tried to speak 260–480 sentences at her own pace. Training took an average of 140 minutes a day for 8 days, which could be reduced considerably without much loss of performance, the researchers suggested.

“This system is trained to know what words should come before other ones and which phonemes make what words,” Willett said at a press briefing. “If some phonemes were wrongly interpreted, it can still take a good guess.”
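Willett’s description amounts to weighing noisy phoneme evidence against a language-model prior over which words are likely in context. The toy Python sketch below is purely illustrative, with a hypothetical two-word lexicon and made-up probabilities rather than the decoder reported in the paper; it shows how a word prior can recover the intended word even when one phoneme is misdecoded.

```python
import math

# Hypothetical tiny lexicon mapping words to phoneme spellings (illustration only).
LEXICON = {
    "thirsty": ["TH", "ER", "S", "T", "IY"],
    "thirty":  ["TH", "ER", "T", "IY"],
}

# Hypothetical language-model prior for the next word after "I am very ...".
PRIOR = {"thirsty": 0.6, "thirty": 0.05}

def phoneme_score(decoded, target, p_correct=0.8):
    """Crude log-likelihood of the decoded phonemes given a candidate word."""
    score = 0.0
    for i, ph in enumerate(target):
        match = i < len(decoded) and decoded[i] == ph
        score += math.log(p_correct if match else 1.0 - p_correct)
    return score

def best_word(decoded_phonemes):
    """Pick the word maximizing phoneme evidence plus the language-model prior."""
    return max(
        LEXICON,
        key=lambda w: phoneme_score(decoded_phonemes, LEXICON[w]) + math.log(PRIOR[w]),
    )

# "S" was misdecoded as "T", yet the word prior still recovers "thirsty".
print(best_word(["TH", "ER", "T", "T", "IY"]))  # thirsty
```

Without the prior term, the mangled phoneme string actually scores slightly better against “thirty”; adding the language-model prior flips the decision back to the intended word, which is the behavior Willett describes.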

The researchers also found that neural activity recorded from a brain region known as Broca’s area could not be decoded, consistent with recent work questioning the role of Broca’s area in speech.

BRAVO

In BRAVO, Chang and colleagues implanted a silicon sheet embedded with 253 electrocorticography (ECoG) electrodes that recorded the signals normally sent to muscles of the tongue, jaw, larynx, and face in a participant with a brainstem stroke. The study built on previous ECoG reports, including one involving a similar BCI implanted in a different participant.

The system was trained on a 1,024-word vocabulary and could decipher and compose sentences with a 25.5% word error rate. It achieved a 4.9% word error rate when decoding sentences from a 50-phrase set.

The BCI also decoded attempted facial expressions, which it reproduced through a digital avatar to provide visual feedback. Using signals sent from the participant’s brain as she tried to speak, the system moved the avatar’s jaw, lips, and tongue and displayed facial expressions of happiness, sadness, or surprise.

“Our goal is to restore a full, embodied way of communicating, which is really the most natural way for us to talk with others,” Chang said. “These advancements bring us much closer to making this a real solution for patients.”

The two reports constitute crucial proof of concept that communication can be restored using implantable BCIs, but more work is needed, the editorialists pointed out.

The devices were too complicated for home use and required highly skilled researchers to operate, Ramsey and Crone noted. “Similarly effective BCI systems that operate with minimal or no researcher intervention will be needed in the future,” they wrote.

And importantly, both BCIs were trained and tested using the mimed speech of participants who had residual (though weak) articulatory movements, the editorialists said. “More studies are now needed to show efficacy in participants who lack residual movements, as occurs in locked-in syndrome (including late-stage ALS),” they suggested.

  • Judy George covers neurology and neuroscience news for MedPage Today, writing about brain aging, Alzheimer’s, dementia, MS, rare diseases, epilepsy, autism, headache, stroke, Parkinson’s, ALS, concussion, CTE, sleep, pain, and more.

Disclosures

Support for the BrainGate2 study was provided by the Department of Veterans Affairs, Wu Tsai Neurosciences Institute, Howard Hughes Medical Institute, Larry and Pamela Garlick, Simons Foundation Collaboration on the Global Brain, and NIDCD.

Researchers for the BrainGate2 study reported relationships with Neuralink, Axoft, Reach Neuro, Synchron, Enspire DBS, MapLight Therapeutics, Facebook Reality Labs, MIND-X, Inscopix, and Heal.

Support for BRAVO came from the NIH, the Joan and Sandy Weill Foundation, Susan and Bill Oberndorf, Ron Conway, David Krane, Graham and Christina Spencer, the William K. Bowes, Jr. Foundation, the Rose Hills Foundation, and the Noyce Foundation.

Some BRAVO researchers are inventors on a pending provisional UCSF patent application that is relevant to the neural-decoding approaches used in this work, or other patents broadly relevant to the neural-decoding approaches in this work. One co-author is chief technical officer at Speech Graphics.

The editorialists declared no competing interests.

Primary Source

Nature

Source Reference: Willett FR, et al “A high-performance speech neuroprosthesis” Nature 2023; DOI: 10.1038/s41586-023-06377-x.

Secondary Source

Nature

Source Reference: Metzger SL, et al “A high-performance neuroprosthesis for speech decoding and avatar control” Nature 2023; DOI: 10.1038/s41586-023-06443-4.

Additional Source

Nature

Source Reference: Ramsey NF, Crone NE “Brain implants that enable speech pass performance milestones” Nature 2023; DOI: 10.1038/d41586-023-02546-0.
