“The assignment tells him, ‘please say that word now,’ and we’ll take that neural activity,” says David Moses, Ph.D.
Dr. Moses says the electrodes are connected to the part of the brain that is responsible for language. Sophisticated machine learning software is able to recognize and decode the signals. So far, the team has built a vocabulary of around 50 fully formed words, enough to form full sentences.
“You see how it all comes together in the sentence decoding demonstration, where he is prompted with a sentence and tries to say each word, and those words are decoded,” he explains.
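The two quotes above describe the basic decoding loop: each attempted word produces a pattern of neural activity, software maps that pattern to one of roughly 50 vocabulary words, and the decoded words are strung into a sentence. As a purely illustrative sketch of that idea, here is a toy nearest-centroid decoder over synthetic "activity" vectors. The vocabulary, feature size, and classifier are all invented for illustration; the UCSF team's actual system uses far more sophisticated machine learning on real electrode recordings.

```python
# Toy illustration only: decode synthetic "neural activity" vectors into
# words from a small fixed vocabulary, then assemble a sentence word by word.
# Every name and number here is a made-up stand-in, not the UCSF method.
import random

VOCAB = ["i", "am", "thirsty", "good", "hello"]  # stand-in for the ~50-word vocabulary
FEATURES = 8  # pretend each attempted word yields an 8-number activity pattern

random.seed(0)
# Pretend training learned one "template" activity pattern per word.
templates = {word: [random.gauss(i, 1.0) for _ in range(FEATURES)]
             for i, word in enumerate(VOCAB)}

def decode_word(activity):
    """Return the vocabulary word whose template is closest to the activity."""
    def sq_dist(template):
        return sum((a - t) ** 2 for a, t in zip(activity, template))
    return min(VOCAB, key=lambda w: sq_dist(templates[w]))

def decode_sentence(trials):
    """Decode one word per attempted-speech trial and join them."""
    return " ".join(decode_word(a) for a in trials)

# Simulate a participant attempting "i am thirsty": each trial is that
# word's template plus a little noise.
trials = [[x + random.gauss(0, 0.1) for x in templates[w]]
          for w in ["i", "am", "thirsty"]]
print(decode_sentence(trials))
```

In this toy version, decoding a sentence is just repeated word classification; the real system additionally weighs how likely each word sequence is, which is part of what makes the full pipeline much harder than this sketch suggests.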
The UCSF team says the approach differs from some other brain-computer interface systems, which tap into brain functions generally related to movement in order to relay messages. In a project ABC7 profiled earlier this summer, a research team at Stanford used this technique to enable a patient to type by imagining writing the letters.
“We wanted to be able to translate the signals directly into words,” says UCSF neurosurgeon and study director Edward Chang, MD.
Dr. Chang says each approach offers unique benefits and could help different types of patients, depending on their impairment. He says his team's ultimate goal is to translate the brain signals for the sounds we use to produce words, recreating natural human speech.
“Someday, do we really want to think about how we synthesize speech out loud?” Dr. Chang adds. “You know, how you and I are talking, which is on the order of 120 to 150 words per minute.”
A device that decodes human speech in real time is probably still a long way off, he says. But when it arrives, it will be a breakthrough that patients who cannot speak for themselves today may one day be able to tell us about.
The team drew on previous UCSF research that recruited epilepsy patients who agreed to have their brain signals mapped during treatment. That work helped isolate and identify brain signals related to language.
Copyright © 2021 KGO-TV. All rights reserved.