Neuroscientists are teaching computers to read words straight out of human brains. This was reported by Kelly Servick, a writer for ScienceMag, covering three papers posted to the preprint server bioRxiv in which three different teams of researchers demonstrated that they could decode speech from recordings of neurons firing.

In each study, electrodes placed directly on the brain recorded neural activity while brain-surgery patients listened to speech or read words out loud. Then, researchers tried to figure out what the patients were hearing or saying. In each case, researchers were able to convert the brain’s electrical activity into at least somewhat intelligible sound files.

The first paper, posted to bioRxiv on Oct. 10, 2018, described an experiment in which researchers played recordings of speech to patients with epilepsy who were in the middle of brain surgery. As the patients listened to the sound files, the researchers recorded neurons firing in the parts of the patients’ brains that process sound. The scientists tried a number of different methods for turning that neuronal firing data into speech and found that “deep learning,” in which a computer tries to solve a problem more or less unsupervised, worked best. When they played the results through a vocoder, which synthesizes human voices, a group of 11 listeners was able to correctly interpret the words 75 percent of the time.
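To make the idea concrete, here is a minimal, purely illustrative sketch of the decoding setup: a small neural network learns to map "neural activity" features to "audio" features. Everything below is synthetic random data and invented dimensions (64 electrodes, 32 audio features); the actual studies used intracranial recordings and far more sophisticated deep models and vocoders.

```python
# Illustrative sketch only: learn a mapping from fake "neural recordings"
# to fake "audio features" with a small neural network, loosely analogous
# to the decoding approach described above. All data here is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_electrodes, n_audio_features = 2000, 64, 32

# Fake "neural activity": one row per time window, one column per electrode.
X = rng.normal(size=(n_samples, n_electrodes))

# Fake "audio features" (think spectrogram bins) generated as a hidden
# linear mix of the neural features plus noise, so there is something
# learnable for the decoder to recover.
W = rng.normal(size=(n_electrodes, n_audio_features))
y = X @ W + 0.1 * rng.normal(size=(n_samples, n_audio_features))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small feedforward network stands in for the deep models in the papers.
model = MLPRegressor(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# A held-out R^2 near 1 means the decoder recovered the synthetic mapping;
# decoding real neural data is vastly harder than this toy problem.
print(f"held-out R^2: {model.score(X_test, y_test):.2f}")
```

In the real experiments, the predicted audio features would then be fed to a vocoder to produce an actual sound file for listeners to judge.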

The second paper, posted Nov. 27, 2018, relied on neural recordings from people undergoing surgery to remove brain tumors. As the patients read single-syllable words out loud, the researchers recorded both the sounds coming out of the participants’ mouths and the neurons firing in the speech-producing regions of their brains. Instead of training a separate deep model on each patient, these researchers taught an artificial neural network to convert the neural recordings into audio, showing that the results were at least reasonably intelligible and similar to the recordings made by the microphones.

The third paper, which was posted Aug. 9, 2018, relied on recording the part of the brain that converts specific words that a person decides to speak into muscle movements. While no recording from this experiment is available online, the researchers reported that they were able to reconstruct entire sentences (also recorded during brain surgery on patients with epilepsy) and that people who listened to the sentences were able to correctly interpret them on a multiple-choice test (out of 10 choices) 83 percent of the time. That experiment’s method relied on identifying the patterns involved in producing individual syllables, rather than whole words.

Recordings of experiment 1: here
Recordings of experiment 2: here (zip file)
Recordings of experiment 3: not available

One thing to keep in mind is that these are all small studies. The first paper relied on data from just five patients, the second looked at six, and the third, only three. And none of the neural recordings lasted more than an hour.

The main aim of all these experiments is to make it possible for people who have lost the ability to speak to do so through a brain-computer interface.

