Electrodes in the brain pick up faint ‘inner voice’ signal by probing the intersection of speech and movement

Image: Can a device picking up electrical signals in the brain identify numbers and figure out what the user is doing with them: writing them, saying them out loud, or merely thinking them? Credit: Generated by AI model MirageMaker on Deep Dream Generator by Nicolas Posunko/Skoltech PR with fragments courtesy of the researchers


Russian researchers from Skoltech, the Federal Centre of Neurosurgery in Tyumen, Sechenov University, and Lomonosov Moscow State University have studied brain activity in two patients with electrodes implanted in their brains while they performed speech-related and handwriting tasks. Available on the medRxiv preprint repository, the team’s findings add to a body of knowledge that could eventually enable “mind-reading” neural interfaces capable of identifying the user’s thoughts and recognizing intentions without being primed for any particular task.

“In neuroscience, we used to tie specific brain functions to their dedicated brain areas in an almost one-to-one fashion,” study co-author Senior Research Scientist Nikolay Syrov from Skoltech Neuro commented. “But the current understanding is that a function is underpinned by a network of potentially many brain areas dynamically interacting with each other. This is certainly the case with coordinated movement, for example, which our findings once again confirm. One exciting conclusion is: Seeing how a function tends to be distributed throughout the brain, can we pick up its electrical activity and determine what the brain is trying to do without knowing in advance what kind of intention we are looking for?”

This question underlies the notion of so-called multimodal neural interfaces. These are brain chips that are not geared toward one particular function the way most state-of-the-art devices are. Rather than looking only for signals encoding an intention to move a prosthetic limb, as distinct from signals related to speech, for example, a multimodal interface would figure out the nature of the user’s intention based on brain activity. The Skoltech researchers think the future of brain-computer interfaces lies with this approach, which will require many studies like this one to refine our understanding of the complex mapping of functions onto brain areas.
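To make the idea concrete, here is a minimal, purely illustrative sketch of what a multimodal decoder could look like in code: a single classifier that tells movement, overt speech, and inner speech intentions apart from neural feature vectors. The data, features, and model below are invented for illustration and are not the pipeline used in the study.

```python
# Illustrative only: a toy multi-class decoder of intention type
# (movement vs. overt speech vs. inner speech) from neural features.
# The data are synthetic; nothing here reproduces the study's analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 300, 64              # e.g., per-electrode band-power features (assumed)
labels = rng.integers(0, 3, size=n_trials)  # 0 = move, 1 = overt speech, 2 = inner speech

# Each intention type shifts the mean activity pattern slightly
class_means = rng.normal(0.0, 1.0, size=(3, n_features))
X = class_means[labels] + rng.normal(0.0, 2.0, size=(n_trials, n_features))

# One decoder covering all intention types, rather than one device per function
decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, X, labels, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```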

The study was carried out on two epilepsy patients, both of whom had multiple electrodes implanted in the brain at the Federal Centre of Neurosurgery in Tyumen, Russia. The surgery was recommended by their physicians to pinpoint the seizure onset zone as part of their epilepsy treatment. This means the electrode positions were chosen without any regard to which areas might be expected to activate during the handwriting and speech-related tasks featured in the Skoltech study.

In the first task, the patients wrote digits on a pen tablet with their brain activity monitored. In the second task, they first pronounced words normally, then mouthed the same words silently, and finally only imagined saying the words without any movement of the tongue, lips, etc.

The team found that the handwriting task, which is fundamentally a motor task, elicited a response detectable by electrodes regardless of their placement. This agrees with the expectation that coordinated movement correlates with activity spread throughout the cerebral cortex — the outermost part of the brain.

Some brain areas responded whether the patient was speaking or writing, which is reassuring for the prospect of multimodal interfaces.

In the speech-related task, the electrical responses to overt speaking and to silent articulation matched each other closely. While the “inner voice” signal was very faint, it fitted neatly within the normal speech signal. This makes sense: the weak signal can be thought of as normal speech minus whatever the speech organs, and the ears, are contributing. The “residual” signal could include processes such as retrieving words from memory. “Trying to pick up the inner voice signal is exciting in and of itself, because this is basically like mind reading. There aren’t that many research groups working on this,” the study’s lead author, Junior Research Scientist Gurgen Soghoyan from Skoltech Neuro, said.
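As a rough way to picture that subtraction intuition, the toy simulation below builds an overt-speech response out of a weak “cognitive” component plus a strong articulatory one, and shows that removing the articulatory part leaves something resembling the imagined-speech response. The signals are simulated; this is not the authors’ analysis.

```python
# Toy illustration of the "inner voice as residual" intuition; all signals are simulated.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 500)                     # one-second epoch, arbitrary units

cognitive = 0.3 * np.sin(2 * np.pi * 4 * t)        # weak component shared by all conditions
articulatory = 1.0 * np.sin(2 * np.pi * 10 * t)    # motor/auditory component, overt speech only

def average_evoked(signal, n_trials=50, noise=0.5):
    """Average many noisy trials to estimate the evoked response."""
    trials = signal + rng.normal(0.0, noise, size=(n_trials, t.size))
    return trials.mean(axis=0)

overt = average_evoked(cognitive + articulatory)   # speaking aloud
imagined = average_evoked(cognitive)               # inner speech only

residual = overt - articulatory                    # "speech minus what the mouth and ears do"
print("Correlation of imagined speech with the residual:",
      round(float(np.corrcoef(imagined, residual)[0, 1]), 2))
```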

What makes the Skoltech scientists’ experimental setup novel is that it records both speech- and motor-related brain activity in the same patients. The two functions are usually studied separately, even though speaking presupposes a motor component because of all the movement involved in articulating words. Recording the signal from the same patient performing each of the tasks makes it possible to determine where the two functions intersect neurally. This is essential for creating multifunctional neural interfaces capable of decoding many functions, including the intentions to move and to say something.