Brain-computer interfaces: from "decoding language" to more possibilities
2025-07-15
Brain-computer interface (BCI) technology establishes a direct information pathway between the brain and external devices by detecting and modulating brain activity, creating unprecedented forms of human-computer interaction and bringing the "dialogue of thought" from science fiction into reality. In recent years, as the technology has iterated, many countries have carried out experimental explorations in the field, especially in language brain-computer interfaces, and achieved a series of breakthroughs: real-time "brain wave to speech" communication for paralyzed stroke patients, writing Chinese characters with a brain-controlled robotic arm, and helping ALS patients improve their quality of life. The new technology is building a bridge of communication for people with language disorders, and it will also offer new ideas and solutions for treating neurological and other diseases.

Converting brain activity into speech in real time

The brain is a powerful yet secluded organ. Tightly protected by the skull, it processes sensation, emotion, memory, decision-making, movement, and more. External information enters the brain, and the brain's information reaches the outside world, through the body's biological information interfaces, namely the sensory and nervous systems. Modern technology now makes it possible to detect brain activity signals, decode the information they carry, and use that information to bypass the muscular system and directly control external devices. This amounts to building an artificial information interface between the brain and the outside world, which is what we call brain-computer interface technology.

A language brain-computer interface is a specific application of this technology. It detects brain activity directly, in particular extracting speech-related signals from the brain regions that control movement, decodes the sentence information contained in them, and then drives a speech synthesizer to "say" what the patient wants to say. Ideally, it works like a real-time simultaneous interpreter that not only reads a person's intentions accurately but also outputs natural speech as quickly and authentically as possible. To get there, scientists must solve a series of technical problems, including signal decoding, speech synthesis, and output latency (a simplified sketch of this capture-decode-speak loop follows below). With advances in neuroscience and engineering, studies around the world are rapidly iterating language BCI technology along different dimensions, and the field is expected to enter a new stage of medical application featuring millisecond-level decoding and natural conversation.
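To make that capture-decode-speak loop concrete, here is a minimal, purely illustrative Python sketch. The window length, channel count, and every function in it (read_window, decode_window, synthesize) are assumptions made for illustration; they do not correspond to the internals of any system described in this article.

```python
# Illustrative sketch of a streaming language-BCI pipeline: capture a short
# window of neural features, decode it to text, and hand the text to a speech
# synthesizer. All names and numbers here are assumptions, not a real system.
import time
import numpy as np

WINDOW_MS = 10      # decode a fresh feature window every 10 ms (figure cited in the article)
N_CHANNELS = 256    # e.g. a 256-channel microelectrode array

def read_window() -> np.ndarray:
    """Placeholder for the acquisition hardware: one feature value per channel."""
    return np.random.randn(N_CHANNELS)

def decode_window(features: np.ndarray) -> str:
    """Placeholder decoder; a real system would run a trained neural network here."""
    vocabulary = ["", "hello", "thank", "you"]
    return vocabulary[int(abs(features.sum())) % len(vocabulary)]

def synthesize(text: str) -> None:
    """Placeholder for a synthesizer that would speak in the patient's own voice."""
    if text:
        print(f"[speaker] {text}")

def run(duration_s: float = 0.1) -> None:
    """Run the detect -> decode -> speak loop for a short demo period."""
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        synthesize(decode_window(read_window()))
        time.sleep(WINDOW_MS / 1000)   # short cadence keeps output latency low

if __name__ == "__main__":
    run()
```

In a real system, the decoder would be a trained model and the short decoding cadence is what keeps output latency low enough for natural conversation.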
In March this year, China's independently developed "Beinao-1" semi-invasive system completed its third human implantation. Its flexible, high-density electrodes acquire signals synchronously across 128 channels, and the system has allowed patients who lost speech to amyotrophic lateral sclerosis (ALS) to regain the ability to communicate while reducing the risk of surgical trauma. Not long ago, a research team from the University of California, Davis, unveiled a new language brain-computer interface system. The team implanted a 256-channel microelectrode array into the brain of a 45-year-old man who had lost the ability to speak because of ALS, and used deep learning algorithms to capture the relevant signals in his brain and decipher the words he was trying to say. The system samples brain-signal features every 10 milliseconds, decodes the sounds the patient is trying to make in near real time, reproduces changes in intonation, and can even hum a string of notes at three different pitches, making the overall expression more natural and fluent.

Artificial intelligence algorithms are key to the breakthroughs

Integrating and applying advanced artificial intelligence models is central to decoding neural signals and to generating and outputting natural language through a brain-computer interface. In recent years, research institutions around the world have reported a string of advances in this area. A team from the medical center of Utrecht University and Radboud University in the Netherlands optimized deep learning models to convert neural activity in the sensorimotor cortex into recognizable speech in real time; the model classifies individual words with 92%-100% accuracy while largely preserving the intonation and timbre of the synthesized speech. In the deep learning model developed by the UC Davis team, the algorithm was trained on recordings of the patient's voice made before he lost the ability to speak, enabling it to synthesize and output speech close to his original voice.

Mandarin Chinese has 418 syllables and four tones, so developing neural encoding and decoding mechanisms and information-processing methods suited to the features of Chinese is even more challenging than for languages such as English. A joint team from Huashan Hospital affiliated with Fudan University, ShanghaiTech University, and Tianjin University has developed a language brain-computer interface for Chinese. Its multi-stream neural network model decodes tones and syllables simultaneously, reaching up to 76% accuracy in single-subject tone-syllable classification and 91% in single-character decoding (a toy sketch of this two-stream idea appears at the end of this section). These advances have laid a solid foundation for putting language brain-computer interfaces into practical use.

The greater challenge ahead may lie in decoding intent and semantics. Current research focuses mainly on decoding speech motor commands from the cortical areas that control vocalization. However, many patients with aphasia struggle to form fluent sentences because the damage lies in the brain regions that organize language rather than those that control vocalization. Helping them requires recording signals directly from higher-level cortical areas to decode intention, and combining those signals with artificial intelligence technologies such as large language models to generate the corresponding language. Decoding complex intentions in the brain is still at an early stage of research; the hope is that future language brain-computer interfaces can push further and achieve a true "what you think is what you get".
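As promised above, here is a small, assumption-laden Python (PyTorch) sketch of the two-stream idea: one shared encoder over a window of neural features feeds two parallel classification heads, one for tones and one for syllables. The layer sizes, window shape, and class counts are illustrative guesses, not the joint team's published architecture.

```python
# Toy sketch of "multi-stream" decoding for Mandarin: a shared encoder with two
# parallel heads, so tone and syllable are classified simultaneously from the
# same neural window. Shapes and sizes are illustrative assumptions only.
import torch
import torch.nn as nn

N_CHANNELS, N_TIMESTEPS = 128, 50   # assumed electrode channels x time bins per window
N_TONES, N_SYLLABLES = 5, 418       # 4 tones + neutral; 418 Mandarin syllables (per the article)

class MultiStreamDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder: flatten the neural window and embed it.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(N_CHANNELS * N_TIMESTEPS, 256),
            nn.ReLU(),
        )
        # Two parallel output streams decoded from the same representation.
        self.tone_head = nn.Linear(256, N_TONES)
        self.syllable_head = nn.Linear(256, N_SYLLABLES)

    def forward(self, x):
        h = self.encoder(x)
        return self.tone_head(h), self.syllable_head(h)

if __name__ == "__main__":
    model = MultiStreamDecoder()
    window = torch.randn(8, N_CHANNELS, N_TIMESTEPS)   # a batch of 8 neural windows
    tone_logits, syllable_logits = model(window)
    print(tone_logits.shape, syllable_logits.shape)    # (8, 5) and (8, 418)
```

In training, each head would receive its own cross-entropy loss over the same window of features, which is what lets tone and syllable be decoded in parallel rather than as a single combined label.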
Expected to transform the treatment of neurological diseases

Brain-computer interface technology can not only help patients with aphasia regain language but may also bring broader changes to the treatment of neurological injuries and diseases. For example, researchers from the Swiss Federal Institute of Technology in Lausanne and the University Hospital of Lausanne previously developed a brain-spine interface that lets paralyzed patients walk by decoding the brain's motor commands and stimulating the regions of the spinal cord involved in walking. The system has now operated stably for more than a year after implantation, and patients can use it independently at home without frequent recalibration.

Recently, a joint team from Fudan University and the Chinese Academy of Sciences developed the world's first visual prosthesis whose spectral coverage spans both visible and infrared light. Implanted in the fundus, the device takes over from the retina's photoreceptor cells: it receives light signals, converts them into electrical signals, activates the retinal ganglion cells, and relays visual information to the brain. The technology has enabled blind experimental animals to regain perception of visible light and infrared radiation, and it is expected to lead to breakthroughs in the treatment of retinal diseases.

In addition, brain-computer interfaces can precisely modulate the activity of specific targets in the brain, either through implanted electrodes or through non-invasive methods such as transcranial electrical or magnetic stimulation. Successful examples of the former include deep brain stimulation for Parkinson's disease, while the latter has been widely explored for treating brain diseases ranging from severe depression to Alzheimer's disease.

Many challenges still need to be overcome, however. Implantable brain-computer interfaces need further validation of their long-term stability and safety in the body, further reduction of the trauma of electrode implantation, and improvements in the accuracy and operational stability of neural-signal decoding. Neuroscience research also needs to reveal more about how the brain processes information, so that brain-computer interfaces can interact with it more efficiently. And because the technology directly involves detecting and intervening in brain activity, its future development must pay close attention to potential risks involving ethics, privacy, and data security. These issues have drawn close attention from international organizations such as the United Nations and from the relevant regulatory authorities in China. The ethics, standards, and norms governing brain-computer interface research and application are gradually being improved, to ensure that the technology and its applications develop in a healthy, sustainable way and ultimately benefit all of humankind. (New Society)
Editor: Xingyu  Responsible editor: Liuyang
Source: people.cn