Method for decoding language from non-invasive brain recordings

Problem

The loss of speech, whether resulting from accidents or diseases, profoundly impacts individuals and their communities. The emotional and social ramifications of losing the ability to speak cannot be overstated. Language decoding technology could dramatically improve the lives of those who have lost the ability to speak, and has future applications in consumer brain-computer interfaces (BCIs) that let people interact with technology directly through thought.

Existing methods for decoding continuous language rely on electrodes implanted through invasive brain surgery, which limits their applicability. Non-invasive language decoding is currently limited to single words. A method that decodes continuous language non-invasively would have utility in a wide range of applications.

Solution

Researchers at The University of Texas have invented a method for continuous language decoding that uses non-invasive brain recordings as an input. The language decoding method has been demonstrated on brain recordings made using functional magnetic resonance imaging (fMRI) and has the potential to be adapted for brain recordings made using more portable methods such as functional near-infrared spectroscopy (fNIRS).

The method analyzes brain responses using a machine learning language model and a map of where the meanings of different words and phrases are represented in the brain. It outputs a text prediction of the full, continuous sentences that the user is hearing, reading, or thinking. The method also respects mental privacy: the subject's cooperation is required both to train and to apply the decoder.
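
To make the description above concrete, the following is a minimal conceptual sketch, not the patented implementation, assuming one plausible decoding strategy: a language model proposes candidate word continuations, and an encoding model (a stand-in for the "semantic map") fit with ridge regression scores how well each candidate's predicted brain response matches the measured recording. All class names, the toy vocabulary, and the simulated data are hypothetical illustrations.

    # Conceptual sketch of decoding continuous language from brain recordings
    # by combining a language model with an encoding model that maps semantic
    # features to brain responses. All names and data here are toy stand-ins.
    import numpy as np

    rng = np.random.default_rng(0)

    N_VOXELS = 200     # number of recorded brain locations (e.g., fMRI voxels)
    FEATURE_DIM = 64   # dimensionality of the semantic word features
    VOCAB = ["the", "dog", "ran", "home", "quickly", "cat", "sat", "down"]

    # Toy stand-in for a language model: fixed random word embeddings and
    # random next-word proposals. A real system would use a trained neural LM.
    EMBEDDINGS = {w: rng.standard_normal(FEATURE_DIM) for w in VOCAB}

    def lm_propose(prefix, n_candidates=4):
        """Propose candidate next words for a text prefix (toy: random)."""
        return list(rng.choice(VOCAB, size=n_candidates, replace=False))

    def embed(words):
        """Average word embeddings as a crude semantic feature for a phrase."""
        return np.mean([EMBEDDINGS[w] for w in words], axis=0)

    # "Semantic map": a linear encoding model, fit with ridge regression, that
    # predicts each voxel's response from the semantic features of a stimulus.
    def fit_encoding_model(features, responses, alpha=1.0):
        d = features.shape[1]
        return np.linalg.solve(features.T @ features + alpha * np.eye(d),
                               features.T @ responses)

    # Simulated training data: phrases the subject heard plus recorded responses.
    train_phrases = [list(rng.choice(VOCAB, size=3)) for _ in range(300)]
    true_weights = rng.standard_normal((FEATURE_DIM, N_VOXELS))
    train_features = np.stack([embed(p) for p in train_phrases])
    train_responses = (train_features @ true_weights
                       + 0.1 * rng.standard_normal((300, N_VOXELS)))

    W = fit_encoding_model(train_features, train_responses)

    def score(candidate_words, measured_response):
        """Correlation between the predicted and measured brain response."""
        predicted = embed(candidate_words) @ W
        return np.corrcoef(predicted, measured_response)[0, 1]

    def decode(measured_response, n_words=3, beam_width=3):
        """Beam search: LM proposes continuations, encoding model ranks them."""
        beams = [([], -np.inf)]
        for _ in range(n_words):
            expanded = []
            for words, _ in beams:
                for w in lm_propose(words):
                    cand = words + [w]
                    expanded.append((cand, score(cand, measured_response)))
            expanded.sort(key=lambda b: b[1], reverse=True)
            beams = expanded[:beam_width]
        return beams[0]

    # Decode the (simulated) response to a held-out phrase.
    target = ["dog", "ran", "home"]
    response = embed(target) @ true_weights
    print("decoded:", decode(response))

The key design point illustrated here is that the decoder never reads words directly from the brain signal; it compares candidate sentences generated by a language model against the recording through the learned semantic map, and the mapping itself must be trained on data collected with the subject's cooperation.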