Artificial Intelligence Achieves 74% Accuracy in Reading Thoughts — Opening New Horizons in Neuroscience and Rehabilitation

Recent research at Stanford University has unveiled a groundbreaking brain-computer interface capable of translating a person’s internal speech into text with an impressive accuracy of 74%.
This development holds enormous promise for revolutionizing communication methods for individuals unable to speak due to severe disabilities or neurological conditions.
The device functions by recording neural signals from the motor cortex, the region of the brain that coordinates the movements involved in producing speech.
In the study detailed in the journal Cell, four volunteers were implanted with microelectrodes that detect electrical signals from neurons.
During experiments, participants were asked to either pronounce certain words or simply imagine speaking them.
Both tasks activated similar brain regions, enabling the AI system to learn and recognize internal speech even when it was not spoken aloud.
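Conceptually, the decoding step can be pictured as a pattern-classification problem: features extracted from the electrode recordings are mapped to the word the participant attempted or imagined. The sketch below is only a toy illustration of that idea, using simulated firing-rate features and a generic classifier; the vocabulary, channel count, noise model, and classifier are assumptions made for illustration and are not the study's actual decoder.

```python
# Toy illustration (not the Stanford decoder): train a word classifier on
# simulated "attempted speech" neural features, then test it on weaker
# "imagined speech" features that share the same spatial pattern.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_channels = 96            # channels per microelectrode array (illustrative)
n_trials_per_word = 40
vocab = ["yes", "no", "water", "help"]   # toy vocabulary, not the study's

# Each word gets its own characteristic activity pattern across channels.
word_patterns = rng.normal(size=(len(vocab), n_channels))

def simulate_trials(signal_gain):
    """Simulate per-trial features; imagined speech is modeled as a weaker
    version of the same spatial pattern plus recording noise."""
    features, labels = [], []
    for label, pattern in enumerate(word_patterns):
        noise = rng.normal(0.0, 0.5, size=(n_trials_per_word, n_channels))
        features.append(signal_gain * pattern + noise)
        labels.extend([label] * n_trials_per_word)
    return np.vstack(features), np.array(labels)

# Train the decoder on trials where the word was spoken or attempted aloud ...
X_train, y_train = simulate_trials(signal_gain=1.0)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ... then test it on imagined-speech trials with a weaker but similar signal.
X_inner, y_inner = simulate_trials(signal_gain=0.4)
print(f"Toy inner-speech decoding accuracy: {decoder.score(X_inner, y_inner):.0%}")
```

Because the simulated imagined-speech trials reuse the same spatial patterns at lower amplitude, a classifier trained on the "spoken" trials still recognizes them, which is the intuition behind training on overt speech and decoding inner speech.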
To enhance privacy, researchers also tested a ‘password’ function, where participants selected a code phrase — “Chitty chitty bang bang” — which the system correctly recognized in 99% of cases, demonstrating that users could control when the system gains access to their inner speech.
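Conceptually, the password acts as a gate: the decoder stays inactive until it detects the chosen phrase in the user's inner speech, and only then begins producing output. The sketch below illustrates that gating logic with a stand-in decoder; apart from the phrase itself, every name and detail here is a hypothetical construction, not the researchers' implementation.

```python
# Hypothetical sketch of the "password" idea: decoding output stays locked
# until the chosen passphrase is detected in the decoded inner speech.
UNLOCK_PHRASE = "chitty chitty bang bang"   # phrase reported in the study

class GatedDecoder:
    def __init__(self, decode_fn, unlock_phrase=UNLOCK_PHRASE):
        self.decode_fn = decode_fn          # assumed: maps neural features -> text
        self.unlock_phrase = unlock_phrase
        self.unlocked = False

    def process(self, neural_features):
        """Return decoded text only after the passphrase has been detected."""
        text = self.decode_fn(neural_features)
        if not self.unlocked:
            if text.strip().lower() == self.unlock_phrase:
                self.unlocked = True
                return "[decoder unlocked]"
            return None                     # inner speech ignored while locked
        return text

# Example with a stand-in decoder that simply echoes preset strings:
gate = GatedDecoder(decode_fn=lambda s: s)
for sample in ["random inner monologue", "chitty chitty bang bang", "I would like some water"]:
    print(gate.process(sample))
```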
Brain-computer interfaces are already being used to help people with disabilities communicate, control prosthetic devices, and perform other actions through brain activity alone.
Lead researcher Erin Kunz highlighted that this is the first time scientists have successfully recorded and interpreted brain activity associated specifically with ‘imagined speech.’
Professor Frank Willett emphasized that the technology has the potential to restore communication in a manner closest to natural speech, with the option for users to train the system to ignore internal dialogue if desired.
While the technology is still in its early stages, today’s BCIs cannot ‘read minds’ at will: they require implanted electrodes, extensive calibration, and the user’s cooperation.
Nonetheless, ongoing research suggests that future, more sensitive models may be able to decode inner speech the user never intended to share, which is why safeguards such as the password function matter.
The growing interest in such systems is reflected in projects like Elon Musk’s Neuralink, which tests neural implants for mobility-impaired patients.
Unlike Neuralink’s focus on restoring movement, Stanford’s system aims to decode internal speech, representing a promising new approach in brain-computer interface development.