Arthur J. Villasanta – Fourth Estate Contributor
Boston, MA, United States (4E) – Researchers at the Massachusetts Institute of Technology (MIT) have developed a computer interface that can transcribe words a user verbalizes internally but doesn’t actually speak aloud.
Called “AlterEgo,” the system consists of a wearable device and an associated computing system. Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations — saying words “in your head” — but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words.
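The article does not describe the model itself, so as a loose illustration of the underlying idea — learning to associate recurring signal patterns with particular words — here is a minimal nearest-centroid classifier over entirely synthetic “electrode” feature vectors. All names, dimensions, and data below are invented for the sketch:

```python
import numpy as np

# Synthetic "neuromuscular" feature vectors: in reality these would come
# from the facial electrodes; here each word gets an invented 8-dim signature.
rng = np.random.default_rng(0)
words = ["up", "down", "select"]
centers = {w: rng.normal(size=8) for w in words}  # hidden per-word signature

def make_sample(word):
    """Fake a noisy electrode reading for a given subvocalized word."""
    return centers[word] + 0.1 * rng.normal(size=8)

# "Train" by averaging samples per word (a nearest-centroid classifier).
train = {w: np.mean([make_sample(w) for _ in range(20)], axis=0) for w in words}

def classify(signal):
    """Return the word whose learned signature is closest to the signal."""
    return min(train, key=lambda w: np.linalg.norm(signal - train[w]))

print(classify(make_sample("select")))
```

The real system presumably uses a far richer model and genuine electrode features; the point of the sketch is only the train-on-labeled-signals, match-new-signal-to-nearest-pattern loop.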
The AlterEgo device also includes a pair of bone-conduction headphones that transmit vibrations through the bones of the face to the inner ear. Because they don’t obstruct the ear canal, the headphones enable the system to convey information to the user without interrupting conversation or otherwise interfering with the user’s auditory experience.
The device is part of a complete “silent-computing system” that lets the user undetectably pose and receive answers to difficult computational problems. In one experiment, for instance, subjects used the system to silently report opponents’ moves in a chess game and just as silently receive computer-recommended responses.
“The motivation for this was to build an IA device — an intelligence-augmentation device,” said Arnav Kapur, a graduate student at the MIT Media Lab, who led the development of the new system. “Our idea was ‘Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition’?”
Subvocalization as a computer interface is largely unexplored, however, so the researchers first had to determine which locations on the face are the sources of the most reliable neuromuscular signals. They conducted experiments in which the same subjects were asked to subvocalize the same series of words four times, with an array of 16 electrodes at different facial locations each time.
Researchers then wrote code to analyze the resulting data. They found that signals from seven particular electrode locations were consistently able to distinguish subvocalized words. In their published study, the researchers report a prototype of AlterEgo, which is a wearable silent-speech interface that wraps around the back of the neck like a telephone headset and has tentacle-like curved appendages that touch the face at seven locations on either side of the mouth and along the jaws.
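The article reports only the outcome of that analysis — seven of the sixteen tested locations gave consistently distinguishable signals. As a hypothetical sketch of that kind of selection (not the researchers’ actual code), one could rank electrodes by how strongly each one’s reading varies between words relative to how much it varies between repetitions of the same word, using synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_electrodes, n_words, n_reps = 16, 5, 4

# Synthetic data: responses[e, w, r] = electrode e's reading for word w, rep r.
# Electrodes 0-6 are given a real word-dependent signal; the rest mostly noise.
word_effect = rng.normal(size=(n_electrodes, n_words))
word_effect[7:] *= 0.1  # weak signal on the uninformative electrodes
responses = word_effect[:, :, None] + 0.3 * rng.normal(size=(n_electrodes, n_words, n_reps))

# Score each electrode: word-to-word spread vs. repetition-to-repetition noise.
between = responses.mean(axis=2).var(axis=1)   # variance across words
within = responses.var(axis=2).mean(axis=1)    # variance across repetitions
scores = between / within

best = np.argsort(scores)[::-1][:7]  # keep the 7 most discriminative locations
print(sorted(best.tolist()))
```

The ratio is a crude stand-in for whatever consistency measure the study used, but it captures the experiment’s logic: a useful electrode is one whose signal changes with the word, not with the session.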
“We basically can’t live without our cellphones, our digital devices,” noted Pattie Maes, a professor of media arts and sciences and Kapur’s thesis advisor. “But at the moment, the use of those devices is very disruptive.
“If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself. So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present.”
Researchers describe their device in a paper they presented at the Association for Computing Machinery’s Intelligent User Interfaces conference. Kapur is first author on the paper, Maes is the senior author, and they’re joined by Shreyas Kapur, an undergraduate majoring in electrical engineering and computer science.
Article – All Rights Reserved.
Provided by FeedSyndicate