NASA's silent speech system: how to talk without saying a word

The "silent speech" research was launched to deal with the high-noise environments encountered by astronauts. Of course, space itself is silent. But according to NASA scientist Chuck Jorgensen, there are many factors, from machinery to pressure changes, that can make communication difficult for astronauts onboard the shuttle or sporting a space suit. Related, he adds, is the drive to develop alternative human-machine interfaces such as speech recognition.

Speech recognition is only practical if the computer can hear what its operator is saying. Earlier, Jorgensen and his colleagues had demonstrated sensors that, when applied to the hand, detect subtle muscle signals known as electromyogram (EMG) signals. That EMG data could then be amplified and used to control a computer or a robot arm. The question that arose was whether a similar approach could solve the noise problem surrounding vocal communication and speech recognition.
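As a rough illustration of that earlier control work (a toy Python sketch with invented sample values and thresholds, not NASA's actual processing), a window of amplified EMG samples can be reduced to a coarse control decision by thresholding its energy:

```python
# Illustrative sketch only: NASA's real EMG pipeline is not public.
# A window of (hypothetical) amplified EMG samples is reduced to a
# coarse "move"/"rest" control decision via an RMS energy threshold.
import math

def emg_rms(window):
    """Root-mean-square amplitude of one window of EMG samples."""
    return math.sqrt(sum(s * s for s in window) / len(window))

def to_command(window, threshold=0.5):
    """Strong muscle activity clears the threshold -> 'move'."""
    return "move" if emg_rms(window) > threshold else "rest"

# A quiet window versus a window with visible muscle activity.
print(to_command([0.01, -0.02, 0.03, -0.01]))  # rest
print(to_command([0.9, -1.1, 1.3, -0.8]))      # move
```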

"We realized that it might be possible to intercept the signals that the brain sends to the vocal system, extract them and understand them before the person produces a sound," Jorgensen explains.

Last year, Jorgensen and collaborator Brad Betts showed off their silent speech prototype system for the first time. The user wears button-sized sensors under the chin and near the Adam's apple. Those sensors measure the nerve signals that control the vocal cords, the muscles, and the position of the tongue.

"It's very much like whispering or reading to yourself," Jorgensen says. "You don't have to move your mouth though. In one demonstration, I'm communicating while holding my lips closed."

The electrical nerve signals travel from the sensors to an amplifier and a digital signal processor that filters out the noise. Finally, software scours the data to identify the signature patterns corresponding to words the system was trained to recognize. When the silent speech technology was unveiled, the software could identify six words, including "stop," "go," "alpha," and "omega," plus the ten digits. Since then, the researchers have increased its vocabulary to around twenty words and begun teaching the system to detect the telltale EMG signals of vowels and consonants, the first step toward building a full-blown speech recognition system.
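The recognition step can be pictured with a minimal sketch, assuming toy features and hypothetical word templates since the actual features and trained models are not public: each known word is stored as a template vector, and an incoming utterance is matched to the nearest one.

```python
# Minimal nearest-template classifier sketch; the feature set and the
# word templates below are invented stand-ins, not NASA's trained models.
import math

def features(samples):
    """Toy feature vector: mean absolute amplitude and zero-crossing rate."""
    mav = sum(abs(s) for s in samples) / len(samples)
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / len(samples)
    return (mav, zcr)

def classify(samples, templates):
    """Match the utterance's features to the nearest word template."""
    f = features(samples)
    return min(templates, key=lambda word: math.dist(f, templates[word]))

# Hypothetical templates for two of the words the prototype recognized.
templates = {"stop": (0.8, 0.3), "go": (0.3, 0.1)}
print(classify([0.7, -0.9, 0.8, -0.6, 0.9], templates))  # -> "stop"
```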

As the researchers improve the sensing and processing side of the system, Betts is also developing mobile software to translate the EMG information back into a language humans can understand. He has written software for Windows Mobile-based smartphones that enables you to hear what someone is saying subvocally. Processed data from the subvocal speech system are delivered to the smartphone via GPRS, and Betts's software then triggers audio samples -- clips of his wife's voice, in fact -- stored on the phone that correspond to the words transmitted from the subvocal system. The words are also displayed as text on the phone's screen for situations where it may be too loud to hear the audio.
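In outline, the phone-side logic is a lookup from recognized word labels to stored clips; the sketch below illustrates that idea in Python with hypothetical file names and stand-in playback and display callbacks, not Betts's actual Windows Mobile code.

```python
# Sketch of the phone-side playback idea. Clip paths and callbacks are
# illustrative assumptions; the real app ran on Windows Mobile.
AUDIO_CLIPS = {"stop": "clips/stop.wav", "go": "clips/go.wav"}

def handle_word(word, play, display):
    """Route a word label received over GPRS to audio and on-screen text."""
    clip = AUDIO_CLIPS.get(word)
    if clip is not None:
        play(clip)       # hand the clip off to the phone's audio player
    display(word)        # text fallback when it's too loud to hear audio

# Stand-in callbacks so the sketch runs anywhere.
handle_word("stop", play=lambda c: print("playing", c),
            display=lambda w: print("text:", w))
```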

While the research began with space applications in mind, Jorgensen and Betts are also exploring a variety of terrestrial applications. For example, a firefighter might use the technology to stay in contact with central command while navigating the chaos of a burning building. To simulate such a scenario in the lab, Betts donned a firefighter's breathing apparatus, switched on sirens, and cranked up high-power chainsaws and water pumps.

"Even surrounded by that kind of cacophony, he was able to communicate," Jorgensen says.

Of course, even when noise is not an issue, subvocal speech is still inaudible. And that characteristic perks up the ears of the intelligence community, law enforcement, and the military. Indeed, Jorgensen and Betts are working with the United States' Defense Advanced Research Projects Agency (DARPA), which also supports other research efforts in this area through its Advanced Speech Encoding Program.

Eventually, subvocal technology will undoubtedly trickle down into the commercial space. NTT DoCoMo researchers are already experimenting with a system that recognizes vowel sounds by detecting muscle activity around the mouth. The NASA researchers believe that subvocal communication systems could dramatically improve adaptive technologies for disabled individuals. Indeed, they've tested their EMG sensors on an individual who had undergone a laryngectomy. The residual muscles still provided strong signals, Jorgensen says.

And once the technology enters the consumer marketplace?

"You might see someone on a silent cell phone," Jorgensen says. "It's a nice motivating thought to imagine that we won't hear people talk on their phones in a restaurant."

Source: NASA (www.nasa.gov)
 