Poor voice communication has been cited as a significant contributing factor in numerous air accident investigations, the deadliest case being the 1977 runway collision of two 747s on Tenerife that killed 583 people.
Even though the United Nations’ International Civil Aviation Organization mandates English proficiency for international aircrews and controllers, social media is replete with recordings of controllers and aircrews struggling to understand one another when strong regional accents or local slang are involved.
Bringing clearer voice communication to aviation is a top concern for Tor Finseth, a human factors scientist at Honeywell in Plymouth, Minnesota. Finseth leads a team whose members include researchers from Texas A&M University, the University of Texas at Dallas and the University of Rochester. Their goal is to convert verbal exchanges into highly understandable computer-generated voices by combining artificial intelligence with today’s natural language processing software.
Natural language processing is familiar to anyone who has encountered automated telephone call directories or dictated a message or music request to their car, smartphone or home speaker.
The Honeywell researchers are developing a digital test bed called RASP, short for Real-time Anonymization & Speech Protection. It will be the foundation for software and hardware that would instantly convert accents, local slang and other voice characteristics into standard, clearly enunciated speech in an anonymized voice that doesn’t sound like any particular person. RASP is currently focused on English, though Finseth anticipates branching out into other languages.
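Honeywell hasn’t published RASP’s internals, but the pattern described here, recognize the words, discard the original voice, re-synthesize in a neutral one, can be sketched with off-the-shelf tools. The sketch below is illustrative only, assuming openai-whisper for recognition and pyttsx3 for synthesis; RASP’s actual models, and the harder step of rewriting local slang into standard phraseology, are not public.

```python
# Illustrative sketch of an accent-normalizing, voice-anonymizing pipeline.
# Library choices (openai-whisper, pyttsx3) are assumptions, not RASP's.

import whisper   # pip install openai-whisper
import pyttsx3   # pip install pyttsx3

def normalize_utterance(audio_path: str) -> None:
    # Stage 1: transcribe the accented audio to plain text. The speaker's
    # voice characteristics are discarded here, which is what anonymizes it.
    model = whisper.load_model("base")        # runs locally once downloaded
    text = model.transcribe(audio_path, language="en")["text"].strip()

    # (A production system would also need a text-rewriting stage here to
    # map local slang onto standard phraseology; omitted in this sketch.)

    # Stage 2: re-synthesize the words in a standard, clearly enunciated
    # voice that belongs to no particular person.
    engine = pyttsx3.init()
    engine.setProperty("rate", 150)           # slow, deliberate delivery
    engine.say(text)
    engine.runAndWait()

if __name__ == "__main__":
    normalize_utterance("tower_transmission.wav")   # hypothetical file
```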
Honeywell, as an avionics manufacturer, is interested in RASP for its aviation applications, but the project potentially benefits other sectors.
“Most research in this area is primarily looking at call centers,” explains Finseth. “We’re looking at human-to-human interaction in other high-noise environments like paramedics or manufacturing where the breakdown of communication is really going to cause some issues.”
There might also be government intelligence applications, given that Honeywell is developing the technology under a “multi-million dollar contract” awarded last September by the Intelligence Advanced Research Projects Activity, or IARPA. This U.S. government agency sponsors “moonshot” technology development for the U.S. Intelligence Community, particularly in advanced computing. The contract spans 18 months.
While AI today is most often reserved for long-term analysis tasks with plenty of access to data, the RASP concept in part aims to take AI out to what computer scientists characterize as “the edge.” IARPA asked the RASP team to develop the capability to process speech without connectivity to the cloud or a central server.
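As a concrete, hedged illustration of that edge constraint: once a small recognition model has been cached on the device, each block of radio audio can be captured and transcribed entirely locally, with no network round trip. The libraries and parameters below (sounddevice, openai-whisper, a 5-second block size) are assumptions for illustration, not details of the IARPA program.

```python
# Sketch of on-device ("edge") speech processing with no cloud dependency.
# Assumes the whisper model file has already been downloaded and cached;
# nothing below touches the network.

import time
import sounddevice as sd   # pip install sounddevice
import whisper             # pip install openai-whisper

SAMPLE_RATE = 16_000       # whisper expects 16 kHz mono float32 audio
BLOCK_SECONDS = 5          # process radio traffic in short blocks

model = whisper.load_model("tiny")   # small enough for modest edge hardware

def capture_and_transcribe() -> str:
    # Record one block from the local audio input (e.g., a radio feed).
    audio = sd.rec(int(BLOCK_SECONDS * SAMPLE_RATE),
                   samplerate=SAMPLE_RATE, channels=1, dtype="float32")
    sd.wait()                                    # block until capture ends
    start = time.perf_counter()
    result = model.transcribe(audio.flatten(), language="en")
    print(f"on-device latency: {time.perf_counter() - start:.2f} s")
    return result["text"]

if __name__ == "__main__":
    print(capture_and_transcribe())
```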
Finseth anticipates that advances in computer hardware and data communications will make RASP effective and affordable for existing airframes, control centers and communications architectures.
He also expects RASP and similar natural language processing applications to catch on with private aviators. “They could just put it on their iPad and just use it as they want for general aviation,” says Finseth.
Which is not to say that the regulatory road will be an easy one for RASP and natural language processing. The developers expect that the FAA and other civil aviation authorities will demand thorough testing of RASP, with demonstrably consistent results across a dizzying spectrum of use cases.
Other industries’ operational experience with natural language processing, and their testing of it, will likely be key to industry and regulatory acceptance, Finseth says.