Fly by Voice
By Michael Peck | October 2016
Voice-controlled cockpits would have advantages
You can tell Siri to draft a text message or dial a phone number, so shouldn’t the pilot of a Cessna or a Boeing 777 be able to command his or her plane in a similar manner? That was the question a research team from the Rockwell Collins Advanced Technology Center in Cedar Rapids, Iowa, set out to address nearly a decade ago.
If the technology can be proven and brought to market, it could be useful for both commercial and general aviation. Rockwell Collins has flight-tested speech recognition to verify that it works with cockpit avionics. Next, it must show that the software recognizes the myriad tones, cadences and accents of human speech, and does so more accurately than Siri or similar software in a noisy cockpit and in emergencies. Saying, “the cow jumped over the moon” and having it appear on the screen as “the corn lumped the room” won’t fly for aviation applications.
Geoffrey Shapiro, a senior engineering manager at Rockwell, estimates that voice recognition can shave up to 75 percent off the time required to complete such cockpit tasks as changing altitude, speed and heading, as well as tuning a radio or displaying charts.
“Anything we can do to reduce the amount of time needed to complete a task will benefit flight crew,” Shapiro says.
Another benefit of a voice-controlled cockpit is better situational awareness. Speech recognition can function as a sort of verbal head-up display so that a pilot can keep his or her eyes on what’s going on outside the cockpit and focus on tasks such as avoiding another aircraft, rather than diverting his or her eyes to the instrument panel to activate the controls.
“With speech recognition, you don’t necessarily need to look down at the avionics to control them,” Shapiro says. “You can be looking out the windscreen for traffic, and push the button to recognize your voice and turn left heading 258. And you hear the voice in your ear repeating, ‘heading 258.’”
Shapiro describes the process as essentially working like this: The pilot says, “Turn left heading 340.” The speech is picked up by a microphone. Algorithms in the avionics computers compare the words against a preprogrammed list of commands, and choose the command that best matches the pilot’s words. Those are then converted into a machine language that the avionics can recognize. In turn, that command is sent to a central routing application that routes it to the correct avionics subsystem, in this case the primary flight display application. The aircraft then turns to the desired heading.
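The decoding step Shapiro describes can be sketched in a few lines. This is an illustrative mock-up, not Rockwell Collins code: the command grammar, the subsystem names and the `decode` function are all hypothetical, standing in for the preprogrammed command list and the central routing application.

```python
import re

# Hypothetical preprogrammed command grammar: each spoken-command pattern
# is mapped to the avionics subsystem that should receive it.
COMMANDS = {
    r"turn (left|right) heading (\d{3})": "primary_flight_display",
    r"set speed (\d+)": "autothrottle",
    r"display (.+) chart": "chart_display",
}

def decode(utterance):
    """Match recognized speech against the command list and return the
    best match as a machine-readable command routed to a subsystem."""
    utterance = utterance.lower().strip()
    for pattern, subsystem in COMMANDS.items():
        m = re.fullmatch(pattern, utterance)
        if m:
            # Routing step: hand the decoded arguments to the subsystem.
            return {"subsystem": subsystem, "args": m.groups()}
    return None  # no command matched; the pilot would be asked to repeat

print(decode("Turn left heading 340"))
# {'subsystem': 'primary_flight_display', 'args': ('left', '340')}
```

A production system would match against acoustic confidence scores rather than exact text, but the shape of the pipeline, recognize, match against a fixed list, route to one subsystem, is the same.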
As with their automobile-based counterparts, aviation voice recognition systems also keep the pilot’s hands on the controls instead of pushing buttons. Shapiro notes that this is particularly useful for helicopter pilots who need to fly with their hands on the stick.
As smartphone users can attest, voice recognition can also assist with navigation. Rather than drilling down through a series of touchscreen menus or leafing through papers to find a chart of a specific area, a pilot can call up the exact chart needed by issuing a specific command like, “Display Cedar Rapids Iowa ILS 9 chart.”
The biggest hurdle by far for cockpit voice recognition is noise.
“For an aircraft, the engines can be turboprops, which are loud,” Shapiro says. “Or if you are going fast, you can have a lot of windscreen noise.”
The issue isn’t so much decibel level as the sound frequency of the background noise. “It’s not just that one aircraft is louder,” Shapiro explains. “It’s the frequency that reaches the mic. The closer this is to the frequency that humans speak on, the more conflict there is.”
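Shapiro’s point can be made concrete with a back-of-the-envelope calculation. Voice-communication systems typically carry speech in roughly the 300 to 3,400 Hz band; the noise bands below are invented examples, not measurements of any real aircraft.

```python
# What matters is how much of the noise falls inside the band where
# human speech lives, not how loud the noise is overall.
SPEECH_BAND = (300.0, 3400.0)  # Hz, typical voice-communication band

def overlap_fraction(noise_band):
    """Fraction of a noise band that overlaps the speech band."""
    lo, hi = noise_band
    olo = max(lo, SPEECH_BAND[0])
    ohi = min(hi, SPEECH_BAND[1])
    return max(0.0, ohi - olo) / (hi - lo)

# A deep engine rumble sits entirely below the speech band:
print(overlap_fraction((20.0, 200.0)))    # 0.0 -- little conflict
# Broadband windscreen noise lands mostly on top of speech:
print(overlap_fraction((200.0, 6000.0)))  # roughly half -- heavy conflict
```

This is why a quiet aircraft with hiss concentrated in the speech band can be a harder recognition environment than a louder one whose noise sits elsewhere in the spectrum.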
So, Shapiro’s team is using individualized speech recognition algorithms tailored to the noise characteristics of specific aircraft. Users of dictation software such as Dragon NaturallySpeaking already do something similar when installing the product on their computers: The software asks them to read a few paragraphs into the microphone while it adjusts to the user’s voice and microphone quality. Also like Siri or dictation software, cockpit voice recognition must be linguistically flexible enough to recognize commands spoken in multiple languages. Not surprisingly, given that English is the lingua franca of aviation, Shapiro is focusing on English spoken in a variety of accents, though the technology can work with other languages.
Among the unique challenges for aviation is that a pilot would need the ability to communicate with his or her plane in an emergency, such as a depressurization. Testing in cockpit simulators revealed other issues: Shapiro had to adjust the software to recognize the tonal qualities of a voice muffled by an emergency oxygen mask, and because human voices change under stress, the software must understand commands uttered under hectic circumstances. With speech recognition, pilots can focus on responding to an emergency, say by looking out the window to see what’s going on, rather than having to look down at their instrument panels to change course or speed. Shapiro says that even if a pilot were to become incapacitated, the system would respond to crew members speaking the proper commands.
Software controlling an aircraft would need to be much more reliable than software controlling an iPhone.
“If Siri gets it wrong, you can take a moment to fix it,” Shapiro says. “In aviation, no. It has to work in all noise conditions, with all accents.”
So, Shapiro needed to add a confirmation function for safety-critical actions such as changes in speed or course.
“We found early that this is an essential element,” he says. “Speech recognition is very good, but not 100 percent.”
How pilots want to receive that confirmation varies with the type of aircraft. Shapiro says air transport pilots have told him they are accustomed to flying with a sterile cockpit with no small talk. Rather than the speech recognition software repeating voice commands, they would rather the confirmation signal be an aural tone. On the other hand, helicopter pilots prefer voice confirmation.
“They said, ‘I am a single pilot. I have my head out looking around. I don’t want to look at my displays. I just want to hear the command and execute,’” Shapiro says.
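The confirmation logic described above, act only on high-confidence recognition, gate safety-critical commands behind a readback or a tone depending on the cockpit, can be sketched as follows. Everything here is hypothetical: the confidence threshold, the command categories and the aircraft types are illustrative, not Rockwell Collins parameters.

```python
# Commands that change the aircraft's state are safety-critical and
# always get an explicit confirmation before executing.
SAFETY_CRITICAL = {"heading", "speed", "altitude"}

def confirm(command, aircraft_type, confidence):
    """Return the feedback the pilot receives before a command executes."""
    if confidence < 0.90:
        return "say again"          # recognizer unsure: never act on a guess
    if command.split()[0] not in SAFETY_CRITICAL:
        return "execute"            # e.g. displaying a chart: just do it
    if aircraft_type == "air_transport":
        return "tone"               # sterile cockpit: a simple aural tone
    return "readback: " + command   # helicopter: voice readback in the ear

print(confirm("heading 258", "helicopter", 0.97))
# readback: heading 258
print(confirm("heading 258", "air_transport", 0.97))
# tone
```

The design choice mirrors the interviews: the safety gate is the same for everyone, but the confirmation modality is a per-cockpit preference rather than a fixed rule.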
When it comes to writing the code, it’s critical to create software flexible enough to interpret language and commands based on context, says Timothy Wittkop, a Rockwell senior software engineer who works with Shapiro on voice recognition. Coders also need to talk with a variety of avionics experts who can guide them in ensuring that the various avionics subsystems respond to voice commands.
“It is not possible for one person to know the intricate details of every subsystem in the avionics,” Wittkop says.
Shapiro also cautions that voice recognition isn’t practical for all cockpit tasks, such as centering on the correct region of a map.
“For panning a map, touchscreens just work better,” he explains. “I can just flick my finger. With speech recognition, it’s ‘pan left, pan left, more pan left.’”
“We don’t think speech recognition will be the only tool pilots will use,” Shapiro adds.
As exciting as the technology is, at this point Rockwell Collins doesn’t know when the voice-command technology might be brought to market. What will likely be the most persistent hurdle is old-fashioned resistance to change.
“Some folks aren’t comfortable with speech recognition,” Shapiro says. “It’s more of a personal preference than a logical limitation.” ★