A.I. in the cockpit


Modern airliners do a good job of flying automatically until something unexpected happens. At that point, a pilot takes control and typically resolves the problem with no drama or fanfare. Very rarely, though, a pilot must save the day or die trying. For passenger planes to fly autonomously, software would have to be capable of handling these edge cases.

Two years ago, on a test range at Fort A.P. Hill in Virginia, NASA researchers commanded a model airplane into an unstable flight mode, as though it were encountering turbulence. After less than two seconds of porpoising up and down, the plane leveled off without any human intervention.

Autopilots on airliners fly through turbulence every day. What was different about this software was that researchers did not preprogram it with the aerodynamic model of the plane that would normally define how the autopilot should adjust thrust or the positions of the plane’s flight control surfaces. Instead, researchers designed the software to rapidly figure out how to make the aircraft execute the appropriate pitch, roll and yaw maneuvers.

The software outperformed a human pilot who moments earlier tried but failed to level the plane off by remote control, the test organizers said.

This software, developed under a NASA aeronautics initiative called Learn-to-Fly, is just one example of the kind of research underway in the U.S. toward the vision of fully autonomous aircraft, someday potentially including passenger jets.

Airliners already handle routine flight automatically; it is when something unexpected happens that trouble can follow, as was vividly illustrated by the 2009 Air France crash off Brazil and November’s Lion Air crash off Jakarta.

What’s needed before the flying public will entrust their lives to completely automated aircraft is artificial intelligence software that might draw on Learn-to-Fly code during flight or during software development. This AI software would have to cope with emergencies that by definition play out in three dimensions, involve numerous flight control surfaces, span a range of ambient conditions and depend on data arriving from multiple sensors.

Computer scientists point to in-flight emergencies as examples of edge cases, rare scenarios that can be too complex and uncertain to be resolved by today’s combination of automation and human pilots.

Validating performance in these edge cases remains arguably the largest stumbling block toward the goal of assigning complete control of a passenger plane to AI. The software would need to make the right decision in a situation that might never have arisen before, and AI designers and flight regulators would need to be assured that it would make the right decision.

The Air France crash was an edge case in which ice crystals likely accumulated in the pitot tubes on the fuselage of the Airbus A330, creating inconsistent airspeed readings and prompting the autopilot to disengage. Sadly, the crew flew the jet into the surface of the ocean without ever seeming to understand that the plane was in a fatal aerodynamic stall, according to French investigators. All 228 aboard were killed. In the Lion Air crash, software called the Maneuvering Characteristics Augmentation System steered the nose downward some 20 times, a reaction to incorrect angle-of-attack readings that suggested the plane was at risk of stalling, investigators from the Indonesian National Transportation Safety Committee said in preliminary findings. The crew fought the MCAS auto trim software and did not manage to turn it off. All 189 aboard were killed.

“One of the things that automation has a hard time dealing with at this point is uncertain or ill-defined problems,” says MIT’s John Hansman, chair of the FAA Research Engineering and Development Advisory Committee, referring to today’s early attempts at AI.

Planning for unforeseen circumstances

For starters, AI software would need to recognize when sensor readings are incorrect, just as the Lion Air pilots must have realized, judging by their fight against the MCAS software. The task would be to keep the aircraft under control despite those incorrect readings, something the crew in the Air France crash was unable to do.

One of those conducting research toward AI for aircraft is Mykel Kochenderfer, an assistant professor of aeronautics and astronautics at Stanford University and co-director of the university’s Center for AI Safety. He has confidence in his team’s AI software, which is not to say that all challenges have been solved.

“When you have these autonomous systems in the real world, you’re basically committing to the program that you write before you experience all the variability in the real world,” he says. “If that system is flying your aircraft and there isn’t a human operator to take over if it goes wrong, then you have to be really sure that what you programmed is what you want.”

Kochenderfer won’t speak about the Lion Air and Air France crashes specifically. But he says the AI software that he and his colleagues are designing would make the correct decision even when a sensor fails.

“One of the basic tenets of our research is that the world is inherently uncertain. We’re uncertain about how the world will evolve, and we don’t place absolute trust in any of our sensors,” he says. “What you don’t want to have is the system to fail in a very unusual way and say, ‘I give up, I’ll just transfer control back over to the human.’ And then a human won’t know how to recover,” he adds.

To avoid that scenario, his team is applying an approach called dynamic programming, which is different from the Learn-to-Fly approach of modeling on the fly. In dynamic programming, the bulk of the computing work is done ahead of time. For each scenario, a decision strategy is defined; an automated car, for example, might be required to slow to 25 kph when approaching a crosswalk. For passenger planes, the decision strategies would likewise be worked out ahead of time so they could be validated. For each decision strategy, the AI software extracts an optimal decision from a mathematical model of every possible scenario, no matter how unlikely, and every possible outcome of those scenarios. Representing billions of outcomes or choices exactly would be impossible, so programmers encode approximations in a process called discretizing, which generally limits the potential outcomes to hundreds of millions.
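
The article does not include code, and the sketch below is not the Stanford team’s software; it is only a minimal Python illustration of the general idea of precomputing a decision strategy by dynamic programming over a discretized state space. Every state, action, probability and reward here is an invented placeholder.

```python
# Minimal sketch of precomputing a decision strategy by dynamic programming.
# NOT flight software: a toy problem in which trust in the airspeed sensor is
# discretized into three states and a lookup-table policy is computed offline
# by value iteration. All numbers are illustrative assumptions.

import numpy as np

states = ["sensor_ok", "sensor_suspect", "sensor_failed"]
actions = ["use_airspeed", "cross_check", "fly_pitch_and_power"]

# P[a][s, s']: assumed probabilities of moving from state s to s' under action a
P = {
    "use_airspeed":        np.array([[0.95, 0.04, 0.01],
                                     [0.10, 0.70, 0.20],
                                     [0.00, 0.00, 1.00]]),
    "cross_check":         np.array([[0.90, 0.09, 0.01],
                                     [0.30, 0.60, 0.10],
                                     [0.00, 0.10, 0.90]]),
    "fly_pitch_and_power": np.array([[0.95, 0.04, 0.01],
                                     [0.20, 0.70, 0.10],
                                     [0.00, 0.05, 0.95]]),
}

# R[s, a]: assumed rewards -- trusting a failed sensor is penalized heavily
R = np.array([[ 1.0, 0.5, 0.3],    # sensor_ok
              [-1.0, 0.5, 0.4],    # sensor_suspect
              [-10.0, 0.0, 0.5]])  # sensor_failed

gamma, V = 0.95, np.zeros(len(states))
for _ in range(500):  # value iteration, run offline on ground-based computers
    Q = np.array([[R[s, a] + gamma * P[actions[a]][s] @ V
                   for a in range(len(actions))] for s in range(len(states))])
    V = Q.max(axis=1)

# The precomputed strategy the onboard software would simply look up in flight
policy = {states[s]: actions[int(Q[s].argmax())] for s in range(len(states))}
print(policy)
```

In flight, software built this way would only look up the precomputed answer for whichever discretized state it believes it is in, which is part of what makes the strategy cheap to run and, in principle, possible to validate beforehand.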

In flight, the software would infer that a sensor failure has occurred, such as a blocked airspeed indicator or an inaccurate angle-of-attack sensor, and then behave appropriately.

Sensors and the avionics equipment that provide information to pilots or computers will never be perfect, which means there will always be a bit of uncertainty in the data.

“The AI will need to reason about these failure modes and make inferences about the reliability of the different sensor systems,” Kochenderfer says.

Kochenderfer and his Stanford colleagues model various situations over time, breaking these scenarios down into probabilities to judge what’s likely to happen next, so that the software can decide the best action.

For example, if an airspeed indicator shows a speed of 200 knots at one point in time and then 0 knots a second later, the dynamic model would recognize a low probability that the reading is accurate and base its decisions on that probability.
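
As a purely illustrative sketch of that kind of probabilistic reasoning, and not the actual flight software, the snippet below applies one step of Bayes’ rule to an airspeed sensor whose reading jumps from 200 knots to 0 knots in a second. The likelihood numbers are invented.

```python
# Illustrative only: a one-line Bayesian update of the probability that the
# airspeed sensor is healthy, given how plausible each reading is under a
# "healthy" model versus a "failed" model. Numbers are assumptions.

def update_health(p_healthy, likelihood_if_healthy, likelihood_if_failed):
    """One Bayes step: P(healthy | reading) from P(healthy) and the likelihoods."""
    num = likelihood_if_healthy * p_healthy
    den = num + likelihood_if_failed * (1.0 - p_healthy)
    return num / den

p = 0.99  # start out trusting the sensor

# A 200-knot reading in cruise is plausible whether or not the sensor is healthy.
p = update_health(p, likelihood_if_healthy=0.9, likelihood_if_failed=0.3)
print(f"after 200-knot reading: P(healthy) = {p:.3f}")

# One second later the sensor reads 0 knots: nearly impossible if the sensor is
# healthy, quite likely if, say, a pitot tube is blocked.
p = update_health(p, likelihood_if_healthy=0.001, likelihood_if_failed=0.8)
print(f"after 0-knot reading:   P(healthy) = {p:.3f}")
# Downstream decisions would then weight the airspeed data by this probability.
```

A real system would fuse many sensors and reason over sequences rather than single readings, but the principle of weighting each data source by an explicitly maintained probability is the same.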

The advantage of defining all the possible outcomes ahead of time is that the programmers can put powerful computer clusters to work on the calculations, lessening the computational burden on AI executing the strategy in the moment. Also, the AI is easier to validate if the decision-making strategy — what to do in a given situation — is pinned down prior to execution, Kochenderfer says.

A possible advantage of this approach would be the ability to show regulators how the software reacts in specific situations. No one as yet wants to turn aircraft or automobiles over to neural nets in which the reasoning logic is impossible to validate.

To pass muster, AI would need to prove itself virtually foolproof. A system that fails in some very unusual way, gives up and hands control back to a human, as Kochenderfer warns, risks leaving that human with no idea how to recover.

Although Kochenderfer was not speaking specifically about the Air France case, the crew in that incident made “inappropriate pilot inputs” after being surprised when the autopilot disengaged, investigators concluded.

Kochenderfer thinks that when AI is deployed aboard aircraft, it will do better in edge cases than humans. People “like to think roughly deterministically: If we do this, then this thing will happen,” he says, “but computers can entertain the wide spectrum of different things happening, along with their likelihood.”

“I think the strength of AI is in its ability to reason about low-probability events,” he adds. “However, you still need to validate that that reasoning is correct,” he says.

AI designers are going to have to make sure that the mathematical universe they define is large enough to include every possible scenario, even those that have never occurred and likely never will. AI doubters say the software will miss some scenarios and that humans should be available to step in.

Adapting on the fly

The Learn-to-Fly researchers view their algorithms as a tool that AI software could employ in novel flight situations, such as if a plane were to lose a flight control surface in flight.

Eugene Heim, one of those leading NASA’s Learn-to-Fly projects, says he doesn’t consider his team’s algorithms as AI, in part because they lack a high-level executive or mission-manager function. The algorithms could be building blocks, however, handed off to AI researchers. An AI-controlled flight system could apply the algorithms to control the flight surfaces of an airplane without knowing anything beforehand about the plane’s aerodynamics — a valuable capability when a plane suffers extreme damage, for example.

A learn-to-fly algorithm works like a baby bird leaving its nest, learning to control its wings and body in flight for the first time, Heim says. “Eventually they’ve got to make that jump, and then they learn how to control themselves; not just hit the ground, but fly around and navigate their environment.”

The algorithms merge real-time aerodynamic modeling with adaptive controls and real-time guidance, refining the aerodynamic model as they go to steadily improve control of the vehicle. They learn by beginning with a guess, which is often wrong, about how to control the plane, but they don’t need to know anything about the airplane’s design to start with. As they observe the vehicle’s aerodynamics in flight, they can determine what effect its controls have on the aircraft’s six degrees of freedom: pitch, roll and yaw, plus motion fore and aft, up and down, and side to side.

“All of this happens at the same time, so it’s not like your normal flight test where you do one factor at a time” and then see what happens, Heim says. “This is happening on all surfaces, all axes, all at the same time. That’s really part of the beauty and the uniqueness of this approach.”
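
The snippet below is not NASA’s Learn-to-Fly code; it is a generic recursive least-squares sketch of what “modeling on the fly” can look like, estimating a toy linear pitch model from simulated flight data while the inputs vary continuously. The model structure and all numbers are assumptions for illustration.

```python
# Generic recursive least-squares (RLS) sketch of on-the-fly model identification.
# Assumed toy model:  q_dot ~ theta_0 + theta_alpha * alpha + theta_de * delta_e
# The estimator re-fits the three coefficients at every time step from flight
# data, with every input changing at once. All values are invented.

import numpy as np

rng = np.random.default_rng(0)
true_theta = np.array([0.1, -2.0, -8.0])  # the "real" aircraft, unknown to the estimator

theta = np.zeros(3)        # estimator starts knowing nothing about the plane
P = np.eye(3) * 1e3        # large initial uncertainty in the estimate
lam = 0.99                 # forgetting factor: lets the model adapt if the plane changes

for t in range(200):
    alpha = 0.1 * np.sin(0.05 * t)            # angle of attack (rad), varying in flight
    delta_e = 0.05 * np.sin(0.13 * t + 1.0)   # elevator deflection (rad), varying too
    x = np.array([1.0, alpha, delta_e])       # regressor vector
    q_dot = true_theta @ x + rng.normal(scale=0.02)  # "measured" pitch acceleration

    # Standard recursive least-squares update
    k = P @ x / (lam + x @ P @ x)
    theta = theta + k * (q_dot - theta @ x)
    P = (P - np.outer(k, x @ P)) / lam

print("estimated coefficients:", np.round(theta, 2))
print("true coefficients:     ", true_theta)
```

The point of the sketch is only the structure: the estimator starts from an uninformed guess and converges toward the aircraft’s actual control effectiveness as flight data arrives, with every input varying at once rather than one factor at a time.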

Modeling the aerodynamics of an airplane, whether for AI, an autopilot function or a human pilot, today requires putting the relevant control surfaces and propulsion components in a wind tunnel, or through a computational fluid dynamics model, and recording what happens under various conditions. An aerodynamic model is then developed to capture the effect on factors such as sideslip and angle of attack. For some new designs, this comprehensive modeling could take years.

Heim was curious about how many years this process would take for a particularly complex aircraft, so he looked at the GL-10, a hybrid diesel-electric tilt-wing aircraft. It has two ailerons, four flaps, two elevators and a rudder, plus eight motors on the wing and two on the tail. His findings were similar to those of other researchers who examined the GL-10. He calculated that assessing it the conventional way would take 45 billion years.

Once installed on an aircraft, the algorithms would do more than save the day in an emergency. The algorithms could run in the background, establishing aerodynamic models over long periods, through the full flight envelope of the aircraft. In the nearer term, these could help tune autopilots, provide a health monitoring function by detecting aerodynamic changes caused by icing, for example, or help update and tweak the control laws for the plane for optimal performance.

Building public acceptance

The learn-to-fly algorithms could also provide a stepping stone toward public acceptance of autonomous flight for large passenger planes by modeling the aerodynamics of autonomous single-passenger aircraft, such as electric vertical takeoff and landing vehicles, or eVTOLs. Heim says the learn-to-fly algorithms would help identify aerodynamic models quickly for new urban-air-mobility aircraft because the designs often have redundant control surfaces or propulsion vectors. That complexity makes it extremely difficult to determine their aerodynamic models, and how their control surfaces and propulsion interact, in wind-tunnel testing.

“This is where we can use learn-to-fly techniques where we can change everything at the same time, or vary everything at the same time, and then produce what the aerodynamic model is,” Heim says.

Ultimately, the key hurdles for AI flight systems will be certification and approval, not the technology itself, Hansman says. “How do we assure that it’s good enough that we can either put passengers on it or have a big airplane flying around that’s considered safe?”

“We can automate it tomorrow,” he says. “In an airplane like an A320 or a 787, the pilot taxis it out, gets to the end of the runway, and as long as you want the airplane to fly the trajectory you’ve predefined, you can press a button and the pilot won’t touch the control until the airplane rolls out on landing and they put the brakes on and taxi it in.” The question is what happens in a crisis.

About a month after this photo was taken, this Lion Air Boeing 737 MAX 8 crashed shortly after takeoff from Jakarta. In the minutes before the October crash, the plane's auto trim software steered the nose downward in a reaction to incorrect angle-of-attack readings, investigators believe. Credit: PK/REN/Ikko Haidar Farozy
