The elusive fully autonomous airliner


Contrary to what some passengers believe, modern airliners are neither permitted nor equipped to travel from gate to gate entirely on their own. Their automation software performs well during cruise and landing, but it can't taxi the aircraft, and it can't learn or reason through a crisis the way a human pilot can. What would it take to get there? Jon Kelvey tells the story.

It was a bright July morning in 2013, and Asiana Airlines Flight 214 with 307 souls aboard was cleared to descend to San Francisco International Airport. The pilot flying the plane and the captain next to him had flown into SFO dozens of times, but this day was different: The Instrument Landing System at runway 28L was not functioning.

Normally, the duo would tune their Boeing 777’s ILS receiver to radio frequencies emitted by an antenna at the far end of the runway and another near its touchdown zone. These signals would be converted into graphical markers on the aircraft’s primary flight display — one marker to indicate the center of the runway, another to note the proper glidepath down to the tarmac. Depending on visibility, the crew could either hand fly the aircraft to the ground by keeping it on the centerline and glidepath or allow the aircraft’s autoland software to do the work.
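
For a sense of the geometry behind those two markers, here is a minimal Python sketch of the deviation math such a display implies. The 3-degree glidepath and the position inputs are illustrative assumptions; a real ILS receiver derives deviations from the radio beams themselves, not from a known aircraft position.

```python
# Minimal, illustrative sketch of ILS-style deviation geometry.
# Assumptions: a 3-degree glidepath and known aircraft position;
# real localizer/glideslope receivers measure beam signals instead.
import math

GLIDEPATH_DEG = 3.0  # typical published glidepath angle

def ils_deviations(along_track_m, cross_track_m, height_m):
    """Return (lateral_deg, vertical_deg) deviations from the runway
    centerline and the nominal glidepath, in degrees."""
    # Lateral: angle between the centerline and the line to the aircraft.
    lateral_deg = math.degrees(math.atan2(cross_track_m, along_track_m))
    # Vertical: actual descent-path angle minus the nominal glidepath.
    actual_deg = math.degrees(math.atan2(height_m, along_track_m))
    return lateral_deg, actual_deg - GLIDEPATH_DEG

# 5 km from touchdown, 40 m right of centerline, 300 m above the runway:
lat, vert = ils_deviations(5000, 40, 300)
print(f"localizer: {lat:+.2f} deg, glideslope: {vert:+.2f} deg")
# vert is positive here, i.e. above the glidepath, as Flight 214 was
# when its crew began correcting.
```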

With the ILS offline, air traffic control instead cleared Flight 214 for a visual approach, meaning it would be up to the pilot and captain to keep the plane on the proper glidepath by looking out at a row of red and white color-coded lights beside the runway, the Precision Approach Path Indicator.

On this day, things were about to go tragically wrong. Noticing the aircraft was coming in above the glidepath, the pilot selected the “flight level change speed” mode of the 777’s autopilot software. As it turned out, the pilot had mistakenly selected an altitude higher than the aircraft’s current position, so the autopilot initiated a climb. This prompted the pilot to switch off the autopilot and set the throttles — one for each engine — to idle, likely thinking that the plane’s autothrottle software would maintain the appropriate airspeed. Unbeknownst to the pilot and captain, the autothrottle was no longer controlling the airspeed. About a half kilometer from the runway, the airspeed dropped too low and the plane entered an aerodynamic stall. The tail struck a seawall and separated from the aircraft, and the main wreckage slid to rest near the runway.
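
The NTSB’s account hinges on a mode interaction: in the 777’s flight level change mode, pulling the thrust levers to idle latches the autothrottle into a HOLD state in which it stops commanding thrust to protect airspeed. The following Python sketch reduces that interaction to a toy state machine; the mode names echo Boeing annunciations, but the logic is an invented simplification, not the actual avionics.

```python
# Toy state machine reducing the NTSB-described mode confusion.
# Mode names echo Boeing annunciations (SPD, THR, HOLD), but this
# logic is an invented simplification of the real 777 autothrottle.

class Autothrottle:
    def __init__(self):
        self.mode = "SPD"  # actively commanding thrust to hold airspeed

    def select_flch(self):
        self.mode = "THR"  # flight level change: thrust set for climb/descent

    def levers_to_idle(self):
        if self.mode == "THR":
            self.mode = "HOLD"  # latched: levers stay where the pilot put them

    @property
    def protecting_airspeed(self):
        return self.mode == "SPD"

at = Autothrottle()
at.select_flch()     # pilot selects flight level change
at.levers_to_idle()  # pilot pulls throttles to idle
print(at.mode, at.protecting_airspeed)  # "HOLD False": airspeed can decay silently
```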

Three passengers were killed and 182 injured, while five crew members were also hurt. The pilot and captain were uninjured.

“In this instance, the flight crew over-relied on automated systems they did not fully understand,” Christopher Hart, the acting chairman of the National Transportation Safety Board, told reporters in 2014.

Today, as then, pilots of passenger jets operate in a hybrid world of hand flying and automation. While deadly commercial crashes are rare (Asiana 214 was the last one in the United States), a small cadre of computer scientists in the U.S. and abroad are trying to break this stasis by developing software so sophisticated and dynamic that passenger jets might only need a single pilot aboard, and perhaps someday, no pilot aboard.

Today’s automated functions are simultaneously powerful and limited. Under some circumstances, autopilot and autoland software can fly the aircraft better than a human can.

“In poor weather conditions where if the visibility is below a certain threshold and the cloud ceiling is less than 200 feet, the pilots are required to use autoland,” says Mike Stengel, a principal at Aerodynamic Advisory, an aerospace consulting firm based in Ann Arbor, Michigan. Had the ILS on runway 28L been functioning and a heavy San Francisco fog rolled in, the Boeing 777’s autoland would likely have brought Flight 214 safely to Earth without a hitch.

On the other hand, today’s automation isn’t smart enough, aware enough or reliable enough to respond to every kind of problem that can arise during a flight, or to anticipate pilot errors far in advance.

Phasing in more automation slowly, with the eventual goal of software smart enough to fly entirely on its own, presents a human factors challenge.

“Automation can do a great deal,” says Clint Balog, a pilot, engineer, psychologist and associate professor at Embry-Riddle Aeronautical University who studies human cognition in the cockpit. “But the more it does, the less transparent it becomes. And the more difficult it becomes to keep the pilot in the loop when the automation fails and the pilot has to take over.”

While experts like Balog study the human factors of this transition, computer scientists are developing artificially intelligent software that could one day be sophisticated enough to react to its environment and adapt and respond to novel situations. Instead of being programmed to do X in response to input Y, one or multiple algorithms would teach themselves to perform correctly by ingesting vast numbers of examples rich with complexity. The result would be novel responses that no programmer could write.

“How do we make autonomous flying systems intelligent enough to understand that there is something wrong although the relevant sensors are providing normal readings?” says Haitham Baomar, founder of Artificial Horizon, an AI aerospace company based in Oman. “I believe utilizing the currently available advancement in technology, especially AI, to design, develop and train fully capable autonomous flight systems that can literally take over and land safely in case the pilot is not capable anymore for whatever reason is quite feasible.”

Baomar and Peter Bentley, an honorary professor and teaching fellow in computer science at University College London, have been developing their candidate, the Intelligent Autopilot System, since 2016. They trained the software by connecting it to a Boeing 787 flight simulator, flown by professional pilots, that emulated flights out of London’s Heathrow Airport. Inside the software were layers of interconnected “nodes” that functioned analogously to the neurons in our brains. The IAS ingested pilot control actions from the simulator, along with airspeed, altitude, attitude and position, among other factors, via its input nodes. Each piece of information was represented by an input node and multiplied by a numerical weight before being sent to an output node. The software continually adjusted the weights based on the outcome of commands sent to the simulated flight control surfaces via output nodes, gradually learning which outcomes were safe.
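
As a concrete, if drastically simplified, illustration of that weight-adjustment loop, here is a Python/NumPy sketch that nudges a set of weights toward a recorded pilot’s control behavior. The features, sizes and synthetic “pilot” data are invented; the published IAS work uses its own network designs and real simulator recordings.

```python
# Drastically simplified sketch of learning a control law from pilot
# demonstrations, in the spirit of the article's description: inputs
# are weighted, summed into an output command, and the weights are
# adjusted to shrink the gap with what the pilot actually did.
# All features, sizes and data here are invented stand-ins.
import numpy as np

rng = np.random.default_rng(0)

# Pretend simulator recordings: columns = [airspeed, altitude, pitch, roll],
# target = the pilot's elevator command in that state.
states = rng.normal(size=(1000, 4))
pilot_law = np.array([0.2, -0.1, 0.5, 0.05])  # the pilot's implicit rule
pilot_commands = states @ pilot_law + rng.normal(scale=0.01, size=1000)

w = np.zeros(4)   # the numerical weights the article mentions
lr = 0.05         # learning rate
for _ in range(500):
    predicted = states @ w                   # feed inputs forward
    error = predicted - pilot_commands       # compare with the pilot
    w -= lr * states.T @ error / len(error)  # adjust weights to reduce error

print(np.round(w, 3))  # converges toward the pilot's implicit control law
```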

After this training, Baomar says, IAS resolved novel situations it had never been presented with in the simulator. That included executing safe landings in unique, extreme weather conditions. Baomar believes IAS could be installed without major redesigns of cockpits, essentially replacing existing autopilot software.

In one scenario that simulated a final approach and landing, the IAS kept the aircraft on the ideal glideslope amid crosswinds of 50 to 70 knots, “while the standard autopilot kept disengaging every time,” Baomar and Bentley said in a 2021 paper in the journal Applied Intelligence, “Autonomous Flight Cycles and Extreme Landings of Airliners Beyond the Current Limits and Capabilities Using Artificial Neural Networks.”

Baomar says development of IAS continues today, but he would not give specifics, citing nondisclosure agreements with multiple manufacturers in Europe and North America.

In the regulatory realm, contractors working for the European Union Aviation Safety Agency, which sets crew requirements for aircraft in European airspace, are in the midst of a research program to assess the feasibility of reducing the size of flight crews for commercial passenger flights. The EASA project, started in 2022, initially focused on the concept of extended minimum crew operations, in which two pilots would be at the controls for takeoff and landing, and then during the cruise phase, only one would have to be at the controls. EASA also raised an even bolder possibility: requiring only one pilot aboard the aircraft by sometime in the 2030s. EASA has since backed off the timeline for such single-pilot operations, and now says there’s no timeline. As for the extended minimum crew operations, EASA had targeted that for 2025 as a precursor to end-to-end single-pilot operations, but the agency has backed off that timeline too.

Airbus and Boeing in the past have said they are investigating AI-based automation for future aircraft, but neither company would make an expert available to be interviewed.

Airbus conducted multiple autonomous taxi, takeoff and landing demonstrations in late 2019 and the first half of 2020 in France with a modified Airbus A350-1000 and a safety crew aboard, under a project it called Autonomous Taxi, Take-Off and Landing, or ATTOL. Cameras on the aircraft imaged the terrain ahead, and “on-board image recognition technology” identified features to “see” the runway, according to Airbus, thereby eliminating the need for ILS signals. Airbus trained the software over the course of two years by conducting hundreds of piloted test flights to gather images and video “to support and fine tune algorithms.”
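
Airbus has not published ATTOL’s algorithms, so as a hypothetical baseline only, here is how a classical computer-vision pipeline might find a runway’s long edge lines in a camera frame, using OpenCV’s Canny edge detector and Hough transform. The thresholds and file name are invented.

```python
# Hypothetical baseline for "seeing" a runway in a camera frame;
# Airbus has not published ATTOL's actual approach. Classical
# pipeline: grayscale -> blur -> Canny edges -> Hough line fit.
# All thresholds and the image path are invented.
import cv2
import numpy as np

def detect_runway_edges(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=120, maxLineGap=20)
    keepers = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            # Runway edges run from the bottom of the frame toward the
            # horizon, so discard near-horizontal clutter.
            angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
            if 20 < angle < 160:
                keepers.append((x1, y1, x2, y2))
    return keepers

frame = cv2.imread("approach_frame.jpg")  # invented test image path
if frame is not None:
    for x1, y1, x2, y2 in detect_runway_edges(frame):
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)  # green, as in Airbus' photos
```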

Technologists interviewed for this story noted that, at the moment, governments have no process in place for permitting automation such as ATTOL and IAS aboard airliners.

“The technology is ready,” Baomar says, “but the regulators are probably not.”

The FAA’s guideline for determining the reliability of critical flight software, the DO-178C standard, isn’t designed to deal with neural networks that are nondeterministic, meaning they can react differently to the same situation at different times, according to Sanjiv Singh, the CEO of Pittsburgh-based Near Earth Autonomy. His company has been developing software for autonomous drones and rotorcraft for the U.S. military and commercial clients for the past 10 years. This software was used in proof-of-concept projects, including the U.S. Army’s Combat Medic program with the Unmanned Little Bird helicopter and the Office of Naval Research’s Autonomous Aerial Cargo/Utility System program.

“Imagine you had an aircraft that would do collision avoidance. And if you ran it 100 times straight at a tower, let’s say a water tower or something, and 40 times it would go left, and 60 times go to the right,” Singh says. “That kind of nondeterminacy does not meet the 178 standard.”
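
Singh’s thought experiment is easy to reproduce in miniature: a policy that breaks the dead-ahead tie at random returns different outputs for the identical input, which is exactly what a deterministic standard cannot score. The sketch below uses invented numbers.

```python
# Miniature version of Singh's thought experiment: an avoidance policy
# that, facing an obstacle exactly on the nose, breaks the tie randomly.
# Same input, different outputs across runs -- the nondeterminism that
# DO-178C-style verification is not designed to handle.
import random
from collections import Counter

def avoid(obstacle_bearing_deg, rng):
    if obstacle_bearing_deg > 0:
        return "left"   # obstacle to the right: turn left
    if obstacle_bearing_deg < 0:
        return "right"  # obstacle to the left: turn right
    return rng.choice(["left", "right"])  # dead ahead: random tie-break

rng = random.Random()  # deliberately unseeded: results vary run to run
print(Counter(avoid(0.0, rng) for _ in range(100)))
# e.g. Counter({'right': 57, 'left': 43}) -- Singh's 40/60 split in miniature
```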

FAA declined to make anyone available to discuss regulations for autonomous flight. But if the agency were to consider something new, Singh envisions a different, performance-based standard for AI flight computers that might be more like a driver’s license test, in which the computer flies some number of kilometers and performs certain standard maneuvers to demonstrate reliability. To certify such technology otherwise, he says, might require extreme engineering, such as gold-plated components, which could make the whole enterprise too expensive to be worth it.
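
One way such a performance-based test might quantify reliability is statistical: fly many scenarios, count failures and bound the failure rate. The sketch below applies the standard “rule of three” for zero-failure test campaigns; the trial counts are invented, and the comparison target is the roughly one-in-a-billion-per-flight-hour budget that certification assigns to catastrophic failure conditions.

```python
# Sketch of a statistics-based reliability argument. If a flight computer
# completes N independent scenarios with zero failures, the one-sided 95%
# upper confidence bound on its per-scenario failure probability is
# roughly 3/N (the "rule of three"). Trial counts are invented.

def rule_of_three(n_clean_trials):
    return 3.0 / n_clean_trials

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9,} clean trials -> failure rate < {rule_of_three(n):.1e} at 95%")
# Even a million clean trials only bounds risk near 3e-6 per scenario,
# far above the ~1e-9-per-flight-hour budget for catastrophic failure
# conditions -- a hint of why testing alone makes certification hard.
```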

“Technical feasibility is one thing, whether you can actually do something. And then doing it reliably is another thing,” Singh says. “Doing it economically viably is yet another thing.”

Insurance companies, for example, will want to make sure they can quantify the risk of autonomous flight technology. Plus, there are pilots’ unions that don’t like the idea of reducing the number of pilots on board. And there are pilots themselves who, like Balog, recognize that technology’s march may be inevitable but nevertheless worry about risks along the way.

“Because of the automation, what we’ve seen is a degradation in situational awareness among pilots and a degradation in manual flying skills,” he says. “That’s perhaps the biggest change I’ve seen in the 45 years I’ve been flying.”

The better the automation, the less manual flying time pilots get and the less ready they are to handle an emergency without automation — and this irony drives the push for more automation. It’s what Roger Connor, who curates the autonomous aircraft collection at the Smithsonian National Air and Space Museum in Washington, D.C., suggests is an “uncanny valley” problem. That’s a robotics term for when a robot looks like a dog or a person but falls short of nailing the resemblance, which can give people the creeps. Such a robot would fall into this figurative valley, rather than becoming the next big thing.

Similarly, the current level of automation is good enough to make it difficult for pilots to keep their skills up but not good enough to make pilots obsolete. “I think we’ve traded reliability for this human factors problem,” Connor says.

But the only way out of the valley is to keep going, and the trek may just take time. Connor notes that the first automated landing was demonstrated in 1937 by the U.S. Army Air Corps with a Fokker C-14B and a ground-based radio guidance signal similar to ILS. “They essentially demonstrated that it was technically feasible,” he says. “It’s not really until about ’65 that it’s starting to get to a practical point of demonstration, and then it’s not till the early ’70s that it starts to become certified.”

Balog sees a possible way to push through the valley: Before AI software is good enough to fly a plane entirely on its own, he suspects it will power a new generation of automation software designed with the needs of pilots front and center.

“Adaptive automation is interactive with the pilot, almost like a co-pilot,” he says. Such an autopilot could “take independent action while keeping the pilot informed, make recommendations to the pilot, or [do] both.”

Somewhere beyond that, and beyond his career and maybe even his lifetime, Balog sees AI becoming powerful enough to potentially take over for pilots completely — although regulators, and perhaps society, might for some time still require a human pilot in the cockpit or perhaps in a ground control station, similar to how the U.S. military and intelligence community fly drones. Any such changes are likely to be seen in cargo flight operations first, he adds, it being easier to experiment with single-pilot or no-pilot operations when the only passengers are cardboard boxes traveling over an ocean.

“I think it’s going to be much more difficult to convince 300, 400, 500 people to get on an airplane without a pilot — either a pilot on board or a remote [pilot],” Balog says.

But that doesn’t mean the public won’t have to answer the question. The technology is coming, one way or the other.

“If you learn anything from history, it’s that technology moves on, and we can’t stop it,” Balog says. “It’s like a snowball rolling downhill. At the top of the hill, it starts as a snowball, and at the bottom of the hill, it looks like the boulder that Indiana Jones was trying to run away from.”


About Jon Kelvey

Jon previously covered space for The Independent in the U.K. His work has appeared in Air and Space Smithsonian, Slate and the Washington Post. He is based in Maryland.

"Automation can do a great deal. But the more it does, the less transparent it becomes. And the more difficult it becomes to keep the pilot in the loop when the automation fails and the pilot has to take over."

Clint Balog, Embry-Riddle Aeronautical University
Asiana Flight 214, a Boeing 777, stalled on final approach to San Francisco International Airport in 2013. The aircraft’s tail separated from the fuselage when the plane hit a seawall near the runway. Credit: National Transportation Safety Board
In the Qantas QF32 brush with disaster, pilot Richard de Crespigny and his flight crew of four had to manually land their Airbus A380 when an oil fire caused the inboard left engine to explode. This took much of the plane’s automated software offline, and the Electronic Centralized Aircraft Monitor gave the pilots incorrect instructions. Credit: Australian Transport Safety Bureau
The green lines in this photo were added by image recognition software aboard an A350 to denote the boundaries of the runway at Toulouse-Blagnac Airport in France. Airbus conducted these test flights under its Autonomous Taxi, Take-Off and Landing project. Credit: Airbus
