Achieving autonomy


Following in the tire tracks of autonomous cars, the development of self-flying aircraft continues. As similar as the challenges are in some regards, Adam Hadhazy discovered that aircraft engineers must largely forge their own solutions.

Back during World War I, starting a fighter plane by hand-propping meant someone pushing down on the propeller as hard as he could and jumping back in fear for his life. Today, a pilot starts an F-35 by simply pushing buttons. The first pilots had to navigate by sight alone, relying on the positions of celestial objects to maintain a heading. Nowadays, GPS-connected navigation systems automatically guide a plane from here to there.

Over the past decade and more, this technological trend from the laboriously and precariously manual to the conveniently automatic has begun reaching into the eyebrow-raising realm of autonomy, for automobiles and aircraft alike. With autonomy, decision-making and execution are no longer the sole, or even partial, purview of human operators; vehicles must think and act for themselves, keeping occupants safe while ensuring the safety of bystanders.

Self-driving cars are the most obvious manifestation of this “Knight Rider”-ish trend toward smart vehicles. Heavily researched and pursued by big tech companies like Google and Uber, by pioneering carmakers like Tesla, and by established industry titans including Toyota and Ford, autonomous cars have already logged tens of millions of kilometers in cities around the world. Most of those kilometers have been logged for training purposes, but the vehicles have also ferried some intrepid, early-adopter passengers, while Tesla owners routinely let their vehicles run in “Autopilot,” a semi-autonomous mode that handles much of the routine work of highway driving.

Far behind this curve and with far less fanfare, autonomous aircraft are also in the works. Aviation industry leaders — including the three biggest manufacturers, Boeing, Airbus and Embraer — all recognize the promise of, and arguably the need for, this revolution in air travel. Most ongoing efforts focus on urban air mobility, meaning small, personal air taxis for city environments, with an eye toward full-fledged autonomy for conventional long-haul commercial flight.

In some respects, self-flying aircraft are building on the advances already achieved by ground vehicle developers, says Arne Stoschek, who himself worked on autonomous cars prior to becoming project executive for Wayfinder, an autonomous flight software and hardware initiative at Airbus’ A3 innovation center in Silicon Valley. “We step on the shoulders of giants,” Stoschek says, thanks to “the huge investment that the car industry did.”

But as he and others point out, in key respects aircraft will have to make their own way in dealing with challenges unique to their aerial operating environment. Navigating those challenges will determine when, and whether, autonomous aircraft take to the skies in appreciable numbers — before, alongside or after their self-driving counterparts come to dominate streetscapes, as many expect.

The pressure is on for self-flying aircraft to become viable sooner rather than later. A prime motivator: looming pilot shortages, given the widely expected doubling of the commercial airliner fleet over the next couple of decades. Mark Cousin, A3’s CEO, estimates that 600,000 pilots will need to be trained in the next 20 years, although “we’ve only trained 200,000 pilots since the start of commercial aviation.”

Raising the stakes further is that the consequences of failure are terribly severe for aircraft compared to cars.

“There is no such thing as a fender-bender between aircraft,” says Jack Langelaan, who studies autonomous flight as an associate professor of aerospace engineering at Penn State. “You can have a minor collision between two automobiles — nobody gets hurt, no nothing. But if two aircraft collide in the sky, they’re coming down somewhere, and potentially on top of passersby that are completely uninvolved in the whole situation.”

Self-driving or self-flying?

Asked which will ultimately prove harder, developing self-driving cars or self-flying aircraft, researchers demur, pointing to the apples-and-oranges nature of the question. To lay out the obvious: cars operate in an essentially two-dimensional environment, while aircraft operate in three dimensions. Speed is another key difference, at least when comparing conventional airliners’ roughly 900-kph cruise speeds with cars’ 100-plus-kph highway speeds. Air taxis and delivery drones, with their rotor-based propulsion and low-altitude, short-haul trips, would operate at speeds more akin to those of cars.

Finally, unlike cars, airliners would not have the option of slamming on the brakes or even significantly slowing for obstacles; collision (read: disaster) avoidance comes down to evasive maneuvers that must still preserve airworthiness.

“To sum it up,” says Stoschek, “the big difference is it’s a 3D problem, it’s basically 10 times faster than cars, and you cannot stop if things go wrong.”

Seeing the road, seeing the sky

Although their operational environments differ profoundly, both kinds of autonomous vehicles, groundcraft and aircraft, will rely on conceptually the same sorts of sensors — cameras, radar and lidar — to perceive their surroundings in real time. Both vehicle types will feed that data into artificial intelligence systems that, having been trained through so-called machine learning, will identify, characterize and evaluate external phenomena. The vehicles’ AI will then rapidly decide on and execute whatever changes in speed, heading and so forth are necessary to navigate safely and efficiently from point A to point B.
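As a rough illustration of that shared sense-perceive-decide-act pipeline, here is a minimal Python sketch. Every name in it (read_sensors, assess_threat, plan_maneuver, the Detection fields) is a hypothetical placeholder invented for this article, not any manufacturer’s actual software; real systems split these stages across redundant, certified hardware running at fixed real-time rates.

```python
# Illustrative only: a bare-bones autonomy loop of the kind described above.
# All names and thresholds are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str            # e.g., "aircraft", "bird", "vehicle", "pedestrian"
    bearing_deg: float   # direction to the object, relative to our heading
    range_m: float
    closing_speed_mps: float

def read_sensors():
    """Stand-in for fused camera/radar/lidar input."""
    return [Detection("aircraft", bearing_deg=12.0, range_m=4000.0,
                      closing_speed_mps=150.0)]

def assess_threat(d: Detection) -> bool:
    """Crude time-to-collision check: flag anything closing within ~20 seconds."""
    if d.closing_speed_mps <= 0:
        return False
    return d.range_m / d.closing_speed_mps < 20.0

def plan_maneuver(threats):
    """Return a new heading/speed command; trivial placeholder logic."""
    if not threats:
        return {"heading_change_deg": 0.0, "speed_change_mps": 0.0}
    worst = min(threats, key=lambda d: d.range_m / d.closing_speed_mps)
    # Turn away from the most pressing threat.
    return {"heading_change_deg": -30.0 if worst.bearing_deg >= 0 else 30.0,
            "speed_change_mps": 0.0}

if __name__ == "__main__":
    detections = read_sensors()                            # sense
    threats = [d for d in detections if assess_threat(d)]  # perceive and evaluate
    command = plan_maneuver(threats)                       # decide
    print("Commanding:", command)                          # act (would go to flight controls)
```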

So far, so similar. But again, the disparate worlds the transportation vessels operate in will require tailored solutions.

Where aircraft have it easy compared with cars is that, except when in proximity to landing and takeoff zones, the sky is a relatively big, open space. The only flying objects with enough speed and heft to pose a serious collision concern are — freak bird strikes aside — other aerial vehicles. Such vehicles can communicate with one another directly to help maintain safe distances. Aircraft are also monitored and coordinated by air traffic control, which itself will eventually have to become largely autonomous to accommodate the high volumes of air taxis and drones projected for cities, notes Parimal Kopardekar, an expert in autonomy and airspace management who is principal investigator of the Unmanned Aircraft Systems Traffic Management project and director of NASA’s Aeronautics Research Institute at Ames Research Center in California.

Cars, on the other hand, must deal with orders of magnitude more complexity on roads. This is especially evident in congested urban quarters, where multiple nearby vehicles interweave with myriad pedestrians, bicyclists, skateboarders, construction signs, temporary barriers, litter, blithely jaywalking pigeons — you name it.

Huei Peng, a professor of mechanical engineering at the University of Michigan and director of Mcity, the university’s automated vehicle research center, points out that aside from other cars, these objects will not have any ability to communicate with or coordinate their movements with self-driving cars. “They don’t talk to you,” Peng says, meaning cars must be keenly reactive and flexible in navigating their object-addled arenas.

Making sense of the world

In terms of seeing and sensing the environment, sensors are already highly advanced, says Langelaan; matching or exceeding human visual capabilities, in pixel density for instance, is not a problem. Instead, the biggest challenge is on the artificial intelligence side of the equation: analyzing, interpreting and ultimately making sense of the deluge of data in order to decide quickly and aviate safely.

“You can have a 4K camera with a gazillion pixels on it,” says Langelaan. “But you need to process every single one of those pixels to figure out what the camera is telling you. It’s this problem of taking sensor data and turning it into useful information.”

The difficulties inherent in data processing are amplified considerably in off-nominal situations, where objects on a roadway or in the sky do not behave as expected — often because of the whims and peculiarities of human behavior. For example, a car could be rolling forward out of a McDonald’s parking lot toward the roadway. Is the driver merely edging out to see the road better before attempting to enter it? Or is the driver in fact already pulling onto the road, wolfing down fries, completely unaware of the other vehicle already in the roadway? A human driver could notice that the other driver was eating, or had his head down and was likely looking at a phone, and conclude that this driver is dangerously not paying attention. Self-driving systems struggle to make these sorts of everyday inferences. “For a robot to know what human intent is has been really tricky to figure out,” says Langelaan.

In the sky, a partial solution for gauging intent would be to train artificial intelligence to identify types of aircraft as well as their orientations, which together constrain the range of physically possible behaviors. (Algorithms in self-driving cars already do this to an extent, differentiating between a pedestrian and a bicyclist, say, and churning out predictions about the path and velocity each is likely to take and the collision hazard each might pose.) For instance, if a particular kind of fixed-wing aircraft — whether human-piloted or autonomous — is banked at a certain angle, then it must be turning. An entirely different set of such inferences would apply to quadrotor-powered drones and air taxis, given their radically different aerodynamics, notes Langelaan.

“That would be a really cool thing if [autonomous aircraft] could do it,” says Langelaan. “Humans do it all the time if we’re playing with a baseball versus a frisbee. You know the thing behaves very differently, and we can account for that when we go to catch it.”
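A toy version of that kind of type-plus-orientation inference can be written down directly from flight mechanics: in a coordinated turn, turn rate is roughly g times the tangent of the bank angle, divided by airspeed. The sketch below applies that relation; the detection format, the thresholds and the infer_intent function are invented for illustration and are not drawn from Wayfinder or any other program mentioned here.

```python
# Illustrative sketch: infer likely turn behavior from an observed bank angle,
# using the coordinated-turn relation omega = g * tan(bank) / v.
# The aircraft-type labels and thresholds are invented for this example.
import math

G = 9.81  # m/s^2

def predicted_turn_rate_deg_s(bank_deg: float, airspeed_mps: float) -> float:
    """Turn rate implied by a coordinated turn at the given bank angle and speed."""
    return math.degrees(G * math.tan(math.radians(bank_deg)) / airspeed_mps)

def infer_intent(aircraft_type: str, bank_deg: float, airspeed_mps: float) -> str:
    if aircraft_type == "multirotor":
        # Multirotors can translate or yaw without banking the way airplanes do,
        # so attitude alone says much less about where they are going.
        return "intent ambiguous from attitude alone"
    rate = predicted_turn_rate_deg_s(bank_deg, airspeed_mps)
    if abs(rate) < 0.5:
        return "likely holding heading"
    return f"likely turning at about {rate:.1f} deg/s"

# A fixed-wing aircraft at 30 degrees of bank and 70 m/s is almost certainly turning:
print(infer_intent("fixed_wing", bank_deg=30.0, airspeed_mps=70.0))
print(infer_intent("multirotor", bank_deg=5.0, airspeed_mps=15.0))
```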

The human advantage

With that last point, Langelaan hits on a critical deficiency of all autonomous systems: all they do is drive or fly. They lack the rich life experience of a human pilot, who has seen and interacted with countless objects and phenomena outside the typical aviation environment. That broad sense of the world usually lets a person quickly diagnose an anomaly, however unexpected or bizarre, and take appropriate action. Consider landings. “If everything is hunky-dory,” says NASA’s Kopardekar, “it’s not an issue” for autonomous systems. But what if there’s an object on the runway? Humans can gauge the threat level innately, drawing on their vast knowledge from outside aviation. “It’s very easy for humans to figure out that’s just a dry leaf, not a big rock,” says Kopardekar.

Of course, in most cases landings are routine affairs — enough so that they are already becoming increasingly automated. Autopilot programs can approach, land and begin taxiing when an airport has a so-called CAT III ILS (Category III Instrument Landing System). In this system, instruments at the airport and aboard the aircraft communicate with each other, with ground equipment transmitting radio signals the aircraft uses for position guidance. Localizer antennas continually serve up data on the aircraft’s lateral deviation from the desired runway centerline, while the glide slope antenna provides the vertical guidance that keeps the aircraft on the right descent angle (usually 3 degrees above the horizontal) toward the intended touchdown point. Yet fewer than a hundred airports worldwide — typically those that regularly contend with low visibility — have plumped for the expense of a CAT III ILS, according to Harvest Zhang, head of software for A3’s Wayfinder.
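The geometry of that nominal 3-degree glide path is simple enough to check with a few lines of arithmetic; the distance and groundspeed below are example values chosen for illustration, not figures from the article.

```python
# Back-of-the-envelope geometry for a nominal 3-degree ILS glide path.
# Example numbers only.
import math

GLIDE_SLOPE_DEG = 3.0

def height_above_touchdown_m(distance_to_touchdown_m: float) -> float:
    """Height the aircraft should be at, given its along-track distance to touchdown."""
    return distance_to_touchdown_m * math.tan(math.radians(GLIDE_SLOPE_DEG))

def descent_rate_mps(groundspeed_mps: float) -> float:
    """Vertical speed needed to stay on the glide path at a given groundspeed."""
    return groundspeed_mps * math.tan(math.radians(GLIDE_SLOPE_DEG))

# 10 km out, the aircraft should be roughly 524 m above the touchdown zone...
print(round(height_above_touchdown_m(10_000), 1), "m")
# ...and at a 70 m/s groundspeed it needs to descend at about 3.7 m/s to hold the slope.
print(round(descent_rate_mps(70.0), 2), "m/s")
```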

Stoschek, Zhang and colleagues are looking to “teach” aircraft to perform landings without these sorts of automated landing systems. One such effort involves the Airbus ATTOL (Autonomous Taxi, Take-Off and Landing) project, which has outfitted an A320 test aircraft with the kinds of sensors, actuators and computers an actual autonomous aircraft would use. The effort is a collaboration with the aforementioned Wayfinder program, whose goal is to develop a common, certifiable set of software and hardware that will scale for autonomous fliers, from air taxis to jumbo jets.

The Wayfinder computer system consists of an artificial neural network that uses so-called deep learning to understand and perform its programmed tasks. In plain English, that means a computer system that learns somewhat as humans do, by poring over examples: first discerning basic, common elements in those examples and then building the elements up into concepts of ever greater complexity. For runways, that means first recognizing basic edges, lines and colors, then compiling that information to discriminate the tarmac from the surrounding terrain, for instance. The examples presented to the system include not only images of real runways but thousands upon thousands of images of simulated ones. So far, the neural network can robustly identify runways in real-life images at distances of several kilometers and with encouraging accuracy, given the still-early nature of the research and development.
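To give a flavor of what that layered, example-driven learning looks like in code, here is a deliberately tiny and hypothetical PyTorch sketch of a convolutional classifier that flags whether an image contains a runway. The architecture, image size and random stand-in data are inventions for this illustration; they are not Wayfinder’s network or training set.

```python
# Hypothetical sketch: a tiny convolutional network that classifies whether an
# image contains a runway. Architecture and data are invented for illustration.
import torch
import torch.nn as nn

class RunwayClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # Early layers respond to low-level structure such as edges and color.
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            # Deeper layers combine those into larger patterns (markings, tarmac edges).
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # single logit: runway present or not

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = RunwayClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-in for a mix of real and simulated runway imagery: random tensors here.
images = torch.rand(8, 3, 128, 128)
labels = torch.randint(0, 2, (8, 1)).float()

for step in range(5):  # a real training run would iterate over many thousands of images
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```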

To an extent, those are still baby steps for the landing portion of autonomous flight, but Stoschek and others point to how rapidly autonomous car development has proceeded. “Fifteen years ago, we would say autonomous cars was pretty nuts,” says Stoschek. “Now it’s a reality. Every day that I drive to work to Airbus in Silicon Valley, I see several autonomous cars driving around me.”

The leap to full autonomy

Before the switchover to full autonomy can occur, Peng of the University of Michigan warns about a significant chasm that both cars and aircraft will have to bridge, and into which both vehicle types are in some ways alarmingly descending. In short, partial autonomy — where human operators might be suddenly forced to intervene, should the highly automated or autonomous system not know how to react — is dangerous. Peng describes this as “you want the human to come in and save the world when something goes wrong.”

Trouble is, the human operators will likely have had little to do, perhaps for hours, beyond blandly monitoring vehicle operations. A person thrust into an anomalous emergency scenario will lack the necessary situational awareness, not to mention adequate time to act.

Breaking the problem down further, Peng points to the widely recognized scale of driving autonomy, which runs from Level 0 through Level 5. Everything operated manually is Level 0, including basic cruise control that a human operator switches on and off (not unlike air conditioning, say). Level 1 involves minimal driver-assistance technologies, such as adaptive cruise control and lane keeping. At Level 2, accelerating, braking and steering can all be turned over to the vehicle, enabling it to, for instance, slam on the brakes to avoid an imminent collision.

This Level 2 semi-autonomy is the vanguard today, exemplified by Tesla’s Autopilot. Two fatal accidents have infamously highlighted its safety lapses, though statistical data does bear out that Autopilot use results in fewer crashes per distance driven than human driving alone. In its latest dataset (the second quarter of 2019), Tesla reported one accident per 3.27 million miles (5.26 million kilometers) driven with Autopilot engaged. For Tesla vehicles driven without Autopilot or other active safety features engaged, the rate rose to one accident per 1.41 million miles (2.27 million kilometers). Both rates compare favorably with National Highway Traffic Safety Administration data indicating one car crash per 498,000 miles (801,500 kilometers) driven in the United States.
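A quick back-of-the-envelope pass over those figures puts the gap in perspective; the snippet below simply recomputes the ratios from the rates quoted above.

```python
# Recomputing the relative crash rates from the figures quoted above
# (miles driven per reported accident, second quarter of 2019).
miles_per_accident = {
    "Autopilot engaged": 3_270_000,
    "No Autopilot, no active safety features": 1_410_000,
    "NHTSA U.S. average": 498_000,
}

baseline = miles_per_accident["NHTSA U.S. average"]
for mode, miles in miles_per_accident.items():
    print(f"{mode}: {miles / baseline:.1f}x the U.S.-average distance between accidents")

# Autopilot-engaged driving also logged about 2.3x the distance between accidents
# of Teslas running with neither Autopilot nor active safety features engaged.
print(round(3_270_000 / 1_410_000, 1))
```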

Where things could trend in the wrong direction is Level 3, or conditional autonomy, Peng says. Level 3 cars handle all aspects of driving themselves in certain operating conditions but expect the driver to remain alert and fully engaged throughout (just in case). Aircraft have already entered the equivalent of Level 3, he says, with the Boeing 737 MAX 8, the fourth generation of the 737, introduced in 2017. Two fatal crashes — Lion Air Flight 610 in 2018 and Ethiopian Airlines Flight 302 in 2019 — were linked to erroneous activation of the anti-stall software MCAS, short for Maneuvering Characteristics Augmentation System. The software automatically lowers the aircraft’s nose if data from speed, altitude and angle-of-attack sensors suggests the aircraft is at risk of stalling. In both cases, the pilots did not know how to disengage MCAS after it seized control, and the system kept pushing the nose down until the aircraft crashed. The MAX fleet remains grounded as software fixes are sought and better training procedures are established.

“The Level 3 option is just not a good idea,” says Peng. He thinks car companies — as well as aircraft manufacturers — have to fully develop Level 4 autonomy, with no human intervention called for over an entire journey, before autonomy comes to rule the roads and skies. “Companies should offer Level 1, Level 2, then boom, Level 4,” Peng adds.

All the technological development will not matter, of course, if human trust cannot be earned. People will have to have faith in the machine in order to let go of the steering wheel and stick; come Level 5, the top level of autonomy, those human controls won’t even exist anymore, and vehicles will look more like mobile lounges.

How will the developers of autonomous vehicles ever know that what they have made is in fact safe “enough”? Stoschek offers a guiding principle.

“Early on in my career, I asked one very experienced engineer who was actually in charge of the designing of safety systems, ‘How do you know when a system is safe?’ And the answer was, ‘When I would put my kids and my family’ on it,” Stoschek says. “I really, really kept that in my mind.”


About Adam Hadhazy

Adam writes about astrophysics and technology. His work has appeared in Discover and New Scientist magazines.

The big difference is it’s a 3D problem, [flying is] basically ten times faster than cars, and you cannot stop if things go wrong.

Arne Stoschek, A3 by Airbus
Human drivers benefit from visual cues from pedestrians and other drivers, something that developers of autonomous vehicles, like this Ford from Pennsylvania-based Argo AI, must account for. Credit: Argo AI

The CityAirbus demonstrator is designed to fly autonomously in cities, but Airbus says a human will pilot the aircraft until it is certified and the public is comfortable with autonomous aircraft. Credit: Airbus

A driverless shuttle carries passengers at the University of Michigan’s research campus. The project examined how passengers, drivers of other vehicles and pedestrians interacted with the shuttle. Credit: University of Michigan

The lidar sensor on the driverless shuttle maps the vehicle’s surrounding environment. Credit: University of Michigan
