ANALYSIS: Rise of the AI fighter pilots
May 2023
As powerful as artificial intelligence is, don’t expect a sophisticated collection of bits and bytes to replace Top Gun’s Capt. Pete “Maverick” Mitchell in the cockpit anytime soon. Air combat is both art and science. An air autonomy expert, a fighter pilot and a computer scientist explain how the future of uncrewed combat aircraft could unfold.
In an early scene from “Top Gun: Maverick,” the world’s most famous on-screen fighter pilot almost loses his job flight-testing an experimental hypersonic jet. But the threat to his career isn’t atrophying pilot skills. It’s his robotic competition: a new drone program with support from senior leadership in the Pentagon.
Despite his Mach 10 misadventures, Maverick manages to save his job by leading a younger generation of Top Gun pilots into combat on a mission that demands his well-honed intuition and piloting skills. Our takeaway from Maverick’s story is that air combat, which at its most demanding involves taking human life, is both a science and an art. No matter how fast drone technology advances, some level of human judgment — which is the art of war — will almost certainly be essential when decisions involve a high degree of risk, the kinds of decisions that fighter pilots have to make all the time. Aerial combat unfolds so rapidly and dynamically that at least some of the intelligence will always need to be aboard the aircraft, whether it is piloted by a human or a computer or some combination of the two. Remote piloting won’t be an option, because even a brief break in a data link could lead to mission failure. Beyond these basic principles, there is plenty to discuss.
Striking the appropriate balance between art and science is at the forefront of emerging debates in the U.S. national security community regarding the future of uncrewed combat aircraft. On one hand, military leaders are eager to harness advances in artificial intelligence and other technologies to field Collaborative Combat Aircraft, drones that later this decade will fight in contested combat environments as a team with piloted aircraft or potentially without humans in the loop. On the other hand, they are cautiously navigating a variety of ethical, policy and technological considerations associated with allowing software to augment airborne human judgment. Despite the challenges, the rise of AI fighter pilot wingmen is only a matter of time.
Why AI?
Since the beginning of powered flight, aviators and engineers have experimented with drones and flown them in conflicts, including World War II, the Vietnam War and, of course, the wars in Afghanistan and Iraq. In the first two cases, the U.S. military’s interest in drones plummeted when the conflicts ended, owing to a mix of cultural resistance to replacing pilots, technology shortfalls and peacetime budget constraints. Now, however, those obstacles are falling away due to seismic shifts in geopolitics and rapid advances in technology — particularly AI.
As the Department of Defense focuses on deterring great power conflicts with peer adversaries, the Air Force and other military services are pursuing next-generation drone technology with increasing urgency. Gen. C.Q. Brown Jr., the Air Force chief of staff, has warned that a conflict with China could see levels of attrition on par with World War II. The People’s Liberation Army consists of more than 3 million personnel. Fighter pilots in the People’s Liberation Army Air Force fly more training hours per year than U.S. pilots and employ air-to-air missiles that can outrange ours. To offset these disadvantages, the Air Force is seeking to rapidly build large numbers of low-cost, autonomous aircraft that can swarm the battlespace to increase firepower, provide more situational awareness and overwhelm the adversary.
These drones won’t replace fighter pilots — at least not initially — but they will support them as pilots engage in dangerous counterair missions that require penetrating highly contested airspace. The Air Force’s concept for future Collaborative Combat Aircraft calls for capability beyond today’s remotely piloted drones. Robot wingmen piloted by well-trained algorithms offer the potential for a huge American airpower advantage by reducing the costs of air superiority, not only in terms of dollars but also in terms of pilots’ lives.
The linchpin to making these autonomous aircraft a reality will be AI, onboard software capable of tasks normally performed by humans. Increasingly, AI will allow for the development of thinking machines that can orient themselves in the battlespace and make decisions to act as humans do. This will dramatically speed up decision-making in war, allowing whoever has autonomous drones to gain first-mover advantages, such as launching an offensive before the other side even has time to react. AI will also allow the U.S. military to realize its vision for deploying large numbers of autonomous drones. Whether they fly as tethered wingmen with a human flight lead or are sent off on their own as an untethered swarm, machines that can think on their own free up precious human capital; autonomy will enable the military to deploy lots of them without overtaxing aircrews and intelligence analysts. It’s a problem of affordable quantity and numerical advantage: The U.S. cannot afford the millions of dollars and years of experience required to train human fighter pilots in the massive numbers envisioned for future conflicts.
Eventually, AI might allow an autonomous drone to do everything that a fighter pilot can do. Already, the U.S. military is experimenting with AI for arguably the most challenging aspect of aerial combat: dogfighting. DARPA’s Air Combat Evolution program famously pitted a highly experienced Air Force fighter pilot against an AI-driven fighter in a series of aerial contests in a flight simulator. The AI fighter scored simulated kills against the human every time, in part because the AI-driven fighter could aim its cannon with superhuman accuracy from seemingly impossible attack angles, allowing it to outmatch its human adversary in an old-fashioned, close-in, turning dogfight. AI pilots excel at black-and-white problems in the simulator; however, air combat in the real world presents many gray areas that seem likely to require human judgment for the foreseeable future.
DARPA’s AI dogfighting program explored the science of one-versus-one aerial combat, which is measured in angles, acceleration, high-G turn circles and split-second maneuvering decisions. However, the more important aspect of the DARPA program was the art of aerial combat. One of the most important elements of the art of war is teaming and trust; the less obvious but foundational goal of the DARPA program was to understand what factors will enable human pilots to trust AI pilots as wingmen. Measuring trustworthiness in complex air combat scenarios is more art than science. Building a trusted AI fighter pilot will require both.
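The science half, at least, is quantifiable. As a rough illustration — a hypothetical Python sketch using the standard level-turn relations, not anything drawn from the DARPA program — a small edge in sustainable G translates directly into a tighter turn circle and a faster turn rate:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def turn_performance(speed_mps: float, load_factor: float) -> tuple[float, float]:
    """Level-turn radius (m) and turn rate (deg/s) for a given true
    airspeed and load factor n (the g's the pilot pulls)."""
    radial_g = math.sqrt(load_factor**2 - 1)       # horizontal lift component, in g's
    radius = speed_mps**2 / (G * radial_g)         # r = V^2 / (g * sqrt(n^2 - 1))
    rate = math.degrees(G * radial_g / speed_mps)  # omega = g * sqrt(n^2 - 1) / V
    return radius, rate

# At the same airspeed, two extra g of tolerance buys a much tighter circle.
for n in (7.0, 9.0):
    r, w = turn_performance(200.0, n)
    print(f"{n:.0f} g at 200 m/s: radius {r:,.0f} m, rate {w:.1f} deg/s")
```

At 200 meters per second, pulling 9 g instead of 7 g shrinks the turn radius by roughly a quarter and adds about 6 degrees per second of turn rate, which is exactly the kind of margin an algorithm that never grays out can exploit.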
Building AI brains versus automating drones
Autonomy can be thought of simply as the degree of a machine’s independence from human control. By that definition, not all autonomous drone operations require AI in any form. Many require only so-called deterministic behaviors that can be programmed into software. For example, the MQ-1C Gray Eagle, the Army’s workhorse drone for operations in Afghanistan and Iraq, can take off and land automatically. These and other simple tasks, such as stationkeeping over a particular target, require predictable behaviors that can be preprogrammed into a drone’s mission plan.
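To make the distinction concrete, here is a minimal, hypothetical sketch of what a deterministic behavior looks like in software: a scripted mission plan in which every phase maps to one preprogrammed response, with no learning anywhere. The phase names and waypoints are invented for illustration and are not drawn from the Gray Eagle’s actual software.

```python
from dataclasses import dataclass

@dataclass
class Waypoint:
    lat: float
    lon: float
    alt_m: float

# A deterministic mission plan: the drone's behavior is fully specified in
# advance, so the same inputs always produce the same scripted actions.
MISSION_PLAN = [
    ("TAKEOFF",     Waypoint(36.23, -115.03, 0.0)),
    ("TRANSIT",     Waypoint(36.60, -115.70, 6000.0)),
    ("STATIONKEEP", Waypoint(36.65, -115.75, 6000.0)),  # orbit a fixed point
    ("RETURN",      Waypoint(36.23, -115.03, 0.0)),
    ("LAND",        Waypoint(36.23, -115.03, 0.0)),
]

def next_action(phase: str, target: Waypoint, distance_m: float) -> str:
    """Scripted, rule-based logic: no AI, just preprogrammed if-then behavior."""
    if phase in ("TAKEOFF", "LAND"):
        return f"run automated {phase.lower()} sequence"
    if phase == "STATIONKEEP":
        return f"orbit {target.lat:.2f},{target.lon:.2f} at {target.alt_m:.0f} m"
    if distance_m > 500:
        return f"fly toward {target.lat:.2f},{target.lon:.2f}"
    return "advance to next mission phase"

for phase, wp in MISSION_PLAN:
    print(phase, "->", next_action(phase, wp, distance_m=12_000))
```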
For now, the Air Force is pursuing a crawl-walk-run approach to introducing AI into uncrewed aircraft. The goal is to build on the foundation of deterministic behaviors that drones can already perform today to create scalable autonomy. Human fighter pilots could potentially dial the autonomy of their drone wingmen up or down, depending on a mission commander’s willingness to trust the drone to complete the mission. Autonomous drones conducting passive surveillance might require a human “on the loop,” meaning monitoring operations in an oversight capacity, whereas an autonomous drone carrying air-to-air missiles will require a human “in the loop,” meaning someone would have to approve the release of a weapon.
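One way to picture the “on the loop” versus “in the loop” distinction is as a gate in software. The sketch below is purely illustrative, assuming a made-up control interface rather than any real ground station: a passive surveillance task proceeds with notification only, while a weapon release blocks until a human explicitly approves it.

```python
from enum import Enum
from typing import Callable

class AutonomyLevel(Enum):
    HUMAN_IN_THE_LOOP = "in"   # a person must approve each consequential action
    HUMAN_ON_THE_LOOP = "on"   # a person monitors and can intervene

def request_action(action: str, level: AutonomyLevel,
                   human_approves: Callable[[str], bool]) -> bool:
    if level is AutonomyLevel.HUMAN_IN_THE_LOOP:
        # Weapon employment blocks until a human explicitly authorizes it.
        return human_approves(action)
    # Passive tasks proceed autonomously; the overseer is notified and can abort.
    print(f"overseer notified: executing '{action}'")
    return True

# Surveillance runs under oversight; missile release waits on a human decision.
request_action("collect imagery over sector 4", AutonomyLevel.HUMAN_ON_THE_LOOP,
               human_approves=lambda a: True)
consent = lambda a: False  # stand-in for a pilot's console; consent withheld here
fired = request_action("release air-to-air missile",
                       AutonomyLevel.HUMAN_IN_THE_LOOP, consent)
print("weapon released" if fired else "release withheld pending human approval")
```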
In addition to adjusting the length of the wingman’s leash based on ethical considerations for weapon employment, the scalable approach would also allow the Air Force to update autonomous wingmen’s capabilities as advances in AI allow them to perform more complex tasks. There are several paths available for the technical development of AI fighter pilots. AI encompasses a range of machine learning techniques, spanning three types of learning: supervised, unsupervised and reinforcement learning. Supervised learning involves the use of labeled data to train an algorithm to make predictions or classifications. Unsupervised learning involves finding patterns in unlabeled data, and it can be used for tasks like clustering similar entities. Reinforcement learning is used to train agents to make decisions in a simulation environment based on positive or negative feedback.
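A toy example helps separate the first two paradigms. In this hypothetical sketch, the same made-up two-feature “radar returns” are first used with labels (supervised) and then stripped of them (unsupervised); reinforcement learning, which needs a simulator rather than a dataset, appears in the sketch after the next paragraph.

```python
import math

# Invented two-feature "radar returns": (radar_signature, transponder_match).
labeled = [((0.90, 0.10), "hostile"), ((0.80, 0.20), "hostile"),
           ((0.10, 0.90), "friendly"), ((0.20, 0.80), "friendly")]

# Supervised: labeled examples train a classifier (1-nearest-neighbor here).
def classify(point):
    nearest = min(labeled, key=lambda ex: math.dist(ex[0], point))
    return nearest[1]

print(classify((0.85, 0.15)))  # -> hostile

# Unsupervised: the same points, stripped of labels, still cluster by proximity.
points = [p for p, _ in labeled]
anchor = points[0]
clusters = {"near": [p for p in points if math.dist(p, anchor) < 0.5],
            "far":  [p for p in points if math.dist(p, anchor) >= 0.5]}
print(clusters)
```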
In the context of developing uncrewed fighter jets, machine learning algorithms can be trained using supervised learning to identify objects — such as hostile versus friendly aircraft — and make decisions based on their sensor data. Similarly, reinforcement learning can be layered onto other machine learning techniques like supervised learning and can be used to train software agents to make correct decisions about tactical maneuvers, target selection and weapon employment based on rewards and penalties within a training simulation environment. The sheer amount of data required to train a software agent and the difficulties associated with training that agent in a simulated environment — which in all likelihood does not reflect the real world and the complexity of aerial combat — pose serious challenges to developing AI pilots.
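As a hedged illustration of that reinforcement learning layer, the sketch below runs tabular Q-learning in a deliberately crude, invented “engagement” simulator; the states, actions and payoffs are stand-ins for the high-fidelity environments a real program would need, not a model of actual air combat.

```python
import random

STATES = ["advantaged", "neutral", "disadvantaged"]
ACTIONS = ["engage", "maneuver", "disengage"]
REWARDS = {  # hand-tuned payoffs: engaging from advantage is rewarded,
    ("advantaged", "engage"): 1.0,       # engaging from disadvantage penalized
    ("disadvantaged", "engage"): -1.0,
    ("disadvantaged", "disengage"): 0.5,
}

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}  # learned action values
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    """Toy environment: return a reward and a random next state."""
    return REWARDS.get((state, action), 0.0), random.choice(STATES)

state = "neutral"
for _ in range(5000):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    reward, next_state = step(state, action)
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    # Standard Q-learning update toward reward plus discounted future value.
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = next_state

for s in STATES:
    print(s, "->", max(ACTIONS, key=lambda a: q[(s, a)]))
```

Even in this toy, the agent learns to engage only from a position of advantage and to disengage when disadvantaged; scaling that kind of learning to realistic combat is precisely where the data and simulation-fidelity challenges bite.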
Will we see AI fighter pilots in our lifetime? Building trust in the AI and overcoming technological challenges to AI development will be decisive factors. We almost certainly will see the U.S. military iteratively build AI wingmen from drone-like tools into trusted teammates. The Air Force is already planning future fighter concepts such as the Next Generation Air Dominance family of networked aircraft, which will include a human-machine team of crewed fighters and Collaborative Combat Aircraft.
Yet AI fighter pilots are unlikely to completely replace humans anytime soon. Machines can replicate some aspects of human judgment and may be able to complete tasks more efficiently than humans. But even if AI advances to the point that it can make every operational decision more efficiently than a human, it does not follow that the AI can apply the moral reasoning and understanding of the adversary that humans bring to combat. The future of air combat is likely to see a prominent role for the human, even if it’s not Maverick flying Mach 10 in a hypersonic jet.
U.S. Air Force Maj. Joshua Reddis, a veteran F-35A pilot and a fellow at the U.S. Air Force Warfare Center in Nevada, contributed to this article.
Caitlin Lee
leads the Center for Unmanned Aerial Vehicles and Autonomy Studies at the Mitchell Institute for Aerospace Studies in Virginia. She holds a doctorate in war studies from King’s College.
U.S. Air Force Lt. Col. Jesse Breau
is a veteran F-16 pilot and an Air Force fellow at Argonne National Laboratory outside Chicago.
Keeley Erhardt
is a computer scientist and Ph.D. candidate at MIT specializing in network dynamics, machine learning and modeling of complex systems.