Global Panel Addresses Tough Questions as Collaborative AI-Enabled Aircraft Programs Advance
The future of air power lies in seamless collaboration between humans and machines, according to a panel of thought leaders from the Canadian Department of National Defence, the U.S. Air Force (USAF), and leading defense firms Anduril, Lockheed Martin, Northrop Grumman, and Ribbit, who discussed “Injecting Intelligence in Military Programs” at the AIAA AVIATION Forum last month in Las Vegas.
“The most important question we should be asking when it comes to AI and technology is not ‘What does the technology do?’ but ‘How do we use it? How do we incorporate it into our operations?’” said Lt. Col. David Dunwoody, Canadian Armed Forces advisor to the Department of National Defence AI Centre.
These questions are timely given the growing significance of collaborative autonomy in U.S. defense priority initiatives like the Collaborative Combat Aircraft (CCA) program. The multi-pronged USAF initiative, part of the service’s wider Next Generation Air Dominance program, calls for developing a new type of uncrewed autonomous aircraft designed to operate alongside crewed aircraft.
“We did the foundational work at AFRL to make the CCA program of record possible,” recalled panelist Trenton White, portfolio lead for Autonomous Collaborative Platforms at the Air Force Research Laboratory (AFRL). “These aircraft are designed to work together with other uncrewed and crewed assets…and bring in data links and open architectures to facilitate collaboration.”
Defining Collaborative Autonomy
The most important aspect of collaborative autonomy is that “it’s two-way. It’s not just a person tasking an autonomous system and then walking away,” said Sean O’Brien, manager of Advanced Autonomy Research at Northrop Grumman.
Historically, there has not been a focus on computers understanding humans, but collaborative autonomy requires independent decision-making and cooperation, which implies a mutual understanding of context, Dunwoody added.
Recalling his time as a crew member on the CP-140 Aurora patrol aircraft while serving in the Canadian Armed Forces, Dunwoody said that as new crew members joined, it took a few flights before the crew meshed as a team.
“We were able to predict how the other people were going to respond,” he explained. He suggested that people should consider how AI will react to and anticipate the needs of humans, which requires shifting from viewing AI solely through the lens of software engineering to one that includes aspects of psychology.
Is AI Essential for Collaborative Autonomy?
A key question posed to the panel was whether AI is necessary for achieving collaborative autonomy, or if it is simply one of several tools at aircraft planners’ disposal.
The answer is yes, according to Lockheed Martin’s Peter McArdle. “We now have the collision of CCA, some affordable platforms that can do a broad spectrum of missions… but bringing in autonomy and AI is really what realizes the capability of these type[s] of platforms,” said the acting vice president of Integrated Systems for Advanced Development Programs (ADP), also known as the Skunk Works®.
He further noted that the advent of CCA programs and competitive dynamics have forced positive changes in how the industry works together: “The idea that you would take 15 or 16 competitors, assemble them into a consortium and then get them to agree on a common architecture… that was not possible 10 years ago. Today, it is.”
Andrew “Scar” van Timmeren, senior director of Autonomous Airpower at Anduril Industries, said AI can provide better solutions than humans in novel scenarios. As an example, he pointed to the DARPA Air Combat Evolution (ACE) program, which demonstrated air-domain agents devising novel behaviors in scenarios that humans had encountered.
“That’s been leveraged by the Air Force Test Pilot School and others. It shows that we as humans may not know the full pantheon of decision making within a particular scenario.”
However, he cautioned that AI shouldn’t be relied on in operations that could jeopardize human safety: “When you’re flying through FAA-controlled airspace, you really don’t want AI to be like, ‘Well, I think I should just change my altitude right now,’ because you’re on approach in a prescriptive route…procedures that are well understood and are for the safety of you and everybody else around you. There are use cases like edge autonomy where you can be a little freer, but there are also use cases where it is not the right answer.”

Lt. Col. Taylor Wilson, commander of the U.S. Air Force’s 40th Flight Test Squadron, sees value in autonomy and AI. He emphasized that isolating and refining the objective “really, really matters.”
“The rules of thumb used in tactical environments quickly go out the window when it comes to novel or next-gen future threat environments,” he said, noting that there will undoubtedly be complex multi-domain scenarios that humans haven’t seen before. Properly trained AI-driven technology stacks that are integrated into tactical and strategic platforms will likely go a long way to help the warfighter.
Integrating Legacy and Next-Gen Platforms
McArdle addressed another question: as the Air Force fleet continues to evolve, how do we integrate legacy crewed and uncrewed aircraft with minimal autonomous capabilities into an increasingly connected force?
“That’s one we tackle every day inside Lockheed Martin because we have a portfolio of platforms that were designed in a different era that continue to provide value to the warfighter and continue to be modified and upgraded to compete with the modern [threat]. With that said, how do we have them collaborate and operate in these new system environments?” he asked.
He outlined two key strategies: first, iterative development; and second, embracing open system architectures and transitioning to service-oriented architectures.
“That means we are able to take a traditional legacy platform and create an Open System Architecture enclave…and then bring the new capability into that platform without disrupting the whole system,” McArdle said.
Testing and the Path to AI Airworthiness
AI calls for new testing paradigms, with the panelists agreeing that rapid iterative testing is essential for both safety and progress. According to O’Brien, these systems can’t be tested in the same way as previous systems, “because in autonomy, we’re asking the systems to do things that we’ve never asked [of] systems before.”
These systems are still far from being fully certified for airworthiness. For now, these capabilities are being deemed safe for flight only in controlled test environments.
Anduril’s van Timmeren observed, “For pretty much any UAS, especially the (Air Force) ACE demos that we’ve done, airworthiness risk is assessed as high. I don’t think it’s going to be assessed anything lower than high until and unless we tailor or create some kind of new airworthiness processes or criteria to process against.”
He added that many people in the test community “are very uncomfortable because we like to see things happen a certain way.”
The Anduril leader suggested that ways to mitigate risk within the vehicle autonomy interface could include filtering out suggestions that would put the aircraft in an unsafe state. The end result may make the systems less deterministic, but at least they would be safer. “There’s still a long way to go in terms of getting test centers and the overall community more comfortable with taking those kinds of risks,” he said.
Who Pays and Who Shoulders the Risk?
Another question concerned how much financial responsibility and risk should be shouldered by industry versus the government as AI makes decision cycles much faster.
“If the government is investing any dollars in it, we have to take some responsibility, as there is inferred liability,” said White from the AFRL. “Additional rigor comes with the government test review process. We have to make sure that we’ve checked all the boxes to ensure technical test objectives are met while test operations are conducted safely.”
O’Brien noted that the government has been supportive of Northrop Grumman’s pursuit of autonomy and AI solutions, many of which the company has funded in-house.
“They want to see new ideas; they want to see the new capabilities. There’s always been a place for private and government investment. Northrop Grumman has been investing in autonomy for many, many years and will continue to do so. We’re not doing it just to throw money away. There’s a purpose for this and it’s intended to move the mission forward. We’re all in this because we believe in national security.”
The Slippery Slope of Delegating Authority to AI Systems
Dunwoody cautioned against the human tendency to become complacent, noting the “slippery slope” of gradually delegating more authority to AI.
“We think right now, there’s always going to be a human involved. But over the course of…decades of use of AI and lethal autonomous systems, we slowly give the AI more and more control,” he predicted.
Anduril’s van Timmeren indicated that DoD pilots are pursuing “very scripted scenarios,” and applauded the 40th Flight Test Squadron, which is on the cutting edge of AI and autonomous flight testing. Seeking to put people’s concerns in perspective, he added, “Skynet is not here. Mind-control robots are not around the corner, and we’re barely in the crawl phase of the crawl, walk, run process. We’ve got a long way to go before we get to that slippery slope.”
As the discussion concluded, the panel’s consensus was clear: the path to trustworthy, transparent, and effective autonomous aviation will require collaboration, rigorous testing, ethical foresight, and above all, a commitment to keeping the human at the center of the mission.
“This was an incredibly well-attended and lively session at this year’s AVIATION Forum, thanks to the diverse insights and backgrounds of the panelists,” noted session moderator Jeremy Wang, co-founder and COO of Ribbit. “The audience was brimming with questions, and the level of engagement reflects the overwhelming importance of collaborative autonomy and affordable mass to force modernization in the years ahead.”
Audience member Jeff Canclini, president of the Society of Flight Test Engineers in Fort Worth, Texas, said, “This was a group of experts who are at the forefront of AI and autonomy. They were in some agreement about the challenges, but how it’s going to manifest itself is still unknown.”