LAS VEGAS — A group of experts from NASA and industry put to rest several myths regarding artificial intelligence (AI) certification during the AIAA AVIATION Forum in Las Vegas.
The all-star panel included Collins Aerospace’s Darren Cofer, principal fellow; Joby’s Kim Wasson, autonomy certification lead; NASA’s Natasha Neogi, senior technologist for Assured Intelligent Flight Systems; and Merlin’s Robert Voros, senior director of Product Safety Assurance. The panel was moderated by Yemaya Bordain, president of the Americas for Daedalean AI, which is creating certifiable AI-powered automation to enable autonomously flying aircraft in the future.
Myths that the panelists “busted” included:
- AI and autonomy are the same thing.
- All AI is non-deterministic.
- AI systems are not safe.
- Tools to monitor AI systems in real time do not exist.
- A whole new set of assurance methods would be needed to certify aircraft using AI.
- No regulatory agency would actually approve AI on an aircraft.
- AI will directly control the aircraft.
Key points raised by the aviation experts included the need for clear requirements and traceability to reduce human error, the importance of functional hazard assessments, and the role of probabilistic requirements in guiding hardware and software architecture.
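To make the probabilistic-requirements point concrete, consider the kind of budget arithmetic that drives architecture decisions. The Python sketch below uses entirely hypothetical failure rates and a top-level budget of the order commonly cited for catastrophic failure conditions in transport-category guidance; none of the numbers or the two-channel design come from the panel.

```python
# Illustrative sketch of how a probabilistic system-level requirement can
# drive architecture. All rates below are hypothetical, not from the panel.

# Top-level budget: catastrophic failure conditions are commonly budgeted
# at no more than 1e-9 per flight hour in transport-category guidance.
TOP_LEVEL_BUDGET = 1e-9  # failures per flight hour

# Candidate architecture: an ML-based function cross-checked by an
# independent, conventionally assured monitor channel.
p_ml_channel = 1e-4  # assumed ML-channel failure rate (hypothetical)
p_monitor = 1e-6     # assumed monitor failure rate (hypothetical)

# If the two failures are independent, a hazardous output requires both
# channels to fail at once, so the rates multiply.
p_combined = p_ml_channel * p_monitor

print(f"Combined failure rate: {p_combined:.1e} per flight hour")
print(f"Meets {TOP_LEVEL_BUDGET:.0e} budget: {p_combined <= TOP_LEVEL_BUDGET}")
```

The independence assumption is doing the real work here, which is why the functional hazard assessments the panel mentioned scrutinize common-cause failures before such multiplication is allowed.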
Concerning the myth that AI systems are unsafe, Joby’s Wasson said, “Systems are declared safe based on the ability to demonstrate processes.”
Collins Aerospace’s Cofer added, “The main way AI systems are unsafe is through unintended behaviors; that’s why we need larger validation data.”
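Cofer’s point about unintended behaviors connects to the panel’s rebuttal of the myth that real-time monitoring tools do not exist: one widely discussed pattern is a runtime-assurance (or “simplex”-style) architecture, in which a conventionally assured monitor bounds what a learned component can do. The sketch below is a hypothetical illustration of that pattern, with made-up limits and function names, not any panelist’s design.

```python
# Hypothetical runtime-assurance ("simplex"-style) guard around an ML
# component. Names and limits are illustrative, not a certified design.

def ml_pitch_command(sensor_state: dict) -> float:
    """Stand-in for a learned controller; returns a pitch command in degrees."""
    return sensor_state.get("requested_pitch", 0.0)

def fallback_pitch_command(sensor_state: dict) -> float:
    """Conventionally assured backup law; deliberately conservative."""
    return 0.0

PITCH_LIMIT_DEG = 15.0  # hypothetical safe-envelope bound

def guarded_command(sensor_state: dict) -> float:
    """Pass the ML output through only while it stays inside the envelope."""
    cmd = ml_pitch_command(sensor_state)
    if abs(cmd) <= PITCH_LIMIT_DEG:
        return cmd
    return fallback_pitch_command(sensor_state)  # revert to the assured law

print(guarded_command({"requested_pitch": 8.0}))   # inside envelope -> 8.0
print(guarded_command({"requested_pitch": 40.0}))  # outside envelope -> 0.0
```

Under this kind of architecture the learned component never has the final word, which also speaks to the myth that AI will directly control the aircraft.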

On the issue of people using AI and autonomy interchangeably, Wasson said the two are different: autonomy describes a set of desired behaviors, that is, how one wants a system to act in certain situations, whereas AI and machine learning (ML) are an implementation choice, tools that can be used to achieve those behaviors, autonomous or not. “Autonomy isn’t dependent on AI. Also, you can do lots of other things with AI and ML that have nothing to do with autonomy,” she stated.
NASA’s Neogi added that people in the space industry are well aware of these differences, since the industry has implemented autonomous systems for space-based operations without using AI.
The experts also noted that misleading information from AI (or any system) is not a new problem; it is a system-level issue that must be addressed through requirements and assurance processes.
The panel emphasized that human factors are a critical consideration in the certification and operation of all aviation systems, including those based on AI. Regulations require that the flight crew understand the behavior of the system, not necessarily the internal workings of the AI.
Wasson emphasized that humans cannot be forced to understand everything about a system; instead, the design must enable and support humans to do what is needed. This is reflected in regulatory language and design practices.
During the Q&A, the panel was asked if industry is the best entity to determine requirements for AI safety in commercial applications. “It has to be a partnership,” said Wasson, noting that industry needs agencies that can provide a depth of expertise and an independent position. The panel emphasized that civil agencies and industry are working together to ensure that human-system interaction is safe and effective.
“We need everyone — all hands on deck. Industry, agencies, the regulatory authorities — we need people who are walking through this process, building tools, building infrastructure, coming up with emerging questions,” said Merlin’s Voros.
The final audience question harkened back to people’s fundamental fear about AI, one reinforced by sci-fi movies like “2001: A Space Odyssey” and “The Terminator”: “Will AI allow computers to take over the world and kill us all?”
“Today’s AI is like a worm in evolutionary terms. I think in 10 years we will be dealing with the T-rex of AI. I’m calling it possible,” said Daedalean’s Bordain, in a tongue-in-cheek comment that echoed the panel’s unanimous view of AI’s growing power, though not of any malicious intent.
Following the talk, Robb Gregg, senior technical fellow in the Aerodynamics Group at Boeing, admitted to helping push up the number of votes for that final audience question. On whether AI could advance to the point that it would eliminate humanity, Gregg said, “It’s all a matter of how it’s applied and who applies it.” Overall, he found the session both entertaining and informative.
“This was a very broad conversation, and it really pointed out that AI can be used in many ways, on many systems. It’s going to continue to expand in use and into the future of airplanes. The panelists laid out that there are processes being put in place to show that it’s going to be predictive or will do the job intended. As long as we continue to work in that direction, I think we’ll have safe systems.”