Advances made in the development, testing and certification of AI-based aerospace sensing and control
By Kerianne Hobbs | December 2024
The Intelligent Systems Technical Committee works to advance the application of computational problem-solving technologies and methods to aerospace systems.
This year saw notable demonstrations of artificial intelligence and intelligent robotics for air and space, as well as progress toward certification of AI-based aerospace systems.
In January, the Aerospace Corp. in California demonstrated a new approach to pose estimation using imagery from ExoRomper, its in-space machine vision testbed aboard the Slingshot-1 small satellite. The ExoRomper imagery was analyzed with Aerospace Corp.’s BetterNet method, designed to act as a watchdog for autonomous systems that rely on AI and machine learning algorithms. Once added to an existing deep network, BetterNet can assess whether the model’s predictions are backed by sufficient training and validation data to be trustworthy. In this initial demonstration, BetterNet caught nine out of 10 bad pose estimates. The Aerospace Corp. has made ExoRomper’s in-space imagery and data labels freely available on its website.
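BetterNet's internals are not public, but the watchdog idea it embodies can be sketched in a few lines: measure how far an input's features sit from the training set, and flag predictions whose distance exceeds a threshold calibrated on held-out validation data. The function names and the nearest-neighbor criterion below are illustrative assumptions, not Aerospace Corp.'s method.

```python
import numpy as np

def nearest_neighbor_distances(queries, references):
    """Euclidean distance from each query feature vector to its closest reference."""
    diffs = queries[:, None, :] - references[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1)).min(axis=1)

def calibrate_threshold(train_feats, val_feats, quantile=0.95):
    """Set the threshold to a high quantile of validation nearest-neighbor distances."""
    return np.quantile(nearest_neighbor_distances(val_feats, train_feats), quantile)

def trustworthy(query_feats, train_feats, threshold):
    """True where a prediction has nearby training support; False flags it as suspect."""
    return nearest_neighbor_distances(query_feats, train_feats) <= threshold
```

In practice such a check would run on features from an intermediate layer of the deployed network rather than raw pixels, so that "distance" reflects what the model actually learned.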
In January, NASA’s Langley Research Center in Virginia demonstrated autonomous robotic assembly under the Precision Assembled Space Structure project. Here, researchers have developed and tested autonomous systems to drive robotic manipulators to join structural modules. This includes a trajectory generation system, a vision-based pose estimation algorithm for fine alignment and software to control the end-effector tools.
The January test consisted of retrieving two 2.8-meter-tall truss modules and joining each to another module on a test stand, the first step toward a planned full-scale ground demonstration of autonomously assembling the backbone truss structure of a telescope. It also marked the first test conducted in NASA Langley’s new full-scale In-Space Assembly Laboratory.
In April, DARPA announced the completion of the first AI-versus-human dogfight, conducted between the AI-piloted X-62A VISTA test aircraft and a human-piloted F-16, as part of its Air Combat Evolution program. Across 21 test flights from Edwards Air Force Base in California, some 100,000 lines of software were changed to test and refine AI-agent performance, including defensive and offensive maneuvers. The AI-controlled X-62 maneuvered within 610 meters of the F-16 at closure rates up to 1,931 kph. In addition, the team showcased effective ethics and safety procedures by incorporating ground and aerial collision avoidance measures as well as combat training rules.
In June, GE Aerospace submitted a report to FAA documenting the first recorded attempt at generating certification evidence for AI-based digital aerospace systems using the Overarching Properties framework. NASA, FAA and other agencies are evaluating the OPs for developing an alternate means of compliance to streamline the certification process. The GE report described promising results and experiences that could help FAA shape future policies for certifying AI-based systems, a task not supported by existing standards such as DO-178C. GE also publicly released a tool on GitHub for generating OP-based assurance cases.
In June, the Local Intelligent Networked Collaborative Satellites Laboratory of the Air Force Research Laboratory’s Space Vehicles Directorate integrated autonomy software, showcasing collaborative inspection under AFRL’s Safe Trusted Autonomy for Responsible Spacecraft seedling program. Specifically, the laboratory evaluated algorithms that use reinforcement learning as the decision-maker, coupled with run-time assurance to maintain safe behaviors; the outputs were displayed on a human-autonomy interface for operator interaction and emulated on aerial platforms.
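The pattern of pairing a learned decision-maker with run-time assurance can be illustrated with a toy example. This is not AFRL's software: the stand-in policy, the one-dimensional dynamics and the distance-dependent speed limit below are assumptions chosen only to show how an RTA filter intercepts unsafe actions before they reach the plant.

```python
def rl_policy(distance, velocity):
    # Stand-in for a trained RL policy: always thrust toward the target.
    return -1.0  # commanded acceleration, m/s^2

def rta_filter(distance, velocity, action, dt=1.0, v_limit_per_m=0.1):
    """Pass the action through unless it would violate the safety constraint:
    closing speed must stay below a limit proportional to remaining distance."""
    max_closing_speed = v_limit_per_m * distance
    next_velocity = velocity + action * dt
    if -next_velocity > max_closing_speed:  # would close too fast
        # Backup action: decelerate to the fastest safe closing speed.
        return (-max_closing_speed - velocity) / dt
    return action

def step(distance, velocity, dt=1.0):
    """One simulation step of the RTA-wrapped policy."""
    action = rta_filter(distance, velocity, rl_policy(distance, velocity), dt)
    velocity += action * dt
    distance += velocity * dt
    return distance, velocity
```

The key design point is that the learned policy is treated as untrusted: the filter only intervenes when a proposed action would breach the constraint, so nominal performance is preserved while safety is enforced at run time.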
The LINCS lab is creating an environment for testing multisatellite close-proximity operations, using aerial vehicles as emulation platforms. Established in 2022, the lab aims to provide a testing environment to allow academia, industry and other government agencies to test algorithms and software in an operationally relevant environment.
Contributors: Anthony Aborizk, Benjamen Bycroft, John R. Cooper, Saswata Paul and Sean Phillips