Demonstrating and testing artificial intelligence applications in aerospace
By Kerianne Hobbs | December 2021
The Intelligent Systems Technical Committee works to advance the application of computational problem-solving technologies and methods to aerospace systems.
The aerospace community saw many novel applications of artificial intelligence this year, including demonstrations of AI-driven flight control, multiagent planetary exploration vehicles, autonomous navigation on Mars, AI aids to crewed missions and research into novel detect-and-avoid systems.
In March, the U.S. Air Force Research Laboratory’s Autonomy Capability Team 3 and the U.S. Air Force Test Pilot School flew the first deep reinforcement learning flight control agent on a jet aircraft as part of the Autonomous Air Combat Operations Program. The flight testing, which Test Pilot School students named “Have Cylon,” comprised a series of two-hour flights using either a single ship (a Calspan LJ-25 Learjet) or two ships (the Learjet and an F-16 Fighting Falcon). The AI agents were trained with reinforcement learning in simulation and then transferred to the Learjet without further tuning. Researchers designed the flight tasks, which culminated in a series of flight maneuvers, to identify the simulation-to-reality challenges of this zero-shot transfer approach.
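The team has not released its training pipeline, but the pattern the researchers describe, training a policy entirely in simulation and then flying the frozen policy on the aircraft, can be sketched in miniature. The Python sketch below trains a linear Gaussian policy with REINFORCE on a toy pitch-hold simulator, then evaluates the frozen policy zero-shot on a “real” plant whose control effectiveness differs from training; the dynamics, reward and hyperparameters are all invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

class PitchPlant:
    """Toy pitch dynamics; `gain` models actuator effectiveness, which we
    deliberately mismatch between the training sim and the 'real' aircraft."""
    def __init__(self, gain):
        self.gain = gain
    def reset(self):
        self.theta, self.q = rng.uniform(-0.3, 0.3), 0.0
        return np.array([self.theta, self.q])
    def step(self, u, dt=0.05):
        self.q += dt * self.gain * u
        self.theta += dt * self.q
        reward = -(self.theta ** 2 + 0.01 * u ** 2)  # hold pitch near zero
        return np.array([self.theta, self.q]), reward

SIGMA = 0.1  # exploration noise of the Gaussian policy

def rollout(plant, w, explore=True, horizon=100):
    """One episode under the linear Gaussian policy u ~ N(w . s, SIGMA^2)."""
    s, traj = plant.reset(), []
    for _ in range(horizon):
        u = float(w @ s) + (SIGMA * rng.standard_normal() if explore else 0.0)
        s_next, r = plant.step(u)
        traj.append((s, u, r))
        s = s_next
    return traj

# Train entirely in simulation with a bare-bones REINFORCE update.
w, baseline = np.zeros(2), 0.0
sim = PitchPlant(gain=1.0)
for _ in range(2000):
    traj = rollout(sim, w)
    ret = sum(r for _, _, r in traj)
    baseline += 0.05 * (ret - baseline)   # running-average baseline
    for s, u, _ in traj:                  # grad log pi = (u - w.s) s / SIGMA^2
        w += 1e-4 * (ret - baseline) * (u - w @ s) * s / SIGMA ** 2

# Zero-shot transfer: fly the frozen policy on a mismatched 'real' plant.
real = PitchPlant(gain=0.8)
print("return on 'real' aircraft:", sum(r for _, _, r in rollout(real, w, explore=False)))

A sim-to-real gap shows up here as the drop in return when `gain` changes; the flight-test tasks probed exactly this kind of mismatch at full scale.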
Also in March, the University of Cincinnati’s Aerospace Engineering and Engineering Mechanics Department and NASA’s Ames Research Center in California developed a novel traffic management system for small unmanned aerial systems, or sUAS, that can autonomously identify, track and manage large-scale sUAS operations. The research focused primarily on an intelligent conflict detection and resolution system, the Tactical Intelligent Detect and Avoid System for Drones, or TIDAS-4D, which uses high-level heuristics and a low-level fuzzy controller to keep sUAS separated. Using only current-state information, TIDAS-4D can resolve potential conflicts with or without knowledge of intruder intent. In comparisons with other state-of-the-art systems, such as ACAS-Xu, TIDAS-4D proved similarly effective at preventing near-midair collisions.
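TIDAS-4D’s actual rule base is described in the team’s publications, but the two-layer idea, detecting conflicts from current states only and then mapping the conflict geometry to a maneuver with a fuzzy controller, can be illustrated with a sketch. Here, conflict detection is a closest-point-of-approach calculation and the fuzzy layer is a two-rule Mamdani-style system; the membership functions, thresholds and commanded turn magnitudes are invented, not drawn from the TIDAS-4D design.

import numpy as np

def cpa(own_p, own_v, intr_p, intr_v):
    """Time and miss distance at closest point of approach, computed from
    current positions and velocities only (no intruder intent needed)."""
    dp, dv = intr_p - own_p, intr_v - own_v
    t = max(-float(dp @ dv) / max(float(dv @ dv), 1e-9), 0.0)
    return t, float(np.linalg.norm(dp + t * dv))

def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_turn_deg(t_cpa, d_cpa):
    """Tiny Mamdani-style rule base with weighted-average defuzzification."""
    urgent = tri(t_cpa, -1.0, 0.0, 30.0)   # CPA within ~30 s -> urgent
    close = tri(d_cpa, -1.0, 0.0, 150.0)   # miss distance under ~150 m -> close
    rules = [
        (min(urgent, close), 30.0),        # urgent AND close -> hard turn
        (1.0 - max(urgent, close), 0.0),   # neither -> hold course
    ]
    den = sum(w for w, _ in rules)
    return sum(w * cmd for w, cmd in rules) / den if den > 1e-9 else 0.0

# Near-head-on encounter: 200 m apart, 20 m/s closure, 5 m projected miss.
own_p, own_v = np.array([0.0, 0.0]), np.array([10.0, 0.0])
intr_p, intr_v = np.array([200.0, 5.0]), np.array([-10.0, 0.0])
t, d = cpa(own_p, own_v, intr_p, intr_v)
print(f"t_cpa={t:.1f} s, miss={d:.1f} m, commanded turn={fuzzy_turn_deg(t, d):.1f} deg")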
In April, the Autonomous Pop-Up Flat Folding Explorer Robot team demonstrated multiple autonomous PUFFERs cooperatively exploring an environment without a map. A series of tests focused on the new multiagent technologies: a mapping database that stores and synchronizes cost maps and updates them in response to new localization estimates; pose graph optimization using ultrawideband ranging radios when visual loop closures are not available; and a modular exploration pipeline that allows multiple rovers to explore an environment while satisfying recurrent connectivity constraints. The team completed testing with three PUFFER v4.0 rovers in the mini-Mars Yard at NASA’s Jet Propulsion Laboratory in California.
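JPL’s multiagent pipeline is considerably richer, but the range-aided pose graph idea, fusing each rover’s odometry with ultrawideband inter-rover range measurements when visual loop closures are unavailable, reduces to a small nonlinear least-squares problem. Below is a minimal sketch for two rovers in 2D using SciPy; the odometry steps, range measurements and anchored start positions are fabricated for illustration.

import numpy as np
from scipy.optimize import least_squares

# Unknowns: 2D positions of rovers A and B at three time steps each.
odom_a = [np.array([1.0, 0.0]), np.array([1.0, 0.1])]   # A's step vectors
odom_b = [np.array([0.9, 0.0]), np.array([1.0, -0.1])]  # B's step vectors
uwb = {(0, 0): 2.0, (2, 2): 2.1}  # (A step, B step) -> measured range, meters

def unpack(x):
    p = x.reshape(6, 2)
    return p[:3], p[3:]

def residuals(x):
    a, b = unpack(x)
    res = list(a[0]) + list(b[0] - np.array([0.0, 2.0]))   # anchor both starts
    for i, step in enumerate(odom_a):
        res += list(a[i + 1] - a[i] - step)                # A odometry factors
    for i, step in enumerate(odom_b):
        res += list(b[i + 1] - b[i] - step)                # B odometry factors
    for (i, j), r in uwb.items():
        res.append(np.linalg.norm(a[i] - b[j]) - r)        # UWB range factors
    return np.array(res)

# Initialize by dead reckoning from the anchored starting positions.
guess_a, guess_b = [np.zeros(2)], [np.array([0.0, 2.0])]
for step in odom_a:
    guess_a.append(guess_a[-1] + step)
for step in odom_b:
    guess_b.append(guess_b[-1] + step)
sol = least_squares(residuals, np.concatenate(guess_a + guess_b))

a, b = unpack(sol.x)
print("rover A trajectory:\n", a.round(2))
print("rover B trajectory:\n", b.round(2))

The range factors pull the two dead-reckoned trajectories into mutual consistency, which is the same correction role that visual loop closures would otherwise play.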
Since June, novel AI software developed at JPL has enabled the Perseverance rover to drive itself autonomously on Mars over much greater distances than human operators alone could plan. This Enhanced Autonomous Navigation software, or ENav, builds a 3D map of the surroundings from the navigation cameras’ stereo images and generates a path optimized to reach the goal in minimal time while avoiding hazards. ENav lets Perseverance drive beyond the terrain that human operators on Earth can see and thus make much faster progress toward the mission’s scientific destinations.
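ENav itself evaluates many candidate paths against detailed traversability costs, but the core step the paragraph describes, searching a hazard-annotated map for an efficient route, can be shown with a plain A* search on a toy grid. The grid, hazard placement and unit step costs below are invented stand-ins for ENav’s real cost maps.

import heapq

# Toy traversability map: 0 = drivable, 1 = hazard (rock, sand, steep slope).
GRID = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 1, 0],
]

def astar(grid, start, goal):
    """A* over a 4-connected grid with a Manhattan-distance heuristic,
    returning the shortest hazard-free cell path or None."""
    h = lambda cell: abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    frontier = [(h(start), 0, start, None)]  # (f, g, cell, parent)
    parents, best_g = {}, {start: 0}
    while frontier:
        _, g, cell, parent = heapq.heappop(frontier)
        if cell in parents:
            continue                      # already expanded via a better path
        parents[cell] = parent
        if cell == goal:                  # walk parents back to the start
            path = [cell]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                if g + 1 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g + 1
                    heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, cell))
    return None

print(astar(GRID, (0, 0), (4, 4)))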
In October, Campaign 6 began at the Human Exploration Research Analog facility at NASA’s Johnson Space Center in Houston. For 45 days, a crew of four subjects was to live in this confined space while conducting scientific experiments, with a focus on increased crew autonomy. The experiments included a technology demonstration of an AI assistant called Daphne, which helps astronauts diagnose and resolve spacecraft anomalies during long-duration exploration missions, when communication delays preclude timely support from mission control. The experiment will help NASA develop standards and guidelines for similar AI assistants for space exploration.
Contributors: Brandon Cook, Jean-Pierre de la Croix, Daniel Selva and Olivier Toupet