In the sky above Edwards Air Force Base in California, 1st Lt. James Wang watched from the front seat of an F-16 as the aircraft banked and dove. The goal was to practice missile evasion maneuvers, but neither Wang nor the pilot in the aircraft’s backseat was in control. Artificial intelligence was.
This “memorable” and “extremely nauseating” flight — as Wang described it in an interview — was one of several live tests conducted for the final stage of Have Remy, a U.S. Air Force-funded project focused on developing a methodology for training and evaluating tactical AI. Throughout 2025, graduating students of the USAF Test Pilot School at Edwards worked with Lockheed Martin Skunk Works to build AI models capable of maneuvering an aircraft to avoid incoming missiles. The project aimed to prove that AI agents could be trained through simplified, randomized flight simulations and then effectively transfer those skills to live flights with the X-62A VISTA, a modified F-16 testbed.
The results of the trials, which Lockheed Martin first revealed publicly in February, will inform other ongoing projects involving AI-controlled aircraft.
Have Remy follows earlier experimental flights on the X-62 during DARPA’s Air Combat Evolution program, in which the AI-piloted aircraft was pitted against conventionally piloted F-16s in human-versus-AI dogfights. Have Remy was more narrowly focused on missile evasion scenarios and differed from other autonomous-piloting programs in its methodology and collaboration with a contractor, according to participants.
For the initial phase of the project, Lockheed Martin developed “dozens of different agents,” said Air Force Capt. Young Wu, a fighter pilot who participated in Have Remy. Engineers then put the agents through a variety of simulated missile evasion scenarios that were simplified and randomized, according to Wang, a flight test engineer who participated in the program and a former AI instructor at the test pilot school.
“The agents must learn to adapt,” Wang said. “The hope, then, is when you deploy to the real world, the real-world dynamics are some intersection of all the domain randomizations you’ve done in training.”
The highest-performing AI agents graduated “from the lower- to the medium- to the high-fidelity simulations,” Wu said. For the final level of virtual simulations, test pilot students sat in the cockpit and watched the agents “perform the maneuvers that we were expecting to fly in real life.”
At least five AI agents were selected for the final phase: a half dozen live flights over Edwards last fall.
For each trial, students loaded the AI agents onto a tablet and connected it to the X-62A VISTA, Wu explained. A two-person flight crew took off manually. Once airborne, pilots selected the desired scenario and AI agent, then “hit ‘go’ on the tablet,” he said.
An AI agent took control of the jet and responded “to the simulated missile in the Live Virtual [Constructive] environment,” Wu said. Once the agent dodged the missile, was hit by it or exceeded safety parameters, the crew took over the flight controls and either landed or set up for another flight with a different AI agent or scenario.
“For the most part, we saw the expected behaviors where, say, the aircraft rolls 180 degrees and does a pull or does a weave-like maneuver airborne to avoid the missile,” Wu said. “At times, the agent would do things that, as a fighter pilot, we don’t necessarily recognize as a traditional missile evasion technique.”
In one case, Wang recalled, “the agent decided it wanted to turn left, but instead of banking left and pulling, it banked right and then pushed so that we got negative gs in the cockpit. [We were] kind of floating out of [the] seat doing this big outside turn that transitioned to a bunt and dive.”
Wang, Wu and Matthew Beard, a Lockheed Martin manager who participated in Have Remy, stressed that the project’s focus was not on whether the AI avoided all missiles — in some scenarios, evasion was impossible. Rather, they wanted to determine whether their technical methodology enabled the agents to manage the leap from simulation to real life. All agreed it did.
“What we’re hoping,” Wu said, “is that some of the lessons that we learned from this project in terms of how the agents fly, how we train them … kind of lays the groundwork for future analysis and applications of these techniques.”
Wang and Wu said they expect Have Remy to contribute to other autonomy experiments — such as DARPA’s ongoing Artificial Intelligence Reinforcements program — by providing a standardized technical method for effectively training and evaluating AI agents on aircraft.
“The standards that we established in this test management project on the technical side … that is a landmark application,” Wang said. “That is an innovation we’ve made that has also been disseminated and will carry on to the future of these programs.”
About Aspen Pflughoeft
Aspen covers defense and Congress, from emerging technologies to research spending. She joined us in early 2026 after nearly four years at McClatchy, leading international and science coverage for the real-time news team.