WASHINGTON, D.C. — The U.S. aviation industry and federal regulators are still grappling with how to determine the appropriate guardrails for artificial intelligence to ensure safe aircraft operations, one lawmaker said at an event here Wednesday.
The concern is that someone could train or instruct an AI system to do something dangerous or even disastrous, Rep. Jay Obernolte, R-Calif., said at the American Aviation Leadership Summit held at Honeywell’s offices near Capitol Hill.
“You’re getting to the heart of what we are thinking about in Congress when we’re talking about federal AI regulation, such as what is the responsibility of government to set guardrails and what those guardrails should be,” Obernolte said during an on-stage interview.
Obernolte, who holds a pilot's license, chaired the House Bipartisan Task Force on Artificial Intelligence, which published its final report in December. The report highlighted how AI can boost productivity and strengthen national security but warned of risks such as malicious use of AI by adversaries, strain on energy grids and fragmented state-by-state regulation.
For routine airport operations, Obernolte said, he doesn't think AI today can or should replace human air traffic controllers, and it probably couldn't do the job better. But it could be useful in specific circumstances, such as flights into small airports that have no control tower.
“Let’s say I’m coming down into the traffic pattern and I’m looking at my traffic display with aircraft signals, and no one knows what they’re doing,” he said. “Well, AI can look at that, and based on the airport, the time of day, the airspeeds and what the target has been doing for the last couple of minutes, it can tell me, ‘This is a student doing pattern work,’” and suggest a safe approach.
He added: “That’s useful information to know, and it’s something that only AI can do.”
The bottom line, he said, is that AI shouldn't replace humans but can make them more productive.
“AI can recommend, but AI should never be deciding, particularly when, in technical parlance, the decision is a highly consequential decision,” Obernolte continued. “In aviation, if it’s something that could affect safety, that’s something that a human needs to look at before the button is pushed.”
Rules such as the International Traffic in Arms Regulations already restrict unauthorized transfers of knowledge and data related to the use of deadly military aircraft and weapons. But, Obernolte said, “let’s make sure that AI isn’t trained” to create such technology.
That isn’t a pressing worry today, he said, because an AI system asked how to create such weapons would likely produce something gleaned from web sources, including fictional accounts in Tom Clancy novels and other popular literature. That echoes findings from leading institutions, including Stanford University, that AI often lacks skepticism about its sources and sometimes fails to fact-check.
In Obernolte’s mind, the more dangerous and plausible scenario is that people who already know how to build such weapons could use AI to build them faster or more efficiently.
“We always have to be mindful of those things,” he said.
About Paul Brinkmann
Paul covers advanced air mobility, space launches and more for our website and the quarterly magazine. Paul joined us in 2022 and is based near Kennedy Space Center in Florida. He previously covered aerospace for United Press International and the Orlando Sentinel.