Terminal 2 of Ronald Reagan National Airport in 2014. Credit: Mario Roberto Duran Ortiz via Wikipedia Commons


Artificial intelligence is already being woven into the very fabric of air travel, from ticket pricing to airline marketing, from baggage handling to ground operations, and from predictive maintenance to air traffic management.

And that means legal and insurance implications for lawyers.

Speakers attending the Navigating Artificial Intelligence in Aviation meeting on July 1, organized by the Royal Aeronautical Society’s Air Law Specialist Group, are already weighing the risks.

Held at the London offices of aviation law firm DAC Beachcroft, the meeting included lawyers warning of potential legal risks regarding data protection and competition law. At issue? If data from rival carriers or travel agencies is used in an airline’s ticket pricing algorithms, for instance, it could look like collusion and an attempt at cartel-style price fixing.

“Collusion is illegal whether it’s performed directly or indirectly through AI systems, and that’s something regulators are starting to look at,” said Solange Leandro, a competition lawyer at Watson, Farley and Williams, a London-based transportation law firm.

With a price-optimization algorithm that sets air ticket prices dynamically, she said, “consumers will be worse off if competitors align their prices. And we’ve already seen a few cases in the U.K. where competitors have used AI to avoid undercutting each other: They fed in their own confidential information and it allowed AI software to maintain prices.”

She added: “So it’s important operators understand how AI software works, what information it uses, and where the information is sourced — so there is no risk of collusion between competitors.”

Understanding data sources is also vital where airline passenger information is used for training AI algorithms, as some jurisdictions — like the European Union and the United Kingdom — have General Data Protection Regulation legislation in place that gives consumers the right to see a copy of data that’s about them, and modify or delete if they so wish. “So you have to think about how those rights are going to be complied with. How would you identify such data and extract it from the system?” said Peter Given, a data protection and privacy lawyer at DAC Beachcroft.

Systems requiring such training on personal data, Given suggested, could include emerging “agentic AI” that, for instance, might help airlines differentiate their service offerings by dealing automatically with flight disruptions — perhaps “prioritizing higher commercial value customers” when it comes to rebooking flights, for example. This underscores the need, he said, for companies to conduct data protection impact assessments to identify all possible risks of using customer data this way.

But AI is not only being used airside. Cyberattackers, too, are using generative AI to draft what Given calls “ever more convincing, harder to spot” phishing emails that can lead to devastating ransomware attacks — which are up 600% on a year ago in the aviation sector alone, according to security research published in June by Thales.

The report notes that in addition to “compromising flight operations,” bad actors are now mounting “cyberespionage” attacks to access sensitive avionics, communications and flight manifest (human and freight) data — and attempting to disrupt supply chains, too, with motives including “financial gain, ideological agendas and state-sponsored influence operations.” AI, the report reads, is expanding the scope of cyberthreats, but the technology needs to be harnessed as an “ally” to fight those threats, too.

Even pranksters can game generative AI to harm brands, Given said. One cheap-but-effective trick is a “prompt injection attack” in which an AI chatbot is tricked into giving up sensitive information. He highlighted two that he thinks the aviation sector should be “mindful” of: one at logistics firm DPD, in which the firm’s chatbot was gamed to recommend customers use a rival service, and another at car maker Chevrolet, where some beguilingly smart prompting made the chatbot agree to sell a $76,000 Chevy for a “legally binding” $1.

The conference also addressed the challenge of insuring aviation in the age of AI, with Roland Kuesters, senior underwriter and legal counsel at insurer Munich Re in Germany, addressing a widespread industry misconception: that regular aviation liability insurance covers cyberattacks on the aviation system.

Air accidents that “cause bodily injury and property damage” are covered by aviation liability insurance, Kuesters said, cyberattacks are not because they generally do not meet these criteria. Separate cyber insurance policies do now cover cyberattacks, regardless of whether the attacks are assisted by AI. He pointed to an unnamed airport whose online check-in service was “compromised by an AI-powered ransomware attack” that stole passenger data and demanded a 10,000 bitcoin ransom. The cyber insurance paid up, Kuesters said.

He also said he expects the increasing use of AI on airliner flight decks to affect accident insurance claims. In particular, he anticipates more product liability lawsuits where the airliner is the product, rather than pilot/operator error lawsuits.

“With the higher level of automation in all aspects of flight operations, the operative input of pilots has reduced over time, resulting in a higher dominance by the technology,” he said. “So in the case of an accident or a loss, the contribution from technology, instead of human failures, is increasing.”

It’s not all about what’s in the air, either, attendees heard. AI is also happening on the ground, said James Bell, head of innovation strategy at the U.K. Civil Aviation Authority.

“There’s already a lot of AI deployed in the airport infrastructure trials that are underway — with autonomous baggage carrying carts under investigation, for example,” he said.

At insurance company Global Aerospace, client executive Kiko Hama agreed that aviation ground operators are getting innovative: “We know that many airport ground handling companies already use AI to try and make their operations a lot more efficient. And we continue to insure them to make those sorts of operations ever more efficient and hopefully reduce risk.”

“Insurance is an enabler of this new technology,” she said.

Share.

About Paul Marks

Paul is a London journalist focused on technology, cybersecurity, aviation and spaceflight. A regular contributor to the BBC, New Scientist and The Economist, his current interests include electric aviation and innovation in new space.

Exit mobile version