Lockheed Martin Focused on Applying Existing AI Solutions at Scale
Las Vegas — “AI will be the mainstream fabric of everything we do going forward,” John Clark, Lockheed Martin’s senior vice president of Technology and Strategic Innovation, said during the AIAA AVIATION Forum, kicking off a full day of AI-centered programming for the aerospace community.
Speaking with Graham Warwick, executive editor of Technology for Aviation Week, Clark discussed a range of challenges confronting the defense aerospace sector in today’s AI race, while sharing Lockheed Martin’s specific path forward, embracing a modular deployment of AI focused on use cases and iterating fast from existing models.
Clark knows whereof he speaks: part of his leadership responsibility is overseeing Lockheed Martin Ventures, which invests in early-stage companies developing technologies in strategically important areas like AI, robotics, cybersecurity, space, and advanced materials. A chemical engineer by training, Clark previously led Advanced Development Programs (ADP), also known as the Skunk Works®, for Lockheed Martin Aeronautics, where he set the strategic priorities for the growth engine of the business.
The Evolution of AI: From Theory to Practice
Clark recalled his first AI project in 2007, when Lockheed tried to determine adversarial intent based on the actions an adversary was taking, such as turning on multiple radars. “At that time, the computational density was not there,” he said.
The landscape changed dramatically with the advent of technologies like Nvidia’s CUDA (Compute Unified Device Architecture). The computing platform “unlocked a lot of computational capacity and capability,” enabling the practical application of AI techniques that had existed in theory for decades, such as reinforcement learning and expert systems, Clark said.
Today, the challenge is less about inventing new AI concepts and more about responsibly applying existing ones at scale. “There is a requirement for stuff to work reliably, responsibly, and ethically,” he emphasized, especially in Department of Defense (DOD) ecosystems where determinism and reliability are paramount.
Leveraging Commercial AI
Clark noted the limitations of even the largest aerospace and defense contractors when it comes to competing with commercial AI giants, pointing to massive investments by companies like Google, Meta, and OpenAI. Lockheed Martin isn’t trying to compete with these AI firms; instead the company is partnering with leading AI modeling providers and iterating rapidly to integrate the best available technologies. This approach, Clark argued, is not only pragmatic but also economically sound, as the commercial sector’s investments drive innovation that the defense industry can leverage.
Clark said that the competition between AI solution providers has allowed the United States to be a leader in AI, especially within U.S. capital markets. “It’s driving a whole new set of capabilities,” he noted.
Regulatory Challenges & the Imperative of Nuclear Energy
Restrictive regulations, however, are a major area of concern, said Clark. He cited as an example green energy regulations that favor solar and wind over a new generation of modular nuclear reactors, which he said have hindered progress in meeting the power demands of the data centers that fuel advanced GenAI systems.
“Solar panels aren’t going to charge one-gigawatt data centers powered by AI,” he observed, pointing out that an emerging ecosystem of small modular reactor developers has struggled to bring the technology to the United States because of regulations inhibiting deployment. He said one company ultimately chose to test its technology abroad after encountering restrictive U.S. regulations.
Clark reiterated that the regulations must be applied across the board, from the silicon-level base, to the power sector, to the capital markets that can fund people to innovate and introduce new capabilities.
“We need to embrace the right regulations here that allow people to run fast – and then put the right framework in place that can persist over time in a responsible way,” he said.
Another recurring theme in Clark’s remarks was the importance of modularity. He rejected the notion of “one AI to rule them all,” and instead said the market will embrace different AI techniques to address specific problems.
“It all becomes centered around what specific use case or what specific problem we’re trying to solve,” he said. “Whether it’s reinforcement learning for radar optimization or large language models for knowledge management, the key is to apply the right technology to the right challenge.”
Democratizing AI Across the Workforce
Lockheed Martin has made significant strides in democratizing AI within its workforce. Clark described a system where individual engineers can develop AI agents for personal productivity, but broader deployment is managed within a framework to ensure scalability and resource efficiency.
He said the company’s Genesis platform, for example, boasts 70,000 users—over half of Lockheed’s employee base—who regularly engage with AI tools. This approach balances innovation with oversight, preventing resource waste and ensuring that AI deployments are both effective and cost-conscious.
AI in Action: Real-World Impact
One of the most compelling examples Clark shared was the use of AI to enhance the Aegis radar system for the U.S. Navy. By applying reinforcement learning algorithms to radar data, Lockheed Martin was able to update radar parameters daily, improving the system’s ability to detect low-speed threats without overwhelming operators with clutter.
“That’s evidence of where AI has been used as a tool, and what it’s done is that there are behaviors that it’s been able… to pluck out the exact elements that historically would have been seen as clutter, and identify them as threat tracks,” Clark explained.
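Clark did not describe the algorithmic details, but the pattern he outlines, a reinforcement-style loop that retunes detection parameters from daily feedback, can be sketched in miniature. Everything below (the candidate thresholds, the reward model, and the epsilon-greedy update rule) is an illustrative assumption, not Lockheed Martin’s implementation:

```python
import random

random.seed(42)

# Hypothetical candidate detection thresholds; not real Aegis parameters.
THRESHOLDS = [0.2, 0.4, 0.6, 0.8]

def simulate_day(threshold):
    """Toy daily feedback: lower thresholds catch more slow targets but
    admit more clutter; reward peaks near 0.6 in this made-up model."""
    detections = 1.0 - threshold               # slow-target catch rate
    clutter = max(0.0, 0.6 - threshold) * 2.0  # false-alarm penalty
    return detections - clutter + random.gauss(0, 0.05)

def tune(days=2000, epsilon=0.1):
    """Epsilon-greedy bandit: occasionally explore a random threshold,
    otherwise exploit the best running-average reward seen so far."""
    counts = [0] * len(THRESHOLDS)
    values = [0.0] * len(THRESHOLDS)
    for _ in range(days):
        if random.random() < epsilon:
            i = random.randrange(len(THRESHOLDS))      # explore
        else:
            i = max(range(len(THRESHOLDS)),
                    key=lambda j: values[j])           # exploit
        reward = simulate_day(THRESHOLDS[i])
        counts[i] += 1
        values[i] += (reward - values[i]) / counts[i]  # incremental mean
    return THRESHOLDS[max(range(len(THRESHOLDS)), key=lambda j: values[j])]

best_threshold = tune()
```

The point of the sketch is the feedback loop, not the numbers: each “day” of operational data nudges the system toward settings that surface slow-moving tracks without flooding operators with clutter, mirroring the daily parameter updates Clark described.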
Ethical and Security Challenges
Clark also addressed the ethical considerations posed by AI, particularly in military applications. He highlighted the paradox of adhering to strict ethical standards when adversaries may not.
“We have to figure out how we navigate that effectively, so that we can be on the moral and ethical side of the equation, yet [not] compromise ourselves when it comes to an adversary that maybe doesn’t fight behind the same rules,” he said.
Security concerns also loom large, especially when integrating AI into classified programs. Warwick raised an audience question about the hurdles of bringing AI into classified programs and into platforms that are highly sensitive: “How do you tackle that? Do you have to fence them off somewhat into a separate, different approach?”
“Unfortunately, the answer to this is dependent on your customer,” Clark said, noting that the approach often depends on the customer’s risk tolerance, with some willing to innovate and others imposing strict controls that can stifle progress.
Warwick cited an instance in which the U.S. Army’s first big model-based systems engineering program used two models – a government model and an industry model – because of security concerns.
“There was this constant interchange between the two and bandwidth was getting unbelievably big. If we layer AI on top of that, and have to start doing an interchange, does it make it better or worse?” he asked.
Clark emphasized, “AI could make things better but at the same time, any technology, even a good technology employed the wrong way, can make things worse.”
Lessons from the Banking System
He continued, “This is not an AI problem; it’s a problem inside our DOD ecosystem — we spend so much time trying to protect data that we actually impinge upon our ability to do our job. If we put the right network ecosystem in place and still have the right cybersecurity protections in it, it actually would alleviate a lot of these problems that I think are self-imposed.”
While acknowledging that “China is trying to get into every one of our networks every day of the week,” Clark pointed to how the global financial market has used AI to stay ahead of adversaries.
“Our whole banking ecosystem is one of the largest attack surfaces in the world. There’s a lot of bad guys trying to break into the banking system and you don’t read every day about a different bank being hacked,” he said.
Though vulnerable to the same threats facing the defense industry, banks and entities like the stock market are using AI to look for behavioral trends and patterns. He said the DOD “can learn a lot from what’s happening in the banking system and not self-impose these constraints.”
Building an AI Supply Chain and Ecosystem
Looking ahead, Clark sees the emergence of a robust AI supply chain as inevitable. He envisions a vibrant ecosystem where commercial, DOD, and in-house innovations blend to create best-in-class solutions.
“Those companies that are able to figure out how to find that sweet spot of bringing new stuff in from a supplier, getting it into their ecosystem, blending with the things that they already have organically inside of their business… that’s where you’ll see the differentiation,” he predicted.
Contrary to fears that AI might de-skill the next generation of engineers, Clark argued that it can actually accelerate their development. The defense prime is using its in-house AI capabilities to capture the expertise of Tech Fellows nearing retirement, bridging the gap between experience and innovation.
“There’s an opportunity space where you actually use AI as part of your training curriculum,” he said, sharing how early career engineers can gain access to knowledge that would otherwise take years to acquire.
The Human Element: AI and the Future of Piloting
Finally, Clark addressed the perennial question of whether AI will replace human pilots. He sees a future where humans and AI work in tandem, each augmenting the other’s strengths.
“We’re not removing humans from the system. Humans are going to exist somewhere in an ecosystem, one way or the other,” he concluded.
In this new era of AI, Clark’s insights underscored the importance of modularity, responsibility, and human-AI collaboration across aerospace and defense.
Positive Audience Reaction
Clark’s remarks earned high praise from Cincinnati-based attendee Doug Shafer, a nuclear engineer who retired from GE six years ago. Shafer, who spent most of his career in aviation, came to AIAA AVIATION Forum and ASCEND because of his love of technology.
“This is the best talk I’ve heard since I’ve been here,” he said. “I happen to be a nuclear engineer. [Clark] was talking about nuclear power as the next generation of power. I absolutely 100% line up behind that because I also have interest in the ASCEND program and their number one requirement is power and it’s all nuclear fusion power. I also like the idea that he didn’t advertise AI as the ‘be all, end all’ — that the human is still in the loop and it’s an accelerator or an enabler, but hardware still is the deliverable. He also had a healthy level of sarcasm, which I appreciated, because he’s not just embracing the talking points. Very, very impressive.”