Q&A

NASA’s AI czar


David Salvagnini, NASA’s chief AI officer

Positions: Since May, NASA’s inaugural chief artificial intelligence officer, an expansion of the chief data officer role he began in June 2023. 2005-2023, various roles at the U.S. Defense Intelligence Agency, including chief data officer. 1984-2005, active duty in the U.S. Air Force, 10 of those years as a communications and computer systems officer.
Notable: At the Office of the Director of National Intelligence, led the agency through a two-year effort in which the intelligence community and foreign allies updated standards for data sharing. Since 2018, FAA-certified flight instructor; accumulated 3,462 combined hours of flying and instruction time.
Age: Declined to say
Residence: Falls Church, Virginia
Education: Bachelor of science in education, 1994, Southern Illinois University.

David Salvagnini likens his role as NASA’s chief artificial intelligence officer to that of an orchestra conductor: He’s not playing the instruments himself, but he must lead and guide the players. His May appointment by Administrator Bill Nelson was in response to an executive order issued by President Joe Biden last year, directing the Office of Management and Budget to make sure each federal agency designates a person to take “primary responsibility” for ensuring the appropriate development and use of AI software. For NASA, Salvagnini believes that AI could do everything from helping to create more accurate simulations of the lunar surface for astronaut training to generating financial reports. But there are also potential pitfalls, such as staff or contractors unknowingly creating artist renderings based on copyrighted materials. Salvagnini knows that he and his staff must guard against such missteps. I reached Salvagnini in his home office in Virginia to find out how he plans to get NASA’s 18,000 employees and numerous contractors all playing from the same sheet of AI music.

Q: How will you balance the innovation and oversight aspects of the chief AI officer role?

A: I was asked to lead the NASA tiger team when the executive order was released late last year to think about how we could best manage this activity. So when I first started talking to leadership at NASA about this role, I said, “Well, you know, there’s complying with these actions that our overseers in the Biden administration are asking us to respond to, but there’s also what’s in the best interest of NASA.” I’ve pitched the role as doing both. When you think about building coalitions within an organization, you don’t necessarily build a lot of support and momentum around, “Hey, we’ve got to comply with something.” You do, however, when you start to show up as a value-added partner to an organization, and you want to help them in their adoption of AI, and you want to equip them to do so in a responsible, ethical and transparent manner. As you start to demonstrate those kinds of behaviors in your role, you start to build a lot of partnerships, and you’re able to build momentum. So that’s really what I endeavor to do in the role. Yes, I’ve got to take care of those compliance requirements, but I also want to do what’s right for NASA. And really what’s right for NASA is equipping the workforce and our leaders to understand how to navigate all the change that’s among us. In particular, the change related to generative AI, which I think is really where the Biden administration’s executive order and guidance are focused.

Generative AI refers to software trained on massive quantities of information to generate text, images and other materials in response to prompts from users. — CH

They’re focused on privacy, ethical use, responsible use, transparency, because now we’re taking AI out of the hands of experts with niche, very specific use cases and putting it into the hands of, effectively, everyone. If you don’t equip the workforce to understand the implications of AI use in what they do, you could create some unforeseen consequences.

Q: To what extent can NASA’s existing policies for vetting new tools and technologies be applied to vetting AI tools, and where do you think you’ll have to create some new processes?

A: That’s a great question. Let’s talk about AI from a number of different vantage points. First, the systems engineering lifecycle: Any new technology that makes its way into a space-based platform or an aeronautics system is rigorously tested under that lifecycle. So that’s unchanged. AI is a new form of technology that offers a lot of promise and will go through that same kind of rigor.

When you think about some of the science applications and how the scientific community validates discoveries and conducts peer review, the scientific methodology, at its core, is about being able to repeat results and verify them. There’s a lot of rigor there as well, and there’s a lot of methodology in the open science community around the science lifecycle.

And let me talk about cyber for a moment. When you think about how you onboard a tool, bring it into the environment, and assess and authorize it for use within the nasa.gov protected boundary, there are very well-established risk-control mechanisms. In fact, NIST [the National Institute of Standards and Technology] released a document last year dealing with a risk management framework for AI tools, so that dovetails very nicely into the existing cyber policies.

So a lot of that is intact. What’s not intact is that there are probably some nuanced differences around understanding AI: how to assess it from a security risk framework perspective, how to test it, how to assure transparency. In other words, it’s not just a magic black box; we need to understand what’s going in, why what’s coming out is coming out, and we have to have mechanisms in place to verify that there’s not model drift and that we have reliable outputs from those systems. There are some nuanced differences there that we’ll be looking to address.

The primary focus, however, is going to be equipping the workforce. In other words, skilling the workforce to be able to safely handle the tools that are going to be dropped in their laps, effectively, through their desktop automation software: things that are going to be provided by Microsoft, for example, or Google, or some of the other cloud providers like Amazon. Making sure that people are equipped to make good decisions when they use these tools.

There are some other things, too, around copyright protection that we probably need to address. You can go out and create an image using DALL·E, but you don’t necessarily know the source of all the tidbits of information that went into the result.

He’s referring to DALL·E, OpenAI’s text-to-image model built into ChatGPT. Users can type prompts into the ChatGPT website or smartphone app to create “exceptionally accurate images,” OpenAI says on its website. — CH

If you’ve ever played with DALL·E, there are also some inaccuracies that find their way into some of those images. Let’s say you use it to create an image of an astronaut, and then you look at the image and find that the American flag doesn’t have the right number of stripes or the right number of stars. We’ll have to address things like that that are very specific to certain use cases in certain communities.
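Earlier in the answer, Salvagnini notes that NASA needs mechanisms to verify there is no model drift. As an illustrative sketch only, and not a description of any NASA process, one common approach is to compare the distribution of a model’s recent output scores against a reference window gathered when the model was validated; the data below is synthetic and the alert threshold is arbitrary.

```python
# Illustrative sketch of drift monitoring, not a NASA procedure.
# Compare recent model output scores against a validated reference window
# and flag a statistically significant shift in their distributions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)

# Hypothetical scores recorded when the model was validated.
reference_scores = rng.normal(loc=0.70, scale=0.05, size=1000)

# Hypothetical recent scores; the shifted mean simulates drift.
recent_scores = rng.normal(loc=0.62, scale=0.05, size=1000)

result = ks_2samp(reference_scores, recent_scores)
if result.pvalue < 0.01:  # arbitrary alert threshold
    print(f"Possible drift: KS statistic={result.statistic:.3f}, p={result.pvalue:.3g}")
else:
    print("Recent outputs look consistent with the reference window.")
```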

Q: How much of this training will be giving employees new technical skills as opposed to teaching them NASA’s guiding principles? For instance, the idea that the human is accountable for accuracy even when AI is used.

A: At the NASA town hall in May, I made that very point: The accountability lies with the individual, not with the AI. It’s the individual’s responsibility to use the AI in a way that is well-suited to the use case (so you’re not using a hammer where a wrench is appropriate) and to validate that the outcome from the AI is, in fact, accurate. So they understand the data that led to the outcome, or at least at some level the algorithmic approach that the AI took to get to an outcome, and then they’re comfortable that the outcome is in fact accurate.

We have a “Summer of AI” campaign going on right now, where we have made available an enormous amount of content around how to effectively understand and use these AI tools. We have to adapt it to a broad range of skills and abilities. In some cases, we have published Ph.D.s in this field who have built careers around very, very specialized use of AI. Think about autonomous systems, or maybe AI is augmenting some type of space operation. One that we often talk about is navigating the Perseverance rover on the surface of Mars, where there’s AI assistance to make sure that the movements of the vehicle are terrain-aware and safe. You also have people who maybe first heard of AI when ChatGPT broke in the news, and they really don’t understand any of what’s underpinning it. So we’ve tried to ensure that the campaign has content that applies to both.

And certainly, that highly credentialed Ph.D. still would need to learn things about the implications of generative AI use as it relates to privacy and some of the ethical implications, especially if it’s in, say, a rights-impacting use case, like if we’re hiring people and we’re now using AI as part of that. That’s not to say we’re doing that, but that’s just an example of an AI use case that would rise to the level of needing a lot of additional scrutiny, because it would have rights implications for the population at large that’s applying for positions at NASA.

The U.S. Equal Employment Opportunity Commission last year found that AI software used by an online tutoring company automatically rejected applications from people 40 and older, a violation of the Age Discrimination in Employment Act. — CH

Q: Is this scrutiny done by people in your office at NASA Headquarters, or is the plan to embed AI reviewers at the different field centers or within specific programs?

A: I started at NASA as the chief data officer. Similar to this AI role, one could extrapolate that to mean that I’m responsible for all data. How can I possibly be aware of all the nuanced aspects of handling the various different data types? It’s the same with AI. So we distribute this out; we put a framework in place; we identify people who are in a leadership role — whether it’s data or AI — to speak for their organization and represent the equities of that organization. And they really come forward where teams need assistance with navigating a specific area. We have attorneys; our OGC [Office of General Counsel] colleagues are involved. We have our privacy people involved. We have our security people involved. So we have people looking across the broad enterprise — and of course, I’m in that role too — and then we’re going to have people who are close to the mission and very aware of those specific details. So it’s a team effort, bottom line.

Q: What are some of the interesting AI applications at NASA?

A: One is augmented reality, virtual reality training. Let’s say you want to train astronauts who are preparing for a mission on various different crew procedures. AI is now being used as part of research work to create those AR/VR environments for training purposes. So for instance, you’re in a suit for an EVA [extravehicular activity] of some sort on the surface, and you’re conducting some type of operation on a lunar habitat, or you’re doing something on the surface of Mars. They can do a lot of testing in that AR/VR environment.

Think about a medical emergency for an astronaut who’s in space. You know that the astronauts are not doctors, but they need to reference some type of curated library of medical conditions to try to help quickly diagnose the condition. That’s an exciting opportunity. We have vehicles that are autonomously operated in space, and I think we’re going to continue to see more autonomy, more augmentation in the area of human spaceflight, and certainly in aeronautics as well.

There are a lot of efficiencies potentially to be gained. Think about if you’re a physician operator in Mission Control. Traditionally, you think of binders of content that you might have to parse through as part of your duties, and then you get to the point where now it’s a PDF, and you can search it. Maybe in the future, there’s a generative AI model sitting on top of it, and you can ask natural language questions and get responses back. Think about the financial performance reports that we produce on a recurring basis in the same format. Give it the new numbers, and generative AI could produce the report. These are areas that we’re piloting now to determine the efficacy of generative AI use.
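Salvagnini’s picture of a generative model “sitting on top of” searchable procedure documents generally involves a retrieval step: find the passages most relevant to a question, then have a generative model compose an answer from them. The sketch below shows only that retrieval step, using scikit-learn and a few hypothetical stand-in passages rather than any real NASA documents.

```python
# Illustrative sketch: rank hypothetical procedure passages by relevance to a
# natural language question. A full pipeline would hand the top passages to a
# generative model to draft the answer; only the retrieval step is shown here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

passages = [
    "If cabin pressure drops below the limit, don oxygen masks and notify Mission Control.",
    "Quarterly financial reports are generated from the cost ledger in a fixed format.",
    "Before an EVA, verify suit pressure, complete a communications check and confirm tether attachment.",
]
question = "What should the crew check before an EVA?"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(passages + [question])

# Cosine similarity between the question (last row) and each passage.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
best = scores.argmax()
print(f"Most relevant passage: {passages[best]}")
```

Keyword-style TF-IDF matching is the simplest possible retriever; systems built around large language models more often use learned embeddings, but the overall shape of the pipeline is the same.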

Q: When do you anticipate it being mature enough to assist with those tasks? Today, it seems like humans must review everything for accuracy.

A: It’s very use-case dependent, and a lot of the effectiveness of AI and its use depends on data quality. It’s interesting: a dataset can contain a lot of free-form data, or it can contain very highly structured data. It’s certainly easier for machines to interpret highly structured data, so there are certain use cases that are closest [to maturity]. There are certainly use cases that are already tried and true, like image detection. You’ve got 10,000 or 20,000 images and you train the model on “What does a cat look like?” Then you ask it to produce all the images that have cats. We’re pretty mature in that space, but that’s very structured data, so you’re really training it on a pattern. That’s easier for a computer. I think, in some cases, we’re going to see rapid advances, while in other cases we’re going to try some things and see that they didn’t really yield reliable outcomes, so therefore, that is not a good use of AI. I often say, “Here at NASA, we’re on a learning journey,” and I would hope that every other organization would see it the same way. There’s a lot to be learned about not only the opportunity space but also cases where the reliability starts to degrade, so you’d want to avoid those use cases.
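The “What does a cat look like?” example describes supervised classification: train a model on labeled images, then ask it to label images it has not seen. Below is a minimal, illustrative sketch of that pattern, using scikit-learn’s built-in handwritten-digits dataset as a stand-in for a labeled image collection, since the cat images here are hypothetical.

```python
# Illustrative sketch of supervised image classification: train on labeled
# examples, then measure how well the model labels images it has not seen.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # small labeled image dataset bundled with scikit-learn
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)

print(f"Held-out accuracy: {model.score(X_test, y_test):.2%}")
```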

Q: Is the expectation that the industry will follow NASA’s AI policies?

A: From a systems engineering perspective, if they’re developing systems that NASA is going to use, their work is going to be tested. It’s a “trust but verify” model where they’re doing their own testing, and then of course we have to be satisfied with the results. So for the contractor community that’s supporting us, it’s a shared responsibility; they will have as much responsibility in ensuring that the use of AI is ethical, responsible, reliable and transparent. And there has to be good transparency not only in the AI but also in the business processes leading to the various different testing events, so that the government has confidence in the work that the vendor was doing. I think where the risk starts to enter into the equation is not those types of uses, but, let’s say, a small startup company that’s rushing to market a capability that helps with hiring. Have they put as much rigor into that as we would like to see, and are we going to be confident in their assertion that the AI is reliable and bias-free? Those are some of the areas that I think are a little more of an unknown for us. But so many of our vendors are going to say, “Hey, we’ve got AI in our tool now. And in our next update, we’re going to have AI doing these five things for you.” That’s nice, but how do we verify? How do we have confidence in the approach that you’ve taken within your AI, for us to be able to comfortably use it?

Q: What would you like to have accomplished by this time next year?

A: We’re entering into our third AI inventory.

NASA has submitted this annual report to the White House Office of Science and Technology Policy since 2022, listing NASA projects that use “AI tools developed in house,” as the 2023 report reads. — CH

We’ve gotten a lot more clarity now from OMB [Office of Management and Budget] on the nature of that inventory as far as what’s in and out of bounds. We are assessing safety- and rights-impacting use cases as they relate to our use of AI as part of that inventory activity, and that’ll be published later this year. One goal would certainly be having that awareness of AI use more broadly across NASA, and building that network, having a distributed team of people. We’re thinking about all of the various different aspects of how to do this responsibly and how to really innovate in a way that’s optimal to NASA’s mission and meets, of course, the ethical and responsible-use requirements. Having that network alive and well, continuing to build upon it, and then having our governance in place. In other words, you can have people at the lowest level in an engineering organization who are extremely familiar with AI, but are they able to present their use case up to the most senior leader at NASA, and will that most senior leader understand? It’s about the team and making sure that we’ve synchronized activities. It’s almost like harmonizing the parts of an orchestra: In many ways, I’m not playing the instruments. I’m not the musical expert in any one of the musical areas. But I certainly am able to organize and synchronize activities across NASA and get everyone rowing in the same direction.


About Cat Hofacker

Cat helps guide our coverage, keeps production of the magazine on schedule and copy edits all articles. She became associate editor in 2021 after two years as our staff reporter. Cat joined us in 2019 after covering the 2018 congressional midterm elections as an intern for USA Today.
