Q&A

Analyzing AI


Nisar Ahmed, aerospace engineering professor at the University of Colorado Boulder

Positions: Since 2021, associate professor of aerospace engineering sciences at the University of Colorado Boulder, which he joined in 2014 as an assistant professor. He oversees a group of graduate students conducting research at CU Boulder’s Cooperative Human-Robot Intelligence Lab. Since 2018, CU Boulder director for the National Science Foundation’s Center for Autonomous Air Mobility and Sensing, a program for university researchers and their students to investigate, with industry partners, the challenges related to operating drones and remotely piloted aircraft, including the coming electric air taxis. 2012-2014, postdoctoral research associate at Cornell University’s Autonomous Systems Laboratory studying how autonomous robots could learn to share information with one another.
Notable: As principal investigator of a DARPA-funded university consortium, led development of algorithms that assess the “competency awareness” of machine learning software, meaning the ability of the software to know when it won’t be able to complete a task and alert the humans in charge. Principal investigator of the Collaborative Analyst-Machine Perception project funded by the U.S. Space Force. CAMP aims to give satellite operators AI-driven surveillance visualization of the space environment to help them detect unusual or interesting events.
Age: 39
Residence: Boulder, Colorado
Education: Bachelor of Science in engineering, Cooper Union for the Advancement of Science and Art in New York, 2006; Master of Science in mechanical engineering, Cornell University, 2010; Ph.D. in mechanical engineering, Cornell University, 2012.

A conversation about artificial intelligence is like a finely woven sweater. Pull on a single thread — how machines could be taught to reason like humans, for example — and you quickly begin to unravel a series of interconnected threads: How do you make sure the AI is being trained with good information? How large of a role should humans play in monitoring the decisions AI makes? Aerospace engineering professor Nisar Ahmed has studied these and many more questions over the course of his career, which most recently has focused on the dynamics of human-AI collaboration. For the aerospace industry, the implications of more powerful AI range from machine-controlled fighter jets that could fly in formation with human-piloted craft to an AI-augmented spacecraft that assists astronauts traveling to deep space destinations, including Mars. I called Ahmed at his office at the University of Colorado Boulder to discuss these and other topics related to AI and machine learning. Here is our Zoom conversation, compressed and lightly edited.

Q: Aerospace has long relied on various forms of autonomous technology. What distinguishes those from artificial intelligence/machine learning?

A: AI is the broad field of using computation and algorithms for problem solving, and that really started kicking off in parallel with the blossoming of computer science as a field. It motivated a lot of people to ask how computers can think like people and what’s the difference, and that’s been very promising. ML only came onto the scene relatively recently, in the last few decades. It’s arguably a subfield of AI that really looks at getting computers to automatically find functions that turn data X into data Y. It’s almost like computational alchemy — “Turn this into this.” And because it’s been developed with these off-the-shelf tools and black box kinds of things like TensorFlow and PyTorch and other frameworks, anybody can use them without fully understanding exactly what’s happening under the hood. In contrast, the autonomous systems that we’ve previously used in aerospace required a lot more specialized knowledge and an understanding of the platforms, the systems and the domains you’re operating in. The big difference is that whereas those systems were primarily built around the physics of the platform and what you had to do to keep things stable or behaving a certain way, with AI and machine learning, you can make higher-level decisions that before you had to have a person make for you: where to drive the car, where to drive the airplane, where to land on a planet, what to do in situation X. Now you can empower computers to do that for themselves and take people out of the equation to some degree. The other side of it is how they solve the problems. The other systems that we used to build for spacecraft — like Mariner 10, the Voyager probes — those were autonomous but extremely basic in what they could do. Now, we have all kinds of hardware and software and sensors and platforms, and you can hook them all up to computers and enable them to do more. That allows spacecraft and other vehicles to be deployed in more situations than we previously could. But that comes with its own challenges and fundamental limits on what AI and ML can do.
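
To make the idea of “finding functions that turn data X into data Y” concrete, here is a minimal sketch using PyTorch, one of the off-the-shelf frameworks Ahmed mentions. The data and the small network are invented purely for illustration and are not drawn from his research.

```python
# Illustrative sketch of "machine learning as function finding": fit a function
# that maps inputs X to outputs Y from example data alone. The data and model
# here are synthetic stand-ins, not tied to any specific aerospace system.
import torch
from torch import nn

# Synthetic example data: Y is a noisy quadratic function of X.
X = torch.linspace(-1, 1, 200).unsqueeze(1)
Y = 3 * X**2 + 0.1 * torch.randn_like(X)

# A small neural network stands in for the unknown function X -> Y.
model = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(X), Y)   # how far the learned function is from the data
    loss.backward()               # the framework handles the calculus "under the hood"
    optimizer.step()

print(f"final training loss: {loss.item():.4f}")
```

The point of the sketch is that the user only supplies example inputs and outputs; the framework searches for the mapping, which is why these tools can be used without understanding what happens underneath.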

Q: An example I frequently encounter is automated transcriptions, which sometimes make the silliest errors. What explains why AI is so good at some tasks but so bad at others?

A: It’s just like any other engineered system, where if it’s designed well — with a scope and a purpose in mind — then it should be really good at what it’s designed to do. The problem comes when we’re trying to solve more ill-posed or open-ended problems where suddenly context and meaning and other kinds of variables that are not necessarily easily captured become important. If we’re talking about things like automatic transcription, what helps these systems improve over time is having more data and retraining them and getting access to more and more context so they learn from their mistakes. That doesn’t always translate to every single kind of problem. Self-driving cars are a great example: Even though they’ve driven millions of miles, suddenly they can run into one situation that’s nowhere in their training data set, and they don’t know what to do because they don’t recognize this object or that object or this situation. When we talk about autonomy, we mean the ability to make your own decisions, usually under uncertainty or without complete information, and being able to intelligently respond to the circumstances and situations around you. The problem is that these meanings are very fuzzy and flexible to us as people, and we know what we mean when we say that, but when you tell the computer, you have to tell it exactly what to do in those situations. At the end of the day, you need to pair the technology with the right kind of risk assessment and an understanding of what it needs to be able to do versus what it can actually do. So using something to write a document: If it makes mistakes, you can live with those mistakes. But if it’s making a mistake on the road or in the air or in space, the consequences are very, very different.
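
One common way to pair a learned system with that kind of risk awareness is to have it report how confident it is and hand the decision back to a person below some threshold. The sketch below shows that generic pattern; the classifier, threshold and input are hypothetical placeholders, not the competency-assessment algorithms from Ahmed’s DARPA-funded work.

```python
# Illustrative sketch of a classifier that flags inputs it is unsure about and
# defers to a human instead of always answering. This is a generic
# confidence-threshold pattern, shown only as an example of the idea.
import torch
from torch import nn

torch.manual_seed(0)
classifier = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))

def classify_or_defer(x, threshold=0.8):
    """Return a class label, or None to signal 'ask the human'."""
    with torch.no_grad():
        probs = torch.softmax(classifier(x), dim=-1)
    confidence, label = probs.max(dim=-1)
    if confidence.item() < threshold:
        return None  # too uncertain: hand the decision back to a person
    return label.item()

# An untrained network is usually not confident, so this will typically defer.
print(classify_or_defer(torch.randn(1, 4)))
```

A threshold like this is a crude proxy for “knowing what you don’t know,” which is part of why consequence matters: a wrong transcription is an annoyance, while a wrong call on the road or in the air is not.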

Q: That reminds me of the MIT researcher Josh Tenenbaum, who’s studying how humans are able to make such big inferences from such little information. It makes me wonder if AI can ever be taught to fully think and reason like a human.

A: It’s a fascinating question. People are really good at filling in the details or coming up with some kind of a model of what’s happening and forming beliefs around what they think they want to have happen. Another secret ingredient to human cognition is this desire element, which machines don’t inherently have. They’re programmed to do what we want them to do, so they don’t necessarily have this desire to go out and find the answers to all these questions, because they’re just built to answer certain questions. That goes back to why they’re so good at some things but not others: They’re very narrowly designed to solve very specific problems in very deep ways, but then they are not able to generalize that very easily the way humans can, because we are much more adept at finding those kinds of connections and associations without necessarily having to be very precise about it. It’s an odd mix. I always go back to the idea that intelligence is not just being able to solve problems. It’s also being able to ask the right questions and then taking logical steps to find more information and detail about what the question really means. And then you find the answer eventually, but you learn more by asking than you do by just having an answer that you always throw at the same problem all the time, which is essentially what machines are doing.

Q: For the aerospace industry, what are the areas where you think AI and ML will make the biggest difference?

A: The Holy Grail for robotics is to have something that’s as capable as a human in terms of adaptability and intelligence but even better, with more computational horsepower. In the near term, we see things like air taxis and the next generation of unmanned aerial systems. How do regulators understand whether these systems are safe to use and how they should be deployed, as well as how they work with the actual people operating inside these systems? In truth, there can never really be an entirely solo AI system because no one would care, right? Maybe we can automate things that are really monotonous, dull work that doesn’t require anybody to be there. But very often, those kinds of problems already have pretty good guardrails and standards set up, so we don’t need people to do them. But if you’re talking about space exploration, landing on another planet or exploring the moon, doing search and rescue with drones to look for other people, you’ll likely always need humans. There is a lot of information, and every problem is going to be a little bit different; oftentimes, people will immediately understand what’s going on, but then they have to figure out a way to tell this machine what to do with it. And the machine has to be able to go back to the person and say, “Where can you help me, and what am I missing?” So in the end, it needs to be human-centered, and the trick that we think we have at our disposal is to design these algorithms so they’re exposed, to some extent, to the users in order to really work at that level. For example, very often the people who design things like autopilots aren’t necessarily pilots; they don’t necessarily know how to fly airplanes, but they understand control theory, and they understand physics, flight mechanics. In the same way for autonomy, the people who design robots to go out and look for people out in the wild are not search-and-rescue experts; they’re computer scientists and roboticists. What do they know about what a real search mission involves? So to some degree, allowing the user to help reprogram the system, keep up the programming or maintain it is part of the challenge. How do you make it possible for these algorithms to still work the way they’re supposed to work and still do things the right way, without sacrificing the ability of people to inform them in whatever situation they’re in?

Q: Human spaceflight seems like an interesting case, because not only will a high degree of AI and autonomy be required for things like missions to Mars, but the humans would be uniquely reliant on the AI.

A: Trust is an interesting concept. As designers and as engineers, there are things we can do to engender that trust in human-AI interactions or make sure that the right levels of trust are there. At the end of the day, it comes down to whether we understand what people expect of these systems in the first place. What is the person’s job versus what is the machine’s job? Sometimes they have to work together because there’s no other way to do it; other times it will be a choice of whether they get to work together, and then that’s where things get a little bit more fuzzy and difficult. Self-driving cars as an example — should you take the wheel or not? Imagine your car is driving itself down the middle of the highway, and the human driver suddenly gets control of the wheel and isn’t ready for it. There’s a term I heard recently called locus of control, and in cognitive science, just like when you’re driving your car, you have a mental model of the motion you’re going to get if you turn the wheel or hit the brake or do something. If you abstract away too much of that and you detach people from the problem, that can be a lot harder to grab onto. And then people start reverting to different patterns of behavior to try and maintain whatever locus of control they have, or they start misusing or abusing autonomy because it doesn’t quite line up with their mental model of how they want things to go. So sometimes it comes down to whether or not it’s convenient to use it. For things like sending people to Mars or exploring the moon with robots, we have to make it possible for the people working in those environments to adapt those systems the way they need, because we’re not going to have ground support for long periods of time; you’re not gonna be able to send it back to the shop. Sometimes that dictates simplicity in the design of the system. But it’s hard to guarantee trust, because it’s a personal choice at the end of the day. Training people to understand what the systems can and can’t do and what they can and cannot do with the system will be the key aspect there. But allowing that fluidity, and allowing that flexibility so that people still feel like they have the control, is going to be one of the harder things.

Q: And depending on the application, the users might not have training. As a member of the flying public, I wouldn’t get the opportunity to learn about the AI controlling my airliner, if we ever have self-flying aircraft.

A: I was part of a panel at a robotics conference a few years ago where this exact same question came up: What if you don’t know who’s flying the plane? The cabin doors are locked, so it could be a robot for all you know. It’s not an irrational fear. You trust the human pilot because you know that you both have a fear of death. If the aircraft is about to collide with something, you can trust that they’re going to take steps to avoid that, so even though you’re not in control, you at least understand what’s going on or you can have some capability or feeling that puts your fears at rest. Whereas if you’re dealing with a black box system that doesn’t communicate with you and is thinking about the problem in a totally different way that humans can’t necessarily comprehend, then that becomes a different prospect. And even with astronauts or people who are highly trained and skilled in certain areas, there can still be a range of reactions. We have a research project with the U.S. Space Force that’s looking at automated target tracking and classification for satellite-based surveillance systems. AI is really good at crunching really large amounts of data and analyzing all this information and trying to sort it and prioritize it for people to look at so that it helps them do their jobs a little faster and tags interesting events. But the challenge is that people have their workflow, and they’re trained a certain way. So if the system presents information with even the wrong color or the wrong font shape and size, those are things that seem completely trivial to us as engineers, but somebody who’s in the situation depends on those visual cues to order their tasks. Like, saying there’s a 20% chance it’ll be cloudy versus an 80% chance that it’ll be sunny means different things to us even though they’re the same. Those are the kinds of things that are hard to teach your computer, and it doesn’t necessarily show up in the data all the time.

Q: Can you elaborate on the friction or areas of tension you’ve seen in human-AI interactions?

A: Good teams, even human teams, don’t work without friction. They work through friction. You actually need that conflict to slow down and reflect and think about what the other agent or person or machine is trying to tell you, instead of just blindly accepting it or ignoring it. At CU Boulder, we did a study for a search-and-rescue scenario where we had people help reprogram the algorithms on the fly by providing new information as it came in. They could draw in features on a map that were not there before and assign semantics to that, and then the AI could actually come back and ask questions. While it was clear that the AI and the human had more or less the same picture of the world, the actions that resulted from that were not necessarily agreeable to all the humans who were interacting with the system. They would actually try to hack the perception, almost tell lies to the machine to get it to behave the way they wanted it to behave given the information that they thought they just provided, instead of just trusting that the machine knew exactly what it was doing and solving this really complicated optimization problem. That showed us that people expect a certain kind of interaction or a certain kind of behavior based on the information that they give. In aerospace, computers are often built to make decisions by the OODA loop method — observe, orient, decide and act — but that’s not how people make decisions. People don’t just want to give information; they also want to suggest actions. And they don’t want to just suggest actions; they also want to give you information. Designing something that can accommodate both those approaches can be a little challenging.
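
For reference, the OODA loop Ahmed describes is typically implemented as a simple repeating cycle. The sketch below shows that structure with hypothetical placeholder functions; it is not the perception or planning code used in the CU Boulder study.

```python
# Illustrative sketch of the observe-orient-decide-act (OODA) cycle. The sensor,
# world model and decision rule here are invented placeholders for illustration.
import random

def observe():
    """Stand-in for reading a sensor; returns a fake bearing measurement."""
    return {"target_bearing_deg": random.uniform(0, 360)}

def orient(belief, observation):
    """Fold the new observation into the system's picture of the world."""
    belief["last_bearing_deg"] = observation["target_bearing_deg"]
    return belief

def decide(belief):
    """Pick an action from the current belief."""
    return "turn_left" if belief["last_bearing_deg"] > 180 else "turn_right"

def act(action):
    print(f"commanded action: {action}")

belief = {}
for _ in range(3):  # each pass through the loop is one OODA cycle
    observation = observe()
    belief = orient(belief, observation)
    act(decide(belief))
```

The loop only accepts observations and emits actions, which illustrates the mismatch Ahmed points to: a person interacting with it may want to volunteer an action and explain the information behind it at the same time.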

Q: I can also imagine that in many cases, humans chafe at the idea that they are equal participants with the AI, instead of clearly being in charge.

A: I’ve heard stories that some fighter pilots don’t like the idea of having the plane flown for them because they don’t know that it’s going to get done the way that they think it should be done, even if it’s done in an objectively better way. But I think it depends on the person, and it depends on the context. For some people in high-risk, high-tempo situations, they don’t want to depend on something that they don’t completely understand in the heat of the action. And in some cases, you can’t stop and ask questions. In other situations, you have that ability to deliberate and to question and to go back and forth. But knowing when that is the case is tricky, and very often it’s up to a person to decide. But if the person doesn’t know what the system is capable of doing, then they won’t know when the right time is to do that. So it’s a bit of a chicken-and-egg problem. That’s why getting people into the design process sooner rather than at the end is important. It’s unfortunate that in aircraft design, very often the pilot is the last person, or the controls the last thing, you think about, whereas maybe you should design for handling and controllability before everything else. The same would be true for the algorithms: Maybe we have to have a more holistic picture of how these things work, how people and these machines work together first, and be OK with not knowing exactly how that might turn out later, but give them enough guardrails and affordances to adapt in the moment.


About Cat Hofacker

Cat helps guide our coverage, keeps production of the magazine on schedule and copy edits all articles. She became associate editor in 2021 after two years as our staff reporter. Cat joined us in 2019 after covering the 2018 congressional midterm elections as an intern for USA Today.
