5 executives on how AI powers their Earth observations


Rapidly expanding satellite constellations gather such extensive imagery and other location data that making sense of the information is becoming impossible without heavy reliance on artificial intelligence and machine learning. The commercial sector's approach to Earth observation has attracted the attention of the U.S. National Reconnaissance Office, which operates the country's spy satellites. In 2022, NRO awarded contracts worth billions of dollars over a decade to the U.S. companies BlackSky Technology, Maxar Technologies and Planet to provide imagery to U.S. intelligence, military and civil agencies. NRO is also evaluating products and services from other companies, including HawkEye 360, which detects radio signals and geolocates their sources, and Spire Global, which collects Automatic Identification System broadcasts from ships and Automatic Dependent Surveillance-Broadcast signals from aircraft, and also puts GPS signals to work for weather forecasting. We asked executives from these companies how artificial intelligence and machine learning figure into their future plans.

There are a couple of areas in which machine learning can be powerful. The first is on-orbit data processing. Traditionally, people downlinked raw data or select derivatives. The more intelligence you can extract onboard, the easier downlinking gets: you get fresher insights, smaller data volumes to downlink, and things generally get easier. We are heading in that direction. A number of our satellites carry devices for this type of computing. We post-process radio frequency data before downlinking it, or we reprocess data to extract extra value.

For ship tracking, we extract AIS [Automatic Identification System] messages with traditional digital signal processing methods. Then we run the data once again through our machine learning processor to extract messages that we weren’t able to detect with traditional methods. For example, when a satellite is over an area with a lot of ships talking at the same time, we pull these different signals apart to process them one by one.
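In rough code, that two-stage approach might look like the sketch below. The function bodies and the separation model are invented stand-ins, not Spire's actual pipeline.

```python
# A sketch of the two-stage idea: a classical DSP pass decodes clean bursts,
# and bursts that fail are handed to a hypothetical learned source separator
# that untangles colliding transmissions so each can be decoded on its own.
import numpy as np

def decode_ais_classical(iq_burst: np.ndarray) -> list[bytes]:
    """Stand-in for the traditional chain: filter, GMSK-demodulate,
    NRZI-decode and CRC-check candidate AIS messages."""
    return []  # placeholder: pretend the DSP chain found nothing

def separate_sources(iq_burst: np.ndarray, model) -> list[np.ndarray]:
    """Stand-in for an ML model that splits overlapping transmissions
    into single-emitter waveforms."""
    return model.separate(iq_burst)

def decode_ais(iq_burst: np.ndarray, separation_model) -> list[bytes]:
    messages = decode_ais_classical(iq_burst)
    if not messages:  # likely a collision over a busy shipping lane
        for component in separate_sources(iq_burst, separation_model):
            messages.extend(decode_ais_classical(component))
    return messages
```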

As machine learning computing platforms become more power efficient and volume efficient, we’ll be able to extract more in-depth insights, like determining which messages are emanating from specific directions or specific emitters.

We also use machine learning to prepare data for ingestion into weather models and to automate satellite and network operations. The satellites are synchronized to schedules that tell them what to do and what to downlink. Then they'll say, "Give me a new schedule" or "Something's wrong." Because we've been operating for years, we have a lot of historical telemetry data showing how we've responded to specific events. We can train models on that history to get us to a more autonomous state of operations.
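A minimal sketch of that idea, with invented telemetry channels and values, could train an off-the-shelf anomaly detector on historical passes:

```python
# Train an anomaly detector on historical satellite telemetry so nominal
# passes need no operator attention. Channels and numbers are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Pretend history: rows are passes, columns are channels
# (bus voltage, battery temperature, downlink margin).
history = rng.normal(loc=[28.0, 15.0, 6.0], scale=[0.3, 1.0, 0.5], size=(5000, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(history)

latest_pass = np.array([[27.9, 14.2, 5.8]])
if detector.predict(latest_pass)[0] == -1:
    print("Something's wrong: flag this pass for an operator.")
else:
    print("Nominal: proceed with the next schedule autonomously.")
```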

One more interesting application is vessel detection. Similar to picking signals out of RF [radio frequency] data, this means looking at images and determining which objects are ships and which aren't. Then, we enable other sensors to capture information about that vessel or area.
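A toy illustration of that tip-and-cue flow, with a brightness threshold standing in for a trained detector and a hypothetical tasking interface:

```python
# Illustrative only: flag bright pixels against a dark sea as candidate
# vessels, then cue other sensors to collect on those locations. The
# detector and tasking API are invented placeholders.
import numpy as np

def detect_vessels(image: np.ndarray, threshold: float = 0.8) -> list[tuple[int, int]]:
    """Stand-in for a trained computer-vision detector."""
    ys, xs = np.where(image > threshold)
    return list(zip(ys.tolist(), xs.tolist()))

def cue_other_sensors(detections: list[tuple[int, int]], tasking_api) -> None:
    """Hand each detection to another sensor for a closer look."""
    for y, x in detections:
        tasking_api.request_collection(row=y, col=x, sensor="RF")
```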

 

We build, maintain and sustain machine learning models that we execute at a global scale with hundreds of terabytes of data weekly. Our customers are trying to find needles in haystacks. They’re trying to find key items of interest and understand where events are happening. With the explosion of data, that haystack is getting larger. We apply machine learning techniques at scale to reduce the size of that haystack, so that customers can get to the answers that they want right away.

By fusing together many different data sources, we create foundational maps of the Earth to understand human activity. We enable multisource analysis by merging geospatial data in our Precision3D foundational map of the world, which makes it significantly easier to find patterns.

On the change-detection side, we create indicators that direct human beings to see where events of interest are happening. We’re able to build a picture of a pattern of life that helps analysts keep track of evolving situations.
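At its simplest, a change indicator compares two co-registered images and flags pixels that differ; the sketch below uses synthetic arrays and is far simpler than production change detection:

```python
# Pixel-level change detection between two co-registered, radiometrically
# matched tiles. Arrays and the threshold are invented for illustration.
import numpy as np

def change_mask(before: np.ndarray, after: np.ndarray, threshold: float = 0.15) -> np.ndarray:
    """Return a boolean mask where per-pixel difference exceeds a threshold."""
    diff = np.abs(after.astype(float) - before.astype(float))
    return diff > threshold

before = np.random.default_rng(1).random((256, 256))
after = before.copy()
after[100:120, 40:80] += 0.5  # simulate new construction in one corner
mask = change_mask(before, after)
print(f"{mask.sum()} changed pixels, concentrated where analysts should look first")
```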

There’s a tremendous amount of data that comes from space. Traditionally, a lot of that data has been downloaded and pushed to a central repository for enormous amounts of cloud-scale computation. Maxar is pushing much of that processing toward the locations where the data is collected so that only mission-relevant data is downlinked.

Maxar is developing a just-in-time modeling concept, meaning quickly creating image-processing algorithms to highlight objects or events, because Earth is rapidly changing beneath us all the time. The missions that our customers care about, such as national security,  are also evolving rapidly. Oftentimes, downloading data, analyzing data, building training datasets, building models and deploying them can take a long time. We’re investing heavily in the infrastructure to reduce the cost and the time it takes to produce those models. When a customer asks a question like, “Where are the cranes in this build site?” we want to be able to provide an extremely fast response. That model may not be on the shelf today, but by gathering the requirements and having a system to generate models that is very flexible and rapid, we can swiftly produce analytics results for them.
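One plausible shape for just-in-time modeling, sketched with synthetic data: rather than training a crane detector from scratch, fit a light classifier on embeddings from a frozen pretrained backbone using a handful of fresh analyst labels. The embedding function here is a toy stand-in.

```python
# Hedged sketch of rapid model generation: a small labeled set plus a
# pretrained backbone yields a usable classifier in minutes, not weeks.
import numpy as np
from sklearn.linear_model import LogisticRegression

def embed(chips: np.ndarray) -> np.ndarray:
    """Stand-in for a frozen pretrained vision backbone."""
    return chips.reshape(len(chips), -1)  # toy: flatten pixels

rng = np.random.default_rng(2)
chips = rng.random((40, 16, 16))       # 40 small image chips of the build site
labels = rng.integers(0, 2, size=40)   # analyst-supplied: crane / no crane

clf = LogisticRegression(max_iter=1000).fit(embed(chips), labels)
new_chips = rng.random((5, 16, 16))
print("crane?", clf.predict(embed(new_chips)))  # fast answer to the customer's question
```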

 

We designed this system to monitor the entire globe and generate a lot of data. From that data, we can understand patterns of activity, spot anomalies, count objects, see manufacturing rates and track objects across the supply chain. We also built our system for extremely low latency. Everywhere in the chain, from tasking to downlinking, processing and exploiting the imagery, we look for ways to make it as fast as possible. For those two reasons, we knew that we needed an AI-powered system.

Tasking is automated. Our AI reads the world's news, from hyper-local foreign-language sources all the way to the Associated Press and BBC. It identifies emerging events around the world and automatically tasks our satellites to take an image. This is really helpful for natural disasters or anything that needs a quick response.
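A hedged sketch of news-driven tasking, with an invented keyword list, gazetteer and tasking queue, since the real system is proprietary:

```python
# Scan headlines for event keywords, geocode the place named, and emit a
# collection request. Every name below is an illustrative placeholder.
EVENT_KEYWORDS = {"earthquake", "flood", "wildfire", "explosion", "captured"}

def geocode(place: str) -> tuple[float, float]:
    """Stand-in for a gazetteer lookup."""
    return {"kabul": (34.55, 69.21)}.get(place.lower(), (0.0, 0.0))

def task_from_headline(headline: str, place: str, tasking_queue: list) -> None:
    if EVENT_KEYWORDS & set(headline.lower().split()):
        lat, lon = geocode(place)
        tasking_queue.append({"lat": lat, "lon": lon, "priority": "urgent"})

queue: list[dict] = []
task_from_headline("Air base captured amid rapid advance", "Kabul", queue)
print(queue)  # [{'lat': 34.55, 'lon': 69.21, 'priority': 'urgent'}]
```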

During the fall of Afghanistan, when a city or an air base would be captured, the system would task our satellites to take an image. It collected hundreds of images, acting on hundreds of tips from open-source reporting.

Going forward, we are doubling down on the success that we've had to date in processing our own imagery with our own AI. Now, we're processing other forms of imagery, including synthetic aperture radar. Customers can get updated looks at facilities they care about. We are moving toward that multisensor, multisource approach to keeping track of activities around the supply chain, national security and other things that have major impacts on the world.

We want our customers to be the first to know about anything that’s going on around the world. To do that, you need a fully automated system that has that kind of cognition built in to adapt to a dynamic world.

 

We collect images of the whole world every day, which creates a massive dataset, an archive of over a billion images. We use machine learning to help make that imagery consistent and ready to be analyzed by algorithms or models. We do things like detect clouds and run our analysis on the clear, cloud-free data. We use computer-vision techniques to extract objects and identify patterns. We find roads, buildings, vessels and planes. We can identify change through time. In addition, we measure soil water content and land surface temperature. These algorithms run automatically against massive datasets to make Planet's data more accessible to our customers. You don't need to be a geospatial expert. We pull these data products together in a way that makes them look like a time series, something you could put in a spreadsheet.
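A toy version of the cloud-masking-to-time-series idea, with a naive brightness threshold standing in for a real cloud-detection model:

```python
# Mask cloudy pixels, then reduce each image to one per-date value so the
# result reads like a spreadsheet time series. Data are synthetic.
import numpy as np

def cloud_mask(image: np.ndarray, brightness: float = 0.9) -> np.ndarray:
    return image < brightness  # True where the pixel is (toy-)clear

def aoi_time_series(images: list[np.ndarray]) -> list[float]:
    series = []
    for img in images:
        clear = img[cloud_mask(img)]
        series.append(float(clear.mean()) if clear.size else float("nan"))
    return series

rng = np.random.default_rng(3)
daily_images = [rng.random((64, 64)) for _ in range(7)]
print(aoi_time_series(daily_images))  # one clean number per day for the AOI
```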

In the future, we want to use machine learning algorithms to extract all the insights that exist in that archive and get to more refined products that we call planetary variables, like soil water content and land surface temperature, roads, buildings, vessels, aircraft. These are derived information products that quantify what’s happening on the Earth around us. We can combine planetary variables into more of a predictive product. An example of that would be combining land surface temperature with biomass and soil water content to predict yield for agriculture applications.
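As an illustration of combining planetary variables into a predictive product, a simple regression on synthetic data relating yield to land surface temperature, biomass and soil water content might look like this:

```python
# All data here are synthetic; Planet's actual products and models differ.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 200
lst = rng.normal(295, 5, n)        # land surface temperature, kelvin
biomass = rng.normal(3.0, 0.8, n)  # tons per hectare
swc = rng.normal(0.25, 0.05, n)    # soil water content, volumetric fraction
yield_t = 2.0 * biomass + 10.0 * swc - 0.05 * (lst - 295) + rng.normal(0, 0.3, n)

X = np.column_stack([lst, biomass, swc])
model = LinearRegression().fit(X, yield_t)
print("predicted yield:", model.predict([[293.0, 3.5, 0.30]]))
```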

We use a massive amount of automation in our collection planning and our mission planning. We take in tens of thousands of customer orders at a time for our high-resolution SkySat fleet. Day in and day out, we are constantly looking at ways to optimize that collection. Machine learning is enabling us top to bottom.
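A greedy scheduler, shown below with invented order fields and a single-pass capacity model, captures the flavor of that optimization, though real mission planning is far more sophisticated:

```python
# Pack the highest-priority imaging orders into a satellite pass of fixed
# length. Fields and the capacity model are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Order:
    name: str
    priority: int       # higher = more important
    duration_s: float   # slew + imaging time needed in the pass

def plan_pass(orders: list[Order], pass_length_s: float) -> list[Order]:
    plan, used = [], 0.0
    for order in sorted(orders, key=lambda o: -o.priority):
        if used + order.duration_s <= pass_length_s:
            plan.append(order)
            used += order.duration_s
    return plan

orders = [Order("port", 5, 30), Order("mine", 3, 45), Order("farm", 1, 20)]
print([o.name for o in plan_pass(orders, pass_length_s=60)])  # ['port', 'farm']
```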

 

RF data, particularly in its raw form, can be very complex to understand. Machine learning tools help pull out trends within the raw data to help the analysts reach conclusions faster and derive value from our data.

When you have thousands, hundreds of thousands or even millions of geolocation points on a map, finding those few that are truly relevant to whatever information you’re seeking can be difficult. Machine learning algorithms can process that data at a very large scale and provide recommendations like “There’s an interesting pattern here” or “Here are a couple of vessels that may be engaged in nefarious activities.”
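One standard way to surface such patterns, sketched here with synthetic coordinates rather than HawkEye 360's algorithms, is density-based clustering, which flags tight knots of detections and treats scattered traffic as noise:

```python
# Cluster geolocation points so dense, rendezvous-like patterns stand out.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(5)
background = rng.uniform([-5, -5], [5, 5], size=(300, 2))  # scattered traffic
meetup = rng.normal([2.0, 2.0], 0.02, size=(12, 2))        # vessels loitering together
points = np.vstack([background, meetup])

labels = DBSCAN(eps=0.1, min_samples=8).fit_predict(points)
for cluster_id in set(labels) - {-1}:
    members = points[labels == cluster_id]
    print(f"interesting pattern: {len(members)} detections near {members.mean(axis=0)}")
```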

Some folks want to geolocate signals within their area of interest; perhaps they’re monitoring port activity. Other customers are interested in specific types of emitters, like a type of radar. For other customers, the presence of activity can be of interest. GPS interference being present in an area where it shouldn’t be is interesting to some customers.

Going forward, we’re using these tools to give customers more context. We start by providing more information about our geolocations, grow that into providing more information about the behavior of the various emitters and then add context through things like imagery, AIS [the Automatic Identification System broadcasts from ships] data and other sources we can fuse with our information.

Say we’ve detected an X-band radar in the ocean. OK, whose X-band radar is that? What’s the make and model of the X-band radar? Where has this radar been before? Where might it be going? Who has it interacted with? We use machine learning to help create those links for the customers.
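A hedged sketch of that linking step: match a new detection to known emitters by comparing RF fingerprints. Real linking would also weigh movement history and interactions; the records, features and threshold below are invented.

```python
# Nearest-fingerprint matching against a catalog of known emitters.
import numpy as np

KNOWN_EMITTERS = {
    "X-band radar A": np.array([9.41, 1.2, 0.8]),  # e.g. frequency, PRF, pulse width
    "X-band radar B": np.array([9.38, 2.5, 1.1]),
}

def link_detection(fingerprint: np.ndarray, threshold: float = 1.0) -> str | None:
    best_name, best_dist = None, float("inf")
    for name, known in KNOWN_EMITTERS.items():
        dist = float(np.linalg.norm(fingerprint - known))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None  # None -> possibly a new emitter

print(link_detection(np.array([9.40, 1.3, 0.9])))  # "X-band radar A"
```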


About Debra Werner

A longtime contributor to Aerospace America, Debra is also a correspondent for Space News on the West Coast of the United States.
