Needed: Rules of thumb for avoiding collisions in space


The International Space Station and thousands of anthropogenic space objects such as satellites and launch debris often pass uncomfortably close to each other. During the run-up to one of these close approaches, or conjunctions, one organization or another will invariably release a probability-of-collision, or PoC, figure to the media.

PoC is often the deciding factor in whether spacecraft operators should expend precious fuel to maneuver and keep their craft from becoming orbital wreckage. NASA, for instance, uses PoC to decide whether to maneuver the International Space Station, which was reportedly maneuvered as recently as July 3, with only hours of warning, to avoid a potential conjunction with debris.

The interest in potential collisions is understandable, given the stakes, but PoC alone is the wrong measure of collision risk because its calculation is subjective. Two analysts with the same data will compute different PoCs simply because of differing assumptions about how they treat the measurements and the underlying astrodynamics.

Before I suggest a better process, let’s look at our sources of knowledge about the trajectories and characteristics of these objects. The information we have comes from radar and telescope measurements that are noisy and biased. Determining and predicting the trajectories of the entire population of objects exactly would be impossible, so instead we must rely on statistical inference. Here, analysts are divided into two philosophical camps: Frequentists and Bayesians.

Frequentists believe that probabilities represent the frequency of an outcome based on how often that outcome has occurred in the past, such as how often specific satellites have been predicted to come close to each other. They assume that the observed data are partly the product of a random process but also dependent on some fixed, deterministic model parameters, such as a satellite’s location. Frequentists try to choose the hypothesis that minimizes wrong decisions over a set of hypothetically repeated trials.

Bayesians, by contrast, assume that probabilities represent the degree to which a hypothesis is believed to be true. They assume that the observed data are a realization of a random variable that itself depends on a set of uncertain model parameters. They also assign a prior belief to quantities like the satellite’s location: the location is uncertain, but that uncertainty is assumed to be known. Herein lies the danger of assuming that we know exactly what is unknown, meaning there’s no uncertainty about the uncertainty.
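To make the contrast concrete, here is a minimal sketch of estimating a satellite’s along-track position from a handful of noisy measurements under each viewpoint. Every number in it, including the prior, is a hypothetical illustration rather than real tracking data.

```python
# A toy, one-dimensional contrast of the two viewpoints: estimating a
# satellite's along-track position (km) from a few noisy measurements.
# All numbers, including the prior, are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)
true_position_km = 7000.0                                   # unknown in practice
measurements = true_position_km + rng.normal(0.0, 2.0, 5)   # 2-km measurement noise

# Frequentist view: the position is a fixed unknown. Report the maximum-likelihood
# estimate (the sample mean) and its standard error over hypothetically repeated trials.
mle = measurements.mean()
std_err = measurements.std(ddof=1) / np.sqrt(len(measurements))

# Bayesian view: the position is itself uncertain, with a prior belief whose
# uncertainty is assumed known. Combine prior and likelihood (Gaussian conjugacy).
prior_mean, prior_var = 6995.0, 25.0                        # prior belief and its assumed-known variance
meas_var = 2.0 ** 2
post_var = 1.0 / (1.0 / prior_var + len(measurements) / meas_var)
post_mean = post_var * (prior_mean / prior_var + measurements.sum() / meas_var)

print(f"Frequentist: {mle:.2f} km +/- {std_err:.2f} (standard error)")
print(f"Bayesian:    {post_mean:.2f} km +/- {post_var ** 0.5:.2f} (posterior std. dev.)")
```

The frequentist summary reflects only the measurements; the Bayesian one is pulled toward the prior by an amount set by how confident that prior claims to be, which is exactly the assumption to be wary of.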

All orbital analysts assume that the true orbit lies within our measure of error uncertainty. The error is the difference between what is true and what we believe. In space surveillance, most algorithms represent the error as random. A Frequentist considers randomness, also known as aleatory uncertainty, to be irreducible: no amount of further measuring will provide more information or a better ability to predict an outcome. In practice, however, we see that as we gather more observations, we learn more about the objects. The fact that we can learn more about something implies that our uncertainty wasn’t all due to randomness but rather to ignorance.

Consider the uncertainty in the computation of PoC. The error uncertainty in the relative positions of two objects is modeled as a probability distribution. As a result, the calculated PoC tends toward zero as this error uncertainty grows larger, a phenomenon known as probability dilution, which gives a misleadingly low sense of the risk. It makes no sense that the more ignorant we are of an event, the less probable it is to occur. Probabilities should model and represent randomness, not ignorance.

We’re unlikely to know for certain when our ignorance is equivalent to randomness, so we should avoid using probabilities in this way to quantify collision risk. Instead, we should combine the possibility of a collision, assessed via null hypothesis testing in which we remove only those hypotheses the evidence discards, with the environmental consequences if the collision were to occur. In other words, instead of modeling our uncertainty as a probability, we should model it as a possibility, meaning there is ultimately a yes or no answer. We would then also take into account whether a collision would leave behind a cloud of debris that remains operationally hazardous for decades.
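A short numerical sketch shows the dilution effect. The setup is deliberately simplified and hypothetical: a two-dimensional encounter plane, an isotropic Gaussian position error, a predicted miss distance of 200 meters and a combined hard-body radius of 20 meters.

```python
# A simplified sketch of probability dilution: integrate an isotropic 2D Gaussian
# relative-position density over the combined hard-body disk. The geometry and
# numbers are illustrative, not an operational conjunction assessment.
import numpy as np

miss_distance_m = 200.0      # predicted miss along one encounter-plane axis
hard_body_radius_m = 20.0    # combined hard-body radius

def collision_probability(sigma_m: float, n: int = 400) -> float:
    """Grid integration of the Gaussian density over the hard-body disk."""
    xs = np.linspace(-hard_body_radius_m, hard_body_radius_m, n)
    dx = xs[1] - xs[0]
    x, y = np.meshgrid(xs, xs)
    inside = x ** 2 + y ** 2 <= hard_body_radius_m ** 2
    density = np.exp(-((x - miss_distance_m) ** 2 + y ** 2) / (2.0 * sigma_m ** 2))
    density /= 2.0 * np.pi * sigma_m ** 2
    return float(np.sum(density[inside]) * dx * dx)

for sigma in (100.0, 500.0, 2000.0, 10000.0):
    print(f"position uncertainty = {sigma:7.0f} m  ->  PoC ~ {collision_probability(sigma):.1e}")
```

With these numbers, the computed PoC falls by roughly three orders of magnitude as the position uncertainty grows from 100 meters to 10 kilometers, even though nothing about the actual encounter has changed; the ignorance, not the encounter, is driving the answer.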

Pragmatically, once we predict a possible conjunction between a pair of objects, and assuming at least one of them can have its trajectory intentionally altered, we need some rules of thumb that yield a default action to take under certain conditions. At the maneuver decision threshold, we should take the default action when any of the following holds (a short sketch in code follows this list):

  • We have no further data.
  • We have too few data.
  • We have overwhelming evidence that our null hypothesis is true.
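One way to encode those three criteria, with hypothetical variable names and thresholds (for example, a minimum of three supporting tracks), is a simple predicate:

```python
# A sketch of the three maneuver-decision-threshold criteria as a predicate.
# The inputs and thresholds are hypothetical, not an operational standard.
def take_default_action(more_data_expected: bool,
                        n_tracks: int,
                        support_for_null: float,
                        min_tracks: int = 3,
                        overwhelming: float = 0.99) -> bool:
    """Return True if any of the three criteria for the default action holds."""
    no_further_data = not more_data_expected              # criterion 1: no further data
    too_few_data = n_tracks < min_tracks                  # criterion 2: too few data
    null_strongly_supported = support_for_null >= overwhelming  # criterion 3: overwhelming evidence for the null
    return no_further_data or too_few_data or null_strongly_supported
```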

In fairness, there are times when PoC may be a meaningful measure of the risk, such as when the quantity and quality of measurement data are sufficient for the inference process and our beliefs are sufficiently informed. However, we may be unable to find a prior belief that properly accommodates our actual ignorance. It is no surprise that Rudolf Emil Kalman, a Hungarian-American expert in systems theory and estimation, was in search of a “prejudice free” inference method when he co-developed the Kalman filter, a method of inferring model parameters from a given set of observations.

My recommended approach to minimizing collision risk in space, sketched in code after the list below, would be to:

  • Assume the uncertainty is epistemic, meaning there are systematic effects we are just ignorant of, and if a conjunction is predicted, evaluate the possibility of a collision.
  • Determine the default action based upon the criteria previously provided and the predicted debris-generating consequences assuming the collision occurs (number of objects and/or contribution to the saturation level of the local orbital carrying capacity).
  • Given the default action (maneuver or not), determine what the null hypothesis needs to be.
  • Always perform the default action unless the collected evidence makes the null hypothesis look ridiculous.
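Here is a minimal sketch of how these steps might fit together. The data fields, the interval-based possibility test and the consequence score are hypothetical placeholders, not an operational algorithm.

```python
# A sketch of the recommended decision workflow. Field names, thresholds and
# the simple bounded-error possibility test are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Conjunction:
    miss_distance_m: float     # predicted closest approach
    error_bound_m: float       # bound on relative position error (treated as ignorance, not randomness)
    hard_body_radius_m: float  # combined hard-body radius
    n_tracks: int              # observations supporting the prediction
    more_data_expected: bool   # will more tracking arrive before closest approach?
    debris_consequence: float  # 0 to 1 score for debris generated if a collision occurs

MIN_TRACKS = 3                 # hypothetical "too few data" threshold
CONSEQUENCE_THRESHOLD = 0.5    # hypothetical consequence threshold

def collision_possible(c: Conjunction) -> bool:
    """Possibility, not probability: a collision cannot be ruled out while the
    hard-body radius lies inside the bounded position-error region."""
    return c.miss_distance_m - c.error_bound_m <= c.hard_body_radius_m

def recommended_action(c: Conjunction) -> str:
    # Step 1: evaluate the possibility of a collision.
    if not collision_possible(c):
        return "no action: the evidence rules a collision out"
    # Step 2: choose the default action from the predicted debris consequences.
    default = "maneuver" if c.debris_consequence >= CONSEQUENCE_THRESHOLD else "accept the risk"
    # Step 3: the null hypothesis is whatever keeps that default in force,
    # e.g. "a collision cannot be ruled out" when the default is to maneuver.
    # Step 4: with no further data or too few data, the null hypothesis stands,
    # so take the default action.
    if (not c.more_data_expected) or c.n_tracks < MIN_TRACKS:
        return default + " (threshold criteria met: act on the default)"
    # Otherwise keep the default unless incoming evidence makes the null
    # hypothesis look ridiculous.
    return default + " (pending further tracking, unless the null hypothesis is refuted)"

# Example: a poorly tracked conjunction with severe debris-generating consequences.
print(recommended_action(Conjunction(150.0, 500.0, 20.0, 1, False, 0.8)))
```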

About Moriba Jah

Moriba Jah is an astrodynamicist, space environmentalist and professor of aerospace engineering and engineering mechanics at the University of Texas at Austin. An AIAA fellow and MacArthur fellow, he’s also chief scientist of startup Privateer Space.
