Flying digitally


Today, earning type certification for a new kind of passenger jet culminates with months of expensive test flights, some of them harrowing. Keith Button spoke to aerospace engineers who aspire to change this with a bold idea: certification by analysis.

On a clear morning in April 2011, an experimental Gulfstream G650 rolled down the runway with two pilots and two flight test engineers aboard. One of the pilots throttled the right engine back to idle, just as planned. Gulfstream was in the midst of gathering data on the multitude of performance points required by FAA to earn type certification for the G650. Of special concern for Gulfstream, accident investigators would later note, was demonstrating that the G650’s low stall speed would permit it to climb into the air safely on one engine, even on a relatively short runway of 6,000 feet (1,800 meters). The key for that would be the V2 takeoff safety speed, the velocity required to climb off the runway to a safe altitude once the plane had gone too far down the runway to abort. For the G650, a low V2 was desirable, because Gulfstream intended to certify the G650 for shorter runways. Otherwise, the design would be limited to operating out of large airports.

A few feet off the ground, the plane rolled to the right as an outboard wing stall sent the wingtip into the runway. The aircraft spun off the runway and into a concrete structure and a weather station before skidding hundreds of meters and bursting into flames. All aboard were killed in the crash at Roswell International Air Center in New Mexico.

Many lessons were drawn from the crash. Gulfstream appointed an aviation safety official who reports directly to the company’s president, and it set about improving communications throughout the company. Another lesson had to do with the power of computer simulations to avert tragedies like this one. As the U.S. National Transportation Safety Board put it, “Gulfstream did not validate the speeds using a simulation or physics-based dynamic analysis before or during field performance testing. If the company had done so,” it could have recognized that the pilot could not have gotten the plane safely into the air at the targeted V2 speed, NTSB concluded.

Flash forward 11 years, and a loose-knit group of computer scientists and aerospace engineers in the United States and abroad want to embed simulations more deeply into the certification process, and not just for business jets but also for future generations of airliners built by Airbus and Boeing. To do it, they are running experiments and demonstrations aimed at sharpening the simulations that underlie their “certification by analysis” or CBA concept, in which computer models would substitute for many of the individual flight tests or ground tests now required to earn a type certification.

Today, CBA has earned only limited acceptance from FAA and the European Union Aviation Safety Agency, which certify the bulk of the world’s passenger aircraft types. Wider adoption could reduce the number of high-risk flight tests, although accidents during certifications remain thankfully rare. The Gulfstream crash was the last fatal accident anywhere in the world during certification tests of an airliner or corporate jet, according to aircraft records maintained by the Aviation Safety Network, a unit of the Virginia-based Flight Safety Foundation. While CBA would likely never completely replace physical flight tests, proponents say adopting it would increase the pace of innovation by shortening the certification process and lowering its costs, thereby opening up new design possibilities.

Manufacturers typically need 12 to 18 months to run through all the flight maneuvers required to certify that an aircraft meets each of the hundreds or thousands of requirements for its overall certification, says Juan Alonso, an aerospace professor at Stanford University. Longer, if things go wrong. Gulfstream, for instance, earned the type certification for the G650 in September 2012, about a year later than planned.

Of course, it’s not just flight tests that are required for type certification: Manufacturers must demonstrate through ground testing that their planes and components will hold up under extreme forces. For example, a requirement might specify that wings must stay intact under a limit load of 2.5 Gs, multiplied by the standard 1.5 factor of safety for an ultimate load of 3.75 Gs. In such a case, a manufacturer will put the plane in a test rig and pull up on the wings with cables to the 3.75-G standard and beyond, until the wings snap. CBA could play a role here, too, advocates say.

Gauging uncertainty

In the design phase, engineers predict the aerodynamic performance of an airplane with computational fluid dynamics, or CFD, modeling. A CFD model computes the airflow from factors including the aircraft’s shape, speed and operating conditions. But relying on a model to simulate a flight test in a particular set of operating conditions would be even more complex. The model must show the lift and drag, how quickly the plane turns, whether it is stable or unstable, and other characteristics that would be required by a regulator to certify the plane for that particular set of conditions. For instance, builders must show the plane could remain in stable flight with an engine out at a specific speed, altitude and angle of bank. Similarly, engineers can predict structural performance — how much a wing will bend under a certain amount of force, for example — with computational structural mechanics modeling, or CSM.

The problem is that for many of the scenarios a plane must be certified for, a model’s prediction of performance isn’t accurate enough to replace flight testing, and the margin of error is not known, says Alonso, one of the authors of a NASA-sponsored 2021 report “A Guide for Aircraft Certification by Analysis.”

“Quantifying uncertainties, I believe, is absolutely necessary in order to be able to have any credibility with the regulators,” Alonso says. He means uncertainties that could come from slight temperature or wind speed differences, for example, or from imperfections in the model itself. Computing uncertainties is “kind of the holy grail” for CBA, showing regulators the margin of error for CBA predictions while also proving that CBA can be faster and more effective than physical testing. However, he says, currently there is a lot of research into quantifying model errors, and none of the research is “ready for prime time.”
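One common way to produce such an uncertainty estimate is Monte Carlo propagation: perturb the uncertain inputs, rerun the model and examine the spread of outputs. The toy lift function and uncertainty values below are illustrative assumptions only, standing in for a real CFD model:

```python
import random
import statistics

# Minimal Monte Carlo sketch of input-uncertainty propagation.
# lift_model is a toy stand-in (0.5 * rho * V^2 * CL * S), not a real solver;
# the CL of 1.5 and wing area of 50 m^2 are arbitrary illustrative values.
def lift_model(airspeed_ms, air_density):
    return 0.5 * air_density * airspeed_ms**2 * 1.5 * 50.0

random.seed(0)
samples = []
for _ in range(10_000):
    v = random.gauss(70.0, 0.5)       # airspeed with measurement uncertainty
    rho = random.gauss(1.225, 0.01)   # air density varying with conditions
    samples.append(lift_model(v, rho))

mean = statistics.fmean(samples)
sd = statistics.pstdev(samples)
print(f"lift: {mean:,.0f} N +/- {sd:,.0f} N (1 sigma)")
```

A real uncertainty quantification effort would also have to capture model-form error, the imperfections in the simulation itself, which is the harder part Alonso describes.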

“If you’re a regulator, you want that value and a real uncertainty distribution coming from all of the sources,” Alonso says. “As a member of the flying public, I would say I don’t want to get on an airplane that does not prove things to me beyond a reasonable doubt. And reasonable doubt typically is four to five nines” — simulations that can guarantee that 99.9999% to 99.99999% of the time, the airplane will meet the safety threshold that it is being tested against, the 3.75-G load on its wings without breaking, for example.

“I only want these airplanes to fail once every several million hours of flight.”
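As a rough arithmetic check, the reliability percentages quoted above translate into mean hours between failures as follows, under the simplifying assumption that the percentage applies per flight hour:

```python
# Rough check: reliability percentage -> mean flight hours between failures,
# assuming the percentage applies per flight hour.
for reliability_pct in (99.9999, 99.99999):
    p_fail = 1.0 - reliability_pct / 100.0   # failure probability per hour
    hours = 1.0 / p_fail                     # mean hours between failures
    print(f"{reliability_pct}% -> one failure per ~{hours:,.0f} hours")
```

The two levels work out to roughly one failure per million and per 10 million flight hours, consistent with "once every several million hours of flight."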

Flight tests also have uncertainties, but to a lesser degree than today’s computer simulations, Alonso says. For example, the results of a flight test may be thrown off by slight variations in wind speeds, temperature or air density, or tiny imperfections in a wing’s leading edge, all of which can skew the measured results.

“All one should ask from simulations used for certification purposes is that they are as good as the flight test,” he says.

Regulators’ view

FAA didn’t respond to questions about CBA, but Rob Gregg III, chief aerodynamicist for Boeing’s commercial airplanes division, says the agency accepts CBA as part of earning certification for minor design changes, such as the addition of an antenna when the same antenna had already been certified on a derivative model of that plane. In those cases, CBA typically reduces the number of flight tests required. As more trust is established in the CBA predictions, Gregg says, he expects regulations will allow for more applications of CBA.

By allowing CBA to replace flight testing for some type certification tasks, Gregg says, the matrix of conditions for the physical flights could be thinned out, eventually eliminating perhaps 50% of them. And the combination of flight testing and CBA could expand the scope of a type certification to cover a wider range of situations, such as multiple engine failures or other combinations of failures or scenarios that would be difficult or impossible to have a test pilot perform.

Another advantage of CBA: The computer models that perform the analyses could also help designers bring their concepts to fruition faster and avoid surprise problems during the development of an aircraft that otherwise wouldn’t be revealed until flight testing, Gregg says.

EASA and FAA are “very much aligned” on their view of CBA, says Willem Doeland, a structures and materials specialist at EASA. “Our thinking is the same in terms of what it takes to accept tools, to accept analysis results, when to do a test or not.”

EASA decides on a case-by-case basis whether it will accept virtual in lieu of physical testing for a particular certification requirement. For derivatives of certified aircraft, virtual tests have begun to replace some of the physical ones for aspects of the aircraft that are similar to the certified version, provided the manufacturer’s computer model has been validated with flight tests and ground testing, Doeland says.

“It’s quite difficult; it takes engineering judgment to draw the line” for where physical flight testing should be required, he says.

For that decision, EASA considers whether there are changes from previous versions in structures and materials and in the electrical, flight control and hydraulics systems. Aircraft must be certified to fly within certain ranges of altitude, speed and temperature, so the agency considers proposed adjustments in those. Also considered are previous experience with the applicant and the software tools it applies in lieu of physical testing.

“The further you move away from where you are comfortable with, the more the need for additional testing is apparent,” Doeland says.

One question EASA has faced is how it can come to trust the findings of software tools when the agency does not have a knowledge base to do so, nor the resources to develop that knowledge. As a solution, EASA is pushing software developers to develop their own industry quality assurance standards. Tools that are widely used by aerospace companies are probably already well-vetted, the thinking being that any problems would be uncovered because there are so many users. There’s more of a potential for bugs in software that is developed internally by aerospace companies and restricted to in-house use only, Doeland says.

The good news is that the tools have been improved steadily over the past 10 to 15 years through innovations inspired in part by the competition between Airbus, Boeing and others to shorten development and certification cycles, Doeland says. The airplane builders apply the simulation tools in design and development, build experience with the tools and then apply the tools to certification, where permitted.

Wanted: Better models

Today’s computer models have a glaring limitation: They don’t do a good job of predicting the behavior of the air flowing around an aircraft when its wings are at a high angle of attack, as in an extreme maneuver or near-stall conditions, Alonso says. “We really want to have simulation capabilities that are far more accurate than what we have today.”

Simulations cannot yet accurately predict the maximum lift coefficient when airflows become highly separated, which occurs during edge-of-the-envelope flight maneuvers. Without that coefficient, the minimum unstick speed, meaning the minimum speed at which the plane can lift off, can’t be confidently calculated, nor can the minimum runway length for takeoff, for example.

Such highly separated airflows are especially tough to predict with one of the most commonly employed aerodynamics models for new airplane designs: the RANS model, short for Reynolds-Averaged Navier-Stokes. Aerodynamics software that implements the RANS model predicts smooth, attached airflows well, but not separated ones.

Another commonly used method for predicting aerodynamics, the Large Eddy Simulation model, or LES, has shown significant promise for predicting aerodynamics during edge-of-the-envelope flight, Alonso says. LES makes fewer assumptions than RANS, so software based on LES is more accurate in modeling separated flows. LES results are more credible than those of RANS, Alonso says, and therefore have a better chance of being employed for CBA.

RANS approximates the effects of turbulence with averaged, modeled values throughout the three-dimensional grid of cells covering the space it simulates, while LES directly resolves the larger turbulent motions in those cells and models only the smallest. For predicting the minimum takeoff speed of an aircraft, for example, the maximum lift coefficient predicted by RANS would differ by 10% to 15% from the LES prediction, which would be more accurate, Alonso says.
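The leverage of that lift-coefficient error can be seen in the standard stall-speed relation, V = sqrt(2W / (rho * S * CL_max)): speed scales with one over the square root of the maximum lift coefficient, so a 10% shortfall in predicted CL_max inflates the predicted speed by about 5%. The weight, wing area and lift coefficients below are hypothetical round numbers, not G650 data:

```python
import math

# Illustrative only: standard stall-speed relation V = sqrt(2W / (rho * S * CL_max)).
# Weight, wing area and CL values are hypothetical, not from any real aircraft.
def stall_speed(weight_n, rho, wing_area_m2, cl_max):
    return math.sqrt(2 * weight_n / (rho * wing_area_m2 * cl_max))

W, rho, S = 400_000.0, 1.225, 120.0           # weight (N), sea-level density, wing area (m^2)
v_a = stall_speed(W, rho, S, cl_max=2.0)      # suppose the more accurate model predicts CL_max = 2.0
v_b = stall_speed(W, rho, S, cl_max=1.8)      # and another model comes in 10% lower
print(f"stall speed at CL_max 2.0: {v_a:.1f} m/s")
print(f"stall speed at CL_max 1.8: {v_b:.1f} m/s ({100 * (v_b / v_a - 1):.1f}% higher)")
```

A few percent on stall speed then ripples into takeoff speeds and the minimum runway length, which is why the coefficient matters so much for certification.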

The problem with LES is that it is computationally expensive, requiring vastly more time and computing power to predict the aerodynamics of an airplane or wing compared to RANS. Researchers are spending a lot of time trying to figure out when to apply LES and how to speed it up, Alonso says.

To simulate the aerodynamics of an entire airplane in cruise flight — flying straight and level — a RANS simulation would typically take two to four hours running on 500 central processing unit cores, dividing the three-dimensional space of the simulation into a mesh of 150 million cells of various sizes, down to 1-millimeter cubes where the most detailed predictions are required. Meanwhile, an LES simulation of the same airplane under the same flight conditions would take 10 times longer. Both simulations would produce essentially the same predictions, because the airflows during cruise flight are smooth and well-behaved.
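In core-hour terms, the figures quoted above work out roughly as follows, taking the midpoint of the two-to-four-hour range:

```python
# Back-of-the-envelope core-hour comparison from the figures quoted above.
rans_hours, cores = 3.0, 500            # midpoint of "two to four hours" on 500 CPU cores
rans_core_hours = rans_hours * cores
les_core_hours = 10 * rans_core_hours   # LES takes roughly 10x longer on the same hardware
print(f"RANS: {rans_core_hours:,.0f} core-hours; LES: {les_core_hours:,.0f} core-hours")
```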

It is currently possible to run an LES simulation for an entire airplane making an edge-of-the-envelope maneuver, though it would be a supercomputing task, requiring 10,000 to 100,000 CPU cores to complete in a reasonable time, Alonso says. He predicts that in five to 10 years, the cost of an LES simulation could be brought down to match or undercut that of a RANS simulation, while taking about as long to run as a RANS simulation does today.

“Eventually, I think we’ll be able to replace RANS with LES and have the accuracy, but at much lower cost,” Alonso says. As the cost of running LES simulations comes down, its chances of being used for CBA improve.

Deploying graphics processing units, or GPUs, instead of CPUs for LES simulations is a “no-brainer” — a much faster way of computing, Alonso says. But the model’s computer code has to be rewritten from scratch to take advantage of a GPU’s resources, such as its memory bandwidth and the different costs of moving data around the chip. GPUs can easily be 30 times faster than CPUs and could lower the cost of LES to make it comparable to the cost of RANS today.

Once the time is reduced, engineers or designers running computer models could run many more simulations at the same level of fidelity, or they could turn up the fidelity of a single simulation — running much finer meshes, for example. Or they could do some combination of both: more simulations and greater fidelity.

3,000 times faster

Data science, often applied through machine learning, and artificial intelligence will improve the predictive capabilities of LES and RANS, but probably not for five to 10 years, Alonso predicts. “It is still early days. It’s kind of the Wild West. Everybody’s trying different ideas, different approaches.”

Alonso favors using data science to improve the models, but not trying to replace the models with a purely data science and machine learning approach.

“Use what you know works well and then complement it with data science/AI machine learning to improve the models that as engineers and physicists we’ve introduced into these predictive capabilities,” he says. “The stuff that you model is what carries errors. So whatever you can do with more data in order to improve those models is very helpful.”

As the cost of designing semiconductor chips comes down, another technology with great potential for improving the computer modeling of aircraft is the ASIC, or application-specific integrated circuit. Unlike CPUs and GPUs, which are designed to be programmed to solve any problem, an ASIC can be designed to perform the operations of a particular algorithm — so an aircraft modeling algorithm, in this case, could be etched directly onto the chip rather than programmed onto a general-purpose one. Today, it would cost $10 million to $30 million to burn an aircraft modeling algorithm directly into ASIC chips, but Alonso expects the price will drop into more reasonable territory, $500,000 to $1 million, within 10 years.

With ASIC chips performing calculations 100 times faster than GPUs, which are themselves 30 times faster than current CPU chips, simulations could run 3,000 times faster than they do on today’s processors. A simulation that today takes 48 hours to run would be shortened to less than a minute.
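The speedup arithmetic behind that estimate is simply multiplicative:

```python
# Combined speedup from the figures above: GPUs ~30x over CPUs,
# ASICs ~100x over GPUs, so ~3,000x overall.
gpu_over_cpu, asic_over_gpu = 30, 100
total_speedup = gpu_over_cpu * asic_over_gpu
run_seconds_today = 48 * 3600                          # a 48-hour simulation today
run_seconds_future = run_seconds_today / total_speedup
print(f"{total_speedup}x speedup: 48 hours -> {run_seconds_future:.1f} seconds")
```

A 48-hour run divided by 3,000 comes out to 57.6 seconds, hence "less than a minute."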

In 10 to 15 years, running 100,000 simulations could become “a pretty normal proposition,” Alonso says. “That’s when I think certification by analysis gets exciting.”

Designs could be certified more quickly at the same or higher level of confidence compared to flight testing, and at much lower cost. Bold concepts, such as blended wing bodies, truss-braced wings and hybrid-electric power, would have new life.

“The excitement for me, as an academic, is that [CBA] opens up the door to much more advanced designs that we don’t know how we’re going to certify today,” he says.


About Keith Button

Keith has written for C4ISR Journal and Hedge Fund Alert, where he broke news of the 2007 Bear Stearns hedge fund blowup that kicked off the global credit crisis. He is based in New York.

A Gulfstream G650 test aircraft suffered aerodynamic stall and crashed shortly after takeoff during type certification flights of the design in 2011. The National Transportation Safety Board determined that Gulfstream failed to validate safe takeoff speeds for the test program. Credit: NTSB
FAA in 2018 accepted Reynolds-Averaged Navier-Stokes models in place of physical flight testing to make sure that the addition of internet antennas and radomes, like the ones atop this Airbus A350, would not negatively affect performance of previously certified Airbus A330-200s and Boeing 737-800s. Credit: Gogo Inflight Internet
