Hearing aircraft that don’t yet exist


Designers and urban planners would love to know what the coming class of urban air mobility aircraft will sound like. The trouble has been that many of the designs are still at the digital blueprint stage. Keith Button tells the story of NASA-developed software that’s poised to solve the problem.

It sounds like a new take on the age-old riddle about the tree that falls in the forest with no one around. If an aircraft exists only in digital form, can anyone hear it? For the coming breed of electric rotorcraft that would shuttle us around cities, the answer is about to be “yes.”

Acoustics researchers and software engineers at NASA’s Langley Research Center in Virginia have written software that, when connected to headphones or loudspeakers, produces the sound that an urban air mobility aircraft would make. The Langley group plans to release the program in June via the software.nasa.gov website for UAM developers, noise researchers and consultants preparing presentations to city planning boards about proposed vertiports for UAM aircraft.

This urban air mobility function will be an add-on to the existing NASA Auralization Framework, or NAF, a computer program developed in 2015 by a team of acoustics researchers and software engineers led by Stephen Rizzi, a senior aeroacoustics researcher at Langley. They wrote the first version of NAF based on an older NASA computer program that had outlived its usefulness: It didn’t work well with other software, and it was focused on airliners, while the Langley group wanted to add the ability to auralize rotorcraft and propeller planes, and later UAM designs. After all, most UAM designs or prototypes have rotors, propellers or both. NAF produces the sound of a complete aircraft by predicting and auralizing the sound of its components and structures, including the airframe, the landing gear (if any) and the method of propulsion, whether gas turbine engines, propellers or rotors. With the addition of UAM, it will cover the sound of just about every type of aircraft except rocket-powered vehicles and the sonic booms of supersonic planes.

The 300 or so companies that are developing UAM vehicles promise that their aircraft will be quieter than conventional helicopters, but those claims have been difficult to verify.

“People are trying to understand: ‘Well, how much quieter, really? And will I be annoyed by it still or will I not notice it?’” says Ryan Biziorek, an acoustics consultant at Arup, a London-based engineering consulting firm, who created UAM noise simulations for a Los Angeles Department of Transportation project based partly on NASA auralization models. The setting also makes a difference. In downtown Chicago in the middle of a weekday, for example, UAM aircraft taking off and landing may be drowned out by car traffic noise.

No one can say for sure how the public will react to UAM rotorcraft, partly because of their unique sound quality. “It doesn’t sound like a helicopter; it doesn’t sound like a large commercial transport. What is it? Well, it’s this other thing,” Rizzi says.

The UAM sound has two distinct components. One is irritating tones, akin to the “ddzzzz” a television of old would emit at the end of the broadcast day, or a modern one when its signal is interrupted. “You can listen to [that sound] for about three seconds before you hit the remote and turn it off,” Rizzi says. The other is broadband masking noise, which resembles white noise.

Auralizations will help aircraft designers weigh the noise benefit from a potential design change, such as to the landing gear, versus the price tag of that change.

Seeing sound

Traditionally, planners and regulators have struggled to represent aircraft noise levels in a format that the public can easily understand. Computer modeling of noise levels surrounding an airport, for example, might be projected as a contour map showing the different levels of accumulated noise exposure over a 24-hour period, says Biziorek, who contributed to community noise studies for a proposed expansion of London’s Heathrow Airport.

“We’re not able to easily talk about it because it’s not something we can tangibly see,” Biziorek says. “When we try to get people to talk about sound with these numbers and these graphical representations of it, it’s very difficult for people to find a common language and a common understanding.”

Aside from UAM designers and entrepreneurs, acoustics researchers stand to benefit in their efforts to explain why particular structures produce the sounds they do, so that aircraft can be made acceptable to the public. “There’s a lot of information, even for us technologists, that helps us to understand the numbers that we’re generating,” Rizzi says.

Thumps and wshhhes

A key component of the UAM auralization computer program was born from wind tunnel research that colleagues of Rizzi at Langley — whose offices were just down the hall from his — were conducting with a quadcopter drone in 2018. Rizzi and his team could hear a difference between their own computer-generated auralization of the flyover noise from the drone and the actual audio recording from the wind tunnel. They could also see the difference between the two sets of numbers describing the auralized sound and the actual sound: Something was missing from the auralization.

They realized their software accurately predicted one part of the sound from the rotor blades: the “thump-thump-thump” caused by pressure fluctuations on the blade that change as it rotates and by the air displaced by the blade, called periodic blade loading and thickness noise. But they didn’t have a model to predict the “wshhhwshhhwshhh” sound, generated by changing turbulence off the blade or airfoil, known as airfoil self-noise.

Rizzi and his team would need to put both elements together to accurately auralize the sound of UAM aircraft, most of which, like drones, are multirotor designs. Over two years, acoustics researchers developed the self-noise model, and software engineers turned that model into computer code to create sound that humans could hear. As a first step, they needed to know the velocity of the air in the wake of the spinning blades, the angle of attack and the geometry of the rotor blades with the aircraft trimmed, so they obtained those inputs from a comprehensive model of rotorcraft aerodynamics.
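To make the shape of those inputs concrete, here is a minimal sketch in Python of how one blade section’s trim-state data might be bundled together; the container and its field names are illustrative assumptions, not NAF’s actual interface.

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical container for the trim-state inputs described above.
# The class and field names are illustrative, not NAF's interface.

@dataclass
class BladeSectionState:
    wake_velocity: np.ndarray  # induced velocity at the section, m/s
    angle_of_attack: float     # local angle of attack at trim, radians
    chord: float               # section chord length, m
    radius: float              # radial station along the blade, m

# One radial station of one blade, as a rotorcraft aerodynamics
# model might report it for a trimmed flight condition:
section = BladeSectionState(wake_velocity=np.array([0.0, 0.0, -6.5]),
                            angle_of_attack=0.08,
                            chord=0.12,
                            radius=0.9)
```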

They also needed a computer program that could quickly compute the noise from each blade, so they took old code based on a 1989 NASA study of airfoil turbulence noise and rewrote it into a modern computer framework. They fed the data from the rotorcraft model into that framework, and the framework calculated the noise numbers for a rotating airfoil.
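That 1989 reference is most likely NASA’s Brooks-Pope-Marcolini airfoil self-noise report, whose trailing-edge noise terms scale with the boundary-layer displacement thickness and the fifth power of the section Mach number. The sketch below reproduces only that characteristic scaling; the spectral shape function and the constants are placeholders, not the published curve fits.

```python
import numpy as np

C0 = 340.0  # speed of sound, m/s

def self_noise_spl(freqs, u, span, bl_thickness, distance):
    """Rough trailing-edge self-noise spectrum, in dB, at band centers.

    Only the overall scaling (delta* x M^5 x span / r^2) follows the
    1989 model; the hump-shaped spectrum below is a placeholder.
    """
    mach = u / C0
    level = 10.0 * np.log10(bl_thickness * mach**5 * span / distance**2)
    st = freqs * bl_thickness / u                # Strouhal number
    st_peak = 0.1                                # assumed peak Strouhal number
    shape = -40.0 * np.log10(st / st_peak) ** 2  # placeholder roll-off
    calibration = 130.0                          # arbitrary offset, dB
    return level + shape + calibration

# 1/3-octave band centers from roughly 100 Hz to 10 kHz
bands = 1000.0 * 2.0 ** (np.arange(-10, 11) / 3.0)
spl = self_noise_spl(bands, u=150.0, span=0.1,
                     bl_thickness=0.002, distance=100.0)
```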

While they were developing the computer code to predict the airfoil self-noise, they were also writing the code that would turn that noise prediction into the sound humans would hear if the rotors were flying overhead. They calculated noise predictions for every point on a hemisphere-shaped grid surrounding the rotors in their model, like the bottom half of a globe. Each grid point represented what a listener positioned in that direction would hear.

To understand how their model represented the sound changing over time as the rotors passed overhead, picture the listener on the ground pointing up at the hemisphere grid under the rotors. The closest grid point to the listener would represent the initial sound heard. Then as the rotors and the hemisphere continued traveling, the listener on the ground would be oriented toward a different point on the grid, which would be the subsequent sound heard. Every 10 milliseconds the grid point and the sound heard by the listener on the ground would change slightly.
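A minimal sketch of that lookup, assuming a source-fixed hemisphere of precomputed directions and a nearest-grid-point rule (the grid layout and the names here are illustrative, not NAF internals):

```python
import numpy as np

# Sketch of the lookup described above: precomputed spectra live at
# directions on a source-fixed lower hemisphere, and every 10 ms we
# pick the grid direction closest to the line from the moving source
# down to the listener. For simplicity, the vehicle is assumed not to
# rotate, so the source frame stays aligned with the world frame.

def hemisphere_grid(n_az=36, n_el=9):
    """Unit vectors covering the lower hemisphere."""
    az = np.linspace(0.0, 2.0 * np.pi, n_az, endpoint=False)
    el = np.linspace(-np.pi / 2.0, 0.0, n_el)  # straight down to horizon
    azg, elg = np.meshgrid(az, el)
    return np.stack([np.cos(elg) * np.cos(azg),
                     np.cos(elg) * np.sin(azg),
                     np.sin(elg)], axis=-1).reshape(-1, 3)

def emission_indices(source_pos, listener_pos, grid):
    """Nearest grid direction for each source position (one per step)."""
    to_listener = listener_pos - source_pos             # (n_steps, 3)
    to_listener /= np.linalg.norm(to_listener, axis=1, keepdims=True)
    return np.argmax(to_listener @ grid.T, axis=1)      # max cosine

# Flyover example: 30 m/s at 100 m altitude, sampled every 10 ms
t = np.arange(0.0, 10.0, 0.01)
src = np.stack([30.0 * (t - 5.0), np.zeros_like(t),
                np.full_like(t, 100.0)], axis=1)
idx = emission_indices(src, np.zeros(3), hemisphere_grid())
```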

For the “thump-thump-thump” piece of the UAM auralization, meaning the periodic blade loading and thickness noise, the task was easier. They broke the noise into its two components: the pressure on the blade that fluctuates as it rotates, or loading noise, and the air displaced by the blade, or thickness noise. Just as they had for the airfoil self-noise, they computed a noise prediction and auralization model for the periodic noise, wrote a NAF module for it and plugged in their blade loading, motion and geometry data.
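As a toy illustration of that periodic component, the snippet below synthesizes tones at harmonics of the blade-passage frequency; the harmonic amplitudes are assumed for illustration rather than derived from any blade loading data.

```python
import numpy as np

# Illustrative synthesis of the periodic "thump": tones at harmonics
# of the blade-passage frequency, BPF = (rpm / 60) * n_blades. The
# 1/k^1.5 harmonic roll-off is an assumption, not a prediction.

FS = 44100  # audio sample rate, Hz

def periodic_rotor_tone(rpm, n_blades, duration=2.0, n_harmonics=12):
    bpf = rpm / 60.0 * n_blades            # blade-passage frequency, Hz
    t = np.arange(int(duration * FS)) / FS
    signal = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        amp = 1.0 / k**1.5                 # assumed harmonic roll-off
        signal += amp * np.sin(2.0 * np.pi * k * bpf * t)
    return signal / np.max(np.abs(signal))

# e.g., a small rotor spinning at 900 rpm with 3 blades -> 45-Hz BPF
tone = periodic_rotor_tone(rpm=900, n_blades=3)
```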

When the engineers had put together the computer models for predicting and auralizing the components of the blade noise, they ran the combined output through another NAF tool that propagates the noise through the time and space between the source and the listener, accounting for factors that change the sound along the way. Those include the Doppler shift in frequency when an aircraft flies toward or away from a listener, as well as atmospheric absorption. In an AIAA SciTech paper published in January, “Prediction-Based Auralization of a Multirotor Urban Air Mobility Vehicle,” Rizzi and his team demonstrated their prediction and auralization computer program with a six-passenger electric quadcopter design, combining the modeling for each of the design’s four rotors.
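The two propagation effects named above are simple to sketch for a single tone. In the snippet below, the Doppler formula for a moving source is standard physics; the frequency-squared absorption coefficient is a rough placeholder, not the standard-atmosphere values a tool like NAF would presumably use.

```python
import numpy as np

C0 = 340.0  # speed of sound, m/s

def doppler_frequency(f_src, source_vel, source_pos, listener_pos):
    """Received frequency for a moving source and a stationary listener."""
    r = listener_pos - source_pos
    r_hat = r / np.linalg.norm(r)
    v_radial = np.dot(source_vel, r_hat)   # speed toward the listener
    return f_src / (1.0 - v_radial / C0)

def absorption_loss_db(f, distance, alpha_per_khz2=0.005):
    """Placeholder absorption: dB loss growing as f^2 with distance."""
    return alpha_per_khz2 * (f / 1000.0) ** 2 * distance

# Aircraft approaching at 40 m/s from 200 m away and 100 m up: a
# 500-Hz tone arrives shifted up in pitch and slightly attenuated.
src_pos = np.array([-200.0, 0.0, 100.0])
f_heard = doppler_frequency(500.0, np.array([40.0, 0.0, 0.0]),
                            src_pos, np.zeros(3))
loss = absorption_loss_db(f_heard, np.linalg.norm(src_pos))
```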

When the Langley group published the paper, its UAM model was a prototype: It worked but it “wasn’t yet slick” and required a lot of manual labor, Rizzi says. “Now it’s ready for prime time. We can put this in the hands of a user with the proper documentation, and they should be able to replicate the results.”

Producing an auralization with the NAF isn’t as simple as sitting down with the program and typing in some numbers, Rizzi says. An aircraft manufacturer or acoustics consultant, for example, would need the design of the aircraft that will fly, plus systems engineers to develop computer models of the aircraft with the right inputs for the NAF, plus systems analysis people and acousticians developing predictions and validating their predictions with experiments. Then they would need the people who produce the auralizations for test subjects and design the testing to gauge the responses.

Next step: annoyance levels

Within about two years, the Langley group hopes to take the next step in noise prediction: producing a model that could predict annoyance levels without anyone having to listen to an auralization. It would be based on the accumulated results of a series of auralization tests, covering a wide range of sounds, with paid human test subjects. Those experiments would be conducted at the psychoacoustic test facility — an auditorium — at Langley, where listeners can rank noises by level of annoyance and provide other feedback that isn’t captured by traditional numeric measurements, Rizzi says.

The idea would be to directly predict annoyance levels, based on past test results, for any new or existing aircraft design in various environments, Rizzi says.

Biziorek plans to apply the UAM tool to simulate aircraft noise in three dimensions, demonstrating how a particular type of aircraft sounds as it approaches, flies overhead or flies away. Auralizations are typically played for listeners in his company’s dozen or so sound labs, which are specialized auditoriums. For a building developer proposing a rooftop vertiport landing site, for example, Biziorek could layer in how the Doppler effect would change the sound’s pitch, the background noise, how the sound might reflect off or be absorbed by nearby buildings, the different aircraft types being considered for the vertiport and mock arrival or departure sequences, showing how changing each factor would change the sound.

UAM manufacturers so far have been hesitant to share their noise signature data, Biziorek says, but he’s hoping that more of them will start to use the NAF and reveal more “so that we can start to have more informed experiential conversations with city agencies and state agencies and communities that are going to be experiencing these vehicles daily.”

NASA researchers demonstrated their auralization computer program with this six-passenger electric quadcopter design, combining the modeling for each of the four rotors. Credit: NASA
