

Five Processing Considerations to Keep in Mind

Audio in and around media now takes many forms, so what are the main considerations in managing it?

The technology of audio processing now encompasses a wide range of types developed to suit the various transmission platforms available to broadcasters. Mike Erickson, systems and support engineer and all-around hands-on processing guru at Wheatstone Corp., breaks these choices down into what are basically five categories: AM broadcasting, FM broadcasting, HD broadcasting, web streaming and podcasting.

For each of these systems there is a specific type of processing that will provide the best results. Over time, the industry has recognized these differences and developed processors aimed at each of these applications.

Erickson’s NAB Show presentation “Processing the Many Forms of Audio Delivery” is part of the Saturday afternoon lineup of engineering sessions. I spoke with him at length ahead of the show. The session will be co-presented with the company’s Senior Sales Engineer Brad Harrison, who will discuss audio processing for video delivery.


For radio and webcasters the audio “sound” is possibly the most important technical component for success. Audio processing for broadcasting at its core involves manipulation of signal dynamics to provide the best possible listening experience. Given this common goal, it would seem that the industry would have converged on a single-solution box that offers enough tools for any application. But this isn’t the case.

“At Wheatstone, we used to offer a Jack-of-All-Trades kind of box, designed to handle any application,” Erickson said. “But it turns out customers didn’t want that — they wanted processors that were optimized for each kind of delivery system.”

Each type requires a specific suite of tools to optimize the sound. Broadcasters should consider these when choosing the appropriate processing for each of their audio delivery platforms. By selecting platform-specific processing, he says, broadcasters can achieve the desired sound for each platform more simply and at lower cost.


To explore one example, Erickson examined the original forms of broadcasting in the electromagnetic spectrum, AM and FM, to demonstrate the unique limitations imposed by these formats. Broadcasting over the air through high-power transmitters is an efficient and effective way of reaching audiences. However, because of the allocation system and the scarcity of available spectrum, both technologies are constrained by legally imposed modulation limits and by receiver technology.

Looking at the first of these in more detail, amplitude modulation is sharply constrained by a narrow 10 kHz channel spacing and the consequent need to prevent interference to adjacent stations. At the same time, AM is the most susceptible of all delivery systems to impulse noise, both atmospheric and man-made, which is demodulated as audible noise in the receiver. This imposes a tight limitation on the dynamic range of the audio.

These two aspects of AM will drive the optimum methods for processing. To avoid interference the audio must be band-limited with a “brick wall” high-frequency filter. To stay above the noise, the audio must be high-frequency pre-emphasized and compressed substantially. Techniques to create apparent loudness come into play.
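As a rough illustration only (not Wheatstone's actual algorithms), the three steps described above can be sketched in a few lines of NumPy. The pre-emphasis coefficient, compressor settings and filter length are arbitrary placeholder values chosen for the sketch:

```python
import numpy as np

def preemphasis(x, coeff=0.9):
    """First-order high-frequency pre-emphasis: y[n] = x[n] - coeff * x[n-1]."""
    y = np.empty_like(x)
    y[0] = x[0]
    y[1:] = x[1:] - coeff * x[:-1]
    return y

def compress(x, threshold=0.5, ratio=4.0):
    """Static sample-wise compressor: level above the threshold is reduced
    by the given ratio, raising apparent loudness after make-up gain."""
    mag = np.abs(x)
    over = mag > threshold
    gain = np.ones_like(x)
    gain[over] = (threshold + (mag[over] - threshold) / ratio) / mag[over]
    return x * gain

def brickwall_lowpass(x, cutoff_hz, fs, taps=255):
    """Windowed-sinc FIR low-pass that band-limits audio before modulation."""
    n = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(2 * cutoff_hz / fs * n) * np.hamming(taps)
    h /= h.sum()                      # unity gain at DC
    return np.convolve(x, h, mode="same")

# Hypothetical test tone run through the chain; a real AM processor uses
# far more elaborate multiband dynamics and filtering than this.
fs = 44100
t = np.arange(fs) / fs
audio = 0.9 * np.sin(2 * np.pi * 440 * t)
processed = brickwall_lowpass(compress(preemphasis(audio)), 4500, fs)
```

The ordering and the single-band design are simplifications; broadcast processors typically split the audio into several bands and apply dynamics per band before final filtering.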

Audio processing for FM broadcast comes with its own limitations that make it different from AM. Once again, the legal limitations imposed by limited spectrum and allocation systems set the basic parameters of what is possible in FM. In Europe, loudness regulations on commercial content add further considerations that the processing must address.


In recent years our increased understanding of the masking properties of audio has led to an explosion of new delivery methods that use data compression techniques to deliver audio as data streams, either in real time or downloaded for delayed playback. The three main streaming techniques, HD Radio, webcasting and podcasting, share the common technical limits of digital transmission, but each has its own specific requirements.

One common aspect of all streaming systems is the need to reconstruct a digital audio file that has been broken into packets and transported to a destination with no timing control over their reception. On the listener end, these packets must be reassembled into “streams” that restore the original timing of the audio. Building time-accurate streams inherently introduces delay, most notably in HD Radio, which requires a full 7–8 seconds of buffering to be decoded successfully.
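To make the reassembly step concrete, here is a minimal sketch of a receive-side jitter buffer (not any particular broadcast standard, and simplified to ignore packet loss and late arrivals): packets are held in a buffer until a playout depth is reached, then released in sequence-number order.

```python
import heapq

def reassemble(packets, depth=4):
    """Reorder out-of-order (seq, payload) packets into a stream.

    Packets are held in a min-heap keyed on sequence number; once `depth`
    packets are buffered (the playout delay), the lowest sequence number
    is released. The depth is why stream reception always adds latency.
    """
    heap, out = [], []
    for seq, payload in packets:
        heapq.heappush(heap, (seq, payload))
        if len(heap) >= depth:
            out.append(heapq.heappop(heap)[1])
    while heap:                       # drain what remains at end of stream
        out.append(heapq.heappop(heap)[1])
    return out
```

A larger `depth` tolerates more reordering at the cost of more delay, which is the same trade-off behind the multi-second HD Radio buffer mentioned above.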

The additional requirement for HD Radio to “fall back” to its analog main channel audio defines its uniqueness relative to the other two digital platforms.

“With HD Radio, you have a highly data-compressed primary stream that has to reasonably match the analog FM loudness and spectral balance. At Wheatstone, we find that combining the early stages of FM processing with HD helps, although the processing on the output stages must be utterly different,” said Erickson. At the same time, the very critical delay timing can be precisely adjusted in the same processor, eliminating the need for an external delay device, and allowing the use of composite processing on the analog channel.
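As a toy illustration of that diversity-delay idea (the real alignment happens inside the processor, and any numbers here are placeholders), holding back the analog path so it lines up with the delayed HD decode amounts to a fixed delay line:

```python
from collections import deque

def align_delay(samples, delay_samples):
    """Fixed delay line: holds the analog-path audio back by `delay_samples`
    so it can line up with the multi-second HD Radio decode buffer.
    Sketch only; the delay is padded with silence (zeros) at startup."""
    buf = deque([0.0] * delay_samples)
    out = []
    for s in samples:
        buf.append(s)
        out.append(buf.popleft())
    return out
```

At a 44.1 kHz sample rate, an 8-second diversity delay would correspond to `delay_samples = 8 * 44100`; doing this inside the processor is what removes the need for an external delay box.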

One other aspect of audio processing for digital streams is the need to compensate for any effects of the digital audio codecs that are employed. These effects become more pronounced the more loudness is desired.


Webcasting and podcasting raise unique considerations.

“There are still program directors who record the audio on their competitors’ web streams in order to make loudness adjustments for a possible advantage. There remains a feeling amongst some that loudness can help increase popularity. At the same time, if we tried to use the loudness techniques we employ on analog FM, it would be hard for a web stream to hold listeners for long periods of time.”

The introduction of distribution methods like Alexa, with its support for higher data rate and more modern audio codecs (AAC+ vs. MP3), offers a vision of the future of webcasting with potentially greater flexibility in audio processing.

In contrast, a new perspective appears to be taking hold in the world of podcasting. “It’s pretty hard for a listener to make loudness comparisons directly from podcast to podcast; there’s no button on the dashboard to ‘punch over’ to a competitor during a commercial,” said Erickson. Processing on podcasts tends to take a back seat to content, although it remains important.

Erickson will offer more ideas on how to process audio on these crucial new technologies during his presentation. In addition, he has tips and suggested tools that can assist in the setup of audio processing for all the delivery platforms, including the latest digital forms. It promises to be an informative session for audio engineers working in any delivery medium.