Special Report: Views on Processing From Out in the Field

We asked five experienced engineers for their opinions about trends in radio on-air processing

What are the most important trends in radio on-air audio processing? We asked several leading engineers for their perspectives from out in the field, part of a series from our recent ebook on the subject.

“Virtualizing audio processing is a big leap forward for the air chain,” said Jeremy Preece, president of Wavelength Technical Solutions.

Jeremy Preece. “Your stream is going up against Apple Music and Spotify, and listeners are used to a more open, dynamic soundstage.”

“Years ago, advances in technology made huge improvements to on-air sound — think CBS Volumax to the Optimod 8100. Now it’s changed into more subtle improvements for the listeners and jumps in processor capability.”

Running a virtual processor, he said, allows one PC instance to process multiple AM/FM stations and HD signals while integrating RDS, streaming and, soon, EAS.

“Upgrades no longer mean replacing hardware, just installing software. Virtual processing has much potential, and I suspect AI will be coming to future platforms.”

Joe Geerling, director of engineering for Covenant Network, said the number of features per dollar has grown significantly.

“Doing all the different processing and providing outputs for all services should be an easy standard for the traditional processor box,” he said.

“It will also be interesting to see the effects of makers and hobbyists, who now have access to very powerful tools at very low cost. I talked to a guy who was hired by a top YouTube channel, and he was excited to put his audio science degree into action with all the new data on how we hear sound.”

Geerling notes that the science of sound was key in developments that changed processing and helped with ratings watermarking. 

“There’s a lot of knowledge coming because of the new tools available both in hardware and software. I love how declipping can work, as well as other tools, and am looking forward to what’s next.”

David Bialik, director of engineering at MediaCo NY, emphasizes the importance of loudness control.

“Radio learned this from television. Compliance to the BS-1770 standard is crucial to building larger average quarter hours,” he said.

“Many radio stations are streaming the content but inserting different commercial content. The challenge to the broadcaster is to match the loudness level of program content and inserted commercials from advertising networks. If there’s a difference, the audience is encouraged to either adjust the volume knob or turn off the stream.”
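The loudness matching Bialik describes can be sketched in a few lines. The function names below are hypothetical, and the measurement is deliberately simplified: a real BS.1770 meter adds K-weighting filters and gating, which are omitted here to keep the arithmetic visible.

```python
import math

def rms_level_db(samples):
    """Mean-square level in dB relative to full scale.

    Simplified stand-in for a BS.1770 integrated-loudness
    measurement; real LUFS metering adds K-weighting and
    gating, omitted here for clarity.
    """
    mean_square = sum(s * s for s in samples) / len(samples)
    return 10.0 * math.log10(mean_square)

def matching_gain_db(program, commercial):
    """Gain in dB to apply to an inserted commercial so its
    level matches the surrounding program content."""
    return rms_level_db(program) - rms_level_db(commercial)

# A full-scale square wave measures 0 dB; one at half
# amplitude measures about -6 dB, so it needs ~+6 dB of gain.
program = [1.0, -1.0] * 1000
commercial = [0.5, -0.5] * 1000
print(round(matching_gain_db(program, commercial), 2))  # 6.02
```

In practice the measurement would come from a compliant loudness meter and the gain would be applied by the insertion system before the commercial hits the stream encoder.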

David Bialik. “Compliance to the BS-1770 standard is crucial to building larger average quarter hours.”

Mike Buckner is chief engineer for the Cumulus Radio Station Group in Nashville, Tenn.

“In recent years, many processors have what some call a declipper on the front end of the box. It’s impossible to restore the information lost from aggressive clipping in the source material, but it is possible to restore some positive peaks, which certainly helps to lessen the perceived density crunch of the source material,” he said.
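The first step of the declipping Buckner describes is finding the flat-topped regions where the source material hit full scale. The sketch below, with hypothetical names and thresholds, only detects those runs; the actual reconstruction of the lost peaks (for example by interpolation) is the harder part and is beyond this illustration.

```python
def clipped_runs(samples, threshold=0.999, min_len=3):
    """Find runs of consecutive samples pinned at or above the
    clip threshold -- the flat-topped regions a front-end
    declipper would try to reconstruct.
    Returns a list of (start_index, length) pairs.
    """
    runs, start = [], None
    for i, s in enumerate(samples):
        if abs(s) >= threshold:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_len:
                runs.append((start, i - start))
            start = None
    if start is not None and len(samples) - start >= min_len:
        runs.append((start, len(samples) - start))
    return runs

# A flat-topped peak spans samples 2 through 5:
audio = [0.1, 0.6, 1.0, 1.0, 1.0, 1.0, 0.6, 0.1]
print(clipped_runs(audio))  # [(2, 4)]
```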

“Unlike many predecessors, most processors are now all software, with very few analog components in the chain other than the MPX to feed the FM exciter.”

A fan of the Omnia.9 box, he continued: “I still use this means so that I can take advantage of the highly superior final clipper in the 9. Between the front-end declipper structure on newer processors as well as precise means of width and peak control in the AGC and multiband, it is much easier to achieve loud, yet perceivably clean audio. This is useful not only to FM, but also HD channels and internet streaming.”

Right tool, right job

Veteran radio and TV engineer Dan Slentz is director of Blue Streak Media at John Carroll University, advisor to WDOG(LP) and a Radio World contributor. He says it’s critical for engineers to understand the method of transmission.

“Just the same as you’d never use an AM processor for FM, we shouldn’t consider streaming as ‘FM audio.’ Why would we limit digital streaming audio to 15 kHz or so to protect a stereo pilot that isn’t there? And other than loudness and keeping analog and digital in sync, why would we not want the HD to have a superior sound to FM? 

“In our case, our HD and FM blend like FM mono and stereo, based on signal strength; but it’s my opinion the change between FM and HD should be like Dorothy walking out of the sepia house into the Land of Oz in stunning color. Why would a listener care about HD if it’s really nothing special to FM?”

Each type of audio should be processed for the particular medium’s strengths, weaknesses and most common location of consumption.

“Our aggressive processing for broadcast comes from the idea that we overcome road noise. But the cars our parents and grandparents drove were very noisy compared to today’s vehicles, especially electric cars. Even then, having air conditioning in a car was a true luxury, so summer meant open-window air conditioning, and the radio HAD to be loud. Quiet passages in songs could not be quiet. We’ve become used to heavy processing and never changed.”

Another consideration, Slentz said, is how audio has changed.

Dan Slentz. “I follow the advice of Bob Orban in opening and listening to every song with a digital editor and making very subtle corrections to the wave file before it ever hits the studio and processing downstream.”

“If we opened up songs from a decade or more ago on a digital editor, we saw ‘waves,’ but now the content we get is nearly ‘flat lining.’ This may be the best term both in how it appears and in the fact that a ‘flat line’ in medicine means the EKG shows no life! This factor should be weighed in our processing,” he continued.

“I follow the advice of Bob Orban in opening and listening to every song with a digital editor and making very subtle corrections to the wave file before it ever hits the studio and processing downstream. I try to make sure all levels, EQ and very simply ‘processing’ allow each song to at least ‘look’ similar when it hits digital automation and processing.” 

Slentz says he has learned much by talking to processing designers at all of the major broadcast equipment manufacturers. “Each nugget of wisdom I could grab from them, I try to weigh when developing my own thoughts.”

Multiplicity of platforms

We asked our respondents how an engineer can possibly choose the best processing when there are now so many ways a listener can consume audio.

“Step one is to have a well thought out audio chain for the mics, audio file standards, level setting, mixing and distribution,” said Joe Geerling.

“Process every mic before it can appear in the system and get mixed into a channel the public can hear. Everything produced goes through at the standard level. Only live audio levels need to be adjusted on the fly.”

Once the audio program feed is correct, Geerling recommends focusing on the type of channel and the best experience you can send down that channel to the listener.

“For FM radio there are some issues because of the 15 kHz bandwidth, pre-emphasis and perhaps minimizing multi-path. Most listeners are in cars these days, so there is a lot of noise in the listening environment to overcome.”

He notes that stations often look for a “sound” to differentiate themselves from competitors. “Sounding competitively loud is all I would care to accomplish — ratings and listening come from content more than tweaking the last bit of loudness or a little thumpier bass out of an FM system. (But I do like the bass!)”

Streams, Geerling continued, should be a dream. 

Joe Geerling. “The best stations have a good balance between mics, remote audio and music, and can achieve some openness of the audio they play.”

“The system is basically digital all the way to the speaker output circuit. Flat, 20–20 kHz audio if you want. You are one of millions of different audio streams that will be listened to on the same device. Getting a pretty consistent level in that −14 to −16 LUFS area is the goal.”

But since streams are “data compressed,” he said, start with a plan that will not have artifacts at the listener end. 

“I always run uncompressed audio to the input of my stream processor and it is never run through the same processing as an FM station. Newer boxes make that feasible, but unless you want the same coloring of the sound on your stream it is not necessary.”

Jeremy Preece believes the answer to this question begins with knowing your audience. 

“Processing for your listeners’ tastes and habits is critical, especially when you want people to stay locked into your station,” Preece said.

“What is your format’s primary demographic? Mostly male or female? Younger or older? Where do they listen most often?”

He feels there is no one-size-fits-all processor. 

“I recommend trying out a few; ask for demos or visit vendors at radio shows. Dedicate quality time to critical listening in different environments. Ask staff or others in your target audience to listen with you. Remember, just because you like the sound doesn’t mean your audience will.”

Mike Buckner says that he no longer tries to please programmers by simply cranking loudness. He now seeks to be as loud and as clean as possible, competitive but sounding superior to most other stations.

“I really believe that listeners hear two things: loud and absolutely godawful, with little in between the extremes. Often ‘loud’ is associated with the station’s signal strength or reach in the listener’s mind, but as we know, it has very little to do with that.”

Mike Buckner. “You can still create your own station ‘sound,’ but it does not have to be listener-fatiguing or destroy what little source material integrity is left.”

With a background in recording, Buckner has deep respect for what artists, record producers and mastering engineers are tasked with. 

“The final product released by the label and artist is intended to sound the way they want it to be consumed. Who are we in broadcast to change that or otherwise destroy the source material to make it fit our needs of distribution?”

He aims to get source material to the listener in the cleanest way possible while striving for consistent loudness so listeners are not wearing out the volume knobs or controls on various listening devices. 

“Traditional broadcast should not vary much from the sound and spectral balance of other means of delivery, such as streaming, Bluetooth, smart speakers, cell phones and so on. Codecs are in the mix with all of those means of distribution, but if processed the right way they can be transparent to the end user.”

In FM he seeks to keep the source material in a linear state all the way to the transmitter. “That way, there is plenty of information in the audio to be able to tastefully shape it for the FM means of broadcast and reception.” The same is true for HD channels and internet streaming. 

Below is a sampler of further thoughts from these experts.

On how to choose a processor wisely:

David Bialik: Trust your ears. Listen on multiple playback devices.

Joe Geerling: Know the station’s goals. For those who desire to be “best of market” or take on a signature sound, be ready to spend a lot. You may just need access to a particular preset. Some stations have unique needs like classical music, where apparent dynamic range is important. High-end processors can handle all formats and are a safe buy; the presets are diverse enough to get you close. But many less-expensive options will work great except for hyper-competitive users. 

What features or capabilities would you like manufacturers to add?

Joe Geerling: Some standardization in hardware, perhaps even including a hardware package in new transmitters, so that the processing software can run inside a lot of hardware. Changing audio processors could become more like changing an on-air system. Maybe some ARM processors and some DSPs with Linux as the OS. Some multiple ethernet outs to simplify sending to streams, monitoring, insertion points, etc. 

Dan Slentz: Better built-in management of timing between the HD and FM processors would be great, with the mod monitor providing a real-time return of the analog and digital signals to the processor so it can maintain proper timing. Also, more intuitive and advanced RDS. Some might debate the best location, but for me, having it inside the processor keeps my air chain cleaner and less complex, with no “widget boxes” inserted.

If loudness was a goal in the past, what characterizes processing at most successful radio stations today? 

Mike Buckner: Clarity and intelligibility are king, as is tasteful loudness. You can still create your own station “sound,” but it does not have to be listener-fatiguing or destroy what little source material integrity is left once it leaves the record label and is released into the wild. 

Jeremy Preece: Quality over quantity. We need to stop playing loudness wars on the FM dial; there is no reason to be the loudest station on the dial if people can only listen for a few moments before becoming fatigued. We are competing more and more with online content that is processed very differently, if at all, and listeners are becoming accustomed to that sound. Aim for quality, with more openness and dynamic range, and less aggressive clipping. 

Joe Geerling: Loudness was more about impact and perception. I don’t think that will ever change in competitive audio. I still hear a lot of overprocessed music. The best stations have a good balance between mics, remote audio and music, and can achieve some openness of the audio they play. Just my opinion. Part of processing I would include is the pace of the music. More than a few PDs sped up music to make it sound more alive!

Should a station use the same processing on its streams that it uses on the air?

Jeremy Preece: No. First, your stream is going up against Apple Music and Spotify, and listeners are used to a more open, dynamic soundstage. Plus, overprocessing and clipping leads to poorer audio quality due to the nature of codec compression algorithms. Process streams so they match the “color and texture” of the broadcast sound, but with greater focus on increased dynamic range, peak control and reduced clipping. 

Mike Buckner: Absolutely not, if clarity and integrity of the streams is intended. FM is uncompressed and capable of passing lots more information to the radio that a codec otherwise throws away in the process. To process digital audio for streams, HD channels and the like, use another core or audio path in the processor. Most current models have that option. 

In many cases, the audio is unprocessed or “de-clipped” on the front end, then hits the main AGC bands before it splits off to FM and HD processing chains. Simply taking the FM path, lopping off some harsh high frequencies and aiming it at the stream or HD importer will result in unintended audio artifacts in the final product. Processing the audio for digital distribution separately gives the opportunity to shape it further for the codec’s bandwidth and achieve much cleaner perceived audio, with fewer audible coding artifacts reaching the listener’s ears. An understanding of the codecs is crucial.

Joe Geerling: Process the audio to the stream only enough to cover the listener environment. Is it mainly in a car? A tabletop speaker? Earphones? The car would be the only one where consistent loudness is desired. Leave as much of the natural dynamics as possible while maintaining that target of −14 to −18 LUFS. If your programming is pre-recorded you can run software to hit the LUFS goal.
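Geerling's suggestion of hitting a LUFS goal on pre-recorded material reduces to a static gain. The helper below is a hypothetical sketch: it assumes the integrated loudness has already been measured by a real BS.1770-compliant meter, and only computes the dB offset needed to land inside a target window of roughly −18 to −14 LUFS.

```python
def gain_to_window(measured_lufs, lo=-18.0, hi=-14.0):
    """Static gain in dB to move a file's measured integrated
    loudness into the target window; 0.0 if already inside.
    Assumes measured_lufs comes from a BS.1770-compliant meter.
    """
    if measured_lufs < lo:
        return lo - measured_lufs   # boost a quiet file
    if measured_lufs > hi:
        return hi - measured_lufs   # attenuate a hot file
    return 0.0

print(gain_to_window(-23.0))  # 5.0  (boost up to -18)
print(gain_to_window(-9.5))   # -4.5 (pull back to -14)
print(gain_to_window(-16.0))  # 0.0  (already in window)
```

Applying the minimum gain needed to reach the window, rather than forcing every file to one number, preserves more of the natural dynamics Geerling mentions.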

How can engineers assure the quality of audio for online or apps?

Jeremy Preece: It begins with a good input. I recommend splitting the feed to the stream encoder off from a raw source (not from the second output of the FM processor), and using a processor that is designed for streaming, such as Thimeo StereoTool, Omnia A/XE or Optimod 6300. Spend quality time listening to the stream on different devices under different circumstances. Earbuds sound much different than Bluetooth streaming in the car or listening on a Roku-connected TV. Set levels carefully and never allow peaks to clip at the input of the encoder. Avoid overprocessing, especially limiting and clipping, and, if possible, pull metrics to see what devices your listeners favor and adjust as needed.

Dan Slentz: With streaming, which I think of as a radio version of an “over the top” channel, I like to use Bluetooth direct to my car radio because I can compare my OTA with my OTT streams. They are out of sync since streaming is about 30 seconds behind, but I can tell this way if there’s a compromise to either audio. I find the stream is a bit cleaner and crisper, but I keep the overall processing sounding like “the same station,” just with a bit more high end.

Should processing on multicast channels be different from the main signal? 

Jeremy Preece: As with internet streams, HDs should be processed differently than FM, but with a caveat. A transparent FM-HD1 blend is important, so processing should generally match between the two (modern processors often have a dedicated HD output with no final clipper, which makes a big difference), but HD SPS channels should be processed more like internet streams: reduced multiband and final limiting/clipping, increased dynamic range and good level control. HD SPS channels should peak at the same level as HD1 to provide consistency. The use of a processor designed for HD is best, and it’s good practice to use an HD-capable modulation monitor for level setting.

Other tips?

Dan Slentz: Start with a manufacturer’s preset on your processing, then make subtle changes. Give yourself a day or two to get accustomed to the sound, then repeat as necessary. Turn off any “enhancers” in the receiving device.

Joe Geerling: Make sure you can trust your monitoring methods. When you’re asked to get louder by a PD, or when you’re listening to the competition and want to go bigger, know your mod monitoring setup — how it handles peaks, how accurately it is calibrated, and whether it tracks low to high frequencies accurately. Maybe send test tones overnight. Check the rules on modulation and be aware of the effects of modulating different parts of the baseband on tuners.

Jeremy Preece: Audio processing is just one piece of the puzzle. Make sure any IP/STL codecs are using the highest possible bitrates and that the source content is pure. Don’t pre-process before the FM processor; deliver content to the processor in the rawest form you can afford. Whenever possible, feed your transmitter MPX instead of utilizing the transmitter’s internal stereo generator. And in my opinion, less is more. People can adjust their radio’s volume knob, but they cannot adjust distortion from overprocessing.

Read opinions from other experts in the free Radio World ebook.
