
Virtual Roundtable: Trends in Processing

Not enough? Too much? That is the processing question.

Imbued with seemingly magical healing powers, no piece of equipment in audio has more mystery than the processor. Radio World’s latest Virtual Roundtable gathers together several Merlins of audio processing to tell us their tales of processing.

Frank Foti, Omnia Audio

Be honest — is there really anything new in processing? Today’s flagship products are powerful — they have enormous processing power and incredible algorithms, and they are designed and built by audio geniuses with decades of experience (building on decades of previous audio research). How much better can they get?

Bob Orban, Orban: Answering this question requires a crystal ball. For example, in 2010 I was surprised how much improvement our new MX limiter technology could make in the FM analog peak limiting chain. If you had asked me this question in 2008, I would have said that the Optimod 8500 is a mature, fourth-generation product that would be difficult to beat. That being said, it’s hard to imagine dramatically improving our current fifth-generation technology. Creating the MX limiter technology took two years of R&D effort. It’s a great example of Edison’s famous quote about innovation being “One percent inspiration and 99 percent perspiration.” MX technology is extremely complex, and I certainly don’t see a path forward to any simple and straightforward improvements that will make as big a difference as the MX technology did compared to the technology used in our 8500.

An important feature that we are about to add to our high-end FM processors is a new method of multipath mitigation. This transforms a stereo recording into so-called “intensity stereo.” By eliminating phase shifts between the left and right channels, this technology not only corrects phase skew due to head gap misalignment in analog tape recorders (as do current-technology “phase correctors”), but it will also correct problems such as comb-filtering due to a single instrument (like a snare drum) being picked up by more than one microphone. Removing interchannel phase shifts minimizes the amount of energy in the stereo subchannel without compromising the width of the stereo soundstage or reducing separation. Minimizing the amount of energy in the stereo subchannel minimizes multipath distortion at the receiver.
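To make the idea concrete, here is a minimal sketch of interchannel phase alignment, assuming stereo float arrays and scipy. This is not Orban’s production algorithm, only an illustration of the principle: once both channels share the same phase in each frequency bin, any remaining channel difference is amplitude-only (“intensity stereo”), so energy in the L-R stereo subchannel drops. The function name and parameters are invented for the example.

```python
# Illustrative sketch only; the production algorithm is far more
# sophisticated. Assumes numpy/scipy and float stereo input at rate fs.
import numpy as np
from scipy.signal import stft, istft

def align_interchannel_phase(left, right, fs, nperseg=2048):
    _, _, L = stft(left, fs=fs, nperseg=nperseg)
    _, _, R = stft(right, fs=fs, nperseg=nperseg)
    # Give both channels the phase of the mid (L+R) signal in every bin,
    # while keeping each channel's own magnitude.
    phase = np.angle(L + R)
    _, left_out = istft(np.abs(L) * np.exp(1j * phase), fs=fs, nperseg=nperseg)
    _, right_out = istft(np.abs(R) * np.exp(1j * phase), fs=fs, nperseg=nperseg)
    # With identical phases, L-R carries only amplitude differences, so
    # stereo-subchannel energy (and multipath susceptibility) is minimized
    # without narrowing the amplitude-panned soundstage.
    return left_out, right_out
```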

An important question these days is how much ancillary processing should be incorporated in the on-air audio processor and how much should be left in the production studio. For example, some of our competitors incorporate declipping and upward expansion in their on-air audio processors, yet I believe that these algorithms can damage a significant amount of existing program material, so the best place for them is in the production studio where they can be monitored by human ears and only be used when they actually improve quality. Additionally, there is always a question as to whether such processing is musically valid or whether it circumvents the intent of the original artist and producer. Although designing digital audio processors requires hardcore math, ultimately they are a fusion of science and art, and using them requires taste, judgment, musicality and audience research.

Frank Foti, Omnia Audio: I’m always honest! Actually, there is plenty new in audio processing, and it’s a testament to what’s always been the mantra of our company — keep pushing the bar forward!

Omnia always strives to find ways to improve what we hear via audio processing. Take, for example, a couple of algorithms we recently developed: Undo and Solar Plexus. Both are designed to audibly improve what the listener hears.

Undo is the creation of Leif Claesson, and it answers the challenge of dealing with hypercompressed audio. It contains declipper and multiband expansion functions that restore fidelity to source material that was badly damaged (some might say destroyed) in the production or mastering process. It does this automagically, and it’s smart enough not to affect source material that was properly recorded and mastered.
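Undo itself is proprietary, but a toy declipper shows the general shape of the problem: find the flat-topped runs where the waveform hit the rail and re-draw them from the surrounding intact samples. A minimal sketch, with an assumed clip threshold:

```python
# Toy declipper sketch; Omnia's Undo is proprietary and far smarter.
import numpy as np
from scipy.interpolate import CubicSpline

def declip(x, threshold=0.98):
    """Replace samples at/above an assumed clip level with a cubic
    spline drawn through the surrounding unclipped samples."""
    bad = np.flatnonzero(np.abs(x) >= threshold)   # presumed clipped
    good = np.flatnonzero(np.abs(x) < threshold)   # presumed intact
    if bad.size == 0 or good.size < 4:
        return x                                   # nothing to repair
    y = x.copy()
    # The spline overshoots past the clip level, reconstructing a
    # plausible (but only guessed) peak shape.
    y[bad] = CubicSpline(good, x[good])(bad)
    return y
```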

Solar Plexus grew out of a long-held desire of mine. It’s a musically natural algorithm that literally restores bass and sub-bass lost in the recording, production or mastering process. Take, for example, “Baba O’Riley” by The Who. John Entwistle’s bass guitar is very lackluster; it’s that way on the entire “Who’s Next” album, but this song always came to mind for me. Solar Plexus makes the bass guitar sound as if it had been recorded with today’s microphone techniques, with a low end that is naturally solid; hence the name Solar Plexus. It’s another “smart” algorithm: it will enhance content as described above, but it will not affect speech or any material on which the effect would sound synthesized or fake.
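Solar Plexus is likewise proprietary, but one classic way to synthesize missing octave-down energy, sketched here purely as an illustration and not necessarily Omnia’s method, is to isolate the existing bass band and halve its instantaneous phase, producing a sub-bass component whose envelope tracks the original bass line:

```python
# Illustrative sub-bass synthesis sketch; not Omnia's actual algorithm.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def add_subbass(x, fs, amount=0.5):
    # Isolate the low-bass band that will drive the synthesized octave.
    bp = butter(4, [60, 120], btype="bandpass", fs=fs, output="sos")
    bass = sosfilt(bp, x)
    # Halving the instantaneous phase of the analytic signal yields a
    # tone one octave down that follows the original bass envelope.
    analytic = hilbert(bass)
    sub = np.abs(analytic) * np.cos(np.unwrap(np.angle(analytic)) / 2.0)
    # Keep only true sub-bass from the synthesized component.
    lp = butter(4, 60, btype="lowpass", fs=fs, output="sos")
    return x + amount * sosfilt(lp, sub)
```

A real implementation would also need the “smart” gating Foti describes, so speech and already-solid material pass through untouched.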

Add to that our work in recent years regarding single sideband usage for FM-Stereo transmission, along with the creation of MPX over AES (done in collaboration with Nautel), and hopefully you can see that we are not sitting around repackaging our products and slapping a new name or number on them.

Since we’re being honest with one another, I think it’s important to point out who’s innovating and leading, as compared to others who are outdated and offering me-too stuff.

Omnia has been leading for quite a while now. We initiated the work on Single Sideband Suppressed Carrier for FM-Stereo transmission. In 1998, we introduced the concept of a digital MPX connection to an exciter, and later realized it along with our friends at Nautel. Add to that our efforts with Undo and Solar Plexus, and it’s easy to see there is only one group of processing folks doing creative work that yields results the listener can hear.

Additionally, and by design, we have the next generation of processing developers. Leif Claesson, Cornelius Gould, and our collaborative efforts with Sound4 are prime examples of our bench strength. Concept and product development doesn’t fall onto the shoulders of one person, or someone who’s long past their prime. Omnia is only getting stronger.

Bob Orban, Orban

Is there something, some area that the modern processor isn’t hitting or isn’t doing?

Frank Foti, Omnia Audio: I’m writing this while attending the AES show in Los Angeles where I’m on a panel discussion about audio processing and a question in the abstract is “Will the audio processor save the world?” (Really, it’s part of the abstract!)

So, in answer to that question: no, it’s not possible to save the world … yet!

I feel there’s room to grow in the area of metadata and transmission processing. We now have the means to package information about a program’s dynamics along with the audio, so the end-user experience can be enhanced. It could even be done seamlessly: the receiver could adapt itself to the listening environment and render the recovered signal accordingly, so the audio sounds amazing all the time.
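As a purely hypothetical sketch of what such dynamics metadata might look like (every field name below is invented, not drawn from any published broadcast standard, though it is similar in spirit to existing loudness/DRC metadata schemes):

```python
# Hypothetical dynamics-metadata sketch; field names and values are
# invented for illustration only.
from dataclasses import dataclass

@dataclass
class DynamicsMetadata:
    integrated_loudness_lufs: float  # e.g. -16.0 for the whole program
    true_peak_dbtp: float            # highest true peak in the segment
    drc_gain_db: list[float]         # per-frame gain a receiver MAY apply

def render_frame(samples, gain_db, environment="car"):
    """Receiver-side sketch: apply more of the transmitted dynamic-range
    control in noisy environments, little or none in quiet ones."""
    blend = {"car": 1.0, "kitchen": 0.6, "living_room": 0.2}[environment]
    return samples * 10 ** (blend * gain_db / 20.0)
```

The point of the design is that the dynamics decision is deferred to the playback device, which knows the listening environment, rather than baked irreversibly into the transmitted audio.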

Jeff Keith, Wheatstone: Modern processors can deliver amazing on-air sound even under some of the worst conditions, but one must understand that no on-air processor is a magic bullet that will fix broken and underperforming air chains or bad source audio. Even “all digital” air chains can have very common issues such as insufficient headroom. To get the very best on-air sound, good engineering practice dictates that the engineer be intimately familiar with every single thing that’s in his air chain, why it’s there, and how it works. In every case I’ve seen, the best-sounding stations understand this to the nth degree and have it nailed to perfection.

Mike Erickson, Wheatstone: The short answer to the question is yes. But it’s not the area you would think … While we are always trying to improve the sound and functionality of our processors, we are also trying to make them friendlier in the environments they work in, in how they interface with other gear, and so on.

Bob Orban, Orban: Different processor manufacturers will disagree on which processing blocks should be in-line and operate automatically, and which processing blocks are better used in the production studio because one size does not fit all.

Jeff Keith, Wheatstone

When you listen to today’s radio stations, is there an issue with on-air signals that you see that isn’t being properly addressed?

Frank Foti, Omnia Audio: Tough question, as it is so market-dependent. What works well in one market may roll over dead in another, yet both markets feel they’re establishing some sort of processing “benchmark.” It’s so subjective.

To my ear, processing-induced intermodulation distortion (IMD) is so annoying. I can just about always tell another processor because the transients get sucked out of all the content, especially drums and cymbals, and it sounds as if my head is in a vise.

Jeff Keith, Wheatstone: It’s something that I see over and over again in the field: poor source quality. Many stations, in an effort to be the first in their market to break a new song, play songs that have been downloaded from some Internet site somewhere. Most of that material, in an effort to save download bandwidth and perhaps even hard drive storage space, has already been conditioned by a low-bitrate codec. Unfortunately, those songs don’t sound very good before they go through the on-air processing, and they sound even worse afterwards. Of course, many stations intend to replace those songs later with better-sounding, linear material, but that almost never happens. Once the song is happily nestled in the station’s play-out system, it stays there — to sound bad on the air — forever.

Mike Erickson, Wheatstone: I think there’s a lot of misinformation lately about what exactly a processor can do for the transmitted signal. While there are some innovations that do help, oftentimes it’s either a placebo effect or you’re sacrificing something else, potentially as damaging as the original problem you set out to correct. In the end, good engineering practice before and after the processing is what makes the on-air signal sound great!

Bob Orban, Orban: I am not qualified to second-guess the judgment of programming professionals, who are paid to get ratings and whose jobs are on the line if they don’t. My job is to give these professionals the tools they need to achieve their programming goals.

Mike Erickson, Wheatstone

Do users and GMs expect too much from processors? Are their abilities overpromised? Can they turn the sow’s ear into a silk purse?

Jeff Keith, Wheatstone: I can’t harp on this enough: The radio station’s on-air sound can “never” be better than that of the original source. From my point of view, some customers expect their newly purchased on-air processor to deliver some kind of magical fix to their bad audio. Even worse, there have been so-called “fixes” offered by some that, on the surface, do appear to work like magic. The problem is, these algorithms can’t remove distortion caused by clipping in the mastering stage, regardless of how pretty-looking a waveform they can make out of that clipped audio. Only the device that originally clipped that audio “knows” what was beyond the clip level; that information is gone, unavailable to the declipper. That makes any declipping process a guessing game that can only add new distortions that weren’t even in the original clipped waveform (this can actually be proven mathematically).
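A few lines of code make the point concrete. The two signals below differ only above the clip level, so they clip to bit-identical outputs; no declipper, however clever, can tell which one it was handed (a clip level of 1.0 is assumed):

```python
# Demonstration that clipping is not invertible: two different inputs
# produce the identical clipped output (assumed clip level of 1.0).
import numpy as np

t = np.linspace(0, 1, 48000, endpoint=False)
a = 2.0 * np.sin(2 * np.pi * 5 * t)          # overdriven low-frequency sine
detail = 0.3 * np.sin(2 * np.pi * 200 * t)   # extra detail on the peaks
b = a + detail * (np.abs(a) > 1.3)           # differs only well above clip
print(np.array_equal(np.clip(a, -1, 1), np.clip(b, -1, 1)))  # True
```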

Mike Erickson, Wheatstone: I think there’s an expectation that a new processor will magically heal all the warts, and that’s rarely the case if you have compromised audio sources or a compromised transmission system. In fact, if your audio sources are compressed, you run a very good chance that the newer, cleaner algorithms in a new audio processor will reveal even more bad details in that audio than your previous one did. I think we’ve gotten to the point where marketing may be trumping science and that’s a problem. I’m waiting for someone to declare that mounting a processor upside down in the rack reduces distortion and improves stereo separation … and I guarantee, if someone said that, you’d see processors mounted upside down in racks in some stations.

Bob Orban, Orban: Processors are very good at correcting source-to-source loudness differences and inconsistencies in spectral balance. However, they cannot fix quality problems caused by lossy compression (such as MP3) that has been applied to the source. It is risky to expect online processors to undo clipping and hypercompression applied to the source material, particularly since some of this may be part of a given musical style, reflecting artistic choices made by the original artists and producers.

In the case of Orban’s processors I try not to promise more than I can deliver; the engineer in me dislikes hyperbole, and if I make a claim about performance, I prefer being able to back it up with research and measurements.

Frank Foti, Omnia Audio: Is this a variation of the recent AES panel discussion, “Can the audio processor save the world?” Not sure why, but I’m getting a sense of cynicism regarding processing from some folks out there. Come on guys, you’re shooting the messenger.

Audio processors, especially today’s generation, are very, very powerful tools. But, as with any tool, the artisan needs to understand it, and then do what they can to achieve maximum results with it. If you have the latest and greatest sports car and the driver keeps crashing it, is that the car’s fault?

I think users and GMs all still want the same result: good clean audio that’s competitively loud. That’s not going to change.

How much processing is needed when many stations’ material is little more than crushed MP3 files streamed off a hard disk? Or does that provide a challenge for the processor operator — to revive those songs?

Bob Orban, Orban: With the low cost of modern disk storage, there is no excuse for using degraded sources. Information lost to lossy compression cannot be recovered. The best one can do is make educated guesses about what might have been there before compression. This is risky business (the guesses can be unmusical and/or damage the material further), and is best left to the production studio. Even there, it is usually impossible to undo the quality degradation caused by severe lossy compression.

Jeff Keith, Wheatstone: Crushed MP3 files are one thing, but the challenges of FM pre-emphasis and competitive loudness demand the use of very competent on-air processors in order to turn those files into something that’s even passably listenable. Most on-air processors have tools that can help make those songs sound better than they otherwise would, but again, the best on-air sound is achieved by playing material that is linear, that is, material that has not been through a codec (or two). Expecting a different outcome is foolish.

Mike Erickson, Wheatstone: There’s no way to revive those songs … The best thing you can do is replace them. When that’s not practical, whatever you do when setting up the audio processor is going to be more of a compromise. You have to remember that. If your competition is running uncompressed files and you are not, it’s going to be hard to be as clean because you’ve already started out on the wrong foot at the source.

Frank Foti, Omnia Audio: It’s been said for a few decades now: to process on-air for quality and competitive loudness, you must use the least amount of data compression possible, or ideally none at all. Today, storage is cheap. There’s no reason a library of music needs to be low-bitrate MP3 files. And it’s no myth that if you use low-bitrate MP3 files with moderately heavy processing, not-so-nice things will happen to your audio.

On the other hand, does the promise of pristine HD Radio give new life to processors?

Bob Orban, Orban: The main advantages of HD Radio are (1) it is not subject to multipath distortion, (2) it uses no preemphasis, so it has adequate high-frequency headroom for modern music, and (3) the gain of the HD Radio signal path at the receiver is 5 dB greater than that of the FM analog signal path, meaning that the HD Radio signal requires much less peak limiting than does the FM analog signal to achieve loudness parity between them.
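A back-of-the-envelope reading of that third point (the arithmetic below is an illustration, not taken from the HD Radio specification): both signal paths share the same peak ceiling, so for equal perceived loudness at the receiver,

```latex
\bar{L}_{\mathrm{FM}} + G_{\mathrm{FM}} = \bar{L}_{\mathrm{HD}} + G_{\mathrm{HD}}
\quad\Longrightarrow\quad
\bar{L}_{\mathrm{FM}} - \bar{L}_{\mathrm{HD}} = G_{\mathrm{HD}} - G_{\mathrm{FM}} = 5\ \mathrm{dB}
```

where L-bar is the average transmitted level and G is the receiver path gain. The analog FM chain must therefore run its average level about 5 dB closer to the 100% modulation ceiling, which works out to roughly 5 dB more peak limiting, to match the HD channel’s loudness.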

However, the HD Radio codec is now decades old. It is not state-of-the-art, and it introduces audible codec artifacts at lower bit rates, particularly 32 kbps. These artifacts can be mitigated, but not eliminated, by artful preprocessing.

Mike Erickson, Wheatstone: All digital paths can help reliability and increase signal-to-noise ratio. But that’s not to say that analog, which has fairly high signal-to-noise ratios in modern equipment, is a sin. What we’ve tried to do with every new step in audio processing is to maintain the loudness we had with the previous design and improve the algorithms so that you hear less of the processing and more of the source.

Personally, I’m not a huge HD Radio fan … I can hear the codec in the audio, and it’s much more annoying to me than the blemishes of FM radio, especially if the FM station is processed tastefully. There are some very cool tricks we can play in the back end of an HD processor to try to minimize the artifacts that would otherwise come up once the audio has passed through the codec … And even though I think the potential audio on the HD channels has gotten better in situations where the processing is set up correctly, the artifacts of the codec still don’t sit well with me versus FM.

This leads to another point: stations that have compressed audio files and transmit in HD. When that audio reaches the listener on the HD path, it will have been pushed through at least two audio codecs before it gets to your audience’s ears. So while compressed audio doesn’t help FM, it has the potential to make the HD broadcast even worse.

Jeff Keith, Wheatstone: In my opinion, HD Radio “can” sound better than conventional analog FM; however, it’s still audio that’s been through a bit-reducing codec. HD Radio sounds better than conventional FM only because in HD, the processor doesn’t have to deal with the huge amount of high-frequency pre-emphasis that’s used on the analog side. With that limitation out of the way, HD FM can sound remarkably good, even if it’s not “linear” audio.

Do all-digital signals create their own problems? What are those?

Mike Erickson, Wheatstone: There’s still no free lunch, but digital signals do minimize issues over analog, both in cost and in implementation time. When you consider what we can do with audio over IP, it’s quite amazing where we are right now. But there’s always a potential for problems and it usually comes in the form of IP address conflicts or someone plugging something into a switch that shouldn’t be there. If you manage your facility and use a bit of common sense and good engineering practice, there’s no reason why you can’t be successful and flexible in an all-digital plant.

Jeff Keith, Wheatstone: Actually, good engineering practice at the station level usually keeps all the weird things that can happen with digital audio out of the way. However, the most common digital audio issue that I see in the field has to do with audio headroom and a misunderstanding of the difference between dBFS and dBu/dBm. During the analog years we got pretty used to the fact that there was “more” above 0 VU, so banging above zero on the old VU meter wasn’t all that dangerous. But that’s not how it works in the land of dBFS — there is no more left above zero, that’s it — you’re simply out of bits.
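A short snippet illustrates the point, assuming float samples where digital full scale is 1.0; the helper function here is invented for the example:

```python
# Sketch of Keith's point: 0 dBFS is an absolute ceiling, not a nominal
# level with headroom above it. Full scale is assumed to be 1.0.
import numpy as np

def peak_dbfs(samples):
    peak = np.max(np.abs(samples))
    return 20 * np.log10(peak) if peak > 0 else float("-inf")

t = np.arange(48000) / 48000.0
tone = 0.1 * np.sin(2 * np.pi * 1000 * t)      # a -20 dBFS test tone
print(peak_dbfs(tone))                          # -20.0
print(peak_dbfs(10 * tone))                     # 0.0: every bit in use
print(peak_dbfs(np.clip(20 * tone, -1, 1)))     # still 0.0: the extra
                                                # 6 dB was simply clipped
```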

Frank Foti, Omnia Audio: Well, sad to say, but I still hear certain radio stations employing digital processors that are known to generate system aliasing distortion. That is an issue to be laid on the shoulders of the processor designer. Likewise, it is best to make sure the entire digital path is set up so that 0 dBFS at the output of the mixing console is also 0 dBFS in every piece of gear along the path. If not, added distortion, both harmonic and aliasing, will result. Additionally, too many sample rate conversions and/or timing/sync issues can cause havoc.

What about streaming audio? What are the problems inherent there?

Frank Foti, Omnia Audio: Streaming should not be taken lightly. It has grown immensely over the past decade, and keeps growing. It offers many options that can aid the listener experience, data-reduced audio aside. While some wish to point to what they might think are the pitfalls of it, we must all remember what the listener is attracted to: content, content and content! Unless it sounds totally unlistenable, they will find it, and connect with it. Not much different than my childhood days of straining through lightning-induced static on AM radio while listening to my beloved Cleveland Indians baseball games.

Bitrate-reduced audio is the challenge. It’s an economy-of-scale issue: folks want to use the least amount of data (bitrate) yet achieve the maximum amount of audio quality. It’s a bit of oil and water, really.

While some view this topic as a dynamics audio processing issue, it’s really more a matter of bitrate. We’ve been able to improve the intelligibility of lower bitrates through processing algorithms, but the key word here is “improve.” The transmission bitrate ultimately determines how much spectrum and fidelity we can offer.

The dynamics processor for streaming has the tools to help keep coding artifacts from developing, or to suppress them from being heard. It’s also important to know that streaming is a different transmission animal, and on that account you must employ an audio processor designed for a streaming application.

Bob Orban, Orban: The advantage of streaming audio is that it can use the latest codec technology, as most of the player devices are software-based, so they can be easily upgraded as the state-of-the-art improves. HE-AAC v2 provides excellent results today, and the new MPEG USAC codec promises a modest improvement even beyond HE-AAC v2 in the future, particularly at bit rates of 32 kbps and below.

The main problem with streaming today is that multicast is not yet widely implemented, so transmission bandwidth is used more and more inefficiently as the number of listeners to a given stream increases. A further practical challenge is commercial replacement and insertion, because the inserted program material may not be properly loudness-controlled with respect to the main program material. Another practical challenge is that rich metadata, providing title, artist, graphics, etc., is well defined in standards documents but is not correctly implemented (or even implemented at all) by all streaming encoders, streaming servers and player devices. It is only a matter of time before this is sorted out, but right now, certain companies handle this much better than others.

Mike Erickson, Wheatstone: It’s as simple as this: if you want your stream to sound the best it can, you should use a processor that is designed for the task. The worst thing you can do is feed a stream from an FM processor or an off-air signal and expect it to sound acceptable. A codec, by definition, needs to find audio to throw away. Audio that has been processed for FM with very tight peak control doesn’t give the codec much choice. A processor that’s designed to be placed ahead of the codec will help that codec make those choices more efficiently, and the final product will be pleasing.

Jeff Keith, Wheatstone: Streaming audio isn’t that much of a challenge as long as one remembers that when processing ahead of a bit-reduction process (codec), care must be taken to leave the codec something to work with. The reason is this: Codecs do their magic by finding opportunities within the audio to remove things that their built-in psychoacoustic model says we humans probably wouldn’t hear anyway. The more compressed, EQed, limited and/or clipped the codec’s input audio is, the poorer the job it can do of removing the things it thinks we wouldn’t hear while still leaving us with decent-sounding audio. To say this another way, once we take those opportunities away from the codec, it’ll make mistakes about what audio information should be removed, and some of those mistakes can be quite audible, if not annoying to listen to.
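A small experiment, offered here as an illustration of Keith’s point, shows why: clipping a single 1 kHz tone scatters energy into harmonics across the whole band, leaving the psychoacoustic model far fewer components it can safely discard:

```python
# Illustration of why hypercompressed/clipped audio starves a codec:
# clipping a pure tone fills the spectrum with odd harmonics.
import numpy as np

fs = 48000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 1000 * t)
clipped = np.clip(3 * clean, -1, 1)

def busy_bins(x, floor_db=-60):
    """Count FFT bins within 60 dB of the spectral peak."""
    mag = np.abs(np.fft.rfft(x * np.hanning(x.size))) + 1e-12
    db = 20 * np.log10(mag)
    return int(np.sum(db > db.max() + floor_db))

print(busy_bins(clean))    # few bins: just the 1 kHz line
print(busy_bins(clipped))  # many more: odd harmonics up to Nyquist
```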

What’s the biggest issue for today’s processor designer?

Bob Orban, Orban: A big practical challenge is the pace of hardware technological change. Electronic components are available for sale for shorter and shorter periods of time before being obsoleted by their manufacturers. Traditionally, broadcast equipment has been supported by its manufacturers for decades, but this is becoming harder and harder to do because of progressively shorter parts obsolescence cycles. Hence, choosing a hardware architecture that can be manufactured and supported long enough to pay back the cost of the processor manufacturer’s R&D becomes more and more challenging.

Frank Foti, Omnia Audio: Imagination! Twenty years ago, Steve Church and I were dreaming of doing a lot of cool adaptive stuff for audio processing. The challenge was the cost of firmware at the time, which made it prohibitive. Today, that issue is gone. We have low-cost computational power and the tools to do just about anything. It’s on us really!

What was the best processor ever? Please, one you didn’t build…

Frank Foti, Omnia Audio: The work of the late Bob Kanner at KRTH(FM) (K-Earth), Los Angeles, and KFRC(AM), San Francisco. These were custom air chains, and they always sounded amazing. Bob’s attention to detail was always an inspiration to me. My mentor, the late Jim Somich, also did some incredible work at WMMS(FM) in Cleveland. I also feel that Mike Dorrough, Glen Clark, Greg Ogonowski and Bob Orban have done some great stuff, too.

Jeff Keith, Wheatstone: Definitely Bob Orban’s 8100A. In my opinion, Bob did an incredible design job on that product, skillfully balancing the natural and immutable tradeoffs of the FM medium. Even today, some 33 years later (as an Orban customer, I bought one of the very first units), that product can be found in all kinds of markets, still doing a very credible job of managing FM’s tradeoffs. Sure, digital processors can do all of it better now, but that’s mainly because some of the things we have to do to create competitively loud and clean audio on FM are either impossible or highly impractical in the analog domain. But the 8100 is still a great box, and it will always be a benchmark in FM processing as far as I’m concerned.

Mike Erickson, Wheatstone: Easy, the Orban Optimod 8100. Twenty-five years after it was designed, and before I was working for Wheatstone, I could still use it competitively against brand-new processors that people were installing. Granted, I had some stuff before it and after, but its predictability and classic sound were always a joy to listen to.

I would give an honorable mention to the Hnat Hindes Tri-Maze. It was a very listenable three-band processor that could be used for FM or set up for AM. WQEW in New York had a pair of Tri-Mazes for AM stereo during their days as a standards station. That was still one of the best AM air chains I’ve ever heard, and by far the best AM stereo chain I ever heard.

Bob Orban, Orban: Answering this question requires false modesty, but I think that most processor manufacturers would reply that the best processor ever is the one that is currently in their R&D lab! I have no nostalgic feelings for my old designs. I have found digital signal processing to be immensely liberating, and the cheaper it has become to implement, the better I have been able to make my designs.

Technology (and cost) aside, what would the perfect processor do and how would it do it?

Mike Erickson, Wheatstone: The perfect processor for me would allow me to mix and match plug-ins from hundreds of different compressors, AGCs, equalizers, limiters or clippers into a purpose-built piece of hardware … So then I could make a digital version of almost anything that has been out there over the past 50 years and quickly and easily study and listen to how all these different ideas would (or wouldn’t) work together!

Frank Foti, Omnia Audio: Perfect audio quality and maximum loudness. That’s the nirvana of every radio station. As for the “how” part, it’s still in development; after all, it’s just a few lines of code!

Bob Orban, Orban: Processing is a mix of art and science, so it is no more possible to answer this question than to say what the perfect painting, sculpture or piece of music would be. Moreover, there are certain things that processors can never do, such as reconstructing information that has been 100 percent lost due to lossy compression, peak clipping, etc. — mathematical singularities will always remain so, and entropy will remain part of the laws of physics.

Because processing should implement the preferences of programming professionals (preferences that usually rest on a combination of audience research and instinct), the perfect processor would read the program director’s mind and adjust itself accordingly. Perhaps in a Star Trek universe such a thing could exist, but that bit of science fiction is still a long way in the future!
