
MicroMPX Is Designed Specifically for FM

An efficient solution for sending the full composite signal to transmitter sites without excessive bandwidth consumption

You may have heard the term MicroMPX without being familiar with what it means. In the ebook “Trends in Codecs 2024” we asked Hans van Zutphen, founder and owner of Thimeo Audio Technology, to explain it.

Radio World: What is MicroMPX and what should readers know about it?


Hans van Zutphen: MicroMPX is a specialized studio-to-transmitter or STL link codec. It transports a full FM composite MPX signal, including pilot and RDS, at a bit rate of only 320 kilobits per second, while ensuring perfect peak control. 

This technology allows for the generation and distribution of signals directly from the studio to multiple transmitters, eliminating the need for separate audio processors and stereo/RDS generators at each transmitter site. If you really need very low bit rates, the more aggressive MicroMPX+ codec can go down to 192 kbps.

We have designed MicroMPX specifically for FM. Making use of the known strengths and weaknesses of an FM signal, it avoids the typical artifacts associated with lossy codecs and it perfectly maintains peak control, even when using composite clipping, thus maintaining the integrity and quality of the broadcast signal.
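For readers who want a concrete picture of what the codec has to carry, here is a minimal Python sketch of a composite MPX baseband signal: the L+R mono sum, the 19 kHz stereo pilot, the L-R difference on a 38 kHz DSB-SC subcarrier and an RDS subcarrier at 57 kHz. The sample rate, injection levels and placeholder RDS bitstream are illustrative assumptions, not values taken from MicroMPX itself.

```python
import numpy as np

fs = 192_000                      # assumed composite sample rate, a common choice for digital MPX
t = np.arange(fs) / fs            # one second of signal

# Stand-in program audio; a real chain would use processed broadcast audio.
left  = 0.4 * np.sin(2 * np.pi * 1000 * t)
right = 0.4 * np.sin(2 * np.pi * 1500 * t)

mono = (left + right) / 2                               # L+R (baseband audio)
diff = (left - right) / 2                               # L-R (to be modulated)

pilot  = 0.09 * np.sin(2 * np.pi * 19_000 * t)          # 19 kHz stereo pilot, ~9% injection
stereo = diff * np.sin(2 * np.pi * 38_000 * t)          # L-R on a 38 kHz DSB-SC subcarrier
bits   = np.sign(np.sin(2 * np.pi * 1187.5 * t))        # placeholder for a real 1187.5 bit/s RDS stream
rds    = 0.04 * bits * np.sin(2 * np.pi * 57_000 * t)   # RDS on the 57 kHz subcarrier

mpx = mono + pilot + stereo + rds                       # the full composite an STL codec must carry
print(f"Composite peak: {np.max(np.abs(mpx)):.2f} (relative to 100% modulation = 1.0)")
```

It is this entire baseband signal, rather than plain left/right audio, that an MPX STL codec such as MicroMPX compresses and delivers to the transmitter.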

RW: What inspired its creation?

Van Zutphen: We have been making FM processing software since 2007. We added a stereo/RDS encoder in our software 15 years ago, and a composite clipper shortly after that. This gives us access to the full signal, so we can take all kinds of things — the stereo pilot, the stereo carrier phase, even the RDS data — into account in our clipper.

This enables us to put more than 2 dB of extra loudness in the audio without clipping more, so the end result is louder, sounds cleaner and has more dynamics. On top of that, the fact that we know the total signal gives us real-time control over the RF bandwidth during FM transmission, which enhances reception quality.
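To make the benefit of seeing the whole signal more tangible, here is a deliberately simplified toy clipper along those lines: the pilot and RDS bypass the clipping path, so their injection levels and phases survive untouched, while the audio portion is limited with a small reserve left for them. The structure and numbers are illustrative assumptions only, not Thimeo’s actual algorithm, which is far more sophisticated.

```python
import numpy as np

def clip_composite(mono, stereo, pilot, rds, clip_level=0.9, reserve=0.13):
    """Toy composite clipper: only the audio portion is limited.

    The 19 kHz pilot and the 57 kHz RDS subcarrier bypass the clipper, so
    their levels and phases are preserved; `reserve` keeps enough headroom
    for them that the summed peak stays at or below `clip_level`.
    Illustrative values only, not MicroMPX internals.
    """
    audio = np.clip(mono + stereo, -(clip_level - reserve), clip_level - reserve)
    return audio + pilot + rds
```

Applied to the arrays from the previous sketch, the result keeps its peak at or below 0.9 while the pilot stays at exactly 9% injection.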

Traditionally, most stations would either send their raw left/right audio to the transmitter sites and perform the processing there, or perform their main processing in the studio and the stereo/RDS coding at the transmitter site. The second approach loses all the benefits of composite processing unless there is a composite clipper at each transmitter and the processing is split between the studio and transmitter sites.

Sending the raw audio to the transmitter site is a good solution as long as the link is lossless. But using a lossy link will often seriously degrade the resulting audio. For an analog link, noise will be raised during quiet audio, and when using a lossy codec, many assumptions that the codec made about what’s audible will be voided by the processing, amplifying codec artifacts. 

In both cases, the input of the processor might sound fine to your ears, but the output can still be affected.

We received more and more requests from customers for an efficient solution to send the full composite signal from a studio to one or more transmitter sites, without excessive bandwidth consumption. Some of our customers would — if they used uncompressed MPX data — be sending more than 4 terabytes of data per day over the public internet, in some cases even over 4G or 5G or satellite links, without even counting backup links. 

Aside from the costs, just the amount of energy that’s basically wasted to send this much data over the internet is extreme.
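Those bandwidth figures are easy to sanity-check. Assuming, purely for illustration, uncompressed MPX at a 192 kHz sample rate with 24-bit samples, one STL link amounts to roughly 50 GB per day, and on the order of 80 such links already reach about 4 terabytes per day; at MicroMPX’s 320 kbps the same link needs about 3.5 GB per day.

```python
SECONDS_PER_DAY = 86_400

def gigabytes_per_day(bit_rate_bps: float) -> float:
    """Daily data volume of a continuous stream, in GB (1e9 bytes)."""
    return bit_rate_bps * SECONDS_PER_DAY / 8 / 1e9

# Assumed uncompressed MPX format: 192 kHz sample rate, 24-bit samples.
uncompressed_bps = 192_000 * 24          # ~4.6 Mbps
micrompx_bps = 320_000                   # MicroMPX bit rate from the interview

print(f"Uncompressed MPX: {gigabytes_per_day(uncompressed_bps):.1f} GB/day per link")
print(f"MicroMPX 320 kbps: {gigabytes_per_day(micrompx_bps):.1f} GB/day per link")
print(f"80 uncompressed links: {80 * gigabytes_per_day(uncompressed_bps) / 1000:.1f} TB/day")
```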

As a company, we try to make at least one thing each year that nobody has done before, and this sounded like an interesting challenge. And our prior work gave us the expertise needed to do this. So we started to work on a codec, with a number of requirements:

  • Preserve audio quality, loudness and peak control.
  • Retain the advantages of composite clipping.
  • Have no effect on FM reception.
  • Reduce the bitrate as much as possible without breaking the other requirements.
  • If the original signal complies with ITU Recommendation ITU-R SM.1268 (a specific RF bandwidth mask), the decoded signal must as well. Most audio processors don’t guarantee SM.1268 compliance (but ours do), and this is the one requirement that we dropped for MicroMPX+ to achieve even lower bit rates.

When we started, we were aiming for bit rates around 600 kilobits per second. But we managed to go much, much lower.

RW: How does the growing use of the cloud influence radio codecs and how they are deployed?

Van Zutphen: With the move to do more things in a central location — be it the cloud or just in the studio building — the setup can be made much simpler and better sounding at the same time. 

I have seen several media groups that run more than 40 stations on a single server, which does everything: processing, watermarking (Nielsen PPM, Kantar, Intrasonics), and generating the full MPX signal with pilot and RDS, which is then streamed to the transmitter sites with MicroMPX. 

With lots of stations with similar content but different ad breaks or local news, running everything on a single system easily keeps all the signals perfectly in sync with each other, and the whole configuration can easily be copied to create backup systems. All that’s needed at the transmitter sites aside from the transmitter is a simple MicroMPX decoder, if the transmitter doesn’t accept MicroMPX directly.
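As a rough sketch of that fan-out, the pattern is one central encoder pushing the same encoded composite stream to every transmitter site, each of which runs a decoder. The addresses, port and payload below are hypothetical placeholders; the actual MicroMPX wire protocol, buffering and error handling are not shown.

```python
import socket

# Hypothetical transmitter-site endpoints; in practice each site runs a
# MicroMPX decoder (or a transmitter that accepts MicroMPX directly).
SITES = [("203.0.113.10", 8601), ("203.0.113.20", 8601), ("203.0.113.30", 8601)]

def fan_out(encoded_frame: bytes, sock: socket.socket) -> None:
    """Send one encoded composite frame to every configured site."""
    for addr in SITES:
        sock.sendto(encoded_frame, addr)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    fan_out(b"\x00" * 960, sock)   # placeholder payload, not a real MicroMPX frame
```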
