
Codecs: What Do Leading Technologists Want?

From design to application, each broadcaster has unique requirements when selecting codec technology

Radio World spoke to a sampling of technical leaders from around the world to find out what features they seek when choosing a codec, ways they implement the technology to improve broadcast quality, and how they envision the codec of the future. Those who replied to the questions below included Qazi Ahmed Mateen, GM operation for FM100, Pakistan; Etienne des Roseaux, technical and production manager for RMC, France; Peter Verhoeven, radio host/producer for Qmusic, Belgium; Andre du Toit, head of technical for Primedia Broadcasting, South Africa; Gary Kline, owner and CEO, Kline Consulting Group, United States; and Masood Amery, president of Afghan Paiwastoon Media Communication, Afghanistan.

Radio World: What do you feel is the biggest trend in codec technology today?

Qazi Ahmed Mateen: MP3, WAV and AAC.

Etienne des Roseaux: I think that IP transmission is an important technology today — how codecs manage public internet issues: lost packets, online remote management, provider restrictions, etc. With the end of the ISDN protocol, codec brands need to innovate to propose the best solutions.

Peter Verhoeven: With an abundance of codecs today being used in everyday communication devices like smartphones, tablets, browsers, wireless speakers and so on, it feels as if consumer and professional use of codecs are drawing closer. Many of these applications now use the same tools to communicate with each other and some of them are open standard, which makes it more interesting for developers worldwide. Take Opus for example, an open and royalty-free codec that excels in quality and has lower latency than other codecs. It’s been used in professional applications, but you can also find it, for example, in WhatsApp on your smartphone.

Andre du Toit: Everything is moving toward IP-based codecs with reliance on high-speed mobile networks.

Gary Kline: I think there are a few tendencies, some of which have been gaining traction for a while. One is the now-commonplace (it wasn’t always) built-in aggregation and redundancy among studio and portable codecs. The ability to merge different cellular carriers, Wi-Fi and wired connections at the same time is now available on most codecs. Some refer to this as “aggregation.” This is a huge step toward reliable and good-sounding broadcasts using IP — more specifically the public internet — as the transport mechanism. Another trend is the capability of most codec models to offer a redundant streaming approach. It’s not just about having simultaneous connections aggregated at the same time but also the codec know-how to seamlessly splice the bits for a very robust connection across any path and in challenging bandwidth conditions. Another development is the continued reduction in size and price, along with more portable form factors. A great example of this is the newest smartphone software packages.

Masood Amery: Today audio codecs offer many advantages to radio broadcasters. For remotes, certainly many strides have been made. IP is a major development.
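The redundant streaming approach Kline describes above can be sketched conceptually. The following is a minimal illustration, not any vendor's actual implementation: a sender duplicates each sequence-numbered packet across every available path, and the receiver splices the streams back together by discarding duplicates, so the broadcast survives loss on any single path.

```python
# Conceptual sketch of redundant streaming over aggregated links.
# The "link" callables are stand-ins for cellular, Wi-Fi and wired
# paths; real codecs do far more (timing recovery, FEC, etc.).

class RedundantSender:
    def __init__(self, links):
        self.links = links    # one callable per network path
        self.seq = 0

    def send(self, payload):
        packet = (self.seq, payload)
        self.seq += 1
        for link in self.links:   # duplicate over every path
            link(packet)

class SplicingReceiver:
    def __init__(self):
        self.seen = set()
        self.audio = []

    def receive(self, packet):
        seq, payload = packet
        if seq in self.seen:      # already arrived via another path
            return
        self.seen.add(seq)
        self.audio.append(payload)
```

As long as at least one path delivers each sequence number, the spliced output stream is complete — which is why bonding two different 4G carriers, as Kline mentions later, can be so robust.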

Radio World: What do you look for when choosing a codec?

Mateen: Audio quality, capacity and clarity.

des Roseaux: When I need to choose a codec, I look for three things: Latency, user interface and quality of audio preamp and circuit.

Verhoeven: Latency and quality (especially at lower bitrates) are the two most important aspects that I look for in a codec. For live applications like a remote interview, low latency is a must. I find nothing more annoying than to participate in a two-way conversation where gaps and unwanted silences tend to make the debate or dialogue really awkward for both the listener and the presenter. I always avoid a remote live interview when the delay is more than 500 ms. That said, I would never sacrifice audio quality for lower latency. Lower bitrates can reduce latency, but then it’s really important to choose a codec that can deliver excellent audio. I love the Apt-X and AAC codecs in that regard and I would love to test the Opus codec mentioned above.

du Toit: I haven’t had much experience with different codecs. We use the Telos Z/IP One, but ultimately we would look for low delay and high quality.

Kline: It depends on what the use case is: general remote, sports remote, studio-transmitter link, IFB, etc. Generally, I look for something with the appropriate form factor (rack-mount, portable, smartphone, etc.) and built-in codec compression choices. I consider budget, density constraints, quantity, purpose (as stated above), ability to talk to other manufacturers’ codecs if needed, input/output options, including AoIP, upgrade capabilities (for future improvements or features), bandwidth aggregation capability, and onboard algorithm options. It comes down to identifying the requirement and choosing the right codec considering cost, value and its ability to meet particular criteria.

Amery: When choosing a codec, we look for ease of use, flexibility, easy export and archiving.

Radio World: Do you prefer to set up a connection to 4G/3G mobile broadband networks using your own modem or connecting to Wi-Fi hotspots or LAN connection available on site? 

Mateen: At FM100, our first choice is LAN, then Wi-Fi and finally 4G. However, it always depends on the broadcast facility’s quality.

des Roseaux: It depends on what we are using it for. For simple usages, such as temporary news commentaries for example, we prefer to connect to a 4G network with a good audio algorithm. For an external live radio show or large event like the Olympics or World Cup, we prefer using dedicated LAN access. We don’t like using Wi-Fi hotspots because of the encapsulation delay, and also due to the fact that the access is open to everybody.

Verhoeven: I never use Wi-Fi hotspots for live applications, as they are mostly capped in speed and bandwidth and not very reliable. It really depends on how mobile you want to be. If you need to run around or hop on the back of a motorbike then 4G/3G is the way to go. But if you’re in a crowded place like at a concert or in a packed stadium where everybody wants to stream the event on his or her phone, I would look for a local LAN connection.

du Toit: When given the opportunity we have found that dedicated fiber gives us the best performance. We have had mixed experiences with 4G due to the connection and speed fluctuating.

Kline: It depends on which of these networks is readily available at the location in addition to the degree of importance of the broadcast. It also depends on whether the use case is, for example, a short-term remote or long-term link to a transmitter site (“nailed up” STL connection). Generally, no matter what the scenario is, and if several bandwidth options are available, I prefer a wired LAN connection first. Then comes Wi-Fi followed by 4G. In a perfect situation, I would simultaneously aggregate LAN and Wi-Fi. 4G is great in many cases — especially now with decent network coverage worldwide — but at large events, it can become a nightmare. That’s because, as anyone who has used IP codecs in the field knows, you are sharing your 4G experience with what could be many thousands of people. Think concerts, large sporting events, large news events, such as an inauguration. So 4G is my last choice but not something I entirely shy away from — especially with aggregation options. I’ve aggregated two 4G connections from different carriers before.

 [Read: Will 5G Deliver for Radio]   

Amery: Here in Afghanistan, there is a lack of knowledge and sources regarding new technologies. The 4G/3G networks are not good in our country, and internet is not great either. However, in my opinion Wi-Fi is better than 4G/3G, and so is LAN, although it’s not available everywhere.

Radio World: When working on remotes, how much do you use IP, and how much do you use more traditional technologies such as ISDN?

Mateen: In Pakistan, IP connectivity is nationwide, while ISDN is mainly metropolitan-specific. Thus, in big cities we primarily prefer ISDN.

des Roseaux: Today we are using IP on remotes more than 45 percent of the time. My goal is to reach 100 percent in the next two years.

Verhoeven: I try to use IP as much as possible. ISDN is gradually disappearing as an option and will be discontinued in the future. Although it was (and actually still is) a reliable choice, it is also a very costly solution compared to AoIP. IP networks are vastly improving and seem the logical pick, but they are still very reliant on the available speed and quality of the connection. Sometimes we use both IP and ISDN, one as main and the other as backup. It really depends on location and budget.

du Toit: We still tend to use the older ISDN technologies as far as possible due to reliability, but there is a growing need from the business for faster turnaround times for remote broadcasts. ISDN lines typically take 10 days from order to installation, so we generally do our bigger events on ISDN and the ones that come up on short notice over IP.

Kline: In my newest design facility in Atlanta, it is all IP. There is no T1 or ISDN available, so we went completely IP for remotes and STL. The STL connections use IP via a landline wired circuit and over-the-air point-to-point microwave. In facilities where both ISDN and IP are available, things tend to lean 75 percent IP and 25 percent ISDN, and that percentage is moving quickly toward all IP. At least in the projects I have been associated with.

Amery: When working in the field, particularly in remote areas, IP is much easier and faster than ISDN, and the truth is that IP is much more available in Afghanistan than ISDN.

Radio World: Which bitrate do you typically use for different types of broadcast (live music, sports commentary, breaking news, etc.)?

Mateen: MP3, 256 kbps.

des Roseaux: At RMC we only have talk programs, no music at all. For all our connections, we use Opus at a minimum bitrate of 96 kbps. For external live shows or big events we usually use 128 kbps.

Verhoeven: Years ago we used Apt-X over ISDN at 256 kbps for its low latency and great quality. We could have a remote conversation with the main studio without the listener ever knowing that we were miles apart. The H.264 encoder I used for my visual radio show last year had a variable video bitrate of around 6 Mbps with AAC audio embedded at 256 kbps. For my daily radio show, which is audio only and broadcast out of Los Angeles to Belgium, the encoder is fed a digital AES/EBU signal and sends lossless PCM audio over a VPN using the public internet at 1411.2 kbps. The delay is under a second, and that’s pretty acceptable. The reason I prefer to use the lossless audio is because of the chain the audio follows after it arrives in Belgium. It travels to a satellite uplink in a lossy MPEG-2 format on its way to the transmitters. Some listeners prefer to listen through the website, which adds another lossy stage to the audio. So the cleaner I can deliver the audio to the mixing board in Belgium, the better.

du Toit: We generally use 64 kbps because we only ever send voice from our OBs.

Kline: I prefer PCM uncompressed for any long-term or nailed-up connection or for stereo music remotes. For sports and talk I generally choose AAC mono, unless I am sending stereo.

Amery: The higher the bitrate, the higher the quality, and the more bandwidth it will require. So, mostly in developing countries like mine, the choice really depends on the project being carried out. With lower bitrates and a bad quality, at least we are still able to reach a majority of listeners, and sometimes that’s more important than airing a high-quality program but reaching fewer people.
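The 1411.2 kbps figure Verhoeven cites is simply the raw bitrate of CD-quality PCM, and the arithmetic is easy to verify: sample rate times bit depth times channel count.

```python
# Uncompressed PCM bitrate = sample rate x bit depth x channels.
# 44.1 kHz / 16-bit / stereo is CD quality.
def pcm_bitrate_kbps(sample_rate_hz, bits_per_sample, channels):
    return sample_rate_hz * bits_per_sample * channels / 1000

print(pcm_bitrate_kbps(44100, 16, 2))  # 1411.2, matching the figure above
```

The same arithmetic shows why lossy codecs matter for constrained links: the 64 kbps du Toit uses for voice is roughly a twentieth of the raw PCM rate.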

Radio World: There is often a tradeoff between latency and error correction/jitter. How important is it to minimize latency? What is an acceptable amount?

Mateen: It is very important to have low latency — less than 20 milliseconds is acceptable.

des Roseaux: Since RMC is a talk radio station, latency is very important. A lot of our guests are not in our studio, so to preserve the quality of our program, we need to have as little latency as possible. With ISDN we had no more than 30 ms. Today with IP, I will accept no more than 150 ms latency. If it is more than that we start to lose reactivity between speakers. Sometimes we have to accept 500 ms to preserve signal integrity because we have too much packet loss. But it’s really very difficult to work when that happens.

Verhoeven: It depends on the content. If you need to do a live interview where both parties are miles away from each other, it’s often preferred to try to avoid the awkwardness of gaps and silences while one party is still waiting for the question to arrive at the other end. It depends on both talking parties and the pace of the conversation, but I prefer to keep the latency under 500 ms. If there is music involved on the casting side, I would always choose quality over latency and increase the buffer or the error correction.

du Toit: Latency is the most important factor for us due to the nature of our OBs.

Kline: For me — in a perfect world — it’s never acceptable to have an IP broadcast that sputters or has dropouts often enough that your listeners notice it. So I choose to use a limited amount of latency as necessary to reduce the risk of a sub-par audio experience. That being said, if I find that I am adding too much latency to overcome a bandwidth issue or perhaps some weird networking problem in a venue, I stop and try to solve the problem at the network side. So for example, if I am having issues with Wi-Fi or LAN in a sporting arena, I will go find the on-site IT admin and work through the issue rather than add too much delay to the codec settings. I realize this is always easier said than done but I think it is best to have a good connection from the start.

Amery: Latency is a measure of the responsiveness of an application; how instantaneous and interactive it feels, rather than sluggish and jerky. In contrast to bandwidth, which is the rate at which bits can be delivered, latency is the time it takes for a single critical bit to reach the destination, measured from when it was first required. This definition may be stretched for different purposes depending on which part is “critical” for different applications. Mostly, I like to keep the latency higher and increase it even more if the connection is breaking up.

[Read: Malaysia’s Astro Radio Takes Virtual Approach]

Radio World: Packet loss can cause significant audio dropouts, and packet loss is not uncommon in connections over the public internet. How much is too much?

Mateen: Anything more than 2 percent tells us that there is a problem. 

des Roseaux: It’s too much when we start to have audio dropouts. In those cases, we have to increase the latency. We can only accept this solution for small news commentaries. For radio live shows, we need to be reactive, with as little latency as possible.

Verhoeven: I prefer zero tolerance for dropouts. The level of compromise you make in either latency or audio quality depends on your content. Check your internet or connection speed before you commit to any job in the field. If you need to do a voice-only remote interview or report where a small delay is important for communication purposes, I would say to sacrifice bitrate and sound quality. If you have to stream music or content with high-quality audio, I would suggest increasing the buffer size and thus also the latency, so you can keep a better bitrate and quality.

du Toit: On the Z/IP codecs that we use, the buffers compensate for a small amount of packet loss. Packet loss is acceptable up to the point of audio interruptions.

Kline: That depends on the nature of the broadcast. Is it a four-hour football remote? Is it a two-hour client remote with a handful of two-minute breaks? Or is it a 24/7 nailed-up STL link? If it is a four-hour non-stop football remote, then there may be no margin for error — no clicks or dropouts allowed. Would you even allow one “pop” of audio during the Super Bowl with millions of people listening? For a short single client remote with a few quick DJ breaks, an occasional “pop” that might not even make it on the air might be OK. It also depends on the bandwidth options available. If only 4G is available inside a building and the remote is only for a few minutes, and it has to get on the air, then you tweak your latency/buffer settings (these can be automatic) and do the best you can.

Amery: In most cases, I carry out network performance troubleshooting to find if the problem is related to packet loss or excessive latency. Packet loss is literally when you do not receive a packet. This can be caused by a variety of factors, such as RF interference, dirty fiber connectors, oversubscribed links and routing issues.
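The buffer-versus-latency tradeoff running through these answers reduces to simple arithmetic: each extra frame held in the receive jitter buffer rides out one more late or lost packet, but adds one frame's duration of delay. A rough sketch, assuming a 20 ms frame size (a common setting for codecs such as Opus; other codecs use different frame lengths):

```python
# Added delay from a receive jitter buffer: depth (in frames)
# times the codec frame duration. 20 ms is an assumed frame
# size here, typical of Opus; actual values vary by codec.
FRAME_MS = 20

def buffer_latency_ms(depth_frames, frame_ms=FRAME_MS):
    return depth_frames * frame_ms

for depth in (2, 5, 25):
    print(depth, "frames ->", buffer_latency_ms(depth), "ms")
```

A five-frame buffer, for example, absorbs about 100 ms of jitter at the cost of 100 ms of delay — already close to the 150 ms ceiling des Roseaux cites for conversational radio, which is why he would rather fix the network than keep deepening the buffer.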

Radio World: Is it important that a codec continually attempt to reconnect if the connection is inadvertently dropped?

Mateen: Not really. But it depends on the scenario.

des Roseaux: Today, most codecs have an auto recall option. For us it’s essential because 80 percent of our connections are made by a journalist alone. As he is not a technician, the codec needs to be in auto recall mode.

Verhoeven: I think it is. In some cases it’s not possible to physically monitor the encoder or decoder. Sometimes the hardware device is located in a tech room maybe on a different floor and there is no time to have a technician run over to it to try a manual reconnect.

du Toit: Our codecs are set up to auto-reconnect for up to five seconds, but we always have a broadcast engineer onsite monitoring, and they will intervene if necessary.

Kline: For nailed-up STL connections I always set the modem to reconnect automatically. For anything else, it depends on the situation.

Amery: Yes, it is very important for a codec to continuously attempt to reconnect if the connection or signal is dropped. Otherwise the work needs to be taken from the top, and that takes time.

Radio World: Do you prefer working with a desktop or rack-mounted codec?

Mateen: Rack-mounted.

des Roseaux: I prefer a rack-mounted codec. It’s simpler for a journalist to use and it’s a dedicated device for live broadcasts.

Verhoeven: In professional situations I have always worked with hardware codecs in rack-mounts, but I feel — with the huge popularity of streaming content and podcasts — that desktop codecs and streaming apps are gaining significantly in market share.

du Toit: We work with rack-mounted codecs only, kept in flight cases for better durability and quicker setup time. Our base units are mounted in climate-controlled environments.

Kline: If it is in the studio, I always prefer rack-mount. If in the field, usually portable (desktop). Unless it is located in a “remote kit,” where there is a portable rack with some other gear in it. Often these are used for sports remotes or larger remotes. Everyone has their own preference on this.

Amery: Personally, I prefer rack-mount, since it provides more stability for my requirements. But, for many, both are acceptable.

Radio World: How important is it to be able to get remote access to the codec while it is in use? For example, do you want to be able to make changes in its configuration even after the remote broadcast has started?

Mateen: Absolutely. It is very convenient to be able to have such an option, and also be able to configure the codec while broadcasting.

des Roseaux: Today it’s really important to get remote access to the codec. A good IP connection depends on a lot of network presets, and a wrong preset can break it. It’s too complicated for reporters to configure their device, and it’s not their job to do so. That is why I like to have remote access to manage the control.

Verhoeven: In my opinion it is extremely important to have remote access to the codec. As mentioned before, it’s not always possible to have a technician available when things go south. If you have sufficient knowledge about what you’re doing and the device itself or the software doesn’t adjust automatically it must be possible to manually correct latency or quality of the connection. Or even reset the codec if needed.

du Toit: This is very important to us as we can monitor the status in real time and make any configuration changes if necessary.

Kline: I would say it is important to always have that capability. Commonly, for station remotes, there is a remote technician (or DJ) who is responsible for setting things up at the far end. This person may or may not be a codec expert with in-depth knowledge of every setting in every menu, but he or she certainly knows enough to connect and to change bitrates or algorithms — things that can fix common problems. And for situations where they can’t figure out how to resolve an issue outside of the studio, I use remote access to make and disconnect connections, change algorithms, update firmware, etc.

Amery: This is a very good question because it’s extremely important to be able to change codec configuration during projects as required.

Radio World: How important is N/ACIP compatibility? Do you ever connect different brands of codecs to one another?

Mateen: Any new device that has N/ACIP compatibility would be a plus. Our station has not connected different brands together thus far.

des Roseaux: It’s rare for us to connect different brands of codecs to one another. But when it happens, N/ACIP compatibility affords us the possibility to easily connect two different brands together.

Verhoeven: Yes, I think it’s very important, and we do use different brands. I understand that some manufacturers want to protect their name by implementing exclusive protocols, but on the other hand in this day and age it’s all about ease of use and interchangeability in a fast-paced working environment. Sometimes in the field you land in unforeseen circumstances where you need to improvise, and if you have a brand that instead limits possibilities and slows down your workflow, you’ll be thinking twice about what to use on your next assignment.

du Toit: We always connect Telos Z/IP to Telos Z/IP.

Kline: Not that often. I think it makes sense and is important to have as an option — especially as a traveling remote engineer who carries one type of codec and connects to lots of different studios. Or for a studio that owns one type of codec and has regular special guests from out of town, who then need to connect to their home base, which may have a different codec. But again, personally, I don’t do it often.

Amery: For the moment we haven’t tried to connect two different brands of codecs but I am sure we will in the future.

Radio World: Do you think codecs will remain a physical unit or will they be replaced by software applications, which are integrated into smartphones, tablets, etc.?

Mateen: It depends on the environment. For example, a small setup could do with a software application, but a larger broadcaster generally requires good hardware if it can afford to invest in it.

des Roseaux: In my opinion, codecs need to be a physical unit for two reasons: Firmware stability and good audio circuit interface. A software codec needs to be installed on a desktop and operating systems are never stable enough.

Verhoeven: I think eventually it’s inevitable. We are not far away from a complete streaming radio studio inside your phone. It may be already possible today. Next thing you know you’ll be making a complete show from your smartphone while sitting on a bus with an elderly lady with groceries next to you.

du Toit: I think there will always be room for both, but the technology is already being integrated into smartphones, tablets, etc.

Kline: Physical units and software applications have coexisted for many years. You can choose either and even cross-connect them (smartphone to physical unit). This is standard practice. Do I think one will ever replace the other completely? No.

Amery: Using software codecs on smartphones and tablets certainly simplifies the task and eliminates the need for additional devices when managing remotes. But for us that is still costly. So hopefully in the future, prices will decrease.