
Codecs Must Continue to Adapt to the Times

Gladding expects more multichannel support and further integration into other platforms

Andrew Gladding is a radio broadcast engineer, interviewer, DJ and music producer. He is chief engineer for Salem Media in New York, which operates WMCA(AM) and WNYM(AM), as well as the chief of WRHU(FM) at Hofstra University and an adjunct instructor there. This interview is from the Radio World ebook “Trends in Codecs 2024.”

Radio World: How will the development of virtualization and of software-integrated air chains change how engineers deploy codecs?

Andy Gladding: As broadcast hardware becomes more dependent on software platforms, codec solutions have to follow suit. Codec packages that can live on PC devices and seamlessly integrate with other hardware audio products will expand reach and access for a variety of broadcast opportunities. It will also open new opportunities for collaboration within station frameworks.

RW: Given advances in audio coding and wireless IP, what improvements can yet be made in the quality of audio from the field?

Gladding: AAC+ seems to be the best codec for transport in terms of quality and overhead. The bit usage is lower than standard streaming codecs, while the format can support higher-quality stereo audio streams.

Andy Gladding on a visit to San Francisco’s Hyde Street Studios.

I would like to see more multichannel audio codec packages integrated into software and hardware solutions. This will make point-to-point audio transport easier between facilities that have greater technical demands. Multichannel transport options would support a range of services, from traditional STL-type utilities to IFB, backhaul of multiple satellite receivers from transmitter sites to the studio and general network audio sharing.

It would also be nice to see codec platforms that can service multiple locations within the same device or software suite. 

RW: How will the growing use of the cloud influence how codecs are deployed?

Gladding: Cloud-based audio transport is a real game-changer for our industry. Radio and TV stations are no longer tied to costly P2P telephony services such as ISDN, which have metered channels. Cloud-based transport also encourages collaboration between organizations at great distances, meaning international stations can seamlessly integrate and share production. The opportunities are endless when it comes to cloud-based audio. 

RW: Have you done a recent installation that you can tell us about? 

Gladding: When I assumed engineering responsibilities for Salem Media of New York, I did a major reimagination of our existing codec infrastructure. Dedicated Verizon dry Ethernet lines made it possible to eliminate the legacy T1 services, which resulted in cost savings, greater reliability and more transport options on a single circuit, as these lines have scalable bandwidth that far surpasses what T1 can offer.

I was also able to diversify our connectivity across different circuits, which provided relief from service outages and potential system failures. 

The New York plant has enjoyed expanded codec services, reliability and content distribution while realizing significant savings in telecom costs. 

RW: What long-lasting changes did Covid bring to codecs, given the impact it had on the industry at the time?

Gladding: Covid required a rushed shift of operational procedures to accommodate remote broadcasting and home-to-studio transport. Broadcasters were lucky that the hardware and systems were readily available in 2020, which meant there was a plethora of options that could be implemented with relatively quick install times.

Affordable solutions, such as QGOLive, provided an economically sustainable entry point for broadcasters, especially educational systems that needed to accommodate large numbers of personnel at diverse locations. Being able to rely on end-user IP services meant stations could minimize costs associated with remote access while still maintaining access to station services for members. 

Covid also acted as a proving ground for codec technologies and caused vendors to get creative with add-on services that expanded their suite of products to accommodate the shift in broadcaster technical needs. 

RW: How do today’s codecs avoid problems with dropped packets?

Gladding: Most hardware and software codecs provide ample buffering layers and error correction to deal with dropped audio packets and frames. They also provide clocking to maintain transport position and timing. Codecs with multiple NICs can also use diverse networking to move audio from point to point, giving them a designated backup layer to handle primary connection failures. This combination of protocols, along with a variety of codec formats and sample rates, results in greater configurability and customization for individual system requirements.
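The buffering-and-reordering idea Gladding describes can be shown with a toy sketch. This is a conceptual model only, not any vendor's implementation; the `playout` function, the sequence-number tuples and the `"~"` concealment frame are all invented for illustration:

```python
import heapq

def playout(packets, total):
    """Reorder received packets by sequence number and substitute a
    concealment frame ("~") for any packet that never arrived.

    packets: iterable of (sequence_number, payload) in arrival order
    total:   number of frames the decoder must emit
    """
    heap = list(packets)
    heapq.heapify(heap)          # the jitter buffer reorders by sequence number
    out = []
    for seq in range(total):
        if heap and heap[0][0] == seq:
            out.append(heapq.heappop(heap)[1])   # packet arrived (possibly late)
        else:
            out.append("~")      # lost packet: error concealment fills the gap
    return out

# Packets 1 and 0 arrive out of order; packet 2 is lost entirely.
print(playout([(1, "b"), (0, "a"), (3, "d")], 4))   # ['a', 'b', '~', 'd']
```

Real codecs conceal lost frames with interpolated audio rather than silence, and size the buffer dynamically against measured jitter, but the trade-off is the same one Gladding notes below: a deeper buffer rides out worse networks at the cost of added latency.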

RW: What is considered reasonably low latency now?

Gladding: Latency of 250 to 350 ms seems to give the best performance while minimizing latency effects in duplex conversations. As codecs improve and connectivity speed and bandwidth increase, this can probably be reduced even further. I have used codecs with higher buffering rates in duplex situations, specifically when hosts at remote sites want to interact with caller audio routed to the station; however, this does create some frustration for both the caller and the host.

RW: What considerations should be taken into account to allow talent to do shows using their phones?

Gladding: Hosts often have a variety of internet connectivity solutions at their home or mobile studio. Many of these connections are unreliable, and their available overhead shifts depending on the host's physical location.

Stations need to be aware of the limitations and risks of having the host talk channel originating from a consumer device. Station engineers and operations managers should also be prepared to educate the host on the best practice for using their personal device to execute their broadcast. 

Devices that are overloaded with other apps or are connected to unreliable data services can be catastrophic to programming and delivery. A site survey should be performed ahead of time so that the station and host have a complete picture of the transport system.

Stations should also look for the best possible audio solution for the host when it comes to microphones and headphones and make sure they will interface with the device and app. Bluetooth devices can sometimes exhibit unwanted effects when working with codec apps, so testing and certification should be done ahead of the broadcast to ensure the station is transcoding the audio with the best possible blend of quality and reliability. 

RW: What tools are available for sending audio to multiple locations at once?

Gladding: I have been experimenting with DIY Icecast servers recently and have been getting some pretty terrific results. Icecast and Shoutcast servers are relatively easy to set up and can operate at various levels of sonic quality with minimal machine overhead. Both hardware and software encoders and decoders can interface with Icecast and Shoutcast, which can open up new avenues of transport that can handle a variety of user needs.

An Icecast server can serve as a great listen line, which can eliminate the need for POTS services and hybrids, reducing overhead. Icecast can also be a practical and inexpensive method to feed multiple decoders at transmitter sites for backup STLs, as these servers can be set up on a variety of networks, including MPLS and point-to-point connections. This eliminates the need for proprietary point-to-point encoder and decoder hardware to service each location.
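As a rough illustration of how light the setup can be, here is a minimal `icecast.xml` sketch for a single listen-line mountpoint. The hostname, passwords, mount name and listener limit are placeholders, not values from the interview:

```xml
<icecast>
  <hostname>stl.example.org</hostname>
  <listen-socket>
    <port>8000</port>
  </listen-socket>
  <authentication>
    <source-password>CHANGE-ME</source-password>
    <admin-user>admin</admin-user>
    <admin-password>CHANGE-ME</admin-password>
  </authentication>
  <mount type="normal">
    <mount-name>/listenline</mount-name>
    <max-listeners>10</max-listeners>
  </mount>
</icecast>
```

A hardware or software encoder connects to port 8000 with the source password, and any number of decoders or players then pull the stream from the mountpoint URL, which is what makes the one-encoder-to-many-decoders backup-STL arrangement described above possible.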

RW: What questions should an engineer ask when considering solutions for large-scale distribution?

Gladding: Treat the codec solution with the same approach you would any other capital project. You need to look at the IP services available to determine the best solution, taking into consideration bandwidth and cost.

You also need to determine the reliability factor. A remote broadcast or primary STL would have a higher reliability need than something like a listen line or monitoring service. 

Finally, not all codec software and hardware packages are created equally. Audio quality needs vary greatly depending on application. An STL for an FM station may require full-bandwidth audio but could accommodate more latency if bandwidth is an issue. A remote broadcast for an AM station could trade stereo channels for an Opus mono solution to increase reliability with lower latency.

Conversely, the engineer may choose to drop quality in favor of preserving stereo if the second channel is needed for something like IFB / producer communication or secondary audio backhaul. Some transmitter sites have multiple satellite receivers, so in this scenario, the engineer may choose a multichannel codec, such as a Moseley Rincon, to accommodate the additional satellite backhaul feeds.

Multichannel transport can also be a consideration if you want to send multiple audio feeds to one transmitter site, and then use that site’s antenna as a UHF relay to feed the second station’s program audio to another transmitter site. The multi-channel codec could then be used to backhaul the OTA tuner audio for multiple stations back to the studio, if OTA reception is not available at the studio due to distance or structural limitations. 

RW: What about steps to create redundancy?

Gladding: Multiple network layers, along with primary and backup hardware, are a must, especially for high-revenue audio pathways. When using codec solutions as STL, bear in mind that OTT circuits are not impervious to congestion or failure. Even the most thought-out network will still suffer from downtime periodically. Therefore, redundant hardware and services, especially across different carriers, systems and modes, should be built into the system.

RW: How about protecting codecs and the related infrastructure from cyber attacks? Are firewalls and VPNs available in codecs?

Gladding: Point-to-point circuits, such as the Verizon E-line, can minimize the possibility of cyber attack, since this is a closed network not open to the outside world. When using an OTT network to deliver audio, passwords on the codec devices should be set up and changed periodically, along with any other hardware that is public-facing. Setting up a robust VLAN, VPN and firewall with a higher-caliber router will also minimize threats of attack. 

At this point, I would leave the firewall and VPN to network purpose-built hardware. Codecs should be tasked with transporting audio. Building in and relying on these features within the codecs could create a host of issues, which can ultimately affect network reliability and security. 

RW: What misconceptions do people have about codecs? 

Gladding: People don’t completely understand how to configure codecs for specific purposes, so they assume that audio quality will suffer when switching from a T1 or analog line to a codec. This is simply not true. Compressed audio formats, when configured properly, can stand alongside PCM circuits, delivering equal quality with lower overhead.

RW: Last, Andy, how do you think codec design will change in the next five years?

Gladding: I anticipate seeing codec solutions more widely deployed onto PC and Linux platforms, with greater direct integration into radio automation and processing / DSP software packages. Virtualization of codecs will be extremely powerful, especially as cloud-based automation and containerized audio becomes more popular. 
