The free Radio World ebook “Getting Data Up the Hill” explores trends in studio/transmitter links. This is an excerpt.
William Harrison, CSRE, DRB, CBNE, is chief engineer of WETA(FM) and co-chair of SBE Chapter 37 in Washington, D.C. He has done work for clients including PBS, NPR, The Metropolitan Washington Ear and several FM and TV broadcasters.
Radio World: What would you say is the most important recent trend in how stations are delivering audio and data to their sites?

William Harrison: It is how broadcasters have embraced internet protocol, specifically via the public internet. It has become the default delivery mechanism, as opposed to the exception.
In the past, we relied on specialized systems to multiplex audio, control and even an extended POTS line over dedicated circuits like ISDN, T1, microwave or even fiber. These days it seems we’re sending even more things over commodity services, via copper, cable, fiber and low earth orbit satellite. That translates directly to savings, especially when you consider the cost of installing a dedicated microwave system.
RW: What role do satellite internet-based systems like Starlink, Telesat, OneWeb and others play now?
Harrison: This is something I’ve been looking into quite a bit. Services like Starlink are attractive for use as an STL, especially in areas where options are limited at the transmitter. The cost of purchasing and installing a typical microwave system often can fund the usage charges for Starlink for several years.
Taken to the next step, where Starlink terminals are installed at both the studio and transmitter, you gain another layer of redundancy. The STL no longer traverses the public internet; it goes from your studio to the satellites, then down to your transmitter. It’s not quite a private link, but it’s not a public one at that point either.
On the other hand, you don’t get control of when those terminals update, which happens often and can take several minutes. If you rely solely on something like Starlink for your STL, you’re going to have to plan carefully to ensure that those updates don’t take you off the air.
RW: Some stations now operate some of their air chain in the cloud — a few use no studios at all. What are the implications for STL?
Harrison: It depends. Running an air chain in the cloud can be fine, if it has been designed in a way that provides redundancy and graceful degradation. If your implementation in the cloud can’t be switched from one data center to another, what do you do when your data center has an issue? Likewise, if you’re tied to one vendor’s infrastructure, what happens when that vendor has a problem?
Standardization is your friend here. Implementing a configuration that can migrate between data centers and even providers is going to result in the lowest risk, but some risk will always remain.
RW: Many transmitter sites still do not have cell or hardwire connectivity. What options do these stations have?
Harrison: Stations have been dealing with this issue for a very long time. The old 950 MHz microwave systems were often unidirectional, in order to get audio from the studio to the transmitter; hence the name STL. Sometimes you would see a second, discrete system installed to create the reverse path, appropriately named TSL, which typically wasn’t absolutely required for operation and provided significantly less bandwidth.
A station running only analog obviously has very different requirements than a station running several HD channels, or subcarriers, or complex systems for monitoring and control. Situations vary.
Regardless, there are usually options to be had, with LEO satellite internet being a popular one right now. Another I’ve seen is to locate a nearby site that does have connectivity and use an inexpensive unlicensed link to bring it to your transmitter.
I know of an installation where the local American Legion hall could get internet, and the engineer happened to be a member. He suggested that he pay for internet for the hall in exchange for being able to install a small 2.4 GHz dish on the building to extend it to his tower. They’ve been operating that way for several years now.
RW: Many transmitter sites lack substantial IT infrastructure. What are the implications?
Harrison: Again, situations vary. Some transmitter site LANs are air-gapped from any network; others are tunneled back to the studio, and the IT department has full authority over anything that connects to it. This can be both a good and a bad thing.
One of my sites has extremely limited network connectivity, and when the IT department pushes major updates to the computers there or scans for vulnerabilities, it typically results in audio issues because the network simply can’t keep up.
The solution? Better cooperation between engineering and IT, to schedule updates and scans, and ensure that backup systems are in place prior to initiating them. And it never hurts to bring one of the IT folks to the site so they can understand that it isn’t an office building with tons of resources, as they might think.
RW: Are there tips you recommend to harden or prepare station audio for delivery over today’s links?
Harrison: Redundancy, redundancy, redundancy. I look for backups to the backups I already have in place. Often I see situations with several paths for the audio to get to the transmitter, but all going through a single device, such as an IP codec that can stream splice, but only has a single power supply. That’s a recipe for disaster — audio is arriving from several paths, but there’s no way to get it on air.
I also believe in applying the same logic to those systems that the IT folks apply to their backups: Test them. If you aren’t actively monitoring your backup links, you don’t know if they are working — or will work when you need them to.
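Harrison’s point about actively testing backup links can be sketched in a few lines of Python. This is a minimal illustration, not any station’s actual monitoring setup; the host addresses and port are hypothetical placeholders you would replace with your own codec or gateway endpoints.

```python
import socket

def link_alive(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical backup-path endpoints -- substitute your own addresses.
BACKUP_LINKS = {
    "starlink-codec": ("192.0.2.10", 9000),
    "microwave-ip": ("198.51.100.20", 9000),
}

def check_backups(links=BACKUP_LINKS):
    """Probe each backup link and return the names of any that are down."""
    return [name for name, (host, port) in links.items()
            if not link_alive(host, port)]
```

Run on a schedule (cron, or a monitoring system) and alert on any non-empty result, and you know your backups work before you need them.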
RW: It’s about ensuring continuity of operations.
Harrison: Personally, I’m a big fan of graceful degradation. Redundant links don’t necessarily need the exact same specs as the primary.
For example, uncompressed stereo audio is ideal, but in the event it isn’t available, I’d rather be on the air with compressed program audio in mono than be silent. This is one of the reasons I’m such a fan of adaptive HLS. It can change the quality of audio on the fly as network conditions change. Essentially, the audio is encoded at multiple qualities, and the endpoint asks for a specific one. Depending on what happens — packets arriving late, out of order, missing, collisions etc. — the endpoint can ask for a lower-quality version. Once network conditions improve, the endpoint can ask for higher quality again.
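The adaptive behavior Harrison describes — the endpoint requesting a lower- or higher-quality rendition as conditions change — can be sketched as a simple selection rule. The bitrate ladder and the 80% headroom factor here are illustrative assumptions, not values from any particular encoder or player.

```python
# Hypothetical bitrate ladder, lowest (mono, compressed) to highest.
RENDITIONS_KBPS = [32, 64, 128, 256]

def pick_rendition(measured_kbps: float, headroom: float = 0.8) -> int:
    """Choose the highest rendition that fits within a safety margin of the
    measured link throughput; fall back to the lowest if none fit, so the
    station degrades gracefully instead of going silent."""
    usable = measured_kbps * headroom
    candidates = [r for r in RENDITIONS_KBPS if r <= usable]
    return max(candidates) if candidates else min(RENDITIONS_KBPS)
```

A congested link measured at 100 kbps would drop the endpoint to the 64 kbps rendition; once throughput recovers, the same rule climbs back up the ladder.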
RW: Is that what your talk at this spring’s Public Radio Engineering Conference was about?
Harrison: I gave a presentation about “non-traditional STLs.” For the most part, engineers are taught that an STL requires sufficient bandwidth for linear, uncompressed audio, but failing that, we choose a compromise of compression and bandwidth, and that becomes the new requirement. And as stations went from just an analog audio feed to adding RBDS, HD channels, PAD, PSD and control, we ended up using a bunch of different systems on different paths to get everything to the site: audio over a microwave; metadata over a serial link; PAD and PSD over public internet.
That makes it next to impossible to sync up. Enter HTTP Live Streaming, or HLS. As you encode, everything becomes packetized, so the audio plays out in sync with the appropriate metadata and control. You can even embed commands in the HLS stream, telling a receiver to switch to a different stream, or initiate an EAS test. And it adapts to changing network conditions, so if your link suddenly becomes congested, you still end up with audio, just at a lower bitrate.
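The multi-quality structure Harrison mentions is expressed in an HLS master playlist, where each rendition is listed with its bandwidth so the receiving endpoint can choose. The following is a minimal illustrative fragment with assumed bitrates and paths, not a production configuration:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=256000,CODECS="mp4a.40.2"
high/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=64000,CODECS="mp4a.40.2"
low/playlist.m3u8
```

The player fetches this master playlist once, then switches between the `high` and `low` media playlists as network conditions change, which is exactly the graceful degradation behavior described above.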
Big players like Netflix and Hulu invested millions in this technology and have been using it for years. Now broadcasters can take advantage of all that R&D and adapt it for our own use.
RW: How can we secure the “pipe” against bad actors?
Harrison: Well, obviously the most secure pipe is the one nobody has access to, like a dark fiber. But those are difficult to come by. The next best thing would be to ensure that you have proper security at both ends and are encrypting your traffic through a tunnel. Verify you are only exposing the absolute minimum possible to the rest of the world, and make certain that you keep up to date with firmware, zero-day exploits and other advisories. And finally, make backups and documentation. Nothing’s worse than needing to replace a network device, whether from failure or compromise, and having no idea how the previous one was set up, so you have to start from scratch.
RW: Should audio be processed differently based on the type of STL infrastructure?
Harrison: This is a great question and difficult to answer. In certain circumstances, yes, the audio should most definitely be processed for the link. Some links simply have narrower bandwidth than others, so it’s a given that there needs to be some way of compensating for that fact. Not every link can handle full-fidelity linear audio, especially when trying to send multiple signals to a site.
This also comes into play as you consider backup systems, and the station might have to decide what takes priority. For example, consider a station with two channels, main and HD2, with primary delivery over an IP codec. In the event of the failure of the codec or the ISP, there might be a backup microwave system. If it’s an IP-based system, perhaps the signals need to be compressed in order to fit the available bandwidth. But if it’s an AES audio microwave, which channel is more important to use it for? Or would there be a preference to keep both up, but in mono instead of stereo?
RW: And what is the role of MicroMPX in this discussion?
Harrison: MicroMPX is great for bandwidth-limited links. We don’t live in an ideal world with unlimited capacity, and it truly helps to keep things sounding good without requiring major upgrades to infrastructure. I’m also a fan of how it can be adjusted to use more or less bandwidth, depending on the circumstances of the link. Just as with audio processing, using MicroMPX can help significantly in a situation where a primary link has failed, and a reduction in bandwidth is the only way to keep multiple channels on the air.