
This is part of a series on streaming best practices. David Bialik is director of engineering for MediaCo in New York City. He is co-chair of the Broadcast and Online Delivery Technical Committee for the Audio Engineering Society and chair of the Streaming and Metadata Usage Working Group for the National Radio Systems Committee. He has spent decades researching and supporting broadcast streaming workflows.
Radio World: David, what do you see as the most important issue in streaming for radio?
David Bialik: One of the most important issues now getting serious attention is loudness control. As streaming has evolved and stations are monetizing it by injecting advertising from outside sources, loudness control becomes very important. You don’t want listeners reaching for the volume knob because the ads are 3 to 6 dB louder than the program content.
RW: What standards govern streaming loudness and what are the recommended levels?
Bialik: Loudness control has been addressed in AES TD1008, AES71-2018 and AES77-2023. These recommendations are now standardized: For a video stream, the loudness level should be –24 LKFS. For a music stream, it is usually between –16 and –17 LKFS.
If you’re injecting your commercials at that level, people will not reach for the volume knob.
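To make the arithmetic concrete, here is a minimal sketch of the gain correction implied by those targets. It assumes the loudness value has already been measured by a BS.1770-compliant meter (such as the Youlean or Orban tools mentioned below); the simple subtraction here is not itself a loudness measurement.

```python
# Minimal sketch: compute the gain needed to bring a measured program
# loudness onto a streaming target. The measured LUFS value is assumed
# to come from an ITU-R BS.1770-compliant meter (K-weighting, gating);
# this code only derives the correction from that measurement.

MUSIC_STREAM_TARGET_LUFS = -16.0   # typical music-stream target cited above
VIDEO_STREAM_TARGET_LKFS = -24.0   # video-stream target cited above

def correction_gain_db(measured_lufs: float, target_lufs: float) -> float:
    """dB of gain to apply so the program lands on the target loudness."""
    return target_lufs - measured_lufs

def db_to_linear(gain_db: float) -> float:
    """Convert a dB gain to a linear multiplier for audio samples."""
    return 10.0 ** (gain_db / 20.0)

# Example: an ad measured at -10 LUFS against a -16 LUFS music stream
# needs 6 dB of attenuation (a linear multiplier of roughly 0.5).
gain = correction_gain_db(-10.0, MUSIC_STREAM_TARGET_LUFS)
print(gain)                  # -6.0
print(db_to_linear(gain))
```

An ad injected 6 dB hot is exactly the “reaching for the volume knob” case: a correction of –6 dB brings it back in line with the surrounding music.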
RW: Are there free resources or monitoring tools available for loudness control?
Bialik: Yes, the TD1008 paper is free on the AES website. Additionally, there are free software monitoring devices like the Youlean loudness meter or the Orban loudness meter, both of which work very well.
RW: Should a station’s over-the-air and streamed audio be processed alike?
Bialik: Absolutely not. You have to process a stream differently than your over-the-air broadcast because the parameters are different. Over the air you can push above zero on the meters and get away with being louder, but in a digital stream you don’t want to go above zero, because the audio clips. Both should be processed to the point where they sound good and you can recognize the instruments being played, without over-processing.
RW: When does latency become a major concern for streaming, and what are the best practices for handling it?
Bialik: It depends on which codec you are using and your audio chain. A lot of stations these days are using HLS and AAC. Very few are using MP3 anymore.
If you’re just streaming music, latency is not as much of a concern. However, you want low latency if you’re streaming sports or any type of talk where listeners are interacting, like call-in radio. For sports, people often watch the television but listen to the local announcer over the stream, and a significant delay makes this impossible.
To cut down potential latency, many audio professionals are following the lead of television streaming, which has developed low-latency methodologies — like the CMAF segment format — because of the demands of sports coverage.
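A rough back-of-the-envelope model shows why segment length dominates streaming delay. This sketch assumes a simplified player that buffers a fixed number of whole segments behind live; the numbers are illustrative, not drawn from any particular encoder or player.

```python
# Rough sketch of segmented-streaming latency. Model: the player sits
# N whole segments behind the live edge, plus a fixed encode/package
# delay. This is a simplification; real players and CDNs vary.

def stream_latency_s(segment_duration_s: float, buffered_segments: int,
                     encode_and_package_s: float = 2.0) -> float:
    """Approximate glass-to-ear delay for a segmented stream, in seconds."""
    return encode_and_package_s + segment_duration_s * buffered_segments

# Classic HLS: 6-second segments, three buffered -> about 20 s behind live.
print(stream_latency_s(6.0, 3))
# CMAF-style low latency: 1-second chunks, two buffered -> a few seconds.
print(stream_latency_s(1.0, 2))
```

The gap between those two figures is why a fan watching the TV picture while listening to the local announcer cares so much about CMAF-style delivery.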

RW: How does ad insertion relate to streaming workflows, and what resources are available for managing it?
Bialik: Ad insertion is often controlled by metadata. You need to have your metadata clean so that your systems will call up the right ad at the right time. Nowadays, metadata even has the ability to control loudness.
The comprehensive resource for this is the NRSC-G-304, the Metadata for Streaming Audio Handbook, which is available as a free download.
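To illustrate the idea of metadata driving ad insertion, here is a sketch of the kind of cue an insertion system might act on. The field names below are hypothetical, invented for illustration, and are not taken from NRSC-G-304; consult the handbook for actual schemas.

```python
# Illustrative only: a JSON-style metadata cue that an ad-insertion
# system might consume. Field names here are hypothetical and NOT
# from NRSC-G-304 -- see the handbook for real-world schemas.
import json

ad_break_cue = {
    "event": "ad_break_start",
    "start_utc": "2024-05-01T14:30:00Z",
    "duration_s": 60,
    "slots": 2,
    # As noted above, metadata can even carry a loudness target so the
    # inserted spot matches the surrounding program.
    "target_loudness_lufs": -16.0,
}

payload = json.dumps(ad_break_cue)
print(payload)
```

The point of keeping such metadata clean is exactly what the cue shows: timing, slot count and loudness all travel with the break, so the right ad fires at the right time at the right level.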
RW: How has the move toward virtualization and cloud-based services changed streaming workflows?
Bialik: Many ad networks and content delivery networks are now cloud-based. The “cloud” is just another place where the servers and infrastructure are located — the equipment is somewhere else, and you’re linking to it.
Broadcasters often use CDNs, which fit the definition of a cloud because you send your signal up to them, and they distribute it out. This is how ad content is delivered from a different source and integrated into your stream.
RW: Beyond loudness, what are some other common technical mistakes in streaming?
Bialik: “Now playing” data not being synchronized is a big one. For instance, if you’re playing music, the song title has to come up on the screen in sync with the audio. A mismatch can occur across different platforms like HD Radio, RDS and the stream if the databases aren’t synchronized.
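The fix for that mismatch is usually a timing offset: delay the title update by the stream’s playout latency so the display changes when listeners actually hear the new song. A minimal sketch, with illustrative numbers:

```python
# Minimal sketch: schedule a "now playing" update so it lands in sync
# with the delayed stream audio. The latency figure is illustrative.

def title_update_time(song_start_epoch_s: float,
                      stream_latency_s: float) -> float:
    """When to push the title so it matches what listeners hear."""
    return song_start_epoch_s + stream_latency_s

# A song starts at t = 1000 s at the studio; the stream runs 18 s behind,
# so the stream's title update should fire at t = 1018 s.
print(title_update_time(1000.0, 18.0))
```

Each platform (HD Radio, RDS, the stream) has its own delay, which is why the databases feeding them have to stay synchronized.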
RW: Given the capabilities of modern streaming, what is one of the biggest things broadcasters need to focus on?
Bialik: Educate their salespeople on what they’re selling.
With streaming analytics, you can know exactly who is listening, what they’re listening to, what they’re listening on and even where they live. Since streaming is a bidirectional service, you can get the exact metrics of how many listeners you have. Leveraging this data for targeting is very important for sales.
RW: How should broadcasters view their role in the current content landscape?
Bialik: Broadcasters need to remember that they are first and foremost content distributors. They are distributing content over the air and on the stream, and some of that content might be the same, but some might be different. They are a conduit for distributing content, and by putting content on an HD channel or a stream, they are giving the public more ways to listen to more content, which increases competition.
Read more on this topic in the Radio World ebook “Streaming Best Practices.”