


Are You Up to the Challenge?

Heraclitus has more to do with latency than you might think

Like just about everything else technical, the state of the art in broadcast technology is continually in flux. This has been the case all during my career, and it will continue long after I have hung it up.

The back panel of this Tieline Genie Distribution codec is a great example of a device spanning technological generations, offering analog as well as AES I/O, AoIP and Ethernet.

The challenges that we face often don’t particularly concern incoming or new technology but rather the integration of that new technology into the existing infrastructure. The new technology (usually) works well enough on its own, but it often doesn’t play well with older infrastructure without some work.

Thinking back to the 1970s, transmitter control ladders were often 120 VAC. Older stepper-relay remote control systems worked okay with this, easily handling the voltages and currents, but when late analog and early digital remote control systems (think Moseley TRC-15) began utilizing smaller and smaller relays, we had to interface our transmitters with 24-volt DC relay panels. I built quite a number of those over the years. It was certainly a pleasant change when transmitter manufacturers began using low-voltage control systems.

And then in the late 1970s and early 1980s, we began using composite STL systems and putting the audio processors at the studio. These audio processors had composite outputs, which was a new feature in those days, and feeding that composite output, along with whatever subcarriers we used for remote control, right into the STL transmitter represented a pretty cool architecture. It allowed us to make processing changes from the quiet environs of the studio instead of at a noisy transmitter site.

The problem with this arrangement was that it was prone to loudness-robbing overshoots. Then Eric Small came out with his revolutionary composite clipper, and we began putting those at the transmitter sites between STL composite output and exciter composite input. This provided the perfect marriage between technologies and allowed us to regain some, if not all, of what we had lost.

Another “hybrid” technological leap came in the mid-1980s when Steve Church came out with his Telos 10 phone hybrid, which would optimize the hybrid null upon connection, resulting in the best possible null and caller/host isolation. Before that, we had been jeeping speakerphones for this purpose, and while that worked, it was strictly a one-way-at-a-time proposition. (See jeeping sidebar.)

The Telos 10 changed all that, and it was, if I recall correctly, made to directly interface with the old-technology 1A2 telephone key system. It worked, and it worked well, marrying the old telco technology with the new. Calls sounded great and hosts loved the new system.


Thinking forward from there, we made (and arguably are still making) the transition from analog to digital audio.

Most stations, I suspect, are still using a combination of analog and digital of some sort, either AES, TDM or AoIP. Manufacturers have been good about accommodating this hybrid situation, with devices that provide either/or/both kinds of inputs and outputs. Even in my company’s “all-digital” facilities, we still retain some analog capability here and there, often for clients and hosts to use for their device interfaces (laptops, tablets, etc.), music on hold and the like.

Back in 2002 or thereabouts, the first wave of hybrid digital radio conversions began taking place. This was one of the biggest marriages of old and new technologies in my thinking. It required a lot of changes, often at both studio and transmitter (and in between).

At the transmitter site, a digitally modulated signal was added to the analog FM or AM signal by any of several methods. That was often the easy part. Things got interesting in dealing with the separate audio paths: inserting a fixed time delay into the analog signal so that it matched the encoding/decoding delay of the digital signal, providing separate processing paths for the analog and digital audio, and providing an artifact-free transmission path from studio to transmitter for the main and multicast audio.
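The fixed-delay idea itself is simple: hold the analog-path samples in a FIFO just long enough to match the latency of the digital path's encode/decode chain. Here is a minimal sketch of such a delay line (the sample rate and delay values are illustrative only, not from any particular exciter):

```python
from collections import deque

def make_delay_line(delay_seconds, sample_rate):
    """Fixed delay: a FIFO pre-filled with silence. Each new sample
    pushed in pops out the sample from delay_seconds ago."""
    n = int(delay_seconds * sample_rate)
    buf = deque([0.0] * n, maxlen=n + 1)

    def process(sample):
        buf.append(sample)
        return buf.popleft()

    return process

# Toy example: three samples of delay at a 1 Hz sample rate.
delay = make_delay_line(3, 1)
out = [delay(s) for s in [1, 2, 3, 4, 5]]
# out is [0.0, 0.0, 0.0, 1, 2]: the input stream, three samples late
```

In a real plant the delay is set (or auto-adjusted) so that the analog and HD audio arrive at the receiver time-aligned, letting the radio blend between them without an audible jump.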

In those early days of HD Radio, a station could well have either analog or AES audio feeding either or both of the analog and HD modulators. A glance at the rear panels of exciters and exporters would often leave an engineer scratching his or her head. A port might be labeled "Digital In," yet have both left and right connectors, a clue that it was actually an analog input for the digital (HD) audio source. The opposite was sometimes true on the analog (FM or AM) side, where a port marked "Analog In" was actually an AES input. It was, and in some cases still is, easy to get confused!

The audio processors we are using these days have multiple inputs and outputs — digital, analog and AoIP in, and digital, analog and composite out. This is a good example of manufacturers responding to the wide-ranging infrastructures in broadcast plants.


The “in between” part of broadcast infrastructure is also in transition. You still see Mark and Scala grid antennas on studio building roofs here and there, but more and more you are likely to see solid dish antennas for 6, 11, 18 and 23 GHz Part 101 links that provide a digital pipeline between studio and transmitter site. A lot of FM stations have abandoned their composite STL architecture for an all-digital path. Processors can still be left at the studio in some cases, but it’s a whole new world in terms of the conveyance.

Back in November, Paul Kriegler, who has recently written about audio quality in these pages, stopped by my office for a visit, and we got to talking about latency.

For many years, monitoring was done off air in real time, with studio headphones fed from receivers tuned to the station. Paul was an on-air guy at one point, and he said that a delay of as little as 10 ms would drive him nuts. Simple encoding/decoding/processing delays add up to many times that amount, making real-time off-air monitoring just about impossible in today's digital and hybrid infrastructures. But for many of us, it doesn't matter anyway: we have over eight seconds of HD analog diversity delay in place, plus whatever amount of profanity delay we use.

In most of my stations, we’ve got close to a minute of total delay. That requires a whole new method of “off-air” monitoring, particularly for the studio headphones. Once again, manufacturers (and particularly audio processor manufacturers) have recognized this and in some cases provide a “studio” processing path that can be used as a processed pseudo “air monitor” feed in the studio.
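The total studio-to-air latency is just the sum of the individual stage delays, which is why it piles up so quickly. As a rough illustration (the stage names and numbers below are hypothetical, not measurements from any particular facility):

```python
# Hypothetical air-chain delay budget, in seconds.
# Values are illustrative only, not from a real plant.
delays = {
    "profanity delay": 8.0,
    "HD diversity delay": 8.2,    # analog held back to align with HD
    "STL codec encode/decode": 0.25,
    "audio processing": 0.05,
}

total = sum(delays.values())
print(f"Total studio-to-air delay: {total:.2f} s")
```

Even this modest budget is hundreds of times the 10 ms an announcer can tolerate in headphones, which is why the processed "studio" path, taken ahead of all these delays, has to stand in for a true air monitor.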

Heraclitus once said, in effect, that the only constant is change. That is certainly true in the broadcast engineering realm. Anyone who has been in this business very long recognizes that we are part of a continual evolutionary process that is leading … well, who knows where?

It is our job as engineers to adjust and adapt, rolling with the changes and making the very best of all the technologies at our disposal. That can be a daunting challenge, as it requires continuing education and a continuously broadening skill set. That’s what we call “growth.” We have to tap into all the available resources, including educational opportunities provided by the SBE, manufacturers and others, and we have to network with other engineers to share what we know (and ask about what we don’t).

Are you up to the challenge?

Cris Alexander, CPBE AMD DRB, is director of engineering of Crawford Broadcasting Co. and technical editor of RW Engineering Extra. Email him your thoughts and suggestions for articles.


"Jeep" and its derived forms were part of the tech vernacular in an earlier age. To jeep meant to shunt one circuit or component across another circuit or component, and that's what I meant above when I mentioned "jeeping speakerphones": the speakerphone's output and input audio were tapped and fed to and from the on-air console to air phone calls with some reasonable amount of quality.

In old AM phasing and coupling systems, we used what were commonly referred to as “jeep coils,” where one coil was connected across a few turns of another coil to take some power from the larger coil and send it to a different circuit. For example, a phasor with a tank-type power divider would have a large, resonant coil across the common point, and each tower would have a “jeep coil” tapped across a few turns of that large coil to send power to the circuit branch for that particular tower.

So… “jeeping” isn’t always about off-roading!