

Trends in Virtualization & the Cloud

The story of how these concepts will affect radio’s technology landscape is starting to be written


Doug Irwin, CPBE, DRB, AMD, is vice president of engineering for iHeartMedia’s Los Angeles region. He is a technical advisor to Radio World, a longtime contributor and former editor of Radio magazine. This article appeared in the ebook “Trends in Virtualization & the Cloud.”

Billions of people around the world — users of Facebook, Google and Amazon to name but a few — have grown accustomed to using “the cloud.” As broadcasters, though, the idea of using the cloud for virtualization of functions that we’ve always done close to home — say, in the rack room at the radio station — may take some getting used to.

We’ve visited the subject before, but it’s a fast-growing arena. And if managing with the cloud, not just being an end user of it, is a new concept to you, read on, as this ebook is for you. We’re going to examine ways in which you can take advantage of cloud services that are now ubiquitous, in the context of day-to-day broadcasting. We’ll review some basics about the cloud and virtualization, then ask several console and automation manufacturers how these concepts are playing out in their spheres of radio.


In actuality, you’ve used the cloud countless times, likely without giving it any thought. One could make the case that when long-distance telephone services came into use, “the cloud” did as well. Think about it: On one end, you, with a telephone; on the other, another party, with a telephone. You had no idea just what went on between the two ends; you just knew that they communicated. The technology that existed between the two ends was of no importance, really.

Fast-forward to the mid-1990s when internet access became necessary at work, and naturally, at home as well. Like long-distance, this was another case of end-to-end communication that worked via “something in the middle.” No one was referring to the internet as the cloud, but conceptually, the idea was the same.

Cloud computing is a relatively new thing, especially for end users. According to one useful definition found on Wikipedia, “Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. The term is generally used to describe data centers available to many users over the internet. Large clouds, predominant today, often have functions distributed over multiple locations from central servers.”


According to Ravi Namboori, writing in Dzone, “Cloud computing has its roots as far back [as the] 1950s when mainframe computers came into existence. At that time, several users accessed the central computer via dummy terminals. The only task these dummy terminals could perform was to enable users [to] access the mainframe computer. The prohibitive costs of these mainframe devices did not make them economically feasible for organizations to buy them. That was the time when the idea of provision of shared access to a single computer occurred to the companies to save costs.”

So, in consideration of the PC revolution of the 1980s and ’90s, we’ve now come full circle. During the 1990s telecom companies began offering virtualized private network connections “whose quality of service was as good as those of point-to-point (dedicated) services at a lesser cost. This paved way for telecom companies to offer many users shared access to a single physical infrastructure,” according to Namboori.

According to Barry Parker, a recently retired networking engineer, “Cloud computing has been around in concept for many years but it wasn’t until 2006/2007 that it really took off commercially. There are different variations that can be adopted ‘as a service’ depending on business or individual need. One very common example of software as a service (SAAS) would be Microsoft Office 365, which runs on Microsoft’s cloud (Azure). Office365 was the first cloud application deployed by the company I worked for at Microsoft’s Singapore data center.

“Of course all of the major players (Amazon, IBM, Microsoft, Oracle) have their own services and data center (cloud) networks,” said Parker.


Not surprisingly, there are multiple types of clouds and services available.

Namboori describes four types of clouds: public, private, community and hybrid.

With a public cloud, a provider offers services such as storage, available to users via the public internet. (Think ADrive or Dropbox.)

A private cloud (also known as an internal cloud or corporate cloud) offers a means by which hosted services can be provided to a restricted number of users protected by a firewall or a direct WAN connection. This allows businesses to exercise control over their data, as the BBC does with ViLoR, to be discussed shortly.

A community cloud is a resource shared by more than one organization whose cloud needs are similar; a hybrid cloud is the combination of two or more of the types previously described.

There are three basic types of cloud computing services available. With “software-as-a-service” (SaaS), a company pays a subscription fee to the provider and accesses the service via the internet. “Platform-as-a-service” (PaaS) affords a company the ability to build its own custom applications for use by its entire workforce. “Infrastructure-as-a-service” (IaaS) provides subscribing companies with virtual infrastructure. Think Amazon Web Services (AWS) here, though there are others such as Google Cloud Platform, Microsoft Azure and Rackspace, to name a few.


According to Globaldots, there are several compelling reasons companies are making more and more use of cloud computing, which I’ll also refer to as virtualization.
Not surprisingly, the first reason given is that by using cloud infrastructure and virtualization, your company won’t have to invest as much money in equipment and subsequent maintenance of said equipment. And think about this: Without all that hardware on-site, there won’t be as much of a need for a large rack room and its accompanying costs of air conditioning and electricity. A new radio station build will likely have less of a footprint, potentially reducing leasehold expense as well.

The second reason given is that of data security. As we have seen recently, data breaches and ransomware can have devastating effects on the operations of radio stations.

“Cloud offers many advanced security features that guarantee that data is securely stored and handled,” according to Globaldots. “Cloud storage providers implement baseline protections for their platforms and the data they process, such as authentication, access control and encryption. From there, most enterprises supplement these protections with added security measures of their own to bolster cloud data protection and tighten access to sensitive information in the cloud.”

A third reason is scalability. A large enterprise of 1,000+ employees doesn’t have the same IT requirements as a five-person startup, so use of the cloud enables an enterprise to scale its IT resources up (or down) efficiently and quickly, according to business demands.

No one expects that cloud services will be provided for free, of course, but the presumption here is that the cost will be less than the equivalent needed for space, electricity and air conditioning for a rack room that would house the automation system. One needs to consider the expense of connectivity as well, of course; but in reality, the connectivity required costs less and less as time goes on.


So now that we know some basic things about the cloud and virtualization (“cloud computing”), let’s put some basic questions to several industry manufacturers that have been active in this space.

For starters: Just what would be the advantage of operating an on-air radio facility in the cloud? Wouldn’t the station still need the morning show studios, and thus a place for all the talent to sit, with computer monitors, headphones, air conditioning, and naturally a place to collaborate before the show starts up?

“There will always be a need for on-air talent to sit somewhere with recording equipment,” said Dominic Giambo, development engineer at Wheatstone.

“However, the need is only momentary for brief periods throughout the day for most facilities. In many cases for things like morning shows, a national group could use similar mixing and processing equipment in multiple markets that are spread in different time zones. Commercial cloud models tend to charge by time the equipment is in use, so if there is not 24/7 need for the gear it is certainly possible this model can save money. But bandwidth and hosting costs do need to be considered. Typically bandwidth tends to cost the most, so using high-quality compressed codecs to transfer the audio is important,” he said.

Clark Novak, radio marketing specialist at Lawo North America, approached the question by first defining some terms.

“There’s really no such thing as a ‘cloud-based’ studio; there must be physical space somewhere for the humans. The only thing that can live in the cloud is software — virtualized equivalents of physical, in-studio tools like mixers, codecs, routers and playout PCs. So you can’t truly be rid of some real location for the talent to occupy — but they don’t all have to occupy the same location.

“Radio broadcasters have been looking at how the TV and recording industries have embraced remote production and distributed content contribution, and they see how it can save money and boost productivity,” he continued.

“So why have your morning team all in one building? With high-speed fiber internet becoming more available and more affordable, you could assemble your dream team, have them log into a portal at home with access to all the production tools they need — hosted in the cloud — and create their shows without ever being in the studio together. All they’d need is a mic, an internet connection, and a PC. How much money would this save? Depends upon how you do it, but it might be a sizable amount over the long haul,” Novak said.

Kirk Harnack, senior systems consultant for the Telos Alliance, responded to my series of questions as well.

“Every broadcaster has different ideas about cloud-based operations. The BBC’s operational and ‘re-fit’ model for BBC Local Radio [ViLoR, for virtualized local radio] benefitted handsomely by moving a large part of their infrastructure to their own ‘cloud.’ They report saving millions by speeding up their refit cycle for 78 studios, removing several racks of gear at each station, standardizing on their infrastructure model at all 39 locations, using less HVAC and floor space at 39 locations, and centralizing their technical staff such that IT staff handle all the back-end, with far less troubleshooting and repairs at the 39 studio locations,” Harnack said.

“Moreover, BBC Local Radio stations are sharing many resources in the cloud, and the presenters aren’t even aware of it. Codecs and talk show phone systems, for example, are dynamically allocated as needed for different shows across Great Britain in different dayparts.”

Of course not all broadcast operations have the same needs, he noted.

“Cloud-based infrastructure certainly allows more flexibility in the studio end of things. One station group has a goal of investing under $50,000 in a given new studio facility instead of over $100,000 as they do now. The difference is that their automation systems, codecs and talk show systems will be in a data center. Plus, some or all of the data center functions will be virtualized. This implies easy backup and redeployment of what used to be committed to purpose-built hardware.”


If a station or group were setting out on a facility project in 2020, should it consider the cloud, or just go ahead with building the facility in the locality?

“Since this is all evolving technology, it’s difficult to say exactly what long-term impact it will have,” said Giambo of Wheatstone. “There are still valid downsides to cloud-based technologies, and some may elect to continue to build locally.

“However, it does make sense to at least plan for the cloud longer-term. With emerging 5G and other low-latency satellite alternatives, the increase in connectivity is likely to drive further growth. When a single network failure can bring down your cloud connectivity, relying on that link may not be the best idea. When you have fiber, plus 5G and potentially low-orbit satellites as secondary and tertiary backups, that redundancy makes cloud-based tech more accessible,” he wrote.

Novak of Lawo said there is no one-size-fits-all answer to the question.

“Your talent’s willingness to embrace new production methods and virtual tools, management’s desires and your overall comfort with the concept and your ability to support it all play a part, along with building cost, equipment purchase and installation and ongoing maintenance,” he said.

“Your corporate culture will play into this too. Is your company ready to lead, technically speaking, or do they want familiarity? All of these things should be factored into your decision.”

For Harnack at Telos, the best way to get ready for a cloud-based infrastructure for a radio station is to upgrade to a local AoIP system.

“This technology is not going away, and it just keeps getting better and more ubiquitous. AoIP gives rise to countless serendipities, and the rise of the AES67 AoIP standard vanquishes any fear about competing standards or being ‘stuck’ with a bad choice. Plus, with an AoIP infrastructure locally, you’ll be ready to plug right in to the equipment and technologies that cloud-based operation entails.”
For actually moving broadcast functions to the cloud, he suggested that you begin a dialog with both your AoIP manufacturer and your automation vendor.

“Several broadcasters, large and small, are making plans now for the cloud to be a large part of their broadcast infrastructure,” Harnack said. “Experimental systems are up and running. However, there are no off-the-shelf systems ready for a quick ‘go-live’ right now, at least none that fully meet broadcasters’ operational needs and legal requirements. Yes, pieces and parts are available, but a broadcaster will need to be highly motivated and bring plenty of their own expertise to such a project today.”


Latency seems to be a roadblock to success in this kind of system architecture.

“Yes, latency is always an issue when transporting audio via the internet,” Novak said. “Lawo mitigates this in our virtual mixing products by constructing a parallel, zero-latency monitor mix using the local I/O equipment. If your mixing services are in the cloud serving a single studio, this approach solves the problem. Multiple contributors in multiple locations would require a different solution.”

Giambo of Wheatstone noted that in many cases, such as final play-out, content can be acceptably delayed without any problem.

“In other cases, such as required by monitoring of local mixes, the latency issue may be insurmountable. The key to deploying cloud-based technology is going to be proper understanding of the latency requirements for the end-use case. Cloud can have many applications, for instance, playout of audio files could very easily be moved to the cloud. Content distribution networks already use distributed datacenters to provide low-latency access to video and audio content,” he said.

Harnack at Telos said, “We apply low-latency local mixing where it belongs and let cloud mixing do its part in the data center. We’re able to have the ‘punch-start’ rhythm of the old cart machine days (under 100 ms) for the remotely hosted automation, while the talent hears themselves with under 8 ms of delay. Remote talent in off-premise studios would have more delay, but not really much more than using off-the-shelf IP codecs available today.”

“There are several operational models to handling cloud-based infrastructure. Our R&D team at the Telos Alliance started with the premise that too much latency would never be usable for the kind of tight and natural content creation that broadcasters need.”


Next I wanted to know if manufacturers anticipate having the “engine” functionality that one would find in their 2020 products, running locally, available to run in the cloud soon.

“Wheatstone is running a server-based, scalable engine in our lab right now,” replied Giambo. Additionally, “we have the Glass LXE product which allows remote mix engine control from across the room or across the country.”

Glass LXE is a multi-touch UI that operates as a standalone virtual console that is part of a WheatNet-IP audio network. It is modeled after WheatNet-IP’s LXE console, a reconfigurable control surface available in a main frame or in split-frame configuration for multiple operators. Glass LXE works with a mix engine to handle mixing and processing as part of the WheatNet-IP audio network.

Novak said Lawo’s RƎLAY suite of software products support touchscreen PCs and can run in virtual server environments.

“There’s mixer software, a multiple-IO AoIP virtual sound card for PCs, a very sophisticated virtual router and signal processor program, and AoIP stream monitor software that can watch and provide status for all of your most critical content streams. So the technology is already available,” he said.

“We do have clients using virtualized and cloud-based technology. I’m not at liberty to disclose who those clients are, however,” said Novak.

Harnack said, “There is an Axia audio console and Virtual Mixer that are designed for split (local and cloud) mixing, with perfect monitoring for the studio talent. This system is being tested by some U.S. and international broadcasters. The proposed pricing is quite a bit lower than the full-up ViLoR model.”


In this ebook I’ve mentioned ViLoR several times, so let’s now delve into it. As noted, ViLoR is an acronym for virtual local radio. It’s an interesting application of the virtualization and shared resource model, and one from which we can learn something.

In early 2014 the BBC started work on this project to centralize playout and storage for its local radio stations around the United Kingdom. The overarching goal of the plan, according to BBC Lead Project Manager Geoff Woolf, was to take as much technology out of the local station facilities as possible. The plan was in response to analysis of how money was spent at a few stations that had been previously refurbished. That analysis showed:

  • Funds being used in the creation of new space
  • Time spent by staff installing and commissioning new gear
  • Use of temporary facilities during refurbishment

It was estimated that the average time for a refurbishment could be brought down to less than eight weeks by moving toward centralized facilities for as much of the studio infrastructure as possible. In fact, a turnaround of fewer than six weeks was generally possible, while the need for temporary facilities was essentially eliminated, according to a 2019 article.

The ViLoR concept was initially used for BBC Radio Northampton, Suffolk, Essex, Three Counties and Plymouth. Due to the success of the implementation, the remaining 34 BBC local radio stations were moved to ViLoR, and the project reached completion for all 39 stations by March of last year.

Woolf was kind enough to answer some additional questions I had about virtualization. I asked: If I have the need to do a studio build, within say, 5 years, should I wait for further developments in cloud virtualization, or should I go with more standard hardware now?

“There is no need to wait,” he replied, “as the BBC has proved you can successfully virtualize and centralize the equipment and infrastructure required to support many studios, and for it to be more cost effective to deploy and operate. One factor is that we decided to build, own and operate our own virtualized environment, and in the future consideration will be given to operating the software services on third-party virtual environment providers.”

I asked about the life expectancy of a virtualized system, how often I should expect to do software updates, and would it obviate the need to buy hardware again?

“You will always need hardware, even if it is just headphones, mics, speakers, control surfaces, screens, PCs, etc., all of which will wear out and need replacing,” he replied.

“What we are finding at the BBC with ViLoR is the ease with which we can keep software and operating systems patched and up to date. The BBC’s ViLoR support team have a regular patching schedule they work through. The virtualized environment also lends itself to making replacing faulty hardware straightforward as we can migrate services from one host to another with little or no interruption of service.”

Readers might wonder what sort of connectivity is required between the BBC local radio station locations and the two BBC data centers, located in London and Birmingham.

Harnack of Telos, whose Axia equipment is part of the project, said that each location is connected back via guaranteed bandwidth circuits provided by British Telecom. “Broadcast audio is carried in either linear or low-delay AAC formats,” he wrote. Woolf told me the payload needed is about 20 Mbps per studio.

I also asked whether or not the distance between locations affects latency. “It just barely affects latency,” said Harnack. “The IP connections are all via fiber from city to city so ‘time of flight’ for the packets is pretty swift. A packet traveling 250 miles via the fiber connection may be delayed up to 2 milliseconds with respect to a ‘zero distance’ path.”
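Harnack’s 2 ms figure is easy to sanity-check with a back-of-the-envelope calculation. In the sketch below, the fiber velocity factor is a typical textbook value, not a number from the BBC project:

```python
# Estimate one-way "time of flight" for packets over a fiber path.
# Light travels through fiber at roughly 2/3 its speed in a vacuum.
SPEED_OF_LIGHT_KM_S = 299_792      # km/s in a vacuum
FIBER_VELOCITY_FACTOR = 0.67       # typical for optical fiber (assumed)

def fiber_delay_ms(distance_miles: float) -> float:
    """One-way propagation delay in milliseconds over a fiber span."""
    distance_km = distance_miles * 1.609344
    return distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_VELOCITY_FACTOR) * 1000

# A 250-mile path works out to roughly 2 ms, matching the figure above.
print(round(fiber_delay_ms(250), 1))
```

Switching, routing and queueing along the path add a little more, which is why real circuits say “up to” 2 ms rather than exactly that.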

I wondered what kind of latency the talent experience in the current arrangement.

“The presenters don’t experience monitoring latency of their own voice, as that is mixed locally,” said Harnack, an important point. “But the latency on control round trip is about 25 ms and for audio about 50 ms. So, from pressing a start button you have one-way control latency of 25 ms and about 50 ms of return audio latency before the audio starts, plus the time it takes for the system to play — say 100 ms or so total. This is a consistent value and presenters just allow for it now.

“There is more delay when interacting with callers, especially on mobiles as there is external network latency, too, but this doesn’t seem to sound unnatural when you listen on air.”
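Harnack’s numbers tally neatly to the total he quotes. In this sketch the playout startup time is an assumed figure, chosen only to round out his “100 ms or so”:

```python
# Tallying the ViLoR "punch-start" latency budget described above:
control_one_way_ms = 25    # button press to the remote automation system
audio_return_ms = 50       # audio path back to the studio
playout_startup_ms = 25    # assumed system start time (not a quoted figure)

total_ms = control_one_way_ms + audio_return_ms + playout_startup_ms
print(total_ms)  # → 100, the consistent value presenters allow for
```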


Of course this discussion has implications in automation too.

ENCO has offered a virtual server for “local” cloud applications for some time. I asked President Ken Frommert about use of the DAD system from a non-local cloud.

“Yes, this workflow is already being used,” he replied. “Online stations with multiple web streams are currently using a completely cloud-based DAD infrastructure. Terrestrial stations are also utilizing this workflow to save money and allow for extra flexibility and reliability by using a fully cloud-hosted DAD system to stream audio directly to their transmitter site, as well as simulcasting to web streams,” he said.

“DAD has been a preferred choice of streaming services for its ability to play out 16 different streams of audio from one instance of DAD. In a cloud-based environment, one can call up an array of virtual machines, each playing out up to 16 different channels of content. It’s never been so easy or cost-effective to create a fleet of internet radio stations, creating a hybrid streaming/radio platform.”
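The arithmetic behind such a “fleet” is straightforward. A minimal sizing sketch, assuming only the 16-streams-per-instance figure Frommert cites (the 100-stream fleet is a hypothetical):

```python
import math

# One DAD instance can play out up to 16 streams, per the quote above.
STREAMS_PER_INSTANCE = 16

def instances_needed(total_streams: int) -> int:
    """Number of cloud VMs (one playout instance each) to cover a fleet."""
    return math.ceil(total_streams / STREAMS_PER_INSTANCE)

print(instances_needed(100))  # → 7 instances cover a hypothetical 100 streams
```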

Listeners demanding more personalized content, he said, often turn to platforms like Pandora or Spotify. “However these platforms do not include the local talksets that make talk radio locally relevant to the individual.” He said the ENCO streaming solution allows “a virtually limitless amount of different station formats, all sharing personalities and local programming.”

I asked Frommert: If one were to approach one of the cloud services with the idea of starting up an instance of ENCO, what would be the best way to go about it?

“Setting up a DAD system in the cloud is fairly straightforward,” he said. “All you’ll need is a standard virtual machine from any cloud provider, the DAD software and an AoIP driver. There’s no additional customization needed from the cloud provider. We recommend contacting our support department to save time and ensure a smooth setup.”

I asked Frommert: If the big cloud services such as Google can extend the cloud to your premises, could that facilitate an instance of ENCO operating on its own, effectively in the cloud, at a remote transmitter site? Given that most of the time these sites have plenty of space, with air conditioning and backup power, the system could make for a good off-site backup in addition to being the normal means of play-out.

“This is absolutely possible,” he replied. “The move towards cloud hosting was initially inspired by the need for an offsite disaster recovery system. In this case, the locally hosted DAD is mirrored on a cloud server, running in lockstep and duplicating all library and playlist maintenance, so it’s ready at any moment to take over.”

This can work in reverse, he continued, if your main form of playout is through the cloud.

“If internet connection is knocked out, DAD will switch to its disaster recovery option, which would be the local machine hosted at the transmitter site.” The DAD system is intended to be flexible, he said. “When combined with the flexibility of on-premise, fully cloud hosted and hybrid computer systems, nearly any kind of virtualized radio automation is possible.”


RCS has been active in the cloud for several years, having introduced music scheduling as a service in 2016. And last year it introduced a cloud-based radio automation system, saying it expected early adopters to consider cloud-based playout for disaster recovery backup.

I asked Philippe Generali, president and CEO of RCS Worldwide, if a station could use Zetta Cloud, living in the AWS or Azure clouds, 100 percent of the time.

“Yes,” he replied. “For 100% streaming stations, it is ready today and we are just starting to market it. First Zetta Cloud produces a stream encoded to the flavor of choice. Together with our professional-grade Revma streaming technology, we offer the complete chain.

“We are also very pleased to report that we have worked with the best sound processing vendors in the world, such as the innovative Sound4 from France or U.S.-based Omnia, to create native, cloud-based sound processing, which sounds as good as what you’d get from a traditional box in a rack. We even created an embedded version of Zetta Cloud capable of running in any box with a CPU and a bit of storage to run split frequencies or as a backup with cached content and logs at the transmitter. This includes instructions on what to do if the internet goes down and it loses the ability to update.”

He said that RCS views a “station” as being a schedule of content, going to one or more listening endpoints, one of which could be a transmitter. “We do have a major client right now who is running hundreds of streams 100% in AWS using our technology.”

As far as how to start, Generali said, “Getting stations up and running is as simple as creating a new tenant inside the system, in the appropriate geographical region. Practically, we send them a username and password to enter in their browser from any device — PC, Mac, tablet, etc. Anything goes. We can host a GSelector connection and the user can begin providing audio material to play, either manually or through a connection to their Zetta system anywhere around the globe.”

So if the big cloud services can extend “the cloud” to your premises, could that facilitate Zetta Cloud operating on its own, effectively in the cloud, at a remote transmitter site?

“We view Zetta Cloud as an extension of the Zetta system, which can run totally on its own if desired/needed,” he said.

“But we also have broken the functions down to the point where an ‘edge player’ can be an extension of Zetta Cloud back on premises. This edge player contains all of the core functionality that the cloud does, making it easy to hook it up to a transmitter and go, but controllable through the web/cloud.”

Generali said every customer is different. “Some of our clients for example are eager to embrace the state-of-the-art RCS technology, albeit in their own private cloud.”


Jutel started out in the 1980s by building turnkey broadcasting systems in Finland, but later turned to focus on radio automation software.

It describes its RadioMan offering as the first virtual browser-based radio production and playout system built completely in the cloud. It’s a “radio-as-a-service” that combines the functions of a traditional radio broadcasting system, from planning to editing to playout, through software running in the cloud accessible in a web browser.

“Our goal with this new version of RadioMan is first and foremost to make the deployment of the technology agnostic and to give the user complete control over whether they want to deploy RadioMan in a cloud environment, on physical hardware, or as a hybrid of the two,” said VP of Sales & Marketing Ryan Jespersen.

The RadioMan infrastructure, from the user interface to the deployment model, was designed using web-native technologies.

“RadioMan removes the roadblocks that arise when migrating physical radio studios to virtual ones,” said Jespersen. “The entire RadioMan backend and applications servers can be deployed in a matter of hours, instead of weeks or months [as] with traditional hardware deployment.”

Jutel says broadcasters can improve ROI and reliability while saving through ongoing maintenance and service costs. Any station, it says, might benefit from taking out on-site hardware and moving to virtual environments, but especially “small, pop-up, temporary and web-only radio stations,” according to Jespersen.

Users will access RadioMan with a browser on any thin client.

“With older systems, users tried to access one machine, which created a bottleneck that slowed processes down considerably,” he said. RadioMan includes an HTTP load balancer that handles HTTP traffic efficiently to avoid bottlenecks and streamline data flow through the system. (This is separate from data connection redundancy.)
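As an illustration of the general idea, not of RadioMan’s actual implementation, a round-robin balancer simply hands each incoming request to the next server in rotation, so no single machine becomes the bottleneck:

```python
from itertools import cycle

# Round-robin load balancing in miniature. The server names are
# hypothetical; real balancers also weigh health checks and load.
backends = cycle(["app-server-1", "app-server-2", "app-server-3"])

def route() -> str:
    """Pick the backend that should handle the next request."""
    return next(backends)

# Six requests spread evenly across the three servers.
assignments = [route() for _ in range(6)]
print(assignments)
```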

I asked Jespersen about the minimum requirements for the thin client usage.

“The bandwidth requirements for a user’s local ISP are quite low and can be fully supported with a 2 Mbps asynchronous connection,” he replied.

“The bandwidth requirements are also varied depending on the tasks the user is performing. For example, if the user is in the Planning interface, the bandwidth requirements are even lower since most of the functionality is simple playback within the Playlist design or Multi-track editor modules. If the user is using the Playout interface, the bandwidth requirements can be higher if they are capturing and encoding their microphone using WebRTC. However, this can be supported with a ~2 Mbps asynchronous connection (symmetrical).”

For example, he said Jutel has used the microphone capture over a 3G connection with good results connecting from the United States to a RadioMan server running in AWS Ireland.
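For a sense of scale, compressed mic capture uses only a fraction of that 2 Mbps uplink. The Opus bitrate below is an assumed figure for high-quality WebRTC audio, not a Jutel specification:

```python
# How much of a ~2 Mbps uplink does live mic contribution actually use?
UPLINK_KBPS = 2000        # the ~2 Mbps connection cited above
OPUS_STEREO_KBPS = 128    # assumed WebRTC (Opus) capture bitrate

headroom_kbps = UPLINK_KBPS - OPUS_STEREO_KBPS
print(f"{headroom_kbps} kbps left for control and interface traffic")
```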

The cloud-based “back-end” uses Apache web servers, and messaging between the front and back-end infrastructure is controlled via web-native ActiveMQ messaging.

“We have also deployed PostgreSQL database with RadioMan 6 to make it even more affordable and easier to deploy for the user,” he said.

Regarding deployment, one goal is that the system has as few central locations as possible. Even though it is designed to work in a virtual environment, Jespersen said, it can be deployed locally if the need arises, as may be the case for local playout servers.

“This is useful in case of an unreliable internet connection because it synchronizes with the central playout server and the user can still broadcast over a local transmission tower,” he said. Multiple systems can run in a redundant fashion; the user can choose between local or cloud-based deployment, or they can deploy the system through different cloud-service providers.

The “back-end” architecture of RadioMan 6 facilitates its use as a mobile broadcasting tool.

“Since none of the intelligence or software is running on the user’s own device, they can use any thin client to produce and broadcast live radio events, including a tablet or laptop,” he said.

"The model also allows for several users to produce content simultaneously from multiple locations, essentially taking out the need for telephony services. Instead of a co-host or an interviewee having to patch in their call through a dedicated machine, pull it into the station's mixer and mix it, they can do everything necessary via the web interface. The station gives them a public URL with a username and password; they can log in and then contribute with WebRTC from any remote location."


During my discussions, Kirk Harnack raised the question: “What if EAS, processing and other radio infrastructure necessities were all ‘cloudifiable’? This could be quite an interesting option for broadcasters, especially those limping along with old equipment in disrepair,” he said.

"We have replaced on-premise accounting systems, some brick-and-mortar stores, video rental stores and even Encyclopedia Britannica with cloud-based replacements that mostly cost less and work better than their predecessors. Why not radio content creation and execution infrastructure?"

Another virtualization model, he said, places "absolutely everything" in the cloud, including audio processing, the RBDS encoder and the MPX stereo generator, and then delivers a ready-to-transmit MPX signal to the transmitter site over one or more (parallel) IP paths.

It turns out that a group within the NAB Radio Technology Committee is working on such ideas (and more). It’s called the Next Generation Architecture subcommittee.

I had the opportunity to interview Roz Clark, senior director of radio engineering for Cox Media Group, who is helping with that effort.

"Virtualization is a component, I could say, of the next-generation architecture that we're defining for the industry," said Clark. "How can we design systems that are flexible enough so that if you wanted to host things in a cloud — all of your functionality, and all of the content, and all of the associated characteristics — just how would they make it to an endpoint, that being transmitter sites or something else?

“So what we’re really talking about are the general requirements of transport and other characteristics of the future of the architecture itself, not necessarily the details of bandwidth, for example,” Clark continued.

"Let me say it a different way: We know what we need to transmit. We know what kind of content needs to be moved around within a broadcast ecosystem. We want to make sure that that is defined in such a way so that if you have a use case where someone says, 'I want to virtualize on-prem,' we know what that entails.

“What we’re trying to do is to broadly describe all of the content that needs to travel and to make sure that there are placeholders or methods to do that so that indeed everything can be virtualized.”

At a broadcast facility, even if you have a total IP system and all of your automation is software-based, at some point the content stream still has to pass in and out of an EAS device, in and out of a PPM encoder (another physical device), and in and out of various systems that add or contribute content before it finally reaches its endpoint destination: the transmitter.
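That chain can be pictured as a sequence of stages the stream must traverse in order; the stage names below are purely illustrative, not any vendor's API:

```python
# Illustrative stages of the air chain described above; each takes the
# content stream and returns it with that stage's contribution applied.
def eas_device(stream):   return stream + ["EAS insertion"]
def ppm_encoder(stream):  return stream + ["PPM watermark"]
def processing(stream):   return stream + ["audio processing"]

def air_chain(stream, stages):
    """Pass the stream through every physical or virtual stage in order."""
    for stage in stages:
        stream = stage(stream)
    return stream

on_air = air_chain(["program audio"], [eas_device, ppm_encoder, processing])
```

Virtualization means turning each physical stage into a software stage in such a pipeline; the difficulty Clark describes is that every stage, and every piece of data it adds, still has to be accounted for.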

And over the years broadcasters have put more layers on this cake of broadcast.

“We’ve added data and all sorts of little things to it, and it’s wonderful and it works; but it’s complicated,” said Clark. “You can’t just have a network connection going all the way to your exciter, because you wouldn’t have PAD data. You wouldn’t have RDS stuff. You wouldn’t have Artist Experience. This is what makes virtualization difficult.”

I asked how long the committee had been working on this topic.

“We started this up about a year ago. It’s a bit of a reaction to other work,” he said.

“Technology has evolved pretty quickly around broadcast. We take advantage of various advances in technology and incorporate them into our plants; we steal the best ideas from other related industries and put them into our facilities. So why don’t we do this? Why don’t we take a look at what’s going on in the world of technology, and how we can apply those best practices to broadcast in advance of the next generation of architecture, and help manufacturers and developers work together so that we can have the solutions that take advantage of things such as virtualization or software as a service, collaborating on the path forward instead of sort of reacting and waiting for someone to develop those solutions for us?”

With respect to the Next Generation Architecture subcommittee, work continues on various use cases, and all have different variables.

“You’re talking about cloud; that’s just one of many. Based on some of these scenarios, as long as you have the network designed correctly and the transport stream designed correctly, it doesn’t matter where the content originates, it doesn’t matter where it’s stored. It really is irrelevant because the transport stream can carry all of the content, whether it’s audio or data or whatever to the final destination.”


We noted earlier that under one virtualization model, "absolutely everything" conceivably could be in the cloud. So in the context of the recent elimination of the main studio rule, a natural follow-up question is whether, from a technology perspective, a radio station would need any physical locality at all.

Presumably the technical answer is no. What about from the legal viewpoint? For this I reached out to John Garziglia, communications attorney with the firm Womble Bond Dickinson.

“The short answer is sort of,” he replied.

"Radio stations are still required to have a toll-free number at which station personnel can be reached by callers in the community of license. Additionally, the FCC has not done away with the public file requirement that every radio station on a quarterly basis determine five to 10 issues that are of concern to the community of license, and list the programs broadcast that address those issues."

While the FCC in recent years has not challenged any radio station on the procedures used to determine such issues facing its community of license, it could in the future, he said.

“If a radio station was unable to show in the determination of such issues a sufficient nexus or contact with the community, the FCC might take adverse action against its license.

"Another required nexus to the community of license is the ongoing requirement for a designated chief operator who must sign the station log on a weekly basis. By definition, several of the technical responsibilities of the chief operator will likely require some local presence, although many of his or her responsibilities could likely be performed remotely."

It's important to note, though, that neither Garziglia nor I have attempted to answer the question of whether a radio business should run that way, if given the chance. As others have said: Just because you "can" doesn't mean you "should" — or shouldn't, for that matter.


The cloud has become ubiquitous, not only in terms of its presence — via wirelines or 4G and soon 5G — but also in its purpose. People scattered all around the world are using it right now.

As broadcasters, though, the idea of using the cloud to virtualize functions that have always been handled close to home still seems somewhat alien. To me, the worst part is the feeling that somehow, in some way, I'm giving up some control, and that makes me uncomfortable.

The reality, though, is that this is the direction technology is heading, whether we like it or not, and there's not much sense in standing in its way. Broadcasting is a mature business, and a conservative one, technically speaking.

It seems fair to say that how the cloud and broad-scale virtualization might affect the broadcast technology landscape is a story that is only starting to be written.

Comment on this or any story. Email [email protected] with “Letter to the Editor” in the subject field.