MacGyver in the Age of Centralized IT

A special essay on how radio companies can manage the interdependence of local staffs and centralized IT

The authors are chief engineer of Salem Communications’ Chicago cluster and CIO of Salem Media Group, respectively.

It’s been a bit over two decades since computers largely took over radio station content management. As these systems became more sophisticated, they also became increasingly data network dependent. A lot of this was driven by consolidation in the late 1990s.

Those who lived it would agree that consolidation came with a few bumps. In those days, picking the low-hanging fruit of expense reduction was lucrative; it was an environment where stations and groups were valued at a double-digit multiple of cash flow. Every expense dollar shaved was worth $13 to shareholders. Having computers run the music and spots was an obvious economy.

But consolidation meant a new radio organizational chart. Publicly traded radio companies needed standardized reporting, which in turn meant that revenue and expense data had to be rolled up into consolidated financial statements. Sarbanes-Oxley, enacted into law following the Enron collapse, brought a requirement that data be valid and independently audited.

Transmitting that data from a radio market cluster to corporate headquarters required an environment that prevented outsiders from learning stock-market-moving information in advance of public release. Public company boards and auditors had seen what happened with Enron and Arthur Andersen. They wanted to make sure their systems were adequate.

Radio also faced another major headache. Support for computers and other digital devices at radio market units was increasingly untenable. Every employee had a desktop machine, then a laptop, then a smartphone. Another computer? Just plug it into an available switch port. Not enough ports? Just add another switch. Most switches at stations were unmanaged. Routers were often the same consumer models station engineers used at home. Crossed wires often caused problems. And finding station engineers who understood networking was a challenge.

HERE COMES THE BUT

This is how the radio world looked in about 2005 — stable in some places, chaotic in others, but with generally no-two-alike installations even within the largest of groups.

Network standardization held the promise of stability, predictability and, once the initial expense was absorbed, a continuing reduction of recurring expense. And unlike other cost-cutting efforts, centralized IT came with a help desk. Salespeople had someone to call, trained in computing and in bedside manner. Phone configuration was largely eliminated as a local headache.

You must know there is a “but” coming somewhere and here it is: Radio operations at the market level are not uniform. They cannot be. To survive, every outlet reaches a different audience. Every station has a different population within its coverage footprint. Tower sites are scattered in and around cities with differing configurations of powers, towers and utility availabilities. Each was independently designed and subject to a long history of changes reflecting good times and bad.

Remotely managing these facilities required an intimate knowledge of their architecture and limitations. Almost no examples of the standardized protocols and interoperability — the kind centralized IT offered on the business side — were present where radio’s tires met the open road of local station programming and operations.

As an example, operations and engineering staffs have, for years, been installing and using home-brew connectivity schemes. Going back to early dial-up, touch-tone control systems, remote access allowed stations to run “unattended” and to alert automatically when things went wrong. Market engineers were accustomed to creating electronic solutions to business operation and product delivery problems. They’d MacGyvered their way to opportunity, and past trouble, for years.

Generally, these operations and engineering staffers took considerable pride in how little cash they spent to keep the radio revenue engine running. Older equipment was rebuilt, modified and repurposed. In an era when station valuations are stable to declining, a culture that celebrates thrift and wastes nothing was, and still should be, encouraged.

The dark side of this frugality comes when Mr. MacGyver gets an offer from another show. Suddenly the efficient Borg Cube of inexpensive parts becomes the dreaded black box. The clock is ticking towards the inevitable death of the XT motherboard and the electronics from that old toaster oven … What happens then?

PATH TO SUCCESS

How can we find the best of both worlds: technical uniformity and supportability with economies of scale alongside the MacGyver culture of clever, low-cost solutions customized to circumstance?

It’s by unleashing local creativity and resourcefulness, supported by the robust back-end connectivity that a centralized IT department can provide.

Here are nine suggestions for how to get there.

1. Bring Market Staffs Into Larger Networking Architecture Planning. Ours is a business that earns its money in the local markets where our facilities operate. Every expenditure should show a measurable return on investment and should compete for priority with other capital outlays.

For all the reasons above, a carefully considered outcome is likely to be different from market to market. An elegant solution in New York may be untenable in Peoria. Any solution proposed for everywhere deserves strict scrutiny and comment. The best case is close collaboration while the design is happening. Use that creativity, but temper it with parts that are obtainable in Peoria. This may add a few dollars up front but will save much grief in the end.

2. Improve Communication. In an environment of complex interlocking processes, it is easy to assume that some inexplicable technical failing was caused by unseen changes in the network connectivity environment. It shouldn’t surprise us that changes made to the same system by individuals in separate places, without coordination, lead to problems. Of all the challenges, this might be the easiest to fix.

Trouble ticketing systems are a common management tool in IT. These systems record the reported symptom, the applied solution(s), and time stamps for all steps taken and the parties who took them. Such a system could also be used as an alerting method, so that affected markets are copied on these tickets. But to be effective, every change to the networking environment must be treated the same way, even if the need for a change originates within the IT core staff. Front-line experience has demonstrated that seemingly innocuous configuration changes can cause unforeseen, revenue-impacting problems.

Ditto at the market level. Incorporate local ops managers and engineers into the ticketing system. Changes in the local environment should be subject to the same ticketed documentation. This increases the likelihood that a quick review of documented changes yields clues to what is wrong. Besides, given the differing architectures and technologies from market to market, assuming that a change will have only a positive effect is needlessly risky.

This does not mean local markets should dictate the flow of software updates or needed configuration adjustments, only that they know when changes are made so that any impact can be identified and reported. Similarly, the centrally managed IT core staff needs to know when connected assets change. No matter whose phone rings with trouble, having as much information as possible is always a plus.
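To make the record-keeping concrete, here is a minimal Python sketch of the kind of change/trouble record such a ticketing workflow implies: the symptom or planned change, the markets to be copied, and a time-stamped trail of who did what. The field names are illustrative, not any particular vendor’s schema.

```python
# Illustrative change/trouble record: summary, affected markets, and a
# time-stamped trail of every step taken. Field names are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class TicketEntry:
    timestamp: datetime
    actor: str    # who made the change or took the step
    action: str   # what was done (or what symptom was reported)


@dataclass
class ChangeTicket:
    summary: str                  # reported symptom or planned change
    affected_markets: list[str]   # markets to be copied on the ticket
    entries: list[TicketEntry] = field(default_factory=list)

    def log(self, actor: str, action: str) -> None:
        """Append a time-stamped step so every change leaves a trail."""
        self.entries.append(
            TicketEntry(datetime.now(timezone.utc), actor, action)
        )


# Example: an IT-core configuration change that copies the Chicago cluster.
ticket = ChangeTicket(
    summary="Firmware update on WAN edge router",
    affected_markets=["Chicago"],
)
ticket.log("corporate IT", "scheduled maintenance window")
ticket.log("market engineering", "confirmed no impact on program delivery")
```

Whatever commercial system is actually in use, the point is the same: every change, central or local, leaves a record that both sides can read.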

Traditionally, engineering and IT folks have similar skills, education and backgrounds. The divergence comes in the production environment. Engineers are specialists; they are intimately involved in configuring and supporting the automation tools and delivery systems that drive the broadcasts. The IT teams are responsible for moving the information around the LAN and WAN as required, as well as for supporting connectivity to the network directory services.

The point is that these groups should have the DNA required to back each other up. Cross-training can solve many of the challenges faced by both groups and tighten collaboration.

3. Create a Knowledge Base. Radio stations tend to grow ad hoc systems, driven by need. Often only the individual who created a system understands how it works. This is an invitation to disaster.

On a periodic basis, perhaps once or twice a month, some system should be chosen and “written up” with a summary description of the functionality, components, device locations and external connections. Identify obvious failure modes when possible. Over time every system would be similarly documented. As these systems change, this forms the core of a journal.

An external provider like Google Docs is probably a good choice to house this, since access is supported from almost anywhere on almost any device. Such a system would foster understanding and would be a resource for other markets with similar environments and challenges.

4. Encourage Adoption of Small-Scale Automation. Increasingly, the IP environment is being used to automate routine activities and to monitor itself. These functions involve everything from simple scripting through IP codecs to original software for the Raspberry Pi and other low-cost embedded platforms. In most cases, these new uses extract added value from the existing network in the form of new station services or fewer employee hours. Initiatives of this kind are essential for increased productivity but don’t come in a one-size-fits-all package.
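As a concrete, if hypothetical, example of this kind of small-scale automation, the short Python sketch below polls a studio-to-transmitter link device by IP and emails an alert when it stops answering. The device address, check interval and mail settings are assumptions for illustration, not any station’s actual configuration.

```python
# Minimal STL/IP-codec watchdog sketch: ping a device and e-mail an alert
# on the up-to-down transition. Addresses and mail settings are hypothetical.
import smtplib
import subprocess
import time
from email.message import EmailMessage

STL_HOST = "192.168.10.50"   # hypothetical IP codec / STL address
CHECK_INTERVAL = 60          # seconds between checks
ALERT_TO = "engineer@example.com"
ALERT_FROM = "monitor@example.com"
SMTP_HOST = "localhost"      # assumes a local mail relay is available


def device_is_up(host: str) -> bool:
    """Return True if the device answers a single ICMP ping (Linux-style flags)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


def send_alert(host: str) -> None:
    """E-mail a simple outage notice."""
    msg = EmailMessage()
    msg["Subject"] = f"STL monitor: {host} not responding"
    msg["From"] = ALERT_FROM
    msg["To"] = ALERT_TO
    msg.set_content(f"{host} failed its ping check at {time.ctime()}.")
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)


if __name__ == "__main__":
    was_up = True
    while True:
        up = device_is_up(STL_HOST)
        if was_up and not up:    # alert only when the device goes down
            send_alert(STL_HOST)
        was_up = up
        time.sleep(CHECK_INTERVAL)
```

Even a script this small deserves the write-up described in suggestion 3, which is the point of the next paragraph.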

This brings us back to the MacGyver-moves-on problem, though. Original designs and one-off solutions can be a headache for the next engineer or IT person if they are not understood.

But maybe there’s an opportunity to make lemonade from these lemons. If original designs are presented in document form (schematics, text explanations, etc.) to the corporate tech staff before prototyping and deployment, useful ideas that originate in one market become off-the-shelf solutions for other stations and markets with similar needs.

5. Create a Sandbox for Local Programming and Control Functions. Given the difficulty in meeting the security requirements for business data on the same network platform as broadcast operations, one possible solution is to firewall off the programming and operations side.

In today’s M&A environment, it is important to separate the automated production environments and tower facilities from the business-side back office environment. When and if a market is separated, partially or wholly, it is not unusual for the buyer to take over the production automation systems and program delivery assets. The seller then retrieves any and all equipment holding proprietary intellectual property and/or configurations.

Designing these from the beginning as separate entities with strong edge protection makes sense. In this way, full access can be given to the local market technical staff while the corporate IT group keeps the global access separate. The log reconciliation process between automation and traffic systems provides a logical boundary and cross check for the firewalled entities.
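As a concrete illustration of that cross check, the Python sketch below reconciles a traffic system’s scheduled spots against the automation system’s as-run log and reports anything scheduled that never aired. The CSV file names and the cart_id/hour record layout are assumptions for illustration; real systems export their own formats.

```python
# Minimal log-reconciliation sketch: compare scheduled spots (traffic) with
# the as-run log (automation) and list anything that never aired.
import csv


def load_spots(path: str) -> set[tuple[str, str]]:
    """Read (cart_id, hour) pairs from a simple CSV export (assumed format)."""
    with open(path, newline="") as f:
        return {(row["cart_id"], row["hour"]) for row in csv.DictReader(f)}


def reconcile(traffic_csv: str, asrun_csv: str) -> set[tuple[str, str]]:
    """Return the spots that were scheduled but never logged as aired."""
    scheduled = load_spots(traffic_csv)
    aired = load_spots(asrun_csv)
    return scheduled - aired


if __name__ == "__main__":
    missed = reconcile("traffic_log.csv", "asrun_log.csv")
    for cart_id, hour in sorted(missed):
        print(f"Missed spot {cart_id} scheduled for hour {hour}")
```

Because the traffic export lives on the business side of the firewall and the as-run log on the operations side, the reconciliation itself is a natural, narrow interface between the two environments.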

6. Give Local Operations and Engineering Staffs Some Administrative Control Over the Network Assets They Rely on — at least to the level that allows maintenance of workflow software and directory tree access specific to the production and program delivery environment. Local technical staffs could then more easily maintain software and troubleshoot problems, often working directly with a vendor. In cases where outside connectivity providers like local telephone networks or ISPs are part of the program delivery system, local staffs need quick access to edge connectivity equipment.

This is acknowledged to be a break from traditional IT architecture and management. Levels of local autonomy may need to be staged based on local staff abilities. And the desire for autonomy might be an incentive for local staffers to pursue IT industry certifications, a worthwhile goal.

7. Acknowledge That Local Staff Will Make Use of Alternative Access Methods. Free or modestly priced remote access applications drive this process. Some are products of companies with a history of security.

LogMeIn, for example, comes from a company with a long history in the remote application access business. Others like TeamViewer and Dualmon have different backgrounds. VNC is still around. There are and will be others. They’ll have differing feature sets and will be adopted as needs arise. In some cases, access to station assets will be granted to clients or to talent with home studio origination capabilities.

Work-from-home options broaden the pool of possible employees, a retention and cost-reduction strategy needed in today’s tight labor market. But all these connectivity schemes bypass the customary firewall/VPN corporate login usually granted to remote employees, vendors and customers. This is another reason to sequester programming and operations from other network functions. (See “sandbox,” above.)

8. Grant Configuration and Administration Rights Commensurate With Evidence of Competence. Nearly every well-known vendor of networking hardware and software offers certification. Cisco and Microsoft certifications are both industry-standard touchstones. There are others, but all the meaningful ones require study and proctored testing.

An employee who passes these tests has demonstrated initiative and interest. Broadcast groups should encourage and support continuing technical education through reimbursements, bonuses or both. Individuals so credentialed should be given preference in hiring decisions and compensation.

While this would qualify the hands added to the access list, it does not address the importance of keeping up with global changes made outside the local market and, more importantly, with strategic global changes that are being considered.

Keeping up would broaden the audience required to participate in these changes, and that must be considered when deciding how inclusive decisions should be. If every change had to be unanimously approved, progress would halt. Perhaps a small power-users group of tech-savvy market staffers, serving on a rotating basis, would be a solution.

9. Replace Corporate-Provided Local Market Services With Cloud-Based and/or Barter Providers Where Possible. The best example is found in on-air telephone systems. The requirement that a VoIP phone system be connected to the public switched telephone network has led some groups to combine office and on-air back-end VoIP services. This, in turn, has moved the boundary of security concerns into the operations and programming domain. There are dozens of highly secure VoIP connectivity providers, and some might even consider a barter arrangement for connections.

This same solution might be used for other services, where provisioning and routing through the corporate network presents a security concern or where revenues are insufficient to support a local IT staffer.

Radio will continue to demand more efficiencies. As an example, Chicago, once a market that supported $100-million-plus valuations, has seen the recent sale of an FM with full market coverage for nearly $20 million. This underscores the reality that a major part of the capital value of radio groups is no longer the broadcast licenses or other tangible assets; it’s the productivity of their employees. Finding new revenue, efficiencies and economies at both the group and local market scale is essential as competition for advertising dollars gets ever tougher.

Making this happen requires that everyone pull on the oars in the same direction. For the sake of the industry we all love, this is an imperative we must all embrace.

