
Servers: Love ’em or Hate ’em

They are taking on a whole new role in our broadcast infrastructure

Servers have vexed me for many years. I’ll readily admit that some of that has been my own limited understanding of their functions, configurations and operating systems, but a big part of my trepidation has been based on unpleasant experiences.

One thing I can tell you for sure is that file servers work great … until they don’t! And when they quit working or start acting up, ugly things tend to happen downstream. Things slow down or grind to a halt. If those things downstream of the failing or dead server constitute your audio playout system or traffic/billing system, well, you’re in trouble.

I remember that back in the early days of my company’s use of server-based audio playout systems, we used a pair of mirrored HP servers. They were huge, taking up a third of an equipment rack with banks of SCSI drives that sounded like some kind of miniature buzz saws. 

The TOC has become the “server room” in many broadcast facilities.

Once in a while we would have to reboot or power-cycle those servers for some reason or other, and when we did, we always held our breath. Would everything come back up right? Sometimes it did, but sometimes it didn’t. And when it didn’t, we would spend hours on the phone with Novell support trying to figure things out. (Remember Novell?)

The move to Windows-based servers was a big improvement, as was the move to SATA drives with RAID. That solved a lot of problems, but we found after a while that not all 3.5-inch SATA drives were created equal, and if a server had been in service for a few years when a drive failed, it was sometimes impossible to find a replacement that would fit the slot and work.

Standardization took care of a lot of that as time went on, but I made it a habit to buy a few spare drives of the exact type whenever I ordered or replaced a server, so we would have a compatible replacement or two on hand when a drive failed. I’m sure I wasted some money doing that, but better to be prepared than caught with a dead drive and no easy path to repair and recovery.

Thankfully servers have gotten better and better, more and more reliable. And drives have gotten better, too. I can’t remember the last time we had a server drive fail. The issues we’ve had of late have been CPU and motherboard related. I wonder if the thermal compound drying up on the CPU heatsink isn’t responsible for a lot of this — some of it sure turns to powder after a while.

Based on our experience, we set a five-year life limit on servers, replacing them at the end of that fifth year. Seems like we can count on stuff starting to go wrong after that.

Now servers are taking on a whole new role in the broadcast infrastructure. Containerized software is all the rage, and for good reason. Do we really need a bunch of hardware to perform specific tasks? Sure, we need hardware in some cases for I/O — it’s hard to make a virtual microphone (but it probably won’t be much of a surprise when someone figures out how to do that someday). 

For decades we’ve been running our playout systems on computers, and with our infrastructures increasingly moving into the AoIP world, there isn’t a lot of real-world I/O taking place — music and spots are downloaded, and much of our content is produced in the virtual world.

So why not run our consoles, phone systems and all that on servers, whether on-premise or in the cloud?

I can think of a few reasons, particularly where the cloud environment is concerned, but the truth is that we can get by with far less dedicated hardware these days, and that trend will only continue.

Probably about the time you read this, I will be experiencing a containerized on-air phone system for the first time. I have several in the budget for this year, and the first will go into a new studio build in upstate New York. The phone lines will be a SIP handoff, and the phone system will be a containerized app running on an on-premise server. I’m looking forward to and dreading that at the same time.
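For the curious, here is a rough sketch of the kind of sanity check I expect to lean on once that box is racked: a few lines of Python that fire a bare-bones SIP OPTIONS request at the phone system and report whether anything answers. The host name, ports and addresses here are placeholders of my own invention, not details of any particular vendor’s product, so treat this as an illustration rather than a recipe.

import socket
import uuid

# Placeholders -- substitute your own server name and ports.
SIP_HOST = "phones.example.internal"   # hypothetical on-premise server running the containerized phone system
SIP_PORT = 5060                        # common SIP signaling port; your SIP handoff may differ
LOCAL_PORT = 5070                      # arbitrary local UDP port on which to catch the reply

def sip_options_ping(host, port, timeout=3.0):
    """Send one bare-bones SIP OPTIONS request; return True if anything answers."""
    local_ip = socket.gethostbyname(socket.gethostname())
    branch = "z9hG4bK" + uuid.uuid4().hex[:16]          # RFC 3261 branch magic cookie
    request = (
        f"OPTIONS sip:{host} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP {local_ip}:{LOCAL_PORT};branch={branch}\r\n"
        f"From: <sip:healthcheck@{local_ip}>;tag={uuid.uuid4().hex[:8]}\r\n"
        f"To: <sip:{host}>\r\n"
        f"Call-ID: {uuid.uuid4().hex}@{local_ip}\r\n"
        "CSeq: 1 OPTIONS\r\n"
        "Max-Forwards: 70\r\n"
        "Content-Length: 0\r\n\r\n"
    )
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", LOCAL_PORT))
        sock.settimeout(timeout)
        sock.sendto(request.encode("ascii"), (host, port))
        try:
            data, _ = sock.recvfrom(4096)
        except socket.timeout:
            return False
    # Any SIP response at all (200 OK, 405, whatever) means the stack is up and listening.
    return data.startswith(b"SIP/2.0")

if __name__ == "__main__":
    if sip_options_ping(SIP_HOST, SIP_PORT):
        print("SIP stack responded; the container is alive.")
    else:
        print("No SIP response; check the container, the SIP trunk and the firewall.")

A check like this doesn’t prove a call will actually route, of course, but it tells you in a few seconds whether the SIP stack inside that container is awake before you start chasing problems elsewhere.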

But even an old guy like me has to come to terms with the fact that this is where things are headed. The sooner we can accept that, the easier it will be to make the transition. And as it was with AoIP, SNMP and so many other technologies that we have come to embrace over time, we’ll likely wonder how we got along without it all these years.

In the current issue of RWEE, Dominic Giambo, engineering manager at Wheatstone, tells us all about running things on a server. I think it will be an eye opener for many.

And Amanda Hopp and I wrap up our project account of the KLDC move, the ultimate DIY, detailing the nuts and bolts of the construction and actual move. I hope you enjoy it and maybe even learn something. That project was certainly a learning experience for me.

Maybe the most important thing we can learn in this day and age is to keep an open mind. Things are changing at a breakneck pace, and to keep up we’ve got to be willing to accept and roll with the changes. 
