

Upgrading a Critical File Server

Don’t try to move a busy server with 2 TB of data in one Saturday afternoon

Back in 1998 when I started with my present employer, our audio automation used a Novell file server. Though I was unfamiliar with NetWare at the time, I wasn’t worried. After all, was I not an experienced system-level programmer? I had written device drivers in raw assembler!

Heh. Looking back on it now, it’s a miracle that I didn’t knock everything off air and set the ceiling on fire. But in the past few years alone, armed with experience (and a couple of good assistants named Todd and Jack), we’ve replaced our local FTP and Web servers and the file store for our audio automation, and we’ve assisted with the upgrade of several servers in other markets.

If you’ve never done a complete server replacement, I’ve got some tips and guidelines that will help you. I’ll use a webserver as my example, but these tips can be adapted to almost any server upgrade.


Some of the Crawford servers, including the new, upgraded web server. The old web server was kept as a backup.

You’re doubtless upgrading to a newer, better computer. In our case, we moved from a 32-bit to a 64-bit Dell PowerEdge with faster processors and significantly more memory. But don’t fly blindly; confirm that your application software will work nicely with the chosen hardware. Ask your vendor or use Google.

For Windows-based servers, we usually order the new PC with the OS and all drivers already installed. If it’s going to run Linux or some other OS, we generally order with no OS and install ourselves. It’s your choice, but keep compatibility in mind. If you buy from a big-name vendor like Dell or HP, they can recommend hardware that is guaranteed to work with your OS.

Once the new server arrives, get it running and test it. This may require some thought; once again, talk to your software vendor and browse online support. I’m doing our corporate webserver in this article, but each application will have important differences. For example, you might be told not to put the old and new servers on the same network at the same time, even with different IP addresses. The system could become confused and both databases could be seriously corrupted.

Even if the results wouldn’t be dire, there are practical issues. A default webserver wants to listen on port 80 for HTTP and port 443 for HTTPS. You obviously can’t have two servers on the same IP address listening to the same port. You can either use a non-standard port on the new server, or if you have one, a spare public IP address. We also needed remote administration, given that I’m in Birmingham and the server is in Denver, so we forwarded Secure Shell (SSH) and a few other ports.
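For remote administration through a site’s gateway, an SSH port forward is one common approach. A sketch, with hypothetical host names and a hypothetical temporary HTTP port on the new server:

```shell
# Tunnel to the new server through the Denver gateway (hypothetical
# names). Local port 8080 now reaches the new server's web port, so
# it can be tested without touching the old server's ports 80/443.
ssh -L 8080:newserver.internal:80 admin@gateway.example.com
```

With the tunnel up, browsing to http://localhost:8080 on the local machine shows the new server’s site.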


No matter how many times I’ve done this, I’m always dismayed at how long the copy can take on a large store of data. In some cases, it can literally take days.

The biggest complication with any critical server, of course, is that the old server must continue to operate normally while you build the new. This probably means that people will be making changes while you copy — uploading new data, changing information, you name it. Unless you have the luxury of taking the old server offline while you build the new one, this takes thought and careful advance planning.

Conceptually, moving that data is similar to a standard backup and restore operation. Hopefully, you’ve discovered the joys of differential and/or incremental backups (Google it if you’re not sure). Confirm that your application software is compatible with this, but basically, it’s a two-step process:

1. Make the initial copy while the old server is running. This moves the bulk of the data. The copy will contain errors and stale information, but at least you’ve gotten the time-consuming part done.

2. As soon as possible after that initial, complete copy, take down the old server and copy only the new data. This freshens the copy, grabbing the latest changes.
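The two-step process can be sketched with rsync. This example uses throwaway directories so it can be run safely; in real use, the source would be the live data store and the target would be the new server (often reached over SSH):

```shell
# Stand-in directories for the old server's data and the new server.
SRC=$(mktemp -d); DST=$(mktemp -d)
echo "old data" > "$SRC/a.txt"

# Step 1: bulk copy while the old server is still running.
rsync -a "$SRC/" "$DST/"

# Meanwhile, users keep working: a file is added, another is removed.
echo "new data" > "$SRC/b.txt"
rm "$SRC/a.txt"

# Step 2: with the old server stopped, a quick fix-up pass copies the
# new data and (via --delete) removes files deleted from the source.
rsync -a --delete "$SRC/" "$DST/"
```

The second pass is fast because rsync only transfers what changed; the trailing slashes matter (they mean “the contents of this directory”).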

There are many specialized programs that support this. In Linux, the standard is called “rsync” and it does the trick nicely. There are versions of rsync for other operating systems (including Windows), as well as commercial packages. Ask your vendor what they recommend; they may even offer a free download that does guaranteed backup and restore. You may find that one part of the data can be done with the two-step method, while another requires special translation (especially if you’re moving from old, 32-bit hardware to 64-bit). Ask.

Fig. 1: You can dramatically reduce downtime if you make a “bulk” copy with the server running, then do a second “fixup” with it stopped.

Fig. 1 shows an example with a Zimbra mail server. The server is running for the first rsync; it’s shut down for the second “fixup-cleanup” copy. Once the second copy is done, we restart the mail server. This minimizes downtime while ensuring a good copy.

Practice with the copy/mirroring software before you use it! Speaking from hard-won experience, something like rsync can annihilate a bunch of data if you use the wrong options (especially “--delete,” shown in Fig. 1). The first few times you use it, you may find that the copy landed one folder above or below where you actually wanted it. Practice!
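A safe way to rehearse is rsync’s dry-run flag, which prints what would happen without changing anything. A runnable sketch with throwaway directories:

```shell
# Stand-in directories; real paths would point at your actual data.
SRC=$(mktemp -d); DST=$(mktemp -d)
echo "only on target" > "$DST/stale.txt"

# -n (--dry-run) makes rsync report its plan, including what --delete
# WOULD remove, without touching a single file.
rsync -avn --delete "$SRC/" "$DST/"
```

Only when the dry-run report looks right should the same command be run without `-n`.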

Our websites use WordPress, which itself uses a MariaDB database to index the content. We “two-stepped” most of the data, but we used the provided MariaDB tools to back up and restore the database separately, as recommended by the WordPress folks.
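For the database itself, the usual MariaDB approach is a dump on the old server and a restore on the new one. A sketch, with hypothetical database and user names (the database must already exist on the new server):

```shell
# On the old server: dump the WordPress database to a file.
# --single-transaction takes a consistent snapshot of InnoDB tables.
mysqldump -u root -p --single-transaction wordpress > wordpress.sql

# Copy wordpress.sql to the new server, then load it there:
mysql -u root -p wordpress < wordpress.sql
```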

The physical medium is your choice. A smaller data store may fit on a USB drive. If you have a very fast network that won’t mind the load, transfer the files that way (this is how we did our webserver, using rsync over a fast LAN). For terabytes of data, you might use a fast external hard drive, or temporarily mount a spare hard drive in the old server: copy the data, then move the drive over to the new machine and dump it there.

There are other, more sophisticated methods that I won’t get into here in detail. For example, if you use file mirroring, maybe you could take the mirror offline and do the initial copy from that. We’ve done this when replacing our RCS NexGen file servers.

Whatever method you choose, warn the staff to keep a local copy of any recent work and plan for a “cleanup” operation where missed files are moved to the new server. Something will always turn up missing!


This is going to vary from one system to the next. I’ll finish with our webserver as the example, but use this to spur your thinking. Your needs will probably be different and you may have to get creative.

Fig. 2: Moving to the new server, one website at a time: one site has been moved, but another is still on the old server.

First, know your system. Your research should have turned up lots of info on the best way to move your server. The Apache webserver has many nice features; one, called “VirtualHost,” allows you (among other things) to host more than one website on the same public IP address. Better still, you can forward requests to different servers with the (oddly named) Reverse Proxy in a VirtualHost block. See Fig. 2, which is a portion of the Apache configuration on the old server.
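As a sketch (site name and address are hypothetical, not our actual configuration), a VirtualHost on the old server that proxies one site to the new machine might look like this; it assumes Apache’s mod_proxy and mod_proxy_http modules are enabled:

```
<VirtualHost *:80>
    # Hypothetical site name and internal LAN address.
    ServerName www.example.com

    # Hand this site's requests to the new server on the internal
    # network; other sites' VirtualHost blocks still serve locally.
    ProxyPreserveHost On
    ProxyPass        "/" "http://192.168.1.50/"
    ProxyPassReverse "/" "http://192.168.1.50/"
</VirtualHost>
```

ProxyPassReverse rewrites redirect headers coming back from the new server so browsers don’t see the internal address.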

OK: the old server was online. Whichever of our sites people went to, the VirtualHost mechanism checked the incoming requests and routed them to the right place. Once I had the bugs worked out of the proxy forwarding, we built one site at a time on the new server. As each site became ready, we forwarded that site’s requests over to the new PC on our internal network.

Once all sites were ready on the new server, it only took about 15 minutes for our local folks in Denver to change the IP address and network connections. The new server was now online. Success!

I’ve (obviously) left some things out — for example, you may need to “spoof” some DNS while you’re moving a public server on the Internet. Otherwise, people will be bouncing between the new server and the old, depending on how site names are resolved. (WordPress, which seems to store URLs with the full site address, is especially bad for this.) We used a program called “dnsmasq,” which lets you set the IP address for selected sites to a different value while you’re working. That way, both the old and new servers will think that they’re the live site, and they’ll be happy.
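In dnsmasq, this takes one line. A sketch, with a hypothetical site name and LAN address:

```
# /etc/dnsmasq.conf fragment: answer local DNS queries for this site
# with the new server's LAN address, no matter what the public DNS
# records still say. Remove the line once the cutover is complete.
address=/www.example.com/192.168.1.50
```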

If nothing else, learn the two-step copy method: the big bulk copy with the old server running, then stop the old server and do a final “finishing” copy to correct the errors. But confirm that this will work with your application software.

Finally, knowledge is power … as long as it’s good knowledge. There are plenty of “how-to” walkthroughs online for common servers, but let me give you this warning from experience: they invariably leave something out or fail to explain something adequately. It’s a good idea to read several of these walkthroughs, then test on your new server. Make up your own checklist based on that experience.

What’s the old saying? Prior planning prevents purely poor performance? The same is true here. Don’t think that you’re going to move an extremely busy server with two terabytes of data in a single Saturday afternoon. It’s going to take lots of advance reading and lots of planning. Good luck!

Stephen Poole is chief engineer at Crawford Broadcasting Company in Birmingham, Ala.