Storage Area Networks
Jan 1, 2002 12:00 PM, By Kevin McNamara, CNE
In the life of any network environment, it is not uncommon for hardware to be replaced as technology improves. The item most likely to be replaced, whether due to improved performance or simply an inadvertent failure, is the workstation. The concept of the Storage Area Network, or SAN, was born out of the need to protect the most essential byproduct of a workstation or network server: the data. SANs serve one purpose: to aggregate data storage resources into a single repository. Centralizing data is more reliable and allows a higher degree of scalability than other distributed network models. The cost and time needed to maintain and manage large numbers of distinct storage devices are significant. As the number of servers across the network increases and companies rely more heavily on data-intensive applications, traditional storage models fall short because access to a peripheral device (such as a hard drive) is slow and lacks the flexibility afforded by SANs.
SAN vs. NAS
Though probably aware of SANs, many readers may be more familiar with Network Attached Storage (NAS) devices. NAS devices attach to existing network backbones such as Ethernet, providing stand-alone storage that can be used for data backup or to increase storage capacity. The primary technical difference between NAS and SAN is at the communication-protocol level. NAS communicates over the network using file-level protocols such as NFS or CIFS, while a SAN primarily uses the Fibre Channel Protocol (FCP), which carries SCSI commands at the block level.
A Storage Area Network is a separate network that is isolated from the client and server connections.
NAS devices transfer data between the storage device and the server in the form of files. Each NAS unit maintains its own file system and handles its own user authentication independently. While NAS is a relatively inexpensive and easy way to add storage to an existing network, the performance of these devices is limited by the speed of, and the amount of traffic on, the network segment to which the device is attached. NAS units are therefore not advisable in intensive data-processing environments.
SANs are typically connected over a dedicated Fibre Channel backbone. Data is delivered in device blocks, much as it is between a server and a directly attached external RAID array. SANs do not carry the thin-server file-system overhead of NAS devices, eliminating that source of additional latency.
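The file-level versus block-level distinction can be illustrated with a short Python sketch. The scratch file here is a stand-in for a storage volume, and the 512-byte block size is the classic SCSI convention — this is an illustration of the access models, not real NAS or SAN I/O:

```python
import os
import tempfile

BLOCK_SIZE = 512  # classic SCSI block size

# Create a scratch file standing in for a three-block storage volume.
fd, path = tempfile.mkstemp()
os.write(fd, b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE + b"C" * BLOCK_SIZE)
os.close(fd)

# File-level access (NAS-style): the client names a file, and the
# device's own file system decides which blocks to fetch.
with open(path, "rb") as f:
    whole_file = f.read()

# Block-level access (SAN-style): the initiator addresses raw blocks
# by number; any file-system logic lives on the server, not the storage.
def read_block(device_path, block_number):
    with open(device_path, "rb") as dev:
        dev.seek(block_number * BLOCK_SIZE)
        return dev.read(BLOCK_SIZE)

second_block = read_block(path, 1)  # bytes 512..1023 -> b"B" * 512

os.unlink(path)
```

The SAN model skips the per-request file-system layer on the storage side entirely, which is one reason block delivery carries less latency than a NAS share.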
Another distinction between NAS and SANs is their placement on the network. NAS devices, like workstations and other common network hardware, attach to the network in front of the servers. With a SAN, the servers attach to the storage devices over a second network connection behind, or separate from, the primary network. This isolates storage traffic from the primary network, where the typical I/O processes would otherwise degrade performance. SANs have their own connection scheme, typically, but not limited to, a ring (loop) topology, which permits the interconnection of multiple storage devices.
SANs permit the attachment of numerous storage devices, limited only by the number of available hub and switch ports. SANs also permit heterogeneous connectivity, joining storage devices to servers operating on different platforms. Some vendors of SAN hardware provide data-conversion software that permits data files from one platform to be used with systems operating on another. Software of this kind can only be used with disk arrays that emulate both mainframe volumes and open-system Logical Unit Numbers (LUNs).
A Storage Area Network is a specialized high-speed network that provides direct connections between storage devices and servers. In a SAN, the device that provides the storage functions is called the target, and the device that originates the requests, usually the network server, is called the initiator.
Network attached storage shares the same network as the application clients and servers.
The SAN topology provides three features:
- Storage is not directly connected to network clients.
- Storage is not directly connected to the network servers.
- Storage devices are interconnected.
The SAN is a real network and is expected to evolve over time through a variety of connectivity options. While the de facto connectivity method for SANs is Fibre Channel, other options are beginning to appear, including iFCP, the Internet Fibre Channel Protocol, which carries Fibre Channel traffic over TCP/IP networks, and Internet SCSI, also known as iSCSI. The main goal of a SAN is to integrate traditional storage subsystems, such as RAID arrays, with data-archival systems that provide short- and long-term backup. Fibre Channel networks move data at speeds up to 2Gb/s, depending on the hardware used, and Fibre Channel is currently the dominant protocol used with SANs.
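The idea behind iSCSI — carrying block-level read and write commands over an ordinary TCP/IP network rather than a Fibre Channel link — can be sketched in a few lines of Python. The request layout below (LUN, logical block address, block count) is a made-up toy format chosen for clarity; it is not the actual iSCSI PDU structure:

```python
import socket
import struct
import threading

BLOCK_SIZE = 512
STORAGE = bytearray(BLOCK_SIZE * 8)  # an 8-block toy "LUN"
STORAGE[BLOCK_SIZE:BLOCK_SIZE + 4] = b"DATA"

# Toy request: LUN (1 byte), logical block address (4 bytes), count (2 bytes).
REQUEST = struct.Struct(">BIH")

def serve_target(server_sock):
    """The storage side: decode one block-read request, return raw blocks."""
    conn, _ = server_sock.accept()
    lun, lba, count = REQUEST.unpack(conn.recv(REQUEST.size))
    start = lba * BLOCK_SIZE
    conn.sendall(bytes(STORAGE[start:start + count * BLOCK_SIZE]))
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=serve_target, args=(server,)).start()

# The initiator side: the server asks for block 1 of LUN 0 by address,
# exactly as it would on a local disk; only the transport is TCP/IP.
client = socket.create_connection(server.getsockname())
client.sendall(REQUEST.pack(0, 1, 1))
data = b""
while len(data) < BLOCK_SIZE:
    data += client.recv(BLOCK_SIZE - len(data))
client.close()
```

The point of the sketch is that the initiator never names a file; it names a block address, which is what lets IP-based transports substitute for a Fibre Channel link without changing the storage model.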
SAN interconnection methods
SAN networks are commonly built upon the Fibre Channel Arbitrated Loop (FC-AL). An arbitrated loop can support up to 126 devices, but in practice it is limited by the number of ports available on a hub. Because all devices on a loop share its bandwidth and must arbitrate for access, performance can degrade when too many devices contend for the loop.
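The arbitration step can be modeled simply: each port on the loop has an Arbitrated Loop Physical Address (AL_PA), and when several ports want the loop at once, the port with the numerically lowest AL_PA (the highest priority) wins while the rest wait and re-arbitrate. A minimal Python sketch of that rule:

```python
def arbitrate(contending_al_pas):
    """Return the AL_PA that wins the loop: lowest value = highest priority."""
    if not contending_al_pas:
        return None
    return min(contending_al_pas)

# Three ports contend for the loop at the same time; 0x01 outranks the rest.
winner = arbitrate([0xE8, 0x01, 0x73])
```

The losers must wait and try again, which is exactly why a heavily loaded loop slows down: only one winning pair of ports uses the loop at a time, no matter how many devices are attached.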
Fibre Channel switches, also called fabric switches, are becoming popular alternatives to the classic hub. A fabric switch permits multiple simultaneous point-to-point connections between its ports, preventing the data bottlenecks of a shared loop.
SAN distance limitations
Devices attached to a SAN are not necessarily limited to the same room as the primary network servers. In some cases, SANs can operate over a private Wide Area Network (WAN), with storage devices located across the country or around the globe. The distance limitation on a SAN is determined by the design of the Fibre Channel links: hubs can be purchased in short-wave (up to 500m) or long-wave (up to 10km) versions.
While SANs are still a little pricey for the typical broadcast environment, new applications, particularly those requiring safe, secure and massive amounts of storage, may soon be the killer app that we’ve been waiting for.
Kevin McNamara, BE Radio's consultant on computer technology, is president of Applied Wireless Inc., New Market, MD.
All of the Networks articles have been approved by the SBE Certification Committee as suitable study material that may assist your preparation for the SBE Certified Broadcast Networking Technologist exam. Contact the SBE at (317) 846-9000 or go to www.sbe.org for more information on SBE Certification.