
Connecting storage networks

Once you decide to use a SAN, you have to decide how your network will connect to the servers. These tips will help.


Storage area networks, or SANs, come in many flavors these days, from those simply using a dedicated Gigabit Ethernet switch and private subnet, to Fibre Channel solutions, to higher-end systems like IBM's Shark. In this tip, we'll offer some things to consider when attaching the servers that make up the SAN.

First, a need for raw bandwidth may be the obvious reason for wanting a SAN, but there are other, more subtle reasons you might want to use one. For example, doing in-band backups over the same network interface as your user traffic may cause scheduling problems even when the aggregate bandwidth utilization is less than the capacity of the link. Remember that a second is a long time in the computer world: if your statistics say you're sending 20 Mbps of backup traffic, it may actually all be sent in the first ¼ of a second, with the line quiet for the remaining ¾. In other words, data is bursty, and that burstiness can affect any near-real-time applications on that server.
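The burst arithmetic above is easy to sketch. This is a minimal illustration using the tip's hypothetical numbers (20 Mbps average, all of it sent in the first quarter of a second); the function name and figures are ours, not from any monitoring tool.

```python
def peak_rate_mbps(avg_mbps: float, window_s: float, burst_fraction: float) -> float:
    """Rate during the burst, assuming all of the window's traffic is sent
    in burst_fraction of the window."""
    megabits = avg_mbps * window_s            # total megabits in the window
    burst_time = window_s * burst_fraction    # time the line is actually busy
    return megabits / burst_time

# 20 Mbps average over 1 second, all sent in the first 1/4 second:
print(peak_rate_mbps(20, 1.0, 0.25))  # 80.0 -- the link is 4x busier during the burst
```

So a link that looks only 2% utilized on a 1 Gbps interface can still momentarily saturate a Fast Ethernet uplink, which is why averaged statistics understate contention with latency-sensitive traffic.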

Second, you may see a performance increase by using multiple system buses on the server, which obviously requires more than one network interface card. For instance, your server may have two or three 10/100/1000 copper interfaces on different buses. Due to other constraints, you may get 200 Mbps throughput using one Gigabit interface and 300 Mbps using two interfaces, even though each card is actually capable of 1000 Mbps. (Obviously, your mileage may vary, depending on what type of hardware you have.) Alternatively, your server hardware may be much faster than your data network if the switches are limited to Fast Ethernet. In that case, you could put both 100 Mbps interfaces on the same bus and hope to get 60-80 Mbps aggregate throughput. (Fast Ethernet isn't actually capable of a full 100 Mbps of throughput.)
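The point about diminishing returns can be made concrete. This is a back-of-the-envelope sketch using the tip's example figures (200 Mbps on one NIC, 300 Mbps on two); the function and numbers are illustrative, not measurements from any particular hardware.

```python
def scaling_efficiency(single_nic_mbps: float, n_nics: int, measured_mbps: float) -> float:
    """Fraction of ideal linear scaling actually achieved.
    1.0 means n NICs delivered n times the single-NIC throughput."""
    ideal = single_nic_mbps * n_nics
    return measured_mbps / ideal

# One GigE NIC gives 200 Mbps; adding a second yields 300 Mbps total:
print(scaling_efficiency(200, 2, 300))  # 0.75 -- only 75% of ideal doubling
```

Numbers like these suggest the bottleneck is the server's bus or CPU rather than the NICs, which is the argument for spreading the cards across separate buses.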

A third consideration is distance. Some technologies are capable of longer runs, allowing you to put more devices on the SAN if, for instance, you have devices scattered around several buildings in a campus. Fibre Channel was limited to about 6 miles, while Gigabit Ethernet varies from a few hundred meters to 20 or so miles. Notably, Fibre Channel over IP (FCIP) and the newer iSCSI can both help address the distance issue.

Keeping in mind the number of devices, the way you connect your servers may well depend on the proportion of backup devices to servers; that is, how centralized or decentralized your SAN is. On a high-speed campus network, you may save a lot of money by centralizing a couple of backup servers, but this may force you to do backups in-band instead of using dedicated switches and private subnets. In that arrangement, the servers may use separate NICs, but both NICs may plug into different VLANs on the same switch.

Tom Lancaster, CCIE #8829, CNX #1105, is a consultant with 15 years of experience in the networking industry and co-author of several books on networking, most recently CCSP: Secure PIX and Secure VPN Study Guide, published by Sybex.

