Podcast

Making the business case for WAN optimization: A SearchCIO.com podcast

Wide area network (WAN) optimization significantly increases data throughput and slashes network response times, enabling system consolidation that would otherwise be impractical and enhancing employee productivity. It can also pay for itself in less than a year in some cases. This podcast offers strategic advice for CIOs who are evaluating the business case for WAN optimization.

SPEAKER: Eric Siegel, senior analyst, Burton Group

BIOGRAPHY: Siegel is a senior analyst at Burton Group focusing on Web and network performance optimization, service-level agreements and network measurement and management. He has more than 30 years' experience in design and evaluation of computer networks. Siegel is known nationally as an authority on Web performance measurement and optimization. He is also the author of Designing Quality of Service Solutions for the Enterprise and Practical Service Level Management: Delivering High-Quality Web-Based Services.


Read the full transcript from this podcast below:

Karen Guglielmo: Hello. My name is Karen Guglielmo, senior editor for SearchCIO.com. I'd like to welcome you to today's expert podcast on making the business case for WAN optimization. I'm joined today by Eric Siegel. Eric is a senior analyst with Burton Group focusing on Web and network performance optimization, SLAs, and network measurement and management. He has more than 30 years' experience in design and evaluation of computer networks. Eric is known nationally as an authority on Web performance measurement and optimization. He is also the author of Designing Quality of Service Solutions for the Enterprise and Practical Service Level Management: Delivering High-Quality Web-Based Services. Today, Eric is with us to offer strategic advice for CIOs who are evaluating the business case for WAN optimization.

Eric Siegel: Okay. Let's start by talking about what WAN optimization is. Then we can spend most of our time on the business case: how you would justify WAN optimization, and what you should be looking for if you're evaluating vendors.

First, WAN optimizers are also called application accelerators. They are a combination of related technologies that increase throughput and slash network response times. It's not magic, but to the users of these things it really does look like magic. Sometimes it looks like magic to the people paying the WAN bandwidth bills, too.

Briefly, it's a pair of appliances or software packages at both ends of a communication path that do a number of related things. First, they have advanced proprietary compression technologies, sometimes called data reduction.

Basically, what's happening is that the appliance or software looks for repeated patterns within a file and between files, gives those patterns index numbers, and keeps synchronized directories at both ends of the path. It can then transmit just the index number instead of having to transmit megabytes of data.

So, if you're sending a file back and forth as it gets edited, after just one or two passes the only things going back and forth are a couple of index numbers and the changes, instead of that huge multi-megabyte or even gigabyte file. Even within one file it can make massive data reductions. Of course, that also speeds things up as well as decreasing your bandwidth charges.
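To make the data-reduction idea concrete, here is a minimal sketch in Python. It is not any vendor's actual algorithm; the fixed 4 KB chunk size, the SHA-256 hashing and the message format are assumptions for illustration (real appliances typically use smarter, variable-size chunking).

import hashlib

CHUNK_SIZE = 4096  # assumption: fixed-size chunks for simplicity

def sender_encode(data: bytes, shared_index: dict) -> list:
    """Replace chunks the far end has already seen with short index keys."""
    messages = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        key = hashlib.sha256(chunk).hexdigest()
        if key in shared_index:
            messages.append(("ref", key))          # a few bytes instead of 4 KB
        else:
            shared_index[key] = chunk              # both ends record the new chunk
            messages.append(("data", key, chunk))  # sent in full only the first time
    return messages

def receiver_decode(messages: list, shared_index: dict) -> bytes:
    """Rebuild the original data from references and new chunks."""
    out = bytearray()
    for msg in messages:
        if msg[0] == "ref":
            out += shared_index[msg[1]]
        else:
            _, key, chunk = msg
            shared_index[key] = chunk
            out += chunk
    return bytes(out)

# The second pass of a lightly edited file travels almost entirely as tiny "ref" messages:
sender_side, receiver_side = {}, {}
original = b"engineering drawing " * 10_000
assert receiver_decode(sender_encode(original, sender_side), receiver_side) == original
edited = original + b" minor revision"
msgs = sender_encode(edited, sender_side)
refs = sum(1 for m in msgs if m[0] == "ref")
print(f"{refs} of {len(msgs)} chunks travel as index numbers only")  # 48 of 49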

Another thing these devices do is improve the transmission controls so that you really can fill a high-bandwidth, high-latency path. For example, if you're sending an already compressed movie file over VSAT or between two continents, in most cases you'll find that the default TCP/IP flow control simply doesn't let you use the full channel.

The problem is that you can only send a certain amount of data before you have to pause and wait for an acknowledgment, and if there's any error at all, everything can come to a dead stop for a while as TCP figures out what happened. With these advanced flow-control mechanisms that problem is usually eliminated, and despite huge bandwidth, high latency, and even errors, you really can use the full bandwidth that's available.
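A quick bandwidth-delay calculation shows the problem; the link numbers below are illustrative assumptions, not measurements.

# How much data must be "in flight" to fill a high-bandwidth, high-latency path
bandwidth_bps = 45_000_000   # assumed 45 Mbit/s intercontinental link
rtt_s = 0.200                # assumed 200 ms round-trip time

bdp_bytes = bandwidth_bps / 8 * rtt_s
print(f"Bytes in flight needed to fill the pipe: {bdp_bytes:,.0f}")  # ~1,125,000

# A classic 64 KB TCP window caps throughput at window / RTT, regardless of link speed:
window_bytes = 65_535
ceiling_bps = window_bytes * 8 / rtt_s
print(f"Ceiling with a 64 KB window: {ceiling_bps / 1e6:.1f} Mbit/s")  # ~2.6 of the 45 available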

The third thing these devices do is run special software that can, to a very large degree, compensate for communications designs that simply aren't WAN-friendly. There are really two classes of this.

The first is remote file access using an old protocol such as the Microsoft Windows CIFS protocol, which used to be called SMB and was originally designed for DOS. That's what's in play when you open a remote file share, either to copy a file or to open it from within a program.

Those mechanisms and those protocols were never designed for use on the wide area network. They were designed for use on the LAN. When you open a remote file share over the wide area network, especially if there's long latency, the performance is just terrible.

To a large extent these new WAN optimization devices can compensate for that by basically tricking the protocol into thinking that the remote file share is local. They can be extremely effective, though some are much more effective than others.
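Some rough arithmetic shows why that trick matters. The round-trip counts below are assumptions for illustration, not measurements of any particular CIFS implementation.

# Why a chatty LAN protocol collapses over the WAN (illustrative numbers only)
rtt_lan_s = 0.0005   # 0.5 ms round trip on the LAN
rtt_wan_s = 0.100    # 100 ms round trip across the WAN
round_trips = 2_000  # assumption: a CIFS-style open-and-read can take thousands of exchanges

print(f"Protocol overhead on the LAN: {round_trips * rtt_lan_s:.1f} s")  # 1.0 s
print(f"Same exchanges over the WAN: {round_trips * rtt_wan_s:.1f} s")   # 200.0 s

# By answering most exchanges from the local appliance and batching or prefetching
# the rest, an optimizer lets only a few round trips actually cross the WAN:
surviving_round_trips = 20  # assumption
print(f"After optimization: {surviving_round_trips * rtt_wan_s:.1f} s")  # 2.0 s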

The other class of problem is one these optimizers won't work as well on, and you have to be aware of it: a set of programs you've designed to work very tightly together, constantly sending short messages to one another. There's not much an optimizer can do in those cases. Those tightly coupled programs really have to run in one geographic location so latency won't affect them much. That's where an application-remoting technology such as Citrix could really be useful.

Anyway, that's basically what the WAN optimizers do. They're great for situations where there's a lot of traffic that can be compressed. They can fill wide, high-latency channels to a degree that you can't with standard TCP options. And they can compensate for the use of protocols that were really not designed for the WAN, such as Windows' remote file access protocol, CIFS.

What do you base the business case on? Originally, the idea was to save a lot of money on bandwidth. But, you know, bandwidth cost savings may not appear, because organizations always seem to find some use for that extra bandwidth.

And, of course, you may need to provide some sort of emergency backup in case the WAN optimization box fails and suddenly your bandwidth requirements jump by a factor of four or more. How are you going to deal with that even for the couple of hours in which the box is down?

Nevertheless, there really are notable savings here. Many organizations find that bandwidth savings alone will pay for these devices within one or two years.

There are other factors, too, that you should consider when looking at optimizers. One of them is simply that they enable centralization: they allow you to move applications to a central facility, whereas previously the performance would have been so awful that you simply couldn't consider it.

When you centralize something, there are massive cost savings in staffing, in remote licenses, in remote servers, et cetera. This by itself will also pay for WAN optimization devices. These are hard savings that your accounting group can look at, and they can pay for the system in under one or two years.

My personal feeling is that a lot of the savings are softer, but in the long run they may actually be much more important than the hard savings. Those soft savings, really opportunity advantages, come from the fact that when an organization discovers it can send files back and forth in a few minutes, when it used to take hours, you get much greater collaboration.

Say you have two groups working on a proposal: an engineering group in one location and the sales team in another. They're moving large engineering drawings and large proposal documents back and forth. If each turnaround takes an hour or so, they're going to get pretty tired of it. But if each turnaround takes only a minute or so, a couple more edits and a couple more turnarounds might make the difference between coming in second best and winning. That one win will pay for all these devices in many cases.

So this permits not just the savings in bandwidth and staffing costs; it enables centralization, and it greatly encourages collaboration among teams, quick turnaround, and the feeling by people in remote locations that they're part of the central organization rather than isolated out there on their own. That is extremely valuable and should not be underrated even if you can't put a hard cost on it.

A good pilot program, if you set some of this up, can even generate a groundswell of interest: a couple of groups that have been brought in and have seen those massive improvements in productivity will create a drive to get this technology in house, even though you can't come up with a hard cost that will pay for it in a year or two. So pilot projects are something to consider.

And there's another good reason for pilot projects, which leads into what you should be looking at when you're evaluating vendors. Most of these technologies can be rather tricky to implement, so you'll want to run a pilot. You'll want to bring in a couple of competing vendors; if you bring in just one, you're going to buy it because it looks so good. Bring in a couple. They really do have very different performance characteristics.

Test them on your actual workload with your actual work patterns. Don't just drag and drop files across your desktop and note that they move faster. Do what your users actually do: open files from inside an application as your users do and see what the improvement is. Run a background load similar to the one you have in real life. And test in a configuration that's the same as your actual production configuration.

Even better, find a case study where someone has been running the product for six months or a year, using the same configuration and the same vendors you are, to make sure there aren't bugs that are really going to give you trouble. This is a rapidly evolving technology; there are going to be a couple of bugs out there. It's better to buy from a vendor that has experience here and can give you case studies of very similar implementations.

Before you talk to the vendors, of course, another important consideration for your team is the traffic on your network. What are the characteristics of the data flows on the WANs you're going to optimize? As I mentioned, the technology is just miraculous for some flows, and it does almost nothing for others. Plus, of course, why bother optimizing data that doesn't belong on the WAN link in the first place, such as junk downloads from the Internet?

So, your team should be carefully characterizing what's going on inside the links you're going to optimize. What are the problematic applications? What do their flows look like? Are they sensitive to latency or to bandwidth, which are different things, of course, or to some combination? Why are they in trouble? How much duplicate traffic is there? That means looking at integration, looking at possible glitches, testing multiple vendors, and thinking about how you're going to manage the devices and handle outages.
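For the duplicate-traffic question, even a crude measurement is useful. Here is a hypothetical sketch that chunks captured payloads and counts repeated content; the 4 KB chunking and the way you obtain the payloads (for example, reassembled from a packet capture) are assumptions.

import hashlib
from collections import Counter

def duplicate_ratio(payloads, chunk_size=4096):
    """Rough share of captured WAN bytes that repeat earlier content.
    `payloads` is any iterable of bytes objects taken from the link;
    how you capture them is outside this sketch."""
    seen = Counter()
    duplicate = total = 0
    for payload in payloads:
        for i in range(0, len(payload), chunk_size):
            chunk = payload[i:i + chunk_size]
            digest = hashlib.sha256(chunk).digest()
            if seen[digest]:
                duplicate += len(chunk)  # content the far end has already received
            seen[digest] += 1
            total += len(chunk)
    return duplicate / total if total else 0.0

# A high ratio (say, above 0.5) suggests strong data-reduction potential on that link.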

Finally, realize that the technology is evolving very rapidly, and the environment it must work in is also changing rapidly. In just a couple of years we're going to have widespread implementation of Microsoft Vista. There's going to be encryption everywhere. There are going to be changes to both server load balancers and application front ends, and deeper integration of this basic technology into existing platforms. So don't plan for a three-year procurement cycle followed by some sort of ten-year life. You'll be setting expectations incorrectly if you do.

Instead, plan for quick procurement and some short tests, and generate that groundswell of approval. See how much productivity really zooms at the same time that you're saving money, so you'll be setting expectations correctly. You'll get maybe a three- or four-year life, which is long enough for the technology to pay for itself, and then a reevaluation and possibly a reimplementation in the new environment. You'll still get huge benefits that will justify even a short life before redesign. The time has come to look at this: you can find case studies that are very similar to yours, and there are a lot of experienced vendors out there. Thanks for taking the time to listen. I hope the projects work out well for you.

Karen Guglielmo: And on that note that concludes today's podcast. Thanks again to Eric Siegel for speaking with us today, and thank you all for listening. Have a great day.


This was first published in February 2008
