Rarely does a disaster recovery plan appear high on the list of priority IT budget items, and sometimes it doesn't make it onto the list at all. More often, IT executives piggyback disaster recovery planning onto a data center consolidation project or, as Irving, Texas-based Christus Health did, a desktop virtualization project.
Server and desktop virtualization projects are under way at Christus Health to meet business goals that range from more flexible access to data and lower power consumption to compliance with electronic health care regulations and disaster recovery planning.
"We were hit by hurricanes that caused major outages in our organization. Now we're building a client computing model that allows a physician at a hospital that went down to pick up a satellite phone or whatever is at hand, and get immediate access back to our infrastructure," said Todd Bruni, director of client computing services and configuration management for Christus Health, a health care company with 30,000 employees and 40 hospitals and affiliated facilities.
If a hospital loses power, employees and physicians remain tethered to the company's primary or backup disaster recovery facility because Bruni's team has been steadily virtualizing all client devices using virtualization technologies from Citrix Systems Inc. The first phase of the project introduced Citrix server-based computing to host applications in the data center. The second phase moved about 10% of the application portfolio (which covered approximately 50% of employees' data needs) off desktops and into the data center -- using thin clients as the front end and Terminal Services on the back end. The stage under way now is the build-out of a virtual desktop infrastructure (VDI) for more complicated clinical scenarios, such as access to medical records and to back-end financial systems.
"These are solutions that were not well built or intended for a server-based computing model or Terminal Services, so we needed VDI," Bruni said.
Virtualization by no means replaces a full-fledged disaster recovery plan -- Christus Health's data is replicated in "hot, hot" scenarios between its primary and secondary disaster recovery facilities -- but virtualization simplifies real-time replication and data portability.
"Virtualization is making it possible for our client services to be portable in case of a disaster," Bruni said. "All you need is an agent on any client device, and some type of Internet access."
Running core business apps on a virtual server infrastructure "allows for portability and replication that we wouldn't have had with dedicated physical systems," Bruni said.
Weighing the costs and benefits of VDI
A VDI is costly, however, as Chelo Picardal, chief technology officer for the city of Bellevue, Wash., found out when she started investigating desktop virtualization for 1,500 employees in 13 departments. "Server virtualization was an easy sell because you're replacing the cost of buying physical servers anyway," she said. "With virtual desktops, you still have to buy PCs for people, but now you also have to buy the virtualization software and invest in an infrastructure that will hold all the data that used to be on the desktops -- where is that funding going to come from?"
Picardal does not see desktop virtualization as benefiting the city's disaster recovery strategy, but views it instead as an "efficiency" play for the IT department. "You can give remote workers access to their data, but we are looking at it more as an efficiency gain in terms of maintenance."
Ask her about the disaster recovery benefits of server virtualization, on the other hand, and Picardal has a checklist readily available:
- Workloads are easily portable from the primary to the secondary disaster recovery site, and users experience no downtime.
- Virtualization eliminates the need to buy double the hardware to replicate physical servers between the two facilities. This reduces costs, and reduces drift and hardware compatibility problems between the primary and secondary facilities. That in turn reduces downtime.
- Applications that need to be highly available remain that way when a failover to an alternate site occurs.
"When you think about high availability, the VM [virtual machine] becomes the point that fails over," said Chris Wolf, analyst with Stamford, Conn.-based research firm Gartner Inc. "That's a really big deal because traditionally, enterprise IT could cluster only a small percentage of apps for high availability because that type of architecture had to be written into the apps. Whereas with virtualization, any application can be made highly available and resilient to hardware failure."
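Wolf's point is that the VM, not the application, becomes the unit that fails over: any VM on a dead host can simply be restarted elsewhere, with no cluster-awareness written into the app. The toy sketch below illustrates that logic only; the host and VM names and the `failover` helper are hypothetical, not any vendor's API, and a real HA cluster (VMware HA, for example) does this automatically.

```python
# Illustrative sketch of VM-level high availability: every VM on a failed
# host is restarted on a surviving host, regardless of what it runs.
# Host names, VM names and placement logic are hypothetical.

def failover(hosts, vms):
    """Return a new VM->host placement after removing failed hosts.

    hosts: {host_name: is_alive}; vms: {vm_name: current_host}.
    """
    healthy = [h for h, alive in hosts.items() if alive]
    if not healthy:
        raise RuntimeError("no healthy hosts left")
    placement = {}
    for i, (vm, host) in enumerate(sorted(vms.items())):
        if hosts.get(host, False):
            placement[vm] = host                    # host is fine; VM stays put
        else:
            placement[vm] = healthy[i % len(healthy)]  # spread displaced VMs
    return placement

hosts = {"esx1": False, "esx2": True, "esx3": True}   # esx1 just failed
vms = {"erp": "esx1", "ehr": "esx2", "web": "esx1"}
print(failover(hosts, vms))
```

Note that nothing in the sketch inspects what application a VM hosts -- which is exactly why, as Wolf says, any application can be made resilient to hardware failure this way.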
Above all, however, Bellevue's Picardal can guarantee her performance service-level agreements (SLAs). "For a long time, there were a lot of things we couldn't promise that the customer really wanted. The best we could do was get them back up maybe in a half-hour in a disaster scenario. Now, with server virtualization, unless the entire [data center] facility goes down, the customers don't even notice it."
With the city's VMware Inc. server virtualization technology tied to its storage area network, which has deduplication, "you can get really close, or exceed what the customer needs," Picardal said. "Let the customer drive your DR needs, and you'll find that virtualization really allows you to meet those needs fairly easily," she said.
The city's public-facing applications, which have a high-availability SLA, can be backed up and returned to service with minimal downtime as a result of virtualization. That was the case when one of the city's websites was defaced, Picardal said.
Testing a disaster recovery plan made easy
Testing a disaster recovery plan is perhaps one of the most painful tasks an enterprise IT department faces. The process is so complicated and demoralizing that some departments have been reduced to just reading the disaster recovery plan's documentation and checking a box stating they are prepared for a disaster, Gartner's Wolf said.
"I've seen companies just quit testing disaster recovery because it was bad for morale. They would run into so many problems trying to recover data, applications and hardware in the DR facility because the hardware wasn't an exact match; and it would often take the IT staff days to get through the DR exercise," Wolf said.
Virtual machines, however, remove the necessity that hardware -- from devices to the firmware on them -- be an exact match between the production facility and disaster recovery facility. "It's so easy to validate that an application is going to come online in a VM, and test that regularly," Wolf said. "That's generally not an option with physical hardware."
Because disaster recovery testing is simple to do in a virtual environment, many enterprises aren't testing just Tier 1 applications; they now are working down through lower-tier line-of-business applications to test their ability to bounce back from a disaster, Wolf said.
Because VM environments are easy to isolate, "you can do recovery testing to your heart's content without having any impact on the production environment," said Nelson Ruest, principal at consultancy Resolutions Enterprises Ltd. in Victoria, British Columbia. "Recovery testing is as simple as changing a [network interface card] that is assigned to a VM."
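Ruest's point is that isolation comes down to which virtual network a VM's NIC is attached to: clone the VM definition, point its NICs at an isolated test segment, and the recovery exercise cannot touch production. The sketch below models that idea with a plain dictionary; the config format, network names and `clone_for_dr_test` helper are illustrative assumptions, not any hypervisor's actual API.

```python
# Sketch of isolated DR testing: clone a VM definition and reattach its
# NICs to an isolated network so the test copy can't reach production.
# The config structure and network names here are hypothetical.
import copy

def clone_for_dr_test(vm_config, test_network="dr-test-isolated"):
    """Return a clone of a VM definition attached only to a test network."""
    clone = copy.deepcopy(vm_config)
    clone["name"] = vm_config["name"] + "-drtest"
    for nic in clone["nics"]:
        nic["network"] = test_network   # detach from the production segment
    return clone

prod = {"name": "ehr-db",
        "nics": [{"mac": "00:50:56:aa:bb:cc", "network": "prod-vlan10"}]}
test_vm = clone_for_dr_test(prod)
print(test_vm["name"], test_vm["nics"][0]["network"])
print(prod["nics"][0]["network"])   # production definition is untouched
```

The deep copy matters: the production definition stays exactly as it was, which is what lets you "test to your heart's content" without risk.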
The not-so-simple part
With server or client virtualization, overall systems maintenance and recovery are simplified. Workloads, whether they're on a server or client, are isolated from the underlying hardware and can be moved from one system to another, from one facility to another. In addition, most virtualization technology has disaster recovery capabilities built in to automate and prioritize the system recovery process.
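The "automate and prioritize" capability mentioned above usually amounts to assigning each workload a recovery tier and bringing tier-1 systems online first. The sketch below shows only that ordering idea; the workload names and tiers are made up, and real tools (such as hypervisor DR orchestration products) manage this through their own configuration.

```python
# Illustrative sketch of prioritized recovery: boot lowest-numbered
# tiers first. Workload names and tier assignments are hypothetical.

def recovery_order(workloads):
    """Return workload names in boot order: tier 1 first, ties by name."""
    return [name for name, tier in
            sorted(workloads.items(), key=lambda kv: (kv[1], kv[0]))]

workloads = {"ehr-app": 1, "intranet": 3, "erp-db": 1, "reporting": 2}
print(recovery_order(workloads))
# → ['ehr-app', 'erp-db', 'reporting', 'intranet']
```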
This could free IT from performing a few steps in disaster recovery, but many of the procedures needed to back up and maintain systems remain.
"With server virtualization, we gain high availability at a lower cost, but we still have to patch, monitor and troubleshoot -- that doesn't go away," Picardal said.
In addition, if you do choose to deploy virtual desktops, don't think it will be as easy as your server virtualization project. "With server virtualization, you worry about CPU cycles, memory, disk, network connectivity -- the same things you did before," Christus Health's Bruni said. "In the client [virtualization] space, you have to worry about screen shots, latency on circuits and whether that causes flash video not to perform appropriately. There are a lot of things that run on a desktop that never used to run in a data center [that now do]."
The tradeoff? Peace of mind, Bruni said. "The core benefit [of virtualization] back to the business is knowing that they have multiple ways of accessing data, services or applications … [because] the core infrastructure is designed to ensure that core services remain available."
Let us know what you think about the story; email Christina Torode, News Director.