Tip

Stop storage infrastructure costs from spinning out of virtual control

Last year, leading analysts revised their estimated annual storage capacity demand growth rates in data centers using server virtualization technology from the 30% to 40% range to a whopping 600%. If the new growth estimates are correct, more than 210 exabytes of external storage would be deployed in companies worldwide by 2014.


Jon Toigo

With storage arrays currently accounting for between 33 and 75 cents of every dollar spent on IT hardware, such an increase in storage capacity demand would likely bankrupt many firms. Not surprisingly, the unanticipated spike in storage capacity requirements and the associated costs are the reasons most frequently cited by IT planners for abandoning server virtualization projects.

Current-generation server "hypervisors" have been driving the deconstruction of centralized storage infrastructure since they first found homes in production data centers. After a decade of consolidating storage assets into so-called storage area networks or SANs (actually, Fibre Channel fabrics), this infrastructure is now deemed too inflexible to support virtual server environments, since it lacks the agility to deliver storage services to virtualized workloads that may jump from one server to another. Moving a workload in this way requires the replication of data, potentially many times, within the physical storage infrastructure, which in turn drives up capacity requirements and increases the need for multiple synchronous data replication processes.


It is also worth noting that server hypervisors do a generally poor job of storage I/O processing, often slowing guest application performance to a very noticeable degree. This has also had an impact on storage infrastructure, as equipment vendors offer faster disk- and flash memory-based storage equipment as part of a largely ineffective "brute force" effort to redress application performance delays.

Virtualize to simplify the storage infrastructure

Short of de-installing server hypervisors, about the best way IT professionals can deal with the pressures that server virtualization exerts on storage infrastructure is to virtualize that storage infrastructure. Storage virtualization abstracts storage management and value-add software away from the proprietary hardware controllers built into storage kit and instead vests these functions in a centralized, software-based "uber-controller," or controller of controllers. From this vantage point, storage becomes simpler to slice and dice into virtual volumes that can be readily associated with specific application workloads.
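
To make the idea concrete, here is a minimal Python sketch of that "controller of controllers" pattern: capacity from heterogeneous back-end arrays is pooled in software and carved into virtual volumes whose physical placement is hidden from the host. The class and method names are illustrative assumptions, not any vendor's API.

```python
# Minimal sketch, assuming hypothetical class names: capacity from several
# back-end arrays is pooled in software and carved into virtual volumes.

class BackendArray:
    def __init__(self, name: str, capacity_gb: int):
        self.name = name
        self.capacity_gb = capacity_gb
        self.allocated_gb = 0

    def free_gb(self) -> int:
        return self.capacity_gb - self.allocated_gb


class VirtualVolume:
    def __init__(self, name: str, size_gb: int, backing: BackendArray):
        self.name = name
        self.size_gb = size_gb
        self.backing = backing  # the host never sees this physical placement


class StorageHypervisor:
    """The 'controller of controllers': pools arrays, hands out virtual volumes."""

    def __init__(self, arrays):
        self.arrays = list(arrays)
        self.volumes = {}

    def create_volume(self, name: str, size_gb: int) -> VirtualVolume:
        # Simple placement policy: use the array with the most free space.
        target = max(self.arrays, key=lambda a: a.free_gb())
        if target.free_gb() < size_gb:
            raise RuntimeError("storage pool exhausted")
        target.allocated_gb += size_gb
        vol = VirtualVolume(name, size_gb, target)
        self.volumes[name] = vol
        return vol


pool = StorageHypervisor([BackendArray("fc-array-1", 10_000),
                          BackendArray("iscsi-array-2", 8_000)])
vol = pool.create_volume("erp-data", 500)  # the caller never names a physical array
```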

With a properly virtualized storage infrastructure, virtual volumes can be readily provisioned to virtual server guests. Based on the requirements of the workload, virtual volumes can be enabled with specific storage services: a mission-critical workload may use a virtual volume that also provides data mirroring between volumes or continuous data protection services. More importantly, the virtual volume can move with the workload as it transitions from one physical host to another, reducing the need to create multiple copies of data.
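
The sketch below, again purely illustrative, shows one way a policy of mirroring or continuous data protection might be attached to a virtual volume at provisioning time, and how the volume's host association can be updated when the guest migrates without recopying the data. The tier names and classes are assumptions, not any product's interface.

```python
# Illustrative sketch only: a service policy (mirroring, continuous data
# protection) is attached to a virtual volume when it is provisioned, and the
# volume's host association is updated when the guest migrates.

from dataclasses import dataclass

@dataclass
class ServicePolicy:
    mirroring: bool = False                    # synchronous copy to a second volume
    continuous_data_protection: bool = False   # journal every write for rollback

@dataclass
class VirtualVolume:
    name: str
    policy: ServicePolicy
    attached_host: str | None = None

def provision_for_workload(name: str, tier: str) -> VirtualVolume:
    # Mission-critical guests get mirroring plus CDP; others get a plain volume.
    if tier == "mission-critical":
        policy = ServicePolicy(mirroring=True, continuous_data_protection=True)
    else:
        policy = ServicePolicy()
    return VirtualVolume(name, policy)

def migrate(volume: VirtualVolume, new_host: str) -> None:
    # Only the logical mapping changes; the data itself is not copied again.
    volume.attached_host = new_host

vol = provision_for_workload("crm-db", "mission-critical")
vol.attached_host = "host-01"
migrate(vol, "host-02")   # the volume follows the guest to its new physical host
```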

A good storage virtualization engine will also virtualize data paths between servers and storage and will provide load balancing across connections as a value-add. As a result, there is no longer any need to de-install a SAN or iSCSI interconnect; the plumbing between disk and server becomes a non-issue.
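
As a rough illustration of that path virtualization and load balancing, the following hypothetical snippet rotates I/O across several host-to-storage paths in round-robin fashion; the path identifiers and the round-robin policy are assumptions rather than a description of any particular engine.

```python
# Hypothetical sketch of balancing I/O across several host-to-storage paths
# in round-robin fashion.

from itertools import cycle

class MultipathVolume:
    def __init__(self, name: str, paths: list[str]):
        self.name = name
        self._paths = cycle(paths)   # rotate across Fibre Channel and iSCSI links alike

    def write(self, block: bytes) -> str:
        path = next(self._paths)
        # A real engine would issue the I/O on this path; here we only report the choice.
        return f"{len(block)} bytes queued on {path}"

vol = MultipathVolume("erp-data", ["fc-hba0:port1", "fc-hba1:port2", "iscsi-nic0"])
for _ in range(3):
    print(vol.write(b"\x00" * 4096))   # successive writes rotate across the paths
```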

Old trick of the storage infrastructure trade: spoofing

In terms of application performance, a virtualized storage infrastructure tends to operate at roughly two to three times the speed of a non-virtualized one. This is less a function of some miraculous feature of storage hypervisor technology than of a very old engineering trick in storage called "spoofing."


Here's how it works: applications write their data to a virtual volume presented by the storage hypervisor (usually deployed on a server). Memory in the server is configured to cache incoming write requests, but the hypervisor informs the application that its data has been written to disk and that it can go on about its business. Over time, queued writes make their way to the back-end physical storage.

The net effect of spoofing is to trick the application into thinking that storage is faster. In fact, speed is a function of writing data to memory rather than to disk. We have been using this technique for at least 30 years (since I first entered IT): Mainframes used it heavily to extend channels and to communicate effectively with peripheral devices over great distances.
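
For readers who prefer code to prose, here is a minimal write-back cache sketch of the spoofing idea: writes are acknowledged as soon as they land in memory, and a background thread drains them to the slow physical back end later. The queue-based design and the function names are illustrative assumptions, not a real product's implementation.

```python
# Minimal write-back cache sketch of "spoofing": acknowledge a write as soon
# as it lands in memory, then drain it to the physical back end afterward.

import queue
import threading

class WriteBackCache:
    def __init__(self, backend_write):
        self._pending = queue.Queue()
        self._backend_write = backend_write
        # A background thread flushes queued writes to physical storage.
        threading.Thread(target=self._flush_loop, daemon=True).start()

    def write(self, offset: int, data: bytes) -> bool:
        self._pending.put((offset, data))
        return True   # acknowledged at memory speed; the app believes it hit disk

    def _flush_loop(self):
        while True:
            offset, data = self._pending.get()
            self._backend_write(offset, data)   # the slow, real write happens here
            self._pending.task_done()

def slow_disk_write(offset: int, data: bytes) -> None:
    pass   # stands in for the physical array

cache = WriteBackCache(slow_disk_write)
cache.write(0, b"application data")   # returns immediately; the flush happens later
cache._pending.join()                 # for the demo only: wait for the flush to finish
```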

Can spoofing really address the I/O performance issues of server hypervisor computing? Most vendors say no, but they quickly add that storage virtualization can help reduce the storage-related cost, complexity and capacity demand issues that are currently driving many firms to abandon their server virtualization projects when they are less than 20% complete.

Jon Toigo is CEO and Managing Principal of Toigo Partners International, and Chairman of the Data Management Institute.

This was first published in April 2013
