Optimizing storage capacity management for a virtual environment

Learn the steps for virtualizing the storage infrastructure for effective storage capacity management, performance optimization and data protection.


Storage capacity demands are expected to grow 350% to 600% year over year as a result of server virtualization. In this expert podcast, Jon Toigo, CEO and Managing Principal of Toigo Partners International, explains how to address increasing storage capacity demands, data protection and performance requirements by virtualizing the storage infrastructure. Toigo also delves into the importance of categorizing data needs as a strategic first step of storage capacity management.

Listen to the podcast or read the full transcript below.

Christina: What challenges do virtual servers, or does virtualization in general, create for storage capacity management?

Jon Toigo, CEO and Managing Principal, Toigo Partners International

Jon: That's a really good question. In fact, it's one that most of the leading analysts appear to be struggling with right now. Back in 2011, IDC was anticipating gentle growth in storage capacity demand of about 30% to 40% year over year on the 21 exabytes of storage that companies had deployed worldwide in their data centers. Midyear last year, they said, based on their analysis of the impact of server virtualization, that that number was going to climb to about 350% per year growth in storage capacity demand. That's quite the jump. Gartner, not to be outdone, decided they were going to articulate another rate of storage capacity demand behind a virtualized server environment: 600% per year.

Basically, what that means is that by 2014, if Gartner was right, we would be at 214 exabytes of external storage to support virtualization, and that, of course, would bankrupt many, many companies. The problem is that if you're going to move workload around, which is one of the touted benefits of server virtualization (high availability, vMotion, the ability to move guest machines from one physical server complex to another effortlessly), you need to have the data that's used by those applications stored at each location. Some of the virtualization mavens are saying, "Storage is a big impediment. We've got to get everything out of those SANs that we've been tucking all our storage into for the last 10 years. Break those SANs and start going back to direct-attached storage."

There's a huge cost associated with those changes, in terms of dealing with the new capacity demands posed by virtual servers, and oftentimes those are costs that nobody thought about when they decided they were going to virtualize their servers. They didn't realize the impact it was going to have on their networks and their servers, which is why some of the surveys I'm reading right now say that companies are backing out of server virtualization when they're less than 20% of the way through their projects.

Christina: That's a pretty big issue. Really, where do you start to address it? Do you have to invest in new technology, whether it's capacity management technology or new storage solutions? Or do you have to just do things differently?

Jon: Actually, what I'm seeing are two major complaints. One is the speed or performance complaint with the back-end storage: if you leave everything as it is, routing to the location where the data that supports the guest machine is stored appears to slow everything down. At least that's the way the server virtualization people put it. In fact, what's happening is you've got a logjam, a funnel, a chokepoint inside the virtualization stack, at the application layer if you will, and that is really the source of your I/O logjam. It's not anything having to do with how fast your storage is, although that little meme is being used to sell an awful lot of flash storage and faster storage that really isn't going to address the problem effectively at all.

The real issue here, though, when you start to talk about capacity increases and the demand for more capacity, has to do with the number of replicas we need to make of data. Ultimately, that's the root cause here. If you want to do vMotion, if you want to do some moving around of workload, you need to be able to access the data, and that usually cries out for making copies of the data everywhere. If you don't want to replicate data all over heck and half of Georgia, what you do is you virtualize your storage infrastructure. That means you take all the physical constructs of storage and you overlay a virtual controller . . . we'll get into that later, if you want to. It's a software-based controller that sits over the top of all of your storage, and it presents virtual volumes just like server virtualization presents virtual machines to guest applications. It's a lot easier to do it with storage than it is with server workload because there's less variability in storage. Storage does six or seven things; it does them all the time, and it's pretty predictable in how it behaves. It's easier to virtualize storage.

When you virtualize the storage infrastructure, you create a virtual volume that contains the data that's associated with workload, and that virtual container can move around with the workload. In other words, physically, the data stays where it is, but the access paths and everything else are optimized by the virtualization engine. That's a very convenient, easy and, I would say, effective fix to the problem of having to replicate data all over the place and break the existing infrastructure of Fibre Channel or iSCSI connections to accommodate the VMware implementation.
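As a conceptual illustration only (not any vendor's actual API; the class and method names below are hypothetical), a virtual controller that re-points access paths instead of copying data might look something like this in Python:

```python
# Hypothetical sketch of a storage virtualization layer: the physical LUN
# stays where it is; only the host-to-storage access path is re-optimized
# when the workload moves.

class VirtualVolume:
    def __init__(self, name, physical_lun, path):
        self.name = name
        self.physical_lun = physical_lun   # data never leaves this device
        self.path = path                   # current host -> storage path

    def follow_workload(self, new_host):
        # Re-map the access path for the new host; no data is replicated.
        self.path = f"{new_host} -> {self.physical_lun}"
        return self.path


class VirtualController:
    """Software controller overlaid on heterogeneous physical arrays."""
    def __init__(self):
        self.volumes = {}

    def present(self, name, physical_lun, host):
        vol = VirtualVolume(name, physical_lun, f"{host} -> {physical_lun}")
        self.volumes[name] = vol
        return vol


controller = VirtualController()
db_vol = controller.present("erp-db", "array02/lun7", host="esx-host-a")
# Guest machine is vMotioned to another physical server; the volume
# "moves" with it by getting a new access path.
print(db_vol.follow_workload("esx-host-b"))   # esx-host-b -> array02/lun7
```

The point of the sketch is the comment in follow_workload: the workload's view of its volume changes, while the bits stay put.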

Christina: Are you talking about centralization here?

Jon: We've spent 10 years centralizing our storage already. If you look at most Global 2000 companies -- a lot of my clients are Global 2000 -- if you go in there, they've got huge, huge storage farms. They have been centralizing their storage for years because somebody told them it would be easier to manage a growing storage complex if we had it all wired together into the same storage fabric. That's what a SAN is. A SAN technically isn't a network, because it doesn't have a management layer, which is one of the signature traits of a real network. It is a fabric; it allowed us to connect lots, and lots, and lots of storage devices to a single SCSI connection by serializing SCSI. It provided a protocol and a wiring-plant solution for connecting up a lot of gear. A lot of companies stopped deploying storage directly behind individual servers and moved all that storage kit into a central repository that they call a SAN. That's what we've been all about for 10 years now.


They were extraordinarily successful in selling that particular topology for how to deploy storage. A lot of companies drank the Kool-Aid, and that's what they have out there. Now we're being told that we need to segregate it by kind of workload -- whether we're using VMware, Hyper-V, Citrix, or maybe no virtualization at all -- we're going to have to segregate that storage area network by the type of workload that uses it. We've gone from aggregate to segregate, and, to me and from what I'm hearing on the street, to aggravate. Everybody's kind of annoyed by the fact that, "Hey, didn't we just get through spending a boatload of money on storage, and now we're going to have to do it again?"

Christina: You call storage capacity, performance and data protection the three cornerstones of storage service management.

Jon: Let me articulate a little bit about what I mean by that. First of all, we're moving out of the realm of standard storage management. Storage management means managing plumbing: managing wires, managing HBAs and network cards that are inside servers, controllers that are inside storage arrays, and managing ranks, and ranks, and ranks of disk drives. That's the stuff we do with storage resource management software. It's very complex; it's still kind of a black art, and it pretty much avails itself only to very knowledgeable storage people. What the business needs today is the ability to provision capacity, maybe performance, and maybe some data protection services on demand, based on what an application requires, and they need to do that in a very expeditious way. Storage capacity management, performance management and data protection management are the three cornerstones of what I would call 'storage service management.'

We need to have some sort of mechanism for deploying capacity, either autonomically, where capacity shifts and increases automatically based on what the application requires, or in a way that makes it very simple for a server administrator to do it. What's happened in a lot of IT shops is that they've downsized staff, particularly when they virtualized the server complex; they ended up getting rid of a lot of the geeks who worked around servers, who did other, peripheral things like network administration and storage administration. The server administrator can't just reach out and ask for something from somebody who's knowledgeable about how to configure it and deploy it. Instead, what he needs to do is provision it himself. A lot of people running server virtualization complexes don't know storage from Shinola; they frankly don't understand storage, and in many cases, they don't understand networking either. They can be very good on VMware, but they know nothing about the storage.
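To illustrate the autonomic option in the simplest possible terms, here is a small Python sketch; the threshold, growth increment and volume structure are assumptions for illustration, not any particular product's behavior:

```python
# Toy autonomic-capacity loop: grow a volume when utilization crosses a
# threshold, so nobody has to open a storage ticket. Numbers are invented.

GROW_THRESHOLD = 0.80   # grow when 80% full
GROW_STEP_GB = 100      # expand in 100 GB increments

def check_and_grow(volume):
    utilization = volume["used_gb"] / volume["size_gb"]
    if utilization >= GROW_THRESHOLD:
        volume["size_gb"] += GROW_STEP_GB
        print(f"{volume['name']}: grew to {volume['size_gb']} GB "
              f"(was {utilization:.0%} full)")
    return volume

db_volume = {"name": "sales-db", "size_gb": 500, "used_gb": 430}
check_and_grow(db_volume)   # 86% full, so it expands to 600 GB
```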

What you have to do is make it abysmally simple for these folks, to make it easy, maybe even for users at some point, because that's what the whole concept of private cloud is all about: users self-provisioning. That scares the heck out of me, but that's one of the goals of cloud. The idea is you need to have a dashboard that's very simple, very direct, very easy to use, that says, "Hey, I need a little more capacity to handle this workload. I'm running out of space behind this database." I should be able to grab it, pull it over and drop it into place, and it automatically delivers the right performance, the right capacity, and, if there are special requirements like continuous data protection for that database, those attributes are all part of the storage that's provided.
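As a rough sketch of what that kind of self-service request might carry under the hood, the following Python example models a request that names only capacity, a performance tier and a protection attribute. The field names and tier labels are hypothetical; a real system would hand this to the virtual controller's own API:

```python
# Minimal model of a self-service storage request: the requester states
# what the application needs; placement is left to the virtualization layer.

from dataclasses import dataclass

@dataclass
class StorageRequest:
    application: str
    capacity_gb: int
    performance_tier: str              # e.g. "tier-0", "tier-1", "tier-2"
    continuous_data_protection: bool = False

def provision(request: StorageRequest) -> dict:
    # Stand-in for a call to the virtual controller; returns the volume spec.
    return {
        "volume": f"{request.application}-vol",
        "capacity_gb": request.capacity_gb,
        "tier": request.performance_tier,
        "cdp_enabled": request.continuous_data_protection,
    }

print(provision(StorageRequest("crm-db", 250, "tier-1", True)))
```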

The cool thing is that storage virtualization provides sort of a one-throat-to-choke location for doing all that stuff. It simplifies, refines and consolidates the capacity management effort so those services can be delivered simply and effectively as part of storage virtualization management. You've only got one server, or cluster of servers, being used to host that uber-controller that's in software, so I only have one set of knobs and dials that I need to get to.

Christina: I'm interested to find out why self-service provisioning scares you.

Jon: Usually, your biggest pigs for storage in any data center are the database administrators. If they see a terabyte of available capacity out there and they only need 500 GB, they're going to take the full terabyte, because they don't want to have to go back through any process to request another 500 GB six months from now, so they're going to steal it all. You run into problems when they do that, because if you're the storage administrator, you allocate a certain amount of space to a particular application or the user of that application. If they decide they're going to take more than that, and they can do so arbitrarily because they have the full capability to do it, you're going to run out of space the day you open up shop. That's what makes me nervous.
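One hedge against that kind of land grab is a quota check in front of the provisioning call. The following Python fragment is a simplified sketch; the team names and limits are invented for illustration:

```python
# Enforce a per-team allowance before handing out capacity, so a DBA
# can't grab a full terabyte "just in case". Figures are illustrative.

quotas = {"dba-team": {"limit_gb": 600, "allocated_gb": 0}}

def request_capacity(team, size_gb):
    q = quotas[team]
    if q["allocated_gb"] + size_gb > q["limit_gb"]:
        raise ValueError(
            f"{team} would exceed its {q['limit_gb']} GB quota; "
            f"request for {size_gb} GB denied")
    q["allocated_gb"] += size_gb
    return f"{size_gb} GB granted to {team}"

print(request_capacity("dba-team", 500))    # fits within the quota
# request_capacity("dba-team", 1000)        # would raise: exceeds the quota
```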

Christina: CIOs are looking at this strategically, so is there an area they should have their team tackle first?

Jon: Here's what I regard as the most strategic aspect of all of this: Server virtualization certainly poses some interesting workload requirements, and you have to understand what that workload is. The very first thing you need to do is characterize the workload: understand how fast it needs to go, how fast its capacity requirements are growing, and whether there is data that gets put out to disk and accessed quite a bit, from multiple concurrent accesses, for a period of time, and then, after say 30 days, the accesses to that data drop to near zero. If you can characterize the data, you can move the data around so it's not all sitting on your most expensive, most dear and limited kinds of capacity. Storage isn't one-size-fits-all; there's fast storage and there's slower storage. For data that is being issued by that database, you're probably going to write it to the fastest storage complex you've got. You may even go to what they're calling Tier 0 right now, some sort of flash memory array, initially; then you're going to migrate that to disk, to the fast 15K disk, and then you're going to migrate that ultimately into a more capacious storage modality. Some people are still using SATA drives for that, and other people will go to LTFS, the Linear Tape File System, which is basically mass storage running on tape with a file system. It's like a big NAS box, a big network-attached storage array, but it's a tape library, so you can get up to 190 petabytes of storage on a couple of raised-floor tiles, consuming less energy than about two light bulbs.
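A minimal sketch of that kind of data characterization might bucket data sets by last-access age, as in the Python below. The 30-day figure echoes the example above; the other cut-offs and tier labels are assumptions:

```python
# Classify data by how recently it was last read, so rarely touched data
# can be steered toward cheaper, more capacious tiers.

from datetime import date, timedelta

def classify(last_access: date, today: date = None) -> str:
    today = today or date.today()
    age_days = (today - last_access).days
    if age_days <= 7:
        return "hot (Tier 0 / flash)"
    if age_days <= 30:
        return "warm (fast 15K disk)"
    return "cold (capacity disk or LTFS tape)"

print(classify(date.today() - timedelta(days=3)))    # hot
print(classify(date.today() - timedelta(days=45)))   # cold
```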

That's cool; that gives you sort of a storage model with progressively less expensive places to stick your data and progressively modified performance and cost-per-gigabyte characteristics in the arrays themselves. I can write an intelligent script that's going to move data after 10 days from Tier 0 to Tier 1, then from Tier 1 to Tier 2, and from Tier 2 to LTFS, or something like that. That will keep continuously freeing up space in the tiers above, so I don't run out of space as rapidly and I don't have to add more capacity. That's a very simple mechanism for trying to utilize the capacity you've got more efficiently. Unfortunately, the industry doesn't push those kinds of solutions very much -- number one, because they don't have a product to offer in each one of those tiers. There are only a couple of companies that actually have a product for all-memory arrays, a product for fast disk, a product for capacity disk and a product for tape.
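A bare-bones version of that kind of tiering script might look like the following Python sketch; the tier names and age limits are illustrative only, echoing the 10-day example above:

```python
# Walk a catalog of data sets and demote anything older than its tier's
# age limit to the next, cheaper tier. LTFS tape is the final resting place.

TIER_ORDER = ["tier-0", "tier-1", "tier-2", "ltfs-tape"]
MAX_AGE_DAYS = {"tier-0": 10, "tier-1": 30, "tier-2": 90}   # tape: no limit

def demote(dataset):
    tier = dataset["tier"]
    limit = MAX_AGE_DAYS.get(tier)
    if limit is not None and dataset["age_days"] > limit:
        dataset["tier"] = TIER_ORDER[TIER_ORDER.index(tier) + 1]
    return dataset

catalog = [
    {"name": "q1-orders", "tier": "tier-0", "age_days": 12},
    {"name": "2011-archive", "tier": "tier-2", "age_days": 400},
]
for ds in catalog:
    print(demote(ds))   # q1-orders -> tier-1, 2011-archive -> ltfs-tape
```

Run on a schedule, a loop like this keeps freeing space in the upper tiers, which is the efficiency the discussion above is after.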

The bottom line here is that you have to look at how the data operates and provision for the profile that data presents. Very little work has been done on characterizing the actual I/O requirements of workload; that's got to change. After you understand your data, then you can begin virtualizing, and you can virtualize your storage infrastructure in parts or in total. And think about the future in terms of energy: in terms of energy consumption and energy cost.

I have a client right now up in the New England area, a pharmaceutical company, and they're required to keep clinical trial data in a near-online state. The IT manager was telling me, "We have more money than God. We own the patents on some really big medical breakthroughs, but the problem is I can't get any more electricity dropped into my data center, because the grid in my area of New England is saturated." He's between a rock and a hard place: he can afford to buy the latest storage array, but he can't get any more energy to support it. He's going with an LTFS solution using a tape library, because he can get capacity storage with very little energy consumption. The cost of energy keeps increasing by about 20% every two years. Maybe eventually, the cost of electricity for a data center is going to become a major issue.

Christina: Thank you, Jon, for your insights and advice.

Jon: Okay. I hope it helps. Thank you very much for the opportunity.

Christina: This has been Christina Torode, Editorial Director of SearchCIO, and Jon Toigo, CEO and Managing Principal of Toigo Partners International and Chairman of the Data Management Institute. Thanks for joining us.

This was first published in April 2013
