Pete Graves, CIO at Independent Bank Corp. in Ionia, Mich., has an ambitious goal for 2012 -- 90% virtualization.
Between servers and desktops (and more than 300 virtualized applications), the bank currently is 80% virtualized. To manage capacity in this highly virtual environment, Graves employs an array of tools including Microsoft System Center Suite and Citrix EdgeSight, as well as SolarWinds Inc.'s monitoring tools on the network side. Here, he speaks in-depth about his capacity management plan and why a move to a hosted data center makes sense for his company.
What are some capacity management issues and challenges to be aware of in resource planning for virtualized servers?
We have a private virtualized network in our data center right now. I'd like to maintain that private network but evolve into a totally hosted production and backup data center environment so we can take advantage of additional capacity when we need it and to buy that virtual capacity on a moment's notice. There is a huge incremental cost to invest in your own overcapacity requirements and for it to sit idle until needed.
I think that most enterprises today will find that they just cannot afford to maintain these basic infrastructure- and power-type data center environments themselves. Data center services are becoming more of a commodity, or at least a very expensive commodity, and most enterprises will choose a "pay as you go" kind of a philosophy or "pay as you need." For many CIOs … getting their virtualized private networks to a point where they can host them is a real critical thing. So, we're probably about 18 months away from being totally hosted at the data-center level. This will greatly simplify our capacity planning efforts in the long run while reducing cost and risk.
Some companies are being caught unawares by capacity creep. Is this an issue for you? How did you address it?
Definitely. Our business units drive our whole usage equation, and of course their capacity requirements and their storage requirements are constantly changing. The last I checked, our digital footprint expands at probably 35% to 45% per year, which makes capacity planning very challenging. You're constantly planning for major infrastructure capacity upgrades about every three years, along with adding incremental capacity as you go. So, it's maintaining the physical ahead of the virtualized capacity that gets to be a challenge.
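A growth rate of 35% to 45% per year compounds quickly, which is what makes the roughly three-year upgrade cycle he describes necessary. As a rough illustration (the doubling-time arithmetic is ours, not from the interview), a footprint growing at those rates doubles about every two years:

```python
import math

def doubling_time(annual_growth_rate):
    """Years for a footprint to double at a given compound annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

for rate in (0.35, 0.45):
    print(f"{rate:.0%} annual growth -> footprint doubles in "
          f"{doubling_time(rate):.1f} years")
# 35% growth doubles the footprint in about 2.3 years; 45% in about 1.9 years,
# so capacity bought today is consumed well before a three-year refresh cycle ends.
```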
We are implementing a major storage area network [SAN] upgrade. The SAN is the primary repository for all your data and all your systems, including all your virtualized systems, so you need to stay on top of it every day. Today we probably rely on four or five [monitoring] systems to manage that virtual environment. At some point, if you could get it down to a couple of monitoring platforms, that would be great.
What do you hope to gain with the SAN upgrade?
We had a separate SAN for our hypervisors, and we're getting rid of that storage appliance and going with a tiered storage solution from EMC [Corp.] that we're just installing now. The tiered storage architecture will vastly improve the performance of our virtualized systems. Staying ahead of that incremental infrastructure cost is huge for us; it's a large investment that will pay big dividends for our virtualized storage. So, maintaining and staying ahead of the physical infrastructure required to support that virtualized capacity is a major effort, and sometimes you've got to make those large incremental jumps in investment in order to stay in the game. Ultimately, the bank would prefer, and will be in, a totally hosted scenario, where instead of paying a premium for overcapacity we follow a pay-as-you-go strategy for capacity planning and risk reduction.
What capacity management advice do you have for dealing with the higher-resource-demand servers needed for server virtualization?
We use clustering and pooled resources, where you cluster physical hosts so that virtual machines [VMs] can migrate from one physical location to another in order to manage capacity. A single physical host may be managing a number of VMs; but if you cluster physical hosts together as a pooled resource, then the VMs can redistribute themselves across other physical hosts, giving you higher availability and more capacity for that particular system or application. We do that for things like our SQL database, our file servers, our user files and Microsoft Exchange. Those pooled resources in a virtualized environment give you much more flexibility and high availability for those systems.
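The capacity-planning question behind this clustering approach is whether the pool has enough headroom to re-place every VM after losing a host. Platforms such as VMware HA/DRS or Hyper-V failover clustering make this calculation automatically; the toy model below (our illustration, not the bank's tooling) shows the basic N+1 check on total capacity:

```python
# Toy model of a clustered host pool (illustrative only; real platforms
# such as VMware HA/DRS or Hyper-V clustering handle this automatically).

def can_survive_host_failure(host_capacities, vm_loads):
    """Check that the pooled cluster can re-place all VMs after losing
    any single host -- a simple N+1 headroom test on total capacity."""
    total_load = sum(vm_loads)
    for failed in range(len(host_capacities)):
        surviving = sum(c for i, c in enumerate(host_capacities) if i != failed)
        if total_load > surviving:
            return False
    return True

# Hypothetical numbers: three clustered hosts with 64 GB usable each,
# carrying 120 GB of total VM demand.
hosts = [64, 64, 64]
vms = [16, 16, 24, 32, 32]
print(can_survive_host_failure(hosts, vms))  # True: 120 GB fits in the 128 GB left after any one failure
```

A pool that passes this check can absorb a host outage by migrating VMs onto the survivors, which is the flexibility and high availability Graves describes.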
What effect has virtualization had on your power consumption?
We've probably reduced our power by at least 25% in our data center for cooling and for physical-host consumption. We've got only a couple more blades that we're going to be retiring this year, and those were more intense in terms of power requirements than what we replaced them with. But I can't say we've realized all the savings that we could from a power perspective.
The challenge has been that power consumption tends to be either zero or 100% on some of these systems, and our cooling systems are typically running at 100%. Today it is possible to deploy a total power management environment with a variable level of output to match the need for cooling and other resources, but we haven't been able to justify that level of sophistication or investment.
So, what will the next move be for your data center?
What we're probably going to do in a couple of years is move our primary data center to a hosted facility, so we'll have both hosted primary and secondary facilities and can take advantage of more of these kinds of power management systems. If we want to maintain a tier three- to tier four-type data center with all those redundancies and that kind of variable power management -- well, there is just not the ROI for us to invest in that kind of data center today. So, I see the trend toward "pay as you go" for some of these data center services, where you're charged for what you actually use instead of investing in overcapacity for growth. That is where the cost savings lie in the ownership-versus-hosting equation.
With the changing needs of the business and the services they deliver, do you think it's possible to design capacity management that is 'futureproof'?
You definitely can; there's no question about it. The challenge is matching the business need to a totally flexible model, from a network and systems perspective, that can meet that capacity so it's totally futureproof. There's definitely a cost associated with having that flexibility, and that cost can be a real barrier to moving forward with a business project, because having all that capacity just sitting there is not something you want to pay for. Capacity on demand is more expensive per unit than capacity that you would purchase and manage and maintain yourself, so there's a tradeoff to owning versus renting, and there's an incremental cost to both. Hosting may be more expensive than owning it yourself, but you don't have those huge incremental costs of reinvesting in infrastructure, along with security, bandwidth, power and all the other challenges that go along with making a data center low-risk.