Musings on the phenomena of private clouds

Migrating to private clouds is as simple as changing the way you think about virtualization. Scott Lowe gives a detailed history of one metamorphosis.

The data center is undergoing a massive transformation: What began a few years ago with server consolidation initiatives to make better use of hardware and reduce ongoing operating costs has taken on a life of its own and is quickly becoming the new normal. As the data center moves away from server "silos," IT departments are being called upon to create more nimble infrastructures that help the business react more quickly to changing market conditions and new demands. Although I would hesitate to call many data centers "private clouds," ensuring that private data centers attain cloud-like features is key to future success.

Let's start with a baseline discussion defining private clouds:

According to SearchCloudComputing.com, private cloud (also called internal cloud or corporate cloud) is a marketing term for a proprietary computing architecture that provides hosted services to a limited number of people behind a firewall.

While it's easy to say that just about any data center is a private cloud, there are some specific characteristics that differentiate between a traditional data center and what can be considered a private cloud, even in small and medium-sized organizations. Here are three that are key to success:

Heavy use of virtualization

In many companies, virtualization started as a way for IT to decommission rather than replace older servers while keeping the existing workloads intact. Westminster College, for example, had a number of older, out-of-warranty systems running workloads that were critical at the time but were being phased out. We didn't want to spend a lot of money on new servers, so we used physical-to-virtual software to virtualize those workloads and moved ahead. It's important to note that this early virtualization project was about as far from "private cloud" as one could get. We had a virtual host running a collection of formerly physical workloads. It really was nothing more than a way to save a few bucks by not buying hardware.

Since then, however, our virtual infrastructure has evolved. Whereas our initial foray into the world of virtualization was targeted at server consolidation with host servers using just local storage, today we have a whole new way of thinking.

In the old days, we purchased individual physical servers anticipating peak demand. Today, we provision processing power, RAM and disk space based on the needs of a service. Rather than monitoring individual virtual server instances, we keep watch over the entire virtual infrastructure and add new resources when necessary. If we find that the overall environment is becoming RAM-constrained, we add more RAM by upgrading RAM in a host, replacing a host or adding more hosts. If we find that we're having either disk capacity or performance issues, we add more disk spindles to the infrastructure. With our almost fully virtualized infrastructure, we can more granularly target resource additions and avoid adding unnecessary resources. In our case, our services use a lot of RAM, but not very much processing power, so we can focus our resources where it matters rather than buying an overpowered server just to handle peak demand for a single workload.
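
To make that aggregate view concrete, the sketch below shows the kind of check involved, written against the vSphere API using the pyVmomi Python bindings. It is purely illustrative: the vCenter address, credentials and the 80% threshold are hypothetical placeholders, not a description of the tooling we actually run.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical connection details -- not a real vCenter or account.
ctx = ssl._create_unverified_context()  # lab-style: skip certificate checks
si = SmartConnect(host="vcenter.example.edu", user="monitor",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Walk every ESX host that this vCenter manages.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

used_mem_mb = total_mem_mb = used_cpu_mhz = total_cpu_mhz = 0
for host in view.view:
    stats, hw = host.summary.quickStats, host.summary.hardware
    used_mem_mb += stats.overallMemoryUsage            # reported in MB
    total_mem_mb += hw.memorySize / (1024 * 1024)      # bytes -> MB
    used_cpu_mhz += stats.overallCpuUsage              # reported in MHz
    total_cpu_mhz += hw.numCpuCores * hw.cpuMhz

Disconnect(si)

# Judge the environment as a whole rather than any single VM.
mem_pct = 100.0 * used_mem_mb / total_mem_mb
cpu_pct = 100.0 * used_cpu_mhz / total_cpu_mhz
print(f"Aggregate memory use: {mem_pct:.0f}%, aggregate CPU use: {cpu_pct:.0f}%")
if mem_pct > 80:  # arbitrary example threshold
    print("RAM-constrained: add RAM to a host, replace a host or add a host.")
```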

As a standard practice, we now hot add RAM and processors to running virtual machines. Almost everything we add today runs on Windows Server 2008 R2, which supports this VMware ESX-based capability (hot add has yet to come to Hyper-V). We've already used this hot-add capability as resource constraints have popped up -- now we can very easily add resources without bringing down the virtual service.
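
For the curious, a hot add boils down to a reconfiguration call against a running VM. The sketch below uses the same pyVmomi bindings purely for illustration; the VM name and the new memory and CPU totals are hypothetical, and the call only succeeds if hot add was enabled while the VM was powered off.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.edu", user="admin",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Find the (hypothetical) VM by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "file-server-01")

# Hot add only works if it was enabled while the VM was powered off, e.g. via
# vim.vm.ConfigSpec(memoryHotAddEnabled=True, cpuHotAddEnabled=True).
spec = vim.vm.ConfigSpec(memoryMB=8192, numCPUs=2)  # new totals, not increments
task = vm.ReconfigVM_Task(spec=spec)  # the guest keeps running while this applies

Disconnect(si)
```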

While I would certainly not consider our environment "multi-tenant," we are able to provide non-IT-managed services to disparate campus groups in a secure way that does not jeopardize the operation of the rest of the environment. For example, using resource pools in vCenter, we can limit how many resources a group consumes, and we use virtual networks to logically partition these other entities.
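
As a rough illustration of that kind of partitioning (the cluster name, pool name and limits below are invented for the example, and this is a sketch rather than our actual configuration), creating a capped resource pool through the vSphere API looks roughly like this:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.edu", user="admin",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Locate the cluster (name is hypothetical) and its root resource pool.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Campus-Cluster")

def allocation(limit):
    # CPU limits are in MHz, memory limits in MB; -1 would mean "unlimited."
    return vim.ResourceAllocationInfo(
        limit=limit, reservation=0, expandableReservation=False,
        shares=vim.SharesInfo(level="normal", shares=0))

spec = vim.ResourceConfigSpec(cpuAllocation=allocation(4000),     # ~4 GHz cap
                              memoryAllocation=allocation(8192))  # 8 GB cap
cluster.resourcePool.CreateResourcePool(name="CampusGroupLabs", spec=spec)

Disconnect(si)
```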

Tiered services

Not every service demands the same level of resources -- file services, for example, don't generally require as much disk performance as database services. We built our storage environment in a way that allows us to target services at the most appropriate storage tier. We're very iSCSI-focused, but at the storage area network level we have a mix of Serial Advanced Technology Attachment (SATA), 10K rpm Serial-Attached SCSI (SAS) and 15K rpm SAS disks. Our file services are provisioned on the SATA disks, while our SQL databases sit on the 15K rpm SAS disks, ensuring that we're able to provision services on appropriate infrastructure. We intentionally avoided building out a one-size-fits-all infrastructure that would end up being either wasteful (e.g., buying 15K rpm SAS for everything) or not robust enough (e.g., 7,200 rpm SATA for everything).
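
A toy sketch of that tier-selection decision, with made-up tier names and rules purely for illustration:

```python
# Illustrative tier map -- names, disk types and rules are invented for the example.
STORAGE_TIERS = {
    "sata":   "7,200 rpm SATA -- capacity-oriented, light I/O",
    "sas10k": "10K rpm SAS -- general-purpose workloads",
    "sas15k": "15K rpm SAS -- latency-sensitive, heavy random I/O",
}

def pick_tier(service_type: str) -> str:
    """Return the storage tier a new service should be provisioned on."""
    if service_type in ("database",):            # e.g., SQL databases
        return "sas15k"
    if service_type in ("web", "application"):   # moderate I/O
        return "sas10k"
    return "sata"                                # file shares, archives, etc.

print(pick_tier("database"))    # -> sas15k
print(pick_tier("file"))        # -> sata
```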

Centrally monitored and managed private clouds

On the management side, it should come as no surprise that Westminster College uses vCenter to manage the virtual environment. For monitoring, however, we've selected System Center Operations Manager (SCOM) as our centralized monitoring tool of choice. We were slow to adopt this aspect of a "well-monitored private cloud" and were largely reactive until recently, when we adopted SCOM to monitor individual workloads and make resource adjustments to them. SCOM provides an amazing level of insight into a production environment, allowing administrators to truly understand how services are operating.

Very soon, we'll be adding Veeam's nworks management pack to the SCOM environment so that we have much more insight at the ESX host level. This will give us both a top-down and a bottom-up view of the entire virtual environment, which is a key missing link right now.

Our initial server consolidation project, while fully virtualizing older services, was far from being a private cloud. Today, our new infrastructure resembles a small private cloud in that:

  • We can very quickly provision new services as the business demands them.
  • We can grow our cloud in a very granular way, without having to add (and pay for) unnecessary resources.
  • It's centrally managed and monitored at both the aggregate level as well as at the workload level.

I hesitate to use the term private cloud to describe our environment, although we have intentionally built it to gain more cloud-like agility and better control our ongoing costs. I can safely say that our environment has already proven itself much more flexible than the old one, and we are seeing the benefits we expected when we embarked down this road toward more flexible computing.

Scott Lowe is CIO of Westminster College in Fulton, Mo. Write to him at editor@searchcio-midmarket.com or tt@slowe.com.

This was last published in February 2011
