Storage technologies are paving the way for Data Center as a Service

Storage technologies like SSDs are an important part of constructing the virtual enterprise data center, or Data Center as a Service, of the future.

Lower-tier applications have been virtualized, but enterprise data centers are seeing little action when it comes to virtualizing Tier 1, mission-critical applications. Arun Taneja, consulting analyst and founder of Hopkinton, Mass.-based storage consulting firm Taneja Group Inc., explains why storage technologies such as solid-state drives (SSDs) are helping CIOs overcome virtual application performance hurdles. He also discusses why a phased approach is needed to create the enterprise data center of the future: a Data Center as a Service that mimics the multi-tenancy cloud model.

SearchCIO.com: What storage technologies are allowing for better application performance in a virtual environment and, in turn, encouraging enterprises to move mission-critical applications to virtual machines?

Taneja: Our historical definition of shared storage was five physical storage systems connected into a SAN [storage area network], but each one of those physical storage boxes is complete unto itself.

The servers were connected into shared storage, and I fine-tuned that storage so that it worked effectively with a particular application on a particular physical server. Some applications needed more input/output operations per second, so I gave them better IOPS capability. Some applications were not latency-sensitive, so they got slower, cheaper storage.

When you move to one physical server with 10 applications running as 10 virtual machines, all the I/O patterns I had skillfully managed and precisely mapped to make the applications run correctly are now jumbled as they go into one physical machine. Whereas my I/O looked sequential before and had some pattern to it, what I've just done is randomize all of the I/O. Now I have completely random patterns coming into my physical server, and lo and behold, performance goes to hell in a handbasket.
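To make this "I/O blender" effect concrete, here is a minimal Python sketch (illustrative only; the VM count and block addresses are invented): each VM issues perfectly sequential reads within its own address range, but the hypervisor interleaves them into a near-random stream.

    # Each VM's I/O is sequential within its own range, but the hypervisor
    # sees the streams interleaved as the VMs run concurrently.
    import random

    NUM_VMS = 10
    BLOCKS_PER_VM = 5

    streams = [
        [(vm, vm * 1000 + i) for i in range(BLOCKS_PER_VM)]
        for vm in range(NUM_VMS)
    ]

    merged = []
    while any(streams):
        stream = random.choice([s for s in streams if s])
        merged.append(stream.pop(0))

    # The per-VM streams were sequential; the merged stream jumps all over
    # the address space, which is what the storage actually has to service.
    print(merged[:10])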

So, in a virtual environment, I need a different kind of storage that works effectively in this randomized environment. That's why you're seeing an emphasis on IOPS and why SSD data storage devices are considered to be the panacea in this environment.

How do SSDs address application performance in a virtual environment?

Because they are mechanical devices, hard disk drives can only produce 100 to 200 IOPS per drive, no matter what you do to them. They don't handle random IOPS well: IOPS are small transactions, and the drive head has to bounce around. The only way to make that work effectively is if the storage box is capable of delivering high IOPS.
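The arithmetic behind that ceiling is worth spelling out. A back-of-the-envelope sketch (the workload target and the SSD figure are assumptions; the HDD figure sits within Taneja's 100-to-200 range):

    # How many devices a random-I/O workload needs at a given per-device ceiling.
    import math

    target_iops = 20_000   # hypothetical workload requirement
    hdd_iops = 150         # per hard disk drive, within the 100-200 range above
    ssd_iops = 50_000      # assumed figure for a single SSD

    print(math.ceil(target_iops / hdd_iops))  # 134 hard drives
    print(math.ceil(target_iops / ssd_iops))  # 1 SSD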

That's why companies like Hewlett-Packard's 3PAR have done really well in the new virtual server environment. Their product is targeted to be a random-IOPS type of box. They also have thin provisioning and thin cloning. These new technologies are a godsend for making this virtual server environment work. That's why in the last year or two we've started to see some mission-critical applications migrate over to virtual machines, or VMs, in conjunction with these new architectures on the storage side.
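Thin provisioning, at its core, means a volume advertises a large logical size but consumes physical capacity only when blocks are actually written. A minimal sketch of the idea (mine, not 3PAR's actual implementation):

    # A thin volume maps logical blocks to physical data lazily, on first write.
    class ThinVolume:
        def __init__(self, logical_blocks):
            self.logical_blocks = logical_blocks  # advertised size
            self.mapping = {}                     # logical block -> data

        def write(self, block, data):
            self.mapping[block] = data            # physical space allocated here

        def read(self, block):
            return self.mapping.get(block, b"\x00")  # unwritten blocks read as zeros

        def physical_used(self):
            return len(self.mapping)

    vol = ThinVolume(logical_blocks=1_000_000)  # looks like a huge volume
    vol.write(42, b"data")
    print(vol.physical_used())                  # only 1 block actually consumed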

How does this tie into reconstructing -- or 'deconstructing,' as you call it -- the data center?

I need a new storage architecture, and I have that with these newer technologies addressing the IOPS problem. That's just seeping into the data center, and it's going to create a different level of deconstruction because, effectively, my storage is going to look very different three years from now. It's going to have huge complements of SSDs. Those can be packaged to look just like hard disk drives, so you can stick them into the same chassis where your hard disk drives go. That's one way to introduce flash into the environment. It can also be introduced as a cache at the server level, or as a PCIe card.
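As one example of the server-level placement he mentions, flash can act as a read cache in front of slower shared storage. A minimal LRU sketch (hypothetical; the FlashReadCache name and its parameters are invented for illustration):

    # Serve repeat reads from flash; go to the array only on a miss.
    from collections import OrderedDict

    class FlashReadCache:
        def __init__(self, backend_read, capacity_blocks):
            self.backend_read = backend_read   # function: block -> data
            self.capacity = capacity_blocks
            self.cache = OrderedDict()         # block -> data, in LRU order

        def read(self, block):
            if block in self.cache:
                self.cache.move_to_end(block)  # hit: served from flash
                return self.cache[block]
            data = self.backend_read(block)    # miss: fetch from the array
            self.cache[block] = data
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False) # evict least recently used
            return data

    cache = FlashReadCache(backend_read=lambda b: f"block-{b}", capacity_blocks=1000)
    print(cache.read(7))  # first read misses; repeats are served from cache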

So, what does the future data center look like once this deconstruction is complete?

The entire data center will be virtualized. I'll have virtualization at the application, server, networking and storage levels. I've got a good handle on server virtualization right now, except for mission-critical applications. There's a lot more to be done in the application virtualization space, and network virtualization is at a zero right now. Storage virtualization is still in its infancy, probably at only the 10% level in the data center. But the dream data center will have everything virtualized, and services will be delivered to users and business divisions through this environment.

This new data center will have a "cloud mentality," which means that if I'm the user responsible for all IT in finance, I will go to the IT guy for my division and say, "I want you to charge me for what I use and nothing more. If you don't do that, I will go to a public cloud, because they do let me do that."

What storage technologies are available to help CIOs determine what to charge for storage usage?

Every storage company right now is building its storage to be multi-tenancy-oriented, which means one box can serve X number of masters. It can have data from business division A and business division B, even competing divisions, sitting on it, but it is so secure that the data is never mingled. So we have multi-tenancy now in storage boxes. Every new product coming out is dealing with multi-tenancy and virtual domains; virtual domains are what make one system serve many masters. That's another piece that is going into the data center of the future.
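Once per-tenant usage is metered on such a box, chargeback itself is simple arithmetic: usage times a published rate. A toy sketch (the tenants and the rate are invented):

    # Per-tenant chargeback report: metered usage times a published rate.
    RATE_PER_GB_MONTH = 0.10  # dollars, hypothetical

    usage_gb = {"finance": 4_200, "engineering": 11_500}

    for tenant, gb in usage_gb.items():
        print(f"{tenant}: {gb} GB -> ${gb * RATE_PER_GB_MONTH:,.2f}/month")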

What advice would you give CIOs as they start to create this data center of the future?

Start with server virtualization. That will force you to get the right kind of storage that has multi-tenancy and virtual domains.

At the same time, you need to have a private cloud for data you want to keep in-house that connects to the public cloud for things you want to offload. This requires a whole set of integrations to connect the private cloud to the public cloud.

I'll need management tools and policies to ensure that I don't let data go willy-nilly out to the public cloud. These are all elements of the disruptive process that I need to bring in and integrate, and maybe five years from now I'll have something that meets this new vision of a completely self-service-driven data center: Everything is delivered only as needed, and people pay only for what they use. It has agility and can scale up and down. That's Data Center as a Service.
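The policy tooling he describes boils down to a placement decision gated by data classification. A minimal sketch (the classifications and the rule are invented for illustration):

    # Classified data stays in the private cloud; everything else may be offloaded.
    PRIVATE_ONLY = {"regulated", "customer-pii", "financial"}

    def placement(classification):
        return "private-cloud" if classification in PRIVATE_ONLY else "public-cloud"

    print(placement("customer-pii"))  # private-cloud
    print(placement("test-data"))     # public-cloud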

Let us know what you think about the story; email Christina Torode, News Director.
