Technology innovation drives data center of the future

Future-thinking CIOs plan to get the most out of their data center resources while reducing how much they spend on power and cooling.

Kermit the Frog was right -- it's not easy being green. But it's easier than it used to be. Enterprise-sized companies, including IBM and The Coca-Cola Co., have taken the lead in making sure their IT operations are as "green" as possible. But what about the green data center of the future? What strategies are in play now to move the data center beyond green?

Analysts agree that there will be no decrease in power demands from data centers. The good news is that the 15% to 30% yearly increase in operating costs that companies have been experiencing seems to be leveling off. Increasingly, companies will seek "more efficiency using the resources you have," said Greg Clark, global portfolio director, data center services at Computer Sciences Corp. "We won't see a decrease in power usage by data centers in the future. It will be more about balancing power, space and cooling." The granular details of data center infrastructures will become increasingly important to manage that balance, Clark noted.

There's no question that CIOs at large companies are already taking steps to make the data center more energy efficient. For example, companies have instituted water-cooling capabilities where possible, deployed "hot aisle/cold aisle" cooling layouts and consolidated servers using virtualization. As with smaller companies, virtualization has provided a dual benefit: lower hardware costs and lower power bills.

To that end, enterprise companies have taken the lead in using virtualization as a consolidation tool. In a Forrester Research Inc. study released in May and authored by Frank Gillett, a survey base of 179 enterprise hardware purchasing decision makers reported they expect 45% of their x86 servers to be virtual within two years. Thirty-seven percent of a slightly larger pool of 197 listed improved power and cooling as very important factors in making the decision to virtualize.

St. Louis-based MasterCard Worldwide has used virtualization widely to cut hardware costs and lower energy bills. About 45% of the company's x86 servers are virtualized, as are 20% of its Unix servers.

The company's main data center is huge. MasterCard occupies a 550,000-square-foot data center on 52 acres in O'Fallon, Mo., and has 1.8 petabytes of available storage. The data center is the company's largest and was built from the ground up with energy efficiency in mind, said Jim Hull, group executive, global operations for MasterCard International's global technology and operations group.

"When we built the data center [in 2001], we built it with sufficient power and air conditioning to accommodate significant growth," Hull said. Specifically, the data center has floors that are raised to 36 inches in the center for increased airflow. The ceiling height has been raised to accommodate server build that goes up and not out, he added. Water-cooling capability is built in for future use if it's needed and the data center uses "hot aisle/cold aisle" cooling for heat dissipation. Power and air conditioning hardware is housed outside the data center.

Blade servers are not a significant part of MasterCard's data center strategy for now. This mirrors the results of Forrester's virtualization survey. In that study, only 18% of 179 hardware decision makers at enterprise companies said they were highly motivated to buy more blade servers in the next two years. Thirty percent simply agreed that they would buy more blade servers during that same period. Vendor lock-in was mentioned as one of the factors for shunning blade servers. Hull concurred: "They're not standardized yet."

New technologies increase energy consumption

Most large companies are more interested in getting more capacity out of their data centers, and likely don't have a strategy in place to decrease energy usage per se, said Peter Panfil, vice president and general manager for the Liebert AC Power business unit of Emerson Network Power, which is based in St. Louis. "Availability can't take a back seat to increasing the efficiency of power usage."

One key to increasing data center capacity will be teaming up IT and facilities management and giving that team joint control over the data center, Panfil said. The CIO, IT managers and facilities managers usually sit at opposite corners of the table. Hardware buying decisions, air conditioning systems modifications, data center heating and cooling strategies all need input from both IT and facilities for optimal capacity.

But new technologies also affect the capacity equation, Panfil said. "Higher-density application servers running in one rack space consume 20 to 40 kilowatts, compared to that same rack preconsolidation, which might have used 10 kilowatts," he said. Panfil uses a "kilowatt per server rack" measure that assumes one ton of cooling power per floor tile in the data center. "Virtualization pushes the utilization rates up on the servers, but it also increases the heat density in the data center," he said.
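
To make Panfil's rack arithmetic concrete, here is a minimal sketch in Python. The rack loads follow the figures quoted above; the conversion of one ton of cooling to roughly 3.517 kW of heat removal is a standard figure, not one from the article.

# Sketch of Panfil's "kilowatt per server rack" arithmetic. Rack loads
# follow the ranges quoted in the article; the cooling conversion
# (one ton of cooling removes ~3.517 kW of heat) is a standard figure.

KW_PER_TON = 3.517

def cooling_tons_needed(rack_load_kw: float) -> float:
    """Cooling tonnage needed to remove a rack's heat load, assuming
    essentially all electrical input ends up as heat."""
    return rack_load_kw / KW_PER_TON

for label, load_kw in [("pre-consolidation rack", 10),
                       ("consolidated rack, low end", 20),
                       ("consolidated rack, high end", 40)]:
    print(f"{label}: {load_kw} kW -> "
          f"{cooling_tons_needed(load_kw):.1f} tons of cooling")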

Panfil advocated buying servers with low-power processors "to pick up a 2% to 3% capacity increase overnight." Consolidating high-density servers and using supplemental cooling for the resulting "heat islands" is another strategy. Hot aisle/cold aisle cooling remains a tried-and-true way to save as well; although it is mentioned a lot, many companies still haven't deployed hot aisle/cold aisle layouts.

Dual-path power, separate air conditioning build-outs at the center of modernization

But the green data center of the future will require more than adjustments to floor layouts and low-power processors. Infrastructure and design changes will be needed, including options for dual-path power and separate air conditioning build-outs. "Most data centers will require critical infrastructure changes to support technologies like blade servers and grid computing," Panfil said. To appreciate the scale of these changes, consider this standard formula: each kilowatt of power brought into a data center requires 2 kilowatts of cooling power. "One of the challenges will be to manage that ratio," Clark said.
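
A quick sketch of that ratio, using a hypothetical 500-kilowatt IT load (the load figure is illustrative, not from the article):

# Sketch of the article's stated rule of thumb: each kilowatt of power
# brought into the data center requires 2 kW of cooling power.

COOLING_RATIO = 2.0  # cooling kW per kW of IT load, per the article

def total_facility_kw(it_load_kw: float) -> float:
    """Total power draw: IT load plus cooling under the 1:2 rule."""
    return it_load_kw + it_load_kw * COOLING_RATIO

# Hypothetical example: a 500 kW IT load implies 1,000 kW of cooling
# and 1,500 kW overall.
print(total_facility_kw(500))  # 1500.0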

"In the past, you had data center managers who would say, 'I'll never go above 40 kilowatts per square foot on the raised floor, and cooling isn't an issue'. Now you've got blade servers coming in that require 100 kilowatts per square foot," he said. During the past 10 to 15 years, most data centers have been concentrated on the mainframe footprint, Clark added. It's tough to make a data center resource efficient with new technology on a legacy infrastructure.

Future infrastructure changes may also need to accommodate streamlined power distribution channels -- to bring in computer equipment that runs on direct current (DC) instead of standard alternating current (AC). "With AC you go through a lot of conversions before the power is consumable. You lose a little bit of power with each conversion," Clark said. With straight DC distribution to the raised floor, you'd deliver power directly to the equipment, he added. "The industry isn't mature enough yet to handle this change, but there are vendors modifying equipment now that will run on DC," Clark said. Using DC power in conjunction with uninterruptible power supply systems could lower power consumption by 10% to 15%, he added.
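
To illustrate Clark's point, here is a rough sketch of how cumulative conversion losses add up. The per-stage efficiencies are illustrative assumptions, not figures from the article, but they land in the ballpark of the 10% to 15% savings he cites.

# Rough sketch of why fewer conversions help: multiply per-stage
# efficiencies to get end-to-end power delivery. The efficiencies
# below are illustrative assumptions, not figures from the article.

from functools import reduce

# Typical AC path: UPS rectifier (AC->DC), UPS inverter (DC->AC),
# then the server power supply (AC->DC).
AC_PATH = [0.96, 0.96, 0.90]
# Straight DC distribution: one rectification stage at the facility
# level, then DC-DC regulation at the server.
DC_PATH = [0.97, 0.94]

def end_to_end(stage_efficiencies):
    """Fraction of input power that reaches the load."""
    return reduce(lambda acc, eff: acc * eff, stage_efficiencies, 1.0)

ac_eff = end_to_end(AC_PATH)  # ~0.83
dc_eff = end_to_end(DC_PATH)  # ~0.91
print(f"AC path delivers {ac_eff:.0%} of input power")
print(f"DC path delivers {dc_eff:.0%} of input power")
print(f"Relative savings: {(dc_eff - ac_eff) / ac_eff:.0%}")  # ~10%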

Computer room air conditioning units for large data centers are another innovation. The unit sits on the raised floor, pushing cold air out and drawing in hot air. "The minute there's hot air, you exhaust it," Clark said.

There's no question that ripping out an older infrastructure isn't practical, even for large companies. Considering this, some large companies are now evaluating dual power distribution paths for their data centers. In this scenario, DC power would be delivered straight to some equipment, while AC power would continue to feed other hardware. "This entails redoing a small piece of the infrastructure and lets you leave the rest," Clark said. However, retrofitting the legacy infrastructure without interrupting business operations remains an issue.

Sending many data center functions out-of-house using the cloud computing model seems like an obvious solution for resource conservation. "Cloud computing will be useful for ordinary, noncritical data but not for highly transactional systems," Clark said. "Data centers will become more of a utility than you see today."

Tom Bittman, vice president and chief of research for the infrastructure and operations practice at Stamford, Conn.-based Gartner Inc., said, "In the future, the data center is going to look an awful lot like a large, private cloud. Customers will pay for usage, just like we pay the power company today."

Let us know what you think about the story; email editor@searchcio.com.

This was first published in November 2008
