Feature

Managing the Energy-Efficient Data Center: A Special Report for CIOs

Market researcher IDC predicts that 10% of all servers sold in the U.S. this year will be blades. "We're seeing an explosion of volume-density servers, with 2007 as a crossover year," says Jean Bozman, vice president of global enterprise server solutions at IDC.

Unfortunately, advancements in power infrastructure haven't kept up with data center technology. Batteries, generators and fire extinguishers look much the same as they did decades ago. "Data center power is 1940s technology," says Dr. Werner Vogels, vice president and CTO of Amazon.com. A data center therefore has a finite power budget that isn't likely to grow anytime soon, so CIOs must improve server density and cooling techniques to make the most of limited space and to reduce wasted energy.

On the server front, blade servers bring a high level of density and power efficiency to the data center. A typical blade chassis holds eight to 16 blades, which means the chassis exhausts a lot of heat in a very small area. Airflow must be fast and concentrated enough to keep the blades from overheating. At 30 kilowatts of power per rack, a data center needs two five-ton computer room air conditioners (CRACs), according to Eaton. Emerson Network Power reports that cooling accounts for 37% of electricity usage in a well-designed data center.
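Eaton's sizing holds up if you run the numbers. Here is a back-of-the-envelope check, assuming the standard conversion of roughly 3.517 kW of heat removed per ton of refrigeration; the function below is illustrative, not Eaton's methodology:

```python
import math

KW_PER_TON = 3.517   # 1 ton of refrigeration removes ~3.517 kW of heat
CRAC_TONS = 5        # capacity of a single five-ton CRAC unit

def cracs_needed(rack_kw: float) -> int:
    """Number of five-ton CRAC units needed to remove rack_kw of heat."""
    tons = rack_kw / KW_PER_TON          # ~8.5 tons for a 30 kW rack
    return math.ceil(tons / CRAC_TONS)   # round up to whole units

print(cracs_needed(30))  # -> 2, matching the Eaton figure
```

A 30 kW rack produces about 8.5 tons' worth of heat, so two five-ton units is the smallest whole-unit configuration that covers it.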

A data center cannot run on blades alone because it would be impossible to cool. Even HP admits today's data center tops out at around 35% blades, although the blade maker is working to double this figure through better management and cooling practices. "If you don't focus on cooling, your data center could be unusable," says Vyas, whose new data center contains 20% blades in five chassis. "Within minutes all the servers will literally melt, especially the blade servers."

Viejas Enterprises' new data center has raised floors and perforated tiles and is designed with hot and cold aisles. A raised floor allows cold air to flow to hard-to-reach areas. If more cooling is required for a specific aisle or group of racks, existing tiles can be swapped for tiles with a higher percentage of perforation. There is a limit, however: perforated tiles must still meet weight-load requirements.
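Tile selection thus becomes a constrained choice: the most open tile that can still carry the load. A minimal sketch of that trade-off follows; the tile catalog is purely hypothetical, since real perforation and load ratings vary by vendor:

```python
# Hypothetical catalog of (perforation %, rated load in lbs).
# Real vendor ratings differ; these values are for illustration only.
TILES = [
    (25, 2500),
    (40, 2000),
    (56, 1500),
]

def best_tile(required_load_lbs: int):
    """Highest-perforation tile that can still carry the required load."""
    candidates = [t for t in TILES if t[1] >= required_load_lbs]
    return max(candidates, key=lambda t: t[0], default=None)

print(best_tile(1800))  # -> (40, 2000): the 56% tile can't take the load
```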

Virtualization plays a key role in cooling, too. "It's a key enabler not so much in the consolidation perspective but of pooling resources and moving workloads from one system to another," IBM's Lechner says. "This can eliminate a hot spot or identify a system that's underutilized for long periods of time, so you move the remaining workload off that system and shut it down completely and save the energy associated with it."
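The pattern Lechner describes (watch utilization, migrate workloads off persistently idle hosts, then power those hosts down) can be sketched in a few lines. This is a minimal illustration under stated assumptions; the Host class, threshold and migration step are hypothetical stand-ins, not IBM's tooling:

```python
from dataclasses import dataclass

IDLE_THRESHOLD = 0.15   # hosts below this average load count as underutilized
CAPACITY = 1.0          # normalized capacity of each host

@dataclass
class Host:
    name: str
    load: float              # current normalized CPU load
    powered_on: bool = True

def consolidate(hosts: list[Host]) -> None:
    """Move work off underutilized hosts and shut the empty hosts down."""
    donors = [h for h in hosts if h.powered_on and h.load < IDLE_THRESHOLD]
    for donor in donors:
        # Find a target with enough headroom to absorb the donor's load.
        target = next((h for h in hosts
                       if h.powered_on and h is not donor
                       and h.load + donor.load <= CAPACITY), None)
        if target:
            target.load += donor.load   # migrate the workload
            donor.load = 0.0
            donor.powered_on = False    # shut down; save the energy

hosts = [Host("a", 0.62), Host("b", 0.08), Host("c", 0.55)]
consolidate(hosts)
print([(h.name, round(h.load, 2), h.powered_on) for h in hosts])
# -> [('a', 0.7, True), ('b', 0.0, False), ('c', 0.55, True)]
```

Host "b" sits below the idle threshold, so its workload moves to "a" and "b" powers off, which is exactly the hot-spot elimination Lechner points to.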

Then there are internal processes that conserve energy. Vyas' data center has a separate room where staff members prepare and test server applications, limiting the number of times anyone has to enter the actual server room. Emerson Network Power advises CIOs to keep data center doors closed and use a vapor seal to control humidity levels.

This was first published in November 2007
