Wasted: The Datacenter

By Allen Bernard


For most IT departments, the datacenter is by far the biggest energy consumer. How much power is used, and how much can be saved, depends on each company's compute and storage requirements and on how far along it is in adopting new, energy-saving technologies and datacenter designs.

Most datacenter power (about 50%) goes toward running compute cycles on servers, blades, or mainframes; roughly 25% goes toward cooling, and the remainder is spread across things like air handlers and lighting.
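That breakdown is easy to apply to any facility. Here is a minimal back-of-the-envelope sketch in Python, assuming a hypothetical 1 MW datacenter and the rough percentages above; the figures are illustrative, not measured:

    # Rough power split for a hypothetical 1 MW facility, using the
    # approximate percentages cited above (illustrative only).
    TOTAL_KW = 1000.0  # assumed total facility draw

    compute_kw = TOTAL_KW * 0.50                    # servers, blades, mainframes
    cooling_kw = TOTAL_KW * 0.25                    # air conditioning
    other_kw = TOTAL_KW - compute_kw - cooling_kw   # air handlers, lighting, etc.

    print(f"Compute: {compute_kw:.0f} kW")
    print(f"Cooling: {cooling_kw:.0f} kW")
    print(f"Other:   {other_kw:.0f} kW")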

What this means in real-world terms, according to Noah Horowitz, a senior scientist with the Natural Resources Defense Council, is that, worldwide, about 50 large power plants are dedicated to supplying power just to servers and air conditioners. "And that number is growing exponentially as data needs and storage is increasing."

Increasing Demand

This is because the need for datacenters is, once again, on the rise, said Dave Driggers, CTO of Verari, a maker of high-density blade server clusters, rack-optimized servers and software solutions. Post dot-com-bust, datacenter capacity, like bandwidth, was cheap. But all those cheap CPU cycles have been absorbed. That means new datacenters are getting more and more expensive to build and provision.

"We were just with AT&T and AT&T talked about their hosting business as just a cash cow, money-making machine," said Driggers. "They bought quite a few of the companies that were on the chopping for hosting and now they say that group is printing money."

To offset the high cost of building new datacenters, which, according to Driggers, is 10x what it was just five years ago, most CIOs are looking to maximize the output and capacity of the ones they already have. To do this, they need to get more compute power into (and out of) the same physical space.

Luckily, vendors have stepped up to meet this need with new products like more powerful chip sets, multi-core processors, blades, network-attached processing, datacenter management software, virtualization, and rack-mounted liquid cooling, to name a few.

"Basic datacenter design has been pretty much static the last 20 years ... from a cooling and power perspective, said HP's Ron Mann, director of Engineering, Enterprise Infrastructure Solutions.

"You try to get as much as you can in that rack because that raised floor, that cooling space, is at a premium. That's why 1U servers came about, that’s why smaller drives came about, that's why blades came about, because you want to maximize your compute based on the square footage of datacenter space you have available."

In real terms, this means most datacenters are consuming more power than they did just a few years ago, when a typical rack drew about two to three kilowatts. Today, most racks draw between seven and 10 kilowatts. That means more heat, and more power both to run the servers and to cool them.
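To see why density raises the stakes, consider a quick sketch: essentially every watt delivered to IT gear ends up as heat the cooling plant must remove. The rack wattages below are midpoints of the ranges above, and the cooling overhead factor is an assumption for illustration, not a figure from the article:

    # How rack density drives total load. Assumes all IT power becomes heat
    # and an assumed 0.5 W of cooling power per watt of IT load.
    RACKS = 100
    COOLING_W_PER_IT_W = 0.5  # assumed cooling overhead

    for label, kw_per_rack in [("a few years ago", 2.5), ("today", 8.5)]:
        it_kw = RACKS * kw_per_rack
        total_kw = it_kw * (1 + COOLING_W_PER_IT_W)
        print(f"{label}: {it_kw:.0f} kW of IT load, {total_kw:.0f} kW with cooling")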

On the plus side, this is being done using the same amount of space, which, from a maximization perspective, is exactly what CIOs are after—more computing power from existing facilities.

The Heat

But, as the demand for faster, more-reliable computing continues to increase unabated, new energy-saving solutions will have to be employed to continue this trend. This is because inefficiencies generate every datacenter's worst enemy: heat.

Otherwise, many companies may run into situations where they can no longer squeeze any more compute cycles out of their existing infrastructure and will have to build new capacity or buy it, at a premium, elsewhere.

Fortunately, solutions are available to begin this process. New, high-efficiency server power supplies are coming online that offer up to 90% efficiency, said Peter Panfil, VP of Power Engineering for Liebert Solutions, an Emerson Network Power company. Google, for example, just announced an initiative to get 90%-efficient, 12V power supplies into home computers and its line of servers.

Today's server power supplies are typically only about 70% efficient. In other words, only 70% of the energy the server consumes is turned into work. Eighty-percent-efficient units are available, but buyers have to specify them when they order new servers. Unfortunately, you cannot retrofit existing servers with more efficient power supplies.
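The savings from a more efficient supply compound over every hour a server runs. A hedged sketch of the arithmetic, with hypothetical server wattage and electricity pricing:

    # Annual savings from moving one server from a 70%- to a 90%-efficient
    # power supply. Wattage and price are assumptions, not measurements.
    USEFUL_WATTS = 300      # assumed DC power the server's components draw
    HOURS_PER_YEAR = 8760
    DOLLARS_PER_KWH = 0.10  # assumed electricity price

    def wall_draw(useful_w, efficiency):
        """AC watts pulled from the wall to deliver useful_w of DC power."""
        return useful_w / efficiency

    saved_w = wall_draw(USEFUL_WATTS, 0.70) - wall_draw(USEFUL_WATTS, 0.90)
    saved_kwh = saved_w * HOURS_PER_YEAR / 1000
    print(f"~{saved_kwh:.0f} kWh/year, about ${saved_kwh * DOLLARS_PER_KWH:.0f} per server")

And because the wasted 30% leaves the supply as heat, the more efficient unit also trims the cooling bill.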

The goal, of course, is to reduce heat so that more and more computing can be done in the same space. To do this, Panfil recommends his clients take some simple steps like clearing air duct obstructions, getting rid of excess cabling, and using hot-aisle/cold-aisle setups.

"There's things you can do today in your datacenter to improve efficiency without doing heroic things," said Panfil. "What we talk to folks about is to prioritize the efficiency improvements: A 10-percent reduction in the IT power consumption is bigger than a 10-percent reduction in the cooling and it's bigger than a 10-percent reduction in power."

HP's Mann agrees. He counsels clients to look at the problem holistically. You can't do one thing without affecting another. "You can't just look at it from a chip-perspective or a compute-perspective."

That's why HP offers things like a thermal inspection to see where heat is being generated and cooling is being lost. It is also offering up a new class of servers, c-class blades, that use "power capping" software to throttle back chip power when not in use. This makes them up to 40% more efficient than conventional chip sets in always-on mode.
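HP hasn't published the internals of its power-capping software here, but the basic idea can be sketched: watch utilization and step the processor down to a lower power state when it is idle. Everything below (state names, wattages, thresholds) is hypothetical, not HP's actual code:

    # Hypothetical illustration of the power-capping idea:
    # map recent CPU utilization to an assumed per-chip power state.
    P_STATE_WATTS = {"full": 100, "reduced": 65, "idle": 40}  # assumed draws

    def pick_state(utilization):
        """Choose a power state from a 0.0-1.0 utilization sample."""
        if utilization > 0.60:
            return "full"
        if utilization > 0.20:
            return "reduced"
        return "idle"

    for util in (0.85, 0.35, 0.05):
        state = pick_state(util)
        print(f"utilization {util:.0%} -> {state} ({P_STATE_WATTS[state]} W)")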

Azul Systems, for example, isn't in the energy-saving business but, sensing an opportunity, it is offering up its network-attached processing (NAP) solution as a way to help cut costs. The company's 11U, 16-processor (384-core) on-demand appliance draws just 2.7 kilowatts of power. For datacenter folks, this means they can have a huge reserve of back-up compute power available just when they need it most, without having to keep a bunch of servers sitting idle.
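The appeal shows up in watts per core. A back-of-the-envelope comparison, using the appliance figures above and an assumed conventional server (the 400 W, 8-core box is my stand-in, not a number from Azul):

    # Watts per core: the 2.7 kW, 384-core appliance versus an assumed
    # 400 W, 8-core conventional server.
    print(f"appliance: {2700 / 384:.1f} W/core")   # ~7 W/core
    print(f"server:    {400 / 8:.1f} W/core")      # 50 W/core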

"The reason that the compute utilization is so low across the datacenter today is people have no idea how much compute they really need at a given moment for a given application," said Azul's COO and co-founder, Scott Sellers. "So the only thing they can do is throw more servers at it."

Virtualization is also playing an important part by allowing admins to run more applications on fewer servers. Software that optimizes server loads and power usage is also available. Couple this with facilities improvements such as smart air handlers that maximize air movement, and you begin to put together a viable, leaner, more energy-efficient space that will meet your future needs, at least for now.
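The consolidation math behind virtualization is straightforward. A sketch with an assumed 10:1 consolidation ratio and per-server draw, both hypothetical:

    # Power saved by consolidating lightly loaded servers onto fewer hosts.
    PHYSICAL_SERVERS = 200
    VMS_PER_HOST = 10          # assumed consolidation ratio
    WATTS_PER_SERVER = 450     # assumed average draw

    hosts = -(-PHYSICAL_SERVERS // VMS_PER_HOST)   # ceiling division -> 20
    saved_kw = (PHYSICAL_SERVERS - hosts) * WATTS_PER_SERVER / 1000
    print(f"{hosts} hosts replace {PHYSICAL_SERVERS} servers, saving {saved_kw:.0f} kW")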

"Today's datacenters, most of them run less than 50% efficient," said Verari's Driggers. "So the majority of the power that is going in there is being wasted. The best way to improve the performance of datacenters is through conservation, by not wasting."