by Bill Blausey, SVP and CIO of Eaton Corp.
In Part II of this two-part series, Eaton's SVP and CIO Bill Blausey reviews eight technologies, services, and department-level changes you can employ today to green your IT department.
1. Implementing virtualization
Though many organizations use virtualization primarily to simplify hardware management, enhance business continuity and conserve data center floor space, it can also significantly reduce power and cooling costs by consolidating underutilized yet energy-hungry servers. Indeed, a properly architected server virtualization solution can lower server energy consumption by up to 82 percent, according to tech analyst firm Gartner. Of course, virtualization usually imposes significant upfront hardware, software and services expenses, but Gartner estimates that most companies recover such costs within 24 months. As a result, adoption of virtualization is increasing rapidly.
2. Deploying Energy Star servers
Standardizing on servers that qualify for the federal government’s Energy Star designation can help you free up stranded power capacity. Servers that meet Energy Star requirements use 30 percent less power on average, according to the U.S. Department of Energy and the U.S. Environmental Protection Agency, which jointly administer Energy Star. However, Energy Star-compliant products often cost more than comparable devices. Companies typically recoup that premium over time in the form of lower power spending, but that may mean little to an IT executive with limited procurement funds today.
3. Freeing stranded power and cooling capacity
Trimming waste from power and cooling systems can be a safe and economical way to reduce energy consumption and greenhouse gas emissions. For example, many data centers rely on aging uninterruptible power systems. Replacing them with newer, more energy-efficient models is a low risk, relatively low cost way to save money on power and shrink your carbon footprint. What’s more, many electrical utilities offer financial incentives that can accelerate your returns on a UPS investment.
Returns on high-efficiency backup systems can be substantial. In the 1990s, a typical UPS was only about 80 to 82 percent efficient under standard loading conditions. Today’s models, however, routinely achieve 92 to 95 percent efficiency, and newer UPS systems with advanced energy-saving capabilities, such as those with ESS, can save you even more.
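The arithmetic behind those returns is straightforward: input power equals IT load divided by UPS efficiency, so a more efficient unit draws less from the utility to carry the same load. A minimal sketch, in which the 500 kW IT load and $0.10/kWh electricity rate are illustrative assumptions (only the 82 and 95 percent efficiencies come from the figures above):

```python
# Estimate annual savings from replacing a legacy UPS with a high-efficiency model.
# The IT load and electricity rate below are illustrative assumptions.

def annual_ups_savings(it_load_kw, old_eff, new_eff, rate_per_kwh, hours=8760):
    """Return dollars saved per year by raising UPS efficiency.

    Input power = IT load / efficiency, so a more efficient UPS
    draws less utility power to deliver the same IT load.
    """
    old_input_kw = it_load_kw / old_eff
    new_input_kw = it_load_kw / new_eff
    saved_kwh = (old_input_kw - new_input_kw) * hours
    return saved_kwh * rate_per_kwh

savings = annual_ups_savings(it_load_kw=500, old_eff=0.82, new_eff=0.95,
                             rate_per_kwh=0.10)
print(f"Estimated annual savings: ${savings:,.0f}")
```

On these assumptions, the upgrade saves roughly $73,000 a year in energy alone, before any utility incentives are counted.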
Similarly, equipping your air handling system with a variable frequency drive (VFD) is another affordable means of recapturing stranded power. Most organizations make less use of their servers at night and on weekends than they do during business hours, yet a fixed-speed air handling system distributes cool air at precisely the same rate all week long. A VFD lets the fans slow down to match actual demand during those off-peak periods.
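The reason a VFD pays off so well follows from the fan affinity laws: fan power varies roughly with the cube of fan speed, so even a modest slowdown during off-peak hours yields an outsized drop in energy use. A minimal sketch, in which the weekly load profile is an illustrative assumption:

```python
# Fan affinity law: fan power scales roughly with the cube of speed,
# so running at 70% speed draws about 0.7**3 = 34% of full-speed power.
# The peak/off-peak hour split below is an illustrative assumption.

def relative_fan_energy(speed_fraction, hours):
    """Energy (in relative units) for running fans at a given speed fraction."""
    return (speed_fraction ** 3) * hours

# Illustrative week: 60 peak hours at full speed, 108 off-peak hours
# at 70% speed, compared with 168 hours at full speed without a VFD.
fixed = relative_fan_energy(1.0, 168)
vfd = relative_fan_energy(1.0, 60) + relative_fan_energy(0.7, 108)
print(f"VFD profile uses {vfd / fixed:.0%} of fixed-speed fan energy")
```

On this profile, slowing the fans for nights and weekends cuts fan energy use by roughly 40 percent.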
4. Leveraging free cooling
Most data centers chill hot exhaust air from servers and then recirculate it. Facilities that utilize “free cooling,” by contrast, simply pump hot internal air out of the building and pipe cool external air in. The end result can be a dramatic drop in cooling costs. In fact, based on the results of a ten-month experiment involving nearly 900 heavily utilized production servers in a high-density data center, Intel Corp. asserts that free cooling techniques can reduce the total amount of power a typical data center uses for cooling by approximately 67 percent. That could save a 10 MW data center roughly $2.87 million a year, Intel notes.
However, free cooling isn’t an option for every data center: external air temperatures in some locales are simply too high to cool servers properly. Even where it is viable, external air must be adequately filtered and conditioned to keep the server environment reliable.
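Intel's dollar figure is easy to sanity-check. A back-of-envelope sketch, in which the assumed cooling share of facility power and the electricity rate are illustrative guesses, not Intel's published inputs:

```python
# Rough sanity check of the free-cooling savings figure.
# The cooling share and electricity rate are illustrative assumptions;
# only the 10 MW size and 67% reduction come from the article.

facility_kw = 10_000         # 10 MW data center
cooling_share = 0.5          # assume ~half of facility power goes to cooling
reduction = 0.67             # Intel's reported free-cooling reduction
rate_per_kwh = 0.10          # assumed electricity rate, $/kWh

cooling_kw = facility_kw * cooling_share
saved_kwh_per_year = cooling_kw * reduction * 8760
print(f"Estimated savings: ${saved_kwh_per_year * rate_per_kwh:,.0f}/year")
```

This lands near the $2.87 million a year Intel cites, which suggests assumptions in roughly this ballpark.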
5. Using enterprise monitoring
When it comes to data center energy consumption, IT managers have long agreed with the old adage that “you can’t manage what you can’t measure.” Yet, until recently, the only way to deliver energy usage data to network operations centers was to install interfaces between IT management systems and building automation systems. Today, that is no longer the case: many modern UPSs and power distribution units include built-in metering that reports energy consumption directly to IT management tools.
Armed with such figures, organizations can measure their power efficiency against comparable organizations and set realistic efficiency targets. The PUE metric can help with this task. Developed by The Green Grid, a technology industry non-profit consortium dedicated to raising data center efficiency, PUE expresses how much power goes to overhead such as power conditioning and cooling by dividing the total power entering an IT facility by the total power used by IT equipment in that facility, as follows:
PUE = (Total Facility Power) ÷ (IT Equipment Power)
Thus, for a data center that consumes 1,000 kW of power, of which 400 kW is used by IT equipment:
PUE = 1000 ÷ 400 = 2.5
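The same arithmetic expressed as a minimal sketch (a PUE of 1.0 is the theoretical ideal, with every watt reaching IT gear; higher values mean more overhead):

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power.

    1.0 is the theoretical ideal (all power reaches IT equipment);
    higher values mean more overhead for cooling and power conditioning.
    """
    return total_facility_kw / it_equipment_kw

print(pue(1000, 400))  # the worked example above: 2.5
```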
Combined, newer energy metering technologies and metrics like PUE can help companies benchmark their power consumption against that of similar data centers.
6. Raising server inlet temperatures
For years, conventional wisdom has held that data center thermostats should be set at roughly 72 degrees Fahrenheit. According to recent studies by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), however, most data centers can safely operate at temperatures as high as 80 degrees Fahrenheit. Raising data center temperatures even a few degrees can save you thousands of dollars a year, depending on the size of your facility.
However, should cooling systems in an 80-degree Fahrenheit/26.67-degree Celsius data center fail, IT and facilities managers will have significantly less time to react before their servers reach thermal shutdown. Additionally, operating your data center at higher temperatures can shorten the lifespan of UPS batteries, potentially resulting in higher maintenance and replacement costs. Companies must decide whether the savings associated with higher inlet temperatures justify such risks and expenses.
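To put a rough number on the potential savings, one commonly cited rule of thumb holds that each one-degree-Fahrenheit increase in setpoint trims cooling energy by around 4 percent. A minimal sketch using that rule, in which the cooling load, the per-degree figure and the electricity rate are all illustrative assumptions:

```python
# Rough estimate of cooling savings from raising the setpoint.
# The cooling load, 4%-per-degree rule of thumb and electricity rate
# are all illustrative assumptions, not figures from the article.

cooling_kw = 300             # assumed cooling load of a mid-sized facility
savings_per_degree_f = 0.04  # commonly cited rule of thumb
degrees_raised = 6           # e.g., 72 F -> 78 F
rate_per_kwh = 0.10          # assumed electricity rate, $/kWh

saved_kwh = cooling_kw * savings_per_degree_f * degrees_raised * 8760
print(f"Estimated annual savings: ${saved_kwh * rate_per_kwh:,.0f}")
```

On these assumptions, a six-degree raise is worth tens of thousands of dollars a year, consistent with the savings range described above, to be weighed against the reaction-time and battery-life risks.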