Saving energy with cloud computing and air-conditioning

Mar 05, 2012

T-Systems and Intel are continuing their joint research after a successful first round of findings. The objective is to reduce power consumption in data centers as far as possible. The core question: how can high server performance be reconciled with low energy demand, and what role does cloud computing play in this equation?

Data centers house many hidden energy guzzlers that take huge chunks out of operating budgets and leave a large carbon footprint. According to calculations by the Borderstep Institute for Innovation and Sustainability, German data centers consumed a total of ten terawatt-hours (TWh) of electricity in 2008 - the same amount produced by four medium-sized coal-fired power plants. This costs operators about EUR 1.1 billion per year. If this trend - measured between 2000 and 2008 - continues, power consumption will rise to 14.7 TWh by 2013. This would double the costs to EUR 2.2 billion, provided that energy prices remain the same.

It is about time to locate these energy guzzlers and shut them down - a task T-Systems and Intel have taken on together. Since September 2009 they have already achieved a lot in their test data center in Munich with its approximately 180 servers: within one year they reduced the Power Usage Effectiveness (PUE) from an initial 1.8 to 1.3. This ratio puts the total power used by a data center in relation to the power consumed by the computers themselves. A PUE of 1.8 thus means that nearly as much power goes into cooling and other overhead as into the actual operation of the computers (see box: PUE).
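Written out as a formula, the ratio described above reads as follows; the symbols are merely shorthand for the quantities named in the text, not notation used by the project:

\[
\mathrm{PUE} \;=\; \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}},
\qquad
\mathrm{PUE} = 1.8 \;\Rightarrow\; E_{\text{overhead}} = 0.8 \cdot E_{\text{IT equipment}}
\]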

Simple ways to reduce energy consumption
One discovery surprised even the ICT specialists: "Increasing energy efficiency in a data center is not as difficult as generally assumed," says Dr. Rainer Weidmann, project manager and IT architect at T-Systems in DataCenter2020. "Given optimal server utilization, even the combination of relatively simple structural changes and intelligent equipment leads to enormous improvements."

The use of cloud computing, and the more intelligent server utilization it enables, alone reduces the server pool's power consumption by 80 percent, because fewer machines run in total. Whereas one customer previously occupied only one-fifth of a single server with their own software, cloud computing allows customers to share computers, operating systems, and programs. This increases server utilization significantly and lowers total power consumption.
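A rough back-of-the-envelope sketch of this consolidation effect; only the one-fifth utilization figure comes from the article, while the server count and per-server power draw are invented purely for illustration:

```python
# Back-of-the-envelope estimate of the consolidation effect described above.
# Only the 1/5 utilization figure comes from the article; the server count
# and per-server power draw are illustrative assumptions.

servers_before = 100          # dedicated servers, each ~20% utilized (assumed)
utilization_before = 0.20
watts_per_server = 300        # assumed average draw per server

# With cloud-style sharing, the same workloads fit on roughly 1/5 of the machines.
servers_after = int(servers_before * utilization_before)

power_before = servers_before * watts_per_server
power_after = servers_after * watts_per_server
savings = 1 - power_after / power_before

print(f"{servers_after} servers instead of {servers_before}, "
      f"about {savings:.0%} less server power")   # -> about 80% less
```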

Air-conditioning in server rooms offers additional savings potential. Fans with electronic speed controls allow forced-air cooling to be adjusted to actual demand. Reducing fan speed to 50 percent already cuts their energy consumption by up to 90 percent.
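This disproportionate saving matches the textbook fan affinity law, under which a fan's power draw falls roughly with the cube of its speed. A short sketch of that relationship; the cubic exponent is the standard approximation, not a value measured in these tests:

```python
# Fan affinity law: power draw scales roughly with the cube of fan speed.
# The cubic exponent is the standard textbook approximation, not a value
# measured in the DataCenter2020 tests.

def relative_fan_power(speed_fraction: float) -> float:
    """Power draw relative to full speed for a given speed fraction."""
    return speed_fraction ** 3

for speed in (1.0, 0.75, 0.5):
    power = relative_fan_power(speed)
    print(f"{speed:.0%} speed -> {power:.1%} power, {1 - power:.1%} saved")
# 50% speed -> 12.5% power, roughly the "up to 90%" saving cited above
```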

Warm and cold stay separate
Another measure is to keep warm and cold air strictly separate so that no cooling energy is wasted in the server room. The technical specialists installed doors at the beginning and end of each row of server cabinets and put a roof over the cold aisle in between. They sealed leaks in the raised floor so that air can no longer escape, and they closed empty spaces in the cabinets with blanking plates to prevent hot spots from forming.

Last but not least, the right temperature control is a decisive factor. Today, cooling air at 22 to 23 degrees Celsius normally flows into the servers. According to recommendations by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), this temperature can easily be raised to 27 degrees. The researchers in the data center found that every degree of temperature increase saves approximately four percent in air-conditioning costs. They also doubled the IT load per computer cabinet from approximately five kilowatts (kW). This is not a contradiction, because "although it increases total energy consumption, it also increases the utilization of each computer cabinet," says Weidmann.
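Taken at face value, the four-percent-per-degree rule adds up over the roughly four-to-five-degree increase. A small sketch of that arithmetic; whether the savings add linearly or compound is an assumption, as the article only gives the per-degree figure:

```python
# Rough estimate of cooling savings from raising the supply-air temperature,
# using the ~4% per degree figure from the article. Whether the savings add
# linearly or compound is an assumption; both variants are shown.

saving_per_degree = 0.04
degrees_raised = 27 - 22.5          # from ~22-23 degrees C up to 27 degrees C

linear = saving_per_degree * degrees_raised
compounded = 1 - (1 - saving_per_degree) ** degrees_raised

print(f"linear estimate:     {linear:.0%}")      # ~18%
print(f"compounded estimate: {compounded:.0%}")  # ~17%
```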

Research continues
Even though they achieved their objective of lowering the PUE to 1.3 relatively quickly, T-Systems and Intel do not want to stop there: they have already identified further ways to optimize energy consumption. Among other measures, they increased the energy density to over 20 kW per computer cabinet and tested the cabinets' load capacity with both cold and hot aisle containment. The result: the servers can be operated at this higher IT load using standard technology and without compromising safety or availability. The researchers found no significant difference in efficiency gains between cold aisle and hot aisle containment.

In the tests, cold aisle containment proved to be the more fail-safe option: contained cabinets with an energy density of 5.5 kW per cabinet took three times as long to reach the critical threshold of 35 degrees Celsius as cabinets without containment. This means that contained cabinets with an energy density of 17.5 kW per unit have the same time buffer before emergency-power cooling is activated as conventional, uncontained cabinets with a load of 5.5 kW.

Integrated approach is important
So what happens next? "The decisive factor for all energy efficiency improvements is an integrated approach that involves the entire process chain from energy supply to energy consumption," says energy specialist Weidmann. "All measures must be coordinated with one another. Managing both the environment and the infrastructure in the data center plays the most important role in this process."

In the next research phase, the T-Systems and Intel specialists intend to investigate IT equipment and its energy consumption. This includes test series with energy-efficient power supplies as well as server and CPU power management.

The specialists also intend to test power capping and server parking. With power capping, administrators electronically limit a server's energy consumption. Even during peak loads the servers then never run at full power and therefore do not require full-capacity air-conditioning. This reduces the risk of overloading the existing power supply, allows data centers to be planned optimally from the start, and lets more servers be accommodated on the same floor space. Server parking, on the other hand, consolidates computing loads onto a subset of the available machines while completely shutting down other, underutilized computers.
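As a rough illustration of the server-parking idea, the following sketch packs the current workloads onto as few machines as possible and marks the rest as candidates for shutdown. The capacity figure and the greedy packing strategy are illustrative assumptions, not details of the planned tests:

```python
# Minimal sketch of server parking as described above: pack the current
# workloads onto as few machines as possible and shut the rest down.
# The normalized capacity and the greedy first-fit packing are
# illustrative assumptions, not details from the DataCenter2020 project.

from typing import List

SERVER_CAPACITY = 1.0   # normalized capacity per server (assumed)

def plan_parking(workloads: List[float]) -> List[List[float]]:
    """Greedy first-fit-decreasing packing; returns the loads on each active server."""
    active: List[List[float]] = []
    for load in sorted(workloads, reverse=True):
        for server in active:
            if sum(server) + load <= SERVER_CAPACITY:
                server.append(load)
                break
        else:
            active.append([load])
    return active

# Current utilization of six lightly loaded servers (assumed figures).
workloads = [0.3, 0.2, 0.4, 0.1, 0.25, 0.15]
active = plan_parking(workloads)
print(f"{len(active)} servers stay on, {len(workloads) - len(active)} can be parked")
```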

The end result of all the test series is a concept for comprehensive server energy management. At that point the PUE will lose its importance as the sole measure of energy efficiency. Herein lies the paradox: when IT components and servers consume less energy - partly thanks to a higher share of virtualization for cloud computing - the PUE actually rises again. "It's a simple math exercise," says Weidmann. "The PUE divides the total amount of energy consumed in the data center by the absolute IT energy consumption. If the latter decreases while the overhead stays the same, the PUE increases. An integrated management approach achieves significantly more for energy efficiency than a simple measurement. PUE is not everything."
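Weidmann's arithmetic can be illustrated with a few invented figures; the absolute numbers below are not from the project, only the ratio matters:

```python
# Illustration of the PUE paradox described above. The absolute energy
# figures are invented purely to show the arithmetic.

overhead_kwh = 300            # cooling, ventilation, lighting (assumed, unchanged)
it_before_kwh = 1000          # IT consumption before efficiency measures (assumed)
it_after_kwh = 700            # IT consumption after virtualization etc. (assumed)

pue_before = (it_before_kwh + overhead_kwh) / it_before_kwh   # 1.30
pue_after = (it_after_kwh + overhead_kwh) / it_after_kwh      # ~1.43

print(f"PUE before: {pue_before:.2f}, PUE after: {pue_after:.2f}")
# Total consumption fell, yet the PUE got worse - hence "PUE is not everything".
```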

PUE measures energy efficiency
The industry standard "Power Usage Effectiveness" (PUE) is a metric for determining the energy efficiency of data centers. It puts the total amount of energy a data center consumes in relation to the amount of energy used solely for computing. The lower the ratio, the lower the consumption of resources and budgets: a factor of 1 means that the entire supplied energy is converted into computing performance, while a factor of 2 means that the surrounding infrastructure (cooling, ventilation, lighting) consumes as much energy as the servers themselves. Typical data centers today range from 1.7 (relatively efficient) to 3 (poor).

Contact

Deutsche Telekom

Tel. +49 228 181 4949
media@telekom.de
