Karel Haverkorn, consultant for energy saving in ICT, Agentschap NL: Energy efficiency is hard to compare across data centers. That is why The Green Grid (TGG) tells us that Power Usage Effectiveness (PUE) is not meant to be used for comparing one data center to another. What set of data center types can we devise so that energy efficiency can be compared between data centers of a similar type?
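
For readers unfamiliar with the metric, PUE is The Green Grid's ratio of total facility energy to IT equipment energy. A minimal sketch, with made-up figures, shows why a single headline number says little on its own:

```python
# Illustrative PUE calculation; all figures are invented, not from any
# real facility. PUE = total facility energy / IT equipment energy.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: 1.0 is the theoretical ideal."""
    return total_facility_kwh / it_equipment_kwh

# Two sites with the same absolute overhead (500 kWh) report very
# different PUEs simply because their IT loads differ.
print(pue(total_facility_kwh=1_500, it_equipment_kwh=1_000))  # 1.5
print(pue(total_facility_kwh=600, it_equipment_kwh=100))      # 6.0
```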

Kevin S Farnsworth, owner, Green Lane Design: There are many types of facilities that house IT equipment. Beyond hosting sites and owner-dedicated sites, we need to consider reliability rating (Tier), age and size. We should also consider the functional mix: the percentage of capacity dedicated to network, server and storage equipment. A matrix of these and other categories could give data center operations managers a realistic efficiency target for their type of facility. PUE is something that should be monitored within the organization and used over time as a gauge of whether operational changes have improved efficiency.

Gabe Andrews, Greater St Louis: Energy efficiency metrics should remain in-house, used as internal performance benchmarks rather than published in any magazine or online forum. For an enterprise, a PUE of 3.3 may be acceptable given the multiple levels of redundancy built into the facility to avoid downtime. Taken out of context in the market, though, that single number looks outlandish and inefficient.

Dr Ian Bitterlin: Apart from ‘type’ – and that has to include ‘appetite for risk’ – you have to take into account ‘load vs capacity’ and weather data, or at least latitude, altitude, rainfall and prevailing wind. In other words, impossible! The real question is why would you want to?

Karel Haverkorn: There is a large portion of the northern hemisphere where temperatures are low enough for free cooling most of the year. Governments need to know whether data center owners are ‘doing what they are supposed to’, both to enforce environmental laws and to grant fiscal allowances. This can only be done by comparing data centers. But by judging all data centers by the same yardstick, we prejudice some. Some governments are circumventing the uncertainty of the PUE/EUE figure by prescribing minimum temperatures in the cold aisle, and that makes the inequality between different types of data center worse. I would like to explore another approach.

Dr Ian Bitterlin: You are trying to solve a problem that should not exist, and one that many people have concluded is impossible to answer anyway. Any such ‘comparison’ system cannot adapt itself to partial load. For example, Facebook's Prineville facility is at 1.07 now that it is full, but was surely over 10 the day they turned it on. And how do you suggest we differentiate between risk models? Some people, often governments themselves, would not advocate moving from a 22°C CRAC return temperature because the ‘risk’ is too great for their appetite, while others (for example, Google) would and do accept opening the window. Don’t forget that our industry is ruled by paranoia, not by engineering sense.
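
Bitterlin's Prineville point is worth making concrete. A toy partial-load model (all constants assumed for illustration) shows how a fixed infrastructure overhead, spread over a near-empty IT load, inflates PUE on day one and shrinks as the hall fills:

```python
# Toy partial-load PUE model; the three constants are assumptions chosen
# so that the full-load figure lands near Prineville's reported 1.07.

FIXED_OVERHEAD_KW = 75      # assumed load-independent losses (UPS, ventilation)
VARIABLE_OVERHEAD = 0.04    # assumed overhead proportional to IT load
IT_CAPACITY_KW = 2_500      # assumed design IT capacity

def pue_at_load(utilization: float) -> float:
    it_kw = IT_CAPACITY_KW * utilization
    total_kw = it_kw + FIXED_OVERHEAD_KW + VARIABLE_OVERHEAD * it_kw
    return total_kw / it_kw

for u in (0.003, 0.10, 0.50, 1.00):
    print(f"{u:>6.1%} load -> PUE {pue_at_load(u):.2f}")
# 0.3% load -> PUE 11.04 ... 100.0% load -> PUE 1.07
```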

Bobby Yatco, senior manager of facilities, DataOne Asia: This question should evolve into categorizing PUEs by climate. Perhaps a maximum of six climate categories would do: 1) Coldest; 2) Cooler; 3) Cool; 4) Average; 5) Hot; and 6) Hottest.
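
A sketch of how such climate bins might be applied, keyed on annual mean temperature. The thresholds below are invented for illustration; a real scheme would draw on something like ASHRAE climate-zone data:

```python
# Six climate bins per Yatco's suggestion; the temperature boundaries
# are assumptions, not a published standard.

CLIMATE_BINS = [          # (upper bound in deg C, label)
    (0,  "1) Coldest"),
    (8,  "2) Cooler"),
    (14, "3) Cool"),
    (20, "4) Average"),
    (26, "5) Hot"),
]

def climate_bin(annual_mean_c: float) -> str:
    for upper, label in CLIMATE_BINS:
        if annual_mean_c < upper:
            return label
    return "6) Hottest"

print(climate_bin(9.8))   # e.g. Amsterdam -> "3) Cool"
print(climate_bin(27.5))  # e.g. Singapore -> "6) Hottest"
```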

Karel Haverkorn: So now we have gathered the following discriminators:

1) Climate regions

2) Partial load (load vs capacity)

3) Risk model: Tier (is this enough?)

4) Business model: hosting vs dedicated

5) Size (is there much distinction?)

6) Age

The system we devise will not be anywhere near perfect. But (in my opinion) paranoia will reign as long as there is no clear comparison.
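
One way to picture these discriminators is as a peer-group key: only facilities sharing the same key would have their PUEs compared. A minimal sketch follows; every field choice and band boundary in it is an assumption for illustration:

```python
# Haverkorn's six discriminators as a comparison key (illustrative only).

from dataclasses import dataclass

@dataclass(frozen=True)
class PeerGroupKey:
    climate_bin: str      # e.g. "3) Cool", from the climate bins above
    load_band: str        # partial load, e.g. "<25%", "25-75%", ">75%"
    tier: int             # risk model proxy: Tier I-IV
    business_model: str   # "hosting" or "dedicated"
    size_band: str        # e.g. "<1MW", "1-5MW", ">5MW" of IT capacity
    age_band: str         # e.g. "<5y", "5-15y", ">15y"

a = PeerGroupKey("3) Cool", "25-75%", 3, "hosting", "1-5MW", "<5y")
b = PeerGroupKey("3) Cool", "25-75%", 3, "hosting", "1-5MW", "<5y")
print(a == b)  # True: a like-for-like comparison is at least defensible
```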

Dr Ian Bitterlin: Colo is probably the last place that efficient facilities will be found, as without control of the load (mainly utilization) your facility will just be burning energy for little or no work output.

Philip Morrix, deputy CTO, Outside 23 Wards, Tokyo: Dr Bitterlin, I have to disagree with your opinion on colos being the least efficient. Many of them are now modular, so the infrastructure is much better matched to the load as customers move in. Many of the same arguments made for cloud computing (scale begets efficiency, dedicated management of equipment leading to less downtime and more security, and so on) apply equally well to larger colo facilities. Quite a few are now charging overhead based upon PUE.
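
Morrix's last sentence describes a billing practice that is easy to state precisely. One common form (a sketch; real contract terms vary) grosses up the customer's metered IT energy by the facility's PUE, so infrastructure overhead is passed through in proportion to use:

```python
# PUE-based colo power billing, sketched with assumed figures.

def monthly_power_charge(it_kwh: float, facility_pue: float,
                         tariff_per_kwh: float) -> float:
    # The customer pays for metered IT energy grossed up by the PUE.
    return it_kwh * facility_pue * tariff_per_kwh

# Assumed: 10 MWh of metered IT load, facility PUE 1.4, EUR 0.12/kWh.
print(f"EUR {monthly_power_charge(10_000, 1.4, 0.12):,.2f}")  # EUR 1,680.00
```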

Zahl Limbuwala, CEO, Romonet: If you qualify the external temperature and load point at which each figure was taken, that would start to make them more comparable. But again, different operators run DCs for different reasons, with different business requirements and different levels of risk. Until that’s answered, just plot PUE against load and external temperature and you’ll get a simple but comparable surface plot.
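
Limbuwala's surface plot is straightforward to produce once interval data exists. The sketch below substitutes an invented PUE model for real measurements, purely to show the shape of the plot:

```python
# PUE surface over IT load and external temperature (synthetic data).

import numpy as np
import matplotlib.pyplot as plt

load = np.linspace(0.1, 1.0, 30)   # fraction of IT capacity
temp = np.linspace(-5, 35, 30)     # external temperature, deg C
L, T = np.meshgrid(load, temp)

# Invented model: fixed overhead spread over load, plus cooling that
# worsens above ~15 deg C (free cooling keeps it flat below that).
PUE = 1.04 + 0.05 / L + 0.004 * np.clip(T - 15, 0, None)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(L, T, PUE, cmap="viridis")
ax.set_xlabel("IT load fraction")
ax.set_ylabel("External temp (deg C)")
ax.set_zlabel("PUE")
plt.show()
```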

Malcolm Harris, director, Saradan Design and Management: I think it will be impossible to produce a method of comparing a modular Google or Facebook 200,000 sq ft data center in Texas with a 100% built-up-front but part-filled 10,000 sq ft co-hosting site in Scotland, and – as others have stated – why should we? Surely we should be looking to produce a formula and metrics that help make each site as energy efficient as possible in its own right.

Arun Shenoy, director of sales, Romonet: We all know lots of PUE-1.2 DCs that will never run at 100%. However, I would make the case for using unit cost as a measure of ‘good business’, applicable whether you’re a colo/service provider that needs to understand cost to control margin, or an enterprise that needs to understand cost to manage budgets. If you know the unit cost of every service delivered (including future ones) from your DCs, under all load, temperature, design, location and provisioning conditions, then you’re already ahead of everyone else.
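
Shenoy's unit-cost argument reduces to a one-line calculation. The figures below are assumed; the point they illustrate is that a low-PUE site running half empty can still lose on cost per unit of service:

```python
# Unit cost of service delivered (all figures assumed for illustration).

def unit_cost(annual_energy_cost: float, annual_fixed_cost: float,
              units_delivered: float) -> float:
    """Cost per unit of service, e.g. per VM-hour or TB-month."""
    return (annual_energy_cost + annual_fixed_cost) / units_delivered

# A "PUE 1.2" site at half occupancy vs a fuller, less efficient one:
print(unit_cost(1_200_000, 3_000_000, 50_000_000))  # 0.084 per unit
print(unit_cost(1_500_000, 3_000_000, 90_000_000))  # 0.05 per unit
```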

Join the debate. Register at LinkedIn.com/DatacenterDynamics Glob