In the second part of our guide to reducing data center energy costs, we look at some of the key components that should feature in an energy-efficient, next-generation facility, from efficient cooling systems to effective information management systems.

Step 5: Use appropriate technology
In your quest for an energy-efficient data center, evaluating products can no longer be just a price-versus-performance comparison. The calculation should incorporate the total cost of running the equipment in the data center environment, including the cost of the energy it consumes.

Firstly, look for vendors that have power and cooling at the forefront of their research and development strategies. Secondly, select equipment based on lifecycle costs that take into account the energy usage of servers.
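
As a rough illustration of what such a lifecycle comparison might look like, the sketch below adds lifetime electricity costs to the purchase price of two hypothetical servers; the prices, wattages, electricity rate, PUE figure and service life are illustrative assumptions rather than vendor data.

    # Minimal lifecycle-cost sketch: purchase price plus lifetime electricity,
    # scaled by PUE to account for cooling overhead (all figures are assumptions).

    ELECTRICITY_RATE = 0.12   # assumed $ per kWh
    PUE = 1.8                 # assumed power usage effectiveness of the facility
    LIFETIME_YEARS = 4        # assumed service life
    HOURS_PER_YEAR = 8760

    def lifecycle_cost(purchase_price, avg_watts):
        """Purchase price plus the cost of electricity over the assumed service life."""
        kwh = avg_watts / 1000 * HOURS_PER_YEAR * LIFETIME_YEARS * PUE
        return purchase_price + kwh * ELECTRICITY_RATE

    # Hypothetical servers: cheaper but power-hungry versus pricier but efficient.
    server_a = lifecycle_cost(purchase_price=5000, avg_watts=450)
    server_b = lifecycle_cost(purchase_price=6000, avg_watts=300)

    print(f"Server A lifecycle cost: ${server_a:,.0f}")
    print(f"Server B lifecycle cost: ${server_b:,.0f}")

With these assumed figures, the pricier but lower-power server comes out cheaper over four years; the point is the shape of the calculation, not the specific numbers.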

One good example of an energy-efficient technology is Massive Array of Idle Disks (MAID). This is a storage technology that employs a large group of disk drives. Only those drives in active use are spinning at any given time. This technology can have thousands of individual drives, and offers mass storage at a cost per terabyte roughly equivalent to that of tape.
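
A back-of-the-envelope comparison shows where the savings come from when only a fraction of the drives are spinning; the drive count, per-drive wattages and active fraction below are illustrative assumptions.

    # Rough sketch: power draw of an always-spinning array versus a MAID-style array
    # in which only a fraction of the drives are spun up (all figures are assumptions).

    DRIVES = 1000
    WATTS_SPINNING = 8.0    # assumed per-drive power while spinning
    WATTS_SPUN_DOWN = 1.0   # assumed per-drive power while idle and spun down
    ACTIVE_FRACTION = 0.25  # assumed share of drives in active use at any time

    always_on_watts = DRIVES * WATTS_SPINNING
    maid_watts = DRIVES * (ACTIVE_FRACTION * WATTS_SPINNING
                           + (1 - ACTIVE_FRACTION) * WATTS_SPUN_DOWN)

    print(f"All drives spinning: {always_on_watts / 1000:.1f} kW")
    print(f"MAID-style array:    {maid_watts / 1000:.1f} kW")

Under these assumptions, the MAID-style array draws roughly a third of the power of an always-spinning equivalent.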

Step 6: Take a new perspective on information life cycle management (ILM)
ILM is the optimum allocation of storage resources that support a business. From a voice conversation to legal and medical records, every item of information in an organization has a useful lifespan. By implementing an ILM strategy, you can create greater efficiencies in data storage, which in turn lead to greater efficiencies in power consumption.

The art of ILM lies in understanding your organization’s information needs and developing the infrastructure and processes required to maintain the usefulness of that information, while at the same time minimizing the cost of such maintenance.

The value of ILM is the ability to tie the cost of storage to the value of the information stored. Tiered storage is therefore at the heart of an ILM implementation. The most important data, or the most performance-critical data, should be placed on the highest-performance and most expensive storage. Don’t use expensive, energy-consuming storage to store information for compliance purposes, when a tape will do. Take advantage of low-speed and lower energy-consuming devices whenever they can meet the service requirements.

Increasingly, solid-state drives (SSDs) are becoming part of the enterprise architecture, providing performance improvements; however, these decisions should be made with the associated energy trade-offs in mind. Knowing the character of the data in your environment (its age, file type, usage frequency, and business value) will also help you make informed decisions about your ILM strategy.
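
To make the tiering idea concrete, here is a minimal sketch of a rule that maps the characteristics mentioned above onto storage tiers; the attributes, thresholds and tier names are purely illustrative and not a standard ILM policy.

    # Illustrative sketch only: assigning data to storage tiers based on its character.
    # The attributes, thresholds and tier names are assumptions, not a standard ILM policy.

    from dataclasses import dataclass

    @dataclass
    class DataSet:
        name: str
        age_days: int
        accesses_per_month: int
        business_critical: bool
        compliance_only: bool

    def choose_tier(d: DataSet) -> str:
        if d.compliance_only:
            return "tape archive"                 # cheapest, lowest-energy tier
        if d.business_critical and d.accesses_per_month > 1000:
            return "SSD / high-performance disk"  # fastest, most expensive tier
        if d.accesses_per_month > 50 or d.age_days < 90:
            return "primary disk"
        return "low-speed or spun-down disk"      # nearline / MAID-style tier

    for ds in [
        DataSet("order database", 30, 50000, True, False),
        DataSet("last year's project files", 400, 5, False, False),
        DataSet("audit records", 2000, 0, False, True),
    ]:
        print(f"{ds.name}: {choose_tier(ds)}")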

Step 7: Investigate liquid cooling
To meet the challenges of blade servers and high-density computing, more organizations are realizing the need for effective cooling and heat management solutions. Many are welcoming liquid cooling systems into their infrastructures to achieve better cooling efficiency; however, others still find it difficult to fathom pipes of running water snaking through the plenums of their data centers.

Liquid cooling systems use air or liquid heat exchangers to provide effective cooling and to isolate equipment from the existing heating, ventilation, and air conditioning system. There are several approaches to data center liquid cooling, including sidecar heat exchangers, chip-level cooling, and bottom-mounted heat exchangers; the last of these is claimed by some to be safer than sidecar enclosures, since components won’t be affected in the event of a water leak.

Other approaches include integrated rack-based liquid cooling, which incorporates a rack-based architecture that integrates uninterruptible power supply (UPS), power distribution and cooling; and device-mounted liquid cooling, which works at the device level, with coolants routed through sealed plates on the top of a CPU.

While liquid cooling provides the best thermal transfer and the most efficient removal of heat from the data center, a better alternative is frequently to use free air cooling for all or part of the year, depending on the climate. This approach makes climate a key consideration in data center location.
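
One way to gauge the potential of free cooling at a candidate site is simply to count the hours in a typical year when outside air is cool enough to use directly, as in the sketch below; the temperature threshold and the synthetic climate data are assumptions for illustration only.

    # Rough sketch: counting the hours in a year when outside air is cool enough
    # to be used for free cooling. The threshold is an assumption; real limits
    # depend on the equipment's allowable supply-air range and on humidity.

    import random

    FREE_COOLING_MAX_TEMP_C = 22.0  # assumed upper limit for usable outside air

    def free_cooling_hours(hourly_temps_c):
        """Count the hours whose outdoor temperature is at or below the threshold."""
        return sum(1 for t in hourly_temps_c if t <= FREE_COOLING_MAX_TEMP_C)

    # Made-up hourly temperatures for two climates, for illustration only.
    random.seed(0)
    mild_climate = [random.gauss(12, 6) for _ in range(8760)]
    hot_climate = [random.gauss(26, 6) for _ in range(8760)]

    print(f"Mild climate: {free_cooling_hours(mild_climate)} of 8760 hours")
    print(f"Hot climate:  {free_cooling_hours(hot_climate)} of 8760 hours")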

Step 8: Use power-saving technologies
There are a number of power-saving technologies available. For example, direct current (DC)-compatible equipment can have a significant impact on power consumption; however, it’s costly to configure, not yet widely available, and more expensive to buy than equivalent alternating current (AC) options.

At present, data centers perform many conversions between alternating current and direct current. This wastes energy, which is emitted as heat and increases the need for cooling. It’s far more efficient to power servers directly from a central DC supply. The Lawrence Berkeley National Laboratory in the US estimates that an organization may save 10-20% of its energy use by moving to direct current technology.
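
A simple way to see where such savings come from is to multiply the conversion efficiencies along each power path; the stage efficiencies below are illustrative assumptions, not measured values.

    # Illustrative sketch: end-to-end efficiency of a conventional AC power chain
    # versus a direct DC distribution chain (stage efficiencies are assumptions).

    def chain_efficiency(stage_efficiencies):
        """Multiply the efficiency of each conversion stage in the power path."""
        total = 1.0
        for eff in stage_efficiencies:
            total *= eff
        return total

    # Conventional path: double-conversion UPS (AC-DC, DC-AC), then the server PSU (AC-DC).
    ac_path = chain_efficiency([0.94, 0.95, 0.90])

    # DC path: one facility-level rectification stage, then a DC-DC stage in the server.
    dc_path = chain_efficiency([0.96, 0.94])

    print(f"AC distribution chain: {ac_path:.1%} of input power reaches the load")
    print(f"DC distribution chain: {dc_path:.1%} of input power reaches the load")

With these assumed figures, roughly ten percentage points more of the input power reaches the load on the DC path, which is in the same range as the Lawrence Berkeley estimate quoted above.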

Alternatively, consider deploying higher-voltage alternating current (AC) power distribution within the data center where this suits the equipment deployed. In some cases, eliminating step-down transformer and distribution losses can reduce energy losses by up to 10%.

Data center technologies and methodologies are constantly evolving, and resource-efficient computing has become a huge industry in itself. By keeping abreast of the latest developments, data center owners and operators can ensure that their facilities operate close to the limits of their potential efficiency, delivering significant savings in money, energy and carbon.

The opinions expressed in the article above are those of the author and do not reflect those of Datacenter Dynamics, its employees or affiliates.