DCOI Compliance: A Guide to Improving PUE in U.S. Federal Data Centers

White Paper 250 Summary     Revision 2     By Patrick Donovan

The U.S. government’s Data Center Optimization Initiative (DCOI) requires all Federal agencies and departments to achieve and maintain a PUE of ≤1.5 for existing “tiered” data centers. New data centers must be designed and operated to maintain a PUE of ≤1.4. This paper explains the PUE metric, lists factors that affect data center efficiency measurements, and describes ways to improve PUE by reducing the energy consumption of the power and cooling systems supporting the IT load. While DCOI policy applies only to the U.S. government, the suggested improvements would benefit any data center.

Federal data center stakeholders will have to assess the energy situation within their own data centers and then formulate short-term and long-term plans for changes to their existing practices and infrastructure. This paper focuses on energy efficiency gains that can be realized through optimization of physical infrastructure (i.e., power and cooling equipment). Physical infrastructure accounts for nearly half of the total energy consumption of a typical data center (see Figure). Approaches for improving IT equipment efficiency (i.e., servers, storage, telecommunications devices) are NOT within the scope of this paper.

The commonly used infrastructure efficiency metric is Power Usage Effectiveness (PUE). It is determined by dividing the total amount of power entering a data center by the amount of power that actually reaches the IT equipment (servers, storage, etc.). PUE is expressed as a ratio, with overall efficiency improving as the quotient decreases toward 1. The paper goes on to describe factors influencing PUE, including the IT load, outdoor conditions, and user configuration/settings. A three-step approach to measuring and modeling PUE is provided, along with advice on how to handle shared systems such as a chiller plant. A large list of industry best practices is also described to help improve efficiency.
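The metric defined above can be illustrated with a short calculation (the facility figures below are hypothetical, chosen only to show the arithmetic):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power.

    A value of 1.0 would mean every watt entering the facility reaches
    the IT equipment; DCOI targets <=1.5 for existing tiered data centers
    and <=1.4 for new ones.
    """
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Hypothetical facility: 900 kW total draw, of which 600 kW reaches the IT load
print(round(pue(900, 600), 2))  # 1.5, exactly at the DCOI limit for existing sites
```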

Of all the efficiency techniques available to users (see Table for a partial list), right-sizing the physical infrastructure system to the load has the most impact on physical infrastructure electrical consumption. Scalable physical infrastructure solutions that can grow with the IT load offer a major opportunity to reduce electrical waste and costs. Right-sizing has the potential to eliminate up to 50% of the electrical bill in real-world installations. The compelling economic advantage of right-sizing is a key reason why the industry has been moving toward modular, scalable physical infrastructure solutions.
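The reason right-sizing dominates can be sketched numerically: part of the power and cooling losses is fixed, scaling with installed capacity rather than with the actual IT load, so an oversized plant drags PUE up at partial load. The loss coefficients below are illustrative assumptions, not measured data from the paper:

```python
# Hedged sketch of the right-sizing effect. Fixed losses (transformer
# magnetizing losses, fan/pump baseline power, UPS no-load losses) scale
# with installed capacity; variable losses scale with the IT load itself.
# Both coefficients are assumptions chosen for illustration.

def facility_pue(it_kw: float, installed_kw: float,
                 fixed_loss_frac: float = 0.08,
                 variable_loss_frac: float = 0.35) -> float:
    fixed_loss = fixed_loss_frac * installed_kw      # scales with capacity
    variable_loss = variable_loss_frac * it_kw       # scales with load
    return (it_kw + fixed_loss + variable_loss) / it_kw

it_load = 250.0  # kW of IT equipment actually deployed today

# Built day-one for an anticipated "final" load vs. grown modularly with demand
oversized = facility_pue(it_load, installed_kw=1000.0)
right_sized = facility_pue(it_load, installed_kw=300.0)

print(f"oversized:   PUE = {oversized:.2f}")    # 1.67
print(f"right-sized: PUE = {right_sized:.2f}")  # 1.45
```

Under these assumed coefficients, the oversized plant misses the DCOI target while the right-sized one meets it, at the same IT load.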

Best practices and estimated energy savings:
Right-size physical infrastructure 10 – 30% • Turn off unused equipment; remove unused UPS power modules • Use a modular, scalable power and cooling architecture • Savings are greater for redundant systems • For new designs and some expansions
More efficient air conditioner architecture 7 – 15% • Separate cold and hot airstreams with containment • Shorter air paths require less fan power (when uncontained) • CRAC supply and return temperatures are higher, which increases efficiency and capacity and prevents inadvertent dehumidification, greatly reducing humidification costs • Hot aisle containment is easier for new designs
Economizer modes of air conditioners 4 – 15% • Many air conditioners offer economizer options • This can offer substantial energy savings, depending on geographic location • Some data centers have air conditioners with economizer modes, but economizer operation is disabled • For new designs if not present • Difficult to retrofit
More efficient floor layout 5 – 12% • Floor layout has a large effect on the efficiency of the air conditioning system • Involves hot-aisle / cold-aisle arrangement with suitable air conditioner locations (White Paper 122) • For new designs • Difficult to retrofit
More efficient power equipment 4 – 10% • New best-in-class UPS systems have 70% lower losses than legacy UPSs at typical loads • Light-load efficiency is the key parameter, NOT full-load efficiency • Don't forget that UPS losses must also be cooled, doubling their cost • For new designs or retrofits
Coordinate air conditioners 0 – 10% • Many data centers have multiple air conditioners that actually fight each other • One may actually heat while another cools • One may dehumidify while another humidifies • The result is gross waste • May require a professional assessment to diagnose • For any data center with multiple air conditioners
Locate vented floor tiles correctly 1 – 6% • Many vented tiles are located incorrectly in the average data center, or the wrong number is installed • Correct locations are NOT intuitively obvious • A professional assessment can ensure an optimal result • Side benefit: reduced hot spots • Only for data centers using a raised floor • Easy, but requires expert guidance to achieve the best result
Install energy efficient lighting 1 – 3% • Turn off some or all lights based on time of day or motion • Use more efficient lighting technology • Don't forget that lighting power must also be cooled, doubling the cost • Benefit is larger in low-density or partly filled data centers • Most data centers can benefit
Install blanking panels 1 – 2% • Decrease server inlet temperature • Also saves on energy by increasing the CRAC return air temperature • Cheap and easy with new snap-in blanking panels • For any data center, old or new
Variable frequency drives (VFDs) 1 – 10% • Replace fixed-speed drives • Enhance the performance of chillers and pumps • Appropriate controls are needed to match the IT load and outdoor conditions • For data centers operated 24x7x365
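The percentages in the table apply to different portions of the energy bill and do not simply add. A rough way to combine a chosen set of measures, assuming (for illustration only) that each acts as an independent multiplicative reduction, is:

```python
# Hedged sketch: combining the table's savings estimates. Treating each
# measure as an independent multiplicative reduction is an assumption;
# in practice the measures interact, so a professional assessment is
# needed for a real figure. Midpoints of the table's ranges are used.

measures = {
    "right-size infrastructure": 0.20,  # midpoint of 10 - 30%
    "air containment":           0.11,  # midpoint of 7 - 15%
    "economizer modes":          0.10,  # within 4 - 15%
    "efficient lighting":        0.02,  # midpoint of 1 - 3%
}

remaining = 1.0
for name, saving in measures.items():
    remaining *= (1.0 - saving)

print(f"combined savings: {1.0 - remaining:.1%}")  # ~37%, well below the naive 43% sum
```

Note that the multiplicative estimate is lower than the naive sum of the percentages, which is the direction the error runs in real installations as well.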