White Papers


Number Title & Abstract
WP-165 v0
Types of Prefabricated Modular Data Centers
Data center systems or subsystems that are pre-assembled in a factory are often described with terms like prefabricated, containerized, modular, skid-based, pod-based, mobile, portable, self-contained, all-in-one, and more. There are, however, important distinctions between the various types of factory-built building blocks on the market. This paper proposes standard terminology for categorizing the types of prefabricated modular data centers, defines and compares their key attributes, and provides a framework for choosing the best approach(es) based on business requirements.
WP-132 v0
Economizer Modes of Data Center Cooling Systems
In certain climates, some cooling systems can save over 70% in annual cooling energy costs by operating in economizer mode, corresponding to over 15% reduction in annualized PUE. However, there are at least 17 different types of economizer modes with imprecise industry definitions making it difficult to compare, select, or specify them. This paper provides terminology and definitions for the various types of economizer modes and compares their performance against key data center attributes.
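The relationship the abstract draws between cooling-energy savings and annualized PUE can be illustrated with a back-of-envelope calculation. The load figures below are hypothetical, chosen only to show how a 70% cut in cooling energy can translate into a PUE reduction of more than 15%:

```python
# Back-of-envelope: mapping a 70% cut in cooling energy to PUE.
# All load figures are illustrative assumptions, not values from the paper.

def pue(it_kw, cooling_kw, other_kw):
    """Power Usage Effectiveness = total facility power / IT power."""
    return (it_kw + cooling_kw + other_kw) / it_kw

it, cooling, other = 1000.0, 400.0, 100.0     # assumed annualized average loads
before = pue(it, cooling, other)              # 1.50
after = pue(it, cooling * (1 - 0.70), other)  # cooling energy cut by 70%
reduction = (before - after) / before

print(f"PUE before: {before:.2f}, after: {after:.2f}")
print(f"Annualized PUE reduction: {reduction:.1%}")
```

With these assumed loads, PUE falls from 1.50 to 1.22, an annualized reduction of roughly 19% — consistent with the "over 15%" figure cited above.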
WP-153 v0
Implementing Hot and Cold Air Containment in Existing Data Centers
Containment solutions can eliminate hot spots and provide energy savings over traditional uncontained data center designs. The best containment solution for an existing facility will depend on the constraints of the facility. While ducted hot aisle containment is preferred for highest efficiency, cold aisle containment tends to be easier and more cost effective for facilities with existing raised floor air distribution. This paper investigates the constraints, reviews all available containment methods, and provides recommendations for determining the best containment approach.
WP-135 v3
Impact of Hot and Cold Aisle Containment on Data Center Temperature and Efficiency
Both hot-air and cold-air containment can improve the predictability and efficiency of traditional data center cooling systems. While both approaches minimize the mixing of hot and cold air, there are practical differences in implementation and operation that have significant consequences on work environment conditions, PUE, and economizer mode hours. The choice of hot-aisle containment over cold-aisle containment can save 43% in annual cooling system energy cost, corresponding to a 15% reduction in annualized PUE. This paper examines both methodologies and highlights the reasons why hot-aisle containment emerges as the preferred best practice for new data centers.
WP-136 v1
High Efficiency Indirect Air Economizer-based Cooling for Data Centers
Of the various economizer (free cooling) modes for data centers, using fresh air is often viewed as the most energy efficient approach. However, this paper shows how indirect air economizer-based cooling produces similar or better energy savings while eliminating risks posed when outside fresh air is allowed directly into the IT space.
WP-173 v0
Power and Cooling Guidelines for Deploying IT in Colocation Data Centers
Some prospective colocation data center tenants view power and cooling best practices as constraining. However, an effective acceptable use policy can reduce downtime due to thermal shutdown and human error, reduce stranded capacity, and extend the life of the initial leased space, avoiding the cost of oversized reserved space. This paper explains some of the causes of stranded power, cooling, and space capacity in colocation data centers and explains how high-density rack power distribution, air containment, and other practices improve availability and efficiency. Examples of acceptable use policies that address these issues are provided.
WP-68 v1
Cooling Strategies for IT Wiring Closets and Small Rooms
Cooling for IT wiring closets is rarely planned and typically only implemented after failures or overheating occur. Historically, no clear standard exists for specifying sufficient cooling to achieve predictable behavior within wiring closets. An appropriate specification for cooling IT wiring closets should assure compatibility with anticipated loads, provide unambiguous instruction for design and installation of cooling equipment, prevent oversizing, maximize electrical efficiency, and be flexible enough to work in various shapes and types of closets. This paper describes the science and practical application of an improved method for the specification of cooling for wiring closets.
WP-42 v4
Ten Cooling Solutions to Support High-density Server Deployment
High-density servers offer a significant performance per watt benefit. However, depending on the deployment, they can present a significant cooling challenge. Vendors are now designing servers that can demand over 40 kW of cooling per rack. With most data centers designed to cool an average of no more than 2 kW per rack, innovative strategies must be used for proper cooling of high-density equipment. This paper provides ten approaches for increasing cooling efficiency, cooling capacity, and power density in existing data centers.
WP-130 v2
Choosing Between Room, Row, and Rack-based Cooling for Data Centers
Latest-generation high-density and variable-density IT equipment creates conditions that traditional data center cooling was never intended to address, resulting in cooling systems that are oversized, inefficient, and unpredictable. Room-, row-, and rack-based cooling methods have been developed to address these problems. This paper describes these improved cooling methods and provides guidance on when to use each type in most next-generation data centers.
WP-134 v2
Deploying High-Density Pods in a Low-Density Data Center
Simple and rapid deployment of self-contained, high-density pods within an existing or new low-density data center is possible with today’s power and cooling technology. The independence of these high-density pods allows for predictable and reliable operation of high-density equipment without a negative impact on the performance of existing low-density power and cooling infrastructure. A side benefit is that these high-density pods operate at much higher electrical efficiency than conventional designs. Guidance on planning, design, implementation, and predictable operation of high-density pods is provided.
WP-159 v0
How Overhead Cabling Saves Energy in Data Centers
Placing data center power and data cables in overhead cable trays instead of under raised floors can result in an energy savings of 24%. Raised floors filled with cabling and other obstructions make it difficult to supply cold air to racks. The raised floor cable cutouts necessary to provide cable access to racks and PDUs result in a cold air leakage of 35%. The cable blockage and air leakage problems lead to the need for increased fan power, oversized cooling units, increased pump power, and lower cooling set points. This paper highlights these issues, and quantifies the energy impact.
WP-118 v4
Virtualization and Cloud Computing: Optimized Power, Cooling, and Management Maximizes Benefits
IT virtualization, the engine behind cloud computing, can have significant consequences on the data center physical infrastructure (DCPI). Higher power densities that often result can challenge the cooling capabilities of an existing system. Reduced overall energy consumption that typically results from physical server consolidation may actually worsen the data center’s power usage effectiveness (PUE). Dynamic loads that vary in time and location may heighten the risk of downtime if rack-level power and cooling health are not understood and considered. Finally, the fault-tolerant nature of a highly virtualized environment could raise questions about the level of redundancy required in the physical infrastructure. These particular effects of virtualization are discussed and possible solutions or methods for dealing with them are offered.
WP-199 v0
How to Fix Hot Spots in the Data Center
Data center operators take a variety of actions to eliminate hot spots. Some of these actions result in short-term fixes but may come with an energy penalty, and some may even create more hot spots. Airflow management is an easy and cost-effective way to permanently eliminate hot spots while saving energy, and it can avoid the capital expense of adding more cooling units. This paper describes the root cause of hot spots, recommends methods to identify them, reviews the typical actions taken, and provides best practices for eliminating them.
WP-40 v3
Cooling Audit for Identifying Potential Cooling Problems in Data Centers
The compaction of Information Technology equipment and simultaneous increases in processor power consumption are creating challenges for data center managers in ensuring adequate distribution of cool air, removal of hot air and sufficient cooling capacity. This paper provides a checklist for assessing potential problems that can adversely affect the cooling environment within a data center.
WP-121 v1
Airflow Uniformity Through Perforated Tiles in a Raised-Floor Data Center
Perforated tiles on a raised floor often deliver substantially more or less airflow than expected, resulting in inefficiencies and even equipment failure due to inadequate cooling. In this paper, the impact of data center design parameters on perforated tile airflow is quantified and methods of improving airflow uniformity are discussed. This paper was written jointly by APC and IBM for the ASME InterPACK '05 conference.
WP-182 v0
The Use of Ceiling-Ducted Air Containment in Data Centers
Ducting hot IT-equipment exhaust to a drop ceiling can be an effective air management strategy, improving the reliability and energy efficiency of a data center. Typical approaches include ducting either individual racks or entire hot aisles and may be passive (ducting only) or active (include fans). This paper examines available ducting options and explains how such systems should be deployed and operated. Practical cooling limits are established and best-practice recommendations are provided.
WP-120 v1
Guidelines for Specification of Data Center Power Density
Conventional methods for specifying data center density are ambiguous and misleading. Describing data center density using Watts / ft2 or Watts / m2 is not sufficient to determine power or cooling compatibility with high density computing loads like blade servers. Historically there is no clear standard way of specifying data centers to achieve predictable behavior with high density loads. An appropriate specification for data center density should assure compatibility with anticipated high density loads, provide unambiguous instruction for design and installation of power and cooling equipment, prevent oversizing, and maximize electrical efficiency. This paper describes the science and practical application of an improved method for the specification of power and cooling infrastructure for data centers.
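The ambiguity of area-based density figures can be shown with a short, hypothetical calculation: two rooms with identical Watts/ft² can impose very different per-rack power and cooling loads.

```python
# Why Watts/ft^2 alone is ambiguous: the same room-level density can
# correspond to very different per-rack loads. All figures are hypothetical.

room_power_w = 1_000_000
room_area_ft2 = 10_000
density = room_power_w / room_area_ft2       # 100 W/ft^2 in both scenarios

for racks in (500, 50):                      # spread-out vs. concentrated layout
    per_rack_kw = room_power_w / racks / 1000
    print(f"{racks} racks at {density:.0f} W/ft^2 -> {per_rack_kw:.0f} kW/rack")
```

Both layouts report 100 W/ft², yet one averages 2 kW per rack and the other 20 kW per rack — loads with entirely different power distribution and cooling requirements, which is why a density specification needs more than an area figure.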
WP-46 v7
Cooling Strategies for Ultra-High Density Racks and Blade Servers
Rack power of 10 kW per rack or more can result from the deployment of high density information technology equipment such as blade servers. This creates difficult cooling challenges in a data center environment where the industry average rack power consumption is under 2 kW. Five strategies for deploying ultra-high power racks are described, covering practical solutions for both new and existing data centers.
WP-58 v2
Humidification Strategies for Data Centers and Network Rooms
The control of humidity in Information Technology environments is essential to achieving high availability. This paper explains how humidity affects equipment and why humidity control is required. Quantitative design guidelines for existing and new computing installations are discussed. Alternative methods to achieve desired humidity are described and contrasted. The difficult issue of how and where humidity should be measured is explained. The hidden costs associated with over-humidification are described.
WP-49 v2
Avoidable Mistakes that Compromise Cooling Performance in Data Centers and Network Rooms
Avoidable mistakes that are routinely made when installing cooling systems and racks in data centers or network rooms compromise availability and increase costs. These unintentional flaws create hot-spots, decrease fault tolerance, decrease efficiency, and reduce cooling capacity. Although facilities operators are often held accountable for cooling problems, many problems are actually caused by improper deployment of IT equipment outside of their control. This paper examines these typical mistakes, explains their principles, quantifies their impacts, and describes simple remedies.
WP-179 v0
Data Center Temperature Rise During a Cooling System Outage
The data center architecture and its IT load significantly affect the amount of time available for continued IT operation following a loss of cooling. Some data center trends such as increasing power density, warmer supply temperatures, the “right-sizing” of cooling equipment, and the use of air containment may actually increase the rate at which data center temperatures rise. However, by placing critical cooling equipment on backup power, choosing equipment with shorter restart times, maintaining adequate reserve cooling capacity, and employing thermal storage, power outages can be managed in a predictable manner. This paper discusses the primary factors that affect transient temperature rise and provides practical strategies to manage cooling during power outages.
WP-125 v2
Strategies for Deploying Blade Servers in Existing Data Centers
When blade servers are densely packed, they can exceed the power and cooling capacities of almost all traditional data centers. This paper explains how to evaluate the options and select the best power and cooling approach for a successful and predictable blade deployment.
WP-55 v3
The Different Types of Air Distribution for IT Environments
There are nine basic approaches to distribute air in data centers and network rooms. These approaches vary in performance, cost, and ease of implementation. These approaches are described along with their various advantages. The proper application of these air distribution types is essential knowledge for Information Systems personnel as well as Facilities Managers.
WP-44 v4
Improving Rack Cooling Performance Using Airflow Management Blanking Panels
Unused vertical space in open frame racks and rack enclosures creates an unrestricted recycling of hot air that causes equipment to heat up unnecessarily. The use of airflow management blanking panels can reduce this problem. This paper explains and quantifies the effects of airflow management blanking panels on cooling system performance.
WP-137 v1
Energy Efficient Cooling for Data Centers: A Close-Coupled Row Solution
Heat densities in data centers have risen steadily with advances in computing technology for many years. As power density increased, so did the difficulty of cooling these higher-power loads. In recent years, traditional cooling system design has proven inadequate to remove concentrated heat loads (20 kW per rack and higher), driving an architectural shift in data center cooling. Newer cooling architectures designed for these higher densities have also brought increased efficiency to the data center. This article discusses the efficiency benefits of row-based cooling compared to two other common cooling architectures.
WP-131 v1
Improved Chilled Water Piping Distribution Methodology for Data Centers
Chilled water remains a popular cooling medium; however, leaks in the piping system are a threat to system availability. High-density computing creates the need to bring chilled water closer than ever to the IT equipment, prompting the need for new high-reliability piping methods. This paper discusses new piping approaches that can dramatically reduce the risk of leakage and facilitate high-density deployment. Alternative piping approaches and their advantages over traditional piping systems are described.
WP-138 v1
Energy Impact of Increased Server Inlet Temperature
The quest for efficiency improvement raises questions regarding the optimal air temperature for data centers. The ASHRAE TC-9.9 committee has recently adopted an extension of the recommended thermal envelope for server inlet temperature and humidity. A popular hypothesis suggests that total energy demands should diminish as server inlet temperatures increase. This paper tests that hypothesis by developing a composite power-consumption baseline for a mixture of servers as a function of inlet temperature and applying this data to a variety of cooling architectures.
WP-57 v5
Fundamental Principles of Air Conditioners for Information Technology
Every Information Technology professional who is responsible for the operation of computing equipment needs to understand the function of air conditioning in the data center or network room. This introductory paper explains the function of basic components of an air conditioning system for a computer room. The concepts presented here are a foundation for allowing IT professionals to successfully specify, install, and operate critical facilities.
WP-56 v3
How and Why Mission-Critical Cooling Systems Differ From Common Air Conditioners
Today's technology rooms require precise, stable environments in order for sensitive electronics to operate optimally. Standard comfort air conditioning is ill-suited for technology rooms, leading to system shutdowns and component failures. Because precision air conditioning maintains temperature and humidity within a very narrow range, it provides the environmental stability required by sensitive electronic equipment, allowing your business to avoid expensive downtime.
WP-59 v2
The Different Technologies for Cooling Data Centers
There are 13 basic heat removal methods to cool IT equipment and to transport unwanted heat to the outdoor environment. This paper describes these fundamental cooling technologies using basic terms and diagrams. 11 of these methods rely on the refrigeration cycle as the primary means of cooling. Pumped refrigerant systems provide isolation between the primary heat removal system and IT equipment. The direct air and indirect air methods rely on the outdoor conditions as the primary means of cooling, making them more efficient in mild climates. The information in this paper allows IT professionals to be more involved in the specification of precision cooling solutions that better align with IT objectives.
WP-69 v1
Power and Cooling for VoIP and IP Telephony Applications
Voice over IP (VoIP) deployments can cause unexpected or unplanned power and cooling requirements in wiring closets and wiring rooms. Most wiring closets do not have uninterruptible power available, and they do not provide the ventilation or cooling required to prevent equipment overheating. Understanding the unique cooling and powering needs of VoIP equipment allows planning for a successful and cost-effective VoIP deployment. This paper explains how to plan for VoIP power and cooling needs, and describes simple, fast, reliable, and cost-effective strategies for upgrading old facilities and building new facilities.
WP-50 v1
Cooling Solutions for Rack Equipment with Side-to-Side Airflow
Equipment with side-to-side airflow presents special cooling challenges in the modern data center. Common rack enclosures and rack layouts are fundamentally incompatible with side-to-side cooling, resulting in equipment that receives supply air of excessive temperature. This paper describes the problem along with several side-effects that are not generally appreciated. Various solutions to the problem are described along with their costs and benefits.
WP-113 v2
Electrical Efficiency Modeling for Data Centers
Conventional models for estimating electrical efficiency of data centers are grossly inaccurate for real-world installations. Estimates of electrical losses are typically made by summing the inefficiencies of various electrical devices, such as power and cooling equipment. This paper shows that the values commonly used for estimating equipment inefficiency are quite inaccurate. A simple, more accurate efficiency model is described that provides a rational basis to identify and quantify waste in power and cooling equipment.
WP-123 v1
Impact of High Density Hot Aisles on IT Personnel Work Conditions
The use of modern enclosed hot aisles to address increasing power densities in the data center has brought into question the suitability of working conditions in these hot aisle environments. In this paper, it is determined that the additional heat stress imposed by such high density IT environments is of minimal concern.
WP-11 v3
Explanation of Cooling and Air Conditioning Terminology for IT Professionals
As power densities continue to increase in today’s data centers, heat removal is becoming a greater concern for the IT professional. Unfortunately, air conditioning terminology routinely used in the cooling industry is unnecessarily complicated. This complexity makes it difficult and frustrating for IT professionals to specify cooling requirements and even makes it difficult to discuss current cooling system performance with contractors, engineers, and maintenance personnel. This paper explains cooling terms in common language, providing an essential reference for IT professionals and data center operators.
WP-25 v3
Calculating Total Cooling Requirements for Data Centers
This document describes how to estimate the heat output of Information Technology equipment and other devices in a data center, such as UPSs, for purposes of sizing air conditioning systems. A number of common conversion factors and design guideline values are also included.
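A sizing estimate of the kind this paper describes can be sketched as follows. The loss fractions and occupancy heat figure below are illustrative assumptions, not the paper's published guideline values; the unit conversions (1 W = 3.412 BTU/hr, 1 ton of cooling = 3,517 W) are standard:

```python
# Rough cooling-load estimate in the spirit of the paper's approach.
# Loss/heat fractions are illustrative assumptions, not guideline values
# from the paper itself.

IT_LOAD_W = 80_000          # total IT equipment power draw
UPS_LOSS_FRACTION = 0.09    # assumed UPS + battery-charging losses
DIST_LOSS_FRACTION = 0.02   # assumed power distribution losses
LIGHTING_W_PER_M2 = 20.0    # assumed lighting density
FLOOR_AREA_M2 = 200.0
PEOPLE = 4
HEAT_PER_PERSON_W = 100.0   # assumed sensible heat per occupant

total_heat_w = (
    IT_LOAD_W
    + IT_LOAD_W * (UPS_LOSS_FRACTION + DIST_LOSS_FRACTION)
    + LIGHTING_W_PER_M2 * FLOOR_AREA_M2
    + PEOPLE * HEAT_PER_PERSON_W
)

btu_per_hr = total_heat_w * 3.412   # 1 W = 3.412 BTU/hr
tons = total_heat_w / 3517.0        # 1 ton of cooling = 3,517 W

print(f"Total heat load: {total_heat_w / 1000:.1f} kW "
      f"({btu_per_hr:,.0f} BTU/hr, {tons:.1f} tons)")
```

Note that nearly all facility power ultimately becomes heat, so the IT load itself dominates the estimate; the remaining terms are comparatively small corrections.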