Applying Natural Gas Engine Generators to Hyperscale Data Centers Power capacity shortages, a desire to be independent from the grid, and increasing pressure to reduce emissions are all drivers for self-generating power on site. Providing a reliable, environmentally friendly power supply, however, can be challenging. We will show that natural gas-fired, on-site generation power plants, for either backup or primary power supply functions, can be an attractive alternative to diesel-based power plants. We will explain the technical requirements and design adaptations needed for hyperscale data center applications and compare our proposal to traditional diesel generation power plants. Finally, we explain how to extend the benefits of gas-engine technology and turn the generation plant into a revenue-generating asset.
Design and Specification for Safe and Reliable Battery Systems for Large UPS A properly designed UPS battery solution is important for safe and reliable operation. This paper describes the main components and functions of a battery system and discusses why vendor pre-engineered battery solutions are optimal. In cases where pre-engineered solutions don’t meet the requirements, vendor-engineered solutions are next best. If third-party custom battery solutions must be used, design guidelines are provided to ensure a safe and reliable design.
Three Types of Edge Computing Environments and their Impact on Physical Infrastructure Selection Edge computing deployments are on the rise, as more and more use cases are conceived for local compute, storage, and networking. Although all edge compute sites share fundamental needs to ensure availability, environmental differences can be significant and impact the specific attributes of the systems you deploy. It’s important to understand characteristics like ambient temperature & humidity conditions, security / access of the space, and purpose of the space, as these parameters drive your choice of physical infrastructure. Sites with greater business risks warrant more robust infrastructure. In this paper, we define three types of environments for micro data centers: (1) IT environments, (2) commercial & office environments, and (3) industrial & harsh environments. We discuss the challenges of each and share best practices for physical infrastructure deployments in each environment.
Capital Cost Analysis of Immersive Liquid-Cooled vs. Air-Cooled Large Data Centers There are several known benefits of choosing liquid cooling over traditional air cooling, including energy savings. Capital cost, however, is viewed as a common obstacle. In this paper, we first demonstrate that at a like-for-like rack density of 10 kW in a 2 MW data center, the data center capex is roughly equal for both a traditional air-cooled data center and a chassis-based immersive liquid-cooled data center. Because high-density compaction is a key benefit of liquid cooling, we also quantify the capex difference when liquid cooling is deployed at 20 kW/rack and 40 kW/rack for the same capacity data center. The result is 10% and 14% capex savings, respectively.
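The compaction behind those savings can be sanity-checked with simple arithmetic; a minimal sketch (the rack counts are straightforward division, but the capex percentages come from the paper's full cost model, not from this calculation):

```python
import math

def racks_needed(it_capacity_kw, kw_per_rack):
    """Number of racks required to house a given IT capacity."""
    return math.ceil(it_capacity_kw / kw_per_rack)

# For the paper's 2 MW example, higher density shrinks the rack count
# (and with it white space, enclosures, and distribution gear):
print(racks_needed(2000, 10))  # 200 racks at 10 kW/rack
print(racks_needed(2000, 20))  # 100 racks at 20 kW/rack
print(racks_needed(2000, 40))  # 50 racks at 40 kW/rack
```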
Essential Guidance on DCIM for Edge Computing Infrastructure The lack of staff or “lights out” nature of many local IT and mobile edge computing (MEC) sites makes operations & maintenance a challenge. This struggle worsens as the number of sites increases. How do you maintain IT resiliency in a cost-effective way under these conditions? It is not practical to staff each location with trained personnel. The answer lies, in large part, in data center infrastructure management (DCIM) software. In this paper we describe essential DCIM functions for small, unmanned edge computing sites and the attributes of next-generation DCIM solutions best optimized for that type of environment. We also provide practical advice on how to get started with DCIM to better ensure its value is realized.
Liquid Cooling Technologies for Data Centers and Edge Applications Increasing IT chip densities, a focus on energy efficiency, and new IT use cases like harsh edge computing environments are driving interest in and adoption of liquid cooling. In this paper we present the fundamentals of liquid cooling, describe its advantages over conventional air cooling, and explain the five main direct-to-chip and immersive methods. To help guide the selection of the appropriate liquid cooling method for a given need, we explain the key attributes that must be considered.
Five Reasons to Adopt Liquid Cooling Increasing IT equipment chip density has been the commonly discussed driver for adopting liquid cooling. But there are four other reasons why data center owners should consider liquid cooling: low PUE targets, space constraints, harsh IT environments, and water restrictions. This paper describes these reasons. With this information, data center owners can make an informed decision on whether liquid cooling has advantages for their application.
Practical Guide to Ensuring Availability at Edge Computing Sites IT stakeholders recognize the need for computing at distributed sites, where part or all of their business operations take place. Assessing the criticality of these edge sites should reveal which sites are in greater need of availability improvement. Schneider’s experience with edge computing environment assessments reveals a list of practical actions that improve the availability of IT operations by improving the physical infrastructure systems supporting the IT. This paper provides specific availability improvements broken down by eight key systems including power, cooling, physical security, environment, and management.
A Framework for How to Modernize Data Center Facility Infrastructure Aging data centers represent a downtime risk to business operations. In this paper, we lay out a framework for modernizing a facility. This framework includes (1) defining performance standards, (2) benchmarking the facility to identify gaps and health risks, (3) determining modernization options, and (4) prioritizing actions based on business objectives. Modernization should include not only the hardware, but also the software management tools, and operations & maintenance programs. This complete approach ensures the facility continues to meet its IT objectives, including availability, efficiency, and operational cost targets.
Solving Edge Computing Infrastructure Challenges Edge compute (distributed IT) installations have become increasingly business critical. Deploying and operating IT at the edge of the network, however, comes with unique challenges. Solving them requires a departure from the traditional means of selecting, configuring, assembling, operating, and maintaining these systems. This paper describes a new, emerging model that involves an integrated ecosystem of cooperative partners, vendors, and end users. This ecosystem and the integrated micro data center solution it produces, help mitigate the unique challenges of edge applications.
Efficiency Analysis of Consolidated vs. Conventional Server Power Architectures Open-source IT systems, like those designed by the Open Compute Project (OCP), are redefining how power is distributed within an IT rack by replacing internal server power supplies with a centralized rack-level power supply. In this paper, we investigate the efficiencies of conventional internal server PSU architectures and centralized rack-level PSU architectures (12VDC and 48VDC). While many believe that consolidating power supplies leads to significant efficiency gains, we found otherwise. With best-in-class components, the consolidated 12VDC rack-level PSU architecture provides a small incremental energy efficiency improvement over the conventional architecture. And consolidating at 48VDC provides another small incremental energy efficiency improvement over 12VDC.
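The intuition for the "small incremental" result is that end-to-end efficiency is a product of conversion stages, so improving one stage by roughly a point moves the total by roughly a point. A minimal sketch with hypothetical stage efficiencies (illustrative only, not the paper's measured values):

```python
def chain_efficiency(stages):
    """End-to-end efficiency is the product of each conversion stage's efficiency."""
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

# Hypothetical best-in-class figures for illustration:
conventional = chain_efficiency([0.96, 0.94])      # UPS stage, then internal server PSU
consolidated_12v = chain_efficiency([0.96, 0.95])  # UPS stage, then rack-level 12VDC shelf

gain_points = (consolidated_12v - conventional) * 100
print(f"{conventional:.4f} -> {consolidated_12v:.4f} (+{gain_points:.1f} points)")
```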
Optimized MV Generator Power Plant Architectures for Large Data Centers For large (>5 MW) data center applications, it is common to have large generator power plants connected directly to the medium voltage (MV) electrical distribution network. The principal way to cost optimize the system is to reduce the number of generators installed, using N+1 redundancy instead of 2N. This paper introduces optimized electrical distribution architectures for the generator power plant that meet the various Uptime Institute Tier levels. A cost comparison study and reliability calculations were performed on a representative case study, and the results are presented in this paper. Recommendations for choosing an architecture based on the expected performance of the system are also provided.
How Higher Chilled Water Temperature Can Improve Data Center Cooling System Efficiency Alternative data center cooling approaches such as indirect air economization are calling into question the economic justification for using traditional chilled water cooling in new data centers, especially those in mild climates. This paper describes some innovative approaches to chilled water cooling, where the chiller is used only to boost cooling capacity on the hottest days. A capex and opex analysis shows that these approaches can reduce opex by 41%-64% with a 13% increase in capex, assuming the same chiller is used. We also discuss the design considerations for these new technologies.
Using Infrared Thermography to Improve Electrical Preventive Maintenance Programs IR thermography can be used both at startup and during ongoing operations to locate potentially dangerous problems quickly, allowing for a controlled shutdown before unplanned interruptions in service occur. It can prevent premature failure, extend equipment life, and reduce costly outages and downtime. However, if done improperly, these benefits may not be realized. This white paper addresses key points to remember while performing the scan and while interpreting the results, to help identify potential problems accurately. The next section describes some best practices to follow while conducting an IR scan, and the paper then highlights important factors to consider while interpreting the resulting thermogram.
The Different Types of Cooling Compressors There is much confusion in the marketplace about different compressor types and their characteristics. In this paper, each of these compressors is defined, benefits and limitations are listed, and practical applications of each are discussed. With this information, an educated decision can be made as to the most appropriate compressor for a given need.
Cost Benefit Analysis of Edge Micro Data Center Deployments Several IT trends including internet of things (IoT) and content distribution networks (CDN) are driving the need to reduce telecommunications latency and bandwidth costs. Distributing “micro” data centers closer to the points of utilization reduces the latency and costs from the cloud or other remote data centers. This distributed data center architecture also provides physical infrastructure benefits that apply to any small data center regardless of the latency requirement. This paper explains how micro data centers take advantage of existing infrastructure and demonstrates how this architecture reduces capital expenses by 42% over a traditional build. Other benefits are discussed including shorter project timelines.
Specifying Data Center IT Pod Architectures The desire to deploy IT at large scale efficiently and quickly has forced change in the way physical infrastructure is deployed and managed in the white space. Fully integrated racks complete with IT that roll into place, hard floor data halls, and air containment are just a few of the trends. Designing and deploying IT using standardized blocks of racks (or pods) facilitates these trends. This paper explains how to specify the physical infrastructure for an IT pod and describes optimum configurations based on available power feeds, physical space, and targeted average rack power densities.
Optimize Data Center Cooling with Effective Control Systems Specifying cooling systems without considering their control methods leads to issues such as demand fighting, human error, shutdown, high operating cost, and other costly outcomes. Understanding the different levels of cooling control provides a framework for rational discussions and specifications for data center cooling systems. This paper describes four cooling control levels, when each should be used, and the benefits and limitations of each, with examples.
Why Cloud Computing is Requiring us to Rethink Resiliency at the Edge Use of cloud computing by enterprise companies is growing rapidly. A greater dependence on cloud-based applications means businesses must rethink the level of redundancy of the physical infrastructure equipment (power, cooling, networking) remaining on-premises, at the “Edge”. In this paper, we describe and critique the common physical infrastructure practices seen today, propose a method of analyzing the resiliency needed, and discuss best practices that will ensure employees remain connected to their business critical applications.
Benefits of Limiting MV Short-Circuit Current in Large Data Centers Once IT load levels exceed just a few megawatts (MW) of power, moving more of the electrical distribution infrastructure to the medium voltage (MV) level makes sense. The short-circuit rating of the MV transformers can have a large impact on the cost of the data center and footprint of the switchgear lineups. This paper shows the benefits of limiting this short-circuit current to 25 kA or less and how that allows the use of newer switchgear technologies that offer advantages in size, safety, cost, and reliability.
Addressing Cyber Security Concerns of Data Center Remote Monitoring Platforms Digital remote monitoring services provide real-time monitoring and data analytics for data center physical infrastructure systems. These modern cloud-based platforms offer the promise of reduced downtime, reduced mean time to recovery (MTTR), less operations overhead, as well as improved energy efficiency for power and cooling systems. However, with the cost of cyber crime projected to quadruple over the next few years, reaching $2 trillion by 2019, there is concern these systems could be a successful avenue of attack for cyber criminals. This paper describes a secure development lifecycle (SDL) process that ensures digital, cloud-based remote monitoring platforms keep data private and infrastructure systems secure from hackers. This knowledge of how platforms are developed and operated is helpful when evaluating the merits of remote monitoring vendors and their solutions.
Diesel Rotary UPS (DRUPS) vs. Static UPS: A Quantitative Comparison for Cooling and IT Applications A number of published comparisons indicate that DRUPS is a superior solution for data center cooling and IT applications. The inaccurate assumptions and mistakes found in these comparisons are explained with support from third-party sources. In this paper, we present a detailed quantitative comparison from a design architecture-level perspective between low voltage DRUPS and low voltage Static UPS. We consider capital expenses, energy losses, maintenance costs, footprint, and TCO using a common 2N architecture. The analysis shows the Static UPS is less expensive to purchase, install, operate, and maintain while having a slightly larger footprint.
Selecting a Building Management System (BMS) for Sites with a Data Center or IT Room A data center or IT room uniquely alters the requirements of a building management system (BMS). This is primarily because of the criticality of IT and its dependence on facility infrastructure. IT’s reliance on power and cooling systems makes a BMS an important part of a larger data center infrastructure management (DCIM) solution that brings together Facilities and IT. The cooperation and information sharing better ensures uninterrupted and efficient operation. This paper explains how the requirements for building management are affected by the presence of a mission critical data center or IT room (data rooms) and describes key attributes to look for in an effective BMS. Common pitfalls of implementing and using a BMS for sites with IT, along with advice on how to avoid them, are also provided.
How to Prepare and Respond to Data Center Emergencies Data center operations and maintenance teams must be prepared to act swiftly and surely without warning. Unforeseen problems, failures, and dangers can lead to injury or downtime. Good preparation and process, however, can quickly and safely mitigate the impact of emergencies and help prevent them from happening again. This paper describes a framework for an effective emergency preparedness and response strategy for mission critical facilities. This strategy is composed of seven elements arranged across three categories: Emergency Response Procedures, Emergency Drills, and Incident Management. Each of the elements is described, and practical advice is given to assist in implementing this strategy.
Analysis of Data Center Architectures Supporting Open Compute Project (OCP) Open Compute has had a significant impact on the thinking about data center design. Until now, the focus has been on systems at the rack level, leaving unanswered questions about the power infrastructure upstream of the rack. In this paper, we address critical questions about the implications of Open Compute on the upstream power infrastructure, including redundancy, availability, and flexibility. We introduce simplified reference designs that support OCP and provide a capital cost analysis to compare traditional and OCP-based designs. We also present an online TradeOff Tool that allows data center decision makers to better understand the cost differences and cost drivers to various architectures.
The Drivers and Benefits of Edge Computing Internet use is trending towards bandwidth-intensive content and an increasing number of attached “things”. At the same time, mobile telecom networks and data networks are converging into a cloud computing architecture. To support needs today and tomorrow, computing power and storage are being inserted out on the network edge to lower data transport time and increase availability. Edge computing brings bandwidth-intensive content and latency-sensitive applications closer to the user or data source. This white paper explains the drivers of edge computing and explores the various types of edge computing available.
Quantitative Analysis of a Prefabricated vs. Traditional Data Center Prefabricated modular data centers offer many advantages over traditionally built data centers, including flexibility, improved predictability, and faster speed of deployment. Cost, however, is sometimes stated as a barrier to deploying these designs. In this paper, we focus on quantifying the capital cost differences of a prefabricated vs. traditional 440 kW data center, both built with the same power and cooling architecture, in order to highlight the key cost drivers, and to demonstrate that prefabrication does not come at a capex premium. The analysis was completed and validated with Romonet’s Cloud-based Analytics Platform, a vendor-neutral industry resource.
Planning Effective Power and Data Cable Management in IT Racks Poor rack cable management has proven to many data center operators to be a source of downtime and frustration during moves, adds, and changes. It can also lead to data transmission errors, safety hazards, poor cooling efficiency, and a negative overall look and feel of the data center. This paper discusses the benefits of effective rack cable management and provides guidance for cable management within IT racks, including high-density and networking racks, to improve cable traceability and troubleshooting time while reducing the risk of human error.
Choosing Between Direct and Indirect Air Economization for Data Centers The choice of direct or indirect air economization for a data center depends on benefits, geographic location, capital costs, operating costs, and availability risks. In this paper, we examine both approaches across these five factors. While both approaches can cool a data center with little to no use of mechanical cooling, indirect air economization uses less energy in the majority of locations around the world. The direct approach can have a lower capital expense but presents more availability risks than indirect. The added capital expense of mitigating these risks diminishes the appeal of direct air economization. Therefore, in general, indirect air economization is the recommended approach.
How to Choose an IT Rack In data centers with 1-3kW/rack, the most popular IT racks have been 600 mm (24 inches) wide, 1070 mm (42 inches) deep, and 42U tall. However, most data centers today support a wide variety of IT equipment densities and form factors that require appropriate racks and accessories. For example, in racks housing 5 kW and above, the most popular rack size is no longer optimal as deeper equipment, higher density rack-mounted power distribution units (rack PDUs), and increased cable loads crowd the inside of the IT rack. This paper discusses the key size and feature options for IT racks and criteria for selection.
Lifecycle Carbon Footprint Analysis of Batteries vs. Flywheels Flywheel energy storage for static UPSs is often thought to be the “greener” technology when compared to batteries. This paper presents a lifecycle carbon footprint analysis to show that the opposite is often true, primarily because the energy consumed to operate the flywheel over its lifetime is greater than that of the equivalent VRLA battery solution, and the carbon emissions from this energy outweigh any carbon emissions savings in raw materials or cooling. A tool is presented to help demonstrate these carbon tradeoffs.
Calculating Space and Power Density Requirements for Data Centers The historic method of specifying data center power density using a single number of watts per square foot (or watts per square meter) is an unfortunate practice that has caused needless confusion as well as waste of energy and money. This paper demonstrates how the typical methods used to select and specify power density are flawed, and provides an improved approach for establishing space requirements, including recommended density specifications for typical situations.
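To see why a single area-based number is ambiguous, consider that very different rooms can share the same average figure; a small illustration with made-up rooms (the numbers are hypothetical, not from the paper):

```python
def avg_density_w_per_sq_ft(racks, kw_per_rack, area_sq_ft):
    """Average power density in W/sq ft -- one number that hides per-rack reality."""
    return racks * kw_per_rack * 1000 / area_sq_ft

# Two hypothetical 5,000 sq ft rooms with identical average density...
room_a = avg_density_w_per_sq_ft(racks=100, kw_per_rack=5, area_sq_ft=5000)
room_b = avg_density_w_per_sq_ft(racks=25, kw_per_rack=20, area_sq_ft=5000)
print(room_a, room_b)  # both 100.0 W/sq ft
# ...yet room B requires 4x the power distribution and cooling capacity per rack.
```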
How to Choose IT Rack Power Distribution One of the challenges associated with rack power distribution units (PDU) has been determining which to choose among the wide array of offerings. In most cases, there are so many to choose from (100-700 models) that vendors must provide product selectors in order to narrow down the choices. Other challenges include maintaining system availability and supporting higher density equipment. Once a rack PDU is selected, IT administrators wonder if it will support the next generation of IT equipment, in terms of power capacity, electrical plug type(s), and plug quantity. Trends such as virtualization, converged infrastructure, and high efficiency add to the need for a comprehensive strategy for selecting rack PDUs. This paper discusses the criteria for selecting IT rack power distribution and the practical decisions required to reduce downtime.
Guidance on What to Do with an Older UPS “When should an older UPS be replaced with a new one?” is a question that virtually all data center owners will have to answer. The answer is not always self-evident and depends on several factors. This paper provides data center owners and managers a simple framework for answering the question in the context of their own circumstances and requirements. Three options are explained and compared: run to fail, upgrade, and buy new.
Arc Flash Considerations for Data Center IT Space Do IT administrators violate arc flash requirements when they turn off or reset a branch circuit breaker? What about swapping out a rack power strip? Most data center operators are familiar with fire safety and shock hazard protection, but are less familiar with arc flash safety. Three IT trends have increased the severity of a potential arc flash in the IT space: higher data center capacities, higher rack densities, and higher efficiency designs. This paper discusses these three trends in the context of arc flash safety within the IT space. Arc flash is explained, potential areas of concern in the IT space are identified, and compliance with associated regulations is discussed.
How Row-based Data Center Cooling Works Row-based data center cooling is normally regarded as a “cold air supply” architecture that uses row-based coolers. However, row-based cooling is actually a “hot air capture” architecture that neutralizes hot air from IT equipment before it has a chance to mix with the surrounding air in the room. This paper discusses common misconceptions about row-based cooling, explains how row-based cooling removes hot air, and describes key design attributes that maximize the effectiveness of this approach.
Single Phase UPS Management, Maintenance, and Lifecycle “How long will my battery last?” and “What is the best practice for maintaining my UPS?” are very common questions posed by UPS owners. Few realize there is more to a UPS than battery backup, and that, like all electronics, it has a life expectancy. Many of the factors that affect battery life also affect UPS electronics. Some factors can be controlled by taking preventative measures or simply adjusting basic UPS settings. This white paper discusses the key factors that influence both battery and UPS life and provides simple recommendations and guidelines to help you manage your single-phase UPS to maximize its life and overall availability.
Choosing the Optimal Data Center Power Density The choice of IT rack power densities has a direct impact on the capital cost of the data center. There are significant savings in developing a data center with an average power density of at least 5 kW per rack; however, densities higher than ~15 kW per rack show no further relevant savings. This white paper analyzes these costs, presents a flexible architecture to accommodate a well-specified density, and discusses the importance of operational policies in enforcing the specification.
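The diminishing-returns shape of that cost curve can be illustrated with a toy model (the dollar figures below are invented for illustration; the paper's analysis uses real component costs): if some capex scales per rack (enclosures, floor space, distribution drops) and some scales per kW of IT, the per-rack share amortizes away as density rises.

```python
def capex_per_kw(kw_per_rack, per_rack_cost=10_000, per_kw_cost=7_000):
    # Hypothetical costs: the fixed per-rack cost is spread over more kW at higher density.
    return per_kw_cost + per_rack_cost / kw_per_rack

for density in (2, 5, 10, 15, 20):
    print(f"{density:>2} kW/rack -> ${capex_per_kw(density):,.0f}/kW")
# Savings are large moving from 2 to 5 kW/rack, but nearly flat past ~15 kW/rack.
```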
How to Fix Hot Spots in the Data Center Data center operators take a variety of actions to eliminate hot spots. Some of these actions result in short term fixes but may come with an energy penalty, and some actions may even create more hot spots. Airflow management is an easy and cost-effective way to permanently eliminate hot spots while saving energy, and can avoid the capital expense of adding more cooling units. This paper describes the root cause of hot spots, recommends methods to identify them, reviews the typical actions taken, and provides the best practices to eliminate them.
Overload Protection in a Dual-Corded Data Center Environment In a dual-corded environment, the loss of power on one path will cause the load to transfer to the other path, which can create an overload condition on that path. This can lead to a situation where the failure of one path leads to the failure of both paths. This paper explains the problem and how to solve it, and provides a set of rules to ensure that a dual-path environment provides the expected fault tolerance.
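The protective rule implied by the problem statement can be expressed as a simple check (a sketch, assuming equal-capacity paths; the paper's full rule set covers more cases):

```python
def dual_path_safe(load_a_kw, load_b_kw, path_capacity_kw):
    """True if one path alone can carry the combined load after the other path fails.

    Assumes both paths have the same capacity; in practice each cord should be
    loaded so that a full failover never exceeds a single path's rating.
    """
    return (load_a_kw + load_b_kw) <= path_capacity_kw

print(dual_path_safe(20, 20, 50))  # True: failover load of 40 kW fits on one 50 kW path
print(dual_path_safe(30, 30, 50))  # False: failover load of 60 kW overloads the survivor
```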
A Framework for Developing and Evaluating Data Center Maintenance Programs Inadequate maintenance and risk mitigation processes can quickly undermine a facility’s design intent. It is, therefore, crucial to understand how to properly structure and implement an operations and maintenance (O&M) program to achieve the expected level of performance. This paper defines a framework, known as the Tiered Infrastructure Maintenance Standard (TIMS), for aligning an existing or proposed maintenance program with a facility’s operational and performance requirements. This framework helps make the program easier to understand, communicate, and implement throughout the organization.
Fundamentals of Managing the Data Center Life Cycle for Owners Just as good genes do not guarantee health and well-being, a good design alone does not ensure a data center is well-built and will remain efficient and available over the course of its life span. For each phase of the data center’s life cycle, proper care and action must be taken to continuously meet the business needs of the facility. This paper describes the five phases of the data center life cycle, identifies key tasks and pitfalls, and offers practical advice to facility owners and management.
Power and Cooling Guidelines for Deploying IT in Colocation Data Centers Some prospective colocation data center tenants view power and cooling best practices as constraining. However, an effective acceptable use policy can reduce downtime due to thermal shutdown and human error, reduce stranded capacity, and extend the life of the initial leased space, avoiding the cost of oversized reserved space. This paper explains some of the causes of stranded power, cooling, and space capacity in colocation data centers and explains how high-density rack power distribution, air containment, and other practices improve availability and efficiency. Examples of acceptable use policies that address these issues are provided.
Implementing Hot and Cold Air Containment in Existing Data Centers Containment solutions can eliminate hot spots and provide energy savings over traditional uncontained data center designs. The best containment solution for an existing facility will depend on the constraints of the facility. While ducted hot aisle containment is preferred for highest efficiency, cold aisle containment tends to be easier and more cost effective for facilities with existing raised floor air distribution. This paper investigates the constraints, reviews all available containment methods, and provides recommendations for determining the best containment approach.
Avoiding Common Pitfalls of Evaluating and Implementing DCIM Solutions While many who invest in Data Center Infrastructure Management (DCIM) software benefit greatly, some do not. Research has revealed a number of pitfalls that end users should avoid when evaluating and implementing DCIM solutions. Choosing an inappropriate solution, relying on inadequate processes, and a lack of commitment, ownership, and knowledge can each undermine a chosen toolset’s ability to deliver the value it was designed to provide. This paper describes these common pitfalls and provides practical guidance on how to avoid them.
Top 10 Mistakes in Data Center Operations: Operating Efficient and Effective Data Centers How can you avoid making major mistakes when operating and maintaining your data center(s)? The key lies in the methodology behind your operations and maintenance program. All too often, companies put immense amounts of capital and expertise into the design of their facilities. However, when construction is complete, data center operations are an afterthought. This white paper explores the top ten mistakes in data center operations.
Review of Four Studies Comparing Efficiency of AC and DC Distribution for Data Centers DC is proposed for use in data centers as an alternative to AC distribution primarily based on publicized claims of efficiency improvements and energy savings. This paper shows that the most widely cited values for quantitative improvements are wrong and grossly overstate the efficiency differences between AC and DC, and that the latest AC and DC systems provide effectively the same efficiency. This paper compares the results of four different publicized studies and explains the assumptions and mistakes that have led to erroneous but widely circulated beliefs about the efficiency benefits of DC power distribution.
Data Center Projects: Advantages of Using a Reference Design It is no longer practical or cost-effective to completely engineer all aspects of a unique data center. Re-use of proven, documented subsystems or complete designs is a best practice for both new data centers and for upgrades to existing data centers. Adopting a well-conceived reference design can have a positive impact both on the project itself and on the operation of the data center over its lifetime. Reference designs simplify and shorten the planning and implementation process and reduce downtime risks once up and running. In this paper reference designs are defined and their benefits are explained.
A Practical Guide to Disaster Avoidance in Mission-Critical Facilities A disaster preparedness plan is crucial to organizations operating in 24/7/365 environments. With zero disruption as the goal, management must carefully evaluate and mitigate risks to the physical infrastructure that supports the mission-critical facility. While business continuity planning typically addresses Information Technology, this paper reviews and discusses the requirements of the facility’s infrastructure as part of a comprehensive business continuity disaster plan. Without a proper disaster mitigation plan for the facility’s infrastructure, the overall business continuity plan is built on a risky foundation. If a natural, human, or technological disaster strikes your facility, are you and your infrastructure prepared? Does your organization have procedures in place to prepare for severe winter storms, earthquakes, tornadoes, hurricanes, or other disasters? Surviving tomorrow’s disaster requires planning today.
How Overhead Cabling Saves Energy in Data Centers Placing data center power and data cables in overhead cable trays instead of under raised floors can result in an energy savings of 24%. Raised floors filled with cabling and other obstructions make it difficult to supply cold air to racks. The raised floor cable cutouts necessary to provide cable access to racks and PDUs result in a cold air leakage of 35%. The cable blockage and air leakage problems lead to the need for increased fan power, oversized cooling units, increased pump power, and lower cooling set points. This paper highlights these issues, and quantifies the energy impact.
Economizer Modes of Data Center Cooling Systems In certain climates, some cooling systems can save over 70% in annual cooling energy costs by operating in economizer mode, corresponding to over 15% reduction in annualized PUE. However, there are at least 17 different types of economizer modes with imprecise industry definitions making it difficult to compare, select, or specify them. This paper provides terminology and definitions for the various types of economizer modes and compares their performance against key data center attributes.
How Monitoring Systems Reduce Human Error in Distributed Server Rooms and Remote Wiring Closets Surprise incidents of downtime in server rooms and remote wiring closets lead to sleepless nights for many IT managers. Most can recount horror stories about how bad luck, human error, or just simple incompetence brought their server rooms down. This paper analyzes several of these incidents and makes recommendations for how a basic monitoring system can help reduce the occurrence of these unanticipated events.
Estimating a Data Center’s Electrical Carbon Footprint Data center carbon emissions are a growing global concern. The U.S. Environmental Protection Agency (EPA) cites data centers as a major source of energy consumption in the United States. The EPA has set an efficiency target for government data centers: a 20% reduction in carbon footprint by 2011. European Union (EU) members have agreed to cut their combined emissions of greenhouse gases to 8% below the 1990 level by 2012. Data center owners will be increasingly challenged to report their carbon emissions. This paper introduces a simple approach, supported by free web-based tools, for estimating the carbon footprint of a data center anywhere in the world.
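The kind of estimate this paper describes can be sketched in a few lines: annual energy use scaled by a regional grid emission factor. This is a minimal illustration, not the paper's tool; the PUE and emission factor values below are assumed for the example.

```python
def annual_co2_tonnes(it_load_kw: float, pue: float,
                      emission_factor_kg_per_kwh: float) -> float:
    """Estimate a data center's annual electrical CO2 emissions in metric tonnes.

    Total facility power is approximated as IT load x PUE, run for
    8760 hours per year; emissions scale with the local grid's
    emission factor (kg CO2 per kWh), which varies by region.
    """
    annual_kwh = it_load_kw * pue * 8760
    return annual_kwh * emission_factor_kg_per_kwh / 1000  # kg -> tonnes

# Assumed example: 500 kW IT load, PUE of 1.8, grid factor 0.5 kg CO2/kWh
print(round(annual_co2_tonnes(500, 1.8, 0.5)))  # -> 3942 tonnes/year
```

The dominant uncertainty in such an estimate is the grid emission factor, which is why region-specific data (as in the web tools the paper mentions) matters more than precision in the facility-side numbers.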
The Role of Isolation Transformers in Data Center UPS Systems Most modern UPS systems do not include the internal transformers that were present in earlier designs. This evolution has increased efficiency while decreasing the weight, size, and raw materials consumption of UPS systems. In the new transformerless UPS designs, the transformers are optional and can be placed in the best location to achieve a required purpose. In older designs, transformers were typically installed in permanent positions where they provided no benefit, reduced system efficiency, or were not optimally located. This paper considers the function of transformers in UPS systems, when and how transformers should be used, and how the absence of internal transformers in newer UPS designs frequently improves data center design and performance.
Monetizing Energy Storage in the Data Center Over the last several years, the electric utility markets have seen significant changes in the mix of generation supporting the grid. In particular, traditional generating forms such as coal, nuclear, and even natural gas are being replaced by intermittent generation sources such as wind and solar. Because of the increasing penetration of these intermittent sources, and their lack of controlled dispatchability, other fast-acting energy sources are becoming even more valuable in balancing supply and demand. This is one reason why utilities are increasingly providing financial incentives to their customers to more closely balance real-time supply and demand. Due to their generous capacity of underused UPS batteries, data centers are a prime candidate to take advantage of these incentives. In this paper, we describe approaches data center operators can use to monetize their UPS energy storage and explain the modes of UPS operation required for each method.
Industry 4.0: Minimizing Downtime Risk with Resilient Edge Computing Industry 4.0 makes manufacturing “smart” through emerging technology innovations such as data analytics, autonomous robotics, and AI. These technologies drive increased productivity and performance throughout the value chain. These data-driven innovations require information technology (IT) systems deployed on premises, often referred to as edge IT or edge computing. This edge IT can increase the risk of downtime for automation systems in some cases. Choosing IT enclosures designed for manufacturing environments and investing in proper power and cooling infrastructure can address the unique challenges of edge IT deployments in manufacturing environments. In this paper, we describe manufacturing environments, the cost of downtime, and the unique challenges in deploying industrial edge IT. We also provide best practices to ensure resilient edge computing by minimizing the risk of downtime.
Switching Transients and Surge Protection for MV Transformers in Data Centers Voltage transients in MV power systems have been observed to contribute to failures of both power and instrument transformers in data centers in recent years. This white paper provides a background regarding the nature of the transient problems, as well as a discussion of factors that may put transformers at risk. Several common solutions are available to help safeguard transformers, and each of these is discussed along with some of the pros/cons of each solution type.
Considerations for Selecting a Lithium-ion Battery System for UPSs and Energy Storage Systems “Why do these batteries cost more?”, “How large are these batteries?” and “How long will these batteries last?” are some of the more common questions posed by UPS and energy storage stakeholders. These questions increase in importance and complexity as the industry transitions from VRLA to lithium-ion batteries. This paper does not teach you how to specify or design a battery system, but rather it explains the key variables that drive battery decisions. Having this knowledge also prepares you for vendor discussions, especially when you’re presented with trade-offs. Informed battery decisions lead to optimized solutions that drive long-term value.
Digital Remote Monitoring and Dispatch Services’ Impact on Edge Computing and Data Centers Power and cooling infrastructure for edge computing and data center sites has roughly 3 times more data points and notifications today than it did 10 years ago. Traditional remote monitoring services have been available for over 10 years but were not designed to support this amount of data monitoring and the associated alarms, let alone extract value from the data. This paper explains how seven trends are re-defining remote monitoring and field service dispatch service requirements and how this will lead to improvements in operations and maintenance of IT installations.
Watts and Volt-Amps: Powerful Confusion This note helps explain the differences between Watts and VA and explains how the terms are correctly and incorrectly used in specifying power protection equipment.
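The distinction the note addresses can be illustrated with a short sketch (example values are assumptions, not drawn from the note): apparent power in volt-amps is the product of RMS voltage and current, while real power in watts is apparent power scaled by the power factor.

```python
def apparent_power_va(v_rms: float, i_rms: float) -> float:
    """Apparent power in volt-amps from RMS voltage and current."""
    return v_rms * i_rms

def real_power_w(va: float, power_factor: float) -> float:
    """Real power in watts; power_factor lies between 0 and 1."""
    return va * power_factor

# Assumed example load: 5 A at 230 V with a power factor of 0.9
va = apparent_power_va(230, 5)   # -> 1150 VA
w = real_power_w(va, 0.9)        # -> 1035.0 W
print(va, w)
```

The practical consequence is that equipment sized by watts alone can be undersized in VA terms (and vice versa) whenever the power factor is below 1, which is the confusion the note untangles.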
Cooling Entire Data Centers Using Only Row Cooling Row cooling is emerging as a practical total cooling solution for new data centers due to its inherent high efficiency and predictable performance. Yet some IT equipment in data centers appears incompatible with row cooling because it is not arranged in neat rows due to the nature of the equipment or room layout constraints, suggesting the ongoing need for traditional perimeter cooling to support these loads. This paper explains how a cooling system comprised only of row coolers, with no room cooling system, can cool an entire data center, including IT devices that are not in neat rows.
Electrical Distribution Equipment in Data Center Environments IT professionals who are not familiar with the concepts, terminology, and equipment used in electrical distribution can benefit from understanding the names and purposes of equipment that support the data center, as well as the rest of the building in which the data center is located. This paper explains electrical distribution terms and equipment types and is intended to provide IT professionals with a useful vocabulary and frame of reference.
Battery Technology for Data Centers: VRLA vs. Li-ion Lithium-ion battery prices have decreased over the years and are now becoming a viable option for data center UPS. This paper provides a brief overview of li-ion batteries in comparison to VRLA batteries for static UPS applications, including optimal chemistries and technologies. A 10-year total cost of ownership (TCO) analysis is provided showing li-ion is 39% less than VRLA despite their capital cost premium. A sensitivity analysis reveals the TCO drivers. Finally, we discuss li-ion batteries for retrofit and new UPS applications and the effect of temperature on battery life, runtime, and cooling.
FAQs for Using Lithium-ion Batteries with a UPS Lithium-ion batteries offer several advantages over traditional lead acid batteries. Despite the benefits, the use of lithium-ion batteries in uninterruptible power supplies (UPSs or battery backups) is relatively new, with valve-regulated lead acid batteries still the dominant energy storage technology used today. This will likely change as Li-ion costs continue to decrease, the benefits become more widely known, and manufacturers make their UPSs compatible. This paper serves to answer common questions about Li-ion batteries and their use in UPSs.
Impact of Leading Power Factor on Data Center Generator Systems IT devices may exhibit electrical input current with a characteristic called “leading power factor”. This situation may cause back-up generators to become unstable and shut down. Furthermore, a data center that is operating correctly for a long time may suddenly develop a problem as the IT load changes over time, or during an unusual event. This means that it is important to understand the margin of safety and correct for this condition before it happens. This paper explains the problem, why and how it occurs, and how to detect and correct it.
Battery Technology for Single Phase UPS Systems: VRLA vs. Li-ion Lithium-ion battery prices have decreased over the years and are now becoming a viable option for data center UPS. This paper provides a brief overview of li-ion batteries in comparison to VRLA batteries for single-phase UPS applications. A 10-year total cost of ownership (TCO) analysis is also provided showing li-ion is 53% less than VRLA despite their capital cost premium. A sensitivity analysis reveals the TCO drivers.
High Efficiency Indirect Air Economizer-based Cooling for Data Centers Of the various economizer (free cooling) modes for data centers, using fresh air is often viewed as the most energy efficient approach. However, this paper shows how indirect air economizer-based cooling produces similar or better energy savings while eliminating risks posed when outside fresh air is allowed directly into the IT space.
Specification of Modular Data Center Architecture There is a growing consensus that conventional legacy data center design will be superseded by modular scalable data center designs. Reduced total cost of ownership, increased flexibility, reduced deployment time, and improved efficiency are all claimed benefits of modular scalable designs. Yet the term “modular”, when and where modularity is appropriate, and how to specify modularity are all poorly defined. This paper creates a framework for modular data center architecture and describes the various ways that modularity can be implemented for data center power, cooling, and space infrastructure and explains when the different approaches are appropriate and effective.
Types of Prefabricated Modular Data Centers Data center systems or subsystems that are pre-assembled in a factory are often described with terms like prefabricated, containerized, modular, skid-based, pod-based, mobile, portable, self-contained, all-in-one, and more. There are, however, important distinctions between the various types of factory-built building blocks on the market. This paper proposes standard terminology for categorizing the types of prefabricated modular data centers, defines and compares their key attributes, and provides a framework for choosing the best approach(es) based on business requirements.
The Use of Ceiling-Ducted Air Containment in Data Centers Ducting hot IT-equipment exhaust to a drop ceiling can be an effective air management strategy, improving the reliability and energy efficiency of a data center. Typical approaches include ducting either individual racks or entire hot aisles and may be passive (ducting only) or active (include fans). This paper examines available ducting options and explains how such systems should be deployed and operated. Practical cooling limits are established and best-practice recommendations are provided.
Impact of Hot and Cold Aisle Containment on Data Center Temperature and Efficiency Both hot-air and cold-air containment can improve the predictability and efficiency of traditional data center cooling systems. While both approaches minimize the mixing of hot and cold air, there are practical differences in implementation and operation that have significant consequences on work environment conditions, PUE, and economizer mode hours. The choice of hot-aisle containment over cold-aisle containment can save 43% in annual cooling system energy cost, corresponding to a 15% reduction in annualized PUE. This paper examines both methodologies and highlights the reasons why hot-aisle containment emerges as the preferred best practice for new data centers.
The Different Technologies for Cooling Data Centers There are 13 basic heat removal methods to cool IT equipment and to transport unwanted heat to the outdoor environment. This paper describes these fundamental cooling technologies using basic terms and diagrams. 11 of these methods rely on the refrigeration cycle as the primary means of cooling. Pumped refrigerant systems provide isolation between the primary heat removal system and IT equipment. The direct air and indirect air methods rely on the outdoor conditions as the primary means of cooling, making them more efficient for mild climates. The information in this paper allows IT professionals to be more involved in the specification of precision cooling solutions that better align with IT objectives.
Calculating Total Cooling Requirements for Data Centers This document describes how to estimate heat output from Information Technology equipment and other devices in a data center such as UPS, for purposes of sizing air conditioning systems. A number of common conversion factors and design guideline values are also included.
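A first-order version of the estimate this document describes can be sketched as follows. The coefficients below are illustrative assumptions, not the document's design values: total heat output is roughly the IT load plus power system losses (UPS and distribution), plus contributions from lighting and people.

```python
def total_cooling_load_w(it_load_w: float, floor_area_m2: float,
                         people: int) -> float:
    """Rough total heat output in watts for sizing air conditioning.

    Assumed illustrative coefficients (placeholders for this sketch):
      - UPS + distribution losses: ~4% fixed + ~6% proportional to IT load
      - lighting: ~21.5 W per m2 of floor area
      - people: ~100 W of sensible heat each
    Nearly all electrical power consumed ends up as heat, so the IT
    load itself passes through at 100%.
    """
    power_system = 0.04 * it_load_w + 0.06 * it_load_w
    lighting = 21.5 * floor_area_m2
    occupants = 100 * people
    return it_load_w + power_system + lighting + occupants

# Assumed example: 100 kW IT load, 200 m2 room, 4 occupants
print(total_cooling_load_w(100_000, 200, 4))
```

A calculation like this makes the key point visible: the IT load dominates, so refining the lighting and occupancy terms rarely changes the answer by more than a few percent.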
Benefits and Drawbacks of Prefabricated Modules for Data Centers Standardized, pre-assembled and integrated data center modules, also referred to in the data center industry as containerized or modular data centers, allow data center designers to shift their thinking from a customized “construction” mentality to a standardized “site integration” mentality. Prefabricated modules are faster to deploy, more predictable, and can be deployed for a similar cost to traditional stick-built data centers. This white paper compares both scenarios, presents the advantages and disadvantages of each, and identifies which environments can best leverage the prefabricated module approach.
Cost, Speed, and Reliability Tradeoffs between N+1 UPS Configurations There is an increasing trend towards N+1 UPS architectures – rather than 2N – as IT fault tolerance through software continues to improve. There are two common ways N+1 can be achieved: paralleling multiple unitary UPSs together or deploying a single UPS frame with multiple internal modules configured for N+1 redundancy. In this paper, we quantify key tradeoffs between an internal “modular” redundant UPS and parallel redundant UPSs, and show a 27% capital cost savings and a 1-2 week decrease in deployment time when internal redundancy is deployed. We also discuss the importance of fault tolerance within the UPS to ensure availability, reliability, and maintainability needs are met.
Maximizing Uptime of Critical Systems in Commercial and Industrial Applications As technology and information reach into every corner of our world, the availability of critical systems in industrial process and facility management is more important than ever. Uptime and the availability of critical process information is no longer a lofty goal, but is a necessity to remain competitive. Much has been made of uptime with respect to data centers. However, applications exist within industrial and commercial facilities that also merit “mission critical” treatment even if the larger facility as a whole is not viewed as such. Maintaining productivity and overall equipment effectiveness (OEE) requires design and operational practices that maximize uptime. This paper describes those key practices in the context of the facility life cycle.
The Unexpected Impact of Raising Data Center Temperatures Raising IT inlet temperatures is a common recommendation given to data center operators as a strategy to improve data center efficiency. While it is true that raising the temperature does result in more economizer hours, it does not always have a positive impact on the data center overall. In this paper, we provide a cost (capex & energy) analysis of a data center to demonstrate the importance of evaluating the data center holistically, inclusive of the IT equipment energy. The impact of raising temperatures on server failures is also discussed.
Five Basic Steps for Efficient Space Organization within High Density Enclosures Organizing components and cables within high density enclosures need not be a stressful, time-consuming chore. In fact, thanks to the flexibility of new enclosure designs, a standard for organizing enclosure space, including power and data cables, can be easily implemented. This paper provides a five-step roadmap for standardizing and optimizing organization within both low and high density enclosures, with special emphasis on how to plan for higher densities.
Ten Cooling Solutions to Support High-density Server Deployment High-density servers offer a significant performance per watt benefit. However, depending on the deployment, they can present a significant cooling challenge. Vendors are now designing servers that can demand over 40 kW of cooling per rack. With most data centers designed to cool an average of no more than 2 kW per rack, innovative strategies must be used for proper cooling of high-density equipment. This paper provides ten approaches for increasing cooling efficiency, cooling capacity, and power density in existing data centers.
Practical Options for Deploying Small Server Rooms and Micro Data Centers Small server rooms and branch offices are typically unorganized, unsecured, hot, unmonitored, and space constrained. These conditions can lead to system downtime or, at the very least, lead to “close calls” that get management’s attention. Practical experience with these problems reveals a short list of effective methods to improve the availability of IT operations within small server rooms and branch offices. This paper discusses making realistic improvements to power, cooling, racks, physical security, monitoring, and lighting. The focus of this paper is on small server rooms and branch offices with up to 10kW of IT load.
Standardization and Modularity in Data Center Physical Infrastructure Failure to adopt modular standardization as a design strategy for data center physical infrastructure (DCPI) is costly on all fronts: unnecessary expense, avoidable downtime, and lost business opportunity. Standardization and its close relative, modularity, create wide-ranging benefits in DCPI that streamline and simplify every process from initial planning to daily operation, with significant positive effects on all three major components of DCPI business value – availability, agility, and total cost of ownership.
Virtualization and Cloud Computing: Optimized Power, Cooling, and Management Maximizes Benefits IT virtualization, the engine behind cloud computing, can have significant consequences on the data center physical infrastructure (DCPI). Higher power densities that often result can challenge the cooling capabilities of an existing system. Reduced overall energy consumption that typically results from physical server consolidation may actually worsen the data center’s power usage effectiveness (PUE). Dynamic loads that vary in time and location may heighten the risk of downtime if rack-level power and cooling health are not understood and considered. Finally, the fault-tolerant nature of a highly virtualized environment could raise questions about the level of redundancy required in the physical infrastructure. These particular effects of virtualization are discussed and possible solutions or methods for dealing with them are offered.
Guidelines for Specification of Data Center Power Density Conventional methods for specifying data center density are ambiguous and misleading. Describing data center density using watts/ft² or watts/m² is not sufficient to determine power or cooling compatibility with high density computing loads like blade servers. Historically there is no clear standard way of specifying data centers to achieve predictable behavior with high density loads. An appropriate specification for data center density should assure compatibility with anticipated high density loads, provide unambiguous instruction for design and installation of power and cooling equipment, prevent oversizing, and maximize electrical efficiency. This paper describes the science and practical application of an improved method for the specification of power and cooling infrastructure for data centers.
Fundamental Principles of Air Conditioners for Information Technology Every Information Technology professional who is responsible for the operation of computing equipment needs to understand the function of air conditioning in the data center or network room. This introductory paper explains the function of basic components of an air conditioning system for a computer room. The concepts presented here are a foundation for allowing IT professionals to successfully specify, install, and operate critical facilities.
Understanding EPO and its Downtime Risks An Emergency Power Off (EPO) system is a control mechanism, formally known as a “disconnecting means.” It is intended to power down a single piece of electronic equipment or an entire installation from a single point. EPO is employed in many applications such as industrial processes and information technology (IT). This white paper describes the advantages and disadvantages of EPO for protecting data centers and small IT equipment rooms containing uninterruptible power supply (UPS) systems. Various codes and standards that require EPO are discussed. Recommended practices are suggested for the use of EPO with UPS systems.
A Quantitative Comparison of High Efficiency AC vs. DC Power Distribution for Data Centers This paper presents a detailed quantitative efficiency comparison between the most efficient DC and AC power distribution methods, including an analysis of the effects of power distribution efficiency on the cooling power requirement and on total electrical consumption. The latest high efficiency AC and DC power distribution architectures are shown to have virtually the same efficiency, suggesting that a move to a DC-based architecture is unwarranted on the basis of efficiency.
Humidification Strategies for Data Centers and Network Rooms The control of humidity in Information Technology environments is essential to achieving high availability. This paper explains how humidity affects equipment and why humidity control is required. Quantitative design guidelines for existing and new computing installations are discussed. Alternative methods to achieve desired humidity are described and contrasted. The difficult issue of how and where humidity should be measured is explained. The hidden costs associated with over-humidification are described.
Data Center Projects: Establishing a Floor Plan A floor plan strongly affects the power density capability and electrical efficiency of a data center. Despite this critical role in data center design, many floor plans are established through incremental deployment without a central plan. Once a poor floor plan has been deployed, it is often difficult or impossible to recover the resulting loss of performance. This paper provides structured floor plan guidelines for defining room layouts and for establishing IT equipment layouts within existing rooms.
Guidance for Calculation of Efficiency (PUE) in Data Centers Before data center infrastructure efficiency can be benchmarked using PUE or other metrics, there must be agreement on exactly what power consumptions constitute IT loads, what consumptions constitute physical infrastructure, and what loads should not be counted. Unfortunately, commonly published efficiency data is not computed using a standard methodology, and the same data center will have different efficiency ratings when different methodologies are applied. This paper explains the problem and describes a standardized method for classifying data center loads for efficiency calculations.
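The metric itself is a simple ratio; the difficulty this paper addresses is classifying which loads count where. A minimal sketch, assuming the classification has already been done:

```python
def pue(it_load_kw: float, infrastructure_kw: float) -> float:
    """PUE = total facility power / IT load power.

    Assumes every metered load has already been classified as either
    IT load or physical infrastructure -- which is the hard,
    methodology-dependent step the paper discusses. Mixed or excluded
    loads (e.g. shared building systems) are the usual source of
    inconsistent ratings between facilities.
    """
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return (it_load_kw + infrastructure_kw) / it_load_kw

# Assumed example: 800 kW of IT load, 600 kW of power/cooling overhead
print(pue(800, 600))  # -> 1.75
```

The same facility classified under a different methodology (say, counting a shared chiller plant partly as building load) would report a different PUE from identical meter readings, which is exactly the inconsistency the paper's standardized load classification aims to remove.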
Data Center Temperature Rise During a Cooling System Outage The data center architecture and its IT load significantly affect the amount of time available for continued IT operation following a loss of cooling. Some data center trends such as increasing power density, warmer supply temperatures, the “right-sizing” of cooling equipment, and the use of air containment may actually increase the rate at which data center temperatures rise. However, by placing critical cooling equipment on backup power, choosing equipment with shorter restart times, maintaining adequate reserve cooling capacity, and employing thermal storage, power outages can be managed in a predictable manner. This paper discusses the primary factors that affect transient temperature rise and provides practical strategies to manage cooling during power outages.
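The transient behavior the paper analyzes can be bounded with a crude lumped-capacitance sketch. This is an assumed worst-case model, not the paper's analysis: it treats all heat as going into the room air, ignoring the thermal mass of walls, floor, and equipment that slows the rise in practice.

```python
def temp_rise_c(heat_load_kw: float, air_mass_kg: float,
                minutes: float) -> float:
    """Idealized room air temperature rise after total cooling loss.

    Lumped model: dT = Q * t / (m * cp), where cp of air is ~1.006
    kJ/(kg*K). Real rooms rise more slowly because building and
    equipment thermal mass absorbs part of the heat, but containment
    can shrink the effective air mass and speed the rise -- one of the
    trends the paper flags.
    """
    cp_air = 1.006  # kJ/(kg*K)
    return heat_load_kw * minutes * 60 / (air_mass_kg * cp_air)

# Assumed example: 100 kW load, ~1200 kg of room air, 1 minute outage
print(round(temp_rise_c(100, 1200, 1), 1))
```

Even this rough bound shows why high-density, contained designs can exceed allowable inlet temperatures within a minute or two of a cooling loss, motivating the paper's strategies of backup-powered cooling and thermal storage.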
Reliability Analysis of the APC InfraStruXure Power System The APC InfraStruXure product line offers an alternative architecture to the central UPS. MTechnology, Inc. used the techniques of Probabilistic Risk Assessment (PRA) to evaluate the reliability of the 40 kW InfraStruXure UPS and PDU with static bypass. The calculations considered the performance of the InfraStruXure in both ideal and real-world conditions. The study also compared the performance of the InfraStruXure architecture to that of a central UPS serving a hypothetical 500 kW critical load in a data center. The results showed that the InfraStruXure architecture was significantly less likely to suffer failure of all loads in the data center, and slightly less likely to experience failure in any one piece of IT equipment. This paper summarizes the key findings of MTechnology’s quantitative risk assessment and discusses their implications for facility managers and designers.
Avoiding AC Capacitor Failures in Large UPS Systems Most AC power capacitor failures experienced in large UPS systems are avoidable. Capacitor failures can give rise to UPS failure and can in some cases cause critical load drops on stand-alone and paralleled systems. AC capacitor failures have historically been ascribed to unavoidable random failure or supplier defect. However, recent advances in the science of capacitor reliability analysis show that capacitor failures can be controlled by system design. This paper explains AC capacitor failure mechanisms and demonstrates how UPS designers and specifiers can avoid most common AC capacitor failures and the associated consequences.
Eco-mode: Benefits and Risks of Energy-saving Modes of UPS Operation Many newer UPS systems have an energy-saving operating mode known as “eco-mode” or by some other descriptor. Nevertheless, surveys show that virtually no data centers actually use this mode, because of the known or anticipated side-effects. Unfortunately, the marketing materials for these operating modes do not adequately explain the cost / benefit tradeoffs. This paper shows that eco-mode provides a reduction of approximately 2% in data center energy consumption and explains the various limitations and concerns that arise from eco-mode use. Situations where these operating modes are recommended and contra-indicated are also described.
Power Protection for Digital Medical Imaging and Diagnostic Equipment Medical imaging and diagnostic equipment (MIDE) is increasingly being networked to Picture Archiving and Communications Systems (PACS), Radiology Information Systems (RIS), Hospital Information Systems (HIS), and getting connected to the hospital intranet as well as the Internet. Failing to implement the necessary physical infrastructure can result in unexpected downtime, and safety and compliance issues, which translate into lost revenue and exposure to expensive litigation, negatively affecting the bottom line. This paper explains how to plan for physical infrastructure when deploying medical imaging and diagnostic equipment, with emphasis on power and cooling.
Raised Floors vs Hard Floors for Data Center Applications Raised floors were once a standard feature of data centers, but over time a steadily growing fraction of data centers are built on hard floors. Many of the traditional reasons for the raised floor no longer exist, and some of the costs and limitations that a raised floor creates are avoidable by using hard-floor designs. This paper discusses factors to consider when determining whether a data center should use a raised floor or a hard floor design.
Guidelines for Specification of Data Center Criticality / Tier Levels A framework for benchmarking a future data center’s operational performance is essential for effective planning and decision making. Currently available criticality or tier methods do not provide defensible specifications for validating data center performance. An appropriate specification for data center criticality should provide unambiguous defensible language for the design and installation of a data center. This paper analyzes and compares existing tier methods, describes how to choose a criticality level, and proposes a defensible data center criticality specification. Maintaining a data center’s criticality is also discussed.
Comparing Data Center Power Distribution Architectures Significant improvements in efficiency, power density, power monitoring, and reconfigurability have been achieved in data center power distribution, increasing the options available for data centers. This paper compares five power distribution approaches including panelboard distribution, field-wired PDU distribution, factory-configured PDU distribution, floor-mount modular power distribution, and modular busway, and describes their advantages and disadvantages. Guidance is provided on selecting the best approach for specific applications and constraints.
Data Center Projects: System Planning Planning a data center physical infrastructure project need not be a time-consuming or frustrating task. Experience shows that if the right issues are resolved in the right order by the right people, vague requirements can be quickly translated into a detailed design. This paper outlines practical steps that can cut costs by simplifying and shortening the planning process while improving the quality of the plan.
Types of Electrical Meters in Data Centers There are several different types of meters that can be designed into a data center, ranging from high precision power quality meters to embedded meters (i.e. in a UPS or PDU). Each has different core functions and applications. This white paper provides guidance on the types of meters that might be incorporated into a data center design, explains why they should be used, and discusses the advantages and disadvantages of each. Example data centers are presented to illustrate where the various meters are likely to be deployed.
Site Selection for Mission Critical Facilities When selecting a new site or evaluating an existing site, there are dozens of risk factors that must be considered if optimal availability is to be obtained. Geographic, site-related, building, and economic risks need to be understood and mitigated to lessen the downtime effects on your business. In this paper guidelines are established for selecting a new site or assessing an existing one. Common risks that affect the availability of a business are defined and techniques for minimizing these risks are presented.
Choosing Between Room, Row, and Rack-based Cooling for Data Centers Latest-generation high-density and variable-density IT equipment creates conditions that traditional data center cooling was never intended to address, resulting in cooling systems that are oversized, inefficient, and unpredictable. Room, row, and rack-based cooling methods have been developed to address these problems. This paper describes these improved cooling methods and provides guidance on when to use each type in next-generation data centers.
The Top 9 Mistakes in Data Center Planning Why do so many data center builds and expansions fail? This white paper answers the question by revealing the top 9 mistakes organizations make when designing and building new data center space, and examines an effective way to achieve success through the Total Cost of Ownership (TCO) approach.
Classification of Data Center Infrastructure Management (DCIM) Tools Data centers today lack a formal system for classifying software management tools. As a result, confusion exists regarding which management systems are necessary and which are optional for secure and efficient data center operation. This paper divides the realm of data center management tools into four distinct subsets and compares the primary and secondary functions of key subsystems within these subsets. With a classification system in place, data center professionals can begin to determine which physical infrastructure management tools they need – and don’t need – to operate their data centers.
Power and Cooling Capacity Management for Data Centers High density IT equipment stresses the power density capability of modern data centers. Installation and unmanaged proliferation of this equipment can lead to unexpected problems with power and cooling infrastructure including overheating, overloads, and loss of redundancy. The ability to measure and predict power and cooling capability at the rack enclosure level is required to ensure predictable performance and optimize use of the physical infrastructure resource. This paper describes the principles for achieving power and cooling capacity management.
How Data Center Infrastructure Management Software Improves Planning and Cuts Operational Costs Business executives are challenging their IT staffs to convert data centers from cost centers into producers of business value. Data centers can make a significant impact to the bottom line by enabling the business to respond more quickly to market demands. This paper demonstrates, through a series of examples, how data center infrastructure management software tools can simplify operational processes, cut costs, and speed up information delivery.
Battery Technology for Data Centers and Network Rooms: Ventilation of Lead-Acid Batteries Lead-acid batteries are the most widely used method of energy reserve. Ventilation systems must address health and safety as well as performance of the battery and other equipment in a room. Valve regulated lead acid (VRLA) batteries and modular battery cartridges (MBC) do not require special battery rooms and are suitable for use in an office environment. Air changes designed for human occupancy normally exceed the requirements for VRLA and MBC ventilation. Vented (flooded) batteries, which release hydrogen gas continuously, require a dedicated battery room with ventilation separate from the rest of the building. This paper summarizes some of the factors and U.S. codes to consider when selecting and sizing a ventilation system for a facility in which stationary batteries are installed.
Battery Technology for Data Centers and Network Rooms: U.S. Fire Safety Codes Related to Lead Acid Batteries Fire safety regulations and their application to UPS battery installations are reviewed. In some cases, fire codes do not clearly recognize improvements in battery safety resulting from changing battery technology. Valve Regulated Lead Acid (VRLA) batteries are frequently deployed within data centers and network rooms without the need for the elaborate safety systems that are required for Vented (Flooded) Lead Acid batteries. Proper interpretation of the fire codes is essential in the design and implementation of data centers and network rooms.
Determining the Power, Cooling, and Space Capacities when Consolidating Data Centers When planning the consolidation of multiple data centers into existing data center(s), it is often difficult to establish the various capacities and capabilities of each site’s physical infrastructure. This information is a key input to deciding which site(s) will become the “receiving” data center(s). This paper describes how to specify these requirements in standard terms and how to establish current conditions and future capabilities of each data center involved in a consolidation project.
Effect of UPS on System Availability This white paper explains how system availability and uptime are affected by AC power outages and provides quantitative data regarding uptime in real-world environments, including the effect of UPS on uptime.
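The availability relationship this kind of analysis rests on can be sketched numerically. A minimal sketch follows, assuming the standard steady-state formula A = MTBF / (MTBF + MTTR); the outage rates and restore times are illustrative assumptions, not data from the paper.

```python
# Illustrative sketch of steady-state availability from MTBF and MTTR.
# All input figures are hypothetical examples.

HOURS_PER_YEAR = 8760

def availability(mtbf_hours, mttr_hours):
    """Steady-state availability A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def annual_downtime_hours(a):
    """Expected downtime per year for a given availability."""
    return (1 - a) * HOURS_PER_YEAR

# Raw utility feed: assume ~4 outages/year, ~1.5 h average restore time.
a_utility = availability(HOURS_PER_YEAR / 4, 1.5)

# Same feed behind a UPS that rides through all but extended outages:
# assume the uncovered-event rate drops to one event every 10 years.
a_with_ups = availability(10 * HOURS_PER_YEAR, 1.5)

print(f"utility alone: {a_utility:.5f}, "
      f"~{annual_downtime_hours(a_utility):.1f} h down/yr")
print(f"with UPS:      {a_with_ups:.6f}, "
      f"~{annual_downtime_hours(a_with_ups):.3f} h down/yr")
```

Even with these rough assumptions, the exercise shows why a UPS moves availability by orders of magnitude: it attacks the outage frequency term rather than the repair-time term.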
Preparing the Physical Infrastructure of Receiving Data Centers for Consolidation The consolidation of one or more data centers into an existing data center is a common occurrence. This paper gives examples of what is becoming a standard architecture for preparing the physical infrastructure in the receiving data center. This approach allows for shorter timelines and high efficiency while avoiding the commonly expected difficulties and complexities often experienced with consolidation projects.
Allocating data center energy costs and carbon to IT users Are complicated software and instrumentation needed to measure and allocate energy costs and carbon to IT users? Or can we get by with simple, low cost methods for energy cost and carbon allocation? How precise do we need to be? This paper provides an overview of energy cost and carbon allocation strategies and their precision. We show that it is both easy and inexpensive for any data center, large or small, new or old, to get started allocating costs and carbon, but the expense and complexity escalate and ROI declines when excessive precision is specified.
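The "simple, low cost" end of the allocation spectrum can be sketched in a few lines: scale each user's metered IT energy by the site's PUE to capture cooling and distribution overhead, then apply a tariff and an emission factor. The tariff, PUE, emission factor, and user readings below are all illustrative assumptions, not figures from the paper.

```python
# Minimal sketch of proportional energy-cost and carbon allocation.
# All numeric inputs are hypothetical examples.

TARIFF_PER_KWH = 0.12    # $/kWh, assumed utility rate
PUE = 1.6                # assumed site power usage effectiveness
KG_CO2_PER_KWH = 0.4     # assumed grid emission factor

def allocate(it_kwh_by_user):
    """Scale each user's metered IT energy by PUE to fold in site
    overhead (cooling, distribution losses), then price and carbon-rate it."""
    out = {}
    for user, it_kwh in it_kwh_by_user.items():
        site_kwh = it_kwh * PUE
        out[user] = {
            "site_kwh": site_kwh,
            "cost_usd": site_kwh * TARIFF_PER_KWH,
            "kg_co2": site_kwh * KG_CO2_PER_KWH,
        }
    return out

bills = allocate({"app-team": 10_000, "analytics": 2_500})
for user, b in bills.items():
    print(f"{user}: {b['site_kwh']:,.0f} kWh, "
          f"${b['cost_usd']:,.2f}, {b['kg_co2']:,.0f} kg CO2")
```

A scheme this simple needs only per-user IT metering and one site-level PUE figure; pushing precision further (per-circuit instrumentation, time-of-use tariffs) is exactly where the paper argues that cost escalates and ROI declines.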
Data Center Physical Infrastructure: Optimizing Business Value To stay competitive in today’s rapidly changing business world, companies must update the way they view the value of their investment in data center physical infrastructure (DCPI). No longer are simply availability and upfront cost sufficient to make adequate business decisions. Agility, or business flexibility, and low total cost of ownership have become equally important to companies that will succeed in a changing global marketplace.
Data Center Projects: Project Management In data center design/build projects, flaws in project management and coordination are a common – but unnecessary – cause of delays, expense, and frustration. The ideal is for project management activities to be structured and standardized like interlocking building blocks, so all parties can communicate with a common language, avoid responsibility gaps and duplication of effort, and achieve an efficient process with a predictable outcome. This paper presents a framework for project management roles and relationships that is understandable, comprehensive, and adaptable to any size project.
Data Center Projects: Standardized Process As the design and deployment of data center physical infrastructure moves away from art and more toward science, the benefits of a standardized and predictable process are becoming compelling. Beyond the ordering, delivery, and installation of hardware, any build or upgrade project depends critically upon a well-defined process as insurance against surprises, cost overruns, delays, and frustration. This paper presents an overview of a standardized, step-by-step process methodology that can be adapted and configured to suit individual requirements.
Electrical Efficiency Measurement for Data Centers Data center electrical efficiency is rarely planned or managed. The unfortunate result is that most data centers waste substantial amounts of electricity. Today it is both possible and prudent to plan, measure, and improve data center efficiency. In addition to reducing electrical consumption, efficiency improvements can gain users higher IT power densities and the ability to install more IT equipment in a given installation. This paper explains how data center efficiency can be measured, evaluated, and modeled, including a comparison of the benefits of periodic assessment vs. continuous monitoring.
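The core metrics such a measurement program reports can be sketched directly. The sketch below uses the standard definitions of PUE and its reciprocal DCiE; the meter readings are hypothetical examples, not measurements from the paper.

```python
# Sketch of the basic facility-efficiency metrics.
# Input readings are hypothetical examples.

def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness: total facility power / IT power."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

def dcie(total_facility_kw, it_load_kw):
    """Data Center infrastructure Efficiency: reciprocal of PUE, in %."""
    return 100.0 * it_load_kw / total_facility_kw

# A single spot reading (periodic assessment) vs. a series of readings
# (continuous monitoring) can paint different pictures, since losses
# and cooling load vary with IT load and weather.
readings = [(800, 500), (760, 480), (900, 520)]  # (facility kW, IT kW)
weighted = sum(f for f, _ in readings) / sum(i for _, i in readings)
print(f"spot PUE: {pue(800, 500):.2f}, energy-weighted PUE: {weighted:.2f}")
```

The gap between the spot value and the weighted value in this toy series is small, but over a real year of seasonal data it is precisely the gap that motivates continuous monitoring over a one-time assessment.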
Energy Impact of Increased Server Inlet Temperature The quest for efficiency improvement raises questions regarding the optimal air temperature for data centers. The ASHRAE TC-9.9 committee has recently adopted an extension of the recommended thermal envelope for server inlet temperature and humidity. A popular hypothesis suggests that total energy demands should diminish as the server inlet temperatures increase. This paper tests that hypothesis through the development of a composite power consumption baseline for a mixture of servers as a function of inlet temperature and applying this data to a variety of cooling architectures.
Explanation of Cooling and Air Conditioning Terminology for IT Professionals As power densities continue to increase in today’s data centers, heat removal is becoming a greater concern for the IT professional. Unfortunately, air conditioning terminology routinely used in the cooling industry is unnecessarily complicated. This complexity makes it difficult and frustrating for IT professionals to specify cooling requirements and even makes it difficult to discuss current cooling system performance with contractors, engineers, and maintenance personnel. This paper explains cooling terms in common language, providing an essential reference for IT professionals and data center operators.
Harmonic Currents in the Data Center: A Case Study This document provides an overview of how problems related to harmonic neutral currents are mitigated by load diversity, with specific focus on Information Technology data center environments. Detailed measurements of an actual operating data center are presented. This case study illustrates the way that load diversity mitigates harmonic current levels, lowers shared neutral current in multi-wire feeders and branch circuits, and improves total circuit power factor.
Reliability Models for Electric Power Systems This white paper explains the sources of downtime in electric power systems and provides an explanation for site-to-site variations in power availability. The factors affecting power quality from generation to the utilization point are summarized. There is a qualitative description of a model, which can be combined with data to provide a method for estimating down time based on site-related factors.
Improving Rack Cooling Performance Using Airflow Management Blanking Panels Unused vertical space in open frame racks and rack enclosures creates an unrestricted recirculation of hot air that causes equipment to heat up unnecessarily. The use of airflow management blanking panels can reduce this problem. This paper explains and quantifies the effects of airflow management blanking panels on cooling system performance.
Determining Total Cost of Ownership for Data Center and Network Room Infrastructure An improved method for measuring total cost of ownership (TCO) of data center and network room physical infrastructure and relating these costs to the overall Information Technology infrastructure is described, with examples. The cost drivers of TCO are quantified. The largest cost driver is shown to be unnecessary unabsorbed costs resulting from the oversizing of the infrastructure.
Inter-system Ground Noise: Causes and Effects Many power-related problems are the result of Inter-System Ground Noise. This problem cannot be corrected using typical AC-only power protection equipment. The cause and solution of Inter-System Ground Noise problems are described.
Strategies for Deploying Blade Servers in Existing Data Centers When blade servers are densely packed, they can exceed the power and cooling capacities of almost all traditional data centers. This paper explains how to evaluate the options and select the best power and cooling approach for a successful and predictable blade deployment.
Rack Powering Options for High Density in 230VAC Countries Alternatives for providing electrical power to high density racks in data centers and network rooms are explained and compared. Issues addressed include quantity of feeds, single-phase vs. three-phase, number and location of circuit breakers, overload, selection of connector types, selection of voltage, redundancy, and loss of redundancy. The need for the rack power system to adapt to changing requirements is identified and quantified. Guidelines are defined for rack power systems that can reliably deliver power to high density loads while adapting to changing needs.
Cooling Audit for Identifying Potential Cooling Problems in Data Centers The compaction of Information Technology equipment and simultaneous increases in processor power consumption are creating challenges for data center managers in ensuring adequate distribution of cool air, removal of hot air and sufficient cooling capacity. This paper provides a checklist for assessing potential problems that can adversely affect the cooling environment within a data center.
Implementing Energy Efficient Data Centers Electricity usage costs have become an increasing fraction of the total cost of ownership (TCO) for data centers. It is possible to dramatically reduce the electrical consumption of typical data centers through appropriate design of the network-critical physical infrastructure and through the design of the IT architecture. This paper explains how to quantify the electricity savings and provides examples of methods that can greatly reduce electrical power consumption.
The Different Types of UPS Systems There is much confusion in the marketplace about the different types of UPS systems and their characteristics. This paper defines each UPS type, discusses practical applications of each, and lists their advantages and disadvantages. With this information, an educated decision can be made as to the appropriate UPS topology for a given need.
Avoiding Costs From Oversizing Data Center and Network Room Infrastructure The physical and power infrastructure of data centers and network rooms is typically oversized by more than 100%. Statistics related to oversizing are presented. The costs associated with oversizing are quantified. The fundamental reasons why oversizing occurs are discussed. An architecture and method for avoiding oversizing is described.
Grounding and the Use of the Signal Reference Grid in Data Centers Signal reference grids are automatically specified and installed in data centers despite the fact that they are no longer needed by modern IT equipment. Even when installed, they are typically used incorrectly. This paper explains the origins of the signal reference grid, the operating principles and limitations, and why they no longer are needed.
Technical Comparison of On-line vs. Line-interactive UPS Designs UPS systems below 5000 VA are available in two basic designs: line-interactive or double-conversion on-line. This paper describes the advantages and disadvantages of each topology and addresses some common misconceptions about real-world application requirements.
Preventive Maintenance Strategy for Data Centers In the broadening data center cost-saving and energy efficiency discussion, data center physical infrastructure preventive maintenance (PM) is sometimes neglected as an important tool for controlling TCO and downtime. PM is performed specifically to prevent faults from occurring. IT and facilities managers can improve systems uptime through a better understanding of PM best practices. This white paper describes the types of PM services that can help safeguard the uptime of data centers and IT equipment rooms. Various PM methodologies and approaches are discussed. Recommended practices are suggested.
Deploying High-Density Pods in a Low-Density Data Center Simple and rapid deployment of self-contained, high-density pods within an existing or new low-density data center is possible with today’s power and cooling technology. The independence of these high-density pods allows for predictable and reliable operation of high-density equipment without a negative impact on the performance of existing low-density power and cooling infrastructure. A side benefit is that these high-density pods operate at much higher electrical efficiency than conventional designs. Guidance on planning, design, implementation, and predictable operation of high-density pods is provided.
Calculating Total Power Requirements for Data Centers Part of data center planning and design is to align the power and cooling requirements of the IT equipment with the capacity of infrastructure equipment to provide it. This paper presents methods for calculating power and cooling requirements and provides guidelines for determining the total electrical power capacity needed to support the data center, including IT equipment, cooling equipment, lighting, and power backup.
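The shape of such a calculation can be sketched as a running sum: IT load, plus power-path losses, plus cooling, plus lighting, with headroom on top. Every ratio in the sketch below (loss fraction, cooling ratio, lighting density, headroom) is an illustrative assumption, not a guideline from the paper.

```python
# Rough capacity-sizing sketch: total electrical capacity = IT load
# + power-path losses + cooling + lighting, with growth headroom.
# All ratios below are illustrative assumptions.

def total_power_kw(it_load_kw,
                   ups_loss_frac=0.10,      # assumed UPS + distribution losses
                   cooling_frac=0.70,       # assumed cooling kW per kW removed
                   lighting_kw_per_m2=0.02, # assumed lighting density
                   floor_area_m2=500,
                   headroom_frac=0.20):     # assumed growth margin
    power_path = it_load_kw * (1 + ups_loss_frac)   # IT load plus losses
    cooling = power_path * cooling_frac             # heat must be removed
    lighting = lighting_kw_per_m2 * floor_area_m2
    subtotal = power_path + cooling + lighting
    return subtotal * (1 + headroom_frac)

print(f"{total_power_kw(100):.1f} kW service capacity "
      f"for a 100 kW IT load (assumed ratios)")
```

The point of the structure, rather than the particular numbers, is that the utility service must be sized well above the IT nameplate: here a 100 kW IT load implies well over twice that in total electrical capacity.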
Monitoring Physical Threats in the Data Center Traditional methodologies for monitoring the data center environment are no longer sufficient. With technologies such as blade servers driving up cooling demands and regulations such as Sarbanes-Oxley driving up data security requirements, the physical environment in the data center must be watched more closely. While well understood protocols exist for monitoring physical devices such as UPS systems, computer room air conditioners, and fire suppression systems, there is a class of distributed monitoring points that is often ignored. This paper describes this class of threats, suggests approaches to deploying monitoring devices, and provides best practices in leveraging the collected data to reduce downtime.
Airflow Uniformity Through Perforated Tiles in a Raised-Floor Data Center Perforated tiles on a raised floor often deliver substantially more or less airflow than expected, resulting in inefficiencies and even equipment failure due to inadequate cooling. In this paper, the impact of data center design parameters on perforated tile airflow is quantified and methods of improving airflow uniformity are discussed. This paper was written jointly by APC and IBM for the ASME InterPACK ’05 conference.
Energy Efficient Cooling for Data Centers: A Close-Coupled Row Solution The trend of increasing heat densities in data centers has held consistent with advances in computing technology for many years. As power density increased, it became evident that the degree of difficulty in cooling these higher power loads was also increasing. In recent years, traditional cooling system design has proven inadequate to remove concentrated heat loads (20 kW per rack and higher). This has driven an architectural shift in data center cooling. The advent of a newer cooling architecture designed for these higher densities has brought with it increased efficiencies for the data center. This article discusses the efficiency benefits of row-based cooling compared to two other common cooling architectures.
Data Center Projects: Commissioning Failure to properly commission a data center leaves the door wide open for expensive and disruptive downtime that could have been avoided. Integrated commissioning of all physical infrastructure components assures maximum data center performance and justifies the physical infrastructure investment. This paper reviews the desired outputs and identifies the standard inputs of the commissioning data center project step. The commissioning process flow is described and critical success factors are discussed. The commissioning process inputs and outputs are also placed in context with other key data center project process phases and steps.
Impact of High Density Hot Aisles on IT Personnel Work Conditions The use of modern enclosed hot aisles to address increasing power densities in the data center has brought into question the suitability of working conditions in these hot aisle environments. In this paper, it is determined that the additional heat stress imposed by such high density IT environments is of minimal concern.
Improved Chilled Water Piping Distribution Methodology for Data Centers Chilled water remains a popular cooling medium; however, leaks in the piping systems are a threat to system availability. High density computing creates the need to bring chilled water closer than ever before to the IT equipment, prompting the need for new high-reliability piping methods. This paper discusses new piping approaches that can dramatically reduce the risk of leakage and facilitate high density deployment. Alternative piping approaches and their advantages over traditional piping systems are described.
Creating Order from Chaos in Data Centers and Server Rooms Data center professionals can rid themselves of messy racks, sub-standard under floor air distribution, and cable sprawl with a minimum of heartache and expense. Whether the data center mess is created over years of mismanagement or whether the cable-choked data center is inherited, solutions for both quick fixes and longer term evolutionary changes exist. This paper outlines several innovative approaches for dealing with the symptoms of chaos and for eliminating the root causes of disorder.
Data Line Transient Protection Electrical transients (surges) on data lines can destroy computing equipment both in the business and home office environments. Many users appreciate the risk of power surges but overlook data line surges. This white paper explains how transients are created, how they can have devastating effects on electrical equipment, and how surge suppression devices work to help protect against them.
AC vs DC Power Distribution for Data Centers DC power distribution has been proposed as an alternative to AC power distribution in data centers, but misinformation and conflicting claims have confused the discussion. A detailed analysis and model show that many of the benefits commonly stated for DC distribution are unfounded or exaggerated. This paper explains why high efficiency AC will likely emerge as the dominant choice for data center power distribution.
Avoidable Mistakes that Compromise Cooling Performance in Data Centers and Network Rooms Avoidable mistakes that are routinely made when installing cooling systems and racks in data centers or network rooms compromise availability and increase costs. These unintentional flaws create hot-spots, decrease fault tolerance, decrease efficiency, and reduce cooling capacity. Although facilities operators are often held accountable for cooling problems, many problems are actually caused by improper deployment of IT equipment outside of their control. This paper examines these typical mistakes, explains their principles, quantifies their impacts, and describes simple remedies.
Comparing Data Center Batteries, Flywheels, and Ultracapacitors Most data center professionals choose lead-acid batteries as their preferred method of energy storage. However, alternatives to lead-acid batteries are attracting more attention as raw material and energy costs continue to increase and as governments become more vigilant regarding environmental and waste disposal issues. This paper compares several popular classes of batteries, compares batteries to both flywheels and ultracapacitors, and briefly discusses fuel cells.
Comparison of Static and Rotary UPS Much confusion exists among data center professionals when deciding whether to deploy static or rotary Uninterruptible Power Supplies (UPS) in their data centers. This paper defines both static and rotary UPS architectures, points out similarities and differences, and analyzes the advantages and disadvantages of each in data center environments.
Cooling Strategies for IT Wiring Closets and Small Rooms Cooling for IT wiring closets is rarely planned and typically only implemented after failures or overheating occur. Historically, no clear standard exists for specifying sufficient cooling to achieve predictable behavior within wiring closets. An appropriate specification for cooling IT wiring closets should assure compatibility with anticipated loads, provide unambiguous instruction for design and installation of cooling equipment, prevent oversizing, maximize electrical efficiency, and be flexible enough to work in various shapes and types of closets. This paper describes the science and practical application of an improved method for the specification of cooling for wiring closets.
Data Center Physical Infrastructure for Radio Frequency Identification (RFID) Systems Radio frequency identification (RFID) technology helps automate a variety of business processes, improving their efficiencies. It generates a huge volume of data that needs to be filtered, processed, and stored, and generally requires its own virtual local area network (VLAN). To gain all the promised benefits and return on investment of RFID, the network must be highly available. The data center physical infrastructure (DCPI) must be assessed for vulnerabilities in power, cooling, physical security, and other DCPI elements. Failing to plan for DCPI can lead to disruption of critical business processes resulting in loss of revenue and competitive advantage. This paper provides an understanding of an RFID network and its components, identifies critical DCPI locations, and explains how to plan for high availability.
Data Center VRLA Battery End-of-Life Recycling Procedures Contrary to popular belief, the recycling of lead-acid batteries, which are the most common batteries found in data centers, is one of the most successful recycling systems that the world has ever seen. Reputable battery manufacturers, suppliers, and recycling companies have teamed up to establish a mature and highly efficient lead-acid battery recycling process. This paper reviews battery end-of-life options and describes how a reputable vendor can greatly facilitate the safe disposal and recycling of VRLA lead-acid batteries.
Electrical Efficiency Modeling for Data Centers Conventional models for estimating electrical efficiency of data centers are grossly inaccurate for real-world installations. Estimates of electrical losses are typically made by summing the inefficiencies of various electrical devices, such as power and cooling equipment. This paper shows that the values commonly used for estimating equipment inefficiency are quite inaccurate. A simple, more accurate efficiency model is described that provides a rational basis to identify and quantify waste in power and cooling equipment.
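A load-dependent loss model of the kind argued for here can be sketched with three terms per device: a fixed (no-load) loss, a loss proportional to load, and a square-law loss. The coefficients below are illustrative assumptions chosen to show the effect, not values from the paper.

```python
# Sketch of a load-dependent loss model: each device's loss is a
# no-load (fixed) term, a proportional term, and a square-law term.
# Coefficients are illustrative assumptions.

def device_loss_kw(rated_kw, load_kw, fixed, prop, square):
    """Loss = fixed*rated + prop*load + square*load^2/rated."""
    return fixed * rated_kw + prop * load_kw + square * load_kw ** 2 / rated_kw

# A nameplate efficiency quoted at full load overstates efficiency at
# the partial loads real data centers actually run at, because the
# fixed term does not shrink with the load:
rated, fixed, prop, square = 100.0, 0.04, 0.01, 0.05
for load in (100.0, 30.0):  # full load vs. 30% load
    loss = device_loss_kw(rated, load, fixed, prop, square)
    eff = load / (load + loss)
    print(f"{load:5.0f} kW load -> {eff:.1%} efficient")
```

This is why summing nameplate inefficiencies is "grossly inaccurate" in the sense the abstract describes: the dominant fixed-loss term is invisible at the full-load point where nameplate figures are quoted.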
Hazards of Harmonics and Neutral Overloads This document provides an overview of problems related to harmonic currents, with a specific focus on Information Technology equipment. The way that international regulations solved these problems is described.
Mean Time Between Failure: Explanation and Standards Mean Time Between Failure (MTBF) is a reliability term used loosely throughout many industries and has become widely abused in some. Over the years the original meaning of this term has been altered, which has led to confusion and cynicism. MTBF is largely based on assumptions and on the definition of failure, and attention to these details is paramount to proper interpretation. This paper explains the underlying complexities and misconceptions of MTBF and the methods available for estimating it.
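One way to see how strongly MTBF hinges on the failure definition is to compute the usual field-data point estimate under two different definitions. The sketch below uses the common estimate MTBF = cumulative unit-hours / observed failures; the fleet size and failure counts are hypothetical.

```python
# Sketch of the field-data MTBF point estimate, showing how the
# failure definition drives the result. Field data is hypothetical.

def mtbf_hours(total_unit_hours, failures):
    """Point estimate: cumulative operating hours / observed failures."""
    if failures == 0:
        raise ValueError("no failures observed; MTBF is unbounded here")
    return total_unit_hours / failures

# Assume 10,000 units fielded for one year (8,760 h each):
unit_hours = 10_000 * 8_760

# The same field data yields wildly different MTBF figures depending
# on what counts as a "failure":
hard_failures = 50        # unit dead, load dropped
any_service_event = 2_000 # includes fan swaps, battery replacements

print(f"MTBF (load-drop definition):   {mtbf_hours(unit_hours, hard_failures):,.0f} h")
print(f"MTBF (any-service definition): {mtbf_hours(unit_hours, any_service_event):,.0f} h")
```

The two figures differ by a factor of 40 on identical data, which is exactly why a vendor's MTBF claim is meaningless without its accompanying failure definition and observation assumptions.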
Mitigating Fire Risks in Mission Critical Facilities This paper provides a clear understanding of the creation, detection, suppression, and prevention of fire within mission critical facilities. Fire codes for Information Technology environments are discussed. Best practices for increasing availability are provided.
Neutral Wire Facts and Mythology This Technical Note discusses many common misunderstandings about the function of the neutral wire and its relation to power problems. The subjects of dedicated lines, phase reversal, isolation transformers, and grounding are addressed. Various myths are described and criticized.
Performing Effective MTBF Comparisons for Data Center Infrastructure Mean Time Between Failure (MTBF) is often proposed as a key decision making criterion when comparing data center infrastructure systems. Misleading values are often provided by vendors, leaving the user incapable of making a meaningful comparison. When the variables and assumptions behind the numbers are unknown or are misinterpreted, bad decisions are inevitable. This paper explains how MTBF can be effectively used as one of several factors for specification and selection of systems, by making the assumptions explicit.
Ten Errors to Avoid When Commissioning a Data Center Data center commissioning can deliver an unbiased evaluation of whether a newly constructed data center will be an operational success or a failure. Proper execution of the commissioning process is a critical step in determining how the data center operates as an integrated system. The documentation produced as a result of commissioning is also the single, most enduring value added deliverable in a data center’s operational life. This paper outlines the ten most common errors that prevent successful execution of the commissioning process.
Cooling Solutions for Rack Equipment with Side-to-Side Airflow Equipment with side-to-side airflow presents special cooling challenges in the modern data center. Common rack enclosures and rack layouts are fundamentally incompatible with side-to-side cooling, resulting in equipment that receives supply air of excessive temperature. This paper describes the problem along with several side-effects that are not generally appreciated. Various solutions to the problem are described along with their costs and benefits.
The Seven Types of Power Problems Many mysteries of equipment failure, downtime, and software and data corruption are the result of a problematic supply of power. A further difficulty is the lack of a standard way to describe power problems. Using the IEEE standards for describing power quality problems, this white paper describes the most common types of power disturbances, what can cause them, what they can do to your critical equipment, and how to safeguard that equipment.
A Hidden Reliability Threat in UPS Static Bypass Switches IT managers will be surprised to learn that some medium and high power UPS systems on the market today (rated 50 kW and higher) use undersized static bypass switches despite their negative implications. By using a contactor or a circuit breaker in parallel with SCRs, these static bypass switches are able to use smaller, less expensive SCRs that are rated to carry less than full load current continuously. This paper shows that the availability of the UPS system is compromised when undersized static bypass switches are employed in the system. The advantages of fully rated static bypass switches are discussed.
Battery Technology for Data Centers and Network Rooms: Lead-Acid Battery Options The lead-acid battery is the predominant choice for Uninterruptible Power Supply (UPS) energy storage. Over 10 million UPSs are presently installed utilizing Flooded, Valve Regulated Lead Acid (VRLA), and Modular Battery Cartridge (MBC) systems. This paper discusses the advantages and disadvantages of these three battery technologies.
Battery Technology for Data Centers and Network Rooms: VRLA Reliability and Safety The Valve Regulated Lead-Acid (VRLA) battery is the predominant choice for small and medium-sized Uninterruptible Power Supply (UPS) energy storage. This white paper explores how the technology affects overall battery life and system reliability. It will examine the expected performance, life cycle factors, and failure mechanisms of VRLA batteries.
Modular Systems: The Evolution of Reliability Nature proved early on that in complex systems, modular designs are the ones that survive and thrive. An important contributor to this success is the critical reliability advantage of fault tolerance, in which a modular system can shift operation from failed modules to healthy ones while repairs are made. In data centers, modular design has already taken root in new fault-tolerant architectures for servers and storage systems. As data centers continue to evolve and borrow from nature’s blueprints, data center physical infrastructure (DCPI) must also evolve to support new strategies for survival, recovery, and growth.
Preventing Data Corruption in the Event of an Extended Power Outage Despite advances in computer technology, power outages continue to be a major cause of PC and server downtime. Protecting computer systems with Uninterruptible Power Supply (UPS) hardware is part of a total solution, but power management software is also necessary to prevent data corruption after extended power outages. Various software configurations are discussed, and best practices aimed at ensuring uptime are presented.
Reliability Analysis of the APC Symmetra MW Power System This paper is a quantitative reliability analysis of the APC Symmetra MW UPS performed by MTechnology, Inc. (MTech). In contrast to common MTBF calculations based on summing component failure rates, this study used techniques of Probabilistic Risk Assessment (PRA) to calculate the likelihood of over 680,000 potential failure modes. The mathematical method accounts for uncertainty in failure rates and component performance and provides detailed guidance as to the contribution of each system component to the overall risk of failure. The study included an exhaustive analysis of the system's architecture, component selection, control system, manufacturing practices, and response to internal and external faults. The study also included a detailed review of APC's delta conversion online topology.
Battery Technology for Data Centers and Network Rooms: Environmental Regulations Some lead-acid batteries located in data centers are subject to government environmental compliance regulations. While most commercial battery back-up systems fall below required reporting levels, very large UPS and DC plant batteries may have to comply. Failure to comply can result in costly penalties. Environmental compliance regulations focus on the amount of sulfuric acid and lead in a given location. This paper offers a high level summary of the regulations and provides a list of environmental compliance information resources.
Comparing Availability of Various Rack Power Redundancy Configurations Transfer switches and dual-path power distribution to IT equipment are used to enhance the availability of computing systems. Statistical availability analysis techniques suggest that availability differs greatly among the various methods commonly employed. This paper examines various electrical architectures for redundancy that are implemented in today's mission-critical environments. The availability analyses of these various scenarios are then performed and the results are presented. The analysis identifies which approach provides the best overall performance, and how the alternatives compare in performance and value.
Dynamic Power Variations in Data Centers and Network Rooms The power required by data centers and network rooms varies on a minute-by-minute basis depending on the computational load. The magnitude of this variation has grown and continues to grow dramatically with the deployment of power management technologies in servers and communication equipment. This variation gives rise to new problems relating to availability and management.
Physical Security in Mission Critical Facilities Physical security is critical to achieving the availability goals of mission critical facilities. Security of the data center accounts for its surroundings as well as the data processing equipment inside and the systems supporting it. In this paper, systems for providing secure facilities are recommended and best practices for physical security are explained.
Powering Single-Corded Equipment in a Dual Path Environment The use of dual power path architecture in combination with IT equipment with dual power supplies and power cords is an industry best-practice. In facilities using this approach there are inevitably some IT devices which have only a single power cord. There are a number of options for integrating single-corded devices into a high availability dual path data center. This paper explains the differences between the various options and provides a guide to selecting the appropriate approach.
Cooling Strategies for Ultra-High Density Racks and Blade Servers Rack power of 10 kW per rack or more can result from the deployment of high density information technology equipment such as blade servers. This creates difficult cooling challenges in a data center environment where the industry average rack power consumption is under 2 kW. Five strategies for deploying ultra-high power racks are described, covering practical solutions for both new and existing data centers.
Data Center Physical Infrastructure for Enterprise Wireless LANs Wireless LAN (WLAN) deployments can result in unexpected or unplanned power, cooling, management and security requirements. Most wiring closets do not have uninterruptible power supplies (UPS), and they do not provide adequate ventilation or cooling required to prevent equipment overheating. Understanding the unique data center physical infrastructure (DCPI) requirements of WLAN equipment allows planning for a successful and cost effective deployment. This paper explains how to plan for DCPI while deploying indoor WLANs in small, medium or large enterprise, with emphasis on power and cooling. Simple, fast, reliable, and cost effective strategies for upgrading old facilities or building new facilities are described.
Power and Cooling for VoIP and IP Telephony Applications Voice Over IP (VoIP) deployments can cause unexpected or unplanned power and cooling requirements in wiring closets and wiring rooms. Most wiring closets do not have uninterruptible power available, and they do not provide the ventilation or cooling required to prevent equipment overheating. Understanding the unique cooling and powering needs of VoIP equipment allows planning for a successful and cost effective VoIP deployment. This paper explains how to plan for VoIP power and cooling needs, and describes simple, fast, reliable, and cost effective strategies for upgrading old facilities and building new facilities.
Reducing the Hidden Costs Associated with Upgrades of Data Center Power Capacity Scaling the power capacity of legacy UPS systems leads to hidden costs that may outweigh the very benefit that scalability intends to provide. A scalable UPS system provides a significant benefit to the Total Cost of Ownership (TCO) of data center and network room physical infrastructure. This paper describes the drawbacks of scaling legacy UPS systems and how scalable rack-based systems address these drawbacks. The cost factors of both methods are described, quantified and compared.
How and Why Mission-Critical Cooling Systems Differ From Common Air Conditioners Today's technology rooms require precise, stable environments in order for sensitive electronics to operate optimally. Standard comfort air conditioning is ill-suited for technology rooms, leading to system shutdowns and component failures. Because precision air conditioning maintains temperature and humidity within a very narrow range, it provides the environmental stability required by sensitive electronic equipment, allowing your business to avoid expensive downtime.
Four Steps to Determine When a Standby Generator is Needed for Small Data Centers Small data centers and network rooms vary dramatically in regard to the amount of UPS runtime commonly deployed. This paper describes a rational framework for establishing backup time requirements. Tradeoffs between supplemental UPS batteries and standby generators are discussed, including a total cost of ownership (TCO) analysis to help identify which solution makes the most economic sense. The analysis illustrates that the runtime at which generators become more cost effective than batteries varies dramatically with kW and ranges from approximately 20 minutes to over 10 hours.