Tuesday, July 26, 2011

Green Data Centers

Data centers are booming information factories. They consumed 1.5% of US electricity in 2006, with usage projected to double every five years.

1. Background
2. Acronyms/Definitions
3. Business Case
4. Energy Savings Strategies
5. Benefits
6. Risks/Issues
7. Success Criteria
8. Companies
9. Links

BT’s Rochdale Data Center uses curtains to separate the hot and cold aisles, thereby reducing the amount of hot air that mixes with cold air from the plenum floor and dramatically increasing cooling efficiency. Each cabinet is also fitted with blanking panels to ensure that the only path for cold air is through a piece of equipment.


1. Background
  • Server farms, also known as data centers, are the enormous housing facilities that make the internet possible. A single Google data center in Oregon consumes as much energy as a city of 200,000. That's because servers not only have to be on 24/7, they also need to be kept cool 24/7: up to 50 percent of the power they use goes just to keep them from melting down. Overall, the internet is responsible for 2% of global carbon emissions, about the same as the aviation industry. A single data center can consume 30-100 MW of electric power, and data center energy demand may be larger than some small utilities' capacity.
  • According to the DOE, data centers are responsible for 3 percent of U.S. energy consumption, and growing, and a typical 125,000-SF data center consumes $3 million worth of energy per year. U.S. data center power use is expected to double by 2015 to add up to $7.4 billion in annual power bills, the EPA says. That could drive a fourfold increase in the green data center market to some $41.4 billion by 2015, Pike Research estimates.
  • Data centers are becoming one of the single largest industrial energy users, consuming 61.4 billion kilowatt-hours annually in the U.S. alone, and producing more than 43 million tons of CO2 each year. In addition to financial and capacity considerations, reducing data center energy use has become a priority for organizations seeking to reduce their environmental footprint.
  • The cost of powering and cooling servers in the United States alone was expected to grow from $4.5 billion in 2006 to $7.4 billion in 2011, according to the Environmental Protection Agency (EPA). However, thanks to the recession and energy efficiency technology, data centers consumed less electricity than expected between 2005 and 2010, according to a report by researcher Jonathan Koomey. While data center electricity consumption doubled between 2000 and 2005, it grew by just 56 percent globally between 2005 and 2010, and by 36 percent in the U.S. over that period.


    The results are surprising because researchers, including the EPA, had predicted that the electricity consumption of data centers would again double between 2005 and 2010, following the trend of the previous five years. The doubling trend was cut short partly because of proactive efficiency measures by data center operators and partly because of the overall macroeconomic downturn.
  • Data centers consume around 2.5 percent of the power in Northern California and the total consumed by data centers in the area has been growing by 15 percent a year.
  • Another impact of higher energy densities is that server hardware is no longer the primary cost component of a data center. The purchase price of a new (1U) server has been exceeded by the capital cost of power and cooling infrastructure to support that server. And this will soon be exceeded by the lifetime energy costs for that server alone. This represents a significant shift in data center economics that seriously challenges conventional cooling strategies.


Analysis of a typical 5000-square-foot data center shows that demand-side computing equipment accounts for 52 percent of energy usage and supply-side systems account for 48 percent.


2. Acronyms/Definitions
  1. 80 Plus - An electric utility-funded incentive program to integrate more energy-efficient power supplies into desktop computers and servers. Participating utilities and energy efficiency organizations across North America have contributed over $5 million of incentives to help the computer industry transition to 80 PLUS certified power supplies.

  2. ASE - Air Side Economizer - Can save energy in buildings by using cool outside air as a means of cooling the indoor space. When the enthalpy of the outside air is less than the enthalpy of the recirculated air, conditioning the outside air is more energy efficient than conditioning recirculated air. When the outside air is both sufficiently cool and sufficiently dry (depending on the climate), its enthalpy is low enough that no additional conditioning is needed; this portion of the air-side economizer control scheme is called free cooling (a sketch of the enthalpy comparison appears below).

    Google turns to free cooling, or outside air, whenever possible.
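
    A minimal sketch of the enthalpy comparison described in the ASE entry above. This is illustrative only, not Google's or any vendor's actual control logic; the temperatures, humidity ratios and supply setpoint are assumed values.

    # Enthalpy-based (integrated) air-side economizer decision, as a rough sketch.
    def moist_air_enthalpy(dry_bulb_c, humidity_ratio):
        """Approximate enthalpy of moist air in kJ per kg of dry air.
        dry_bulb_c: dry-bulb temperature in deg C.
        humidity_ratio: kg of water vapor per kg of dry air."""
        return 1.006 * dry_bulb_c + humidity_ratio * (2501.0 + 1.86 * dry_bulb_c)

    def economizer_mode(outside_c, outside_w, return_c, return_w, supply_setpoint_c=18.0):
        """Pick a cooling mode by comparing outside-air and return-air enthalpy."""
        h_out = moist_air_enthalpy(outside_c, outside_w)
        h_ret = moist_air_enthalpy(return_c, return_w)
        if h_out >= h_ret:
            return "mechanical cooling (recirculate return air)"
        if outside_c <= supply_setpoint_c:
            return "free cooling (100% outside air, compressors off)"
        return "partial economizing (outside air plus trim mechanical cooling)"

    # A cool, dry night outside versus warm return air coming off the hot aisle:
    print(economizer_mode(outside_c=12.0, outside_w=0.006, return_c=29.0, return_w=0.010))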


  3. Blade Servers – Stripped-down computer servers with a modular design optimized to minimize the use of physical space. Whereas a standard rack-mount server can function with just a power cord and network cable, blade servers have many components removed to save space and minimize power consumption. The blades' shared power and cooling means that a blade does not generate as much heat as a traditional server. Newer blade-enclosure designs feature high-speed, adjustable fans and control logic that tune the cooling to the system's requirements, or even liquid cooling systems. At the same time, the increased density of blade-server configurations can still result in higher overall demands for cooling when racks are more than 50 percent populated.
  4. Blanking Panels - Minimize recirculation of hot air. Each cabinet is fitted with blanking panels to ensure that the only path for cold air is through a piece of equipment. Missing cabinets and missing equipment offer an alternative route for airflow that causes hot and cold air to mix, reducing air-handling efficiency.

  5. Carbon Intensity Per Unit of Data - Content delivery network leader Akamai has been reporting the carbon emissions related to its cloud computing-based services in the metric of “CO2 per megabyte of data delivered.” Establishing this metric enables Akamai to compare cloud computing energy usage across the industry, and Akamai is also in the process of making it available on a monthly basis to its customers. Greenpeace has lauded Akamai for its reporting of this metric and gave Akamai the highest grade for transparency among 10 giant Internet players including Google, Apple and Facebook.

  6. CFD - Computational Fluid Dynamics - Used to identify inefficiencies and optimize data center airflow.

    How Google manages air flow.
  7. CRAC - Computer Room Air Conditioner

  8. CSCI - Climate Savers Computing Initiative - An effort to reduce the electric power consumption of PCs in active and inactive states.

  9. CUE - Carbon Usage Effectiveness - Similar to GPUE - Also created by the Green Grid, CUE is the ratio of the total carbon emissions due to the energy consumption of the data center (the same total energy used for PUE) to the energy consumption of the data center's servers and IT equipment (the same IT energy used for PUE). The metric is expressed in kilograms of carbon dioxide equivalent (kgCO2eq) per kilowatt-hour (kWh); a data center powered entirely by clean energy has a CUE of zero. The CUE could one day be a particularly important metric if there is a global price on carbon.

  10. DCIE - Data Center Infrastructure Efficiency - The reciprocal of PUE, expressed as a percentage that improves as it approaches 100%. It is Energy for IT Equipment / Total Energy for the Data Center.

  11. Data Deduplication - Whenever you send out a press release or a goofy office party photo, that document gets duplicated several times. There is tremendous promise for reducing power by reducing storage: deduplication saves up to 95% for full backups and 25% to 55% for most data sets.
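
    A minimal sketch of block-level deduplication (illustrative only; the block size and sample data are assumed): each unique block is stored once, keyed by its content hash, and files keep only lists of block references.

    import hashlib

    BLOCK_SIZE = 4096
    block_store = {}                 # sha256 digest -> block bytes, stored once

    def write_file(data):
        """Split data into blocks, store only unseen blocks, return the block references."""
        refs = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            block_store.setdefault(digest, block)   # duplicate blocks are stored only once
            refs.append(digest)
        return refs

    # The same press release mailed to three departments and then backed up once:
    press_release = b"Quarterly results announcement..." * 500
    copies = [write_file(press_release) for _ in range(4)]
    logical = 4 * len(press_release)
    physical = sum(len(b) for b in block_store.values())
    print(f"logical {logical} bytes, physical {physical} bytes, saved {100 * (1 - physical / logical):.0f}%")
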
  12. Double Conversion UPS – Used in most data centers, these systems convert incoming power to DC and then back to AC within the UPS. This enables the UPS to generate a clean, consistent waveform for IT equipment and effectively isolates IT equipment from the power source. UPS systems that don't convert the incoming power (line-interactive or passive standby systems) can operate at higher efficiencies because they avoid the losses associated with the conversion process. However, these systems may compromise equipment protection because they do not fully condition incoming power. Care must be taken to ensure reductions in energy consumption are not achieved at the cost of reduced equipment availability.

  13. DPM - Distributed Power Management - Migrates machines off of underutilized hosts in an effort to put the hosts into standby mode to conserve energy.
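
    A minimal sketch of the consolidation idea behind DPM. The watermarks, host names and VM loads are assumed for illustration, and this is not VMware's actual algorithm: VMs are migrated off lightly loaded hosts so those hosts can be put into standby.

    from dataclasses import dataclass, field

    @dataclass
    class Host:
        name: str
        vms: list = field(default_factory=list)   # list of (vm_name, cpu_demand in 0..1)

        @property
        def load(self):
            return sum(demand for _, demand in self.vms)

    def consolidate(hosts, low_watermark=0.30, high_watermark=0.80):
        """Greedy pass: empty hosts below the low watermark onto busier hosts."""
        standby = []
        for donor in [h for h in hosts if 0 < h.load < low_watermark]:
            for vm in list(donor.vms):
                target = next((h for h in hosts
                               if h is not donor and h not in standby
                               and h.load + vm[1] <= high_watermark), None)
                if target:
                    donor.vms.remove(vm)
                    target.vms.append(vm)
            if donor.load == 0:
                standby.append(donor)              # candidate for standby / power-off
        return standby

    cluster = [Host("esx1", [("web1", 0.10)]),
               Host("esx2", [("web2", 0.15), ("db1", 0.40)]),
               Host("esx3", [("app1", 0.05)])]
    print("hosts that can enter standby:", [h.name for h in consolidate(cluster)])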

  14. EPEAT - Electronic Product Environmental Assessment Tool - Created by the Green Electronics Council to assist in the purchase of "green" computing systems. The Council evaluates computing equipment on 28 criteria that measure a product's efficiency and sustainability attributes. On 2007-01-24, President Bush issued Executive Order 13423, which requires all United States Federal agencies to use EPEAT when purchasing computer systems.

  15. GPUE - Similar to CUE - Green Power Usage Effectiveness (GPUE) is a proposed measurement of how much sustainable energy a computer data center uses, its carbon footprint per usable kWh, and how efficiently it uses its power; specifically, how much of the power is actually used by the computing equipment (in contrast to cooling and other overhead). It is an addition to the PUE definition and was first proposed by Greenqloud. GPUE is a way to “weight” the PUE to better see which data centers are truly green in the sense that they indirectly cause the least amount of CO2 to be emitted through their use of sustainable or unsustainable energy sources.

    GPUE = G x PUEx (for inline comparison of data centers), or GPUE = G @ PUEx (a better display, and for CO2 emission calculations), where G is the weighted sum of the energy sources and their lifecycle kg CO2/kWh.
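
    A minimal sketch of the weighting idea. The lifecycle emission factors and the two supply mixes are illustrative placeholders, not Greenqloud's published figures.

    # lifecycle kg CO2e per kWh for each source (illustrative values)
    EMISSION_FACTORS = {"coal": 1.00, "natural_gas": 0.47, "hydro": 0.02,
                        "wind": 0.01, "geothermal": 0.04}

    def weighted_carbon_factor(mix):
        """G: the share-weighted lifecycle carbon intensity of the supply mix."""
        assert abs(sum(mix.values()) - 1.0) < 1e-6, "shares must sum to 1"
        return sum(share * EMISSION_FACTORS[source] for source, share in mix.items())

    def gpue(mix, pue):
        return weighted_carbon_factor(mix) * pue

    coal_heavy = {"coal": 0.6, "natural_gas": 0.3, "hydro": 0.1}
    renewables = {"hydro": 0.7, "geothermal": 0.3}
    # A very efficient facility on a dirty grid can still score far worse than a
    # less efficient facility on clean power:
    print(f"coal-heavy mix, PUE 1.2 -> GPUE {gpue(coal_heavy, 1.2):.2f}")
    print(f"renewable mix,  PUE 1.5 -> GPUE {gpue(renewables, 1.5):.2f}")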

  16. Hot-Aisle/Cold-Aisle Configuration - The arrangement of rack computer cabinets within the data center so that hot and cold air are separated into separate aisles thereby improving cooling efficiency. Data Centers typically have a raised floor or plenum floor arrangement where cold air is delivered under pressure, causing it to escape from every opening. Typically a certain number of the tiles in the floor are perforated or integrate an opening vent. Hot air rises to the top of the computer room, where it is captured by Computer Room Air Conditioners (CRAC) and chilled to be pumped back under the floor. An efficient data center ensures that as much of the cold air is drawn across hot computer parts as possible and that hot and cold air are not allowed to mix.

  17. Hot-Aisle/Cold-Aisle Containment – A variation on the above configuration that isolates hot and cold air streams so they don’t mix with one another and cause energy inefficiencies.

  18. Hypervisor - Also called virtual machine monitor (VMM) - Allows multiple operating systems to run concurrently on a host computer— a feature called hardware virtualization. The hypervisor presents the guest operating systems with a virtual platform and monitors the execution of the guest operating systems. In that way, multiple operating systems, including multiple instances of the same operating system, can share hardware resources.

  19. PAR4 - A metric developed by startup Power Assure that measures server power consumption in several ways, including idle power, peak power, total utilization power, and “transactions per watt.” Essentially, PAR4 enables servers of different makes, models and generations to be compared to one another in terms of energy efficiency; that type of detailed measurement is mostly lacking for servers today. Power Assure says PAR4 is already gaining acceptance from big players like Intel, Dell and Cisco, which have been working to incorporate PAR4 into their systems.
  20. PSU – Power Supply Unit (not to be confused with a PDU, or Power Distribution Unit) - Efficiency varies widely. Desktop PSUs are generally 70-75% efficient, dissipating the remaining energy as heat. An industry initiative called 80 PLUS certifies PSUs that are at least 80% efficient; typically these models are drop-in replacements for older, less efficient PSUs of the same form factor. As of July 20, 2007, all new Energy Star 4.0-certified desktop PSUs must be at least 80% efficient.
  21. PUE Ratio - Power Usage Effectiveness - A metric used to determine the energy efficiency of a data center. PUE is determined by dividing the total power entering a data center by the power used to run the computer infrastructure within it. PUE is therefore expressed as a ratio, with overall efficiency improving as the quotient decreases toward 1. Uptime estimates most facilities could achieve 1.6 PUE using the most efficient equipment and best practices. PUE, developed by the Green Grid, provides a measure of infrastructure efficiency, but not total facility efficiency. (A combined sketch of PUE, DCiE, CUE and WUE follows this list.)

    Data centers with legacy cooling infrastructures average a PUE of 2.25, according to the Uptime Institute. With the introduction of next-generation CRACs, AHUs and chiller systems, PUEs can be reduced to 1.25 without sacrificing temperature or humidity control. To put that in a financial context, lowering a PUE from 2.25 to 1.25 can slash spending from $0.44 per ton-hour for cooling to less than $0.05 per ton-hour.

  22. How Google looks at PUE
  23. RH – Relative Humidity - Data center air should contain the proper amount of water vapor to maximize the availability of computing equipment. Air containing too much or too little water vapor can cause failures: at high RH, condensation can form on equipment, while at low RH, static electricity can build up and discharge.

  24. RSE - Refrigerant Side Economizing - Can reduce refrigeration compressor energy by up to 100 percent. By deploying RSE solutions directly in the return air based on the outside wet-bulb temperature, rather than the dry-bulb temperature relied upon by ASE, economizing hours can be increased by as much as 50 percent compared to airside economizing and typical waterside economizing.

  25. Server Virtualization – A method of partitioning a physical server computer into multiple servers such that each has the appearance and capabilities of running on its own dedicated machine. Each virtual server can run its own full-fledged operating system, and each server can be independently rebooted.

  26. SPEC - Standard Performance Evaluation Corporation - A non-profit corporation formed to establish, maintain and endorse a standardized set of relevant benchmarks that can be applied to the newest generation of high-performance computers. SPEC develops benchmark suites and also reviews and publishes submitted results from its member organizations and other benchmark licensees. SPEC scores are commonly used as the server performance measure.

  27. TDP – Thermal Design Power - The maximum amount of power the cooling system in a computer is required to dissipate. For example, a laptop's CPU cooling system may be designed for a 20 watt TDP, which means that it can dissipate up to 20 watts of heat without exceeding the maximum junction temperature for the computer chip. In the absence of a true standard measure of processor efficiency comparable to the U.S. Department of Transportation fuel efficiency standard for automobiles, TDP serves as a proxy for server power consumption. The typical TDP of processors in use today is between 80 and 103 Watts (91W average).

  28. Thin Provisioning - A way to cut the power associated with storage. Most corporations reserve way too much storage for their needs. Reducing that number can potentially cut down power going to air conditioners and storage devices.

  29. UPS - Uninterruptible Power Supply - An electrical apparatus that provides emergency power to a load when the input power source, typically the utility mains, fails. A UPS differs from an auxiliary or emergency power system or standby generator in that it provides instantaneous or near-instantaneous protection from input power interruptions by means of one or more attached batteries and associated electronic circuitry for low-power users, and/or by means of diesel generators and flywheels for high-power users. The on-battery runtime of most uninterruptible power sources is relatively short (5-15 minutes is typical for smaller units) but sufficient to allow time to bring an auxiliary power source online or to properly shut down the protected equipment. Efficiency varies widely.

  30. VDI - Virtual Desktop Infrastructure (sometimes Virtual Desktop Interface) - The server computing model enabling desktop virtualization, encompassing the hardware and software systems required to support the virtualized environment.

  31. WUE - Water Usage Effectiveness - Also created by the Green Grid, WUE calculates how efficiently a data center uses water. It is the ratio of annual water usage to the energy consumed by the IT equipment and servers, expressed in liters per kilowatt-hour (L/kWh). Like CUE, the ideal value of WUE is zero, meaning no water was used to operate the data center.
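
A minimal sketch tying the facility-level metrics above together. The annual figures used here are illustrative assumptions, not measurements from any real site.

# PUE, DCiE, CUE and WUE computed from annual measurements.
def pue(total_facility_kwh, it_kwh):
    """Power Usage Effectiveness: total facility energy / IT equipment energy (>= 1.0)."""
    return total_facility_kwh / it_kwh

def dcie(total_facility_kwh, it_kwh):
    """Data Center Infrastructure Efficiency: the reciprocal of PUE, as a percentage."""
    return 100.0 * it_kwh / total_facility_kwh

def cue(total_co2_kg, it_kwh):
    """Carbon Usage Effectiveness in kg CO2e per kWh of IT energy."""
    return total_co2_kg / it_kwh

def wue(annual_water_liters, it_kwh):
    """Water Usage Effectiveness in liters per kWh of IT energy."""
    return annual_water_liters / it_kwh

it_kwh = 8_000_000            # annual IT equipment energy (illustrative)
facility_kwh = 12_000_000     # annual total facility energy
co2_kg = 5_400_000            # annual emissions attributable to that energy
water_l = 9_000_000           # annual water use
print(f"PUE  = {pue(facility_kwh, it_kwh):.2f}")
print(f"DCiE = {dcie(facility_kwh, it_kwh):.0f}%")
print(f"CUE  = {cue(co2_kg, it_kwh):.2f} kg CO2e/kWh")
print(f"WUE  = {wue(water_l, it_kwh):.2f} L/kWh")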


3. Business Case
  • The potential savings from more efficient data centers are enormous. Savings of 20%-40% are typically possible, with aggressive strategies producing better than 50% savings. Paybacks are short; one to three years is common. However, today most centers don't know whether they are doing well or badly.
  • Facilities operating at high utilization rates throughout a 24-hour day will want to focus initial efforts on sourcing IT equipment with low power processors and high efficiency power supplies.
  • Facilities that experience predictable peaks in activity may achieve the greatest benefit from power management technology.


Power supply efficiency can vary significantly depending on load, and power supplies are often sized for a load that exceeds the maximum server configuration. Sizing power supplies closer to actual load is another opportunity to increase efficiency. Notice that the maximum configuration is about 80% of the nameplate rating and the typical configuration is 67% of the nameplate rating.
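
As a rough illustration of that sizing effect: the efficiency curve, server load and supply ratings below are assumed values, not a measured unit, but they show how an oversized supply runs in a low-load, low-efficiency region.

def psu_efficiency(load_fraction):
    """Illustrative efficiency curve: poor below ~20% load, best near 50-80% load."""
    points = [(0.10, 0.60), (0.20, 0.78), (0.50, 0.88), (0.80, 0.90), (1.00, 0.87)]
    lo = max((p for p in points if p[0] <= load_fraction), default=points[0])
    hi = min((p for p in points if p[0] >= load_fraction), default=points[-1])
    if lo[0] == hi[0]:
        return lo[1]
    t = (load_fraction - lo[0]) / (hi[0] - lo[0])
    return lo[1] + t * (hi[1] - lo[1])

def wall_power(dc_load_w, psu_rating_w):
    """AC draw at the wall for a given DC load on a supply with the given nameplate rating."""
    return dc_load_w / psu_efficiency(dc_load_w / psu_rating_w)

typical_dc_load = 250.0                    # watts drawn by the server's components
for rating in (1000.0, 400.0):             # oversized supply vs. right-sized supply
    print(f"{rating:.0f} W supply -> {wall_power(typical_dc_load, rating):.0f} W at the wall")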

4. Energy Savings Strategies
  1. Liquid Cooling - A number of vendors offer solutions that deliver cold water cooling either through coils fitted to the rear doors of computer cabinets or through cold plates attached directly to the CPU chips. Direct chip attachment has the advantage of reducing the case temperature so much that the CPUs can be clocked at a very high rate, much higher than that which can be supported with air cooling. While it might seem that water and electronics don’t mix well, experienced data center managers know that water is the main medium that is used in Computer Room Air Conditioners (CRAC) units to cool the raised floor area. Bringing water closer to the CPU core can improve the efficiency of heat transfer by as much as 4000 times.
    • One basic approach to get cold water closer to the case or heat-sinks of the hottest components (generally the CPUs) uses cold plates physically bolted to the CPUs with centrally delivered chilled water channeled through them. This removes heat efficiently but is mainly targeted at getting the case temperature down so that the chips can be over-clocked reliably delivering more performance for the same basic silicon.
    • The second approach uses cold water delivery to coils in a rear door. IBM’s Rear Door Heat eXchanger is 4 inches thick and weighs in at a hefty 70 lbs. IBM claims that the door can absorb as much as 50% of the heat coming from the server rack.
  2. Efficient Processors - For a price premium, processor manufacturers provide lower-voltage versions of their processors that consume on average 30 Watts less than standard processors. Independent research studies show these lower-power processors deliver the same performance as higher-power models.
  3. Efficient Power Supplies - Many of the server power supplies in use today are operating at efficiencies below what is currently available. The EPA estimated the average efficiency of installed server power supplies at 72 percent in 2005. Best-in-class power supplies are available today that deliver efficiency of 90 percent. As with other data center systems, server power supply efficiency varies depending on load. Some power supplies perform better at partial loads than others and this is particularly important in dual-corded devices where power supply utilization can average less than 30 percent.

    Google saves $30 a year in energy costs per server just by joining the battery to the server, instead of using a centralized UPS system.

  4. Power Management Software - Data centers are sized for peak conditions that may rarely exist. In a typical business data center, daily demand progressively increases from about 5 a.m. to 11 a.m. and then begins to drop again at 5 p.m. Server power consumption remains relatively high as server load decreases: in idle mode, most servers consume between 70 and 85 percent of full operational power. Consequently, a facility operating at just 20 percent capacity may use 80 percent of the energy of the same facility operating at 100 percent capacity. Server processors have built-in power management features that can reduce power when the processor is idle. Too often these features are disabled because of concerns regarding response time; however, this decision may need to be reevaluated in light of the significant savings this technology can enable.
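
    A minimal sketch of that idle-power arithmetic. The peak draw, idle fractions and server count are illustrative assumptions.

    def server_power(utilization, peak_w=225.0, idle_fraction=0.8):
        """Linear power model between idle draw and peak draw."""
        idle_w = idle_fraction * peak_w
        return idle_w + (peak_w - idle_w) * utilization

    servers = 1000
    for util in (0.2, 1.0):
        total_kw = servers * server_power(util) / 1000.0
        print(f"{util:.0%} utilization -> {total_kw:.0f} kW for {servers} servers")
    # With aggressive power management (deeper idle states) the idle fraction drops:
    managed_kw = servers * server_power(0.2, idle_fraction=0.4) / 1000.0
    print(f"20% utilization with power management -> {managed_kw:.0f} kW")
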
  5. Blade Servers - Many organizations have implemented blade servers to meet processing requirements and improve server management. While the move to blade servers is typically not driven by energy considerations, blade servers can play a role in energy consumption. Blade servers consume about 10 percent less power than equivalent rack mount servers because multiple servers share common power supplies, cooling fans and other components.
  6. Server Virtualization - As server technologies are optimized, virtualization is increasingly being deployed to increase server utilization and reduce the number of servers required.
  7. Cooling Best Practices - Most data centers have implemented some best practices, such as the hot-aisle/cold-aisle rack arrangement. Potential exists in sealing gaps in floors, using blanking panels in open spaces in racks, and avoiding mixing of hot and cold air. Temperatures in the cold aisle may be able to be raised if current temperatures are below 68° F. Chilled water temperatures can often be raised from 45° F to 50° F.
  8. 415V AC Power Distribution - In most data centers, the UPS facility provides power at 480V, which is then stepped down via a transformer, with accompanying losses, to 208V in the power distribution system. These stepdown losses can be eliminated by converting UPS output power to 415V. The 415V three-phase input provides 240V single-phase, line-to-neutral input directly to the server. This higher voltage not only eliminates stepdown losses but also enables an increase in server power supply efficiency. Servers and other IT equipment can handle 240V AC input without any issues.
  9. Variable Capacity Cooling - Data center systems are sized to handle peak loads, which rarely exist. Consequently, operating efficiency at full load is often not a good indication of actual operating efficiency. Newer technologies, such as Digital Scroll compressors and variable frequency drives in computer room air conditioners (CRACs), allow high efficiencies to be maintained at partial loads. Digital scroll compressors allow the capacity of room air conditioners to be matched exactly to room conditions without turning compressors on and off. Typically, CRAC fans run at a constant speed and deliver a constant volume of air flow. Converting these fans to variable frequency drive fans allows fan speed and power draw to be reduced as load decreases. Fan power is directly proportional to the cube of fan rpm and a 20 percent reduction in fan speed provides almost 50 percent savings in fan power consumption. These drives are available in retrofit kits that make it easy to upgrade existing CRACs with a payback of less than one year.
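
    A quick check of the fan affinity law cited above. The 7.5 kW rated fan motor is an assumed example.

    def fan_power_kw(speed_fraction, rated_kw=7.5):
        """Fan power at a fraction of rated speed (affinity law: power scales with rpm cubed)."""
        return rated_kw * speed_fraction ** 3

    rated = 7.5
    for speed in (1.0, 0.9, 0.8):
        p = fan_power_kw(speed, rated)
        print(f"{speed:.0%} speed -> {p:.2f} kW ({100 * (1 - p / rated):.0f}% fan power savings)")
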
  10. High Density Supplemental Cooling - Traditional room-cooling systems have proven very effective at maintaining a safe, controlled environment for IT equipment. However, optimizing data center energy efficiency requires moving from traditional data center densities (2 to 3 kW per rack) to an environment that can support much higher densities (in excess of 30 kW). This requires implementing an approach to cooling that shifts some of the cooling load from traditional CRAC units to supplemental cooling units. Supplemental cooling units are mounted above or alongside equipment racks and pull hot air directly from the hot aisle and deliver cold air to the cold aisle. Supplemental cooling units can reduce cooling costs by 30 percent compared to traditional approaches to cooling. These savings are achieved because supplemental cooling brings cooling closer to the source of heat, reducing the fan power required to move air. They also use more efficient heat exchangers and deliver only sensible cooling, which is ideal for the dry heat generated by electronic equipment. Refrigerant is delivered to the supplemental cooling modules through an overhead piping system, which, once installed, allows cooling modules to be easily added or relocated as the environment changes.
  11. Monitoring and Optimization - One of the consequences of rising equipment densities has been increased diversity within the data center. Rack densities are rarely uniform across a facility, and this can create cooling inefficiencies if monitoring and optimization are not implemented. Room cooling units on one side of a facility may be humidifying the environment based on local conditions while units on the opposite side of the facility are dehumidifying. Cooling control systems can monitor conditions across the data center and coordinate the activities of multiple units so they work together rather than against one another.
  12. Consolidated Data Storage – Move from direct attached storage to network attached storage. Also, faster disks consume more power, so it is worthwhile to reorganize data so that less frequently used data sits on slower archival drives.
  13. Economizers - Economizers allow outside air to be used to support data center cooling during colder months, creating opportunities for free cooling. With today’s high-density computing environment, economizers can be cost effective in many more locations than might be expected.
  14. Monitor Generation Losses - Monitor and reduce parasitic losses from generators, exterior lighting and perimeter access control. For a 1 MW load, generator losses of 20 kW to 50 kW have been measured.
  15. Direct Current - Every power conversion (AC-DC, DC-AC, AC-AC) loses some energy and creates heat. Computer equipment runs on direct current, and solar panels generate it, so DC distribution can avoid some of these conversions.


Using the model of a 5,000-square-foot data center consuming 1127 kW of power, the above actions work together to produce a 585 kW reduction in energy use.


5. Benefits
  • Reduce Data Center Capital Expense - Cutting wasted watts allows data centers to expand IT capacity within their existing walls and avoid building or buying new data center space.
  • Liquid Cooling Benefits
    • No refrigeration needed: even in the hottest climates, free air cooling could be used, since input liquid temperatures as high as 130°F are acceptable
    • Waste heat could be delivered at useful temperatures like 160°F
    • Pump energy could be minimized enabling PUE levels of 1.05 or better to be achieved
    • Data Centers could be silent as there would be no need for fans
    • New equipment could be thermally neutral, not adding any extra heat load to the site
    • No humidity problem, no humidifiers
    • No need for a raised floor
    • Massive improvement in reliability due to thermal stability
  • Vertical Cooling Benefits - Turning the hot and cold aisles into horizontal layers (cold at the bottom of the rack, hot at the top) has a number of significant efficiency benefits:
    • Better CRAC unit efficiency
    • Lower air handling energy (larger more efficient fan units)
    • Significantly better Power Usage Effectiveness (PUE)
    • More cores per kW of electricity
    • The front and back of the servers (blades) are unencumbered by the need for cooling fans and slots, freeing space for connectors and indicators
    • Because there is no front or back, servers (blades) can be fitted into both sides, doubling the number of cores per U of rack space


6. Risks/Issues
  • Consumer Backlash - In April 2011 Greenpeace called out a number of top cloud computing companies that have fast-growing electricity needs for their lack of transparency and bad energy choices. The report also cites the positive contributions of the cloud and lists recommendations for how IT companies can green their act.
    Source: Greenpeace April 2011

  • Utilization - Data center servers are generally utilized at an average rate of about 10%, yet server power draw is nearly flat across load: a server may draw 225W when 100% utilized and drop only to 200W when idle. There is an opportunity to power servers down more aggressively, and virtualization can reduce the number of servers and increase utilization.

  • Power Availability - According to a Fall 2007 Survey of the Data Center Users Group (DCUG), an influential group of data center managers, power limitations were cited as the primary factor limiting growth by 46 percent of respondents, more than any other factor.

  • Misaligned Incentives - Except for the largest utility sized data centers, IT managers rarely have budgetary responsibility for facilities and energy use.

  • Critical Systems – Data center availability can be critical to the organization's health and trumps energy savings if there is a problem.

  • Retrofitting Legacy Data Centers - How do we get more cooling capacity from sites built a decade ago for low power density applications? One of the biggest problems is that everyone is trying to work against a basic law of physics: hot air rises and cold air falls. A conventional data center is designed to support servers that pull air in from the front and blow it out the back of the cabinet, so common sense tells us that computers at the top of the cabinet don't get as much cold air as computers at the bottom.

  • Inaccurate Server Specs - Many server makers tend to publish specifications for their servers that overstate how much power they need at a maximum power draw. There’s a reason for this: Server makers don’t want data center operators to try to run too many servers off one rack or power supply, only to see a surge in power use blow circuits or damage equipment. That’s how data center managers get fired. But overestimating energy consumption for servers also means data centers are often under-utilizing their available power per rack, row or section of the data center.

  • Over Cooling - According to a server expert at Intel, most data center managers keep their facilities much too cold -- as much as 15 percent too cold. In an article by Rik Myslewski published in The Register, Dylan Larson, Intel's director of server platform technology initiatives, explained that keeping data centers in the low 70s and high 60s leads to a significant amount of excess cooling, and wasted energy. The ideal temperature, per Larson as well as ASHRAE, is a balmy 80 degrees.
    Google suggests running data centers at hotter temperatures, like 80 degrees.

  • Immature Computing Efficiency Metrics – There is a need to define universally accepted metrics for processor, server and data center efficiency. There have been tremendous technology advances in server processors in the last decade. Until 2005, higher processor performance was linked with higher clock speeds and hotter chips consuming more power. Recent advances in multi-core technology have driven performance increases by using more computing cores operating at lower clock speeds, which reduces power consumption. Today processor manufacturers offer a range of server processors, and a customer needs to select the right processor for a given application. What is lacking is an easy-to-understand and easy-to-use measure, such as the miles-per-gallon automotive fuel efficiency ratings, that can help buyers select the ideal processor for a given load. The performance-per-Watt metric is evolving gradually, with SPEC scores being used as the server performance measure, but more work is needed. The same philosophy could be applied at the facility level: an industry standard of data center efficiency that measures performance per Watt of energy used would be extremely beneficial in measuring the progress of data center optimization efforts. IT management needs to work with IT equipment and infrastructure manufacturers to develop the miles-per-gallon equivalent for both systems and facilities.

  • Inefficiency - Legacy equipment is inefficient. Infrastructure is typically oversized for much of its life because power requirements are overstated.
    • Multiple Power Conversion - Each time power is converted between AC and DC, some power is converted to heat, which must then be removed. The efficiency of UPSs, transformers and PDUs (with transformers) varies.

  • Hot Aisle/Cold Aisle Issues
    • Fire Codes - When isolating those air streams, data centers can leave themselves vulnerable to violating fire codes that require detection and suppression devices throughout the room. If the hot aisle exceeds 110 degrees, you could actually exceed National Electrical Code standards. And if you have plastic sheets over your racks and don't have sprinklers in the contained area, how can a potential fire be suppressed?
    • Employee Comfort - It isn’t comfortable for technicians to work on equipment in very hot conditions

  • Liquid Cooling Issues
    • Redundancy is much more problematic and expensive with liquid than with air. An extra CRAC or two in the data center can serve as backup for a fairly wide expanse of racks. How do you get cooling redundancy for a 20 kW liquid-cooled rack? Not impossible, but very expensive.
    • Cost of equipment is significantly higher, at least today. Rather than a single CRAC servicing 20-30 racks, liquid solutions are almost always one per rack. Even with densification of the racks, the costs of liquid cooling equipment inside the data center are many times that of air.
    • Space Issues - How many data centers really have a “space” problem? With the 6-10 kW racks of the typical data center replacing the traditional 1-3 kW racks, very few sites run out of space before they are up against serious power and plant cooling constraints. 6-10 kW is easily handled with traditional cooling solutions and proper airflow design.


7. Success Criteria
  1. Measure PUE - Know your data center's efficiency by measuring energy consumption and monitoring PUE frequently.
  2. More Sophisticated Power Management - While enabling power management features provides tremendous savings, IT management often prefers to stay away from this technology as the impact on availability is not clearly established. As more tools become available to manage power management features, and data is available to ensure that availability is not impacted, we should see this technology gain market acceptance. More sophisticated controls that would allow these features to be enabled only during periods of low utilization, or turned off when critical applications are being processed, would eliminate much of the resistance to using power management.
  3. Matching Power Supply Capacity to Server Configuration - Server manufacturers tend to oversize power supplies to accommodate the maximum configuration of a particular server. Some users may be willing to pay an efficiency penalty for the flexibility to more easily upgrade, but many would prefer a choice between a power supply sized for a standard configuration and one sized for maximum configuration. Server manufacturers should consider making these options available and users need to be educated about the impact power supply size has on energy consumption.
  4. Design for High Density - A perception persists that high-density environments are more expensive than simply spreading the load over a larger space. High density environments employing blade and virtualized servers are actually economical as they drive down energy costs and remove constraints to growth, often delaying or eliminating the need to build new facilities.
  5. Integrate Measurement and Control - Data that can be easily collected from IT systems and the racks that support them has yet to be effectively integrated with support systems controls. This level of integration would allow IT systems, applications and support systems to be more effectively managed based on actual conditions at the IT equipment level.
  6. Location, Location, Location - Ideally a data center ought to be located in a cold place with plenty of electrical power and close to a consumer (market garden, manufacturing process, swimming pool complex) that can use the warm water or air that is a byproduct of operations. It should not be close to incineration plants or other industrial processes that expel foul contamination or dust into the air. Trees that give off sap are also best kept a reasonable distance away. Being close to a source of cold water like a large lake, or a fast flowing river can make cooling much less costly. Being close to multiple sources of high capacity network connections is also pretty essential.
  7. Manage Air Flow - Good air flow management is a fundamental to efficient data center operation. Start with minimizing hot and cold air mixing and eliminate hot spots.
  8. Adjust the Thermostat - Raising the cold aisle temperature will minimize chiller energy use. Don't try to run at 70F in the cold aisle, try to run at 80F; virtually all equipment manufacturers allow this.
  9. Use Free Cooling - Water or air-side economizers can greatly improve energy efficiency.
  10. Optimize Power Distribution - Whenever possible use high-efficiency transformers and UPS systems. 415V power distribution is used commonly in Europe, but UPS systems that easily support this architecture are not readily available in the United States. Manufacturers of critical power equipment should provide the 415V output as an option on UPS systems and can do more to educate their customers regarding high-voltage power distribution.
  11. Buy Efficient Servers - Specify high-efficiency servers and data storage systems. The Climate Savers Computing Initiative offers resources to identify power-efficient servers.
  12. Improved Sleep Mode - Engineers must architect networks that wake up and go to sleep faster. Network designers must challenge the “always on” assumption for desktops and appliances. Networks will require significant improvements in scheduling and forecasting of work to allow more machines to go to sleep at any given moment.

8. Companies
  1. Cirrascale, Poway, CA - Maintaining hot and cold air in separate vertical aisles between racks is hard to achieve. Cirrascale has a patented technology that uses bottom-to-top cooling (Vertical Cooling) using specialized racks or containers. This turns the hot aisle/cold aisle concept through 90 degrees, and the result is a more natural cold layer at the bottom of the rack with a hot layer above, so cold air and hot air mixing is reduced. Fan trays throughout the infrastructure of the rack push the cold air from the bottom of the rack out the top.

  2. Emerson Network Power - Liebert Global HQ, Columbus, OH - Supplies cold-aisle containment solutions. The portfolio includes high-performance cooling systems and thermal management for larger installations such as telecom switching centers, Internet data centers and computer rooms.

  3. Green Grid, Beaverton, OR - A global consortium dedicated to advancing energy efficiency in data centers and business computing ecosystems. It was founded in February 2007 by several key companies in the industry – AMD, APC, Dell, HP, IBM, Intel, Microsoft, Sun Microsystems and VMware. The Green Grid has since grown to hundreds of members, including end users and government organizations, all focused on improving data center efficiency.

  4. Google - Publishes data about the efficiency of its data centers on its sustainable computing website.

  5. IBM - Says that removing excess heat from data centers is as much as 4,000 times more efficient via water than via air. Its Power 575® supercomputer, introduced in 2008 and equipped with IBM's latest POWER6® microprocessor, uses water-chilled copper plates located above each microprocessor to remove heat from the electronics. The IBM lab in Zurich has developed a new cooling technology that attaches small water pipes to the surface of each computer chip in a server: water is piped within microns of the chip to cool it, then the waste water exits hot enough to make a cup of ramen, heat a building, or keep a swimming pool warm. IBM says the new cooling system will reduce the carbon footprint of servers by 85 percent and their energy use by 40 percent.
    IBM Hydrocluster -
  6. EPA - The Role of Distributed Generation and Combined Heat and Power (CHP) Systems in Data Centers

  7. Modius, San Francisco, CA - A startup that integrates a host of facility-side systems in a single database and automation platform. It has facility partnerships under way with data center power distribution company STARLINE, which adds a store of highly accurate power data to Modius's per-server calculations. That technology could be important when some servers measure their own power use inaccurately, or not at all.

  8. Power Assure, Santa Clara, CA - Provides visibility, intelligence, and dynamic automation to help CIOs, IT directors, and facilities managers optimize efficiency, service levels, and power consumption within and across data centers. Developed PAR4. The company is privately held with funding from ABB, Draper Fisher Jurvetson, Good Energies, Point Judith Capital, and a grant from the Department of Energy. Power Assure partners include UL, Cisco, ABB, Intel, Dell, IBM, Raritan, and VMware.

  9. Sentilla, Redwood City, CA - Their flagship product Sentilla Energy Manager is a software-only approach and patent-pending virtual metering that analyzes power usage directly and tracks requirements, performance and capacity of every piece of equipment. Sentilla has begun to graduate from analyzing data centers for energy efficiency to analyzing them for overall computing efficiency and effectiveness. In 2010, the company estimated that the top 40 online retailers spend an estimated $110 million more on energy than they should in preparing for Cyber Monday, the first workday after Thanksgiving that's been enshrined as the start of the online holiday shopping season. The excess power comes from servers churning in idle and untracked assets waiting around for the big shopping bump.

    In August 2011, Sentilla raised $15 million in a third round of funding. SingTel Innov8, the venture capital arm of the Singapore telecom carrier SingTel, joined existing investors and now owns 23.4 percent of the company.


  10. Synapsense - Folsom, CA - A startup launched by former Intel exec Peter Van Deventer and UC Davis computer science professor Raju Pandey that makes wireless sensor technology and software to monitor and reduce power usage and cooling in data centers. Obtained a $5 million round of funding in 2010 from GE, Emerald Technology Ventures, Sequoia Capital, Robert Bosch Venture Capital, American River Ventures, Nth Power and DFJ Frontier.

  11. TrendPoint Systems, San Ramon, CA - Provides web-based solutions for remote data center monitoring. Its EnviroCube product monitors the power going into various data center equipment and the air conditioning system. The data allows it to determine how much heat should be produced and where it will come out; it then cross-checks this against data on the ambient environment to determine cooling efficiency or gaps in a cooling strategy.

  12. Verdiem - Seattle, WA - Provides enterprise software solutions to global businesses and individuals that help reduce the energy consumption of PC networks. Verdiem received $4.7 million from Kleiner Perkins and Microsoft for PC energy management.

    Verdiem's SURVEYOR allows the central administration of power management settings for networked PCs. Intelligent policies maximize energy savings by placing machines into lower power states without interfering with end-user productivity, desktop maintenance or upgrades. Verdiem's consumer product, Edison, is available for free download.

With the Cascade Effect, a 1 Watt saving at the server component level creates a reduction in facility energy consumption of approximately 2.84 Watts.
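
A minimal sketch of how such a cascade multiplier arises. The stage efficiencies and cooling overhead below are assumed for illustration; the Energy Logic white paper listed under Links derives the 2.84 figure from its own measured chain.

# Each watt consumed by a component must be delivered through a chain of lossy stages,
# and the resulting heat must then be removed by the cooling plant.
STAGE_EFFICIENCIES = {
    "DC-DC conversion": 0.85,
    "AC-DC power supply": 0.90,
    "power distribution": 0.96,
    "UPS": 0.91,
    "switchgear/transformer": 0.98,
}
COOLING_OVERHEAD = 0.86   # watts of cooling and auxiliary load per watt delivered to IT

def facility_watts_per_component_watt():
    delivered = 1.0
    for efficiency in STAGE_EFFICIENCIES.values():
        delivered /= efficiency        # each stage must supply more than it passes on
    return delivered * (1.0 + COOLING_OVERHEAD)

print(f"1 W saved at the component level saves about "
      f"{facility_watts_per_component_watt():.2f} W at the facility level")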


9. Links
  1. Facebook is sharing much of what it has learned about making data centers more efficient through its Open Compute Project. As a result, Facebook's Prineville, Oregon data center is now one of the most efficient in the world:
    • Facebook’s energy consumption per unit of computing power has declined by 38% 
    • Their Prineville, Ore., data center, which opened in April 2011, had a Power Usage Effectiveness (PUE) of 1.08 for the second quarter of 2011, compared to 1.07 in the first quarter
    • For the first half of the year, this means 93% of the energy from the grid makes it into every Open Compute server. This PUE is much lower than the industry standard of 1.5 
  2. Facebook has also removed centralized chillers, eliminated traditional inline UPS systems, and removed a 480V-to-208V transformation. Ethernet-powered LED lighting and passive cooling infrastructure reduce the energy spent on running the facility.
  3. DOE - Data Center Energy Efficiency
  4. Information Week White Paper - Energy Logic: Reducing Data Center Energy Consumption by Creating Savings that Cascade Across Systems
  5. ASHRAE has published several excellent papers on cooling best practices.
  6. The Hot Aisle Blog – Cooling Articles
  7. The Green Grid - The Green Grid is a global consortium of IT companies and professionals seeking to improve energy efficiency in data centers and business computing ecosystems around the globe. The organization seeks to unite global industry efforts to standardize on a common set of metrics, processes, methods and new technologies to further its common goals. White papers on metrics
  8. DOE Data Center Website: Sign up to stay up to date on new developments
  9. Lawrence Berkeley National Laboratory (LBNL) Data Center Energy Efficiency
    Design Guides developed based on best practices Web based training
  10. ASHRAE Data Center technical guidebooks
  11. Energy Star® Server and Data Center Efficiency Program
  12. Uptime Institute - White Papers
