
Saturday, September 26, 2015

Revenue Decoupling

By breaking the link between the utility's sales and profits, decoupling creates an incentive for utilities to sell less energy and focus on energy efficiency.

Navigate this Report
Back to Markets and Pricing Index
1. Background
2. Acronyms/Definitions
3. Business Case
4. Benefits
5. Risks/Issues
6. Next Steps
7. Links

Decoupling is a major reason California's per capita energy consumption has stayed flat since the 1970s, while per capita consumption for the country as a whole has increased by about 50%.

1. Background
  • The revenues and profits of most utilities are still based on how much electricity they sell, rather than on the quality, reliability and efficiency of service they provide. As a result, utilities have had little incentive to meet consumer quality expectations, improve reliability, or create new kinds of innovative products and services like those found in all other service sectors.

  • Most investor-owned utilities create shareholder value every time they make capital investments, regardless of the value of that investment to society or in the market. In other words: utilities build it, rate base it, and get a regulated return on it. Regulators authorized this system on the assumption that expansion of the grid directly and indirectly powers the economy and that rates would remain roughly aligned with sales.

  • Power companies have been reluctant to invest in technologies that will reduce consumption of the product they sell, even if there are other benefits. One way to realign the utilities' interests with the public interest is through a process called “decoupling,” which breaks the direct relationship between electricity sales and profits. Decoupling has been successfully employed in California. Largely due to these incentives, energy use per person has remained roughly flat over the past 30 years in California, while it has increased by roughly 50% for the rest of America.

  • Historically, utility regulators have set electric and gas rates based on projected sales volume. Since this also sets a utility’s revenues, it is a disincentive for them to promote efficiency or to make it easy for customers to install on-site generation. “Decoupling” breaks the linkage between the amount of electricity or gas a utility sells and its ability to generate profits. This approach has the potential to enable utilities to remain profitable while investing in improved efficiency and reliability. Some states let utilities keep a small part of what they save for their customers as extra profit. This fully aligns utilities with customers’ incentives and can strongly motivate utilities to help customers use electricity more efficiently.

2. Acronyms/Definitions
  1. Federal Utilities - Federal electric utilities represent less than 1% of all electric utilities, provide approximately 4% of generation, and account for about 1% of total sales to ultimate consumers. Federal electric utility generation is primarily sold at wholesale to municipal and cooperative electric utilities. Federal power is sold not for profit, but to recover the costs of operations and repay the Treasury for funds borrowed to construct generation and transmission facilities. While the Federal utilities are not subject to rate regulation, they must submit their rates to the FERC for purposes of demonstrating that they are at a level sufficient to repay debt owed to the Federal government. Federal electric utilities operate approximately 200 power plants. Most of the Federal power plants are hydroelectric projects designed for flood control and irrigation and operated pursuant to statutory obligations to supply wholesale power to publicly-owned utilities and electric cooperatives. The 9 Federal electric utilities in the United States are part of several agencies in the U.S. Government:
    1. Army Corps of Engineers
    2. Bureau of Indian Affairs
    3. Bureau of Reclamation
    4. International Boundary and Water Commission in the Department of State
    5. Power Marketing Administrations in the Department of Energy (Bonneville, Southeastern, Southwestern, and Western)
    6. TVA - Tennessee Valley Authority, the largest Federal producer

  2. Decoupling - Disassociation of a utility’s profits from its energy sales. During a normal rate case, one determines how much revenue a utility requires to cover its expenses and sets an electric rate that is expected to produce that level of revenue. Later, perhaps at the end of a year, regulators return to see whether that revenue has in fact been generated or whether, due to fluctuations in sales from the expected level, some greater or lesser amount has been realized. To the extent that the utility has received too little (or too much), the error is corrected through a surcharge (or rebate).

  3. DER – Distributed Energy Resource - Including energy efficiency, load management and on-site electricity generation. Customers who generate their own electricity and offset their consumption at retail electricity rates pay lower electricity bills and reduce the amount of electricity revenues collected by the utility. Reduced revenues have a direct impact on all utilities’ recovery of costs associated with each customer, and for regulated utilities, profits.

  4. DSM - Demand Side Management - the modification of consumer demand for energy through various methods such as financial incentives and education. Usually, the goal of demand side management is to encourage the consumer to use less energy during peak hours, or to move the time of energy use to off-peak times such as nighttime and weekends.

  5. LRA - Lost Revenue Adjustment - An alternative to decoupling - One calculates how many dollars a utility has lost due to its DSM programs and increases revenues by that amount. For example, suppose a utility has a program to replace existing electric motors with more efficient ones, and it estimates that, as a result, its electricity sales are 100 million kWh lower. If each kilowatt-hour produced, say, two cents in revenue net of fuel and any other variable costs, then the utility would lose $2 million in net revenue to this program, which would be recovered under a lost-base-revenue adjustment. (A short numerical sketch of this calculation appears after this definitions list.)

  6. FERC - The Federal Energy Regulatory Commission - United States federal agency with jurisdiction over interstate electricity sales and wholesale electric rates.

  7. Inflation and Productivity Decoupling - The target revenues are adjusted between rates, based on assumed or known changes in inflation and company productivity. Inflation is often based on a recognized government-published index, such as the consumer price index. Productivity is more often litigated in the rate case and serves to offset inflation over time.

  8. IOU – Investor Owned Utility - Privately owned, IOUs account for a small share of the total number of electric utilities but approximately 42% of generation and 66% of sales in the United States. Like all private businesses, an IOU’s fundamental objective is to produce a profit for its investors.
    IOUs are granted service monopolies in certain geographic areas and are obliged to serve all consumers. As franchised monopolies, these utilities are regulated and required to charge reasonable prices, to charge comparable prices to similar classifications of consumers, and to give consumers access to services under similar conditions.
    Many IOU’s in states that have adopted retail competition have divested their generation and placed their transmission assets under the operational control of independent system operators (ISOs). These IOUs’ primary function is providing distribution service and serving as the supplier of last resort for those retail customers that have not chosen an alternative retail energy service provider.

  9. Net Margins - A term used in electric cooperative financial statements; equal to revenues in excess of the cost of providing service.

  10. POU - Publicly-Owned Utility - Non-profit government entities organized at either the local or State level. There are 2,009 publicly-owned electric utilities in the United States. They represent about 61% of the number of electric utilities, supply approximately 8% of generation, and account for about 15% of retail sales. They obtain their financing from the sale of general obligation bonds and from revenue bonds secured by proceeds from the sale of electricity. Publicly-owned electric utilities include:
    1. Municipals,
    2. Public utility districts and public power districts
    3. State authorities
    4. Irrigation districts
    5. Joint municipal action agencies

    Municipal utilities were established to provide service to their communities and nearby consumers at cost. Municipal utilities typically return a portion of their net income to consumers in the form of a general funds transfer. Retail rates may be lower than neighboring investor-owned utilities because they are not subject to State and Federal income tax. Municipal utilities, as well as other publicly owned utilities, are able to issue low cost, tax exempt debt to finance construction. Most municipal utilities simply distribute power, although some large ones produce and transmit electricity as well. Public utility districts are concentrated in Nebraska, Washington, Oregon, and California.

  11. PUC - Public Utility Commission – Regulates retail rates for electricity; PUCs operate at the state level.

  12. REV - Reforming the Energy Vision - New York is actively spurring clean energy innovation, bringing new investments into the State and improving consumer choice and affordability. In its role, the PSC is aligning markets and the regulatory landscape with the overarching state policy objectives of giving all customers new opportunities for energy savings, local power generation, and enhanced reliability, in order to provide safe, clean, and affordable electric service.

    The REV initiative will lead to regulatory changes that promote more efficient use of energy, deeper penetration of renewable energy resources such as wind and solar, wider deployment of “distributed” energy resources, such as micro grids, roof-top solar and other on-site power supplies, and storage. It will also promote markets to achieve greater use of advanced energy management products to enhance demand elasticity and efficiencies. These changes, in turn, will empower customers by allowing them more choice in how they manage and consume electric energy.

    The vision put forth in REV is one in which DERs are the first resource chosen, not the last.
  13. RPC - Revenue Per Customer Decoupling - The average revenue per customer for each volumetric rate is computed at the end of the rate case. In subsequent periods, target revenues are derived by multiplying the actual number of customers served by the RPC value. The underlying premise for RPC decoupling is that, between rate cases, a utility’s underlying cost structure is driven primarily by changes in the number of customers served.

  14. Rates – In a decoupled scenario, rates are set to recover the pre-determined revenue requirement, and rate options are usually set to be revenue-neutral. Rates vary by service voltage.

  15. Stranded Assets - Assets that have suffered from unanticipated or premature write-downs, devaluations or conversion to liabilities. In discussions of electric power generation deregulation, the related term stranded costs represents the existing investments in infrastructure for the incumbent utility that may become redundant in a competitive environment. Utilities fear that they'll get stuck with billions of dollars in stranded assets.

  16. Utility Cooperative - Owned by their members, the consumers they serve. There are 883 cooperatives operating in 47 States, generally in rural areas. Cooperative electric utilities represent about 27% of U.S. electric utilities, 10% of sales, and around 4% of generation. Cooperative service territories generally reflect areas that historically were viewed as unprofitable to serve by investor-owned utilities because of the relatively low number of customers per line-mile.
    Cooperatives are required to provide electric service to their members at cost, as that term is defined by the IRS. Electric cooperatives set rates similar to municipal utilities. However, while municipal utilities may return a portion of net income to the general fund of the local government, the net margins earned by cooperatives are considered a contribution of equity by the members that are required to be returned to the members.
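
To make the lost-revenue arithmetic concrete, here is a minimal sketch of the motor-replacement example from the LRA definition above. The savings estimate and the two-cent net revenue figure are the report's illustrative numbers, not data from any actual utility.

```python
# Illustrative only: the lost-base-revenue arithmetic from the LRA definition above.
# The kWh savings and the net revenue per kWh are the report's example figures.

def lost_revenue_adjustment(kwh_saved: float, net_revenue_per_kwh: float) -> float:
    """Revenue a utility would recover under a lost-base-revenue adjustment."""
    return kwh_saved * net_revenue_per_kwh

if __name__ == "__main__":
    kwh_saved = 100_000_000        # 100 million kWh avoided by the motor-replacement program
    net_revenue_per_kwh = 0.02     # two cents per kWh, net of fuel and other variable costs
    lra = lost_revenue_adjustment(kwh_saved, net_revenue_per_kwh)
    print(f"Lost-base-revenue adjustment: ${lra:,.0f}")   # -> $2,000,000
```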

US Electric Sales by Class of Ownership

3. Business Case
  • As a Smart Grid enables more conservation and distributed generation, regulators will have to address the problem of how to provide appropriate rewards to utilities for actions that will reduce total electricity sales. Decoupling makes it cost effective for utilities to invest in smart grid technologies that will reduce consumption of the product they sell.

  • Under decoupling, utilities submit their revenue requirements and estimated sales to regulators. The state PUC sets the rates and regularly applies adjustments to ensure that utilities collect no more and no less than is necessary to run the business and provide a fair return to investors. Any excess revenue gets credited back to customers; any shortfall gets recovered later from customers. (A numerical sketch of this true-up follows the Decoupling v. Lost Revenue comparison below.)
  • First-mover states such as New York, Hawaii, and California are tackling this problem by investigating how to align utility investments with a more efficient grid that relies less on peak power from centralized assets and more on integrating DERs into enhanced planning, grid operations, and market operations.

  • Decoupling v. Lost Revenue Adjustment

    Decoupling:
    • Removes the sales incentive and all DSM disincentives
    • Does not require sophisticated measurement and/or estimation
    • Utility does not profit from DSM which does not actually produce savings
    • May reduce controversy in subsequent utility rate cases
    • Removes the utility disincentive to support public policies which increase efficiency, e.g. rate design, appliance efficiency standards, customer-initiated conservation
    • Reduces volatility of utility revenue resulting from many causes

    Lost Revenue Adjustment:
    • Removes only some DSM disincentives
    • Requires sophisticated measurement and/or estimation
    • Utility may profit from DSM which does not actually produce savings
    • No direct effect on subsequent rate cases
    • Continues the utility disincentive to pursue activities or support public policies which increase efficiency
    • Reduces volatility of utility earnings only from specified DSM projects
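
The following is a minimal sketch, using entirely hypothetical numbers, of the decoupling true-up described above: a target revenue is derived from a revenue-per-customer (RPC) value set in the rate case, actual collections are compared against it, and the difference is returned or recovered through a rate adjustment. Real mechanisms vary by state and utility.

```python
# Minimal sketch of a decoupling true-up. All figures are hypothetical.

def true_up(allowed_revenue: float, actual_revenue: float, forecast_sales_kwh: float) -> float:
    """Balancing adjustment in $/kWh for the next period's rates.

    Positive -> surcharge (utility under-collected); negative -> rebate.
    """
    shortfall = allowed_revenue - actual_revenue
    return shortfall / forecast_sales_kwh

# Target revenue set in the rate case, here via revenue-per-customer (RPC) decoupling:
rpc = 1_200.0                  # hypothetical allowed $ per customer per year
customers = 500_000            # actual number of customers served this year
allowed = rpc * customers      # $600 million target revenue

actual = 585_000_000           # hypothetical billed revenue after efficiency gains
next_year_sales = 5_000_000_000    # forecast kWh sales for the adjustment period

adj = true_up(allowed, actual, next_year_sales)
print(f"Balancing adjustment: {adj * 100:.3f} cents/kWh "
      f"({'surcharge' if adj > 0 else 'rebate'})")
# An under-collection of $15 million spread over 5 billion kWh -> 0.3 cents/kWh surcharge.
```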

    4. Benefits
    • Removes the disincentive for utilities to encourage energy conservation, since their revenues are not tied to the amount of energy sold.

    • Provides an incentive for utilities to focus on effective energy efficiency programs and invest in activities that reduce load and reduce stress on the grid.

    • Aligns shareholder and customer interests to provide for more economically and environmentally efficient resource decisions.
    • Liberal Economics - markets will do a better job of determining value than central planners, which means a cheaper, more efficient power system with less waste and redundancy.

    5. Risks/Issues
    • Protects Utilities from Externalities - RPC decoupling may inadvertently compensate the utility for revenue changes driven by factors other than the energy conservation and demand response programs regulators intend to encourage. For example, reductions in consumption caused by an economic downturn would be compensated for by a decoupling mechanism. Year-to-year reductions in electric sales can be the result of factors other than conservation, notably changes in weather and economic conditions. As a result, adjusting electric rates to make up for reduced sales can shift risk from the companies to ratepayers and provide the companies with a windfall.

    • Negotiated Return Discourages Innovation - Many of today’s utility business models are based upon the utility earning a negotiated return on prudent capital investments. It is not surprising, therefore, that the utilities responsible for making prudent investments focus on minimizing risk. Even with decoupling, utilities are often slow to adopt new technologies that have not been extensively proven outside of a laboratory. In general, the existing utility business model does not provide economic rewards for cutting-edge utilities.

    • Regulated Monopoly Utilities Distort Markets - utilities have every incentive to begrudge competitors, cling to sunk costs, and use access to regulators to keep the game rigged in their favor. As long as a company with a captive set of customers and state-guaranteed returns is participating in energy-service markets, it will distort those markets.   If they use their pull with regulators to protect themselves, it will only slow down (and raise the cost of) the clean energy transition.

    • Regulatory Model Discourages Innovation - The regulatory model is built for caution. It takes a long time to innovate on technologies that only come in multibillion-dollar increments, and when your returns are guaranteed by law and reliability is your only obligation, you're likely to stick with the tried and true.

    6. Next Steps
    • Expand decoupling beyond the states where it is approved currently: California, Connecticut, District of Columbia, Idaho, Hawaii, Maryland, Massachusetts, Michigan, Minnesota, New York, Oregon, Vermont, and Wisconsin

    • In February 2010, the Hawaii Public Utilities Commission (PUC) approved a new method for setting electric rates designed to encourage a clean energy economy for Hawaii. Under the new “decoupling” method, electric revenues would be de-linked, or “decoupled,” from the amount of electricity (kilowatt-hours) sold.

    • Decoupling pilots are underway in: Minnesota, Oregon, Wisconsin
    • Decoupling is pending in: Delaware, Indiana, New Hampshire, New Jersey, New Mexico, Utah

    7. Links
    1. Vox Energy and Environment - "Reimagining electric utilities for the 21st century," David Roberts, September 11, 2015
    2. Solar Electric Power Association - Report #03-09, "Decoupling Utility Profits from Sales"
    3. Smart Grid News - "Status of Revenue Decoupling for Electric Utilities by State," March 2009
    4. "Electric Rate Decoupling in Other States," Kevin E. McCarthy, Principal Analyst, Connecticut General Assembly, 2009
    5. The Edison Foundation, Institute for Electric Efficiency - state activity report on decoupling

Green Data Centers

Data Centers are booming Information Factories. They consumed 1.5% of US Electricity in 2006 with usage projected to double every five years.

1. Background
2. Acronyms/Definitions
3. Business Case
4. Energy Savings Strategies
5. Benefits
6. Risks/Issues
7. Success Criteria
8. Case Study - Substituting CHP for UPS
9. Companies
10. Links

BT’s Rochdale Data Center uses curtains to separate the hot and cold aisles, thereby reducing the amount of hot air that mixes with cold air from the plenum floor and dramatically increasing cooling efficiency. Each cabinet is also fitted with blanking panels to ensure that the only path for cold air is through a piece of equipment.

  • The explosion of digital content, big data, e-commerce, and Internet traffic is driving the economy, but it is also making data centers one of the fastest-growing consumers of electricity in developed countries, and one of the key drivers in the construction of new power plants.
  • Server farms—also known as data centers—are the enormous housing facilities that make the internet possible. A single Google data center in Oregon consumes as much energy as a city of 200,000 people. That's because servers not only have to be on 24/7, they need to be kept cool 24/7. Up to 50 percent of the power they use is just to keep them from melting down. Overall, the internet is responsible for 2% of global carbon emissions, about the same as the aviation industry. A single data center can consume 30-100 MW of electric power. Data center energy demand may be larger than some small utilities’ capacity.
  • According to the DOE, data centers are responsible for 3 percent of U.S. energy consumption, and growing, and a typical 125,000-SF data center consumes $3 million worth of energy per year. 
  • The Natural Resources Defense Council reports that in 2013, U.S. data centers consumed an estimated 91 billion kilowatt-hours of electricity, equivalent to the annual output of 34 large (500-megawatt) coal-fired power plants. Data center electricity consumption is projected to increase to roughly 140 billion kilowatt-hours annually by 2020, the equivalent annual output of 50 power plants, costing American businesses $13 billion annually in electricity bills and emitting nearly 100 million metric tons of carbon pollution per year.
  • Data centers consume around 2.5 percent of the power in Northern California and the total consumed by data centers in the area has been growing by 15 percent a year.
  • Another impact of higher energy densities is that server hardware is no longer the primary cost component of a data center. The purchase price of a new (1U) server has been exceeded by the capital cost of power and cooling infrastructure to support that server. And this will soon be exceeded by the lifetime energy costs for that server alone. This represents a significant shift in data center economics that seriously challenges conventional cooling strategies.

Analysis of a typical 5000-square-foot data center shows that demand-side computing equipment accounts for 52 percent of energy usage and supply-side systems account for 48 percent.

2. Acronyms/Definitions
  1. 80 Plus - An electric utility-funded incentive program to integrate more energy-efficient power supplies into desktop computers and servers. Participating utilities and energy efficiency organizations across North America have contributed over $5 million of incentives to help the computer industry transition to 80 PLUS certified power supplies

  2. ASE - Air Side Economizer - Can save energy in buildings by using cool outside air as a means of cooling the indoor space. When the enthalpy of the outside air is less than the enthalpy of the recirculated air, conditioning the outside air is more energy efficient than conditioning recirculated air. When the outside air is both sufficiently cool and sufficiently dry (depending on the climate), no additional conditioning of it is needed; this portion of the air-side economizer control scheme is called free cooling.

    Google turns to free cooling, or outside air, whenever possible.
  3. Blade Servers – Stripped down computer servers with a modular design optimized to minimize the use of physical space. Whereas a standard rack-mount server can function with a power cord and network cable, blade servers have many components removed to save space and minimize power consumption. The blade's shared power and cooling means that it does not generate as much heat as traditional servers. Newer blade-enclosure designs feature high-speed, adjustable fans and control logic that tune the cooling to the system's requirements, or even liquid cooling systems. At the same time, the increased density of blade-server configurations can still result in higher overall demands for cooling when racks are populated at over 50% of capacity.
  4. Blanking Panels - Minimize recirculation of hot air. Each cabinet is fitted with blanking panels to ensure that the only path for cold air is through a piece of equipment. Missing cabinets and missing equipment offer an alternative route for airflow that causes hot and cold air to mix, reducing air-handling efficiency.

  5. Carbon Intensity Per Unit of Data. - Content delivery network leader Akamai has been reporting the carbon emissions related to its cloud computing-based services in the metric of “CO2 per megabytes of data delivered.” Establishing this metric enables Akamai to compare cloud computing energy usage across the industry, and Akamai is also in the process of making this metric available on a monthly basis to its customers. Greenpeace has lauded Akamai for its transparency and reporting of this metric, and gave Akamai the highest grade for “transparency,” out of 10 giant Internet players like Google, Apple and Facebook.

  6. CFD - Computational Fluid Dynamics - Used to identify inefficiencies and optimize data center airflow.

    How Google manages air flow.
  7. CRAC - Computer Room Air Conditioner

  8. CSCI - Climate Savers Computing Initiative - An effort to reduce the electric power consumption of PCs in active and inactive states.

  9. CUE - Carbon Usage Effectiveness - Similar to GPUE - Also created by Green Grid, CUE is a ratio of the total carbon emissions due to the energy consumption of the data center (the same one used for the PUE) compared to the energy consumption of the data center’s servers and IT equipment (the same one used for PUE). The metric is expressed in kilograms of carbon dioxide (kgCO2eq) per kilowatt-hour (kWh), and if a data center is 100-percent powered by clean energy, it will have a CUE of zero. The CUE could one day be a particularly important metric if there is a global price on carbon.

  10. DCIE - Data Center Infrastructure Efficiency - The reciprocal of PUE and is expressed as a percentage that improves as it approaches 100%. It is Energy for IT Equipment/Total Energy for the Data Center

  11. Data Deduplication - Whenever you send out a press release or goofy office party photo, that document gets duplicated several times. There is a tremendous amount of promise for reducing power through reducing storage. Saves up to 95% for full back ups; 25% to 55% for most data sets.
  12. Double Conversion UPS – Used in most datacenters, these systems convert incoming power to DC and then back to AC within the UPS. This enables the UPS to generate a clean, consistent waveform for IT equipment and effectively isolates IT equipment from the power source. UPS systems that don’t convert the incoming power (line interactive or passive standby systems) can operate at higher efficiencies because they avoid the losses associated with the conversion process. However, these systems may compromise equipment protection because they do not fully condition incoming power. Care must be taken to ensure reductions in energy consumption are not achieved at the cost of reduced equipment availability.

  13. DPM - Distributed Power Management - Migrates machines off of underutilized hosts in an effort to put the hosts into standby mode to conserve energy.

  14. EPEAT - Electronic Product Environmental Assessment Tool - Created by the Green Electronics Council to assist in the purchase of "green" computing systems. The Council evaluates computing equipment on 28 criteria that measure a product's efficiency and sustainability attributes. On January 24, 2007, President Bush issued Executive Order 13423, which requires all United States Federal agencies to use EPEAT when purchasing computer systems.

  15. GPUE - Similar to CUE - Green Power Usage Effectiveness (GPUE) is a proposed measurement of how much sustainable energy a computer data center uses, its carbon footprint per usable kWh, and how efficiently it uses its power; specifically, how much of the power is actually used by the computing equipment (in contrast to cooling and other overhead). It is an addition to the PUE definition and was first proposed by Greenqloud. GPUE is a way to “weigh” the PUE to better see which data centers are truly green in the sense that they indirectly cause the least amount of CO2 to be emitted by their use of sustainable or unsustainable energy sources.

    GPUE = G × PUE (for inline comparison of data centers), or GPUE = G @ PUE (a display form used for CO2 emission calculations), where G is the weighted sum of the energy sources and their lifecycle kg CO2/kWh.

  16. Hot-Aisle/Cold-Aisle Configuration - The arrangement of rack computer cabinets within the data center so that hot and cold air are separated into separate aisles thereby improving cooling efficiency. Data Centers typically have a raised floor or plenum floor arrangement where cold air is delivered under pressure, causing it to escape from every opening. Typically a certain number of the tiles in the floor are perforated or integrate an opening vent. Hot air rises to the top of the computer room, where it is captured by Computer Room Air Conditioners (CRAC) and chilled to be pumped back under the floor. An efficient data center ensures that as much of the cold air is drawn across hot computer parts as possible and that hot and cold air are not allowed to mix.

  17. Hot-Aisle/Cold-Aisle Containment – A variation on the above configuration that isolates hot and cold air streams so they don’t mix with one another and cause energy inefficiencies.

  18. Hypervisor - Also called virtual machine monitor (VMM) - Allows multiple operating systems to run concurrently on a host computer— a feature called hardware virtualization. The hypervisor presents the guest operating systems with a virtual platform and monitors the execution of the guest operating systems. In that way, multiple operating systems, including multiple instances of the same operating system, can share hardware resources.

  19. PAR4 - A metric developed by startup Power Assure that measures server power consumption in different ways, including monitoring idle power, peak power, total utilization power, and “transactions-per-watt.” Essentially, PAR4 enables servers of different makes, models and generations to be compared to one another in terms of energy efficiency. That type of detailed measurement is mostly lacking for servers today. Power Assure says PAR4 is already gaining acceptance from big players like Intel, Dell and Cisco, which have been working to incorporate PAR4 into their systems
  20. PSU – Power Supply Unit - (not to be confused with a PDU, Power Distribution Unit) Efficiency varies widely. Desktop PSUs are generally 70–75% efficient, dissipating the remaining energy as heat. An industry initiative called 80 PLUS certifies PSUs that are at least 80% efficient; typically these models are drop-in replacements for older, less efficient PSUs of the same form factor. As of July 20, 2007, all new Energy Star 4.0-certified desktop PSUs must be at least 80% efficient.
  21. PUE Ratio - Power Usage Effectiveness - A metric used to determine the energy efficiency of a data center. PUE is determined by dividing the amount of power entering a data center by the power used to run the computer infrastructure within it. PUE is therefore expressed as a ratio, with overall efficiency improving as the quotient decreases toward 1. The Uptime Institute estimates most facilities could achieve a PUE of 1.6 using the most efficient equipment and best practices. PUE was developed by the Green Grid and provides a measure of infrastructure efficiency, but not total facility efficiency. (A short calculation sketch of PUE and the related metrics follows this definitions list.)

    Data centers with legacy cooling infrastructures average a PUE of 2.25, according to the Uptime Institute. With the introduction of next-generation CRACs, AHUs and chiller systems, PUEs can be reduced to 1.25 without sacrificing temperature or humidity control. To put that in a financial context, lowering a PUE from 2.25 to 1.25 can slash spending from $0.44 per ton-hour for cooling to less than $0.05 per ton-hour.

  22. How Google looks at PUE
  23. RH – Relative Humidity - Data Center air should contain the proper amount of water vapor to maximize the availability of computing equipment. Air containing too much or too little water vapor can cause failures. At the outer extremities of RH, we can see condensation forming on equipment, or at the other end, static electricity buildup and discharge.

  24. RSE - Refrigerant Side Economizing - Can reduce refrigeration compressor energy by up to 100 percent. By deploying RSE solutions directly in the return air based on the outside wet-bulb temperature, rather than dry bulb temperature relied upon by ASE, economizing hours can be increased by as much as 50 percent, as compared to airside economizing and typical waterside economizing.

  25. Server Virtualization – A method of partitioning a physical server computer into multiple servers such that each has the appearance and capabilities of running on its own dedicated machine. Each virtual server can run its own full-fledged operating system, and each server can be independently rebooted.

  26. SPEC - Standard Performance Evaluation Corporation - A non-profit corporation formed to establish, maintain and endorse a standardized set of relevant benchmarks that can be applied to the newest generation of high-performance computers. SPEC develops benchmark suites and also reviews and publishes submitted results from its member organizations and other benchmark licensees. The SPEC score is used here as the server performance measure.

  27. TDP – Thermal Design Power - The maximum amount of power the cooling system in a computer is required to dissipate. For example, a laptop's CPU cooling system may be designed for a 20 watt TDP, which means that it can dissipate up to 20 watts of heat without exceeding the maximum junction temperature for the computer chip. In the absence of a true standard measure of processor efficiency comparable to the U.S. Department of Transportation fuel efficiency standard for automobiles, TDP serves as a proxy for server power consumption. The typical TDP of processors in use today is between 80 and 103 Watts (91W average).

  28. Thin Provisioning - A way to cut the power associated with storage. Most corporations reserve way too much storage for their needs. Reducing that number can potentially cut down power going to air conditioners and storage devices.

  29. UPS - Uninterruptible Power Supply - An electrical apparatus that provides emergency power to a load when the input power source, typically the utility mains, fails. A UPS differs from an auxiliary or emergency power system or standby generator in that it will provide instantaneous or near-instantaneous protection from input power interruptions by means of one or more attached batteries and associated electronic circuitry for low power users, and/or by means of diesel generators and flywheels for high power users. The on-battery runtime of most uninterruptible power sources is relatively short—5–15 minutes being typical for smaller units—but sufficient to allow time to bring an auxiliary power source on line, or to properly shut down the protected equipment. Efficiency varies widely.

  30. VDI - Virtual Desktop Infrastructure - (Sometimes Virtual Desktop Interface) - The server computing model enabling desktop virtualization, encompassing the hardware and software systems required to support the virtualized environment

  31. WUE - Water Usage Effectiveness - Also created by the Green Grid, WUE calculates how efficiently a data center is using water. It is a ratio of the annual water usage to how much energy is being consumed by the IT equipment and servers, and is expressed in liters/kilowatt-hour (L/kWh). Like CUE, the ideal value of WUE is zero, meaning no water was used to operate the data center.
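
As a rough illustration of how the facility-level metrics defined in this list fit together, the sketch below computes PUE, DCiE, CUE and WUE from a set of made-up annual meter readings; none of the figures describe a real facility.

```python
# Quick sketch of the facility-level metrics defined above (PUE, DCiE, CUE, WUE).
# The meter readings are made-up annual totals for a hypothetical facility.

total_facility_kwh = 10_000_000    # everything entering the data center
it_equipment_kwh   = 5_500_000     # servers, storage, network gear
co2_kg             = 4_200_000     # emissions attributable to the facility's energy
water_liters       = 8_000_000     # annual water use, mostly for cooling

pue  = total_facility_kwh / it_equipment_kwh    # improves as it approaches 1.0
dcie = it_equipment_kwh / total_facility_kwh    # reciprocal of PUE, improves toward 100%
cue  = co2_kg / it_equipment_kwh                # kg CO2 per IT kWh
wue  = water_liters / it_equipment_kwh          # liters per IT kWh

print(f"PUE  = {pue:.2f}")           # ~1.82 here; legacy sites average ~2.25
print(f"DCiE = {dcie:.0%}")          # ~55%
print(f"CUE  = {cue:.2f} kgCO2/kWh")
print(f"WUE  = {wue:.2f} L/kWh")
```
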
3. Business Case
  • The potential savings from more efficient data centers are enormous. Savings of 20%-40% are typically possible, with aggressive strategies producing better than 50% savings. Paybacks are short - one to three years is common. However, today most centers don't know whether they are performing well or poorly.
  • Facilities operating at high utilization rates throughout a 24-hour day will want to focus initial efforts on sourcing IT equipment with low power processors and high efficiency power supplies.
  • Facilities that experience predictable peaks in activity may achieve the greatest benefit from power management technology.
  • Power supply efficiency can vary significantly depending on load, and power supplies are often sized for a load that exceeds the maximum server configuration. Sizing power supplies closer to actual load is another opportunity to increase efficiency. Notice that the maximum configuration is about 80% of the nameplate rating and the typical configuration is 67% of the nameplate rating.
4. Energy Saving Strategies
  1. Liquid Cooling - A number of vendors offer solutions that deliver cold water cooling either through coils fitted to the rear doors of computer cabinets or through cold plates attached directly to the CPU chips. Direct chip attachment has the advantage of reducing the case temperature so much that the CPUs can be clocked at a very high rate, much higher than that which can be supported with air cooling. While it might seem that water and electronics don’t mix well, experienced data center managers know that water is the main medium that is used in Computer Room Air Conditioners (CRAC) units to cool the raised floor area. Bringing water closer to the CPU core can improve the efficiency of heat transfer by as much as 4000 times.
    • One basic approach to get cold water closer to the case or heat-sinks of the hottest components (generally the CPUs) uses cold plates physically bolted to the CPUs with centrally delivered chilled water channeled through them. This removes heat efficiently but is mainly targeted at getting the case temperature down so that the chips can be over-clocked reliably delivering more performance for the same basic silicon.
    • The second approach uses cold water delivery to coils in a rear door. IBM’s Rear Door Heat eXchanger is 4 inches thick and weighs in at a hefty 70 lbs. IBM claims that the door can absorb as much as 50% of the heat coming from the server rack.
  2. Efficient Processors - For a price premium, processor manufacturers provide lower voltage versions of their processors that consume on average 30 watts less than standard processors. Independent research studies show these lower power processors deliver the same performance as higher power models.
  3. Efficient Power Supplies - Many of the server power supplies in use today are operating at efficiencies below what is currently available. The EPA estimated the average efficiency of installed server power supplies at 72 percent in 2005. Best-in-class power supplies are available today that deliver efficiency of 90 percent. As with other data center systems, server power supply efficiency varies depending on load. Some power supplies perform better at partial loads than others and this is particularly important in dual-corded devices where power supply utilization can average less than 30 percent.

    Google saves $30 a year in energy costs per server just by joining the battery to the server, instead of using a centralized UPS system.
  4. Power Management Software - Data centers are sized for peak conditions that may rarely exist. In a typical business data center, daily demand progressively increases from about 5 a.m. to 11 a.m. and then begins to drop again at 5 p.m. Server power consumption remains relatively high as server load decreases. In idle mode, most servers consume between 70 and 85 percent of full operational power. Consequently, a facility operating at just 20 percent capacity may use 80 percent as much energy as the same facility operating at 100 percent capacity. Server processors have power management features built in that can reduce power when the processor is idle. Too often these features are disabled because of concerns regarding response time; however, this decision may need to be reevaluated in light of the significant savings this technology can enable. (The calculation sketch after this list works through the idle-power and fan-speed arithmetic.)
  5. Blade Servers - Many organizations have implemented blade servers to meet processing requirements and improve server management. While the move to blade servers is typically not driven by energy considerations, blade servers can play a role in energy consumption. Blade servers consume about 10 percent less power than equivalent rack mount servers because multiple servers share common power supplies, cooling fans and other components.
  6. Server Virtualization - As server technologies are optimized, virtualization is increasingly being deployed to increase server utilization and reduce the number of servers required.
  7. Cooling Best Practices - Most data centers have implemented some best practices, such as the hot-aisle/cold-aisle rack arrangement. Potential exists in sealing gaps in floors, using blanking panels in open spaces in racks, and avoiding mixing of hot and cold air. Temperatures in the cold aisle may be able to be raised if current temperatures are below 68° F. Chilled water temperatures can often be raised from 45° F to 50° F.
  8. 415V AC Power Distribution - In most data centers, the UPS facility provides power at 480V, which is then stepped down via a transformer, with accompanying losses, to 208V in the power distribution system. These stepdown losses can be eliminated by converting UPS output power to 415V. The 415V three-phase input provides 240V single-phase, line-to-neutral input directly to the server. This higher voltage not only eliminates stepdown losses but also enables an increase in server power supply efficiency. Servers and other IT equipment can handle 240V AC input without any issues.
  9. Variable Capacity Cooling - Data center systems are sized to handle peak loads, which rarely exist. Consequently, operating efficiency at full load is often not a good indication of actual operating efficiency. Newer technologies, such as Digital Scroll compressors and variable frequency drives in computer room air conditioners (CRACs), allow high efficiencies to be maintained at partial loads. Digital scroll compressors allow the capacity of room air conditioners to be matched exactly to room conditions without turning compressors on and off. Typically, CRAC fans run at a constant speed and deliver a constant volume of air flow. Converting these fans to variable frequency drive fans allows fan speed and power draw to be reduced as load decreases. Fan power is directly proportional to the cube of fan rpm and a 20 percent reduction in fan speed provides almost 50 percent savings in fan power consumption. These drives are available in retrofit kits that make it easy to upgrade existing CRACs with a payback of less than one year.
  10. High Density Supplemental Cooling - Traditional room-cooling systems have proven very effective at maintaining a safe, controlled environment for IT equipment. However, optimizing data center energy efficiency requires moving from traditional data center densities (2 to 3 kW per rack) to an environment that can support much higher densities (in excess of 30 kW). This requires implementing an approach to cooling that shifts some of the cooling load from traditional CRAC units to supplemental cooling units. Supplemental cooling units are mounted above or alongside equipment racks and pull hot air directly from the hot aisle and deliver cold air to the cold aisle. Supplemental cooling units can reduce cooling costs by 30 percent compared to traditional approaches to cooling. These savings are achieved because supplemental cooling brings cooling closer to the source of heat, reducing the fan power required to move air. They also use more efficient heat exchangers and deliver only sensible cooling, which is ideal for the dry heat generated by electronic equipment. Refrigerant is delivered to the supplemental cooling modules through an overhead piping system, which, once installed, allows cooling modules to be easily added or relocated as the environment changes.
  11. Monitoring and Optimization - One of the consequences of rising equipment densities has been increased diversity within the data center. Rack densities are rarely uniform across a facility and this can create cooling inefficiencies if monitoring and optimization is not implemented. Room cooling units on one side of a facility may be humidifying the environment based on local conditions while units on the opposite side of the facility are dehumidifying. Cooling control systems can monitor conditions across the data center and coordinate the activities of multiple units to prevent conflicts and increase teamwork.
  12. Consolidated Data Storage – Move from direct attached storage to network attached storage. Also, faster disks consume more power, so it is worthwhile to reorganize data so that less frequently used data is on slower archival drives.
  13. Economizers - Economizers allow outside air to be used to support data center cooling during colder months, creating opportunities for free cooling. With today’s high-density computing environment, economizers can be cost effective in many more locations than might be expected.
  14. Monitor Generation Losses - Monitor and reduce parasitic losses from generators, exterior lighting and perimeter access control. For a 1 MW load, generator losses of 20 kW to 50 kW have been measured.
  15. Direct Current - Every power conversion (AC-DC, DC-AC, AC-AC) loses some energy and creates heat. Computer equipment uses direct current, and solar generation produces it.
Using the model of a 5,000-square-foot data center consuming 1127 kW of power, the above actions work together to produce a 585 kW reduction in energy use.
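
The sketch below is a back-of-the-envelope check on three figures quoted in the strategies above: the cube-law fan savings, the high idle draw of servers, and the 240V line-to-neutral voltage obtained from 415V three-phase distribution. The server power numbers are assumptions chosen only to illustrate the shape of the curves.

```python
# Back-of-the-envelope checks on three figures quoted in the strategies above.
# Illustrative only; real equipment curves differ.
import math

# 1. Fan affinity law: fan power scales with the cube of fan speed.
speed_fraction = 0.8                       # a 20% reduction in fan speed
power_fraction = speed_fraction ** 3
print(f"Fan power at 80% speed: {power_fraction:.0%} of full power "
      f"(~{1 - power_fraction:.0%} savings)")            # ~49% savings

# 2. Idle servers: a server idling at ~80% of full power keeps facility energy
#    high even when the facility is lightly loaded (assumed 500 W server).
full_power_w = 500.0
idle_fraction = 0.8
utilization = 0.2
# crude linear interpolation between idle power and full power
draw_w = full_power_w * (idle_fraction + (1 - idle_fraction) * utilization)
print(f"Server at {utilization:.0%} load draws ~{draw_w / full_power_w:.0%} of full power")

# 3. 415V three-phase distribution: line-to-neutral voltage delivered to servers.
line_to_line = 415.0
line_to_neutral = line_to_line / math.sqrt(3)
print(f"415V line-to-line -> {line_to_neutral:.0f}V line-to-neutral")     # ~240V
```
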
5. Benefits
  • Reduce Data Center Capital Expense - Cutting wasted watts allows data centers to expand IT capacity within their existing walls and avoid building or buying new data center space.
  • Liquid Cooling Benefits
    • No refrigeration needed; even in the hottest climates free air cooling could be used, since input liquid temperatures of 130° F are acceptable
    • Waste heat could be delivered at useful temperatures like 160° F
    • Pump energy could be minimized enabling PUE levels of 1.05 or better to be achieved
    • Data Centers could be silent as there would be no need for fans
    • New equipment could be thermally neutral, not adding any extra heat load to the site
    • No humidity problem, no humidifiers
    • No need for a raised floor
    • Massive improvement in reliability due to thermal stability
  • Vertical Cooling Benefits - Turning the hot and cold aisles into a horizontal configuration has a number of significant efficiency benefits:
    • Better CRAC unit efficiency
    • Lower air handling energy (larger more efficient fan units)
    • Significantly better Power Usage Effectiveness (PUE)
    • More cores per kW of electricity
    • The front and back of the servers (blades) are unencumbered by the need for cooling fans and slots freeing space for connectors and indicators
    • There is no front and back, so servers (blades) can be fitted into both sides, doubling the number of cores per U of rack space
6. Risks/Issues
  • Efficiency of Small to Medium Data Centers - Some large server farms operated by well-known Internet brands provide shining examples of ultra-efficient data centers. Yet the vast majority of data center energy is consumed in small, medium, and large corporate data centers, as well as in the multi-tenant data centers to which a growing number of companies outsource their data center needs, and these facilities are generally much less efficient.

  • Consumer Backlash - In April 2011 Greenpeace called out a number of top cloud computing companies that have fast-growing electricity needs for their lack of transparency and bad energy choices. The report also cites the positive contributions of the cloud and lists recommendations for how IT companies can green their act.
    Source: Greenpeace April 2011

  • Utilization - Data center servers are generally utilized at an average rate of about 10%, yet a server's power draw varies little with load: a server may draw 225W when 100% utilized and only drop to 200W when idle. There is an opportunity to power servers down more aggressively, and virtualization can reduce the number of servers and increase utilization. (A short sketch at the end of this section illustrates the effect.)

    The data center industry should adopt a simple metric, such as the average utilization of the server central processing unit(s) (CPU), to help resolve  underutilization of servers, one of the biggest efficiency issues in data centers.  Measuring and reporting CPU utilization is a simple, affordable, and adequate way of gauging data center efficiency that could be used immediately to drive greater IT energy savings in data centers.

  • Power Availability - According to a Fall 2007 Survey of the Data Center Users Group (DCUG), an influential group of data center managers, power limitations were cited as the primary factor limiting growth by 46 percent of respondents, more than any other factor.

  • Misaligned Incentives - Except for the largest utility sized data centers, IT managers rarely have budgetary responsibility for facilities and energy use.  This is especially a problem in the  fast growing multi-tenant data center market segment.

    Data center operators, service providers, and multi-tenant customers should review their internal organizational structure and external contractual arrangements and ensure that incentives are aligned to provide financial rewards for efficiency best practices. Multi-tenant data center stakeholders should develop a "green lease" contract template to make it easier for all customers to establish contracts that incentivize rather than stand in the way of energy savings.
  • Zombie Servers - "Comatose" (aka "zombie") computer servers are wasting vast amounts of energy. According to a 2015 study by the NRDC, possibly as many as 10 million servers are sitting idle globally but still powered on and gobbling energy even though they haven't delivered any information or computing services for six months or more.

    A big part of the problem is "out of sight, out of mind." Many small and mid-sized organizations keep their computer servers tucked away in a closet or storeroom, where, unless they malfunction, managers have little reason to concern themselves with server energy use. That's true as well of many multi-tenant data centers -- which make IT (information technology) facility and computing services available to other organizations for a fee.

    Data centers often have excess server capacity because identifying unused or over-provisioned hardware with certainty is difficult with conventional tools. Analysis of user demand and traffic could identify IT resources that are not doing any useful work so they can be decommissioned without adding any risk to the business.

  • Critical Systems– Data Center availability can be critical to the organization’s health and trumps energy savings if there is a problem.

  • Retrofitting Legacy Data Centers - How do we get more cooling capacity from sites we built a decade ago for low power density applications? One of the biggest problems is that everyone is trying to work against a basic law of physics; hot air rises, and cold air falls. A conventional data centre is designed to support servers that pull air in from the front and blow it out the back of the cabinet. So common sense tells us that computers at the top of the cabinet don’t get as much cold air as computers at the bottom.

  • Inaccurate Server Specs - Many server makers tend to publish specifications for their servers that overstate how much power they need at a maximum power draw. There’s a reason for this: Server makers don’t want data center operators to try to run too many servers off one rack or power supply, only to see a surge in power use blow circuits or damage equipment. That’s how data center managers get fired. But overestimating energy consumption for servers also means data centers are often under-utilizing their available power per rack, row or section of the data center.

  • Over Cooling - According to a server expert at Intel, most data center managers keep their facilities much too cold -- as much as 15 percent too cold. In an article by Rik Myslewski published in The Register, Dylan Larson, Intel's director of server platform technology initiatives, explained that keeping data centers in the low 70s and high 60s leads to a significant amount of excess cooling, and wasted energy. The ideal temperature, per Larson as well as ASHRAE, is a balmy 80 degrees.
    Google suggests running data centers at hotter temperatures, like 80 degrees.
  • Immature Computing Efficiency Metrics – There is a need to define universally accepted metrics for processor, server and data center efficiency. There have been tremendous technology advances in server processors in the last decade. Until 2005, higher processor performance was linked with higher clock speeds and hotter chips consuming more power. Recent advances in multi-core technology have driven performance increases by using more computing cores operating at relatively lower clock speeds, which reduces power consumption. Today processor manufacturers offer a range of server processors from which a customer needs to select the right processor for the given application. What is lacking is an easy-to-understand and easy-to-use measure, such as the miles-per-gallon automotive fuel efficiency ratings, that can help buyers select the ideal processor for a given load. The performance-per-watt metric is evolving gradually, with the SPEC score being used as the server performance measure, but more work is needed. This same philosophy could be applied at the facility level. An industry standard of data center efficiency that measures performance per watt of energy used would be extremely beneficial in measuring the progress of data center optimization efforts. IT management needs to work with IT equipment and infrastructure manufacturers to develop the miles-per-gallon equivalent for both systems and facilities.

  • Inefficiency - Legacy equipment is inefficient. Infrastructure is typically oversized for much of its life because power requirements are overstated.
    • Multiple Power Conversion - Each time power is converted between AC and DC some power is converted to heat which must then be removed. The efficiency of UPS's, Transformers and PDU's (with transformers) varies.

  • Hot Aisle/Cold Aisle Issues
    • Fire Codes - When isolating those air streams, data centers can leave themselves vulnerable to violating fire codes that require detection and prevention devices throughout the room. If the hot aisle exceeds 110 degrees, you could actually exceed the National Electric Code standards. If you have plastic sheets over your racks and don’t have sprinklers in the contained area, how can a potential fire be squelched?
    • Employee Comfort - It isn’t comfortable for technicians to work on equipment in very hot conditions.

  • Liquid Cooling Issues
    • Redundancy is much more problematic and expensive with liquid than with air. An extra CRAC or two in the data center can back up a fairly wide expanse of racks. How do you get cooling redundancy for a 20 kW liquid-cooled rack? Not impossible, but very expensive.
    • Cost of equipment is significantly higher, at least today. Rather than a single CRAC servicing 20-30 racks, liquid solutions are almost always one per rack. Even with densification of the racks, the cost of liquid cooling equipment inside the data center is many times that of air.
    • Space Issues - How many data centers really have a space problem? With the 6-10 kW racks of the typical data center replacing the traditional 1-3 kW racks, very few sites run out of space before they are up against serious power and plant cooling constraints. 6-10 kW is easily handled with traditional cooling solutions and proper airflow design.
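As a rough illustration of the miles-per-gallon idea above, the Python sketch below computes a performance-per-Watt figure from a benchmark score and a measured power draw. The server names, scores, and power readings are made-up assumptions, not published SPEC results; only the shape of the metric matters here.

    # A "miles-per-gallon"-style efficiency figure for servers.
    # Benchmark scores and power readings are illustrative assumptions,
    # not published SPEC results.
    from dataclasses import dataclass

    @dataclass
    class ServerMeasurement:
        name: str
        benchmark_score: float   # e.g. a SPEC-style throughput score
        avg_power_watts: float   # wall power measured during the benchmark run

    def performance_per_watt(m: ServerMeasurement) -> float:
        """Higher is better: benchmark work delivered per Watt consumed."""
        return m.benchmark_score / m.avg_power_watts

    servers = [
        ServerMeasurement("4 cores, high clock", benchmark_score=180.0, avg_power_watts=320.0),
        ServerMeasurement("16 cores, lower clock", benchmark_score=540.0, avg_power_watts=410.0),
    ]

    # Rank servers the way a miles-per-gallon sticker would rank cars.
    for s in sorted(servers, key=performance_per_watt, reverse=True):
        print(f"{s.name}: {performance_per_watt(s):.2f} score/Watt")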
7. Success Criteria
  1. Public Disclosure - The public disclosure of efficiency metrics is necessary to create the conditions for best-practice efficiency behaviors across the data center industry.
  2. Measure PUE - Know your data center's efficiency performance by measuring energy consumption and monitoring PUE frequently (a simple PUE calculation is sketched after this list).
  3. More Sophisticated Power Management - While enabling power management features provides tremendous savings, IT management often prefers to stay away from this technology as the impact on availability is not clearly established. As more tools become available to manage power management features, and data is available to ensure that availability is not impacted, we should see this technology gain market acceptance. More sophisticated controls that would allow these features to be enabled only during periods of low utilization, or turned off when critical applications are being processed, would eliminate much of the resistance to using power management.
  4. Matching Power Supply Capacity to Server Configuration - Server manufacturers tend to oversize power supplies to accommodate the maximum configuration of a particular server. Some users may be willing to pay an efficiency penalty for the flexibility to more easily upgrade, but many would prefer a choice between a power supply sized for a standard configuration and one sized for maximum configuration. Server manufacturers should consider making these options available and users need to be educated about the impact power supply size has on energy consumption.
  5. Design for High Density - A perception persists that high-density environments are more expensive than simply spreading the load over a larger space. High density environments employing blade and virtualized servers are actually economical as they drive down energy costs and remove constraints to growth, often delaying or eliminating the need to build new facilities.
  6. Integrate Measurement and Control - Data that can be easily collected from IT systems and the racks that support them has yet to be effectively integrated with support systems controls. This level of integration would allow IT systems, applications and support systems to be more effectively managed based on actual conditions at the IT equipment level.
  7. Location, Location, Location - Ideally a data center ought to be located in a cold place with plenty of electrical power and close to a consumer (market garden, manufacturing process, swimming pool complex) that can use the warm water or air that is a byproduct of operations. It should not be close to incineration plants or other industrial processes that expel foul contamination or dust into the air. Trees that give off sap are also best kept a reasonable distance away. Being close to a source of cold water like a large lake, or a fast flowing river can make cooling much less costly. Being close to multiple sources of high capacity network connections is also pretty essential.
  8. Manage Air Flow - Good air flow management is fundamental to efficient data center operation. Start by minimizing hot and cold air mixing and eliminating hot spots.
  9. Adjust the Thermostat - Raising the cold aisle temperature will minimize chiller energy use. Don't run at 70F in the cold aisle; aim for 80F, which virtually all equipment manufacturers allow.
  10. Use Free Cooling - Water or air-side economizers can greatly improve energy efficiency.
  11. Optimize Power Distribution - Whenever possible use high-efficiency transformers and UPS systems. 415V power distribution is used commonly in Europe, but UPS systems that easily support this architecture are not readily available in the United States. Manufacturers of critical power equipment should provide the 415V output as an option on UPS systems and can do more to educate their customers regarding high-voltage power distribution.
  12. Buy Efficient Servers - Specify high-efficiency servers and data storage systems. The Climate Savers Computing Initiative offers resources to identify power-efficient servers.
  13. Improved Sleep Mode - Engineers must architect networks that wake up and go to sleep faster. Network designers must challenge the “always on” assumption for desktops and appliances. Networks will require significant improvements in scheduling and forecasting of work to allow more machines to go to sleep at any given moment.
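The PUE bookkeeping behind success criterion 2 is simple enough to sketch. The snippet below assumes two hypothetical meters, one at the utility feed and one at the IT load; PUE is total facility energy divided by IT equipment energy, and DCiE is its reciprocal. The monthly readings are invented for illustration.

    # PUE = total facility energy / IT equipment energy; DCiE = 1 / PUE.
    # Meter readings below are hypothetical.
    def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
        if it_equipment_kwh <= 0:
            raise ValueError("IT equipment energy must be positive")
        return total_facility_kwh / it_equipment_kwh

    monthly_readings = [
        {"month": "Jan", "facility_kwh": 1_300_000, "it_kwh": 800_000},
        {"month": "Feb", "facility_kwh": 1_180_000, "it_kwh": 790_000},
    ]

    for r in monthly_readings:
        p = pue(r["facility_kwh"], r["it_kwh"])
        print(f"{r['month']}: PUE = {p:.2f}, DCiE = {1 / p:.0%}")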
8. Case Study - Substituting CHP for UPS

Capstone Turbine of Chatsworth, CA, a microturbine manufacturer, recognized that Combined Heat and Power systems could offer benefits to data centers if they could be designed as an alternative to conventional Uninterruptible Power Supply hardware. Capstone launched its Hybrid UPS product to provide data centers with the benefits associated with CHP technology plus the added benefit of avoiding the cost of installing or replacing conventional UPS hardware.

Combined heat and power (CHP) is an under-utilized solution that can significantly reduce demand for grid electricity at data centers. In addition to reducing electricity demand and relieving stress on the grid, CHP can reduce energy costs for data center operators and reduce emissions, thereby providing environmental benefits.

While the energy efficiency benefits of CHP are well matched to data center needs, the adoption of CHP has been slow at data centers. One barrier to the adoption of CHP is that data centers have sophisticated redundant backup power systems, and the addition of a CHP system is often viewed as an unnecessary additional layer to conventional data center power schemes.

All data centers have uninterruptible power supply (UPS) hardware to provide power in the event of a grid outage. A conventional data center UPS system typically includes power electronics and batteries integrated with one or more emergency backup generators. The cost of a conventional UPS system varies widely between data centers depending on the required electrical power capacity, specific hardware, and complexity of the control strategy. It is not uncommon, however, for a data center UPS system to cost more than a million dollars. While the UPS function is vital, the cost of a conventional UPS system is high given that most UPS systems are only needed to back up the grid for a few hours each year.

At the SoCalGas data center in Monterey Park, California, the Capstone Hybrid UPS microturbine units will integrate with an advanced double-effect Thermax exhaust-fired absorption chiller. Clean-and-green power generated onsite will support critical loads. The microturbines' clean waste-heat energy will drive the absorption chiller to meet the data center's cooling needs in the summer and support heating needs in the winter.

The demonstration system, funded in part by the California Energy Commission, included three Capstone Hybrid UPS microturbines integrated with one Thermax absorption chiller to meet partial load requirements for the Monterey Park data center. Regatta Solutions served as the general contractor and completed the hardware installation in 2014.

Demonstration results based on data collected between August 2014 and January 2015 are summarized below. Operation of the CHP system reduced energy costs by 20 to 44%, depending on the operating schedule for the CHP Hybrid UPS system. In addition to reducing energy costs, the CHP Hybrid UPS system also produced environmental benefits. The South Coast Air Quality Management District (SCAQMD) permitted the system at 10 ppm CO and 9 ppm NOx (values reported at 15% oxygen). The CHP system complied with these stringent requirements, and based on SCAQMD tests, it reduced NOx emissions by 66% compared to the NOx emissions associated with consuming grid electricity.

9. Companies
  1. Cirrascale, Poway, CA - Maintaining hot and cold air in separate vertical aisles between racks is hard to achieve. Cirrascale has a patented technology that uses bottom-to-top cooling (Vertical Cooling) with specialized racks or containers. This turns the hot-aisle/cold-aisle concept through 90 degrees, and the result is a more natural cold layer at the bottom of the rack with a hot layer above, reducing cold-air/hot-air mixing. Fan trays throughout the infrastructure of the rack push the cold air from the bottom of the rack out the top.

  2. Emerson Network Power - Liebert Global HQ, Columbus, OH - Supplies cold aisle containment solutions. The portfolio includes high-performance cooling systems, including thermal management for larger installations such as telecom switching centers, Internet data centers, and computer rooms.

  3. Green Grid, Beaverton, OR - A global consortium dedicated to advancing energy efficiency in data centers and business computing ecosystems. It was founded in February 2007 by several key companies in the industry – AMD, APC, Dell, HP, IBM, Intel, Microsoft, Sun Microsystems and VMware. The Green Grid has since grown to hundreds of members, including end users and government organizations, all focused on improving data center efficiency.

  4. Google - Publishes data about the efficiency of its data centers on its sustainable computing website.

  5. IBM - Says that removing excess heat from data centers is as much as 4,000 times more efficient via water than via air. Its Power 575® supercomputer, introduced in 2008 and equipped with IBM’s POWER6® microprocessor, uses water-chilled copper plates located above each microprocessor to remove heat from the electronics; IBM refers to this water-cooled design as a Hydro-Cluster. The IBM lab in Zurich has developed a newer cooling technology that attaches small water pipes to the surface of each computer chip in a server. Water is piped within microns of the chip to cool it, and the waste water leaves hot enough to make a cup of Ramen, heat a building, or keep a swimming pool warm. IBM says the new cooling system will reduce the carbon footprint of servers by 85 percent and their energy use by 40 percent.
  6. EPA - The Role of Distributed Generation and Combined Heat and Power (CHP) Systems in Data Centers

  7. Modius - San Francisco, CA - A startup that integrates a host of facility-side systems into a single database and automation platform. It has cross-domain partnerships under way with data center power distribution company STARLINE, which adds a store of highly accurate power data to Modius’s per-server calculations. That technology could be important when some servers measure their own power use inaccurately, or not at all.

  8. Power Assure, Santa Clara, CA - Provides visibility, intelligence, and dynamic automation to help CIOs, IT directors, and facilities managers optimize efficiency, service levels, and power consumption within and across data centers. Developed PAR4. The company is privately held with funding from ABB, Draper Fisher Jurvetson, Good Energies, Point Judith Capital, and a grant from the Department of Energy. Power Assure partners include UL, Cisco, ABB, Intel, Dell, IBM, Raritan, and VMware.

  9. Sentilla, Redwood City, CA - Its flagship product, Sentilla Energy Manager, takes a software-only approach with patent-pending virtual metering that analyzes power usage directly and tracks the requirements, performance, and capacity of every piece of equipment. Sentilla has begun to graduate from analyzing data centers for energy efficiency to analyzing them for overall computing efficiency and effectiveness. In 2010, the company estimated that the top 40 online retailers spend roughly $110 million more on energy than they should in preparing for Cyber Monday, the first workday after Thanksgiving that has been enshrined as the start of the online holiday shopping season. The excess power comes from servers churning in idle and untracked assets waiting around for the big shopping bump.

    In August 2011, Sentilla raised $15 million in a third round of funding. SingTel Innov8, the venture capital arm of Singapore telecom carrier SingTel, joined existing investors and now owns 23.4 percent of the company.

  10. Synapsense - Folsom, CA - A startup launched by former Intel exec Peter Van Deventer and UC Davis computer science professor Raju Pandey that makes wireless sensor technology and software to monitor and reduce power usage and cooling in data centers. Obtained a $5 million round of funding in 2010 from GE, Emerald Technology Ventures, Sequoia Capital, Robert Bosch Venture Capital, American River Ventures, Nth Power and DFJ Frontier.

  11. TrendPoint Systems, San Ramon, CA - Provides web-based solutions for remote data center monitoring. Its EnviroCube product monitors the power going into various data center equipment and the air conditioning system. That data allows TrendPoint to determine how much heat should be produced and where it will come out, then cross-check against data on the ambient environment to determine cooling efficiency or gaps in a cooling strategy.

  12. TSO Logic - Vancouver, BC - IT efficiency software for data centers. The platform's visibility, action, and control components work together to let IT dynamically position workloads, improve uptime, and reduce operational expenses. Also offers a Server Savings Calculator.

  13. Verdiem - Seattle, WA - Provides enterprise software solutions to global businesses and individuals that help reduce the energy consumption of PC networks. Verdiem received $4.7 million from Kleiner Perkins and Microsoft for PC energy management.

    Verdiem's SURVEYOR allows the central administration of power management settings for networked PCs. Intelligent policies maximize energy savings by placing machines into lower power states without interfering with end-user productivity, desktop maintenance, or upgrades. Verdiem's consumer product, Edison, is available as a free download.
With the Cascade Effect, a 1 Watt savings at the server component level creates a reduction in facility energy consumption of approximately 2.84 Watts, because every upstream power-conversion and cooling stage no longer has to carry that load.
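The sketch below shows one way such a cascade multiplier can arise: a Watt saved at the component no longer has to be delivered through the power supply, PDU, and UPS, or removed by the cooling plant. The per-stage efficiencies and the cooling overhead are illustrative assumptions, not the published Energy Logic breakdown; they are chosen only to land near the cited multiplier.

    # Cascade effect: saving 1 W at the component also avoids upstream losses.
    # Stage efficiencies and cooling overhead are illustrative assumptions,
    # not the Energy Logic paper's published breakdown.
    component_savings_w = 1.0

    conversion_efficiencies = {
        "server power supply (AC-DC)": 0.80,  # assumed
        "power distribution unit": 0.97,      # assumed
        "UPS (double conversion)": 0.91,      # assumed
    }

    power_at_utility_w = component_savings_w
    for stage, eff in conversion_efficiencies.items():
        power_at_utility_w /= eff  # upstream stages must deliver more than 1 W

    # Assume roughly 1 W of cooling energy per W of IT heat removed.
    cooling_overhead_w = power_at_utility_w * 1.0

    facility_savings_w = power_at_utility_w + cooling_overhead_w
    print(f"Facility-level savings: {facility_savings_w:.2f} W per 1 W saved at the component")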
10. Links
  1. Uptime Institute - Comatose Server Savings Calculator
  2. Facebook is sharing much of what it has learned about making data centers more efficient through its Open Compute Project. As a result, Facebook’s Prineville, Oregon data center is now one of the most efficient in the world:
    • Facebook’s energy consumption per unit of computing power has declined by 38% 
    • Their Prineville, Ore., data center, which opened in April 2011, had a Power Usage Effectiveness (PUE) of 1.08 for the second quarter of 2011, compared to 1.07 in the first quarter
    • For the first half of the year, this means roughly 93% of the energy drawn from the grid reached the Open Compute servers (at a PUE of about 1.08, the IT load receives about 1/1.08, or roughly 93%, of facility energy). This PUE is far below the commonly cited industry figure of 1.5
  3. Facebook also reports removing centralized chillers, eliminating traditional inline UPS systems, and removing a 480V-to-208V transformation.
    Ethernet-powered LED lighting and passive cooling infrastructure further reduce the energy spent on running the facility.
  4. DOE - Data Center Energy Efficiency
  5. Information Week White Paper - Energy Logic: Reducing Data Center Energy Consumption by Creating Savings that Cascade Across Systems
  6. ASHRAE has published several excellent papers on cooling best practices.
  7. The Hot Aisle Blog – Cooling Articles
  8. The Green Grid - White papers on metrics; the consortium seeks to unite global industry efforts to standardize on a common set of metrics, processes, methods, and new technologies for improving energy efficiency in data centers and business computing ecosystems.
  9. DOE Data Center Website: Sign up to stay up to date on new developments
  10. Lawrence Berkeley National Laboratory (LBNL) Data Center Energy Efficiency
    Design guides developed from best practices, plus web-based training.
  11. ASHRAE Data Center technical guidebooks
  12. Energy Star® Server and Data Center Efficiency Program
  13. Uptime Institute - White Papers