
Wednesday, June 27, 2012

San Diego Blackout

The outage began about 3:30 p.m. on Thursday, September 8, at a substation outside Yuma and affected almost 5 million people in Southern California, northwest Mexico and part of Arizona.


This post has been updated based on FERC/NERC's May 2012 report on causes and recommendations.


San Diego is served by only one high-voltage connection from the east and one from the north. Source: NPR




Navigate this Report
Back to Case Studies Index
1. Background

2. Acronyms/Definitions
3. SOE - Sequence of Events
4. Causes
5. Risks/Issues
6. Success Factors
7. FERC/NERC Findings and Recommendations
8. Companies, Organizations and Affected Entities
9. Links

Was a lone Homer Simpson in Yuma, Arizona responsible for the San Diego Blackout?  The explanation that ultimately emerges will likely include several more moving parts.

1. Background

  • Thursday's blackout affecting almost 5 million people in southern California, northwest Mexico and part of Arizona appears to have started at a substation outside Yuma, quickly knocking out a major transmission line between San Diego and Arizona.

  • The largest recent single outage was the 2003 event in the Northeast, when virtually the entire region was blacked out and 50 million people were affected. A federal investigation identified a wide range of causes and recommended a series of improvements that were intended to preclude another such failure.

  • About five years later, the electrical power industry decreed that it had so vastly improved the system that a similar event was "much less likely to occur".

  • See my blog article Outage Management 



2. Acronyms/Definitions
  1. ACE - Area Control Error - The instantaneous difference between a Balancing Authority's net actual interchange and its scheduled interchange, adjusted for frequency bias; the real-time measure of how well a BA is balancing generation, load, and interchange.

  2. BA - Balancing Authority - The BA integrates resource plans ahead of time, maintains in real time the balance of electricity resources (generation and interchange) and electricity demand or load within its footprint, and supports the Interconnection frequency in real time. There are 37 BAs in the WECC footprint. The following five BAs were affected by the event: APS, IID, WALC, CAISO, and CFE.

  3. Blackstart Recovery – The process of restoring a power station to operation without relying on the external transmission network. In the absence of grid power, a so-called black start must be performed to bootstrap the power grid back into operation. Blackstarts are generally made more difficult by the large number of reactive loads attempting to draw power simultaneously at start-up, when voltages are low; the resulting overloads trip local breakers and delay full system recovery. A dynamic controller could have these loads "wait their turn" until full power had been restored, and could provide other ancillary services if programmed with those functions. (A minimal staging sketch appears after this list.)

  4. Brownout – When voltage is reduced intentionally in response to a shortage of electric supply. Out-of-service lines or transformers sometimes cause undervoltage conditions, usually lasting less than one or two days. Brownouts can cause overheating in constant-speed motors due to the increase in current density, as well as the problems with electronic equipment that occur with short-term voltage variations.

  5. BES - Bulk Electric System - In general, the transmission elements operated at 100 kV or higher and the generation resources connected to them, as defined by NERC; facilities used in the local distribution of electric energy are excluded.

  6. BPS - Bulk-Power System - a large interconnected electrical system made up of generation and transmission facilities and their control systems. A BPS does not include facilities used in the local distribution of electric energy. If a bulk power system is disrupted, the effects are felt in more than one location. In the United States, bulk power systems are overseen by the North American Electric Reliability Corporation (NERC).

  7. Cascading – The uncontrolled successive loss of system elements triggered by an incident. Cascading results in widespread service interruption which cannot be restrained from sequentially spreading beyond an area predetermined by appropriate studies.

  8. IROL - Interconnection Reliability Operating Limit - In order to ensure the reliable operation of the BPS, entities are required to identify and plan for IROLs, which are SOLs that, if violated, can cause instability, uncontrolled separation, and cascading outages. Once an IROL is identified, system operators are then required to create plans to mitigate the impact of exceeding such a limit to maintain system reliability.

  9. N - 1 - "N minus one" - NERC's mandatory Reliability Standards applicable to the Bulk Electric System (BES) require that the BES be operated so that it generally remains in a reliable condition, without instability, uncontrolled separation or cascading, even with the occurrence of any single contingency, such as the loss of a generator, transformer, or transmission line.  This is commonly known as the “N‐1 criterion.”  N‐1 contingency planning allows entities to identify potential N‐1 contingencies before they occur and to adopt mitigating measures, as necessary, to prevent instability, uncontrolled separation, or cascading.  As the Federal Energy Regulatory Commission (Commission) stated in Order No. 693 with regard to contingency planning, “a single contingency consists of a failure of a single element that faithfully duplicates what will happen in the actual system.  Such an approach is necessary to ensure that planning will produce results that will enhance the reliability of that system.  Thus, if the system is designed such that failure of a single element removes from service multiple elements in order to isolate the faulted element, then that is what should be simulated to assess system performance.”

    The loss of a single 500 kilovolt (kV) transmission line initiated the event, but was not the sole cause of the widespread outages. The system is designed, and should be operated, to withstand the loss of a single line, even one as large as 500 kV. The affected line—Arizona Public Service’s (APS) Hassayampa-N. Gila 500 kV line (H-NG)—is a segment of the Southwest Power Link (SWPL), a major transmission corridor that transports power in an east-west direction, from generators in Arizona, through the service territory of Imperial Irrigation District (IID), into the San Diego area. It had tripped on multiple occasions, as recently as July 7, 2011, without causing cascading outages.

  10. Overvoltages and Undervoltages - When sags and swells last for longer than 2 minutes they are classified as over or under voltage conditions. Such long duration voltage variations are most often caused by unusual conditions on the power system. Most utility voltage regulation problems are the result of too much impedance in the power system to supply the load; customers at the end of feeders suffer the most from low voltage. Problems on the utility grid can cause higher than nominal voltages long enough to adversely affect facilities. This situation might happen because of problems with voltage regulation capacitors or transmission and distribution transformers. The utility does have overvoltage protection, but at times these devices do not respond fast enough to completely protect all equipment downstream.

  11. RC - Reliability Coordinator - The RC and TOP have similar roles, but different scopes. The TOP directly maintains reliability for its own defined area. The RC is the “highest level of authority” according to NERC, and maintains reliability for the Interconnection as a whole. Thus, the RC is expected to have a “wide-area” view of the entire Interconnection, beyond what  any single TOP could observe, to ensure operation within IROLs. The RC oversees both transmission and balancing operations, and it has the authority to direct other functional entities to take certain actions to ensure reliable operation. The RC, for example, may direct a TOP to take whatever action is necessary to ensure that IROLs are not exceeded.

    The RC performs reliability analyses including next-day planning and RTCA for the Interconnection, but these studies are not intended to substitute for TOPs’ studies of their own areas. Other responsibilities of the RC include responding to requests from TOPs to assist in mitigating equipment overloads. The RC also coordinates with TOPs on system restoration plans, contingency plans, and reliability-related services.

  12. RE - Regional Entity - WECC

  13. Restoration – The process of returning generators and transmission system elements to service and restoring load following an outage on the electric system.

  14. RTCA - Real-Time Contingency Analysis - Takes the estimated real-time values for the whole system calculated by the State Estimator (based on the available measurements from the SCADA system) and studies "what if" scenarios. For example, RTCA determines the potential effects of losing a specific facility, such as a generator, transmission line, or transformer, on the rest of the system. In addition to studying the effects of various contingencies, RTCA can prioritize contingencies, provide mitigating actions, and send visual and/or audible alarms to alert operators to potential contingencies. (A toy contingency-screening sketch appears after this list.)

  15. Series Capacitor - A piece of equipment about the size of a small car that helps the utility manage voltage. Series-connected controllers are usually employed for active power control and to improve the transient stability of power systems. Thyristor control adds another dimension of controllability, since thyristors can dynamically modulate the ohms inserted by the capacitor. This is primarily used to provide inter-area damping of prospective low-frequency electromechanical oscillations, but it also makes the whole series compensation scheme immune to Subsynchronous Resonance (SSR). See my blog article Flexible Alternating Current Transmission System (FACTS). (A worked transfer-capability example appears after this list.)

  16. TOP - Transmission Operator - The TOP is responsible for the real-time operation of the transmission assets under its purview. The TOP has the authority to take corrective actions to ensure that its area operates reliably. The TOP performs reliability analyses, including seasonal and next-day planning and RTCA, and coordinates its analyses and operations with neighboring BAs and TOPs to achieve reliable operations. It also develops contingency plans, operates within established SOLs, and monitors operations of the transmission facilities within its area. There are 53 TOPs in the WECC region. The following seven TOPs were affected by the event: APS, IID, WALC, CAISO, CFE, SDG&E, and SCE

  17. SOE - Sequence of Events - A precise and accurate sequence of events that provides a foundation for root cause analysis, computer model simulations, and other analytical aspects of the inquiry.

    More than 100 notable events occurred in less than 11 minutes on Sep 8, 2011. The inquiry’s SOE team established a precise and accurate sequence of outage related events to form a critical building block for the other parts of the inquiry. It provided, for example, a foundation for the root cause analysis, computer-based simulations, and other event analyses. Although entities time-stamp much of the data related to specific events, their time-stamping methodologies vary, and not all of the time-stamps were synchronized to the National Institute of Standards and Technology (NIST) standard clock in Boulder, Colorado. Validating the precise timing of specific events became a time-consuming, important, and sometimes difficult task. The availability of global positioning system (GPS)-time synchronized PMU data on frequency, voltage, and related power angles made this task much easier than in previous blackout inquiries and investigations.

  18. SOL - System Operating Limit - The value (such as MW, MVar, amperes, frequency or voltage) that satisfies the most limiting of the prescribed operating criteria (for example, thermal, voltage and stability limits) for a specified system configuration.

  19. TP - Transmission Planner - The functional entity that develops a longer-term (generally one year and beyond) plan for the reliability of the interconnected bulk electric transmission systems within its portion of the planning area.
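
To make the "wait their turn" idea in the Blackstart Recovery entry concrete, here is a minimal staging sketch. It is only an illustration of the scheduling logic, not any utility's actual restoration software; the load blocks, inrush multipliers, and the amount of generation online are invented numbers.

```python
# Toy blackstart staging: pick up load blocks one at a time, deferring any
# block whose inrush would exceed the spare generation capacity available.
# Block sizes, inrush multipliers, and online capacity are invented.

load_blocks = [               # (name, steady-state MW, inrush multiplier)
    ("feeder_A", 40, 4.0),
    ("feeder_B", 25, 6.0),    # motor-heavy feeder: large inrush
    ("feeder_C", 60, 3.0),
    ("feeder_D", 15, 5.0),
]

def stage_restoration(capacity_online_mw, blocks):
    """Return a pickup order that never lets inrush exceed spare capacity."""
    served, plan, pending = 0.0, [], list(blocks)
    progress = True
    while pending and progress:
        progress = False
        for block in list(pending):
            name, steady_mw, inrush_factor = block
            if steady_mw * inrush_factor <= capacity_online_mw - served:
                plan.append(name)          # inrush fits under current headroom
                served += steady_mw        # once inrush decays, only steady MW remains
                pending.remove(block)
                progress = True
    return plan, [b[0] for b in pending]

plan, deferred = stage_restoration(capacity_online_mw=220, blocks=load_blocks)
print("pickup order:", plan)
print("deferred until more generation is online:", deferred)
```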
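
The N-1 criterion and RTCA entries describe the same basic computation: for each credible single outage, estimate how the lost element's flow redistributes onto the remaining facilities and flag anything pushed past its emergency rating. The sketch below applies line outage distribution factors (LODFs) to invented flows, ratings and factors; it illustrates the screening and alarm-ranking idea only and is not WECC's or any TOP's actual tool.

```python
# Toy N-1 / RTCA screen using line outage distribution factors (LODFs):
# post_flow[m] = flow[m] + LODF[m, k] * flow[k] when element k is lost.
# All flows, ratings, and LODF values are invented for illustration.

base_flow_mw = {"H-NG": 1400, "Path44-A": 900, "Path44-B": 950, "S-Line": 300}
emergency_rating_mw = {"H-NG": 1800, "Path44-A": 1100, "Path44-B": 1100, "S-Line": 400}

# LODF[(monitored, outaged)]: fraction of the outaged element's pre-contingency
# flow that shifts onto the monitored element. Unlisted pairs are treated as 0.
lodf = {
    ("Path44-A", "H-NG"): 0.45,
    ("Path44-B", "H-NG"): 0.40,
    ("S-Line",   "H-NG"): 0.10,
    ("Path44-A", "Path44-B"): 0.85,
    ("Path44-B", "Path44-A"): 0.85,
}

def screen_contingencies():
    """Return post-contingency overloads, worst first, like an RTCA priority list."""
    alarms = []
    for outaged, lost_flow in base_flow_mw.items():
        for monitored, pre_flow in base_flow_mw.items():
            if monitored == outaged:
                continue
            post = pre_flow + lodf.get((monitored, outaged), 0.0) * lost_flow
            pct = 100.0 * post / emergency_rating_mw[monitored]
            if pct > 100.0:
                alarms.append((outaged, monitored, round(post), round(pct)))
    return sorted(alarms, key=lambda a: a[3], reverse=True)

for outaged, monitored, post_mw, pct in screen_contingencies():
    print(f"ALARM: loss of {outaged} loads {monitored} to {post_mw} MW ({pct}% of emergency rating)")
```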
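
For the Series Capacitor entry, the classic two-machine transfer relation P = (V1·V2/X)·sin(delta) shows why inserting capacitive reactance in series with a line raises its transfer capability: the capacitor cancels part of the line's inductive reactance X. The voltages, reactance, and 35% compensation level below are illustrative values, not the actual SWPL parameters.

```python
import math

# Two-machine transfer: P = V1 * V2 / X * sin(delta). With V in kV and X in ohms
# the result comes out in MW. Series compensation cancels part of the line
# reactance, so the same angle moves more power. All numbers are illustrative.

v1_kv = v2_kv = 500.0        # sending and receiving end voltages
x_line_ohm = 50.0            # series inductive reactance of the line
compensation = 0.35          # fraction of line reactance cancelled by the capacitor
delta_deg = 30.0             # angle across the line

def transfer_mw(x_ohm):
    return v1_kv * v2_kv / x_ohm * math.sin(math.radians(delta_deg))

print(f"uncompensated:          {transfer_mw(x_line_ohm):7.0f} MW")
print(f"35% series compensated: {transfer_mw(x_line_ohm * (1 - compensation)):7.0f} MW")
```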

3. Sequence of Events
Source:  April 2012 NERC/FERC Staff Report

  • Phase 1. Pre-Disturbance Conditions
    • Timing: September 8, 2011, before H-NG trips at 15:27:39
    • A hot, shoulder season day with some generation and transmission maintenance outages
    • Relatively high loading on some key facilities: H-NG at 78% of its normal rating, CV transformers at 83%
    • 44 minutes before loss of H-NG, IID’s RTCA results showed that the N-1 contingency loss of the first CV transformer would result in an overload of the second transformer above its trip point
    • An APS technician skipped a critical step in isolating the series capacitor bank at the North Gila substation

    September 8, 2011, was a relatively normal, hot day in Arizona, Southern California, and Baja California, Mexico, with heavy power imports into Southern California from Arizona. In fact, imports into Southern California were approximately 2,750 MW, just below the import limit of 2,850 MW. September is generally considered a “shoulder” season, when demand is lower than peak seasons and generation and transmission maintenance outages are scheduled. By September 8th, entities throughout the WECC region, including some of the affected entities, had begun generation and transmission outages for maintenance purposes. For example, on September 8th maintenance outages included over 600 MW of generation in Baja California and two 230 kV transmission lines in SDG&E’s territory. However, there were no major forced outages or major planned transmission outages that would result in a reduction of the SOLs in the area.
     Despite September being considered a shoulder month, temperatures in IID’s service territory reached 115 degrees on September 8th. IID’s load headed toward near-peak levels of more than 900 MW, which required it to dispatch local combustion turbine generation in accordance with established operating procedures. Prior to the event, loading on IID’s CV transformers reached approximately 125 megavolt amperes (MVA) per transformer, which is approximately 83% of the transformers’ normal limit.
     Forty-four minutes prior to the loss of H-NG on September 8, 2011, IID's RTCA results showed that the N-1 contingency loss of the first CV transformer would result in an overload of the second transformer above its trip point. The IID operator was not actively monitoring the RTCA results and, therefore, was not alerted to the need to take any corrective actions. At the time of the event, IID operators did not keep the RTCA display visible, and RTCA alarms were not audible. By reducing loading on the CV transformers at this pre-event stage, the operator could have mitigated the severe effects on the transformers that resulted when H-NG tripped. Since the event, IID requires its operators to have RTCA results displayed at all times. The loading on IID's CV transformers was pivotal to this event. (A back-of-the-envelope version of this transformer overload arithmetic appears at the end of this section.)

     
  • Initial Fault
    • APS manages H-NG, a segment of the SWPL. At 13:57:46, the series capacitors at APS’s North Gila substation were automatically bypassed due to phase imbalance protection. APS sent a substation technician to perform switching to isolate the capacitor bank. The technician was experienced in switching capacitor banks, having performed switching approximately a dozen times. APS also had a written switching order for the specific H-NG series capacitor bank at North Gila. After the APS system operator and the technician verified that they were working from the same switching order, the operator read steps 6 through 16 of the switching order to the technician. The technician repeated each step after the operator read it, and the operator verified the technician had correctly understood the step. The technician then put a hash mark beside each of steps 6 through 16 to indicate that he was to perform those steps. The technician did not begin to perform any of steps 6 through 16 until after all steps had been verified with the system operator.

      The technician successfully performed step 6, verifying that the capacitor breaker was closed, placing it in "local" and tagging it out with "do not operate" tags. However, because he was preoccupied with obtaining assistance from a maintenance crew to hang grounds for a later step, he accidentally wrote the time that he had completed step 6 on the line for step 8. For several minutes, he had multiple conversations about obtaining assistance to hang the grounds. He then looked back at the switching order to see what step should be performed next. His mistake in writing the time for step 6 on the line for step 8 caused him to pick up with step 9, rather than step 7.

      Thus, he skipped two steps, one of them the crucial step (step 8) of closing a line switch to place H-NG in parallel with the series capacitor bank. This step would bypass the capacitor bank, resulting in almost zero voltage across the bank and virtually zero current through the bank. Because he skipped step 8, when he began to crank open the hand-operated disconnect switch to isolate the capacitor bank, it began arcing under load. (A generic sketch of checklist-style step enforcement appears at the end of this section.)

  • Phase 2: Trip of the Hassayampa‐North Gila 500 kV Line 
    • Timing: 15:27:39 to 15:28:16, just before CV transformer No. 2 trips
    • H-NG trips due to fault; APS operators believe they will restore it quickly and tell WECC RC
    • H-NG flow redistributed to Path 44 (84% increase in flow), IID, and WALC systems
    • CV transformers immediately overloaded above their relay setting
    • At end of Phase 2, loading on Path 44 at 5,900 out of 8,000 amps needed to initiate SONGS separation scheme

  • Phase 3:  Trip of the Coachella Valley 230/92 kV Transformer and Voltage Depression   
    • Timing: 15:28:16, when CV transformer bank No. 2 tripped, to just before 15:32:10, when Ramon transformer tripped 
    •  Both CV transformers tripped within 40 seconds of H-NG tripping 
    • IID knew losing both CV transformers would overload Ramon transformer and S Line connecting it with SDG&E 
    • Severe low voltage in WALC’s 161 kV system 
    • At end of Phase 3, loading on Path 44 at 6,700 amps out of 8,000 needed to initiate SONGS separation scheme 

  •  Phase 4:  Trip of Ramon 230/92 kV Transformer and Collapse of IID’s Northern 92 kV System
    • Timing: 15:32:10 to just before 15:35:40 
    • IID's Ramon 230/92 kV transformer tripped at 15:32:10; its overload relay was set at 207% of normal rating instead of the design setting of 120%, which allowed it to last approximately four minutes longer than the CV transformers 
    • IID experienced undervoltage load shedding, generation and transmission line loss in its 92 kV system 
    •  Path 44 loading increased from approximately 6,700 amps, to as high as 7,800 amps, and ended at around 7,200 amps (out of 8,000 needed to initiate the SONGS separation scheme)

  •  Phase 5:  Yuma Load Pocket Separates from IID and WALC 
    • Timing: 15:35:40 to just before 15:37:55 
    • The Gila and Yucca transformers tripped, isolating the Yuma load pocket to a single tie with SDG&E 
    • Path 44 loading increased from 7,200 to 7,400 amps after Gila transformer tripped, and ended at 7,800 amps after loss of the Yucca transformers and YCA generator (very close to the 8,000 amps needed to initiate the SONGS separation scheme) 

  •  Phase 6: High‐Speed Cascade, Operation of the SONGS Separation Scheme and Islanding of San Diego, IID, CFE, and Yuma 
    • Timing: 15:37:55 to 15:38:21.2 
    • IID’s El Centro-Pilot Knob line tripped, forcing all of IID’s southern 92 kV system to draw from SDG&E via the S Line 
    • S Line RAS operates, tripping generation at Imperial Valley and worsening the loading on Path 44 
    • S Line RAS trips S Line, isolating IID from SDG&E 
    • Path 44 exceeds the trip point of 8,000 amps, reaching as high as 9,500 amps
    • SONGS separation scheme operates and creates the SDG&E/CFE/Yuma island

  • Thursday September 8 - Utility employees had noticed a problem with a series capacitor. APS personnel were dispatched to take it offline. Typically, the utility can shut down an individual capacitor and reroute power without any disruption of service. But this time, something went wrong. After the North Gila capacitor was taken offline, the 500-kilovolt transmission line that runs through the substation went down. For an overview of series capacitors and how Flexible Alternating Current Transmission Systems (FACTS) work, see my blog article. FACTS is a system composed of static equipment used for the AC transmission of electrical energy; it enhances controllability and increases the power transfer capability of the network and is generally power electronics-based.


    At that point — 3:27 p.m. — the grid should have compensated for the loss of the line, which runs from Yuma to the Imperial Valley and San Diego. It is essentially one of two lines that carry power to San Diego. The other runs down from the north, along the coast through San Onofre.

    "The intent is for the system to automatically open and close different breakers and relays and switches to reroute power from the line that was lost to other lines to continue to provide service," Froetscher said. "Most times when a line goes out unexpectedly, the system performs exactly as it's expected to and customers never know the difference."
  • For about 10 minutes, the system seemed to be working properly. But by 3:38 p.m., residents in Yuma began to lose power.  The North Gila – Hassayampa 500 kV transmission line near Yuma, Ariz., tripped off line resulting in a major power outage across southwest Arizona and into Southern California. Among APS customers, approximately 56,000 lost service throughout Yuma, Somerton, San Luis and Gadsden.
  • Inquiries have focused on an 11-minute period from an initial transmission line failure in Arizona, likely triggered by a utility worker, to a complete regional blackout. Losing the power line connecting us to Arizona placed more pressure on San Diego in two ways. First, the grid instantly lost about a third of its power supply. Second, the customers normally served by the Arizona power line were transferred to nearby power grids, including San Diego's, which are all interconnected. It's unclear how the shift happened. Jim Avery, SDG&E's senior vice president of power supply, said it occurred automatically when the Arizona power line went out. The California Independent System Operator, the agency ultimately responsible for operating the power grid, did not return messages seeking an explanation.
  • Utility watchdog Michael Shames doubts the shift occurred automatically. Shames, executive director of the Utility Consumers' Action Network, said San Diego's grid is operated in real-time by the Independent System Operator and SDG&E. A shift of that magnitude would not have happened without both agencies being involved, he said. Somebody made the decision to accept more customers, he said, but it's still unknown who.
  • That electrical grid disturbance echoed across the region, setting off sensors at San Onofre that shut down the oceanfront plant and key transmission lines leading north from San Diego.  From there, outages spread across the Southwest. 
  • As a result, SDG&E did not have adequate resources on its system to keep power on across its service territory.

  • Thursday Afternoon - Utility crews were scrambling to restore some power by tapping into local energy sources at gas-fired plants in Escondido and Otay Mesa, officials said. By Thursday night, power had been restored to some communities in Orange and Imperial counties.

  • Around 7:15 p.m., automated messages began to go out to 19,000 customers the company knows have medical conditions requiring electricity, Winn said. About half those people were reached, and as the night lengthened crews were sent out to reach those they hadn’t heard from.

  • By 8:30 p.m., Orange County customers were back online. By 10 p.m., 55,000 customers had regained power. Power resumption was slow at that point until midnight, then it sped up.

  • Friday September 9 - As of 2:15 a.m. PDT, all but 200 MW of the initial 4,300 MW lost had been restored.
  • SDG&E restored power at 3:25 a.m. to its 1.4 million customers affected by the Sept. 8 outage. The restoration was accomplished almost exactly 12 hours after a major electric transmission system outage in western Arizona and the loss of a key connection with the San Onofre Nuclear Generating Station and other factors resulted in the most widespread power outage in the company’s history.

  • Friday Morning - The restoration process left the local power grid very fragile, and the ISO and SDG&E asked their customers to conserve electricity throughout the day Friday. SDG&E asked customers to: set air conditioners to 78 degrees or higher; use fans rather than air conditioning; keep windows, doors and fireplace dampers closed when using an air conditioner; turn off the air conditioner when leaving the house; draw blinds and drapes to keep the sun out during the warmer parts of the day; and open windows at night and during the cool of the day.

  • Sunday - San Onofre brought back one generating unit on Sunday
  • All beaches from Solana Beach to Scripps Pier, as well as Bayside Park in Chula Vista and the inland side of the Silver Strand, remained closed Sunday morning because of sewage spills that occurred when pump stations failed during the blackout.

  • Monday September 12 - At 6:33 a.m., the San Onofre nuclear power plant reconnected the second of its two generating units, nearly four days after the plant shut down during the outage. Plant spokesman Gil Alexander said one reactor was operating at 98 percent of capacity and that the other would ramp up throughout the day on Monday.

    Now the question facing investigators is: "Why?"
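
A back-of-the-envelope check of the Phase 1 warning about the Coachella Valley transformers: with two parallel units each carrying about 125 MVA (roughly 83% of normal rating, per the report), losing one unit leaves its twin carrying both shares. The trip threshold used below is an assumed illustrative value, not IID's actual relay setting.

```python
# Back-of-the-envelope N-1 check for two parallel transformers sharing load.
# Pre-event figures from the report: ~125 MVA each, ~83% of normal rating.
# The 120% trip threshold is an assumed value for illustration only.

loading_each_mva = 125.0
normal_rating_mva = loading_each_mva / 0.83        # ~150 MVA per transformer
assumed_trip_pct = 120.0                           # % of normal rating (assumed)

post_contingency_mva = 2 * loading_each_mva        # survivor picks up both shares
post_contingency_pct = 100.0 * post_contingency_mva / normal_rating_mva

print(f"normal rating per transformer : ~{normal_rating_mva:.0f} MVA")
print(f"loading after losing one unit : {post_contingency_mva:.0f} MVA "
      f"(~{post_contingency_pct:.0f}% of normal)")
print("above assumed trip setting" if post_contingency_pct > assumed_trip_pct
      else "within assumed trip setting")
```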
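
The Initial Fault narrative turns on a bookkeeping slip that let a crucial step in a written switching order be skipped. The sketch below is a generic illustration of checklist-style sequence enforcement, in which a step cannot be marked complete out of order; the steps shown are simplified placeholders, not APS's actual switching order or software.

```python
# Generic switching-order checklist that enforces strict step sequence.
# The steps are simplified placeholders, not APS's actual switching order.

class SwitchingOrder:
    def __init__(self, steps):
        self.steps = list(steps)
        self.next_step = 1

    def complete(self, step_no):
        if step_no != self.next_step:
            raise ValueError(
                f"cannot mark step {step_no} complete: step {self.next_step} "
                f"('{self.steps[self.next_step - 1]}') has not been performed")
        print(f"step {step_no} done: {self.steps[step_no - 1]}")
        self.next_step += 1

order = SwitchingOrder([
    "verify capacitor breaker closed, place in local, hang do-not-operate tags",
    "coordinate grounding crew",
    "close bypass line switch to parallel the line with the capacitor bank",
    "open hand-operated disconnect to isolate the capacitor bank",
])

order.complete(1)
order.complete(2)
try:
    order.complete(4)   # skipping the bypass step is rejected, not silently accepted
except ValueError as err:
    print("blocked:", err)
```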


4. Causes
  • Almost every cascading blackout involves not a single breakdown, but multiple system breakdowns
  • It's possible there were errors made by APS in the work done on the capacitor and in the steps it took after the single line went down. However, SDG&E's system is designed to continue to operate if SWPL, the transmission line between Arizona and San Diego, is knocked out. During the 2007 fires, SWPL had to be turned off, yet San Diego's electrical grid continued to function. Additional errors must have contributed to the outage.

  • San Diego County's electrical power comes from three sources: eastern transmission lines, northern transmission lines and local generation. At least two of those sources are needed to keep the juice flowing.

  • The work in Arizona caused that line to trip, and when it dropped out it created a low-voltage condition in San Diego. The low voltage propagated across the county over the network and affected the San Onofre Nuclear Generating Station on the northern transmission side, causing San Onofre to trip off line. At that point two of the three sources of energy were lost, the remaining one simply couldn't handle the load, and the system collapsed.

  • The San Onofre Nuclear Generating Station reacted the way it was supposed to: under low voltage it has to protect the nuclear reactor and keep the equipment from being damaged. However, the network between the eastern transmission substation and the northern transmission substation should have reacted, isolated the problem, and prevented that undervoltage from propagating across the county.

5. Risks/Issues
  • Real Time Communications - The California Public Utilities Commission was briefed Thursday September 23 by its staff about apparently poor communication as the outage unfolded. “Something we’re looking at is the seeming lack of communication between the various balancing authorities,” said Valerie Beck, an interim program manager at the Consumer Protection and Safety Division of the California Public Utilities Commission. Beck explained that the five authorities, including the California ISO overseeing San Diego Gas & Electric service territory, could not see into each other’s activities as the outage cascaded across the region. “While it’s true that they can’t see into each others’ authority areas, they really don’t communicate at all, as near as we can tell.”

    See my blog article Wide Area Situational Awareness for more about Real Time Communications. The Great Northeast Blackout could have been avoided by better communication across system operators.

    The CPUC is not part of a task force of stakeholders looking into the blackout that extended into Mexico, but it still has access to findings that eventually will be shared with federal investigators. The utilities commission made public an extensive written account of the power failure, highlighting eight major events that quickly switched off transmission lines feeding electricity to San Diego from Arizona. A second set of transmission lines leading north were then overloaded, tripping offline at a switch yard at the San Onofre Nuclear Generating Station.

    Isolated and unable to keep up with local electricity demand in hot weather, the nuclear plant and other plants farther south shut down to protect themselves from damage. Protective measures built into the system worked appropriately to prevent the outage from spreading, Beck said. “Unfortunately for San Diego Gas & Electric customers ... basically their service was sacrificed so that (the blackout) wouldn’t continue into the rest of the state,” she told the utilities commission.

    The Cal-ISO attempted to bring online quick-response generators, known as peaker plants, but it’s not clear if that power was added before the blackout. Officials at Cal-ISO acknowledge that communication with its counterparts is a concern. “We would like to work with our neighboring grid operators to increase real-time visibility into their balancing authorities,” said Stephanie McCorkle. Cal-ISO declined to release its own outage map and preliminary sequence of events, citing a possible risk to grid security. In private, Cal-ISO and SDG&E officials briefed the utilities commission on Thursday about their findings on the blackout.

  • Vulnerability to Sabotage - Some experts said the failure of safeguards suggests the potential for a saboteur to take down a regional power system. A grid that relies more heavily on computer technology could become more vulnerable to security attacks. The White House warned in June that an updated grid could be open to threats that include "malware, compromised devices, insider threats and hijacked systems." See my blog article Grid Security.

  • Transmission Bottlenecks - In Southern California there is insufficient electrical generating capacity close to the electrical loads in the cities. Instead, utilities rely heavily on power imported over long distances from neighboring states, and there may be too few power plants inside transmission "bottlenecks." This places cities like San Diego at much greater risk of blackouts. When the umbilical cord from Arizona was unexpectedly severed, the few power plants close to the city simply could not provide enough power to maintain grid voltage. As voltage dropped, those power plants automatically disconnected to protect themselves from the low-voltage condition. The result? A major blackout.

    If the San Diego grid had had sufficient local power, it should have been able to island a part of the grid and continue to run on its own power plants. Even if the local grid lost power, operators should have been able to call reserve power plants into operation to repower the grid within a few minutes. Unfortunately, the power plants were overloaded; there simply wasn't enough capacity to repower the grid without assistance from the outside.

    Distributed generation can augment or even replace the large central power generators of today's electricity grid. The portfolio of distributed generation technologies includes microturbines, solar photovoltaics and various types of fuel cells in addition to today's mainstay diesel engines. These devices put generation closer to the end user, and they are capable of improving power reliability and security for entire communities or individual residences and businesses. See my blog article Distributed Energy Resources. (A crude sketch of the import-loss arithmetic discussed above appears at the end of this section.)

  • Uncontrolled Oscillations - During the 2007 fires, SWPL had to be turned off due to the fires, yet San Diego's electrical grid continued to function. So what was different about this circumstance? So far, SDG&E and CAISO aren't saying. One theory is that the instability was akin to the problem with the Tacoma Narrows Bridge, or like an old car that starts to jitter at one particular speed.

    Here's what might have happened. The worker in Arizona was doing some work and switched off equipment in a substation, causing the voltage to drop on the line. The resulting disturbance traveled through the system from Yuma to San Diego and then up to San Onofre, on the order of seconds to reach stations 200 miles or so from the substation. On the other side of the disturbance there is a generating station, and as soon as it sensed the low-voltage condition it turned its output up to compensate. Seconds later the glitch hit San Onofre, which turned up a bit to compensate; a few seconds after that it saw the increase from Arizona and turned down, the glitch reflected back to the first station, which turned up again, and so on. If it takes a few seconds to turn a power plant up and then back down, you can get into a situation, like the Tacoma Narrows Bridge, where unless everything is stopped the swings get bigger and bigger and, in no time, start to blow out equipment in stations throughout the network. (A toy delayed-feedback simulation at the end of this section illustrates how such a swing can grow.)

  • Forensics - Because we have so little control systems forensics, it is very difficult to determine what happened with many of these incidents.

  • Disruptions -
    • International Airport Closed
    • All schools across the county closed on Friday
    • Gas stations could not pump gas
    • Cell phone service disrupted - Hundreds of AT&T cell phone towers in San Diego County were disabled by Thursday's power outage, the company said Friday. AT&T brought the towers back online over the next six hours after the electrical grid collapsed.
      Three other carriers, Cricket Wireless, Verizon Wireless and Sprint Nextel, said their San Diego County networks did not experience significant disruptions during the outage.
    • The National University System Institute for Policy Research estimated the economic impact of the power outage to be between $97 million and $118 million. Most of those costs come from perishable food losses, government overtime and a general loss of productivity.

  • Water Contamination - A precautionary boil-water advisory was put into effect Thursday evening following the biggest power outage in the county’s history. When the blackout struck about 3:40 p.m., pump failure led to a loss of pressure in pipes, leading to the risk of contamination. The neighborhoods affected were Carmel Mountain Ranch, College Area/College Grove, La Jolla, North City/Flower, Otay Mesa, Rancho Bernardo, San Carlos, Scripps Ranch/Stonebridge and Tierrasanta. The advisory was lifted on Sunday.

    About 1.9 million gallons of sewage spilled into the Los Peñasquitos Lagoon after a pump station that doesn’t have an on-site emergency generator stopped working. That prompted officials to close all beaches north of Scripps Pier through Del Mar, Solana Beach and into the Cardiff area of Encinitas at least until Saturday.

  • Backup Generator Failure - Generators failed at two hospitals, including the lone generator at Scripps Mercy Hospital in Chula Vista. The unit was replaced within two hours, while seven high-risk patients were moved to other hospitals. Flashlights were used to illuminate some areas of the hospital.

    At Sharp Memorial Hospital, one generator quit and the remaining three operated at less than full power for several hours until being fixed. Officials at both hospitals said changes need to be made to avoid a future mishap. Hospital officials said patient care was uninterrupted.
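
A crude version of the Transmission Bottlenecks arithmetic: the roughly 2,750 MW of imports and the roughly 4,300 MW of load lost are figures quoted in this post, while the local headroom and quick-start numbers are assumptions chosen only to show why the remaining local plants could not carry the area on their own.

```python
# Crude adequacy arithmetic for an import-dependent load pocket.
# Imports (~2,750 MW) and load lost (~4,300 MW) are figures quoted in this post;
# the local headroom and quick-start numbers are illustrative assumptions.

load_mw = 4300.0                          # approximate load in the affected area
imports_mw = 2750.0                       # imports from Arizona just before the event
local_output_mw = load_mw - imports_mw    # ~1,550 MW already being met locally
local_headroom_mw = 400.0                 # assumed spare capability on local units
quick_start_mw = 250.0                    # assumed peakers reachable within minutes

shortfall_mw = imports_mw - (local_headroom_mw + quick_start_mw)
print(f"replacement needed after losing imports: {imports_mw:.0f} MW")
print(f"assumed local headroom plus quick-start: {local_headroom_mw + quick_start_mw:.0f} MW")
print(f"shortfall to be covered by shedding load: ~{shortfall_mw:.0f} MW")
```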
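
The Uncontrolled Oscillations item speculates about regulators correcting each other a few seconds too late. The toy simulation below is not a model of the actual grid; it only demonstrates the generic point that when a correction acts on stale information and over-corrects, a small disturbance grows instead of damping out.

```python
# Toy delayed, over-correcting feedback loop: each step the controller applies a
# correction proportional to the error it observed 'delay' steps earlier.
# e[t+1] = e[t] - gain * e[t - delay].  Purely illustrative, not a grid model.

def simulate(gain, delay, steps=60):
    error = [0.0] * delay + [1.0]             # a single small initial disturbance
    for _ in range(steps):
        error.append(error[-1] - gain * error[-1 - delay])
    return error

for gain in (0.4, 1.0):
    trace = simulate(gain, delay=2)
    peak = max(abs(e) for e in trace[-10:])   # amplitude near the end of the run
    verdict = "keeps growing" if peak > 1.0 else "damps out"
    print(f"loop gain {gain}: late-run peak deviation {peak:.2f} -> {verdict}")
```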

Revelers enjoyed drinks by candlelight at Stout Public House in downtown San Diego.
6. Success Factors
  1. In the future, self-healing grids will isolate problems automatically, forestalling system-wide outages. It is likely the control systems now in place are not fast enough to catch problems like the one which caused Thursday's blackout. The system is designed to protect itself, but it failed.

  2. Sunrise Powerlink Project - For years, SDG&E has pitched its Sunrise Powerlink project — the massive transmission line under construction in East County — as a critical safeguard against blackouts. So could it have prevented Thursday's countywide outage? Or at least limited its scope? The massive blackout sparked renewed questions about the value of the $1.9 billion line.

    Mike Niggli, president of San Diego Gas & Electric, on Friday said until the utility is able to reconstruct exactly how the system collapsed, it’s too soon to assess the role a completed Powerlink would have played. He said it might have allowed the power to come back quicker. In some communities, the blackout lasted 12 hours or more. “That line goes right to the heart of our system,” Niggli said. “So I suspect we could have had a little bit faster restoration.”

    Critics of the line, which will import electricity into San Diego from Imperial Valley, consider the outage a stark reminder of the need to develop local sources of energy. Longtime opponent Michael Shames, with the Utility Consumers’ Action Network, agrees that it’s too early to say if Powerlink could have prevented last week’s outage. He’s skeptical though, especially if it’s proven that the blackout was due to human error or a flawed design system — issues that a separate line would not be able to immediately correct.

    In those cases, he said, “having an additional line, or even five additional lines, wouldn’t have kept it from happening.” He noted there already is a major transmission line between San Diego and Imperial Valley. That line, known as the Southwest Powerlink, runs parallel to Sunrise’s planned path through part of San Diego and Imperial counties.

    The east ends of both transmission routes terminate in the El Centro area, but they are linked to other lines tied to where Thursday's outage began, suggesting that the presence of a completed Sunrise might not have made a difference.

7. FERC/NERC Findings and Recommendations
  • Finding 1 - Failure to Conduct and Share Next-Day Studies:

    Not all of the affected TOPs conduct next-day studies or share them with their neighbors and WECC RC. As a result of failing to exchange studies, on September 8, 2011 TOPs were not alerted to contingencies on neighboring systems that could impact their internal system and the need to plan for such contingencies.

  • Recommendation 1 - All TOPs should conduct next-day studies and share the results with neighboring TOPs and the RC (before the next day) to ensure that all contingencies that could impact the BPS are studied.


  • Finding 2 - Lack of Updated External Networks in Next-Day Study Models:

    When conducting next-day studies, some affected TOPs use models for external networks that are not updated to reflect next-day operating conditions external to their systems, such as generation schedules and transmission outages. As a result, these TOPs’ next-day studies do not adequately predict the impact of external contingencies on their systems or internal contingencies on external systems.

  • Recommendation 2 - TOPs and BAs should ensure that their next-day studies are updated to reflect next-day operating conditions external to their systems, such as generation and transmission outages and scheduled interchanges, which can significantly impact the operation of their systems. TOPs and BAs should take the necessary steps, such as executing nondisclosure agreements, to allow the free exchange of next-day operations data between operating entities. Also, RCs should review the procedures in the region for coordinating next-day studies, ensure adequate data exchange among BAs and TOPs, and facilitate the next-day studies of BAs and TOPs.


  • Finding 3 - Sub-100 kV Facilities Not Adequately Considered in Next-Day Studies:

    In conducting next-day studies, some affected TOPs focus primarily on the TOPs’ internal SOLs and the need to stay within established Rated Path limits, without adequate consideration of some lower voltage facilities. As a result, these TOPs risk overlooking facilities that may become overloaded and impact the reliability of the BPS. Similarly, the RC does not study sub-100 kV facilities that impact BPS reliability unless it has specifically been alerted to issues with such facilities by individual TOPs or the RC has otherwise identified a particular sub-100 kV facility as affecting the BPS.

    For example, APS does not routinely study IID’s lower voltage facilities, including the CV and Ramon transformers, in the day-ahead timeframe. As a result, APS was not able to predict what occurred on IID’s system—increased flows and overloading on its 92 and 161 kV transformers and transmission lines—when H-NG tripped offline

  • Recommendation 3 - TOPs and RCs should ensure that their next-day studies include all internal and external facilities (including those below 100 kV) that can impact BPS reliability.


  • Finding 4 - Flawed Process for Estimating Scheduled Interchanges:

    WECC RC's process for estimating scheduled interchanges is not adequate to ensure that such values are accurately reflected in its next-day studies. As a result, its next-day studies may not accurately predict actual power flows and contingency overloads.

  • Recommendation 4 - WECC RC should improve its process for predicting interchange in the day-ahead timeframe.


  • Finding 5 -  Lack of Coordination in Seasonal Planning Process:

    The seasonal planning process in the WECC region lacks effective coordination. Specifically, the four WECC subregions do not adequately integrate and coordinate studies across the subregions, and no single entity is responsible for ensuring a thorough seasonal planning process. Instead of conducting a full contingency analysis based on all of the subregions’ studies, the subregions rely on experience and engineering judgment in choosing which contingencies to discuss. As a result, individual TOPs may not identify contingencies in one subregion that may affect TOPs in the same or another subregion.

  • Recommendation 5 - WECC RE should ensure better integration and coordination of the various subregions’ seasonal studies for the entire WECC system. To ensure a thorough seasonal planning process, at a minimum, WECC RE should require a full contingency analysis of the entire WECC system, using one integrated seasonal study, and should identify and eliminate gaps between subregional studies. Individual TOPs should also conduct a full contingency analysis to identify contingencies outside their own systems that can impact the reliability of the BPS within their system and should share their seasonal studies with TOPs shown to affect or be affected by their contingencies.


  • Finding 6 -  External and Lower-Voltage Facilities Not Adequately Considered in Seasonal Planning Process:

    Seasonal planning studies do not adequately consider all facilities that may affect BPS reliability, including external facilities and lower-voltage facilities.

    The events of September 8, 2011, demonstrate that sub-100 kV facilities in parallel with BPS systems can have a significant effect on BPS reliability. The loss of H-NG caused the overloading and tripping of both 230/92 kV transformers at CV, which in turn caused another sub-100 kV transformer to trip at Ramon, which led to the cascading outages discussed in detail above. This possibility was not studied as part of the seasonal studies by any of the TOPs, other than IID, because the CV transformers' secondary windings are below 100 kV.

  • Recommendation 6 - TOPs should expand the focus of their seasonal planning to include external facilities and internal and external sub-100 kV facilities that impact BPS reliability.


  • Finding 7 - Failure to Study Multiple Load Levels:

    TOPs do not always run their individual seasonal planning studies based on the multiple WECC base cases (heavy and light load summer, heavy and light load winter, and heavy spring), but, instead, may focus on only one load level. As a result, contingencies that occur during the shoulder seasons (or other load levels not studied) might be missed.

    September 8, 2011 was a very hot day in the region, and scheduled flows in the IID footprint were near record peaks. The high demand on September 8th was indeed similar to what would have been modeled in a heavy load summer seasonal study. The generation picture, however, was very different. By September 8, 2011 generation maintenance—which is not typically scheduled for summer peak days—had begun. The “heavy peak” summer study base cases that were actually used for September 8th therefore had built into them the incorrect assumption that there would be minimal maintenance—i.e., that most generation would be on line— and thus did not account for the normal resumption of facility maintenance in the shoulder season.

  • Recommendation 7 - TOPs should expand the cases on which they run their individual planning studies to include multiple base cases, as well as generation maintenance outages and dispatch scenarios during high load shoulder periods.

  • Finding 8 -  Not Sharing Overload Relay Trip Settings:

    In the seasonal planning process, at least one TOP did not share with neighboring TOPs overload relay trip settings on transformers and transmission lines that impacted external BPS systems.

  • Recommendation 8 - TOPs should include in the information they share during the seasonal planning process the overload relay trip settings on transformers and transmission lines that impact the BPS, and separately identify those that have overload trip settings below 150% of their normal rating, or below 115% of the highest emergency rating, whichever of these two values is greater.
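
Recommendation 8 amounts to a simple screening rule: flag any overload trip setting that falls below 150% of the facility's normal rating or below 115% of its highest emergency rating, whichever of those two thresholds is greater. The sketch below applies that rule to made-up facility data purely as an illustration.

```python
# Screening rule from Recommendation 8: flag relays whose overload trip setting
# is below max(150% of normal rating, 115% of highest emergency rating).
# Facility names, ratings, and settings below are invented for illustration.

facilities = [
    # (name,             normal MVA, highest emergency MVA, overload trip setting MVA)
    ("XFMR-A 230/92 kV", 150.0,      165.0,                 180.0),
    ("XFMR-B 230/92 kV", 150.0,      165.0,                 310.0),
    ("Line-X 161 kV",    200.0,      230.0,                 260.0),
]

def flag_low_trip_settings(rows):
    flagged = []
    for name, normal, emergency, trip in rows:
        threshold = max(1.50 * normal, 1.15 * emergency)
        if trip < threshold:
            flagged.append((name, trip, threshold))
    return flagged

for name, trip, threshold in flag_low_trip_settings(facilities):
    print(f"{name}: trip setting {trip:.0f} MVA is below the {threshold:.0f} MVA identification threshold")
```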


  • Finding 9 -  Gaps in Near- and Long-Term Planning Process:

    Gaps exist in WECC RE’s, TPs’ and PCs’ processes for conducting near- and long-term planning studies, resulting in a lack of consideration for: (1) critical system conditions; (2) the impact of elements operated at less than 100 kV on BPS reliability; and (3) the interaction of protection systems, including RASs. As a consequence, the affected entities did not identify during the planning process that the loss of a single 500 kV transmission line could potentially cause cascading outages. Planning studies conducted between 2006 and 2011 should have identified the critical conditions that existed on September 8th and proposed appropriate mitigation strategies.

  • Recommendation 9 - WECC RE should take actions to mitigate these and any other identified gaps in the procedures for conducting near- and long-term planning studies. The September 8th event and other major events should be used to identify shortcomings when developing valid cases over the planning horizon and to identify flaws in the existing planning structure. WECC RE should then propose changes to improve the performance of planning studies on a subregional- and Interconnection-wide basis and ensure a coordinated review of TPs’ and PCs’ studies. TOPs, TPs and PCs should develop study cases that cover critical system conditions over the planning horizon; consider the benefits and potential adverse effects of all protection systems, including RASs, Safety Nets (such as the SONGS separation scheme), and overload protection schemes; study the interaction of RASs and Safety Nets; and consider the impact of elements operated at less than 100 kV on BPS reliability.


  • Finding 10 -  Benchmarking WECC Dynamic Models:

    The inquiry obtained a very good correlation between the simulations and the actual event until the SONGS separation scheme activated. After activation of the scheme, however, neither the tripping of the SONGS units nor the system collapse of SDG&E and CFE could be detected using WECC dynamic models because some of the elements of the event are not explicitly included in those models. Sample simulations of the islanded region showed that by adding known details from the actual event, including UFLS programs and automatic capacitor switching, the simulation and event become more closely aligned following activation of the SONGS separation scheme.

  • Recommendation 10 - WECC dynamic models should be benchmarked by TPs against actual data from the September 8th event to improve their conformity to actual system performance. In particular, improvements to model performance from validation would be helpful in analysis of under and/or over frequency events in the Western Interconnection and the stability of islanding scenarios in the SDG&E and CFE areas.


  • Finding 11 -  Lack of Real-Time External Visibility:

    Affected TOPs have limited real-time visibility outside their systems, typically monitoring only one external bus. As a result, they lack adequate situational awareness of external contingencies that could impact their systems. They also may not fully understand how internal contingencies could affect SOLs in their neighbors’ systems.

    The September 8th event exposed the negative consequences of TOPs having limited external visibility into neighboring systems.

  • Recommendation 11 - TOPs should engage in more real-time data sharing to increase their visibility and situational awareness of external contingencies that could impact the reliability of their systems. They should obtain sufficient data to monitor significant external facilities in real time, especially those that are known to have a direct bearing on the reliability of their system, and properly assess the impact of internal contingencies on the SOLs of other TOPs. In addition, TOPs should review their real-time monitoring tools, such as State Estimator and RTCA, to ensure that such tools represent critical facilities needed for the reliable operation of the BPS.


  • Finding 12 -  Inadequate Real-Time Tools:

    Affected TOPs' real-time tools are not adequate or, in one case, not operational, to provide the situational awareness necessary to identify contingencies and reliably operate their systems.

    The alarming function on IID’s RTCA provides an example of a real-time tool that does not adequately maximize situational awareness capabilities. IID’s RTCA does not provide operators with any audible alarms or pop-up visual alerts when an overload is predicted to occur. Instead, IID’s RTCA uses color codes on a display that the operator must call up manually to learn of significant potential contingencies.

  • Recommendation 12 - TOPs should take measures to ensure that their real-time tools are adequate, operational, and run frequently enough to provide their operators the situational awareness necessary to identify and plan for contingencies and reliably operate their systems.


  • Finding 13 -  Reliance on Post-Contingency Mitigation Plans:

    One affected TOP operated in an unsecured N-1 state on September 8, 2011, when it relied on post-contingency mitigation plans for its internal contingencies and subsequent overload and tripping, while assuming there would be sufficient time to mitigate the contingencies. Post-contingency mitigation plans are not viable under all circumstances, such as when equipment trips on overload relay protection that prevents operators from taking timely control actions. If this TOP had used pre-contingency measures on September 8th, such as dispatching additional generation, to mitigate first contingency emergency overloads for its internal contingencies, the cascading outages that were triggered by the loss of H-NG might have been avoided with the prevailing system conditions on September 8, 2011.

  • Recommendation 13 - TOPs should review existing operating processes and procedures to ensure that post-contingency mitigation plans reflect the time necessary to take mitigating actions, including control actions, to return the system to a secure N-1 state as soon as possible but no longer than 30 minutes following a single contingency. As part of this review, TOPs should consider the effect of relays that automatically isolate facilities without providing operators sufficient time to take mitigating measures.


  • Finding 14 -  WECC RC Staffing Concerns:

    WECC RC staffs a total of four operators at any one time to meet the functional requirements of an RC, including continuous monitoring, conducting studies, and giving directives. The September 8th event raises concerns that WECC RC’s staffing is not adequate to respond to emergency conditions.

  • Recommendation 14 - WECC RC should evaluate the effectiveness of its staffing level, training and tools. Based on the results of this evaluation, it should determine what actions are necessary to perform its functions appropriately as the RC and address any identified deficiencies.


  • Finding 15 -  Failure to Notify WECC RC and Neighboring TOPs Upon Losing RTCA:

    On September 8, 2011, at least one affected TOP lost the ability to conduct RTCA more than 30 minutes prior to and throughout the course of the event due to the failure of its State Estimator to converge. The entity did not notify WECC RC or any of its neighboring TOPs, preventing this entity from regaining situational awareness.

  • Recommendation 15 - TOPs should ensure procedures and training are in place to notify WECC RC and neighboring TOPs and BAs promptly after losing RTCA capabilities.

  • Finding 16 -  Discrepancies Between RTCA and Planning Models:

    WECC’s model used by TOPs to conduct RTCA studies is not consistent with WECC’s planning model and produces conflicting solutions.

  • Recommendation 16 - WECC should ensure consistencies in model parameters between its planning model and its RTCA model and should review all model parameters on a consistent basis to make sure discrepancies do not occur.


  • Finding 17 -  Impact of Sub-100 kV Facilities on BPS Reliability:

    WECC RC and affected TOPs and BAs do not consistently recognize the adverse impact sub-100 kV facilities can have on BPS reliability. As a result, sub-100 kV facilities might not be designated as part of the BES, which can leave entities unable to address the reliability impact they can have in the planning and operations time horizons. If, prior to September 8, 2011, certain sub-100 kV facilities had been designated as part of the BES and, as a result, were incorporated into the TOPs’ and RC’s planning and operations studies, or otherwise had been incorporated into these studies, cascading outages may have been avoided on the day of the event.

  • Recommendation 17 - WECC, as the RE, should lead other entities, including TOPs and BAs, to ensure that all facilities that can adversely impact BPS reliability are either designated as part of the BES or otherwise incorporated into planning and operations studies and actively monitored and alarmed in RTCA systems.

  • Finding 18 -  Failure to Establish Valid SOLs and Identify IROLs:

    The cascading nature of the event that led to uncontrolled separation of San Diego, IID, Yuma, and CFE indicates that an IROL was violated on September 8, 2011, even though WECC RC did not recognize any IROLs in existence on that day. In addition, the established SOL of 2,200 MW on Path 44 and 1,800 MW on H-NG are invalid for the present infrastructure, as demonstrated by the event.

  • Recommendation 18 - WECC RC should recognize that IROLs do exist on its system and, thus, should study IROLs in the day-ahead timeframe and monitor potential IROL exceedances in real-time.

    WECC RC should work with TOPs to consider whether any SOLs in the Western Interconnection constitute IROLs. As part of this effort, WECC RC should: (1) work with affected TOPs to consider whether Path 44 and H-NG should be recognized as IROLs; and (2) validate existing SOLs, and ensure that they take into account all transmission and generation facilities and protection systems that impact BPS reliability.
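
    For illustration only, the sketch below applies the SOLs cited in the finding (2,200 MW on Path 44 and 1,800 MW on H-NG) as real-time alarm thresholds. The 95% warning level and the sample flows are assumptions, and whether these limits constitute IROLs is exactly the study Recommendation 18 asks for.

    # SOLs cited in the finding; treating them as alarm limits here is illustrative only.
    SOL_MW = {"Path 44 (South of SONGS)": 2200, "H-NG 500 kV": 1800}

    def sol_alarms(flows_mw: dict, warn_fraction: float = 0.95) -> list[str]:
        """Flag path flows that exceed, or are within 5% of, their SOL."""
        alarms = []
        for path, limit in SOL_MW.items():
            flow = flows_mw.get(path, 0.0)
            if flow >= limit:
                alarms.append(f"EXCEEDANCE: {path} at {flow:.0f} MW (SOL {limit} MW)")
            elif flow >= warn_fraction * limit:
                alarms.append(f"WARNING: {path} at {flow:.0f} MW, within 5% of SOL {limit} MW")
        return alarms

    # Invented real-time flows: Path 44 above its SOL, H-NG still below its warning level.
    print(sol_alarms({"Path 44 (South of SONGS)": 2350, "H-NG 500 kV": 1650}))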

  • Finding 19 -  Lack of Coordination of the S Line RAS:

    Several TOs and TOPs did not properly coordinate a RAS by: (1) not performing coordination studies with the overload protection schemes on the facilities that the S Line RAS is designed to protect; and (2) not assessing the impact of setting relays to trip generation sources and a 230 kV transmission tie line prior to the operation of a single 161/92 kV transformer’s overload protection. As a result, BES facilities were isolated in excess of those needed to maintain reliability, with adverse impact on BPS reliability.

  • Recommendation 19 - The TOs and TOPs responsible for design and coordination of the S Line RAS should revisit its design basis and protection settings to ensure coordination with other protection systems in order to prevent adverse impact to the BPS, premature operation, and excessive isolation of facilities. TOs and TOPs should share any changes to the S Line RAS with TPs and PCs so that they can accurately reflect the S Line RAS when planning.

  • Finding 20 -  Lack of Coordination of the SONGS Separation Scheme:

    SCE did not coordinate the SONGS separation scheme with other protection systems, including protection and turbine control systems on the two SONGS generators. As a result, SCE did not realize that Units 2 and 3 at SONGS would trip after operation of the separation scheme.

  • Recommendation 20 - SCE should ensure that the SONGS separation scheme is coordinated with other protection schemes, such as the generation protection and turbine control systems on the units at SONGS and UFLS schemes.

  • Finding 21 -  Effect of SONGS Separation Scheme on SONGS Units:

    The SONGS units tripped due to their turbine control systems detecting unacceptable acceleration following operation of the SONGS separation scheme.

  • Recommendation 21 - GOs and GOPs should evaluate the sensitivity of the acceleration control functions in turbine control systems to verify that transient perturbations or fault conditions in the transmission system resulting in unit acceleration will not result in unit trip without allowing time for protective devices to clear the fault on the transmission system.


  • Finding 22 -  Lack of Review and Studying Impact of SPSs:

    Although WECC equates SPSs with RASs, prior to October 1, 2011, WECC’s definition of RAS excluded many protection systems that would be included within NERC’s definition of SPS. As a result, WECC did not review and assess all NERC-defined SPSs in its region, and WECC’s TOPs did not perform the required review and assessment of all NERC-defined SPSs in their areas.

  • Recommendation 22 - WECC RE, along with TOs, GOs, and Distribution Providers (DPs), should periodically review the purpose and impact of RASs, including Safety Nets and Local Area Protection Schemes, to ensure they are properly classified, are still necessary, serve their intended purposes, are coordinated properly with other protection systems, and do not have unintended consequences on reliability. WECC RE and the appropriate TOPs should promptly conduct these reviews for the SONGS separation scheme and the S Line RAS.


  • Finding 23 -  Effect of Inadvertent Operation of SONGS Separation Scheme on BPS Reliability:

    The inquiry’s simulation of the event shows that the inadvertent operation of the SONGS separation scheme under normal system operations could lead to a voltage collapse and blackout in the SDG&E areas under certain high load conditions.

  • Recommendation 23 - CAISO and SCE should promptly verify that the inadvertent operation of the SONGS separation scheme does not pose an unacceptable risk to BPS reliability. Until this verification can be completed, they should consider all actions to minimize this risk, up to and including temporarily removing the SONGS separation scheme from service.

  • Finding 24 -  Not Recognizing Relay Settings When Establishing SOLs:

    An affected TO did not properly establish the SOL for two transformers, as the SOL did not recognize that the most limiting elements (protective relays) were set to trip below the established emergency rating. As a result, the transformers tripped prior to the facilities being loaded to their emergency ratings during the restoration process, which delayed the restoration of power to the Yuma load pocket.

  • Recommendation 24 - TOs should reevaluate their facility ratings methodologies and the implementation of those methodologies to ensure that their ratings are equal to the most limiting piece of equipment, including relay settings. No relay setting should be set below a facility’s emergency rating. When the relay setting is determined to be the most limiting piece of equipment, consideration should be given to reviewing the setting to ensure that it does not unnecessarily restrict transmission loadability.
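
    A minimal sketch of the rating logic behind Recommendation 24, using invented numbers: the facility's effective emergency rating is the minimum over its limiting elements, so a relay trip point set below the nominal emergency rating becomes the binding limit.

    def effective_emergency_rating(element_ratings_mva: dict, relay_trip_mva: float) -> float:
        """The most limiting element governs; a relay set below the thermal rating becomes the limit."""
        return min(min(element_ratings_mva.values()), relay_trip_mva)

    # Hypothetical 161/92 kV transformer: 280 MVA emergency thermal rating,
    # but overload relays set to trip at 260 MVA.
    elements = {"transformer thermal": 280.0, "bushings": 300.0, "disconnect switch": 320.0}
    print(effective_emergency_rating(elements, relay_trip_mva=260.0))  # -> 260.0, not 280.0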


  • Finding 25 -  Margin Between Overload Relay Protection Settings and Emergency Rating:

    Some affected TOs set overload relay protection settings on transformers just above the transformers’ emergency rating, resulting in facilities being automatically removed from service before TOPs have sufficient time to take control actions to mitigate the resulting overloads. One TO in particular set its transformers’ overload protection schemes with such narrow margins between the emergency ratings and the relay trip settings that the protective relays tripped the transformers following an N-1 contingency.

  • Recommendation 25 - TOs should review their transformers’ overload protection relay settings with their TOPs to ensure appropriate margins between relay settings and emergency ratings developed by TOPs. For example, TOs could consider using the settings of Reliability Standard PRC-023-1 R.1.11 even for those transformers not classified as BES. PRC-023-1 R.1.11 requires relays to be set to allow the transformer to be operated at an overload level of at least 150% of the maximum applicable nameplate rating, or 115% of the highest operator established emergency transformer rating, whichever is greater.
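
    The PRC-023-1 R.1.11 criterion quoted above reduces to simple arithmetic; this sketch, with an invented transformer, computes the minimum acceptable relay pickup under that criterion.

    def min_relay_pickup_mva(nameplate_mva: float, emergency_mva: float) -> float:
        """Greater of 150% of the maximum nameplate rating or 115% of the highest
        operator-established emergency rating, per the PRC-023-1 R.1.11 language above."""
        return max(1.50 * nameplate_mva, 1.15 * emergency_mva)

    # Hypothetical transformer: 150 MVA nameplate, 190 MVA 30-minute emergency rating.
    print(min_relay_pickup_mva(150.0, 190.0))  # -> 225.0 (150% of nameplate governs here)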


  • Finding 26 -  Relay Settings and Proximity to Emergency Ratings:

    Some TOs set relays to isolate facilities for loading conditions slightly above their thirty-minute emergency ratings. As a result, several transmission lines and transformers tripped within seconds of exceeding their emergency ratings, leaving TOPs insufficient time to mitigate overloads.

  • Recommendation 26 - TOs should evaluate load-responsive relays on transmission lines and transformers to determine whether the settings can be raised to provide more time for TOPs to take manual action to mitigate overloads that are within the short-time thermal capability of the equipment, instead of allowing relays to prematurely isolate the transmission lines. If the settings cannot be raised to allow more time for TOPs to take manual action, TOPs must ensure that the settings are taken into account in developing facility ratings and that automatic isolation does not result in cascading outages.


  • Finding 27 -  Phase Angle Difference Following Loss of Transmission Line:

    A TOP did not have tools in place to determine the phase angle difference between the two terminals of its 500 kV line after the line tripped. Yet it informed the RC and another TOP that the line would be restored quickly when in fact this could not be accomplished.

  • Recommendation 27 - TOPs should have: (1) the tools necessary to determine phase angle differences following the loss of lines; and (2) mitigation and operating plans for reclosing lines with large phase angle differences. TOPs should also train operators to effectively respond to phase angle differences. These plans should be developed based on the seasonal and next-day contingency analyses that address the angular differences across opened system elements.
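
    As a hedged illustration of the tooling Recommendation 27 describes, the sketch below compares synchrophasor voltage angles at the two open terminals against a reclose limit. The 20-degree threshold and the sample angles are invented, since actual limits come from the TOP's own studies.

    RECLOSE_LIMIT_DEG = 20.0  # illustrative; real limits depend on breaker and stability studies

    def can_reclose(angle_a_deg: float, angle_b_deg: float) -> bool:
        """True if the standing phase angle across the open line is within the reclose limit."""
        diff = abs(angle_a_deg - angle_b_deg)
        diff = min(diff, 360.0 - diff)  # wrap into the 0-180 degree range
        return diff <= RECLOSE_LIMIT_DEG

    # Hypothetical PMU angles at the two 500 kV terminals after the trip.
    print(can_reclose(12.4, -41.7))  # -> False: roughly 54 degrees apart, mitigation needed first
    print(can_reclose(12.4, 3.9))    # -> True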


8. Companies, Organization and Affected Entities
  1. APS - Arizona Public Service - Phoenix, AZ - Arizona’s largest and longest-serving electric utility, serving more than 1.1 million customers in 11 of the state’s 15 counties. With headquarters in Phoenix, APS is the principal subsidiary of Pinnacle West Capital Corp. (NYSE: PNW).

    APS is a vertically integrated utility that serves a 50,000 square mile territory spanning 11 of Arizona’s 15 counties. Among other NERC registrations, APS is the TOP and BA for its territory. APS engages in both marketing and grid operation functions, which are separated. APS owns and operates transmission facilities at the 500 (including H-NG), 345, 230, 115, and 69 kV levels, and owns approximately 6,300 MW of installed generation capacity. APS’s 2011 peak load was 7,087 MW.

  2. CAISO - California Independent System Operator Corp. - Initiated a joint task force on September 9 to investigate the widespread blackout that left more than a million Southern Californians without power for nearly 12 hours. In close coordination with the Western Electricity Coordinating Council (WECC), CAISO will bring together all of the utilities impacted by the outage, including Arizona Public Service, San Diego Gas & Electric, Southern California Edison, Imperial Irrigation District and Comisión Federal de Electricidad, which serves customers in northern Baja.

    CAISO runs the primary market for wholesale electric power and open-access transmission in California, and manages the high-voltage transmission lines that make up approximately 80% of California’s power grid. CAISO operates its market through day-ahead and hour-ahead markets, as well as scheduling power in real time as necessary. Among other registrations, CAISO is the PC and BA for most of California, including the city of San Diego. It also acts as TOP for several entities within its footprint, including SDG&E and SCE. CAISO likewise engages in modeling and planning functions to ensure long-term grid reliability and to identify infrastructure upgrades necessary for grid function.

  3. CFE - Comisión Federal de Electricidad - The only electric utility in Mexico, serving up to 98% of the total population. CFE’s Baja California Control Area is not connected to the rest of Mexico’s electric grid but is connected to the Western Interconnection. The control area covers the northwest corner of Mexico, including the cities of Tijuana, Rosarito, Tecate, Ensenada, Mexicali, and San Luis Rio Colorado; operates transmission systems at the 230, 161, 115, and 69 kV levels; and owns 2,039 MW of gross generating capacity along with the rights to a 489 MW independent power producer within the control area. It had a net peak load of 2,184 MW for summer 2010 and is connected at the 230 kV level with SDG&E through two transmission lines on WECC Path 45. CFE functions as the TO, TOP, and BA for its Baja California Control Area under the oversight of WECC RC.

  4. FERC - Federal Energy Regulatory Commission - Together with the North American Electric Reliability Corporation (NERC), FERC announced a joint inquiry in cooperation with the ISO, WECC, all impacted utilities and state regulators in California and Arizona. "This inquiry is an effective way for us to protect consumers and ensure the reliability of the bulk power system," FERC Chairman Jon Wellinghoff stated. FERC is charged with oversight of the reliability of the nation’s bulk power system and has designated NERC as the organization that develops and enforces mandatory reliability standards.

  5. IID - Imperial Irrigation District - A vertically integrated utility that encompasses the Imperial Valley, the eastern part of the Coachella Valley in Riverside County, and a small portion of San Diego County in California, and owns and operates generation, transmission, and distribution facilities in its service area to provide comprehensive electric service to its customers. IID’s generation consists of hydroelectric units on the All-American Canal as well as oil-, nuclear-, coal-, and gas-fired generation facilities, with a total net capability of 514 MW. IID purchases power from other electric utilities to meet its peak demands in summer, which can exceed 990 MW. IID’s transmission system consists of approximately 1,400 miles of 500, 230, 161, and 92 kV lines, as well as 26 transmission substations. Among other NERC registrations, IID is a TOP, BA, and TP responsible for resource and transmission planning, load balancing, and frequency support for its footprint.


  6. NERC - North American Electric Reliability Corporation - The organization designated by FERC to develop and enforce mandatory reliability standards for the bulk power system; NERC conducted the joint inquiry into the September 8 event together with FERC.

  7. SCE - Southern California Edison - A large investor-owned utility which provides electricity in central, coastal, and southern California. SCE is a wholly-owned subsidiary of Edison International, which is also based in California. Among other NERC registrations, SCE operates as a TOP within CAISO’s BA footprint, and has delegated part of its responsibilities as a TOP to CAISO. SCE is also registered as TP, and is responsible for the reliability assessments of the SONGS separation scheme. SCE owns 5,490 circuit miles of transmission lines, including 500, 230, and 161 kV lines. SCE also operates a subtransmission system of 7,079 circuit miles at the 115, 66, 55, and 33 kV levels. Of the affected entities, SCE is interconnected with APS, IID, and SDG&E at various transmission voltage levels. SCE owns over 5,600 MW of generation, including a majority share in SONGS, and its peak load exceeds 22,000 MW. Along with SONGS staff, SCE is responsible for the safe and reliable operation of the nuclear facility.

  8. SDG&E - San Diego Gas & Electric Co. - A regulated public utility that provides energy service to 3.5 million consumers through 1.4 million electric meters and more than 850,000 natural gas meters in San Diego and southern Orange counties. The utility’s area spans 4,100 square miles. SDG&E is a subsidiary of Sempra Energy (NYSE: SRE), a Fortune 500 energy services holding company based in San Diego.

    SDG&E is a utility that serves both electricity and natural gas to its customers in San Diego County and a portion of southern Orange County, and is the primary utility for the city of San Diego. SDG&E owns relatively little generation—approximately 600 MW—although generation owned by others in its footprint brings the total generation capacity of the area above 3,350 MW. Peak load for the area can exceed 4,500 MW in the summer. SDG&E also operates an extensive high-voltage transmission network at the 500, 230, and 138 kV levels. SDG&E, operating as a TOP within CAISO’s BA footprint, has delegated part of its responsibilities as a TOP to CAISO.

  9. SONGS - San Onofre Nuclear Generating Station - The two San Onofre generating units produce a total of 2,200 megawatts, enough to power 1.4 million households.

    SONGS is a two-unit nuclear generation facility capable of producing approximately 2,200 MW of power, and is located north of San Diego. SONGS produces approximately 19% of the power used by SCE customers and 25% of the power used by SDG&E customers. SONGS is jointly owned by SCE (78.21%), SDG&E (20%), and the City of Riverside (1.79%). SCE, as TO and GO, is responsible for ensuring the safe and reliable operation of SONGS within the grid.

  10. Sunrise Powerlink - A 117-mile, $1.883 billion, 500 kV transmission line ("superhighway") with 1,000 MW of capacity that will bring power from the Imperial Valley to San Diego County. Project schedule: fall 2010 construction start, 2012 in-service.

  11. SWPL - Southwest Powerlink - According to SDG&E, the replacement of faulty equipment at a power substation in Arizona triggered a series of events that knocked out SWPL, one of two major transmission links that connect the San Diego area to the electrical grid for the western United States.


  12. Transmission Corridors
    1. 500 kV H-NG (Hassayampa-North Gila) - One of several transmission lines forming Path 49 (“East of River”). Together with two other 500 kV lines, one from North Gila to Imperial Valley and another from Imperial Valley to Miguel, H-NG forms the SWPL. The majority of the SWPL runs geographically parallel to the United States-Mexico border. The SWPL meets the SDG&E and IID systems at the Imperial Valley substation.

    2. Path 44, also known as “South of SONGS,” operated by CAISO. This corridor includes the five 230 kV lines in the northernmost part of the SDG&E system that connect SDG&E with SCE at SONGS.

    3. “S Corridor” - Consists of lower voltage (230, 161 and 92 kV) facilities operated by IID and WALC in parallel with those of SCE, SDG&E, and APS. The only major interconnection between IID and SDG&E is through the 230 kV “S” Line, which connects the SDG&E/IID jointly-owned Imperial Valley Substation (operated by SDG&E) to IID’s El Centro Switching Station. The S Line interconnects the southern IID system with SDG&E and APS at Imperial Valley, which is also a terminus for the SWPL segment from Miguel and the SWPL segment from North Gila. WALC is connected to the SCE system and the rest of the Western Interconnection by 161 kV ties at Blythe, to IID by the 161 kV tie between WALC’s Knob and IID’s Pilot Knob substations, and to APS by a 69 kV tie via Gila at North Gila.

    4. The eastern end of the SWPL, which terminates at APS’s Hassayampa hub, is connected to SCE via a 500 kV line that connects APS’s Palo Verde and SCE’s Devers substations. The northern IID system is connected to SCE’s Devers substation via a 230 kV transmission line that connects from Devers to IID’s CV substation. These connections, along with SDG&E’s connection to SCE via Path 44’s terminus at SONGS, make the SWPL, Path 44, and IID’s and WALC’s systems operate as electrically parallel transmission corridors.
      This  simplified diagram illustrates the interconnected nature of these three parallel corridors. Red lines represent 500 kV, blue lines represent 230 kV, and green lines represent 161 kV. Source:  NERC/FERC April 2012 Report
  13. UCAN - Utility Consumers' Action Network - San Diego utility watchdog that questions the cause of the blackout.

  14. WALC - Western Area Power Administration – Lower Colorado - One of the four entities constituting the Western Area Power Administration, a federal power marketer within the United States Department of Energy. WALC operates in Arizona, Southern California, Colorado, Utah, New Mexico, and Nevada, and is registered with NERC as a BA, TOP, and PC for its footprint. As a net exporter of energy, WALC’s territory has over 6,200 MW of generation, serving at most 2,100 MW of peak load. A majority of WALC’s generation is federal hydroelectric facilities, with the balance consisting of thermal generation owned and operated by independent power producers. WALC also operates an extensive transmission network within its footprint, and is interconnected with APS, SCE, and nine other balancing areas.

  15. WECC - Western Electricity Coordinating Council - The regional forum for promoting electric service reliability in Western Canada and the Western United States. WECC operates two reliability coordination offices that provide situational awareness and real-time supervision of the entire Western Interconnection.

    WECC has also announced its own Event Analysis that will be conducted with participation from the California ISO.

    In its capacity as the RC, WECC is the highest level of authority responsible for the reliable operation of the BPS in the Western Interconnection. WECC RC oversees the operation of the Western Interconnection in real time, receiving data from entities throughout the entire Interconnection and providing high-level situational awareness for the entire system. WECC RC can direct the entities it oversees to take certain actions in order to preserve system reliability. Although WECC is both an RE and an RC, these two functions are organizationally separated.



9. Links
  1. Arizona-California Outages on September 8, 2011 - A report on causes and recommendations issued May 2012 [PDF] by the North American Electric Reliability Corporation and the Federal Energy Regulatory Commission. The report blames the outage on "inadequate planning and a lack of observability and awareness of system operation conditions on the day of the event."
  2. SDG&E Outage Map 
  3. Four Big Unanswered Blackout Questions - Voice of San Diego - Includes claims that the customers normally served by the Arizona power line were transferred to nearby power grids, including San Diego's, which are all interconnected.

    Jim Avery, SDG&E's senior vice president of power supply, said SDG&E never anticipated what happened Thursday. The company planned how to maintain power without San Onofre and the Arizona power line, but never for maintaining power in those conditions while also handling more customers from Arizona. (It isn't clear how many additional customers were transferred. SDG&E referred follow-up questions to the Independent System Operator, which didn't return messages.)

    "We planned for what we deem to be a credible threat. We do not plan for every contingency," Avery said. "What occurred is greater than we planned for."

    Because SDG&E never planned for it, Avery said, the safeguards to prevent a system-wide collapse didn't work. San Diego has three substations where grid managers and utility officials can stop a surge from turning off local power plants, too. They closed none before the blackout. If they had, the whole system wouldn't have failed.

    Avery said the new conditions arrived too fast for humans to act, and the computer sensors that could have automatically closed the gateways didn't because the scenario had never been planned for. "The system did exactly as it was designed to do," Avery repeated several times.
  4.  Video Report on PR Blame Game - Channel 10 News, September 12, 2011