Nuclear Safety and Nuclear Accidents: Complex Systems or Protocol Failure?


Nuclear safety is a hot-button political, economic, and engineering issue. Major failures at Chernobyl and Three Mile Island harmed the reputation of the nuclear industry, and films like The China Syndrome underlined the dangers. Nuclear systems are complex, and complex systems break down in complex ways. No engineer can be expected to anticipate every possible contingency and wear pattern: the world is simply too complex for that. But a review of major nuclear accidents makes clear that engineering and protocol problems, not simple and unavoidable breakdowns, are most often to blame. This paper reviews basic nuclear protocol as supplied by engineers and government agencies, examines case studies of failure, and offers proposals and conclusions. The cases reviewed are Chernobyl, Three Mile Island, and the 2002 degradation at Davis-Besse.

Nuclear Protocol

Nuclear safety mandates are highly detailed and specific (United States Nuclear Regulatory Commission, 2004; Health and Safety Executive, 2006; CANDU, 2006; IAEA, 2006). Regulations are provided at multiple levels: local, provincial or state, national, and international, the last under the auspices of the International Atomic Energy Agency. Reviewing these mandates is illustrative for understanding major nuclear failures. The US Nuclear Regulatory Commission (USNRC) emphasizes a "strong nuclear safety culture" (2004). It instructs managers to compare the stated principles against day-to-day operation, and it argues that organizational culture is as important to long-term safety as engineering. Safety culture and a safety-conscious work environment are essential, and the USNRC argues that there is in fact no tradeoff between safety culture and cost-effectiveness, since the same procedures and policies serve both. It also points out that the nuclear safety of a plant is a collective responsibility.
In the failures reviewed in this paper, buck-passing, attempts to duck responsibility, and other such factors undermined the plants' ability to cope with problems. The USNRC recommends, among other proposals, that:

- partnerships along the utility corridor not be used to dilute responsibility;
- board members and corporate officers personally assess safety;
- support and administration departments be briefed on nuclear safety;
- managers supervise coaching and mentoring;
- production goals be secondary to safety concerns, with this commitment explicitly stated in production goal reviews;
- trust be maintained, with management and labor making concessions to keep the plant's atmosphere trust-sustaining;
- probabilistic risk analyses be used in day-to-day operation;
- design and operating margins be kept under control and review, with activities affecting core functionality reviewed especially closely.

The UK's HSE lists over thirty engineering principles that must be strictly followed, with risk assessments erring on the side of caution (2006, 4-80), as well as principles of radiation protection. Shutdown systems, for example, should have two redundant and diverse mechanisms in place (Health and Safety Executive, 2006, 80). The need for personnel access should be controlled (Health and Safety Executive, 2006, 75). Passive sealed containment systems and intrinsic safety features should trump active dynamic systems (Health and Safety Executive, 2006, 74). Each of the failures reviewed stemmed from a failure to adopt a parallel standard.

CANDU (2006) holds that the mechanism for handling problems should be "control, cool and contain", the three Cs: if reactor power is controlled, fuel is cooled, and radioactivity is contained, threats are controlled. This means that 3C-compliance has to be in place at all times: emergency conditions or regular, peak hours or slow, shutdown or upset operation. The IAEA (2006) adds a few salient principles. Principle 4 notes that any gain in radioactivity must be justified by a pressing need.
Principle 5 states that protection must be at the maximum feasible level at every stage of the process. Principle 6 argues that engineering must focus on preventing risks to individuals, whether employees or citizens: property and other non-individual goods must be sacrificed first. Emergency preparedness and transport procedures must be in place (IAEA, 2006, Principle 9). These principles will guide the analysis of the failures.

Chernobyl

Chernobyl's failure is complex, and different scholars have identified different sources. Nonetheless, one thing is clear: the failure was not due to normal operating procedure within acceptable margins of error. Photographs of the site illustrate the point: the damage is overwhelmingly contained to the reactor section of the plant, while other areas are untouched or only nominally damaged. The IAEA's Principle 6 was not met: there was a clear failure to protect individuals over property. The reactor itself retained enough integrity to be put into a sarcophagus (World Nuclear Association, 2011). Salge and Milling (2006) find that "the accident was caused by the combination of human failures in (stage 1) the design of the reactor and (stage 2) on-line operations". Reactor design is covered by ERC.1 to ERC.4 in the UK's HSE handbook (Health and Safety Executive, 2006, 78), and Chernobyl failed on a number of levels here. It did not have two separate systems for reactor shutdown, as outlined in ERC.2. Normal operating modes had safety margins that were too low, with insufficient removal of heat from the core, as outlined in ERC.1. But human failures also contributed, as did consistent failures in on-line operations. The World Nuclear Association summarises the core problems as "the product of a flawed Soviet reactor design coupled with serious mistakes made by the plant operators. It was a direct consequence of Cold War isolation and the resulting lack of any safety culture" (2011).
One of the major problems was the void coefficient (World Nuclear Association, 2011; Greenpeace, 1996). The RBMK reactor design can exhibit a positive void coefficient, which can lead to a runaway meltdown effect: steam bubbles, or "voids", increase and cause a reactivity overload (World Nuclear Association, 2011). Newer RBMK reactors avoid this problem, showing that it is not an insurmountable technical one. A simple reactor misdesign is a clear problem, but such oversights occur and are understandable. The serious problem is that the RBMK reactor used at Chernobyl did not have to have a positive void coefficient: "[A]t the time of the accident at Chernobyl 4, the reactor's fuel burn-up, control rod configuration and power level led to a positive void coefficient large enough to overwhelm all other influences on the power coefficient" (World Nuclear Association, 2011). This led to a failure during normal operation: any engineering project can fail under extraordinary conditions, but this was simple day-to-day operation, during which safety margins should have been controlled. Worse, the void coefficient meant that even as power declined, reactivity increased, stymieing normal shutdown procedures. The void coefficient result indicates several failures:

1. Design flaws were never tested longitudinally, and testing failed to reveal a serious problem, so engineers received no feedback. Engineers cannot fix what they are never told about.
2. Maintenance and upkeep were sacrificed for power output, a direct inversion of the IAEA and HSE principles.
3. A day-to-day problem was not diagnosed day-to-day. Chronic inattention to operating parameters prevented engineers and safety experts from identifying a clear problem and stopping it.
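The feedback logic behind a positive void coefficient can be sketched with a toy simulation. This is emphatically not real reactor physics: every constant below is invented for illustration, and the model merely shows why a positive coefficient amplifies a small power perturbation while a negative one damps it.

```python
# Toy feedback model of a reactor void coefficient (NOT real reactor
# physics; all constants are invented for illustration).
# power: normalized thermal power (1.0 = nominal)
# void:  steam-void fraction, which rises while power exceeds nominal
# void_coeff couples voids back into reactivity: positive values
# amplify a perturbation, negative values damp it.

def simulate(void_coeff, steps=100, dt=0.1):
    power, void = 1.05, 0.0  # start from a small 5% power perturbation
    for _ in range(steps):
        # net reactivity: void feedback minus a weak stabilizing term
        reactivity = void_coeff * void - 0.05 * (power - 1.0)
        power += dt * reactivity * power
        # void fraction tracks excess power, clamped to [0, 1]
        void = min(1.0, max(0.0, void + dt * (power - 1.0)))
    return power

runaway = simulate(+1.0)  # positive void coefficient (RBMK-like)
damped = simulate(-1.0)   # negative void coefficient (self-stabilizing)
print(runaway, damped)    # runaway ends far above nominal; damped stays near 1.0
```

In the positive-coefficient run, the perturbation feeds itself (more power, more voids, more reactivity, more power); in the negative-coefficient run, the same perturbation dies out. This mirrors why safety margins during "normal" operation mattered so much at Chernobyl.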
Aside from the void problem, Greenpeace (1996) identifies additional factors: "the sensitivity of the neutron field to reactivity perturbations leading to control difficulties and requiring complicated control systems" and "no functioning containment". INSAG-7 adds: "It was stated in INSAG-1 that blocking of the emergency core cooling system (ECCS) was a violation of procedures. However, recent Soviet information confirms that blocking of the ECCS was in fact permissible at Chernobyl if authorized by the Chief Engineer, and that this authorization was given for the tests leading up to the accident and was even an approved step in the test procedure" (International Nuclear Safety Advisory Group, 1992). But the plant had been operating at half power for eleven hours before the accident, which was not part of this amended protocol. "Blocking the ECCS over this period and permitting operation for a prolonged period with a vital safety system unavailable are indicative of an absence of safety culture" (International Nuclear Safety Advisory Group, 1992). INSAG also found that no minimum safe operating level was instituted, that a shift from local to global power control aggravated problems, that the turbogenerator trip signal was blocked, that steam protection was disabled, and that the required operating reactivity margin was violated. Malko notes that the safety documentation was also inadequate to the task. He finds that the following errors led to the accident: "[o]peration of the reactor at a very low operative reactivity surplus (ORS)"; the above-mentioned low power for the test; "blocking of the protection system relaying on water level and steam pressure in steam-separators"; blocking of the shutdown signal that would stop the two turbogenerators; and the connection of all the main circulating pumps to the reactor simultaneously (15). It was a combination of safety problems at the plant and reactor misdesign.
Three Mile Island

The Three Mile Island disaster is in many respects even more galling than Chernobyl. It was a multi-accident system failure, with cascading failures, most of which had to be in place for the accident to occur (Ireland et al., 2005). There is no point in redundant systems if none of them are maintained; in fact, redundancy can create a false sense of reassurance. The accident began with an attempt to unclog a demineralizer pipe in a secondary loop, a non-reactor function (Ireland et al., 2005). This shows that failures in non-primary functions can cascade throughout the rest of the plant, all the way to the primary function level. Afterwards, feedwater was blocked off; rising temperature raised pressure, which opened a release valve. One hundred and fifty minutes later, the valve had let off so much coolant that the problem cascaded. Had anyone noticed, the chain of failures would have been stopped. The cladding around the fuel failed, with damage continuing for two hundred minutes; by the time anyone noticed and Los Alamos was called, nothing could be done. The key is that many small systems failed in sequence: a secondary system led to a cascade of failures down the line. If anyone had noticed, if any of the redundant or safety elements had worked as intended, things would have proceeded differently. In fact, the release valve did trigger successfully; it only made the problem worse over time. Three Mile Island teaches that no safety problem can be ignored: if one problem is seen, it must be dealt with and noted so that other problems are not overlooked as well. Sleep deprivation was also a factor (SleepDex, 2011), which shows that human resources are essential to the safety problem. The primary problem was that Three Mile Island was the first.
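The scale of the underestimate behind that overconfidence is easy to see arithmetically. Contemporary experts put the chance of a complete meltdown at roughly one in 10^4 reactor-years (Tucker, 2009); taking that figure at face value, the chance of at least one such accident across a fleet is 1 - (1 - p)^n. The fleet size and lifetime below are invented for illustration:

```python
# Probability of at least one serious accident across a reactor fleet,
# taking the oft-quoted ~1-in-10,000 per-reactor-year estimate at face
# value. Fleet size and operating lifetime are hypothetical.
p_per_reactor_year = 1e-4
reactors = 100   # hypothetical fleet size
years = 40       # hypothetical operating lifetime per reactor

reactor_years = reactors * years
p_at_least_one = 1 - (1 - p_per_reactor_year) ** reactor_years
print(f"P(at least one accident) = {p_at_least_one:.3f}")  # about 0.33
```

Even a "nearly infinitesimal" per-reactor-year risk compounds to roughly a one-in-three chance over a modest fleet's lifetime, which is one reason per-unit risk estimates bred false confidence.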
Well-known experts said, "We think the chance of THAT [i.e., a complete meltdown] is nearly infinitesimal, about one in 10^4 reactor-years of operation, or roughly the same as, say, the likelihood that a feature-length movie describing a reactor accident should open in New York just as a similar accident actually happens in a neighboring state" (Tucker, 2009). No one believed that such a chain of failures could happen. Three Mile Island taught a lesson, but Chernobyl and Davis-Besse show that it has not fully sunk in.

Davis-Besse

The Davis-Besse failure did not lead to a meltdown or an explosion. It was a minor failure, but it underscores the need for vigilance. The Union of Concerned Scientists summarizes: "The outage began with an operational event; namely, a loss of feedwater caused in part by a known design problem. Subsequent investigations by plant workers and NRC inspectors revealed maintenance and testing program deficiencies that caused the known design problems to be challenged and fail. The outage length was originally dictated by the time required to correct the design, maintenance, and testing program problems. The outage's length was protracted several months when the failure of a reactor coolant pump shaft at the Crystal River nuclear power plant prompted the look for, and discovery of, cracks in the reactor coolant pump shafts that required replacements before restart" (UCS USA). Davis-Besse shows that nuclear accident prevention is now a priority, but it also demonstrates that regular operations can still cascade into failures even with modern reactors.

Conclusions

Organizational culture and the failure of safety cultures were implicated in every scenario reviewed: overconfidence at Three Mile Island, thriftiness and cost-focus at Chernobyl. Safety cultures are essential; every release valve and emergency signal in the world will not stop a reaction if people are unable to respond.
At the same time, however, design issues are always implicated in these problems. In particular, overconfident design, or design that assumes people will clean up after messes, is a problem. Control panels were implicated in the disasters, if only because the panels gave inaccurate readings due to chronic mis-maintenance and arrogance; but ultimately the real problem was the man-machine interface: inexperienced, overworked people operating machines that were not up to snuff. Finally, and most importantly, noting failures and responding immediately with proper maintenance and emergency responses would have stopped every one of the featured disasters.

Works Cited

CANDU. 2006, "Principles of Nuclear Safety: 3 Cs".
Greenpeace. 1996, "Chernobyl: Ten Years After Causes, Consequences, Solutions", 3rd version, April.
Health and Safety Executive. 2006, "Safety Assessment Principles for Nuclear Facilities", Revision 1.
IAEA. 2006, "IAEA Safety Standards".
INSAG. 1992, "INSAG-7", Vienna.
Ireland, JR., Scott, JH., and Stratton, WR. 2005, "Three Mile Island and Multiple Failure Accidents", Los Alamos.
Libmann, J. 1996, Elements of Nuclear Safety, EDP Sciences.
Malko, M. "The Chernobyl Reactor: Design Features and Reasons for Accident", Joint Institute of Power and Nuclear Research, National Academy of Sciences of Belarus.
Mosey, D. 1990, Reactor Accidents, Nuclear Engineering International Special Publications.
Salge, M., and Milling, PM. 2006, "Who Is to Blame, the Operator or the Designer? Two Stages of Human Failure in the Chernobyl Accident", System Dynamics Review, vol. 2, no. 2, Summer.
SleepDex. 2011, "Sleep Deprivation". Available at: http://www.sleepdex.org/deficit.htm
Tucker, W. 2009, "Three Mile Island -- Thirty Years After", American Spectator, March 31.
UCS USA. "Davis-Besse". Available at: http://www.ucsusa.org/assets/documents/nuclear_power/davis-besse-i.pdf
United States Nuclear Regulatory Commission. 2004, "Principles for a Strong Nuclear Safety Culture", November.
World Nuclear Association. 2011, "Chernobyl Accident", February.
