The purpose of this paper is to examine one of the largest offshore oil spills in U.S. history, BP’s Deepwater Horizon explosion in 2010. It closely investigates the system and the reasons for its failure in order to answer the questions underlying numerous studies of the accident: Was the Deepwater Horizon explosion inevitable? And what can be done to avoid such accidents?
Within the scope of Perrow’s normal accident theory, this paper aims to answer the questions above and to set forth recommendations for similar oil-drilling systems.
On April 20, 2010, the BP-operated Deepwater Horizon oil rig exploded, claiming the lives of eleven workers and injuring seventeen others. The fire burned for 36 hours until the rig sank. Following the sinking, a sea-floor oil gusher flowed for 87 days, causing a spill of national significance (BP, 2010).
Numerous investigations explored the causes of the explosion and the record-setting spill. Reports, including that of the U.S. government, pointed to defective cement in the well, faulting mostly BP, but also rig operator Transocean and contractor Halliburton (BP, 2010; Hogue, 2010). In 2011, a White House commission blamed BP and its partners for a series of cost-cutting decisions and an insufficient safety system, but also concluded that the spill resulted from “systemic” root causes and from failures in both industry practices and government policies (Broder, 2011).
BP is still struggling with the repercussions. Legal proceedings regarding the case remain ongoing, and settlements and payments have already cost BP more than 40 billion dollars (Fontevecchia, 2013).
BP’s accident will be examined under Perrow’s (1984) normal accident theory to understand why it happened and whether anything could have been done to prevent it. Perrow uses the term “normal accident” to describe inevitable accidents in complex technological systems. In his normal accident theory, Perrow considers the combination of interactive complexity and tight coupling as measures of a system’s potential to create a disaster. Accidents that result from the confluence of these two attributes are considered “normal accidents” because they are inevitable given the level of complexity and tight coupling inherent in the system (Perrow, 1999).
A high level of interconnectedness between system components, reliance on indirect information sources, an unpredictable environment, or the incomprehensibility of a system to its operators indicates complexity within the system (Perrow, 1999). Since systems are designed, built, and run by humans, they cannot be perfect. Every part of the system is subject to failure: the design can be faulty, as can the equipment, the procedures, the operators, the supplies, and the environment. Because nothing is perfect, humans build in safeguards, such as redundancies, buffers, and alarms that tell operators to take corrective action. But occasionally two or more failures interact in unexpected ways, defeating the safeguards and producing an accident that no one anticipated.