
Deepwater Horizon: 10 Years, 5 Insights


Deepwater Horizon drilling rig disaster

by Matthew Kohut


On April 20, 2010, an explosion on the BP Deepwater Horizon drilling rig 49 miles off the coast of Louisiana in the Gulf of Mexico killed 11 men. The blowout of the well initiated the worst oil spill in U.S. history. Ten years later, five insights about organizational culture and cognitive biases remain evergreen for leaders of project-based organizations.[1]

An accident like Deepwater Horizon does not happen simply because individuals make poor decisions due to cognitive biases. Organizational culture sets the context and ultimately determines the damage that faulty decisions can cause. In this respect, organizations are like living organisms with immune systems. A dysfunctional culture compromises an organization’s immune system, creating a fertile breeding ground for poor decision-making processes. Three of the insights below address cultural dynamics, while two speak to cognitive biases.

1. Normalization of deviance is a ticking time bomb. “When you see something, however abnormal, often enough, you begin to think it’s normal,” former NASA Administrator Sean O’Keefe once said. NASA understands this better than most organizations. Sociologist Diane Vaughan coined the term “normalization of deviance” to explain the organizational dynamics that led NASA officials to launch the space shuttle Challenger on a freezing January day in 1986 despite a string of data points from previous missions that led engineers to warn of the likelihood of catastrophic failure.[2] Normalization of deviance is a cultural phenomenon—it’s a shared understanding of a new normal, regardless of data, processes, or procedures.

Evidence of the normalization of deviance on Deepwater Horizon extended to oil giant BP, owner of the well, and Halliburton, the contractor BP hired to cement each segment of the well in place. Halliburton prepared cement for this well that had repeatedly failed Halliburton’s own laboratory tests. Despite those test results, Halliburton managers onshore let their crew on Deepwater Horizon, as well as the crews from BP and Transocean, the owner and operator of the rig, continue with the cement job.

On February 10, soon after Deepwater Horizon began work on the well, Halliburton engineer Jesse Gagliano asked Halliburton laboratory personnel to run a series of “pilot tests” on the cement blend stored on the Deepwater Horizon that Halliburton planned to use. The lab ran the tests and reported the results to Gagliano, who sent the lab report to BP on March 8 as an attachment to an e-mail on a related topic.

The data that Gagliano sent to BP included the results of a single test, which an expert would have recognized as showing that the February cement design was unstable. Gagliano did not comment on the evidence of instability, and there is no indication that BP examined the data in the report at all.

2. Psychological safety is a necessity, not a luxury. “If you see something, say something” is often easier said than done. Voicing an unpopular or pessimistic perspective can take a toll on professional relationships, personal reputation, or even job security. Yet the ability to speak without fear of reprisal has immense implications for team performance. In the mid-1990s, Harvard Business School professor Amy Edmondson and colleagues studied cardiac surgical teams and found that the teams that got the best results were the ones where all team members felt comfortable communicating, regardless of positional power. Edmondson coined the term “psychological safety” to describe this dynamic, which she defined as “a shared belief held by members of a team that the team is safe for interpersonal risk-taking.”[3] The importance of psychological safety has been validated numerous times since then, notably by Google, which found that psychological safety was the differentiating factor between its highest-performing teams and all others.[4]

On the Deepwater Horizon rig, there was evidence of a significant lack of psychological safety: a survey of the Transocean crew conducted the month before the accident found that 46% of crew members felt that some of the workforce feared reprisals for reporting unsafe situations. Transocean crew members comprised the majority of the people on the Deepwater Horizon rig.

Psychological safety is the flip side of the normalization of deviance. On a team that empowers its members to speak freely, “See something, say something” becomes a shared norm. Abnormalities are addressed frankly rather than rationalized and accepted.

3. Success can breed complacency. Success is often met with the cliché, “If it ain’t broke, don’t fix it.” But as engineering historian Henry Petroski has warned, “Success can mask latent flaws.” [5] Blind spots grow in the absence of continuous efforts to learn and improve.

Prior to the accident, Deepwater Horizon was one of the best-performing deepwater rigs in BP’s fleet. In September 2009, it had drilled to a world-record total depth of 35,055 feet, tapping into a pool of crude estimated at 4 to 6 billion barrels of oil equivalent. As of April 2010, it had not had a single “lost-time incident” in seven years of drilling. But as of April 20, BP was also more than $58 million over budget and nearly six weeks behind schedule. Cost and schedule pressure was relentless.

After weeks of hard work on a well that a top BP drilling engineer had described six days earlier as “a nightmare,” the final cement job had gone fine. To confirm that the job had no problems, a three-man team from Schlumberger, an independent contractor, was scheduled to fly out to the rig later that day to perform a suite of tests on the well’s new bottom cement seal. According to the BP team’s plan, if the cementing went smoothly, as it had, they could skip this evaluation. The decision was made to send the Schlumberger team home on the 11:00 a.m. helicopter, saving time and the $128,000 fee. As BP Wells Team Leader John Guide noted, “Everyone involved with the job on the rig site was completely satisfied with the [cementing] job.”

The cement failure was deemed the primary cause of the well blowout.

4. When shopping for answers, you get what you pay for. Confirmation bias refers to the tendency to seek or interpret evidence through the lens of existing beliefs or expectations. You shop for the answer you want to hear.

On Deepwater Horizon, BP’s design team originally had planned to use a “long string” production casing—a single continuous wall of steel between the wellhead on the seafloor and the oil and gas zone at the bottom of the well. But after encountering cracking in the rock formation on the ocean floor on April 9, which limited the depth to which the rig would be able to drill, they were forced to reconsider. As an alternative, they evaluated a design called a “liner”—a shorter string of casing hung lower in the well and anchored to the next higher string. A liner would result in a more complex, and theoretically more leak-prone, system over the life of the well. But it would be easier to cement into place.

On April 14 and 15, BP’s engineers, working with a Halliburton engineer, used sophisticated computer programs to model the likely outcome of the cementing process. When early results suggested the long string could not be cemented reliably, BP’s design team switched to a liner. But that shift met resistance within BP. The engineers were encouraged to engage an in-house BP cementing expert to review Halliburton’s recommendations. That BP expert determined that certain inputs should be corrected. Calculations with the new inputs showed that a long string could be cemented properly. The BP engineers accordingly decided that installing a long string was “again the primary option.”

The long string design did not cause the failure, but it increased the difficulty of getting the cement job right—again, the cement failure was the primary cause of the blowout.

5. Wishful thinking doesn’t make it so. Optimism bias refers to the belief that negative events are less likely to happen than probabilities would suggest.


On Deepwater Horizon, once BP decided on the long string well design, the team faced a challenge. BP’s original long string designs had called for 16 or more centralizers—critical components that keep the casing centered in the wellbore so the cement can form an even seal around it, and that screw securely into place between sections of casing. But on April 1, a BP team member learned that BP’s supplier had only six centralizers in stock.

Halliburton engineer Jesse Gagliano ran a series of computer simulations based on the long string design. His calculations found that the casing would need more than six centralizers; a second set of simulations suggested 21 centralizers would be best.

Engineers went back and forth by email about the number and type of centralizers needed. In one of these exchanges, BP drilling engineer Brett Cocales concluded, “But, who cares, it’s done, end of story, [we] will probably be fine and we’ll get a good cement job.” In the end, BP installed only six centralizers.


 

Hindsight is 20/20. Ten years later, it’s easy to pass judgment about difficult real-time decisions made with imperfect information under conditions of uncertainty. But as Nobel Prize-winner Daniel Kahneman has said, decision-making is a process, and like all processes it should be subject to quality control. A high-quality process for decision-making will not guarantee a good outcome every time, but in combination with an organizational culture that promotes learning and psychological safety, it can reduce the likelihood of unforced errors. A decade later, Deepwater Horizon stands as a stark reminder that failing to take this seriously can have deadly consequences.


[1] The observations here draw directly from “Deep Water: The Gulf Oil Disaster and the Future of Offshore Drilling,” the final report to President Obama by the National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling. The report is in the public domain.

[2] Diane Vaughan, The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA (Chicago: University of Chicago Press, 1996), p. 409.

[3] Amy Edmondson, “Psychological Safety and Learning Behavior in Work Teams,” Administrative Science Quarterly 44, no. 2 (1999): 350–383.

[4] Charles Duhigg, “What Google Learned from Its Quest to Build the Perfect Team,” New York Times Magazine, February 25, 2016. Accessed April 19, 2020 at: https://www.nytimes.com/2016/02/28/magazine/what-google-learned-from-its-quest-to-build-the-perfect-team.html

[5] “ASK OCE Interview: 5 Questions for Dr. Henry Petroski,” NASA APPEL. Accessed April 18, 2020 at: https://appel.nasa.gov/2010/02/26/ao_1-10_f_interview-html/
