Exclusive Content
8 Nov 2016

Column: Engineering a Safer World

In this column, I provide a book report on Nancy Leveson’s Engineering a Safer World (2012).

The central premise of the book is: The process hazard analysis methods we use today were designed for the relatively simple projects of yesterday and are inadequate for the complex projects we build today.

I agree with her.

Would It Have Prevented Bhopal?
My litmus test of a new process hazard analysis technique is: “Would this approach have prevented the Bhopal accident?” A typical hazard and operability study (HAZOP) would not have prevented Bhopal, in my opinion. I believe that Leveson’s systems-theoretic process analysis (STPA), a process and safety-guided design approach, would have.

Why Do We Need a New Approach to Safety?
The traditional approaches worked well for the simple systems of yesterday. But the systems we are building today are fundamentally different.

  • Reduced ability to learn from experience because of
    • The increased speed of technology change
    • Increasing automation, which removes operators from direct and intimate contact with the process
  • Changing nature of accidents from component failures to system failures due to increasing complexity and coupling
  • More complex relationships between humans and technology
  • Changing public and regulatory views on safety, with decreasing tolerance for accidents
  • Increased difficulty in making decisions: even as safety culture improves, the business environment is becoming more competitive and aggressive

Accident models explain why accidents occur, and they determine the approaches we take to prevent them from recurring. Any such model is an abstraction that focuses on those items assumed to be important while ignoring issues considered less important.

The accident model in common use today makes these assumptions:

  • Safety is increased by increasing system and component reliability.
  • Accidents are caused by chains of related events beginning with one or more root causes and progressing because of the chance simultaneous occurrence of random events.
  • Probabilistic risk analysis based on event chains is the best way to communicate and assess safety and risk information.
  • Most accidents are caused by operator error.

This accident model is questionable on several fronts.

Safety and reliability are different properties. A system can be reliable and unsafe.

Component failure is not the only cause of accidents; in complex systems, accidents often result from the unanticipated interactions of components that have not failed.

The selection of the root cause or initiating event is arbitrary. Previous events and conditions can always be added. Root causes are selected because

  • The type of event is familiar and thus an acceptable explanation for the accident.
  • It is the first event in the backward chain for which something can be done.
  • The causal path disappears for lack of information. (A reason human error is frequently selected as the root cause is that it is difficult to continue backtracking the chain through a human.)
  • It is politically acceptable. Some events or explanations will be omitted if they are embarrassing to the organization.

Causal chains oversimplify the accident. Viewing accidents as chains of events and conditions may limit understanding and omit causal factors that cannot be included in the event chain.

It is frequently possible to show that operators did not follow the operating procedures. Procedures are often not followed exactly because operators try to become more efficient and productive to deal with time pressures and other goals. There is a basic conflict between an error viewed as a deviation from normative procedures and an error viewed as a deviation from the rational and normally used procedure. It is usually easy to find someone who has violated a formal rule by following established practice rather than specified practice.

We need to change our assessment of the role of humans in accidents from what they did wrong to why it made sense to them at the time to act the way they did.

Complexity Primer
Project management theory is based generally on the idea of analytic reduction. It assumes that a complex system can be divided into subsystems and that those subsystems can then be studied and managed independently.

Of course, this can be true only if the subsystems operate independently with no feedback loops or other nonlinear interactions. That condition is not true for today’s complex projects.

Complex systems exist in a hierarchical arrangement. Even simple rule sets at lower levels of the hierarchy can result in surprising behavior at higher levels. An ant colony is a good example (Mitchell 2009): A single ant has few skills—a very simple rule set. Alone in the wild, it will wander aimlessly and die. But put a few thousand together, and they form a culture. They build and defend nests, find food, and divide the work.

Culture? Where did that come from? No scientist could predict ant culture by studying individual ants.

This is the most interesting feature of complex systems. Culture is not contained within individual ants; it is only a property of the collective. This feature is called emergence—the culture emerges.

An emergent property is a property of the network that is not a property of the individual nodes. The sum is more than the parts.

Safety is Emergent
There is a fundamental problem with equating safety with component reliability. Reliability is a component property. Safety is emergent. It is a system property.

Fig. 1—Simplified hierarchy of project and operating assets.


The system is hierarchical (Fig. 1). Safety depends on constraints on the behavior of the components in the system, including constraints on their potential interactions and constraints imposed by each level of the hierarchy on the lower levels.

Safety as a Control Problem
Safety depends on system constraints; it is a control problem.

Fig. 2 is a simple control loop. We are all familiar with control loops for controlling process variables. This is no different.

The four required conditions for control are:

  • Goal condition. The controller must have a goal. For a simple process control loop, the goal is to maintain the set point.
  • Observability condition. Sensors must exist that measure important variables. These measurements must provide enough data for the controller to observe the condition of the system.
  • Model condition. The controller must have a model of the system (process model). Data measured by the sensors are used both to update the model and for direct comparison to the goal or set point.
  • Action condition. The actuator must be able to take the action(s) required to achieve the controller goals.
Fig. 2—A control loop.

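The four conditions above can be sketched as a toy level-control loop. This is a minimal illustration only; the class names, numbers, and control law are assumptions, not material from the book.

```python
# Toy illustration of the four conditions of control. All class names,
# parameters, and the control law are illustrative assumptions.

class TankProcess:
    """Liquid level rises with inflow and falls with a fixed outflow."""
    def __init__(self, level=3.0, outflow=1.0):
        self.level = level
        self.outflow = outflow

    def step(self, inflow, dt=1.0):
        self.level += (inflow - self.outflow) * dt

class LevelController:
    def __init__(self, setpoint):
        self.setpoint = setpoint     # goal condition: maintain the set point
        self.model_level = None      # model condition: internal process model

    def control(self, measured_level):
        # observability condition: the sensor reading updates the model
        self.model_level = measured_level
        error = self.setpoint - self.model_level
        # action condition: command the inflow valve to correct the error
        return max(0.0, 1.0 + 0.5 * error)

tank = TankProcess()
controller = LevelController(setpoint=5.0)
for _ in range(20):
    valve_command = controller.control(tank.level)  # sensor -> controller
    tank.step(valve_command)                        # actuator -> process
```

Remove any one of the four conditions (a sensor, the model, the goal, or the valve) and the loop can no longer hold the level at the set point.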

Role of Mental Models
The controller may be a human or an automated system. It must contain a model of the system (process model). If the controller is a human, he or she must possess a mental model of the system.

The designer’s mental model is different from the operator’s mental model. The operator’s model will be based partly on training and partly on experience. Operators use feedback to update their mental models. Operators with direct control of the process will quickly learn how it behaves and update their mental models. In highly automated systems, operators cannot experiment and learn the system.

Further, in highly automated systems the operator will not always have an accurate assessment of the current situation because his or her situation assessment is not continuously updated.

I have a fishing example of this. I occasionally (but rarely) go fishing in the marshes near Lafitte, Louisiana, south of New Orleans. I don’t know the area well, but I have a map. If I keep track of my movement, I always know where I am and can easily recognize features on the map. If/when I get lazy and just motor around, then I find the map almost useless. I can no longer match geographical features to the map. Every point of land in the marsh looks much like every other point.

Control Algorithm
Whether the controller is human or automated, it contains an algorithm that determines/guides its actions. It is useful to consider the properties of a typical automated loop. Most industrial control loops are proportional-integral-derivative (PID) loops. A PID controller has three functions:

  • Proportional action. Takes action proportional to the error (the difference between the measured variable and the set point); small errors yield small valve movements; large errors yield large valve movements.
  • Integral action. Takes action proportional to the integral of the error. Here, a small error that has existed for a long time will generate a large valve movement.
  • Derivative action. Takes action proportional to the derivative of the error. A rapidly changing error generates a large valve movement.

Tuning coefficients are provided for each action type. The appropriate tuning coefficients depend on the dynamics of the process being controlled. The process dynamics can be explained pretty well with three properties: process gain, dead time, and lag.

Process gain is the ratio of measured variable change to control valve position change. Lag is a measure of the time it takes the process to get to a new steady state. Dead time is the time between when the valve moves and the process variable begins to change.
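As a hedged sketch, the three PID actions and the three process properties can be combined in a short simulation. The gains, process parameters, and class names below are illustrative assumptions, not tuned values from the text.

```python
# Illustrative sketch: a discrete PID loop driving a first-order process
# with gain, lag, and dead time. All numbers are assumed for illustration.
from collections import deque

class PID:
    def __init__(self, kp, ki, kd, setpoint, dt=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement):
        error = self.setpoint - measurement
        self.integral += error * self.dt                  # integral action
        derivative = (error - self.prev_error) / self.dt  # derivative action
        self.prev_error = error
        return (self.kp * error                           # proportional action
                + self.ki * self.integral
                + self.kd * derivative)

class Process:
    """First-order process exhibiting gain, lag, and dead time."""
    def __init__(self, gain=2.0, lag=5.0, dead_time=3, dt=1.0):
        self.gain, self.lag, self.dt = gain, lag, dt
        self.pv = 0.0
        self.pipeline = deque([0.0] * dead_time)  # dead time: delayed valve effect

    def step(self, valve):
        self.pipeline.append(valve)
        delayed = self.pipeline.popleft()
        # lag: pv approaches gain * valve with first-order dynamics
        self.pv += (self.gain * delayed - self.pv) * self.dt / self.lag

pid = PID(kp=0.4, ki=0.05, kd=0.1, setpoint=10.0)
proc = Process()
for _ in range(200):
    proc.step(pid.update(proc.pv))
```

With these conservative gains the measured variable settles at the set point; raising the gains, the process gain, or the dead time is a quick way to see a loop go unstable, which is one of the unsafe control causes listed below.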

Unsafe Control Causes
Control loops are complex and can result in unsafe operation in numerous ways, including: unsafe controller inputs; unsafe control algorithms, including inadequately tuned controllers; incorrect process models; inadequate actuators; and inadequate communication and coordination among controllers and decision makers.

STPA—A New Hazard Analysis Technique
The most widely used process hazard analysis technique is the HAZOP. The HAZOP uses guide words related to process conditions (flow, pressure, temperature, and level).

STPA guide words are based on a loss of control rather than physical parameter deviations. (Note that all causes of flow, pressure, temperature, and level deviations can be traced back to control failure.)

The STPA process is as follows:

  • Identify the potential for inadequate control of the system that could lead to a hazardous state.
    • A control action required for safety is not provided.
    • An unsafe control action is provided.
    • A control action is provided at the wrong time (too early, too late, out of sequence).
    • A control action is stopped too early or applied too long.
  • Determine how each potentially hazardous control action could occur.
    • For each potentially hazardous control action examine the parts of the control loop to see if they could cause it.
      • Design controls and mitigation measures if they do not already exist
      • For multiple controllers of the same component or safety constraint, identify conflicts and potential coordination problems.
    • Consider how the controls could degrade over time and build in protection such as
      • Management of change procedures
      • Performance audits, in which the assumptions underlying the hazard analysis are used to detect unplanned changes that violate the safety constraints
      • Accident and incident analysis to trace anomalies to the hazard and to the system design
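The first STPA step above amounts to applying four guide words to every control action. The following is a minimal sketch; the example control action and all names are hypothetical, chosen only to show the enumeration.

```python
# Illustrative sketch of STPA Step 1: apply the four unsafe-control-action
# guide words to each control action. The example action is an assumption.

UCA_GUIDE_WORDS = [
    "not provided when required for safety",
    "provided when unsafe",
    "provided at the wrong time (too early, too late, out of sequence)",
    "stopped too soon or applied too long",
]

def enumerate_ucas(control_actions):
    """Yield candidate unsafe control actions (UCAs) for analyst review."""
    for action in control_actions:
        for guide in UCA_GUIDE_WORDS:
            yield f"'{action}' {guide}"

# Hypothetical example: a high-level trip closing an inlet valve
candidates = list(enumerate_ucas(["close inlet valve on high level"]))
for uca in candidates:
    print(uca)
```

Each candidate is then examined in Step 2 against the parts of the control loop (sensor, algorithm, process model, actuator) to determine how it could occur.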

Safety-Guided Design
Hazard analysis is often done after the major design decisions have been made. STPA can be used in a proactive way to guide design and system development.

The Safety-Guided Process

  • Try to eliminate the hazard from the conceptual design.
  • For hazards that cannot be eliminated, identify potential for their control at the system level.
  • Create a system control structure and assign responsibilities for enforcing safety constraints.
  • Refine the constraints and design in parallel
    • Identify potentially hazardous control actions of each system component and restate them as component design constraints.
    • Determine factors that could lead to a violation of the safety constraints.
    • Augment the basic design to eliminate potentially unsafe control actions or behaviors.
    • Iterate over the process (perform STPA Steps 1 and 2) on the new augmented design until all hazardous scenarios have been eliminated, mitigated, or controlled.

An example of a safety-guided process is the thermal tile processing system for the Space Shuttle. Heat-resistant tiles of various types covered the shuttle. The lower surfaces were covered with silica tiles. They were 95% air, capable of absorbing water, and had to be waterproofed. The task was accomplished by injecting the hazardous chemical dimethylethoxysilane (DMES) into each tile. Workers wore heavy suits and respirators. The tiles also had to be inspected for scratches, cracks, gouges, discoloring, and erosion.

This section is a partial/truncated application of Safety Guided Design to the design of a robot for tile inspection and waterproofing.

Safety-guided design starts with identifying the high-level goals:

  • Inspect the tiles for damage caused by launch, reentry, and transport.
  • Apply waterproofing chemical to the tiles.

Next, identify the environmental constraints:

  • Work areas can be very crowded.
  • With the exception of jack stands holding up the shuttle, the floor space is clear.
  • Entry door is 42 in. wide.
  • Structural beams are as low as 1.75 m.
  • Tiles are at 2.9- to 4-m elevation.
  • Robot must negotiate the crowded space.

Other constraints:

  • Must not negatively impact the launch schedule.
  • Maintenance cost must be less than x.

To get started, a general system architecture must be selected. Let’s assume that a mobile base with a manipulator arm is selected. Because many hazards will be associated with robot movement, a human operator is selected to control robot movement and an automated control system will control nonmovement activities.

The design has two controllers, so coordination problems will have to be considered.

Step 1: Identify potentially hazardous control actions.

Hazard 1: Robot becomes unstable. Potential Solution 1 is to make the base heavy enough to prevent instability. This is rejected because the heavy base will increase the damage if the robot runs into something. Potential Solution 2 is to make the base wide. This is rejected because it violates the environmental constraints on space. Potential Solution 3 is to use lateral stabilizer legs.

However, the stabilizer legs generate additional hazards that must be translated into design constraints:

  • The leg controller must ensure that the legs are fully extended before arm movements are enabled.
  • The leg controller must not command a retraction unless the manipulator arm is in the fully stowed position.
  • The leg controller must not stop leg extension until the legs are fully extended.

These constraints may be enforced by physical interlocks or human procedures.
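As one hedged illustration of enforcement in software, the leg-controller constraints could be written as interlock checks. The state flags and method names below are assumptions for illustration, not part of the actual shuttle tooling design.

```python
# Illustrative sketch: stabilizer-leg design constraints as software
# interlocks. All state flags and method names are assumed.

class LegInterlock:
    def __init__(self):
        self.legs_extended = False
        self.arm_stowed = True

    def extend_legs(self):
        # Constraint: extension must run to completion before it stops
        self.legs_extended = True

    def enable_arm_movement(self):
        # Constraint: legs must be fully extended before arm movement
        if not self.legs_extended:
            raise RuntimeError("interlock: extend stabilizer legs first")
        self.arm_stowed = False

    def stow_arm(self):
        self.arm_stowed = True

    def retract_legs(self):
        # Constraint: no retraction unless the arm is fully stowed
        if not self.arm_stowed:
            raise RuntimeError("interlock: stow manipulator arm first")
        self.legs_extended = False
```

A physical interlock or a written procedure would enforce the same constraints outside the software.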

Summary and Conclusion
Leveson argues that our standard accident model does not adequately capture the complexity of our projects. Her proposed solution sensibly addresses the flaws that she has noted.

Viewing safety as a control problem resonates with me. All or almost all of the hazard causes that we discover in HAZOPs are control-system-related, yet the HAZOP method does not focus explicitly on control systems. And control between levels of the hierarchy is generally not considered at all in process hazard analyses.

I am particularly attracted to the ability to apply STPA during project design, as opposed to other process hazard analysis techniques that can only be applied to a completed design.

References
Leveson, N. 2012. Engineering a Safer World, Systems Thinking Applied to Safety. MIT Press.

Mitchell, M. 2009. Complexity, A Guided Tour. Oxford University Press.

Howard Duhon is the systems engineering manager at GATE and the former SPE technical director of Projects, Facilities, and Construction. He is a member of the Editorial Board of Oil and Gas Facilities. He may be reached at hduhon@gateinc.com.

7 Nov 2016

Moving Closer to True Picture of the Fugitive Methane Problem

A valve station on a natural gas pipeline in the Marcellus Shale of Pennsylvania. Researchers in the US may be approaching a solution for determining how much natural gas is seeping into the atmosphere. Credit: Getty Images.


But if government regulators and some environmental groups are applauding the transition from the most carbon-intensive fuel source to the least, they are holding back on a standing ovation.

The reason is that a raft of scientific studies published over the past few years shows that too much natural gas is being lost into the atmosphere at different points all along the supply chain—potentially canceling out the climate benefits of utilizing gas over coal.

But environmental researchers and industry alike have had trouble defining the true scope of this problem, termed fugitive methane emissions, because of the disparity in data gathered from oil and gas sites through aerial flybys vs. surface observations. These are, respectively, known as top-down and bottom-up measurements.

As a percentage of gross production, bottom-up studies show methane losses may average around 1.5% while estimates from top-down studies range anywhere from 2% to 17%.

The goal for a number of producers is to get those numbers down to less than 1% in order to mitigate the negative impacts of methane, which is at least 25 times more effective at trapping heat in the atmosphere than carbon dioxide.

Potential ‘Breakthrough’
Earlier this month at a workshop organized by the International Energy Agency in Austin, Texas, Karen Olson, the director of strategic solutions at Southwestern Energy, the third-largest producer of natural gas in the US, announced that researchers may be close to reconciling top-down and bottom-up measurements.

Without elaborating, she told attendees: “We’ve actually had a breakthrough and now have a correlation based on actual measurements from onsite vs. the flybys.”

Olson was presumably referencing a new “peak emissions” hypothesis that emerged from a multimillion-dollar methane emissions study funded by the Research Partnership to Secure Energy for America (RPSEA). Southwestern, along with three other operators, participated in the project, which was led by researchers from Colorado State University and the National Oceanic and Atmospheric Administration (NOAA).

The full findings are expected to remain unpublished until early next year, but the hypothesis contends that the emissions averages generated using aerial data are higher because they are based on methane that was emitted during short-lived events that took place in the morning.

These episodic bursts of emissions are believed to occur as the morning shift arrives to start up or adjust production equipment. So while aerial measurements may be accurate, this new concept suggests that, to get a truer average of daily emissions rates, the temporary nature of these morning events must be fully understood and taken into account.

If this idea holds up, then it could be an important factor in determining how the industry and other groups look at top-down and bottom-up data in the future. It may also mean that the experts can go back to all the data already gathered to see if they now tell a more accurate story.

Too Many Measuring Methods  
Desikan Sundararajan, a senior researcher of environmental management at Statoil, highlighted in his remarks at the workshop what life as a scientist working on this problem has been like without such a correlation. He found that there are more than 300 research papers on the subject of fugitive methane emissions and said “the beauty of it is that not a one of them agrees with each other.”

Sundararajan explained that one of the reasons for the disparity among the top-down studies is that researchers are using too many different instruments to take measurements, typically the ones with which they are most familiar.

There is also an apparent tendency among the researchers in this area to be the first to publish a new, first-of-its-kind approach, he added. “That does not help the industry. It does not help the stakeholders or the policy makers,” Sundararajan said, stressing that there needs to be more congruence with how methane emission data are gathered.

4 Nov 2016

Energy4me Named Best Outreach Program at 2016 World Oil Awards

SPE’s Energy4me program won top honors in the Best Outreach Program category at the World Oil Awards in Houston. The awards ceremony, now in its 15th year, seeks to recognize and honor the upstream industry’s top innovations and innovators.

Energy4Me's award for best outreach program.


Also honored by World Oil, Nathan Meehan, 2016 SPE president, received the Lifetime Achievement award. The award is bestowed on an individual who has made significant strides and had a lasting impact on the oil and gas industry throughout his or her career.

In all, awards were given out in 18 categories that encompass the full breadth of the upstream industry. Today’s innovations, many of which would have seemed far-fetched a generation ago, are enabling operators to find and produce hydrocarbons more safely, economically, and efficiently.

“I’m so very proud of the work that Energy4me accomplishes in classrooms and workshops around the world,” said Glenda Smith, SPE vice president for communications. “Under the leadership of Liz Johnson, the Energy4me team of Kim LaGreca and Zunaid Jooma delivers online educational resources to educators while helping students learn balanced information about the industry.”

Also vying for the best outreach program award were PetroChallenge at NExT, a Schlumberger company, and the VIP Consultant Program at Paradigm.

In presenting the award, the World Oil Awards judges said that the program has “increased awareness and, through its workshops, created opportunities for students to enter the industry. The program has contributed, by using hands-on activities, to the increased interest and passion of the students, leading them to choose engineering as their career path.”

The judges also said that Energy4me’s hands-on activities ensure that many students will be exposed to the various career paths in the industry and will contribute to increasing manpower and available human resources in the future.

Energy4me and World Oil share a commitment to oil and gas education. Each year, the World Oil Awards endows a leading university that provides education for careers in the petroleum industry with much-needed funding to equip the next generation of innovators. Since the inception of the World Oil Awards, donations have been distributed to 32 universities as varied as the University of Houston and the University of Ibadan in Nigeria. This year’s beneficiary is the George R. Brown School of Engineering at Rice University in Houston.

Visit Energy4Me here.

4 Nov 2016

National Academy of Science Report Looks To Boost Offshore Safety Culture

In May 2016, the National Academy of Science (NAS) released a report entitled “Strengthening the Safety Culture in the Offshore Oil and Gas Industry.” This report offers recommendations to the industry and regulators to strengthen and sustain the safety culture of the offshore oil and gas industry. The report presents a definition of safety culture for government regulators and industry to adopt, discusses the elements of a strong safety culture and ways for assessing it, identifies barriers to strengthening safety culture, and offers recommendations to overcome these barriers.

The Bureau of Safety and Environmental Enforcement issued a policy in 2013 that defines safety culture as “the core values and behaviors of all members of an organization that reflect a commitment to conduct business in a manner that protects people and the environment.” The policy articulated nine characteristics of a robust safety culture:

  • Leadership commitment to safety values and actions
  • Respectful work environment
  • Environment of raising concerns
  • Effective safety and environmental communication
  • Personal accountability
  • Inquiring attitude
  • Hazard identification and risk management
  • Work processes
  • Continuous improvement

This NAS report provides a list of detailed recommendations for both regulators and industry that will contribute to and foster a positive and sustainable safety culture. The details of this study and its accompanying recommendations can be found in a summary entitled “Beyond Compliance.”

Download the summary here.

Find the full NAS report here.

3 Nov 2016

Piper Alpha Survivor Shares Experience

Rae


Piper Alpha survivor Steve Rae will discuss his thoughts and experiences from the disaster during a webinar set for 15 November. Rae’s presentation, titled “Piper Alpha—Accident or Predictable Surprise,” will examine how safety can be improved by everyone in the industry as individuals by accepting personal accountability and adopting a more proactive approach to work and safety.

Piper Alpha, a North Sea oil-production platform that had been converted to produce gas as well, was destroyed by explosions and the resulting fires on 6 July 1988, killing 167 people. The disaster has been considered a turning point for safety in the industry and continues to influence HSE design and considerations. Rae, who survived by jumping from a platform 80 ft into the sea, was one of 61 survivors.

Register for the webinar here.

2 Nov 2016

Efficiency, Innovation Needed for Sustainability Initiatives

Despite the rise of unconventionals, major offshore projects will continue to make up the bulk of new production, an expert said. As governments around the world seek to lower energy consumption and reduce carbon emissions in the wake of the low oil price environment, the industry must be proactive in ramping up its sustainability efforts in its offshore projects.


Meehan

In a presentation, “Price of Oil—Sustainability and Innovation,” held by the SPE Gulf Coast Section’s Projects, Facilities, and Construction Study Group, 2016 SPE President Nathan Meehan discussed the issues affecting sustainability initiatives across the industry. Meehan is a senior executive adviser at Baker Hughes.

Efficient operations are the key to successful sustainability initiatives. Meehan said energy efficiency improves every measure of sustainability, because conservation and efficiency improvements lower the need for oil and gas to satisfy primary global energy demand. To that end, the low oil price environment is harmful to sustainability because low prices increase demand for oil while reducing demand for alternative energy sources and for improved conservation measures.

Meehan said a lack of standardization in facility design is a significant factor in the lack of efficient facility construction schedules and operations. The learning curve for new facilities should decrease as operators repeat the execution of a standard blueprint. However, given the geographic variation among offshore fields, a standard design is extremely difficult to achieve; the design template may be too small for some fields and too big for others.

“If you want to lower the costs and time to make something, you need to make a bunch of things that are exactly the same,” he said. “If you’re making a bunch of one-offs, you don’t get any better at it. Even if you just make one-offs all the time, you don’t get better. That’s a bit of a problem.”

Standardization should play a role in sustainability efforts, but Meehan said it is difficult for the industry to agree on which standards it should adopt. He said operators have a difficult time standardizing internally to begin with, making any efforts to collaborate with other operators even more of a challenge. Conversely, in areas where companies have agreed to standardization, the effect has been immediate, but the competitive costs were low.

“There are competitive factors, and then there are factors where you could say people have not been pushed to the point where they have to lower that cost,” he said.

In addition to standardization, Meehan said the industry must improve its efforts to develop creative solutions to operational inefficiencies. He said that engineers are skilled at improving existing technologies, which can be useful for lowering project costs and increasing efficiency. However, innovation is where companies can see significant gains. Meehan cited nanotechnology and high-temperature electronics, such as circuit boards, as areas of focus.

“This is globalization. We’ve got something, we’re going to make more of them, we’re going to make them cheaper, we’re going to make them smaller, faster, with higher pressure. Something used to work only at 4,000 psi, and now we’re going to make it at 10,000 psi, and so on. You used to build something for USD 20 million and now you can do it for USD 10 million. But the real difficult part is … this real kind of innovation,” Meehan said.

25 Oct 2016

Harnessing CO2 Content in Natural Gas for Environmental and Economic Gains

Summary
Carbon dioxide (CO2) capture and usage (CCU) is a topical global issue and is viewed as one possible route to reducing CO2 concentrations in the atmosphere. The core issues facing the world today—development, economy, and environment—are identified as being dependent on the provision of clean, efficient, affordable, and reliable energy services. Currently, the world is highly dependent on fossil fuels for the provision of energy services, and renewable energies can replace only a small fraction of them.

The deployment of appropriate CO2-separation technologies for the processing of natural gas is viewed as an abatement measure toward global CO2-emissions reduction. Selection of the optimum technology among the several separation technologies for a particular separation need requires special attention to harness the economic and environmental benefits. The captured CO2 would also require appropriate disposal or usage so as to sequester or “delay” its re-entry into the atmosphere. These challenges of CCU—involving natural gas particularly during processing, which has become an area of intense research—shall be discussed in the paper with respect to the selected technique for CO2 capture. A typical natural-gas-production scenario in Nigeria shall be analyzed for potential CO2 capture. Further discussion shall be on the identification of the recovered CO2 gas-usage framework, such as CO2 flooding (in enhanced oil recovery), for additional revenue generation, assessment of the CO2 savings, and the contribution to the clean development mechanism.

Read the paper here (PDF).

25 Oct 2016

Safety for a Helicopter Load/Unload Operation on an Offshore Platform

Helicopter operations are important in the offshore oil and gas industry. Helicopters perform a variety of roles, including crew change, logistics supply, and medical-emergency and evacuation duties. Because helicopter accidents can have fatal consequences, many helicopter safety reviews and analyses have been conducted. The pursuit of operational safety is continuous work, and greater industry experience contributes to greater safety. This paper focuses on the specific helicopter operation that comprises the loading/unloading task of a slickline/wireline job on an offshore platform. The discussion was carried out as a case study based on actual operational experience.

Credit: Getty Images.

When a slickline or an electrical wireline job is required on offshore oil/gas platforms that have no crane equipment, a helicopter load/unload operation is a common method used for transporting materials such as winch units, power-pack units, blowout-preventer units, and lubricators from the platform or from the offshore complex to the platform. A series of materials is transported separately by helicopter so that each lifted load remains within the maximum loading capacity of the helicopter. A materials-transportation package typically consists of four or five load/unload operations for an entire set of materials. These frequent load/unload operations are performed with a hovering action, which carries the highest risk among helicopter actions (i.e., taking off, cruising, and approaching/landing). To achieve safety, all risk-mitigating factors should be incorporated into a plan that is shared with the entire crew (pilot, company supervisor, slickline/wireline operators) in advance of the operation.

This paper discusses mitigations from various points of view, in addition to summarizing general safety tips. As a result of considering the psychological response of the ground crew on the basis of actual field experience, this paper recommends ways to remove mental factors that silently act on the actions of a helicopter marshaller. Moreover, fundamental measures are recommended to update marshalling methods and to use new-generation helicopters that are designed for improved safety requirements.

Read the full paper here (PDF).

21 Oct 2016

PetroTalk: The Challenges of Sustainability—A Rio Tinto Perspective

SPE recorded several presentations from the 2016 International Conference on Health, Safety, Security, Environment, and Social Responsibility held in Stavanger and is presenting them as PetroTalks. These insightful presentations were captured from experts within and beyond the oil and gas industry in order to bring the conversations to a larger audience.

Peter Harvey with Rio Tinto Diamonds and Minerals talks about the challenges of sustainability. In his presentation, he offers perspectives of what he has delineated as three key aspects of sustainability: sharing risk to deliver mutual value, collaborating to create trust, and leading through innovation.

“We need solutions that are going to work for all involved,” he said. “The crux is that sustainable practices can deliver a good rate of return for investing in them. …  It’s not just bottom-line numbers, but it’s also some of these license to operate, the intangible numbers, the above-ground risks that are so impactful when they go wrong.”


13 Oct 2016

Webinar Examines Importance of Health Contracts

A webinar on 19 October analyzed health contracts in the workplace and effective health management systems.

The webinar was organized by the SPE HSSE-SR Health Subcommittee in collaboration with the International Oil and Gas Producers Association (IOGP) and IPIECA, the global oil and gas association for environmental and social issues.

Speakers during the webinar were Alex Barbey, international health coordinator with Schlumberger; Simon Hawthorne, vice president of legal for UnitedHealthcare Global Medical; Phil Sharples, global senior medical director for UnitedHealthcare Global Medical; and Eugene Toukam, vice president of HSE for Schlumberger.

In 2015, to assist operator/contractor relationships and to help eliminate confusion about responsibilities and expectations, the IOGP/IPIECA health committee published a guideline document entitled Health Management Contract Guidelines for Clients and Contractors. The document provides guidance on

  • Health management system elements, requirements, and deliverables
  • Establishing roles and responsibilities between contractors and clients/operators
  • Health aspects related to the prequalification, bidding, and execution phases
  • Promoting transparency and effective communication on health management in contracts

Toukam and Hawthorne provided examples of real problems that can occur in the absence of a strategic contract management plan. Barbey, who was chairman of the IOGP/IPIECA health task force that produced the guideline, explained how the guidelines can help mitigate the issues described and prevent negative effects from deficiencies in contract management.

The talks are intended to inform professionals who manage health contracts, or who have responsibilities supporting such contracts, that an effective workplace health management system requires active and positive collaboration between operators and contractors. The absence of such a system creates the potential for

  • Loss of life
  • Health-related accidents
  • Injuries and illness
  • Disruptions in operations

Find the webinar archive on demand here.

6 Oct 2016

New Technical Report Examines Sharing Safety Lessons

SPE released a new technical report concerning offshore-safety data after its Annual Technical Conference and Exhibition (ATCE) in September. The report, “Assessing the Processes, Tools, and Value of Sharing and Learning From Offshore E&P Safety-Related Data,” was written by a committee of subject-matter experts (SMEs) with industry input from a summit held in April. The report is based on discussions and conclusions from the summit and is intended to provide guidance on an industrywide safety-management data-sharing program.

Summit Overview
In 2014, the US Department of the Interior’s Bureau of Safety and Environmental Enforcement (BSEE) approached SPE regarding an opportunity to collaborate on the development of a voluntary industrywide near-miss data-sharing framework. This framework was envisioned as a resource to enhance the industry’s ability to capture and share key learnings from near-miss events with the objective of identifying and mitigating risks. Although the collaboration initially focused only on near misses, evolving discussion resulted in increasing the scope to include a broader range of data. In the spirit of continuous improvement, a related objective was identified: to bring government and industry together to make a safe industry safer and to enhance public confidence in the industry.

Representatives from SPE and BSEE were co-chairs of a summit steering committee that included representatives from SPE, BSEE, exploration and production (E&P) operators, service companies, the US Bureau of Transportation Statistics, the Center for Offshore Safety, the American Bureau of Shipping, and the International Association of Oil and Gas Producers. Planning for the summit outlined that the scope of the data-collection and -reporting framework would begin with the US outer continental shelf (OCS). Additionally, a secondary objective was established: to consider how existing processes might be leveraged with an overarching objective to extend influence beyond the US OCS to align with other systems and requirements globally. In considering industry alternatives for developing a safety-data management framework, caution was advised to avoid creating an additional layer of reporting expectations beyond the current requirements by regulators and industry associations.

During the summit, Vice Admiral Brian Salerno, director of BSEE, shared his perspective on the importance of industrywide safety-data collection and sharing. He also encouraged the E&P industry to demonstrate to the public how a safe industry may be made safer through more open data sharing.

The discussions, expert opinions, and suggestions offered by the group of safety-data management SMEs during the summit were captured in the technical report, which was posted on the SPE website for comments and then approved by the SPE Board of Directors at ATCE in September.

Find the technical report on OnePetro here.

4 Oct 2016

Column: Risk Management at NASA and Its Applicability to the Oil and Gas Industry

On initial consideration, one might reasonably ask: What can the National Aeronautics and Space Administration (NASA) contribute to the oil and gas industry?

About 3 years ago, a senior principal at Deloitte Advisory’s Energy & Resources Operational Risk Group reached out to NASA to better understand its safety culture, with the intent of learning how that culture might translate to oil and gas operations. Very quickly, the conversation expanded to the realm of risk management.

Working with Deloitte, NASA came to appreciate the remarkable similarities between an offshore deepwater facility and the International Space Station. Both exist in extremely hostile environments. Both function in remote locations where movement of crew and supplies must be carefully choreographed. Both are extremely complex engineering structures where human reliability plays a critical role in mission success, and both have a deep commitment to personal and process safety.

It also should be noted that both have dedicated teams—the onboard crew and the onshore support experts—that live by the mentality that “failure is not an option” because of the consequences to life and the environment should a catastrophic mishap occur.

At NASA, we use qualitative techniques—such as fault trees, failure modes and effects analyses, and hazard assessments—to understand risk based on statistics, experience, or possibilities that our engineers can anticipate. Similarly, upstream oil and gas exploration and production uses qualitative techniques—such as process safety methods, barrier analyses, bowtie charts, hazard identification, and hazard and operability studies—to assess risk. At NASA, these qualitative approaches are augmented by a quantitative risk-assessment technique called probabilistic risk assessment (PRA) to uncover and mitigate low-probability sequences of events that can lead to high-consequence outcomes.

Why PRA?
The technique of PRA was developed by the nuclear power industry and initially published in mid-1975, though not widely publicized. However, the investigation of the Three Mile Island incident in 1979 revealed that the PRA had documented the sequence of low-probability events (both hardware failures and human errors) that led to the high-consequence near-meltdown of the nuclear core. As a result, the US Nuclear Regulatory Commission has required a facility-specific PRA for every nuclear power plant in the United States.

In February 2003, Space Shuttle Columbia was lost on re-entry when a piece of insulation foam broke off from the external tank and struck the wing leading edge of the space shuttle. Recognizing that the cause of this accident was a low-probability, high-consequence event, NASA committed to strengthen its safety and mission assurance capabilities. PRA was adopted and embraced by the Space Shuttle and International Space Station programs.

A PRA creates a rigorous logic flow for a complex system. Every safety-related hardware component is captured as a node, and quantitative reliability performance numbers are assigned to each possible outcome. For example, a pump can function as commanded, remain off when commanded on, remain on when commanded off, or operate at only a partial level of capability. Human actions also are captured as logic nodes that can have quantitative reliability information assigned to them. For example, a person can push the correct button within the assigned timeframe, push the wrong button, push the correct button outside the assigned timeframe, or do nothing.
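The node-and-outcome structure described above can be sketched as a tiny fault-tree calculation. The component names and probabilities below are hypothetical placeholders for illustration, not values from any actual NASA or industry model:

```python
# Minimal fault-tree sketch: each basic event (a hardware fault or a
# human error) carries a probability; gates combine them, assuming
# the events are statistically independent.

def or_gate(*probs):
    """Probability that at least one input event occurs."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def and_gate(*probs):
    """Probability that all input events occur together."""
    result = 1.0
    for p in probs:
        result *= p
    return result

# Hypothetical outcome probabilities for a pump node and an operator action.
pump_fails_to_start = 3e-3
pump_fails_to_stop = 1e-3
operator_wrong_button = 1e-2   # a human action modeled like any other node

# The pump node misbehaves if either failure mode occurs; the top event
# occurs if the pump node misbehaves OR the operator errs.
pump_node = or_gate(pump_fails_to_start, pump_fails_to_stop)
top_event = or_gate(pump_node, operator_wrong_button)
print(f"P(pump node fails) = {pump_node:.4e}")
print(f"P(top event)       = {top_event:.4e}")
```

A real PRA model would carry many more outcomes per node (including degraded operation) and would relax the independence assumption where data warrant it.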

A rigorous PRA also can account for common cause failures in both hardware and software. For example, if a pump fails in one system, then all similar pumps from the same lot/vendor that may exist in entirely separate systems are now suspect.
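One common way to quantify such common-cause coupling is the beta-factor model, a standard PRA technique shown here as an illustrative sketch (the numbers are hypothetical): a fraction β of each component’s failure probability is assumed to stem from a shared cause, so redundancy helps less than pure independence would suggest.

```python
# Beta-factor sketch for a 1-out-of-2 redundant pump pair: a fraction
# beta of each pump's failure probability q comes from a shared cause
# (e.g., a same-lot/same-vendor defect), which defeats the redundancy.

def one_of_two_failure(q, beta):
    """P(both pumps fail) under the beta-factor common-cause model."""
    q_indep = (1.0 - beta) * q   # independent part of each pump's failure
    q_ccf = beta * q             # common-cause part (hits both pumps at once)
    return q_indep ** 2 + q_ccf

q = 1e-3   # hypothetical per-pump failure probability
print(f"independent only : {one_of_two_failure(q, 0.0):.3e}")
print(f"10% common cause : {one_of_two_failure(q, 0.1):.3e}")
```

With β = 0.1, the pair’s failure probability rises from roughly 10⁻⁶ to roughly 10⁻⁴, about two orders of magnitude worse than the naive independence estimate, which is why common-cause screening matters in a rigorous PRA.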

Given a high-consequence undesirable event (such as loss of hydrocarbon containment), every path through the logic model that could lead to that event can be assessed. Should a low-probability action occur (perhaps a highly trained individual is distracted and fails to observe a change in the mud flow rate in vs. the mud flow rate out), then the subsequent low-probability actions needed to mitigate the undesirable event can be identified.
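For a small model, this path assessment can be sketched as a brute-force enumeration of basic-event combinations; every combination that triggers the top event is a candidate cut set. The three-barrier example below is hypothetical, not a real well model:

```python
from itertools import product

# Brute-force sketch: enumerate every combination of basic-event states
# and sum the probabilities of those that lead to the top event
# (loss of containment). Events and probabilities are hypothetical.
events = {
    "operator_misses_flow_change": 1e-2,
    "shut_in_fails": 1e-3,
    "bop_fails_to_close": 1e-3,
}

def top_event(state):
    # Containment is lost only if the kick goes unnoticed AND both
    # downstream barriers fail (AND logic over the three events).
    return all(state.values())

names = list(events)
p_top = 0.0
for combo in product([True, False], repeat=len(names)):
    state = dict(zip(names, combo))
    if top_event(state):
        # Path probability: multiply P(occurs) or P(does not occur)
        # for each event along this path.
        p = 1.0
        for name, occurred in state.items():
            p *= events[name] if occurred else 1.0 - events[name]
        p_top += p

print(f"P(loss of containment) = {p_top:.2e}")  # ~1e-8
```

Industrial PRA tools use minimal-cut-set algorithms rather than exhaustive enumeration, since real models have far too many basic events for brute force, but the bookkeeping per path is the same.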

Why BSEE?
In April 2015, I attended a conference that explored crossover technologies that might have applications in the space and energy sectors. Brian Salerno, director of the Bureau of Safety and Environmental Enforcement (BSEE), gave a presentation acknowledging that BSEE would need better tools to assess risk as operators moved to deeper drilling, higher temperatures and pressures, and less-well-understood environments and introduced new, emerging technologies. He suggested the need for a quantitative approach to risk management.

The outcome of several meetings was a US Government Interagency Agreement between BSEE and NASA signed in January 2016, formalizing a partnership between the two organizations for 5 years. Under this agreement, NASA will work with BSEE to develop a process for preparing PRAs for offshore deepwater drilling and production operations. Together with the oil and gas industry, we will evaluate whether the additional insights of a PRA provide meaningful information for the operators and contractors as well as for the regulator, BSEE.

NASA has a document that guides the preparation and execution of a PRA, referred to as the “Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners” (NASA document number SP-2011-3421). The first task that BSEE has given NASA is to rewrite the PRA Guide to be relevant to the oil and gas industry. NASA is scheduled to deliver the initial version of the document to BSEE by the end of the 2016 calendar year.

Projects With Anadarko
In addition to working with other government agencies, NASA has a special mechanism for working with commercial organizations. In situations where NASA has unique facilities, technologies, techniques, or experiences, it may enter into a reimbursable agreement (referred to as a Space Act Agreement) to perform work for the mutual benefit of the Space Act partner and NASA.

Anadarko Petroleum is working with suppliers to develop subsea equipment with working pressures of more than 15,000 psi for its Shenandoah field in the Gulf of Mexico. Jim Raney, Anadarko’s director of Engineering and Technology Global, wanted a set of eyes from outside the industry to look over his team’s approach to risk management for this activity. Anadarko entered into a Space Act Agreement with NASA in November 2014, enabling NASA to engage and participate in the project.

Anadarko introduced NASA to the unique layout of bowtie charts (an integration of fault trees and event trees) and to the barrier-analysis approach. Our eventual assessment for Anadarko was that all of their risk-management techniques were qualitative and, while excellently executed, might not capture low-probability, high-consequence events. NASA explained its use of quantitative PRA modeling to capture these types of events.

Anadarko was open-minded to the possibility that PRA might provide insights not otherwise available through their more traditional qualitative risk-management techniques. Because the project would require a blowout preventer (BOP) with a rated working pressure up to 20,000 psi, Anadarko asked NASA to prepare a PRA for a generic 20,000-psi BOP. The work began in October 2015.

The development of the BOP PRA was a true partnership; Anadarko provided world-class expertise on the design and operations of BOPs, and NASA provided world-class modelers and data analysts. The results of the BOP PRA model were presented to Anadarko management on 28 July 2016. A final report was delivered at the end of August.

While it is not my place to discuss any facet of the work that NASA did in partnership with Anadarko, I am able to state that Anadarko followed up the BOP work by asking NASA to perform a PRA of the dynamic positioning system being considered for the Shenandoah development. The PRA for that began in June and is ongoing.

NASA is just beginning to work with BSEE and the oil and gas industry. Our hope is that the benefits of a quantitative assessment of risk will both complement the industry’s current approach to risk management and help with risk-informed decision making. It has worked for NASA in the exploration of space. Could it also work for offshore deepwater drilling and production operations?

David Kaplan is a leader at the National Aeronautics and Space Administration (NASA) Johnson Space Center with more than 30 years of experience in aerospace engineering and management. He has been a project manager for Mars hardware and a space shuttle flight controller, and he managed the crew health-care equipment on the International Space Station. Most recently, Kaplan served as chief of the Quality Division at the space center. In that position, he managed the NASA Failure Analysis Laboratory, which is instrumental in detecting counterfeit parts and helping projects reduce the risks associated with fabrication and operations. Currently, he is involved in assessing the applicability of NASA’s quantitative risk-management techniques to the oil and gas industry. He may be contacted at david.i.kaplan@nasa.gov.