Exclusive Content
7 Oct 2015

Symposium Examines Complexity of Arctic Exploration

On 24 September, University of Houston Energy and ExxonMobil presented the first event of the 2015–16 Energy Symposium Series: Critical Issues in Energy, “Arctic Drilling: Untapped Opportunity or Risky Business?”

The symposium, held at the University of Houston, was moderated by Richard Haut, director of energy production at the Houston Advanced Research Center. The speakers were Kevin Harun, Arctic program director at Pacific Environment; Jed Hamilton, senior Arctic consultant at ExxonMobil Upstream Research Company; Bob Reiss, an American author and consultant on Arctic issues; and Peter Van Tuyn, managing partner at Bessenyey & Van Tuyn based in Anchorage, Alaska.

Watch the symposium here.

6 Oct 2015

OPITO Releases List of Candidates for Annual Safety Awards

OPITO, the Offshore Petroleum Industry Training Organization, has announced the short list of finalists for its Annual Global Oil and Gas Workforce Safety Awards.

INSTEP/Petronas in Malaysia, Oil Spill Response Limited (OSRL) in the UK, and McDermott in Dubai are vying for the Employer of the Year title, while Megamas Brunei, Petrofac Training Services, and PT Samson Tiara in Indonesia have made the final three in the Training Providers category of the awards.

The awards recognize companies that best demonstrate their commitment to building a safe and competent workforce through OPITO standards.

The winners will be announced at the OPITO Safety and Competency Conference (OSCC) on 3 November 2015 at the Dusit Thani Hotel, Abu Dhabi. Now in its 6th year, this global annual event is supported by headline sponsor Shell.

Expected to bring together approximately 500 senior figures from industry, government, regulators, and training providers from around the globe, the event will explore how the industry maintains competency and continues to keep its people safe in a lower-oil-price environment.

OPITO Group Chief Executive Officer David Doig

OPITO group chief executive officer David Doig said, “We have been thoroughly impressed with this year’s entries, the most we have ever received. It’s great to witness an increasing number of organizations looking to continually improve their safety training and develop competency.

“The OSCC Awards is our way of recognizing and rewarding their ongoing efforts to ensure a safe and competent workforce. It’s vital that we do what we can to maintain this momentum and not make any compromises when it comes to safety, even in sub-USD-50 oil.”

Entries were judged on how effectively they have adopted OPITO standards, the number of staff trained, geographical location, and examples of how the standards have delivered a tangible improvement in safety and competence in the workplace.

In March this year, INSTEP/Petronas was named the world’s first approved technical qualification centre to receive OPITO accreditation for maintenance training in mechanical, electrical, instrument, and control disciplines. Established in 1981, INSTEP/Petronas was set up primarily to train skilled technicians and operators to meet the rapid growth of the global petroleum industry. The organization has since trained more than 10,000 technical staff in Malaysia.

OSRL first gained OPITO approval in 2007 for its competency management system. The organization has worked closely with OPITO to adapt its training programs to meet the needs of a growing workforce across an extended geographical reach. OSRL has seen a number of improvements in safety, competence, and risk reduction since adopting OPITO’s global standards.

As an OPITO approved training centre, McDermott actively participates in OPITO development forums and training provider advisory groups. The organization recently adopted OPITO’s IMIST standard to ensure that every offshore worker is equipped with the necessary safety awareness and training to reduce risk and ultimately reduce the number of incidents.

In 2000, Megamas Brunei became the first training provider in the world to deliver the OPITO Tropical BOSIET and has since delivered the course in more than 20 countries. The organization recently celebrated 25 years without a lost-time incident and received international recognition from NEBOSH for its outstanding contribution to health and safety.

Petrofac Training Services established a 5-year health, safety, environment, and quality strategy to drive health, safety, and environmental improvement in line with OPITO standards. The organization recently developed a Competent Person Profile program to ensure staff skills and behaviors meet the required competencies needed to deliver OPITO courses. The program is currently being delivered in Aberdeen and will be rolled out globally for fire and marine training.

In 2004, Samson Tiara became the first OPITO-approved safety training provider in Indonesia. The organization has played a pivotal role in the growth and adoption of OPITO standards across Indonesia in an effort to educate government institutions, oil and gas regulators, and industry employers on the benefits of OPITO approved training.

OSCC is the only global event focused on safety and competency in the oil and gas industry. The event was introduced to bring operators, contractors and the supply chain together with training organizations to provide a forum for improving standards of safety and competency that protect the workforce and the industry’s reputation.

Read more about the OSCC here.

1 Oct 2015

SPE Partners With IOGP for Outstanding Young Professional Award

To inspire the next generation to get involved and solve problems through innovation, SPE and the International Association of Oil and Gas Producers (IOGP) have collaborated to launch the IOGP Outstanding Young Professional Award. The award will recognize achievements of E&P professionals who have fewer than 10 years of experience and have demonstrated outstanding talent, dedication, and leadership in at least one aspect of health, safety, security, the environment, and social responsibility. The winner will be announced at the SPE Health, Safety, Security, Environment, and Social Responsibility (HSSE-SR) Conference and Exhibition, which will be held in Stavanger 11–13 April 2016.

How to Apply
To nominate someone, you must be an SPE member and have more than 10 years of experience. Nominations are open to all young professionals, both SPE members and nonmembers, who have fewer than 10 years of experience. Nominees should:

  • Be well-respected and in good standing within the community
  • Serve as a role model for other young professionals
  • Demonstrate noteworthy professional and personal achievement
  • Demonstrate commitment to excellence and proven leadership
  • Exhibit expertise, passion, and the ability to inspire others

Nominators should include the nominee’s CV and complete the application form here. The deadline for nominations is 4 November.

The award committee, which consists of SPE members and IOGP officials, will select five finalists by 1 December. Each finalist will then be asked to submit a short video presentation in the style of a TED talk (no longer than 5 minutes and no larger than 1 GB) that addresses the issue, “How innovation in HSSE-SR can make the oil and gas industry more sustainable and more acceptable to the wider world.” Creativity is encouraged in the making of the videos. All videos will be due by midnight on 8 January, and the award winner will be notified on 1 February.

At the Conference
The winner will be announced at the SPE HSSE-SR Conference in Stavanger and receive:

  • IOGP Outstanding Young Professional Award certificate and trophy
  • Complimentary registration to the conference
  • A one-year membership to SPE
  • An invitation to join the award and young professional committees for the 2018 SPE HSSE-SR Conference
  • Recognition on the IOGP website and newsletter and on the SPE conference website

The 25th anniversary of the SPE HSSE-SR Conference and Exhibition will bring together experts from all over the world to share new ideas, process improvements, technological advancements, and innovative applications to enhance HSE performance. The theme for this year’s conference is “Sustaining Our Future Through Innovation and Collaboration.”

Read more about the conference here.

25 Sep 2015

Understanding Communities: A Key to Project Success

Many factors can influence public perception of the oil and gas industry and the projects it develops. Increasingly, public acceptability can make or break the license to operate. Engineers and other technical leaders frequently view the problem as one that can be overcome with public education.

But education can only be successful if the industry first achieves a level of trust within the community. Building trust requires developing an understanding of the community. One of the more important approaches is for operators to understand the needs and expectations of the communities where they operate and for project teams to educate themselves about the expectations of the local community that will be affected by a project.

Each community is unique; the challenges are multifaceted. SPE HSSE-SR Technical Director Trey Shaffer and SPE PFC Technical Director Howard Duhon will present Understanding Communities: A Key to Project Success during a topical luncheon at SPE’s Annual Technical Conference and Exhibition (ATCE) in Houston. Shaffer and Duhon will explore some notable industry efforts in community engagement and education as well as critical success factors for effective community engagement.

The PFC/HSE topical luncheon will be from 1215 to 1345 on 29 September at the George R. Brown Convention Center, in Bush Grand Ballroom A.

Learn more about ATCE and register here.

11 Sep 2015

Simplification: A Moral Imperative

As early as 500,000 years ago, man was using fire to light his cave. This was a very inefficient source of light, yielding about 0.6 lm-h per 1,000 Btu of energy.

A step-change improvement occurred about 40,000 years ago with the burning of animal fats and oils. Candles became common about 4,000 years ago, but burning wax to get light was also inefficient, yielding only 4 lm-h per 1,000 Btu.

This type of resource was also expensive. It has been estimated that a common man would have had to work an entire day to afford a few minutes of light. Unless you were wealthy, the night was a dark and dangerous time.

It was thousands of years before the next significant improvement occurred when sperm whale oil came on the scene in about 1700, yielding 10 times as much light per Btu of energy at a much lower cost. A day’s work would buy 4 hours of light. A downside was that many men died while harvesting whale oil, and, after 150 years of its use as a fuel for lighting, the sperm whale was nearing extinction.

The oil industry saved the sperm whale. The discovery of significant quantities of oil in Pennsylvania and elsewhere in the 1850s and beyond and the development of drilling and refining methods created a much lower-cost and more abundant source of energy. One day of labor yielded 75 hours of light.

The next and most dramatic improvement was the development of electric light, which yields about 4,000 lm-h per 1,000 Btu. One day of work now earned 10,000 hours of light.

Light was available to the common man in nearly unlimited quantities.

People who are fortunate enough to live in developed countries enjoy unlimited light, which is not the case everywhere in the world. Availability of affordable energy is perhaps the largest divider between the haves and have-nots today.

The Complexity of Light
For the end user, switching on a light bulb is much simpler than lighting a fire. But the systems behind the bulb are complex. To get light from an electric bulb, the following are needed:

  • Mining for fuel (gas, coal, oil, and uranium)
  • Power plants to generate the electricity
  • Mining industries to obtain raw materials for light bulbs, wiring, and other components
  • Transmission and distribution systems to deliver the generated electricity to homes and businesses
  • Light bulb manufacturing, distribution, and retail sales
  • Electrical wiring systems in buildings
  • An advanced political/social system that enables all of the above

The benefits of light are derived from complex systems that are mostly hidden from view.

Strength is Also Weakness
Oil has had a much greater impact on the world than simply providing light. With the age of oil has come cars, trains, planes, modern medicine, and plastics. Life expectancy in 1850 was less than 40 years. Since then, the discovery and use of oil has enabled the innovations and advancements that have added 40 years to our life expectancy.

Systems theory guarantees that there will be some downside to this kind of success story; every great strength is also a great weakness. The success of the oil industry also poses its greatest challenges, one of which is to keep it going.

The world has changed because oil/energy has been cheap for most of the past 160 years. And the world has become dependent on oil.

But keeping energy affordable is going to get harder.

Although there is a great deal of oil and gas in the world, much of it is expensive to produce, as has been painfully apparent over the past few months.

The world is not lacking in oil, but it may be lacking in oil that can be affordably mined—unless we change the way we mine it.

How Much Oil Do We Need?
The current glut of oil supply is unlikely to last long. We need to find a lot of oil to keep this world humming.

The world consumes about 94 million BOPD. Historically, consumption has increased by about 1 million BOPD annually. The production decline rate of existing wells is about 5 million BOPD annually. Therefore, we need to bring on 6 million BOPD of new production every year just to stay even. That is the production equivalent of Saudi Arabia every 2 years, 60 major deepwater developments annually, or the production equivalent of six North Dakotas annually.
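The replacement arithmetic above can be checked with a quick back-of-the-envelope script. This is a sketch only: the per-producer capacities are round numbers assumed here for illustration (Saudi Arabia roughly 12 million BOPD, a major deepwater development roughly 100,000 BOPD, North Dakota roughly 1 million BOPD), not figures taken from the article.

```python
# Back-of-the-envelope check of the replacement math described in the text.
consumption = 94e6          # current world consumption, BOPD
annual_demand_growth = 1e6  # historical consumption growth, BOPD per year
annual_decline = 5e6        # production decline of existing wells, BOPD per year

# New production needed each year just to stay even.
new_production_needed = annual_demand_growth + annual_decline
assert new_production_needed == 6e6  # 6 million BOPD per year

# Approximate reference capacities (assumed for illustration), BOPD.
saudi_arabia = 12e6
deepwater_project = 0.1e6
north_dakota = 1e6

print(new_production_needed * 2 / saudi_arabia)   # Saudi Arabias per 2 years
print(new_production_needed / deepwater_project)  # deepwater projects per year
print(new_production_needed / north_dakota)       # North Dakotas per year
```

Under these assumed capacities, the script reproduces the article’s equivalences: about one Saudi Arabia every 2 years, about 60 major deepwater developments per year, and about six North Dakotas per year.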

And this has to be done in an environment in which the oil industry faces a great deal of opposition and where many areas are off-limits to oil production. If it is true that the easy oil has been found, then extracting the vast quantities of needed new oil will become increasingly difficult and expensive.

Moral Case for Simplicity
In the February and April issues of Oil and Gas Facilities, I wrote articles about complexity and the SPE Complexity Work Study Group. At the beginning of the study, I viewed complexity as an economic issue that affects project viability and profitability. Now, I see it as a moral issue as well.

World political stability and the economic progress needed to pull impoverished people into the middle class depend on affordable energy.

Eventually, the world is likely to transition to renewable energy, but it will not happen soon. For at least the next couple of decades, the affordable energy source must be largely oil and gas.

We seem increasingly incapable of delivering projects successfully. Obviously, part of the problem is that our projects are inherently more complex than they used to be. For example, a complex project is necessary to develop a complex reservoir in a new deepwater basin with no infrastructure.

But we also add unnecessary complexity. To be successful going forward, we must do a better job of managing inherent complexity and we must shed the baggage of unnecessary complexity.

Sources of Complexity—Systems Theory
The sources of complexity, which I have discussed in previous columns, include

  • Inherent technical complexity
  • Inherent social/political complexity
  • Standards, specifications, and regulations
  • Decision making
  • Design team preferences
  • New technology
  • Safety culture

Another source of complexity, which is largely hidden from view, is the effect of increases in the size of project teams. Although it is recognized that large teams and multiple teams may create interface issues, the challenges are greater than most people realize.

Early in my career, when I started working on small developments in the Louisiana swamps and shallow waters offshore, projects were less complex: one facilities engineer generally understood the whole project. Surprises were few and usually inconsequential. Everyone with whom I interacted was located in the same building, across town, or at most, a short flight away. All construction was taking place within driving distance.

Designs were simpler, too. For example, control systems were defined on simple loop sketches.

I recently listened to an interview of an Apple executive. He was asked why the iPhone was being built in China: Why not spend a little more to build them in the United States?

Paraphrasing his response, he said that it would be impossible to build them in the US because it would be impossible to organize the requisite skills in one place as the company can do on the massive manufacturing campuses in China.

I immediately related to that comment. I know little about the building of an iPhone, but I know that we are hampered in our industry by the need to coordinate with design and construction groups spread all over the world. I do not think that we fully understand the impact of having to coordinate the work of hundreds of people in multiple teams on a global scale.

More People, Less Productivity
When we have more work to do, we often add more people to the team or teams to the project. But adding people does not necessarily result in increased productivity.

Baron and Kerr, in their 2003 book Group Process, Group Decision, Group Action, described the factors involved as

Team Production = Team Potential – Coordination Losses – Motivation Losses

Although team potential may increase linearly with the number of people on the team, the increases in coordination losses and motivation losses are nonlinear. A point is reached at which adding more people results in more losses than gains.
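As a minimal sketch of that relationship, suppose potential grows linearly with headcount while coordination losses grow with the number of pairwise communication channels, n(n-1)/2. The loss function and parameters here are hypothetical, chosen only to illustrate the nonlinearity; motivation losses are omitted for simplicity.

```python
# Illustrative-only model of Baron and Kerr's relationship:
#   Team Production = Team Potential - Coordination Losses - Motivation Losses
# Potential scales linearly with team size n; coordination losses are assumed
# to scale with the n*(n-1)/2 pairwise communication channels. Motivation
# losses are left out to keep the sketch minimal.
def team_production(n, output_per_person=1.0, loss_per_channel=0.01):
    potential = n * output_per_person
    coordination_losses = loss_per_channel * n * (n - 1) / 2
    return potential - coordination_losses

# With these assumed parameters, production rises, peaks, and then declines
# as more people are added.
for n in range(1, 202, 50):
    print(n, round(team_production(n), 1))
```

With the parameters assumed above, output peaks near 100 people and falls to zero by about 200: past the peak, each additional person costs more in coordination than he or she contributes.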

What do these losses look like in practice? Some examples that I have experienced are

  • A process engineering team designs compression systems to be run in parallel, but the control systems engineer does not include load sharing in the design. As a result, the compression systems will not run in parallel.
  • A vendor uses an outdated version of the piping and instrumentation diagrams (P&IDs), and the wrong grade of pipe is installed.
  • The subsea team misinterprets the topsides chemical design pressure as the operating pressure and undersizes an umbilical tube.
  • The commissioning engineer adds a valve to the P&IDs (for isolation during commissioning), but the process engineer deletes it from the next version of the diagrams because he cannot see a reason for it.
  • A section of pipe rack reserved for future expansion is used for minor field-run piping.

We light the world. In the future, we must learn to light it more simply and efficiently.

PFC Program at ATCE
The discussion of complexity will be continued in greater depth during the Projects, Facilities, and Construction (PFC) dinner on 28 September at the SPE Annual Technical Conference and Exhibition (ATCE) in Houston.

The PFC-related offerings at the ATCE have increased over the past few years, and this year’s program features the strongest lineup yet:

  • Technical paper sessions (Flow Assurance, New Technology, and Field Experience)
  • Special sessions (Gas Scrubber Design and Validation for Robust Separation Duty, Error Analysis and Uncertainty in Flow Assurance and Facility Design, and Managing the Future Impact of Current Cost Cutting)
  • Training courses (Water Treating for Hydraulic Fracturing, Separator Design, and Understanding Communities)
  • Topical luncheon (Understanding Communities)

Speakers at the training course and topical luncheon will discuss community engagement, an important topic.

Many of you may still believe that educating the public will melt away its opposition to the oil industry. This is utterly incorrect. To defuse community opposition, we need to understand the communities in which we operate.

Attend ATCE, meet up with old friends, and make a few new ones.

Read more about ATCE and get tickets to the PFC dinner here.

Howard Duhon is the systems engineering manager at GATE and the SPE technical director of Projects, Facilities, and Construction. He is a member of the Editorial Board of Oil and Gas Facilities.

11 Sep 2015

BSEE Presents Program To Determine Best Available, Safest Technologies

Over the past several months, the Bureau of Safety and Environmental Enforcement (BSEE), in cooperation with industry, has developed a data-driven and transparent program for determining best available and safest technologies (BAST). On 12 November, BSEE will present the BAST Determination Process at an event in Houston.

“One of the ways to ensure safety and reduce risk in outer continental shelf areas is through the use of available critical technologies that have been determined to be the best available and safest. The requirement encourages innovation and continuous improvement and guarantees development in the safest, most responsible manner,” wrote Doug Morris, BSEE chief of offshore regulatory programs, on the bureau’s website.

BSEE and industry stakeholders have worked closely to develop the process that informs and enables the evaluation and determination of BAST in the offshore environment. The Houston event will be an opportunity to hear from the regulator on the path forward for BAST. The process will be presented by BSEE representatives Doug Morris, Joe Levine, and others.

The event will be from 0800 to 1200 at the Hilton Houston North. Registration is free, but seating is limited to 300.

Register for the presentation here.

20 Aug 2015

Safety Case in the Gulf of Mexico: Method and Benefits for Old and New Facilities

The purpose of the Bureau of Safety and Environmental Enforcement (BSEE) Safety and Environmental Management Systems (SEMS) is to enhance safety of operations in the Gulf of Mexico (GOM). One of the principal SEMS objectives is to encourage the use of performance-based operating practices. However, the current US regulatory framework for GOM operations does not provide adequate tools to focus on specific risks associated with a facility. The adoption of the safety-case regime would steer operations toward this goal.

This paper discusses the application of the safety-case concept and how the operator can demonstrate that the major safety and environmental hazards have been identified, that the associated risks have been estimated, and how these risks are managed by achieving a target level of safety. Throughout the safety-case road map, the identification of safety-critical elements (SCEs) and associated performance standards represents one of the cornerstones of asset-integrity-management (AIM) strategy.

The paper discusses how application of the safety-case regime for existing facilities would highlight particular risks that may have been misjudged, taking into account the current state of installations and the actual operational procedures in place. For new facilities, the introduction of the safety case at the early stages of design would ease the integration of the overall risk-management (RM) plan at each level of organization.

General Safety-Case Approach
The safety-case approach is referred to generally as part of an objective-based (or goal-setting) regime. Such regimes are based on the principle that legislation sets the broad safety goals to be attained and the operator of the facility develops the most appropriate methods of achieving those goals. A basic tenet is the premise that the ongoing management of safety is the responsibility of the operator and not the regulator. The term “safety case” arises from the Health and Safety Executive in the UK, where the safety-case regime was implemented after the Piper Alpha accident in 1988. Most of the performance-based regulations have adopted elements of the safety-case approach. Moreover, many operators have included safety-case components as part of their companies’ requirements and have integrated them in their general management system.

Fig. 1

The safety-case regime is a documented demonstration that the operator has identified all major safety and environmental hazards, estimated the associated risks, and shown how all of these risks are managed to achieve a stringent target level of safety, including a demonstration of how the safety-management system in place ensures that the controls are applied effectively (Fig. 1). The safety case is a standalone document, based on a set of several subsidiary documents, undertaken to present a coherent argument demonstrating that the risks are managed to be as low as reasonably practicable (ALARP). Fig. 1 presents the general principle of the safety-case development process.

Current RM Regime in GOM
All leasing and operations in the GOM portion of the outer continental shelf are governed by laws and regulations intended to ensure safe operations and preservation of the environment while balancing the US need for energy development. Since October 2011, BSEE has enforced these regulations and periodically updated the rules as the agency responsible for comprehensive oversight, safety, and environmental protection of all offshore activities.

The original SEMS rule, under the Workplace Safety Rule, made mandatory the application of the following 13 elements of the American Petroleum Institute (API) Recommended Practice (RP) 75:

  • General provisions: for implementation, planning, and management review and approval of the SEMS program
  • Safety and environmental information: safety and environmental information needed for any facility (e.g., design data, facility process such as flow diagrams, mechanical components such as piping, and instrument diagrams)
  • Hazards analysis: a facility-level risk assessment
  • Management of change: program for addressing any facility or operational changes including management changes, shift changes, and contractor changes
  • Operating procedures: evaluation of operations and written procedures
  • Safe work practices: e.g., manuals, standards, rules of conduct
  • Training: safe work practices and technical training (includes contractors)
  • Assurance of quality and mechanical integrity of critical equipment: preventive-maintenance programs and quality control
  • Prestartup review: review of all systems
  • Emergency response and control: emergency-evacuation plans, oil-spill contingency plans, and others in place and validated by drill
  • Investigation of incidents: procedures for investigating incidents, implementing corrective action, and following up
  • Audit of safety- and environmental-management-program elements: strengthening API RP 75 provisions by requiring an initial audit within the first 2 years of implementation and additional audits at 3-year intervals
  • Records and documentation: documentation required that describes all elements of the SEMS program

Introduction of Safety Case for Operations in the GOM
Analogies Between Strengths and Weaknesses of SEMS Rule and Safety-Case Development. As part of BSEE communication, the four principal SEMS objectives are the following:

  • Focus attention on the influences that human error and poor organization have on accidents.
  • Continuous improvement in the offshore industry’s safety and environmental records.
  • Encourage the use of performance-based operating practices.
  • Collaborate with industry in efforts that promote the public interests of offshore worker safety and environmental protection.

SEMS is promoted as a nontraditional, performance-focused tool for integrating and managing offshore operations. However, the current US regulatory framework for offshore operations in the GOM does not provide adequate tools to focus on the specific risks associated with a facility. The development of the SEMS program is generally focused on the provision of the 13 elements required in API RP 75 rather than on a consistent narrative in which the operator demonstrates how effective the controls and management system in place are against the identified risks.

Fig. 2

Nevertheless, the 13 elements of API RP 75 could be seen as a skeleton for the development of the safety-case regime. The links between them are naturally identifiable, but significant efforts would be necessary to meet the safety-case philosophy and the ALARP concept in particular. Fig. 2 presents a correlation between the 13 elements of API RP 75 and the main steps of safety-case development.

As shown in Fig. 2, the elements of API RP 75 are truly part of the components of safety-case development. However, as is also obvious in Fig. 2, critical elements are missing, such as the ALARP process as part of the risk-reduction effort, an unambiguous strategy for the identification of SCEs, and the development of the associated performance standards. Moreover, the safety-case regime advocates a clear demonstration of how the decision process is based on the output of each development stage. Such a continuous link among API RP 75 elements is missing.

The SEMS vulnerabilities are primarily related to the lack of targets (or how to define targets) as part of a performance-based approach.

Use of Safety Case for the Development of RM/AIM Plans
Asset integrity is widely considered a key to managing major accidents. It is an outcome of good design, construction, and operating practices. It is commonly accepted that the AIM process follows a standard continual-improvement cycle (the Deming cycle): plan, do, check, act.

As part of the first step, it is crucial to establish the objectives and processes necessary to deliver the expected results (plan). These different aspects cover factors outside the organization, such as the applicable legislation, codes, and standards, as well as key stakeholders, and internal factors, such as the company RM standards, processes, and targets or roles and responsibilities.

Once the plan is defined and the objectives are clearly stated, it is important to implement the plan—execute the process to deliver the results (do). This stage is based on a risk-assessment process from hazard identification to risk analysis, to provide a risk evaluation of the facility.

The actual results (measured and collected in the “do” stage) are then studied and compared against the expected results (the targets or goals from the “plan” stage) (check). This phase of risk treatment involves considering all the feasible options and deciding on the optimal combination to minimize the residual risk as far as reasonably practicable.

Once the decisions are made, on the basis of an ALARP process, the solutions are implemented (act). It is also crucial to monitor and periodically review the approach taken.

The safety-case process involves a similar development cycle; therefore, it is natural to promote the development of RM/AIM plans and the safety case in parallel.

For existing facilities, existing RM/AIM plans would be challenged and revised toward a continuous improvement of their effectiveness. Application of the safety-case regime for existing installations would highlight particular risks that may have been misjudged, taking into account the current state of the installations and the actual operational procedures in place. Output from verification activities would lead to the identification of corrective actions for existing assets. This type of revision could be seen as a significant effort, but it would actually help the operator to optimize its AIM strategy and spend its resources more effectively. This approach would also give the regulator a quantified picture of current operations in the GOM. Because all facilities would be evaluated against the same performance targets, it would be easier for the operator to prioritize the critical aspects of each facility.

For new facilities, the introduction of the safety-case regime early in the project would naturally lead to an optimized AIM philosophy, strategy, and plan. The operator would be able to anticipate the efforts to be deployed for the entire facility life cycle. The introduction of the safety-case regime at the early stages of design would ease the integration of the overall RM plan at each level of organization.

This article, written by Special Publications Editor Adam Wilson, contains highlights of paper OTC 25957, “Safety Case in Gulf of Mexico: Method and Benefits for Old and New Facilities,” by Julia Carval, SPE, and Bibek Das, SPE, Bureau Veritas North America, prepared for the 2015 Offshore Technology Conference, Houston, 4–7 May. The paper has not been peer reviewed. Copyright 2015 Offshore Technology Conference. Reproduced by permission.

20 Aug 2015

Safe Handling and Disposal of Nanostructured Materials

Nanostructured materials are substances that contain at least one dimension in the nanometer-size regime and can include nanoparticulate materials such as quantum dots, nanofibrous materials such as carbon nanotubes, and nanoporous material such as activated carbon. Potential applications of these novel materials in the oil and gas industry include wastewater treatment, antimicrobial additives, and multifunctional coatings. These applications cause concerns regarding safe handling and disposal of the materials. This paper provides a first-hand perspective on the appropriate handling of nanomaterials in a laboratory setting.

After several cycles of technological advances in fields such as polymers, electronics, and the energy sector, the world is currently undergoing a nano revolution, wherein materials with increasingly smaller dimensions are generating considerable interest in the interdisciplinary technology community. Such materials, known as nanomaterials or nanostructured materials, typically have at least one dimension in the nanometer range. These materials have been found to possess many useful properties, such as high strength, high surface area, abrasion resistance, and tunable chemical reactivity. They are currently being researched extensively or actively proposed for related applications in critical realms (e.g., aerospace, defense, medicine) such as aircraft composites, electronic devices, biomedical sensors, and coatings. This trend makes it evident that nanomaterials and nanotechnology, the science and application of such material or the manipulation of material at molecular or atomic scales, are here to stay and will grow in popularity. A wide range of economic institutions worldwide estimate the global market for nano-related products and technologies to be currently worth more than USD 1 trillion.

As with any new material or technology, there will be unknowns such as questions related to safety, economy of handling and processing, and effect on the environment. Therefore, the increasing use of nanomaterials in research laboratories and industries makes it essential to understand and address these questions better.

This paper focuses on prevention of possible safety issues related to nanomaterials through a review of current good practices and regulatory developments as applied to an industrial laboratory setting. As the saying goes, “Prevention is better than cure.” As with any material or activity associated with human endeavor, risks exist and can always be addressed by the judicious use of appropriate protective or preventive measures in the research-and-development phase and during manufacturing and commercialization.

Potential Risks of Occupational Exposure to Nanomaterials
Various types of nanomaterials have their own unique sets of physical, chemical, and biological properties. For example, nanoparticulate powders can be easy to aerosolize and disperse, even unintentionally. Because these particles are very small, even a small quantity of the material can be dispersed over a wide area. Liquids containing dispersed nanomaterials (nanofluids) can sometimes be less dispersible because, unless pressurized, they cannot be dispersed over large areas as easily as the dry particles. Pressurized aerosol containers of nanodispersions (in a liquid or gaseous carrier), on the other hand, are energized and potentially are even more dispersible than dry nanoparticles.

Given that nanomaterials are a new class of widely used materials, only sparse definitive data exist on their effects on human beings. A person can be exposed to these materials through several key routes: oral ingestion, inhalation, skin contact, and injection. The literature suggests that a person coming into contact with finely dispersed particulate material can suffer mild or chronic symptoms (depending on the mode and duration of exposure). These range from respiratory discomfort and dermatitis to lung or eye damage (especially for prolonged exposure or exposure to high doses of the material). Several of these symptoms have been recorded in the literature for various micrometer-sized particles. Asbestos is another material that has been studied extensively and can provide an analog for the potential risks of exposure to nanomaterials.

Some common exposure routes and resultant consequences exist if precautions such as the use of personal protective equipment (PPE) are not taken. Initial damage arising from external exposure to nanomaterials (in the form of dispersions, aerosols, or powders) can translate into more-complex and -unpredictable consequences within the body of a human being. Exposure to nanomaterials can be prevented easily with some commonly used PPE such as safety glasses, laboratory coats, face masks, and gloves.

What Is Nanosafety?
Given the development of several new types of nanomaterials, the lack of definitive data on their harmful effects, and the availability of a wide range of preventive safety measures, approaches need to be developed to promote better safety when working with these materials. Such an endeavor results in safe working conditions for personnel, which can be termed “nanosafety.” Among the most common ways to promote nanosafety is prevention by the use of widely available and commonly used PPE and suitable engineering controls. A hazard-risk assessment usually helps identify opportunities for designing such controls. The use of PPE along with engineering controls effectively reduces external exposure and subsequent internalization of nanomaterials by personnel. One cannot emphasize enough the importance of these simple measures.

It must be noted that merely using PPE and engineering controls would not be sufficient to promote nanosafety. The authors of this paper consider nanosafety to be a philosophy and a responsibility to work with nanomaterials in a careful manner, guided by sound scientific principles and common sense.

Regulatory Activity: Emerging Trends and Challenges
Although general guidelines and regulations pertaining to the safe handling and disposal of chemical or hazardous wastes exist, initiatives addressing the unique requirements of nanomaterials are still in their infancy. Several regulatory organizations are working to address them. In late 2014 and early 2015, the US Environmental Protection Agency (EPA) began requiring some basic information regarding nanomaterials from manufacturers under the Significant New Use Rule of the Toxic Substances Control Act (TSCA). Moreover, in the US, the Nanoscale Materials Stewardship Program introduced by the EPA under the auspices of the TSCA still regards nanomaterials as conventional chemicals, despite differences in their properties. The Registration, Evaluation, Authorization, and Restriction of Chemicals program rolled out in the EU tends to focus on bulk chemicals. Consequently, the smaller quantities of nanomaterials and their related wastes tend to “fall through the cracks.” While it is likely that not all nanomaterials are harmful, several categories of these materials will be capable of having a negative effect on human health and the environment, either in isolation or in a mixture with more-conventional materials and chemicals (e.g., polymer nanocomposites). Challenges in effectively evaluating the hazards of nanomaterials contribute to these gaps; the issues could potentially be addressed through a combination of improved toxicology-test protocols and computational methods. Any improvements to the current regulatory stipulations may take some time to be formulated and implemented. Meanwhile, one way to handle this challenge is to voluntarily adopt suitable good practices, coupled with existing regulations and intracompany policies. The key will be to err on the side of caution wherever possible.

Good Practices in Action
Until nanosafety regulations are in place, some voluntary good practices should be adopted, based on currently used laboratory and industrial safety protocols. On the basis of literature published by the National Institute for Occupational Safety and Health, some suggested universal guidelines pertaining to nanosafety can include

  • By default, treat nanomaterials as hazardous chemicals, and learn about related technical literature before working with them.
  • When new to the field, employees should be provided with adequate training.
  • Employers should work toward identifying tasks, processes, and equipment involved in handling nanomaterials, especially in their native forms (e.g., bulk powders). Workplace profiles of exposure to nanomaterials should be compiled regularly.
  • Ongoing education programs pertaining to nanosafety should be in place and inform employees periodically about the latest developments in this field.
  • Plan the experiment or process beforehand, and obtain the required amounts of nanomaterial; this reduces subsequent waste and disposal problems.
  • Be aware of neighboring personnel when working with nanomaterials, and always confine or restrict the workspace where nanomaterials are handled.
  • Use suitable engineering controls and proper PPE specific to the materials and processes in question.
  • Properly dispose of any waste.
  • Wash hands (even after removing gloves) with soap and water before handling food or working outside the laboratory.
  • Regularly monitor changes in the organization’s policies, industry practices, and emerging regulatory activity, and comply as required.

Fig. 1

Fig. 1 shows that the type and quantity of nanomaterial, the processes employed, the existing infrastructure, and (above all) the human factor all play a major role. The flow chart must be customized for specific nanomaterial-related activities.

This paper attempts to present a detailed overview of safe handling of nanomaterials in an industry setting, from a laboratory practitioner’s viewpoint. Increased usage of nanomaterials leads to increasing amounts of related waste, also termed “nanowaste,” with as-yet-unknown ramifications.

Nanowaste is currently treated as a conventional hazardous chemical in academic and industrial entities working with these new materials, though not all nanomaterials are toxic or harmful. However, owing to size-dependent differentiation of the properties of materials, nanomaterials and related waste require certain unique additional safety measures. Moreover, nanomaterials can consist of various compositions and chemistries that must be addressed separately. Many good practices are based on current precautions used when handling hazardous chemicals and involve general common sense.

This article, written by Special Publications Editor Adam Wilson, contains highlights of paper OTC 25975, “Safe Handling and Disposal of Nanostructured Materials,” by Pavan M.V. Raja, SPE, Monica Huynh, and Valery N. Khabashesku, SPE, Baker Hughes, prepared for the 2015 Offshore Technology Conference, Houston, 4–7 May. The paper has not been peer reviewed. Copyright 2015 Offshore Technology Conference. Reproduced by permission.

20 Aug 2015

Managing Marine Geohazard Risks Throughout the Business Cycle

Today, the industry is faced with entry into frontier areas with little prior published understanding and potentially complex slope and deepwater settings. In such settings, early effort in the exploration-and-production cycle is required to allow appropriate data to be gathered and assessed. In order to address these issues, BP has adopted a methodology to manage geohazard risks over the life of the license.

In 1964, the rig C.P. Baker was lost in the Gulf of Mexico in a shallow-gas blowout with the loss of 22 lives. That accident, and similar events in the industry around the same time, triggered the development of geophysical site investigation or geohazard methodologies to support safety in tophole drilling and field development through detailed assessment of seabed and near-surface geology. To this end, the Hazards Survey in North America and the Site Survey in Europe became the staple means for evaluating predrill or predevelopment conditions over the following 30 years.

The technologies used in these surveys have continued to be developed. These approaches have generally served the industry well for 50 years. However, as the industry has progressed from operations generally on the continental shelf out onto the continental slope and into ultradeep water, the geohazard issues that need to be addressed by the industry have grown in variety and complexity.

While the scope of possible sources of geohazards has expanded, so has the potential size of license areas to be studied.

If conditions across such blocks on the continental shelf or in ultradeep water were homogeneous, it may be acceptable to continue with the traditional approach of the site survey. However, the conditions in many large blocks are far from homogeneous, and, therefore, a site survey would deliver little understanding of the variability in geohazard conditions and processes that may have implications for the immediate safety of drilling.

The longevity of production operations in a license or field has also been gradually extended through the implementation of improved-recovery techniques. BP’s Magnus field was discovered in the far north of the UK continental shelf in 1974. At the time of first oil in 1983, the projected field life extended to the mid-1990s. Yet another phase of production drilling will be starting from the platform in 2015, and field life is now projected out to the 2020s, even though the last high-resolution seismic data beneath the platform were acquired in 1984. Before restarting drilling, a prudent operator would ask, “What is the possibility that geohazard conditions have changed over the last 30 years?” The prudent operator, therefore, needs to revisit geohazard risks and the validity of site-investigation data across the full life of the license, from entry to field abandonment, and to update geohazard understanding consistently across the whole time period.

Fig. 1

This paper, therefore, sets out an integrated approach to address management of geohazard risks across the life of a license (Fig. 1), an approach that seeks to consistently update understanding of what geohazards might be present, and, thus, where possible, seeks to avoid them directly or mitigate their presence.

License Entry
Upon entry to a new license area, existing seismic or published geoscience information upon which to build understanding of geohazard complexity may be sparse.

A consistent approach for the rapid evaluation of the potential degree of geohazards complexity before, or upon, entry to a new license area uses an evaluation of four fundamental geoscience attributes: evidence for presence of shallow hydrocarbons, recent-deposition rate (over the last 1 million years), structural complexity, and underlying seismicity. A final attribute is the quality of the database available to review the area: The sparser or poorer the data available, the greater the interpretive uncertainty. Each of these five factors is scored by use of a consistent scoring mechanism, and they can be plotted on a pentagon where the greater the area finally shaded, the greater the fundamental level of underlying geohazard risk.
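
The paper describes plotting the five attribute scores on a pentagon, with a larger shaded area indicating greater underlying geohazard risk. As a sketch of how that shaded area could be computed, the following assumes a hypothetical 0–5 scoring scale (the actual scoring mechanism is not detailed here); each pair of neighboring spokes, 72 degrees apart, bounds a triangle of area 0.5 · r1 · r2 · sin(72°).

```python
import math

# Hypothetical attribute order; the scale (0 = low, 5 = high) is an assumption.
ATTRIBUTES = ["shallow hydrocarbons", "recent deposition rate",
              "structural complexity", "seismicity", "data quality"]

def pentagon_area(scores):
    """Area of the radar-chart polygon spanned by five spoke scores.

    Adjacent spokes are 2*pi/5 apart; neighboring scores r1, r2 bound a
    triangle of area 0.5 * r1 * r2 * sin(2*pi/5). Summing the five
    triangles gives the shaded area.
    """
    if len(scores) != 5:
        raise ValueError("exactly five attribute scores expected")
    angle = 2 * math.pi / 5
    return sum(0.5 * scores[i] * scores[(i + 1) % 5] * math.sin(angle)
               for i in range(5))

benign = pentagon_area([1, 1, 1, 1, 1])       # quiet shelf setting
complex_ = pentagon_area([4, 5, 3, 4, 2])     # active slope setting
print(benign < complex_)  # the larger shaded area flags higher risk
```

Because the area depends on products of adjacent scores, a single high attribute among otherwise low ones inflates the area less than several correlated high scores, which matches the intent of flagging compound risk.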

Geohazard Baseline Review
After initial fundamental evaluation of risk before or upon entry, it is normal to expect that licensewide exploration 3D data acquisition will be a first step to support the exploration effort—if this is not already in place.

Delivery of a geohazards or short-offset volume at this stage is a simple and effective byproduct. Indeed, in the case of wide-azimuth data acquisition, delivery of such a product may be a key intermediate quality-control output to delivery of the final product and may be of significantly greater value to the geohazards interpreter than the final volume used by the explorer.

Once processed, 3D data are available to produce a complete geohazards baseline review (GBR) of the delivered volume. Such baseline reviews need to be performed and communicated efficiently to the exploration team in a way that supports eventual prospect ranking and delivered early enough in the exploration cycle to affect choice of drilling location.

Production of a GBR provides the underlying framework for all later geohazard studies to be built and data requirements to be defined. The GBR, therefore, should be revisited and updated regularly.

Geohazard-Risk-Source Spreadsheet (GRSS)
A GRSS captures individual sources of geohazards, the threat that each may pose to operations, and their effect on those operations. These then form a threefold semiquantitative evaluation of the interpretive confidence that a hazard is present, the likelihood of that geohazard event occurring, and the effect of that event, to establish an initial definition of operational risk from the individual source of the hazard.
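
The threefold evaluation can be illustrated with a small sketch. The 1–5 scales, the multiplicative roll-up, and the example hazard sources below are assumptions for illustration; the paper does not specify the GRSS scoring arithmetic.

```python
# Illustrative sketch of the GRSS threefold semiquantitative evaluation:
# interpretive confidence that a hazard is present, likelihood of the
# event, and its effect on operations. Scales and roll-up are assumed.
from dataclasses import dataclass

@dataclass
class GeohazardSource:
    name: str
    confidence: int   # 1 = speculative interpretation, 5 = confirmed
    likelihood: int   # 1 = remote, 5 = expected
    effect: int       # 1 = negligible, 5 = severe

    def risk_score(self) -> int:
        """Simple multiplicative roll-up of the three axes (assumed)."""
        return self.confidence * self.likelihood * self.effect

sources = [
    GeohazardSource("shallow gas pocket", confidence=4, likelihood=3, effect=5),
    GeohazardSource("seabed channel scour", confidence=2, likelihood=2, effect=2),
]

# Rank hazard sources so reviews focus on the highest operational risks first.
for s in sorted(sources, key=GeohazardSource.risk_score, reverse=True):
    print(s.name, s.risk_score())
```

Keeping confidence separate from likelihood matters: a low score on the confidence axis signals an interpretation gap that more data could close, rather than a hazard that is genuinely unlikely.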

Exploratory Drilling
Once a prospect of sufficient value to commit to exploratory drilling is identified within the license, a location will need to be assessed for its safety for drilling.

Local regulatory requirements may establish specific constraints. Otherwise, the level of visible overburden complication may suggest, even in deep water, that site-specific high-resolution 3D-data acquisition is required to support either selection of a location clear of geohazards or accurate definition of the geohazards present to allow their mitigation in well design.

The key is that, outside of regulatory requirements, the operator, rather than applying a rote process to evaluation of a drilling location, should be designing a site-investigation program that specifically addresses the potential hazards faced at that location.

Appraisal: Toward Field Development
At this stage of the life cycle, direct operational experience of initial drilling activities should have been gathered and can be fed back directly into improving predictions of tophole appraisal drilling. Beyond this, however, the addition of potential location-specific site-investigation-survey data, combined with direct operational experiences from initial drilling, will allow a full revision of the GRSS contents. This review should focus on whether the GRSS contents either were too conservative or overlooked possible hazards sources.

Major-Project Delivery
At the onset of a field-development project, it is expected that all site-investigation-data needs have been met and plans have been put in place for data acquisition or that the data are already in hand. Ultimately, the different study strands defined in the project GRSS should be brought together into an integrated geological model.

Outputs from a completed integrated study allow proper risk avoidance in concept screening through choice of development layout, for example, or risk mitigation by engineering design.

Development-Project Execution Into Early Production
As a development project moves into the execute phase and the instigation of production drilling or facility installation, the refinement of geohazard understanding needs to continue.

Drilling requires the same screening as used for the exploratory-drilling phase. Experiences from drilling of the first wells from a location need to be captured either directly by presence of tophole witnesses on-site or indirectly by use of remote monitoring facilities. These experiences should be fed back into updated predictions of drilling conditions for ensuing project or production wells to allow appropriate and safe adjustment of drilling practices in accordance with actual conditions encountered. This process needs to be carried through the production phase after the initial development is complete. Variances should always be investigated and reconciled against pre-existing knowledge.

Drilling Renewal and Field Redevelopment
Before the restart of drilling or redevelopment operations, an operator should pause to capture previous operational lessons learned. Reviews of the ongoing integrity of the overburden should be held regularly throughout the life of the field, especially ahead of any engineering operations, and, as a result, the validity of overburden imagery should be considered regularly and carefully for renewal.

Ahead of the instigation of abandonment operations, a review of the potential for change in overburden, or geohazard, conditions should be undertaken. For a single suspended or partially abandoned subsea well, the period since the well was last worked over may have been considerable. The prudent operator will undertake a review of the original operation to understand the condition of the well. It is also prudent to undertake a simple survey of the seabed around the well to look for anomalies that may suggest a change in the integrity of conditions since temporary abandonment.

This article, written by Special Publications Editor Adam Wilson, contains highlights of paper SPE 173139, “Managing Marine Geohazard Risks Over the Full Business Cycle,” by Andrew W. Hill and Gareth A. Wood, BP America, prepared for the 2015 SPE/IADC Drilling Conference and Exhibition, London, 17–19 March. The paper has not been peer reviewed.

20 Aug 2015

LULA Exercise Blends Surface and Subsea Responses to Simulated Deepwater Blowout

To test the improved blowout-response capabilities implemented following the Deepwater Horizon accident, Total organized and ran a large exercise to check the ability to efficiently define, implement, and manage the response to a major oil spill resulting from a subsea blowout, including the mobilization of a new subsea-dispersant-injection (SSDI) device. After a year and a half of preparation, the exercise took place 13–15 November 2013.

The oil-spill-response exercise, code-named LULA, considered a scenario in which a blowout at a water depth of 1,000 ft resulted in an uncontrolled release at 50,000 BOPD. The main objectives of the LULA exercise were

  • To mobilize all the emergency and crisis units in Luanda, Angola; offshore; and in Paris
  • To use all the techniques and technologies available to track an oil slick
  • To mobilize the SSDI kit from Norway to Angola and to deploy it close to the well
  • To deploy all the available oil-spill-response equipment of Total E&P Angola
  • To test the procurement of dispersant and the associated logistics
  • To test the onshore response, including coastal protection, onshore cleanup, oiled-wildlife management, and waste management

Subsea Response
During the Deepwater Horizon disaster, the injection of dispersant directly at the source of the oil leakage at seafloor level proved to be an effective technique. The technique required the deployment of an SSDI system.

After the Deepwater Horizon accident, Total was involved with a group of nine major oil and gas companies in the Subsea Well Response Project. As a result of the work of this group, two SSDI kits were manufactured and positioned in Stavanger. Total wanted to test the ability to mobilize and deploy in a timely manner the newly developed equipment, and Total E&P Angola was designated as responsible for the organization of the LULA exercise in collaboration with the Ministry of Petroleum of Angola. The SSDI kit, positioned in Norway, would be transported by air to Angola, sent offshore, and deployed.

The objective of dispersant spraying, at the surface or subsea directly at the wellhead, is to break down the oil slick or plume into microdroplets that can be degraded much more easily by micro-organisms occurring naturally in the marine environment. Marine environments with a long history of natural oil seepage, such as Angolan waters, already host micro-organisms well-suited to biodegradation of hydrocarbons.

Fig. 1

The SSDI kit was loaded on a field support vessel (FSV) on 9–10 November 2013. The SSDI kit (Fig. 1) is composed of a coiled-tubing termination head (CTTH), a subsea dispersant manifold (SDM), dispersant-injection wands, and four hydraulic flying leads on racks (only three were mobilized).

The first step of the offshore operation, conducted by the FSV, consisted of installing the SSDI kit on the seabed and making the subsea connections between the various parts of the system by use of the vessel’s crane and a remotely operated vehicle (ROV).

The second step involved deploying the CTTH from the light-well-intervention vessel in open water by use of a coiled-tubing string.

The final step before starting to inject the dispersant was to connect the last hydraulic flying leads to the SSDI by use of two ROVs. Once the subsea layout of the SSDI kit was completed, the dispersant injection started at a low flow rate, set at 1/100 of the blowout rate.
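
As a back-of-envelope check of the injection rate quoted above: the dispersant flow starts at 1/100 of the blowout rate, and the exercise scenario assumed an uncontrolled release of 50,000 BOPD. The constant names below are illustrative.

```python
# Worked arithmetic for the initial subsea-dispersant-injection rate:
# the flow starts at 1/100 of the blowout rate (scenario: 50,000 BOPD).
BLOWOUT_RATE_BOPD = 50_000
DISPERSANT_TO_OIL_RATIO = 1 / 100

dispersant_rate_bpd = BLOWOUT_RATE_BOPD * DISPERSANT_TO_OIL_RATIO
print(dispersant_rate_bpd)  # → 500.0 (barrels of dispersant per day)
```

Even at this low ratio, sustaining the injection over days of response drives the dispersant-procurement and logistics testing mentioned in the exercise objectives.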

Surface Operations
For the LULA exercise, one of the main objectives was to test the mobilization and deployment of Total E&P Angola’s offshore oil-spill-response resources (e.g., dispersant spraying, containment, recovery) and the coordination of deployment of additional resources.

While the response to an instantaneous oil spill (e.g., a spill from a tanker following a collision) will involve deploying resources on a moving target (following drifting oil slicks), the strategy for the response to a blowout incident will focus primarily on the oil reaching the surface from the wellhead.

Fig. 2

The advantages of doing so include the fact that fresh oil can be dispersed more efficiently, whether by aircraft or ships. If resources for containment and recovery are positioned adequately, the spreading of the oil will be limited, thus increasing the efficiency of such operations. The response invariably will involve the deployment of numerous response resources, all fighting for space. Therefore, it is critical to organize the operations by identifying areas dedicated to each component of the response (Fig. 2).

Although not fully implemented on-site during the exercise, the planning section of the emergency unit set the zoning of the response operations in cones and defined the following zones, starting from the well:

  • An exclusion zone: A no-go zone in the area of the surfacing oil, if needed, when volatile-organic-compound concentrations or other risks are too high to allow working safely
  • An area dedicated to the subsea response above and very close to the well (SSDI, capping of well, relief-well drilling)
  • Various areas for oil-spill response at the surface of the sea
    • Close to the area of the surfacing of the oil—dispersant spraying from ships and containment-and-recovery vessels
    • A second area dedicated to aerial application of dispersants
    • A third area for containment recovery of weathered scattered patches of oil
    • Coastal-area response (mainly recovery of patches of weathered oil coming close to the coast)

Shoreline Protection and Cleanup
Another major objective of the exercise was to mobilize and use simultaneously a variety of tools available to Total E&P Angola for monitoring and modeling oil slicks and to evaluate their scope of application and effectiveness. From an operational standpoint, the response efforts need to focus on the areas where the film of oil is the thickest within the slicks that rapidly spread. The effectiveness of the response relies extensively on the ability to guide and maintain the response resources on these thick oil patches.

The tools tested during the LULA exercise were used for tracking the oil slick and predicting its movement.

Soon after the release of crude oil into the sea, two drifting buoys were launched at the front edge of the oil slick. Their positions were tracked continuously by satellite and were visible online within 1 to 3 hours.

Helicopter surveys provide the greatest flexibility and the most-detailed information about the spread and behavior of oil slicks. Two helicopter flights took place during the LULA exercise. The survey reports were sent to the emergency units.

Fixed-wing aircraft were used to rapidly obtain an overall view of the oil slick. An airplane mobilized from Accra, Ghana, flew over the site on the second day of the exercise. It provided information about the oil slick in a report submitted to the emergency units.

On the basis of experience from a past incident, an observation balloon was developed. It was launched from a ship and used for the first 48 hours of the exercise. The balloon was tethered to the boat approximately 150 m above sea level, and the camera fitted on it fed visible and infrared images to a station on the boat. The boat could then follow the oil slicks day and night, position the response vessels on the thickest parts of the slick, and start operations at sunrise.

The LULA exercise was conceived by the management of Total to test the capability of the company to initiate the response to a major deepsea blowout. The exercise went far beyond the scope of classic large-scale exercises, including

  • A year and a half of preparation
  • More than 500 people involved during the exercise and international experts mobilized in Angola
  • Mobilization from Norway and deployment of a newly designed SSDI system
  • Deployment of monitoring tools used on a controlled release of crude oil (e.g., observation balloon, observation aircraft mobilized from Ghana, satellite radar imagery)
  • Deployment of surface oil-spill-response resources from Total E&P Angola and from other oil operators in Angola
  • Mobilization of the emergency management organization of Total and Total E&P Angola and of the Angolan National Incident Command Center

The exercise highlighted the following main challenges and areas for improvement:

  • Responders and experts must be mobilized in-country to provide assistance not only for offshore operations but also for emergency management.
  • Sourcing, contracting, and mobilizing personnel, equipment, consumables, and logistical support must ensure sustainable and coordinated responses for a blowout situation, including subsea, surface, and onshore operations.
  • The emergency management organization of Total E&P Angola must interface with national authorities at strategic and tactical levels to facilitate the operations (e.g., involving customs, immigration, flights authorization, and links with local and provincial authorities).
  • Damage-assessment and -compensation mechanisms for affected communities and activities must be reinforced in case the oil comes ashore.
  • A comprehensive health, safety, and environment monitoring program must be set up during an incident to ensure safe response operating conditions (e.g., explosivity and volatile-organic-compound measurement of fresh surfacing oil), to assess the effectiveness of the response (e.g., efficiency of subsea and surface dispersant spraying), and to monitor the potential effects on the environment and its restoration.

LULA was a success. All the planned actions were carried out safely and effectively during the 3 days of exercise. Many lessons learned were identified and included in a set of recommendations that will help to improve Total’s capability to respond to a blowout situation. The findings of the exercise will also benefit the whole oil and gas industry, particularly companies operating in deepwater environments.

This article, written by Special Publications Editor Adam Wilson, contains highlights of paper IPTC 18215, “LULA Exercise: Testing the Oil-Spill Response to a Deep-Sea Blowout, With a Unique Combination of Surface and Subsea Response Techniques,” by C. Michel, L. Cazes, and C. Eygun, Total E&P Angola, and L. Page-Jones and J.-Y. Huet, OTRA, prepared for the 2014 International Petroleum Technology Conference, Kuala Lumpur, 10–12 December. The paper has not been peer reviewed. Copyright 2014 International Petroleum Technology Conference. Reproduced by permission.

20 Aug 2015

Hydrogen Sulfide Measurement With Wireless Technology

Hydrogen sulfide represents a major hazard in oil and gas production, and the efficient and reliable detection of gas leaks is a critical safety aspect. Wireless-detection systems offer an opportunity to expand the measurement area. This paper reviews a specific application of wireless technology in gas detection and details the steps taken to assess the integrity of the wireless system and the considerations necessary to ensure the reliability and availability of the signal transmission.

Wireless-Sensor Networks (WSNs)
WSNs are an alternative to hard-wired systems in which the cabling is replaced by radio-frequency (RF) transmission of the measured data to a host system. The network may use point-to-point or meshed transmission. Meshed transmission allows for multiple alternative routes and, therefore, offers potential improvements in the ability of the system to ensure that the data are delivered to the host system.

WSNs have been developed since 2003 on the basis of Institute of Electrical and Electronics Engineers (IEEE) Standard 802.15.4, which defines the operating frequency of 2.4 GHz and other aspects of the basic physical layer of communication. This is currently adopted by the process industry as the essential foundation for most wireless-measurement systems.

Subsequent to the definition of the physical layer for communication, the HART protocol, originally established by the HART Communication Foundation to define serial data communication between cabled field devices, was extended to cover WSN technology. This WirelessHART technology was subsequently approved by the International Electrotechnical Commission (IEC) in 2006 as IEC Standard 62591. A parallel development was undertaken by the International Society of Automation (ISA) in the US under the ISA 100.11a standard in 2009. Each of these standards seeks to establish interoperability among equipment from different manufacturers, and it is important that this convergence be achieved to prevent development and adoption delays.

The development of battery technology is also an important aspect of WSNs. Significant advances in battery design, solar-cell charging, and energy harvesting are expected to play an active role in the future. In present systems, sophisticated software is used to turn on and wake up components to minimize power consumption. It is also imperative that monitoring of battery status be managed actively by the host system.

Reliability Considerations
Reliability may be defined as the ability of a system or component to perform its required function under stated conditions.

Fig. 1

The quantitative analysis of reliability is a well-established practice for point-to-point systems. One methodology useful for visualization is a decision tree. A simple example for a system with a top event defined as loss of either of two signals is provided in Fig. 1. For purposes of demonstration, the reliability of each receiver is 0.9, and the reliabilities of the two transmitter/sensor combinations are 0.85 and 0.8. The reliability of the wireless transmission is assumed to be 1.0 (direct line of sight over a short distance). The result of the decision-tree analysis is an overall reliability of 0.55.

Fig. 2

Wireless systems using multicasting provide an alternative communication route by enabling the failed receiver to be bypassed (Fig. 2). It is clear from the decision-tree analysis that multicasting in this case provides a significant improvement in reliability, with the probability of the successful measurement of both inputs improving for this example from 0.55 to 0.67. It is also clear from these examples that the complexity of the decision tree increases significantly as the number of alternative routes increases.
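The two reliability figures quoted above can be reproduced with a short calculation. The sketch below uses the component reliabilities assumed in the text (receivers 0.9 each, transmitter/sensor combinations 0.85 and 0.8, ideal RF link):

```python
# Component reliabilities assumed in the worked example.
R_RECEIVER = 0.9
R_SENSORS = (0.85, 0.80)

# Point-to-point: each signal needs its own working receiver, so the
# top event (both signals received) is the product of all four values.
p2p = R_SENSORS[0] * R_RECEIVER * R_SENSORS[1] * R_RECEIVER
print(f"point-to-point: {p2p:.2f}")  # point-to-point: 0.55

# Multicasting: a failed receiver can be bypassed, so each signal only
# needs at least one of the two receivers to be working.
p_any_receiver = 1 - (1 - R_RECEIVER) ** 2
multicast = R_SENSORS[0] * R_SENSORS[1] * p_any_receiver
print(f"multicast: {multicast:.2f}")  # multicast: 0.67
```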

Extrapolating the decision-tree approach to include the wireless transmission in larger mesh systems (e.g., 2,000 points) introduces the problem of estimating reliability influenced by many factors, some of which are interdependent. These include the effect in mesh systems of the signal consolidation from many reflections at the receiver in addition to line of sight, the natural tendency of an RF signal to spread over a radial distance, and the limitations of statistical assumptions in the probability of reflection.

Accordingly, for large wireless mesh systems, decision trees and other conventional point-to-point methods are difficult to apply; they simply become too large. As a result, the mathematical development of modeling techniques for these types of multiple information flows has received significant attention in recent years, driven not only by reliability considerations but also by the need to identify the shortest routes to limit investment costs on large-scale communication systems and to identify limitations on capacity of isolated sections of the network. Graph theory represents a suitable method for analysis of networks with multiple routes, but, again, solutions require complex extended algorithms and are difficult to visualize.
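Where decision trees become intractable, Monte Carlo simulation over the network graph is one practical alternative. The sketch below (a hypothetical five-node topology and a uniform per-link reliability, not taken from the paper) estimates the probability that every sensor retains at least one surviving path to the gateway:

```python
import random
from collections import defaultdict, deque

def mesh_reliability(links, sensors, gateway, p_link, trials=20000, seed=1):
    """Monte Carlo estimate of the probability that every sensor can
    reach the gateway through at least one path of surviving links."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        adj = defaultdict(list)
        for a, b in links:
            if rng.random() < p_link:  # link survives this trial
                adj[a].append(b)
                adj[b].append(a)
        # BFS from the gateway; the trial succeeds if all sensors are reached
        seen = {gateway}
        queue = deque([gateway])
        while queue:
            for nxt in adj[queue.popleft()]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        successes += all(s in seen for s in sensors)
    return successes / trials

# Hypothetical mesh: two sensors, two relays, one gateway.
links = [("s1", "r1"), ("s1", "r2"), ("s2", "r1"), ("s2", "r2"),
         ("r1", "gw"), ("r2", "gw")]
print(mesh_reliability(links, ["s1", "s2"], "gw", p_link=0.9))
```

The approach scales to thousands of points where enumeration does not, though its accuracy still depends on how well the assumed per-link probabilities represent the real RF environment.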

Many of these approaches to analysis concentrate on component reliability for the equipment (e.g., transmitters, receivers, batteries, sensors) and on generalized assumptions regarding the performance of the mesh design.

The sensitivity of reliability for a wireless network, however, is dominated by the RF environment, rather than component reliability. The assumption that the system will comply with standardized probability functions in particular may be ambitious, and specific planning of the network, testing of the network, and maintenance of the RF environment are imperative to ensure that the system will continue to work properly.

The Test Installation
The application reviewed in this paper was located at a fire training ground at Asab in the United Arab Emirates. A number of fire scenarios can be simulated, including gas leaks at flanges and tank fires.

Fig. 3

Safety at the training ground is focused on leak prevention; however, secondary risk mitigation is provided by gas detection. Gas detection is normally hardwired, and systems are available for detection of hydrogen sulfide and hydrocarbons. In addition to this hard-wired gas detection, a supplementary wireless gas-detection installation was put in place and investigations were conducted related to wireless aspects of the installation. The system consists of four gas detectors (hydrogen sulfide and hydrocarbon) transmitting to a receiver that converts the signals to the plant operator interface (Fig. 3). The system also has local alarm stations capable of receiving alarms from the various detectors.

The wireless system tested transmits at 2.4 GHz on the basis of the IEEE 802.15.4 standard and uses direct-sequence-spread-spectrum (DSSS) technology, which combines the transmitted signal with a broader spectrum of frequencies.

The transmitter power is limited to 100 mW to enable compliance with European and local statutory requirements for avoidance of interference with existing wireless facilities. The transmission of the signal is limited by reflections and spreading (i.e., the effect of radiating in a circular pattern). For the tests, a gas detector and transmitter were placed in a vehicle and driven away from the receiver over an area of 2-km radius and signal-transmission status was monitored to determine the extent of coverage. At various points within this area, gas detectors were tested with gas samples to ensure that full functionality was maintained.

Test Results
The detectors normally operate at a distance of 150 m from the monitoring-system receiving antenna. For these tests, a gas detector with a battery power source and a wireless transmitter were transported around the area of the plant in various directions, and the distance from the receiving antenna was increased until communication with the host system was lost. As can be expected, the transmission is influenced significantly by the topography of the land and by building and process-equipment obstructions. The successful transmission distance varied over a range of 0.4–1.6 km. The analysis also shows that, whereas direct line of sight is optimal for transmission, it was possible to maintain coverage with transmission through structures or using reflection.
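As a rough sanity check on these distances, the standard free-space path-loss formula gives the loss budget a 2.4-GHz link must overcome. This is an idealized sketch that ignores the reflections and obstructions the tests show to be significant:

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB (standard formula for d in km, f in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Loss at the normal 150-m operating distance and at the limits of the
# observed 0.4-1.6 km coverage range, all at 2.4 GHz.
for d in (0.15, 0.4, 1.6):
    print(f"{d:4.2f} km: {fspl_db(d, 2400):.1f} dB")
```

In free space, each doubling of distance adds 6 dB of loss, which is why the achievable range is so sensitive to obstructions and terrain once the link margin of a 100-mW transmitter is nearly consumed.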

For the installation reviewed here, further field tests were conducted to determine the practical robustness of the system in resisting RF interference from various potential sources. The tests supported the following conclusions:

  • The technology applied in wireless systems in this application appears to be very effective in preventing typical sources of interference with process plants from affecting measurement reliability.
  • The use of hopping with mesh networks effectively extends the possible coverage, within the typical national statutory limits of 100 mW for transmission power.
  • The reliability of equipment may be considered to incorporate hardware and software, which includes the battery, sensor, transmitter, and receiver. This equipment reliability is, to an extent, deterministic and can be managed effectively. The transmission quality of the RF signal, however, is heavily dependent on the application (e.g., location, obstructions, topography) and less easily modeled in reliability assessment.
  • The reliability of the system transmission quality cannot be modeled easily with conventional point-to-point approaches, and the systems may not, in practice, be represented accurately by statistical models. As a result, it is necessary to manage the RF environment actively to support wireless-network systems.
  • Mesh designs that enable local alarm activation without depending on the remote monitoring facility offer particular advantages for gas detection by reducing the difficulty in managing a widespread RF environment while achieving the primary objective of announcing the hazard directly to personnel who may be at risk near the leak source.

This article, written by Special Publications Editor Adam Wilson, contains highlights of paper SPE 171720, “Hydrogen Sulfide Measurement Using Wireless Technology,” by P. Phelan, A.-R. Shames Khouri, and H.A. Wahed, Abu Dhabi Gas Industries, prepared for the 2014 Abu Dhabi International Petroleum Exhibition and Conference, Abu Dhabi, 10–13 November. The paper has not been peer reviewed.

20 Aug 2015

Coral Relocation Mitigates Habitat Effects From Pipeline Construction Offshore Qatar

The Barzan Gas Project is a critical program to deliver natural gas to Qatar’s future industries. The project was expected to affect shallow coral communities during pipeline construction from Qatar’s North field to onshore. To partially meet the state’s environmental clearance conditions for the project while supporting the state’s national vision, RasGas developed a project-specific coral-management, -relocation, and -monitoring plan that incorporated proven methodologies to relocate at-risk coral colonies to a suitable location.

In addition to natural-gas reserves, coral-reef communities are regarded as a significant and highly productive natural resource in Qatar, providing refuge and nursery areas for many commercially important fish and shellfish species during portions of their life cycle. Corals off the coast of Qatar grow in one of the more thermally stressed environments in the world. Elevated sea temperature and other coastal pressures such as overfishing, port development, and construction have led to a decrease in local coral-reef communities. Recognizing the importance of these habitats, Qatar included measures in the Qatar National Development Strategy 2011–16 calling for the protection, conservation, and sustainable management of marine and coastal habitats and associated biodiversity.

Fig. 1

The RasGas Barzan Gas Project off eastern Qatar (Fig. 1) is a critical program for the state, delivering natural gas from Qatar’s North field to the onshore processing plant through export pipelines. As part of the construction phase, the Barzan project was expected to affect shallow coral communities through the direct physical removal of coral colonies from trenching activities and through sedimentation and a general deterioration of the habitat immediately adjacent to the trench.

To partially meet the state’s environmental clearance conditions for the project, RasGas developed a project-specific coral-management, -relocation, and -monitoring plan that incorporated proven methodologies to relocate at-risk coral colonies to a suitable location away from both present and future development to minimize potential harm.

Benthic Environmental Survey
In order to document the status of environmentally sensitive resources within the pipeline corridor and delineate coral and seagrass habitat, a benthic environmental survey was conducted along two predetermined parallel transects within the pipeline corridor from the shoreline (pipeline landfall) to 2 km offshore. Following the habitat delineation, quantitative data were also collected to estimate the number and species of corals within each habitat type.

Survey results showed four distinct areas of hard-coral habitat, differentiated by substrate type (e.g., sand, hard bottom) and coral density. Using the areas of the four characterized hard-coral-habitat types and the estimated coral densities, it was determined that approximately 40,000 coral colonies with a diameter >10 cm were present within the hard-coral-habitat impact footprint.
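The estimation method amounts to summing area × density over the habitat types. The areas and densities below are hypothetical placeholders (the survey’s actual values are not given here), chosen only to illustrate the arithmetic:

```python
# Hypothetical habitat areas (m^2) and coral densities (colonies/m^2);
# the actual survey values are not stated in the text.
habitats = {
    "dense hard bottom":  {"area_m2": 15000, "density": 1.20},
    "medium hard bottom": {"area_m2": 25000, "density": 0.50},
    "sparse hard bottom": {"area_m2": 30000, "density": 0.25},
    "sand/hard mix":      {"area_m2": 20000, "density": 0.10},
}

# Estimated colony count = sum of (area x density) over habitat types.
total = sum(h["area_m2"] * h["density"] for h in habitats.values())
print(f"estimated colonies >10 cm: {total:,.0f}")  # estimated colonies >10 cm: 40,000
```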

Hard-Coral Recipient-Site Selection
In order to identify an acceptable recipient site for the hard-coral colonies to be relocated from the pipeline corridor, several areas offshore northeast of Qatar were surveyed to assess their suitability for reattaching hard-coral colonies. Candidate sites were selected primarily on the basis of water depth and location outside of potential future pipeline construction, through a review of environmental-sensitivity maps provided by the Qatar Ministry of Environment and satellite imagery along the northeast coast. Site surveys were conducted at 21 sites within two larger areas to assess their suitability on the basis of substrate type and topographic relief, dominant biota, coral presence/absence, and urchin presence/absence. Where hard corals were present, coral coverage was assessed qualitatively. An additional eight sites were assessed within an area closer to the project site for the potential deployment of limestone boulders to act as substrate for reattachment in case a suitable natural-substrate site was not identified.

A review of the survey data indicated that only two natural hard-bottom sites were otherwise suitable; their water depth, however, was shallower (<2 m) than the original depth of the corals to be relocated (7–8 m), potentially exposing the corals to extreme thermal change. Because of the lack of available hard substrate, it was recommended that native quarried limestone boulders of composition similar to that of the natural substrate be used to create exposed hard-bottom habitat.

Coral-Relocation Program
Approximately 550 limestone boulders, each nearly 1 m in diameter, were power washed to remove excessive sediment, transported from Ras Laffan, Qatar, and deployed into a predetermined recipient site that had been deemed suitable for habitat creation because of its proximity to a healthy reef, water depth, and distance from Ras Laffan. The relatively shallow sand veneer (≤11 cm) overlying a hard-bottom substrate indicated no risk of subsidence.

The rocks were deployed off the side deck of a barge, allowing for varying densities of rock patches and a configuration that would mimic the naturally divergent rocky outcrops. The newly created habitats not only provided a suitable substrate for the reattachment of hard-coral colonies, but additionally provided vertical and horizontal subsurfaces, interstitial spaces, crevices, and voids to create a complex habitat for a wide range of other marine life.

Corals were removed from the areas of highest coral density within the pipeline corridor by divers using hammers and chisels to separate the coral from its substrate and lift it intact to the extent possible. Corals were transported carefully to the recipient site onboard a survey vessel and were temporarily cached in metal trays on the seabed directly adjacent to the boulders until they were ready for reattachment.

Monitoring of Relocated Hard Corals
In order to assess the relative success of the Barzan coral relocation, a monitoring program was designed to permit the detection of, and response to, significant changes in habitat and community structure caused by external disturbances (e.g., thermal extremes). Monitoring surveys will be conducted twice yearly for a minimum of 5 years to

  • Evaluate the attachment status (presence/absence) of reattached hard corals
  • Evaluate relative health of reattached hard corals
  • Assess habitat features to evaluate temporal ecological trends
  • Conduct water-quality monitoring twice yearly
  • Acquire and log on-site-temperature data

Summary and Conclusions
In 2012, more than 1,600 hard-coral colonies were relocated into a newly created habitat of limestone boulders because of the lack of hard bottom. Baseline monitoring of the relocated corals was conducted 3 months post-relocation. Monitoring-survey results showed that the relocated corals exhibited health comparable to that of the reference communities and exhibited comparable signs of stress. Future monitoring surveys conducted twice yearly for a minimum of 5 years will provide data to evaluate the overall success of the project and for comparison with other coral-relocation projects in the region.

This paper presents the composite monitoring results from Surveys II (January 2013), III (July 2013), and IV (January 2014), which were assessed for reattached-colony bonding status, colony health, benthic characterization, reef-fish assemblage, sediment accumulation, sea-urchin density, and water-column data.

Reattached-Coral-Colony Bonding Status. The substrate-augmentation approach with quarried limestone boulders is deemed to be successful, with fewer coral-colony detachments at the reattachment site than reported during previous monitoring surveys.

Coral-Colony-Health Assessment. The number of coral colonies with more than 10% of the coral tissue affected by one or more conditions decreased at the reattachment and shallow reference sites from Survey III to Survey IV, indicating increased overall health at these sites.

Benthic Characterization. Low-profile filamentous benthic algae continued to account for the greatest benthic cover within the reattachment site. The algal cover increased not only on the limestone boulders but also on the surfaces of the coral colonies, resulting in a decrease in the percentage of live coral tissue and an increase in the coral-health stress ranking.

Fig. 2

Reef-Fish Assemblage. Although the number of reef-fish observations decreased during Survey III compared with Survey II, it increased in Survey IV to the highest level of the monitoring period. The number of fish species, however, remained the same for the last two surveys. The assemblage composition recorded during Survey IV was more similar to those of Surveys II and III than to that of Survey I. An analysis revealed that the differences were attributable to increased numbers of dory snappers, yellowfin seabream, and Persian cardinalfish recorded during the latter surveys relative to pearly goatfish, a numerical dominant during Survey I. Although not observed in high abundance during the first three surveys, the yellowstripe scad was recorded in high abundance during Survey IV. The Persian cardinalfish, however, has continued to be an abundant member of the assemblage since Survey I. Overall, the assemblage was generally typical of the geographic region and habitat (Fig. 2).

Sea-Urchin Density. With the increase of algal cover, the presence of sea urchins may provide a means to reduce competition for space between the coral recruits and algae. It has been encouraging to observe an increased presence of sea urchins during Survey III compared with Survey II because these herbivores contribute positively to the dynamics of coral recruitment rates and potential survivorship in the reattachment site.

Water-Column Data. Sediment accumulation on and around the boulders has been negligible during Surveys II through IV, validating the selection of the coral-reattachment site. The hydrographic water-column profile data have been as expected in this portion of the Arabian Gulf, with anticipated temporal changes from seasonal fluctuations.

This article, written by Special Publications Editor Adam Wilson, contains highlights of paper SPE 170359, “Coral Relocation as Habitat Mitigation for Impacts From the Barzan Gas Project Pipeline Construction Offshore Eastern Qatar: Survey IV Update,” by Kaushik Deb, RasGas, and Anne McCarthy, CSA Ocean Sciences, prepared for the 2014 SPE Middle East Health, Safety, Environment, and Sustainable Development Conference and Exhibition, Doha, Qatar, 22–24 September. The paper has not been peer reviewed.