Doing More With Data

jpt-2013-05-morewithdatafig4hero.jpg
Pradeep Annaiyappa, senior fellow at Canrig Drilling Technology, shows an image of the topdrive control application called RockIt, which can be served up through the company’s cloud network. Using RockIt, a directional driller in an office can control the topdrive to steer the well trajectory.

A few years ago, Mario Ruscev was in charge of a project to wire an oil field with a state-of-the-art system giving operators multiple streams of production data from each of the wells in the field. When it was all done and time to turn it over, the field’s manager surprised him by asking: “What do we do with it?” said Ruscev, now chief technology officer at Baker Hughes.

That field was on the leading edge of a flood of fields since then that have installed sensors to constantly measure what goes on while drilling wells and producing oil, and the industry is working on answering Ruscev’s question.

jpt-2013-05-morewithdatafig1.jpg
This topdrive is among the pieces of equipment that can be controlled by a customer’s program running on the cloud network created by Canrig.

“We are in a catch-up phase. It takes some time to make sense of these new sets of data, before we figure how to really use it and how to relate it to what we want to do,” Ruscev said. Without good applications for using it in decision making, “data is just data.”

This cycle is likely to follow the same path as seismic, where advances, such as the advent of 3D seismic acquisition, enormously increased the volume of data gathered. “There have been two revolutions in seismic and we’ve never been stopped by it,” Ruscev said. “It just took time to be able to produce good results from these data.”

Fast-rising computer processing power will be able to digest the big data coming in, but he said the hard part in seismic, and now in reservoir and drilling, is relating those measurements in useful ways to “what is happening to the formation.”

Ruscev told his story during a panel discussion at the SPE Digital Energy Conference and Exhibition, in March in The Woodlands, Texas, which was filled with people working on better things to do with information. The initiatives range from mining data in well logs to make better maps showing underground stresses to doing quality control checks on data as it flows in.

Much of it is built on algorithms written to apply advanced numerical analysis methods seeking insights in the numbers.

One of the exhibitors was Sekal, a fast-growing, young company that uses real-time data analysis of measurements, such as the weight of the drillstring and the downhole pressure. The point of the work is to detect small changes in drilling conditions that could lead hours later to a big problem that could halt work, said Bill Chmela, a vice president at Sekal.

The work does that by constantly comparing observed information with amounts calculated by a model based on the physics of drilling and adjusted by what is observed along the way. When the lines diverge, its petroleum engineers look into the cause of this shift and report to the client. The goal is to spot and correct a potential problem early on, before it has a chance to grow to where it pushes the readings on the driller’s control panel outside the safe zone.
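Sekal’s actual models and formulas are proprietary, but the exception-based idea can be sketched in a few lines of Python. The function below is only an illustration under assumed names and thresholds: it compares each observed measurement, such as hookload, with a model prediction and flags samples where the rolling mean of the residual exceeds a limit.

```python
from collections import deque

def divergence_monitor(observed, predicted, window=5, threshold=2.0):
    """Flag the indices where the rolling mean of the model residual
    (observed minus predicted) exceeds a threshold -- an early hint
    that drilling conditions are drifting from the model."""
    residuals = deque(maxlen=window)
    alarms = []
    for i, (obs, pred) in enumerate(zip(observed, predicted)):
        residuals.append(obs - pred)
        if len(residuals) == window:
            mean_residual = sum(residuals) / window
            if abs(mean_residual) > threshold:
                alarms.append(i)
    return alarms

# A slow, steady divergence is flagged well before it becomes dramatic:
observed  = [100, 101, 102, 104, 106, 109, 112, 116]
predicted = [100, 100, 100, 100, 100, 100, 100, 100]
print(divergence_monitor(observed, predicted))  # → [4, 5, 6, 7]
```

In practice the predicted values would come from a physics-based drilling model that is recalibrated as data arrives; here they are simply supplied as a list.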

“When a driller sees that on the screen, he has the problem,” Chmela said. Started in 2011, Sekal based its system on work by the International Research Institute of Stavanger (IRIS), which continues to fine-tune the mathematical formulas used for the exception-based system. It is now field testing its next-generation product, which automates the reaction on the drill rig if trouble is predicted.

Chevron is working with researchers at the University of Southern California (USC) on a real-time quality control system able to detect even subtle sensor errors. The goal is real-time analysis based on comparisons of data flowing from multiple sensors. When sensor data is rejected—a statistical analysis determines if the gap between the observed and the predicted amounts indicates the sensor is likely sending bad data—the system is able to calculate a reconstructed version.

Real-time data cleansing has been around for a while in processing facilities, such as chemical plants, but it is tricky to move what works in a controlled environment to a reservoir-based operation where “things will always be changing and moving,” said Lisa Brenskelle, a process control engineer at Chevron Energy Technology. Brenskelle, who delivered a paper on the project, said real-time quality control is vital for optimizing operations.

The conference brought together a diverse group of professionals who formed a new SPE technical section last fall that has grown to more than 400 members. It goes by the name PD2A, which is short for Petroleum Data Driven Analytics.

“I have been doing this for 21 or 22 years and finally SPE has enough critical mass to create a brand-new technical section for data-driven analytics,” said Shahab Mohaghegh, the group’s program chairman and a petroleum engineering professor at West Virginia University known for applying new analytic methods.

Kevin Lacy, global senior vice president of drilling and completions at Talisman, described the trend as “going from an experience-based business to an information-based business.”

The pace of this transition has been neither rapid nor easy. After years of work, leading companies in drilling automation will be reporting at this year’s SPE Annual Technical Conference and Exhibition (ATCE) on work to drill a stand of pipe, which represents making about 90 ft of hole. More is possible, but truly automated drilling systems are still under development.

And Mohaghegh, who has used his methods to advise oil companies around the globe, said “people have told me to my face, it is too good to be true.”

It does represent a big change in how reservoir modeling is done. “People are trying to analyze these attributes in totally new ways,” Ruscev of Baker Hughes said. “The old way was: We need to physically model it and use physics to find the attributes.” He said new methods not based on physical models sometimes work where the old ways do not, but are “totally foreign” to veterans of the industry.

jpt-2013-05-morewithdatafig2.jpg
These maps of the level of minimum horizontal stress differ because of the amount of geomechanical log data used to produce them. One is based on the 30 logs actually performed in the shale field. The second adds 30 “synthetic geomechanical logs” calculated using logging data plus pattern recognition and artificial intelligence methods to predict what the values would have been.

Driven to Change

There are signs that the pace of change could be picking up in drilling.

“We are moving in the direction of rapid change in this industry,” said John Truschinger, senior vice president of support services and chief information officer at Transocean. “There are a number of projects on the table we have been waiting for years to do.”

The Macondo disaster, which took the lives of 11 of Transocean’s workers, has been a powerful motivation to consider new ways of doing things, as is the competitive drive to work in ever deeper, more challenging waters at a company where there are new faces in the executive ranks.

The measure of this is “more research and development spending on technology in the past 2 years than we have spent (at Transocean) in a long time,” Truschinger said.

Details about these projects are limited, but the company’s stated goals are ambitious. Advanced analytics is “a component of a much larger program.” Its goal is to “change the way offshore drilling is done,” said José A. Gutierrez, director of technology innovation at Transocean.

Onshore, the gap between potential production from unconventional resources and what can be economically produced now is a powerful motivation to consider digital options.

Producers are up against a daunting economic barrier—the price of natural gas in the United States has dropped too low to economically develop gas-only shale formations. Ruscev said better analysis is needed to target sweet spots, improving the production per well. That cost gap could also be narrowed by drilling more efficiently using automation.

Lacy of Talisman has seen the time needed to drill a well in the Marcellus formation drop from 30 days to 18 days, and he is looking for ways to lower it to 5 days, to help bring the cost of drilling down to the low price offered for US gas.

The reward locked in the shale may provide the push to do so. “The shale plays are the best opportunity we have had in 50 years to de-man and automate drilling,” Lacy said. “For the industry, it is a massive opportunity.”

Talking about Drilling Automation

Fred Florence, the product champion for automation and drilling optimization at National Oilwell Varco and the former chairman of SPE’s Drilling Systems Automation Technical Section (DSATS), provides an update on developments in creating working drilling automation systems. The section plans to offer a report on its work to encourage development of the processes needed to automatically drill a stand. At SPE’s Annual Technical Conference and Exhibition there will be a presentation on the results of field tests of four drilling systems.

Each of the companies was asked to drill a stand. Could you explain that?

“Drill a stand” means that the stand of pipe is made up by the drill crew off line and moved to well center and then connected to the drillstring. The driller picks up out of the slips and pushes the “drill on automatic” button, and then monitors the process. The approximately 90 ft of new hole is drilled using the algorithms and processes developed by four companies, which will not be identified. At the end of the stand, the machine hoists off bottom and waits for the driller to resume manual control.

And then?

The drilling crew adds a stand of pipe, and the driller pushes the button to drill the stand. The drill floor work is manual. The downhole drilling is automated.

Is the challenge in writing a really effective algorithm or doing the hardware and software setup needed to create an integrated system controlling it all?

It is all of the above. The goal is to integrate good algorithms and predictive models with the rig’s control system in a way that lets the driller understand what is going on and properly manage the job.

At Wit’s End

The tools exist to make that change. For example, one of the requirements for programming a drilling rig is a common language so that data and commands can flow back and forth among rig components, allowing them to be put together as easily as personal computer components.
A language, WITSML, which stands for wellsite information transfer markup language, has been around for years and is supported by SPE’s Drilling Systems Automation Technical Section (DSATS).

But an earlier version of that code, WITS, which does not offer the fast connections needed for integrating rig components, is still widely used. While Lacy sees progress, he said, “we have WITSML. But we still cannot do plug and play.”

The building blocks to create automated drilling systems are being created. There are a growing number of drilling experts converting what they have learned about doing it better, such as reducing sticking and slipping while drilling, into mathematical formulas that can be used to program rig equipment. The hard part has been setting up rigs with the communications and control systems needed for centralized machine control.

Rig owners are beginning to adapt to this change. “A lot of algorithms will come into play,” said Pradeep Annaiyappa, a senior fellow at Canrig. He said companies have reached the point where they want to field test algorithms that have worked on drilling simulators.

To facilitate the process, Canrig created a rig network called RigCloud, using cloud computing to offer a controlled, secure network in which customers can run drilling programs and display what is going on for drillers and other authorized users.

A cloud network built around a computer server works for testing, but for widespread use, many rigs will need to be set up to readily accommodate automated operations. Annaiyappa said the industry will have to answer the question: Who is going to be the integrator?

A few companies are moving ahead on their own to show what can be done. To highlight this work, DSATS will offer a workshop the day before the ATCE in New Orleans, on its project to automate drilling a stand of pipe.

Led by Clinton Chapman of Schlumberger and Marty Cavanaugh of Shell, the working group challenged companies to create a system that could automatically drill the length of a stand of pipe, which is about 90 ft long. The system must start and stop the machines in the proper sequence and apply operating limits to safely drill each stand while a driller monitors the work.

While data-driven drilling systems are expected to play a greater role in manufacturing wells, the goal is safer, more efficient drilling, not eliminating people from the process.

Chmela said Sekal’s growth depends on hiring enough petroleum engineers to monitor the data flow and investigate situations where the observed data and the expected data diverge, whether the cause is not a concern, such as the unexpected addition of a few bags of drilling mud, or an indication of deteriorating conditions downhole that could halt drilling.

While machines are better at executing programmed instructions, people can perform better when dealing with the unexpected.

“We have got to be cautious about how much we automate. The right amount of automation is critical,” said Truschinger. “It may be running on automatic 70% of the time. But someone has to be looking over the controls.”

jpt-2013-05-morewithdatafig3.jpg
Data from sensors above ground and below ground is fed into the system used by Sekal to seek out early signs of possible drilling problems. A variety of measurements are compared with amounts predicted by a model, which is calibrated using the data. When these amounts diverge, it may suggest drilling conditions are deteriorating.

What Is Analytics?

A broad range of the work in this field is described as analytics, a term covering many approaches for creating models to better understand and control operations. It can cover ways of finding meaning in the staggering amount of data that can be gathered inside the small hole during drilling, and in the limited data often available from the large areas covered by a petroleum reservoir.

To fill in the data gaps in a shale field, Mohaghegh reported on using artificial intelligence and data mining techniques to create “synthetic geomechanical logs” for wells where that sort of in-well test had not been run. The goal was a more detailed look at the formation’s rock properties, particularly the alignment and intensity of the horizontal stress patterns that define the effectiveness of hydraulic fracturing.
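Mohaghegh’s published approach uses neural networks and other pattern recognition methods; the details are in SPE 163690. Purely as a toy illustration of the underlying idea of predicting a log value from similar wells, the sketch below estimates a geomechanical value for an unlogged well by averaging its most similar logged neighbors. All feature names and numbers here are invented.

```python
import math

def predict_synthetic_value(logged_wells, target_features, k=2):
    """Estimate a geomechanical value (e.g., minimum horizontal stress)
    for a well with only conventional logs, by averaging the k logged
    wells whose conventional-log attributes are most similar."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    ranked = sorted(logged_wells,
                    key=lambda w: distance(w["features"], target_features))
    nearest = ranked[:k]
    return sum(w["stress"] for w in nearest) / k

# Invented example: features are scaled (gamma ray, sonic travel time).
logged_wells = [
    {"features": (0.2, 0.8), "stress": 6200.0},
    {"features": (0.3, 0.7), "stress": 6400.0},
    {"features": (0.9, 0.1), "stress": 7900.0},
]
# A well resembling the first two gets a similar synthetic value:
print(predict_synthetic_value(logged_wells, (0.25, 0.75)))  # → 6300.0
```

A nearest-neighbor average stands in here for the far more elaborate pattern recognition and artificial intelligence methods the article describes; the common thread is that the prediction comes from observed patterns rather than a physical model.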

Rather than building a model from the bottom up—explaining observed data based on physical and geophysical concepts such as porosity, permeability, and pore pressure—he has used systems modeled on how the human brain learns from observed patterns.

One way to understand the process is to look at how Google used similar methods to create its successful Google Translate program. Many tried and failed to create translation software by programming in grammatical rules and word definitions, much like the methods used in language classes.

The often-suspect resulting translations showed how hard it is to build in the complexity and nuances of a language using a “rules-based, equation-based system.” Google’s approach was to feed in a large number of books that had been translated. The pattern recognition programs in effect “learned” how skilled linguists went from one language to another.

Large oil fields present some of the same problems because physical laws are applied to complex systems in which the reservoir properties, to the extent that they can be observed, are prone to exceptions.

A national oil company hired Mohaghegh to create a model of a large field because it had been unable to build one that generated results matching the history of water injection and oil production.

By starting with the data and using data mining and artificial intelligence methods to “deduce the physics,” which he calls top-down modeling, he said he created a model that matched the production history and other data for all the wells in the formation. It was used to increase production in this mature field by altering the water injection system.

These methods are not a replacement for the knowledge and experience that petroleum engineers bring to the job, Mohaghegh said, noting: “I have learned in 20 years that nothing substitutes for domain expertise.”

But, he added, sometimes the current deterministic models are not up to the task. “When problems are so complex,” he said, “we should be humble enough to say the physics are too complex.”

Getting Together in a Cloud

The best place for wiring together an automated drilling system could be in a cloud. Canrig Drilling Technology has developed a cloud-based network to help create the data connections needed to facilitate drilling automation, which also highlights the control issues making the transition to automation a difficult one.

Operators have been working on formula-driven approaches to drilling—capturing what they have learned about drilling more efficiently in the form of algorithms that can then be turned into programs that control drilling.

The first test is to try them out on drilling simulators using drilling data from past wells, said Pradeep Annaiyappa, senior fellow at Canrig, the rig equipment and controls arm of Nabors. After that, operators want to see if their algorithms are useful in the real world.

Canrig recognized the need to accommodate these customers, but drillers are not about to turn over the controls of the rig. And other service providers need to be kept in the loop, both on location and remotely through the Internet.

Canrig had to figure out a way to broaden access to data, create data pathways among the rig equipment, create a control panel everyone could see, and set limits on who could do what.

The solution was a rig-based private cloud computer network, inspired by the ones set up by companies such as Google and Yahoo providing controlled access to a wide range of software and services over the Internet.

It offers the high-speed bandwidth to feed data into the programs based on the operator algorithms, and a shared display so others can see the driller’s point of view.

“Effectively, this is used for a rig control system and allows other pieces of equipment and other companies’ software to run on our internal system,” Annaiyappa said. “From the customer’s point of view, it gives them an easier way to try new algorithms or practices without bringing in a whole lot of equipment.”

The instructions generated by the customer’s drilling optimization programs remain advisory. The operator can specify that the output of its drilling algorithm or other practice be used to make decisions, such as setting the weight on bit, but those decisions are observed by the driller, who has the power to override the commands.

It is another, but far from the final, step toward rigs where computerized control can drill wells.
Advanced rigs already have computerized systems to do specific jobs, such as reduce drilling vibration, in the way that antilock braking systems on cars can automatically pump the brakes to avoid a skid.

There are a few companies testing equipment that moves the industry closer to the reality of complete automation, which would be like an autopilot on a plane allowing a driller to take control when needed.

SPE’s Drilling Systems Automation Technical Section (DSATS) will offer an update on that with a paper planned for release this fall, reporting on the results from four companies that used automated systems to drill a section of a well.

The past experience of automation pioneers is that one of the biggest challenges in setting up a digital control system is the variety of components used from rig to rig and the challenge of integrating components that were not built using the sort of interoperability standards that enable consumers to plug-and-play electronic components.

Cloud computing setups vary widely based on the level of control allowed by the manager of the network. One of its selling points is the ability to limit network access. Security has become increasingly important as recent cyber-attacks have raised concerns about hackers disrupting drilling operations.

“We need to control a network, and allowing connections to a rig network is not an easy task if you want to make sure applications have virus protection and are secure,” Annaiyappa said.

So far, such a cloud can only be found on a handful of rigs used by those testing drilling automation. Creating automation-compatible rigs will require industry agreement on standard interfaces for devices, security systems limiting who controls the equipment, and secure data historians.

Just creating a time log in the data historian raises questions about definitions. For example, Annaiyappa said operators commonly define nonproductive time as when the drill bit is not making hole, but from a driller’s point of view, it is more likely the periods when a stoppage means the rig is not getting paid its day rate.

Real-Time Quality Control

Real-time data offers a view of what is going on at any moment, including sensors going bad. A sensor that stops sending signals is easy enough to detect. Much harder is spotting one that has drifted a few degrees from an accurate reading.

If undetected, the consequences could range from time wasted rooting bad data out of big databases to false warnings that cause unneeded downtime or misleading analysis.

“This is a big problem for the industry,” said Lisa Brenskelle, a process control engineer at Chevron Energy Technology Co., who delivered a paper on work by a partnership between the company and the University of Southern California’s (USC) CiSoft Center for Interactive Smart Oilfield Technologies to create a real-time quality control system for streaming data.

It was one of two papers presented at the SPE Digital Energy Conference and Exhibition that showed early work on creating systems dealing with the challenge of constant monitoring of large networks of sensors in settings where the range of measurements is affected by the vagaries of a reservoir.

Data quality testing is not a new concern. It is a standard early step when analyzing previously recorded data. Automating that process could eliminate a tedious task “that wastes a lot of time and wastes money,” Brenskelle said. “Doing it manually takes a lot of time. If you bring in an expert, it is not a good use of their time combing through the data and weeding out the data that is wrong” before they get to the actual analysis.

Chevron’s goal is to ensure data is correct before it is saved in a data historian. A second project aims to ensure that the data compression system, used to maximize the sensor readings the historian can store, weeds out redundant data without leaving out any valuable information.

Others are looking for ways to ensure that data-driven real-time monitoring and control systems are running on accurate data. “There is a tremendous amount of emphasis in the industry on that,” said Bill Chmela, vice president for the Americas at Sekal, which analyzes real-time drilling data for indications of deteriorating conditions that could become a serious issue later on. Sekal recently added Saudi Aramco Energy Venture to a shareholder list that includes IRIS, Statoil Technology Invest, and SåkorninVest.

Chevron and USC created and tested a “streaming data cleansing” system in a laboratory at the university. It examined data recorded at a vapor recovery unit in a Chevron facility and consistently detected errors inserted into the data.

The system constantly compares incoming data from each sensor with values predicted by a model of the relationships between the sensors developed using a dynamic principal component analysis (DPCA). The difference in value between the predicted and the actual is then statistically analyzed to see if the difference indicates a sensor error.
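The production system uses dynamic PCA and Chevron’s own statistical tests; SPE 163702 has the details. As a simplified, hedged illustration of the residual idea alone, with no time-lag handling, the Python sketch below learns the correlation structure of normal two-sensor data and scores new samples by the squared prediction error of their low-dimensional reconstruction. A large error suggests one sensor is out of line. All data and names are invented for the example.

```python
import numpy as np

def fit_pca(train, n_components=1):
    """Learn the mean and principal directions from normal operating
    data (rows = time samples, columns = sensors)."""
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:n_components]  # retained principal axes

def spe(sample, mean, components):
    """Squared prediction error: how badly the low-dimensional model
    reconstructs this sample. Large values hint at a faulty sensor."""
    centered = sample - mean
    projected = components.T @ (components @ centered)
    residual = centered - projected
    return float(residual @ residual)

rng = np.random.default_rng(0)
# Two correlated "sensors" reading nearly the same process variable.
base = rng.normal(50.0, 1.0, size=200)
train = np.column_stack([base, base + rng.normal(0.0, 0.1, size=200)])
mean, comps = fit_pca(train)

normal_sample = np.array([51.0, 51.05])   # sensors agree
drifted_sample = np.array([51.0, 54.0])   # sensor 2 has drifted
print(spe(normal_sample, mean, comps) < spe(drifted_sample, mean, comps))
```

The drifted sample breaks the learned correlation between the two sensors, so its reconstruction error is far larger even though each reading is individually plausible, which is exactly why a subtle drift is invisible to single-sensor range checks.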

A project done by The University of Texas at Austin attacked the problem using a different calculation method, a Bayesian network model, which maps the web of interrelationships among components and compares sensor readings to determine if any one of them is likely to be out of line.

One thing that Brenskelle said makes the Chevron-USC system unique is that it uses a dynamic model that accounts for time lapses. For example, if the temperature rises in one spot it will take some time for that to affect other locations. A dynamic system is programmed to determine if a sudden temperature change observed by one sensor but not another nearby is an indication that one sensor is sending bad data, or is due to the time lag.

Another is its ability to calculate numbers to replace data that is missing or has been determined to be wrong by the quality control system. Brenskelle said the resulting replacement values are “not perfect but it gets you in the ballpark.” The next step is a small field test later this year of the system, which now includes some simpler checks to detect gross errors.

Early results from the Chevron-USC project have been encouraging, but Brenskelle emphasized that it is “not far enough along to begin commercializing it.”


For Further Reading

  • SPE 163690 Synthetic, Geomechanical Logs for Marcellus Shale by M.O. Eshkalak, West Virginia University, et al.
  • SPE 150942 An Early Warning System for Identifying Drilling Problems: An Example From a Problematic Drill-Out Cement Operation in the North Sea by Eric Cayeux, Sekal, et al.
  • SPE 163719 IT Infrastructure Architectures to Support Drilling Automation by Pradeep Annaiyappa, Canrig Drilling Technology
  • SPE 163702 Advanced Streaming Data Cleansing by Yingying Zheng, University of Southern California, Lisa A. Brenskelle, Chevron Energy Technology Company, et al.
  • SPE 163726 Automatic Sensor Data Validation: Improving the Quality and Reliability of Rig Data by Pradeepkumar Ashok, The University of Texas at Austin, et al.
  • SPE 163711 What Are We Going to Do With All These Wells Then? by R. Cramer, Shell