Upstream Data Can Be “a Rat’s Nest of a Mess”
Upstream companies that don’t have a coherent strategy around managing and interpreting their data can find themselves in “a rat’s nest of a mess.”
This aptly Texas-flavored warning came from Melissa Suman, Schlumberger vice president, data and digital, during the opening session of this week’s SPE Annual Technical Conference and Exhibition (ATCE) in Dallas. The panel discussion commingled executives from big oil and gas firms and big tech—a gathering that was representative of a broader merger of disciplines and technologies currently taking place in the upstream space.
Suman explained that the industry has “a data wrangling issue” after more than a century of drilling wells and producing oil and gas. Over that time, data have been acquired, sorted, and presented in many different formats and media, using many different methods. Bringing them together in a central platform “is one of our biggest challenges,” she said.
With digital enablement taking hold across the industry at a rapid pace—especially over the next year or so as deployment ramps up—she predicts issues stemming from the large volume of new incoming data. She advises her clients to have a scalable data strategy from the outset of digital adoption to accommodate this expansion.
Newly implemented data strategies should incorporate data compliance, data entitlement, and data source integrity. Companies must understand global data ownership and classification standards, determine who can access the data within their organizations and how the data are shared, and begin storing and transforming data as soon as they're received.
Offering the big tech perspective, Darryl Willis, Google Cloud vice president of the oil, gas, and energy sector, reiterated what many outside and inside the industry have observed: Oil and gas is lagging behind other industries in digital adoption. “But it is also one of the most data intensive and data-rich industries, which creates an amazing opportunity,” he said. This is especially the case for Google.
“One of the things we see at Google Cloud is that it's not the company with the best algorithm that's going to win—it’s the company with the most data.” The caveat, he said, is actually using those data, given that the oil and gas industry uses only 1–5% of the data it collects.
Supermajor Shell collects oodles of data, which are filtered through a technical data management group that’s considered a discipline within the company, just like those populated by engineers and scientists, noted Alisa Choong, chief information officer, projects and technology, at Shell Global Solutions International. “Their job is to make sure that the data is where they're supposed to be and we can get access to it quick.”
Finding What Works
Operators working in the unconventional plays of North America have long struggled to generate free cash flow given the difficult and expensive nature of their work. But there are big rewards to be had through efficient development, and big data, properly utilized, can help.
Encana, which has a core position in the Permian Basin, is trying to figure out how to “optimally develop this incredible resource that has more oil per section than almost anywhere on Earth—but it’s very complex and exists in lots of layers,” said Doug Suttles, Encana president and chief executive officer.
In an effort to develop in 3D, or refine what the company calls “cube development,” Encana examined whether it could pull data from different sources to pinpoint the zone from which oil was produced. It takes “a massive amount of data to do that,” Suttles said. “And just to give you a sense of the scale: What we can now do in 1 day used to take 7 years to do. That’s the processing of all this information.”
Data are leveraged in Encana’s two North American operating centers, which monitor every well the company has drilled 24 hours a day. From there, wells can be started or stopped, and analytics can be applied to figure out how to improve their performance. The company is also working to assign personnel to more complex tasks rather than the daily minutiae of oil and gas operations.
With these advancements, Suttles takes umbrage at the characterization of shale development as a manufacturing process. “We think it's an innovation process,” he said. However, repeatability is an essential aspect of innovation. “When our team comes up with a great idea, whether it’s a different completion design or it’s a new way to use information to improve the performance of a gas compressor, if it works, we can then do it over and over again.”
But he warned against fixating on existing data alone. For example, in an effort to improve well performance and recovery on a piece of land in the Permian, Encana collected data from 430 wells and examined 31 different variables, simultaneously changing multiple parameters to determine what affected performance. After much work, the team identified the variables it believed moved the needle most.
“They lost sight of the fact that we had no data on what we hadn’t tried yet,” Suttles noted. “And when our best experts came in, they said what you’re telling from that data isn’t right—it doesn’t match up with our understanding of the physics.” The company then tested additional concepts, “and we probably had our most significant breakthrough in performance,” he said.
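The pitfall Suttles describes—drawing confident conclusions from data that only cover the designs already tried—can be illustrated with a small numerical sketch. The example below is hypothetical and not based on Encana’s data: it assumes a made-up “true” response in which well performance falls off at high values of some completion parameter, while the historical wells sampled only the low end of the range. A model fit to that data extrapolates badly to an untested design.

```python
import numpy as np

# Hypothetical ground truth (not Encana's data): performance rises,
# then reverses, as a completion parameter x increases.
def true_performance(x):
    return 100 * x - 8 * x**2

rng = np.random.default_rng(0)

# Historical wells only sampled x in [0, 4] -- the designs already tried.
x_obs = rng.uniform(0, 4, 50)
y_obs = true_performance(x_obs) + rng.normal(0, 10, 50)

# A straight-line fit looks adequate over the observed range...
slope, intercept = np.polyfit(x_obs, y_obs, 1)

# ...but extrapolating to an untested design (x = 8) is badly wrong:
x_new = 8.0
predicted = slope * x_new + intercept
actual = true_performance(x_new)
print(f"predicted: {predicted:.0f}, actual: {actual:.0f}")
```

Over the sampled interval the linear fit tracks the data closely, yet at the untested design point it overestimates performance by a wide margin—the model contains no information about what was never tried, which is exactly why Encana’s experts insisted on checking the data-driven conclusions against the physics.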
While some companies aren’t doing enough to evolve their digital strategies, Willis said, many are doing too much. In his experience, the industry “has mastered the art of POCs—proofs of concepts. We do that masterfully well. But what we don’t do well is scale.” Companies should “identify one or two or three things that they think could be transformational for their business.”
Suttles said Encana has to pick and choose which digital technologies will deliver value soonest because personnel, capital, and time are limited. The company encourages collaboration among its internal teams to leverage the ideas that do work.
Internal and External Collaboration
The panel agreed that, externally, companies should break through their competitive and secretive barriers to collaborate with each other on big data and technology development.
Suman noted that, earlier this year, Schlumberger collaborated with Woodside Petroleum to test traditional workflows around interpretation and modeling by incorporating cloud and digital technologies. The result, she said, was that stratigraphic interpretations were performed in 3 days instead of 2 weeks. “One of the real step changes has been allowing our people more time to focus on the more important parts of the process and taking a lot of the repetitive and time-consuming elements out.”
Schlumberger and Anadarko also this year worked on machine learning in geoscience, with Anadarko sharing and using a proprietary algorithm for the automatic interpretation of seismic and well logs.
Willis urged companies to seek unconventional partnerships both inside and outside the industry. “One of the things that I find very interesting in my current role,” he said, “is that I sit with the healthcare and life sciences team [at Google] and there are all kinds of similarities between energy—oil and gas in particular—and healthcare.”
For example, an MRI or PET scan can be seen in the same way as a seismic line. The brain can be analogous to an oil and gas reservoir. In other words, there could be transferable lessons between the industries, Willis said.
Matt Zborowski, Technology Writer
25 September 2018