At the 2011 SPE Annual Technical Conference and Exhibition (ATCE) in Denver, a panel discussed the question, “10 Years of Digital Energy: What Have We Learned?” The panelists, mostly experts from major operators and service companies, centered their discussion on two main themes:
- Consolidating and Institutionalizing Successful Patterns
- Handling of Large, Disparate Data Sets
As an industry, we clearly have moved beyond the heady first years of the digital transformation, when many anticipated that within a few years we would have a consolidated software solution spanning the full scope of E&P workflows. While the stories told by such a panel naturally focused more on success cases (particularly for large greenfield applications), what emerges is evidence of large-scale benefits when a company invests in repeating successful patterns at its scale of operation—this holds true for both operators and service companies. The clearest examples of such success were in the fundamental aspects of data quality, exception-based surveillance, standardization of human workflows, and large-scale applications of focused software solutions, often requiring an investment cycle of at least 5 years. Focusing on scaling fundamental aspects to broad application provided significant return while managing risk, with the result of sustaining those programs that delivered benefits. Where the human workflow did not come to rely on a new technology deployment, any gains found in the first year or two following the deployment were not sustained. So, a simple “fast follower” approach is unlikely to succeed unless the follower can adapt the leader’s success well to its own culture and processes.
Of course, the challenges are becoming more complex. Scaling successes from large, greenfield applications (in which initiatives may be justified easily) to brownfields, “difficult oil and gas,” and IOR/EOR will require us to focus more on the “big-data” challenge and the efficient application of qualified data to improve reservoir management through better daily decisions and more-accurate forecasting. In many cases, the problem has moved from a lack of data to an inability to contextualize the available data quickly within a particular decision process. As a result, information relevant to a decision may exist somewhere in the organization, but it cannot easily be applied to the decision because it first must be found and qualified, often through an undocumented process, before it can be used.
Once organizations can depend on a service level for qualified data, they can begin to exploit the data by use of established patterns, such as those outlined by the ATCE panelists, and emerging patterns, as illustrated by the papers in this feature.
Read the paper synopses in the May 2012 issue of JPT.
John Hudson, SPE, Senior Production Engineer, Shell, has more than 25 years’ experience in multiphase-flow research, flow-assurance design of deepwater production systems, and development of model-based real-time operations-decision systems. Since joining Shell, he has held technical and managerial positions in Europe and North America, including leading a team that developed a model-based, cloud-computing solution that was deployed globally to gas plants with a total production capacity in excess of 10 Bcf/D. Hudson currently provides production-engineering support for the development of a next-generation simulator. He holds a PhD degree in chemical engineering from the University of Illinois. Hudson serves on the JPT Editorial Committee.