Tuesday, November 28
There is a growing trend towards the use of statistical modeling and data analytics for analyzing the performance of petroleum reservoirs. The goal is to “mine the data” and develop data-driven insights to understand and optimize reservoir response. The process involves: (1) acquiring and managing data in large volumes, of different varieties, and at high velocities, and (2) using statistical techniques to discover hidden patterns of association and relationships in these large, complex, multivariate datasets. However, the subject remains a mystery to most petroleum engineers and geoscientists because of the statistics-heavy jargon and the use of complex algorithms.
This workshop will provide an introduction to statistical modeling and data analytics for reservoir performance analysis by focusing on: (a) easy-to-understand descriptions of the commonly-used concepts and techniques, and (b) case studies demonstrating the value-added proposition for these methods. Participants are encouraged to bring their own laptops to follow along with the exercises in the workshop. Topics to be covered include:
- Terminology and basic concepts of statistical modeling and data analytics
- Multivariate data reduction and clustering (for finding sub-groups of data that have similar attributes)
- Machine learning for regression and classification (for developing data-driven input-output models from production data as an alternative to physics-based models)
- Proxy construction using experimental design (for building fast statistical surrogate models of reservoir performance from simulator outputs for history matching and uncertainty analysis)
- Uncertainty quantification for performance forecasting
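To give a flavor of the clustering topic listed above, here is a minimal, self-contained sketch of k-means clustering applied to well attributes. All data, attribute names, and groupings below are synthetic and invented purely for illustration; a real study would normalize the attributes and typically use a library implementation.

```python
# Minimal k-means sketch: group wells into sub-groups with similar
# attributes. The two attributes (e.g. normalized cumulative production
# and water cut) and all values are synthetic, for illustration only.

def kmeans(points, k, iters=20):
    """Plain k-means with deterministic initialization (first k points)."""
    centroids = [points[i] for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = tuple(sum(d) / len(members) for d in zip(*members))
    return labels, centroids

# Two synthetic sub-groups of wells: strong producers vs. weak producers.
wells = [(0.9, 0.10), (1.0, 0.20), (0.80, 0.15),   # group A
         (0.2, 0.80), (0.1, 0.90), (0.25, 0.85)]   # group B
labels, centroids = kmeans(wells, k=2)
```

Even with this naive initialization, the assignments converge so that the two synthetic sub-groups receive distinct labels, which is the "finding sub-groups of data that have similar attributes" idea in miniature.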
Wednesday, November 29
The session will focus on the challenges, best practices, and lessons learned from history matching, using field cases to provide an up-to-date assessment of how history matching is used to understand reservoir dynamics, calibrate model assumptions, and give geoscientists the feedback needed to build better predictive models. Beyond the fundamental questions about the goals and objectives of history matching, and the lessons learned, two further questions arise: how are modeling uncertainties defined, quantified, and carried through the history matching process? And what are the effects of uncertainty on the quality of the history match and the predictive ability of the models?
History matching has long been used in conventional reservoir management and optimization, and many of its techniques and workflows have also been adapted to fast-developing unconventional plays. This session will discuss state-of-the-art workflows and innovative techniques for calibrating conventional and unconventional reservoir models against various data streams: pressure or rate transient tests, tracer tests, multiphase production history, interpreted micro-seismic and time-lapse seismic information, etc. Field examples will illustrate how history-matched models create value through optimized reservoir development and management strategies.
Pilot testing is an important part of empirically verifying and de-risking hypotheses regarding reservoir performance and behavior. This session will showcase examples of the benefits of pilot testing to accelerate, scale, and apply learnings in unconventional and conventional reservoirs. Emphasis will be placed not only on the results of particular pilots but also on the workflows, diagnostic tools, and methods of analysis that were used.
In recent years, applications of probabilistic and robust, data-driven optimization methods have grown substantially in reservoir prediction and optimization. Data-driven methods can be the best choice for dealing with large subsurface uncertainty, where many disparate data types are acquired (e.g. time-lapse seismic, logs, and production data), or where the physics is poorly understood (e.g. unconventional tight oil). They are often faster to implement and run, and easier to update, than traditional physics-based methods. Powerful hybrid approaches have emerged that combine reservoir models with data-driven methods. Commercial tools and services, together with open-source software, now make data-driven approaches in the reservoir domain accessible to operators of all sizes. Validating their predictive power and value is still a challenge, as is gaining acceptance for this way of working in communities of traditional modelers. This session will combine presentations on leading-edge data science R&D with applications of data-driven methods and tools to real data and real problems in understanding and forecasting reservoir behavior.
Thursday, November 30
This session will investigate the various workflows and methodologies currently being used to extract actionable insights from historical production data. The focus will be on advances in data science and machine learning methods for analyzing and interpreting production data. Of particular interest will be cluster analysis, event detection, regression, and interpretation, as well as uncertainty analysis and decision making.
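As a hedged sketch of what event detection on production data can look like, the following flags abrupt rate drops (e.g. a shut-in or choke change) by comparing mean rates in adjacent sliding windows. The rate series, window length, and threshold are all invented for illustration and are not field-calibrated.

```python
# Naive event detection on a daily production-rate series: flag days
# where the mean rate over the following window drops sharply relative
# to the preceding window. All numbers are synthetic.

def detect_step_changes(rates, window=5, drop_frac=0.3):
    """Return indices where the mean rate falls by more than drop_frac."""
    events = []
    for i in range(window, len(rates) - window + 1):
        before = sum(rates[i - window:i]) / window
        after = sum(rates[i:i + window]) / window
        if before > 0 and (before - after) / before > drop_frac:
            events.append(i)
    return events

# Synthetic daily rates: steady around 1000, then a step down at day 10.
rates = [1000, 990, 1005, 995, 1010, 1000, 998, 1002, 996, 1004,
         400, 410, 395, 405, 400, 402, 398, 401, 399, 403]
events = detect_step_changes(rates)
```

Because the windows overlap the change, this naive detector flags a band of consecutive days around the true event; a practical workflow would merge consecutive flags into a single detected event and then pass it to interpretation.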
Reservoir characterization and well performance analysis are among the most important activities for sound reservoir engineering and field development. The current economic environment is challenging, and performing such analyses with the techniques most appropriate and reliable for each field is a key factor in reducing uncertainty and risk. Pressure transient analysis (PTA) is commonly performed for conventional reservoirs, and, more recently, rate transient analysis (RTA) has become an important evaluation technique for unconventional fields. This session will discuss best practices and experiences in analyzing production and pressure data through different PTA/RTA techniques.
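As a small worked example of the rate-time models that underpin production data analysis, here is the standard Arps hyperbolic decline relation, q(t) = qi / (1 + b·Di·t)^(1/b), with its cumulative-production integral for 0 < b < 1. The parameter values are illustrative only, and full RTA of course goes well beyond decline curves.

```python
# Arps hyperbolic decline: a classic rate-time model often used as a
# starting point when analyzing production data. Parameters below are
# illustrative, not from any real field; use consistent units
# (e.g. rate in STB/day and decline Di in 1/day).

def arps_hyperbolic(qi, di, b, t):
    """Rate at time t: qi / (1 + b*Di*t)**(1/b), for 0 < b < 1."""
    return qi / (1.0 + b * di * t) ** (1.0 / b)

def cumulative(qi, di, b, t):
    """Cumulative production to time t (analytic integral, 0 < b < 1)."""
    return (qi / ((1.0 - b) * di)) * (
        1.0 - (1.0 + b * di * t) ** (1.0 - 1.0 / b)
    )

qi, di, b = 1000.0, 0.002, 0.5   # initial rate, initial decline, b-factor
q1yr = arps_hyperbolic(qi, di, b, 365.0)   # rate after one year
```

In a fitting workflow, qi, Di, and b would be regressed against observed rate history rather than assumed, and the fitted model used to extrapolate recovery.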
From interactions among wells (during hydraulic fracturing and production) to multiple zones and completion types, history matching existing wells and forecasting future performance for both existing and new wells can become a complicated cycle of reverse engineering unless a structured, multi-disciplinary approach to evaluation is taken. In this session, we will first investigate the role that fracture modeling at the well level plays in the optimization workflow, examining the impact of heterogeneity, completion design, stage spacing, and other reservoir and completion parameters on the placement of the initial completion. Next, we will quantify, through integrated approaches to history matching and development planning (including refracs, d-fracs, and acceleration), the effects of multi-zone, multi-well pad development, both during initial development and for infill wells drilled years after the leasehold wells. Lastly, we will discuss EOR planning and optimization, from quality data collection strategies and choosing the best locations, to designing and operating pilots, to full-field implementation planning and the challenges associated with applying EOR.
This session will focus on the deployment, analysis and interpretation of emerging data types as well as other novel developments that push the boundaries of current technology for enhancing reservoir management. Such data types include NMR, time-lapse seismic, DTS/DAS data, novel tracers and others that are revolutionizing data collection, processing and interpretation. At the same time, this session will also focus on the challenges that need to be overcome before routine application of these technologies becomes possible.