Monday, February 17
Data-Driven Reservoir Modeling (Reservoir Analytics) is defined as the application of Artificial Intelligence and Machine Learning to fluid flow through porous media. It is the manifestation of the digital transformation as it applies to subsurface modeling in the upstream exploration and production industry. Effective and impactful use of this technology, which is the future of reservoir simulation and modeling, is becoming an important point of competitive differentiation in our industry.
The foundation of Data-Driven Reservoir Modeling (Reservoir Analytics) is solid domain expertise (reservoir engineering, reservoir modeling, and reservoir management) and a comprehensive understanding of the physics and geology of fluid flow through porous media. It overcomes the over-simplifications associated with applied statistics and curve-fitting approaches (including CRM). The major distinguishing factors of Data-Driven Reservoir Modeling when compared to traditional numerical reservoir simulation are (a) avoidance of preconceived notions and biases, (b) avoidance of significant approximations and simplifications, (c) complete automation of the history-matching process, (d) generation of accurate and fast subsurface models for practical reservoir management, and (e) comprehensive and practical Field Development Planning (FDP), Production and Recovery Optimization (PRO), and Uncertainty Quantification (UQ) with tens of millions of simulation runs.
Data-Driven Reservoir Modeling (Reservoir Analytics) includes a set of tools and techniques that provide the means for extracting patterns and trends from all field-measured data (drilling, completion, formation, seismic, operations, production, well tests, well logs, cores, etc.) and constructing predictive models that are validated through blind history matching in time and space. It provides the ultimate assistance in short-, medium-, and long-term decision making and optimization. Attendees will become familiar with the fundamentals of data-driven analytics, Artificial Intelligence, and Machine Learning, including the most popular techniques used to apply them, such as artificial neural networks, evolutionary computing, and fuzzy set theory. This course will demonstrate through actual case studies (and real field data from thousands of wells) how to impact infill well placement, completion, and operational decision making based on field measurements rather than human biases and preconceived notions.
- Basics of Artificial Intelligence (AI) and Machine Learning
- Top-Down Modeling (TDM)
- The Spatio-Temporal Database
- History Matching the Top-Down Model
- Post-Modeling Analysis of the Top-Down Model
- Examples and Case Studies
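The blind history matching in time described above can be illustrated with a minimal, hypothetical sketch: fit a simple model to early production data, then score it on a held-out later window. The linear decline data, least-squares model, and train/blind split below are illustrative assumptions, not the actual TDM workflow.

```python
# Illustrative sketch only: a data-driven model fit on early time steps and
# validated on a held-out ("blind") later window. All data are synthetic.

def fit_least_squares(xs, ys):
    """Ordinary least squares for y = a*x + b (single predictor)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def predict(model, x):
    a, b = model
    return a * x + b

# Synthetic monthly rates: early months train the model, later months are blind.
months = list(range(24))
rates = [1000.0 - 30.0 * m for m in months]      # simple linear decline, illustrative only
train_m, blind_m = months[:18], months[18:]
train_r, blind_r = rates[:18], rates[18:]

model = fit_least_squares(train_m, train_r)
# Worst-case error on the blind window measures out-of-sample predictive skill.
blind_error = max(abs(predict(model, m) - r) for m, r in zip(blind_m, blind_r))
```

A model that matches the training window but fails the blind window would be rejected; this split-in-time validation is what distinguishes predictive modeling from curve fitting.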
Tuesday, February 18
Initial reservoir model-building activities have focused on distributing sub-seismic-scale properties across reservoirs based on log data from a limited number of sampled locations, resulting in large initial uncertainties. With modern instrumentation and better technologies, large amounts of information are gathered locally over refined time scales. Though spatially limited, these data are collected over large time spans from a variety of sources, potentially including permanent pressure gauges, SCADA units, distributed temperature sensing (DTS), and distributed acoustic sensing (DAS). This session discusses various methodologies and algorithms used to integrate such high-frequency data, build models, and refine uncertainties.
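One basic ingredient of integrating such high-frequency data is resampling it onto the coarser time steps a model uses. A minimal sketch, assuming one-second gauge readings averaged into 60-second model steps; the timestamps, step length, and readings are illustrative assumptions.

```python
# Illustrative sketch only: collapse high-frequency gauge samples onto coarser
# model time steps by averaging within each step.

from collections import defaultdict

def resample_to_steps(samples, step):
    """samples: list of (t_seconds, value) pairs; returns {step_index: mean value}."""
    buckets = defaultdict(list)
    for t, v in samples:
        buckets[int(t // step)].append(v)
    return {k: sum(vs) / len(vs) for k, vs in buckets.items()}

# Synthetic one-second pressure readings averaged into 60-second model steps.
gauge = [(t, 200.0 + (t % 60) * 0.1) for t in range(180)]
coarse = resample_to_steps(gauge, 60)
```

Real workflows add outlier filtering and gap handling before averaging, but the same bucketing idea underlies most high-frequency-to-model-step integration.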
The compositional fidelity of the fluid description used in production system modeling is often a compromise between adequately addressing the physics and providing sufficient computational efficiency, as projects often require integration with other parts of the production system. However, there is clearly no one-size-fits-all approach. This is sometimes true even within a single integrated model, where the compositional detail required in one area precludes practical run times in another. This session explores consistent and computationally efficient approaches for handling fluid property predictions across workflows and in various model domains.
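One common way to trade compositional fidelity for run time is to lump detailed components into pseudo-components. A minimal sketch; the component names, mole fractions, and lumping scheme below are illustrative assumptions, not a recommended characterization.

```python
# Illustrative sketch only: lump a detailed composition into pseudo-components,
# preserving total mole fraction while reducing the component count.

def lump(composition, scheme):
    """composition: {component: mole fraction}; scheme: {pseudo: [components]}."""
    return {pseudo: sum(composition[c] for c in members)
            for pseudo, members in scheme.items()}

full = {"C1": 0.55, "C2": 0.10, "C3": 0.08, "C4": 0.07, "C5": 0.06, "C6+": 0.14}
scheme = {"light": ["C1", "C2"], "intermediate": ["C3", "C4", "C5"], "heavy": ["C6+"]}
lumped = lump(full, scheme)
```

The trade-off discussed above is exactly this: three pseudo-components run far faster in a coupled model than a full description, at the cost of compositional detail that may matter elsewhere in the system.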
Well design specification and functionality will vary considerably as a function of the asset, the field development plan, the location of the well, the surveillance plan, the reservoir management strategy, and the operating guidelines chosen. A deepwater, offshore, high-deliverability gas well design will differ from a land-based, multi-fracture, horizontal, unconventional oil well design, which, in turn, will differ from a pair of Steam-Assisted Gravity Drainage well designs. These variations will also dictate the type of reservoir data and knowledge from diverse sources that must be integrated into the well design. This session explores the integration methods used by the well engineer and petroleum engineering team working in these different environments, and how they address the availability and integration of data in the well design to ensure value delivery throughout the life of the field.
Wednesday, February 19
The challenges of fast-track development have led to production facilities and field operations being somewhat misaligned in terms of timing and capacity. This session considers case histories and best practices for design and operations in which integration played a major role in optimization and in making appropriate reservoir management feasible.
History matching can be defined as the calibration process for our models, in which we address our lack of understanding or inadequate description of the reservoirs. Proper calibration requires feedback from the static model (“big loop”) and can be time consuming. Even with this feedback during calibration, the remaining uncertainty can result in multiple acceptable solutions, or may not be able to address some of the unknowns that can be critical to the accuracy of the forecasts. Topics that can be addressed in this session include shortening the “big loop”, calibration integrated with geomechanics and wellbore hydraulics, quantification of the multiple solutions obtained through assisted history-matching applications, forecasting under uncertainty, and the inclusion of new data sources.
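Assisted history matching can be viewed as minimizing the mismatch between observed and simulated responses over uncertain parameters. A minimal sketch with a toy exponential-decline forward model and a coarse grid search; the forward model, parameter grid, and synthetic data are illustrative assumptions, and real workflows retain multiple acceptable matches to carry forecast uncertainty rather than a single best fit.

```python
# Illustrative sketch only: calibration posed as mismatch minimization over
# two uncertain parameters (initial rate q0 and decline constant d).

import math

def forward(q0, d, times):
    """Toy forward model: exponential rate decline q(t) = q0 * exp(-d * t)."""
    return [q0 * math.exp(-d * t) for t in times]

def mismatch(q0, d, times, observed):
    """Sum of squared errors between simulated and observed rates."""
    return sum((s - o) ** 2 for s, o in zip(forward(q0, d, times), observed))

times = list(range(12))
observed = forward(800.0, 0.15, times)   # synthetic "truth" standing in for field data

# Coarse grid search over (q0, d); practical tools use gradient-based,
# evolutionary, or ensemble methods instead of exhaustive enumeration.
best = min(
    ((q0, d) for q0 in range(600, 1001, 50) for d in [i / 100 for i in range(5, 31)]),
    key=lambda p: mismatch(p[0], p[1], times, observed),
)
```

Replacing `min` with "keep every parameter pair whose mismatch is below a tolerance" yields the multiple acceptable solutions the session description refers to.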
Advanced Integrated Models can be described as strongly coupled physics (reservoir, wellbore, facilities), numerics, and economics models. In addition to the complexity of developing efficient integrated models, they give rise to complex optimization problems, especially those related to capital allocation spanning multiple time scales (short- and long-term objectives) in uncertain environments. The latest developments in robust optimization and machine-learning algorithms may enable us to solve these very complex optimization problems efficiently while accounting for various uncertainties. Topics addressed in this session include the challenges and benefits of optimal operations, together with the theoretical and practical implementation of life-cycle decisions/strategies in these integrated systems in terms of the joint optimization of well locations, types, drilling sequence, controls, completions, and network/surface facilities.
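A toy version of the optimization-under-uncertainty problem described above: choose a discrete decision (here, a candidate well location) to maximize expected value across uncertain scenarios. All locations, scenario values, and probabilities below are illustrative assumptions.

```python
# Illustrative sketch only: pick the decision with the best probability-weighted
# value across scenarios, a basic form of optimization under uncertainty.

candidates = ["A", "B", "C"]
# value[location][scenario]: e.g. NPV under three geological realizations.
value = {
    "A": [120.0, 40.0, 60.0],
    "B": [90.0, 85.0, 80.0],
    "C": [150.0, 10.0, 20.0],
}
weights = [0.3, 0.4, 0.3]   # scenario probabilities (sum to 1)

def expected_value(loc):
    return sum(w * v for w, v in zip(weights, value[loc]))

robust_choice = max(candidates, key=expected_value)
```

Note that location "C" wins in the best-case scenario, but the expectation favors the steadier performer; real joint-optimization problems add risk measures, multiple time scales, and coupled physics on top of this basic structure.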
Smart integration of high-volume, high-frequency data within intelligent data-analytics technologies provides an attractive opportunity for significant decision-making improvements in the oil and gas industry. A robust data foundation, effortless data access, and novel yet fit-for-purpose data-driven workflows are required to enable and exploit the value of integrating and utilizing these emerging technologies. The session discusses advancements in the application of Artificial Intelligence and Machine Learning for smart data integration, including digitization, streaming and big data, and novel workflows involving pattern recognition, surrogate modeling, and deep learning for integrated design and operations.