Reservoir-simulation-model inputs are numerous, and uncertainty is pervasive—before, during, and after development. On top of that, there is always pressure to deliver quality results as quickly as possible. This gives rise to a simple question, one that has yet to find a simple answer: How refined is refined enough, and how coarse is too coarse? At the risk of oversimplification, we face a classic dichotomy. Advances in high-performance computing pull us toward ever-greater model refinement, while, simultaneously, (possibly stochastic) sampling of uncertainty pushes us toward simplified (or surrogate) models that can be run hundreds, even thousands, of times. The question is one of striking the right balance between two apparently contradictory approaches to simulation. The adage "horses for courses," though probably apt, is not particularly helpful in itself. Does one maintain two distinct models at roughly commensurate scales, or can we build a single all-purpose model, with a multiscale grid spanning different scales, that is both fast and accurate?
Dimensional scale represents just one aspect of the term "multiscale," which I have mistakenly taken to mean only the juxtaposition of geometrical scales within a single model (such as that found in coupling a simulation grid and the wellbore). The large ratio between domain size and the resolution of the geological data is usually managed by upscaling. However, so-called multiscale methods represent a new avenue of research, one that may provide a bridge between the aforementioned push and pull of refinement arising from the needs of different decision makers. Multiscale methods, the subject of ongoing study over the past decade, knit together geometrical quantities (dimensional scale) with tailored computational schemes (numerical scale). This multifaceted multiscale concept may offer a means of constructing an accurate coarser-scale model, one that honors the attributes of the fine-scale heterogeneous geological data from both numerical and spatial standpoints. This class of methods computes local basis functions for the solution variables and uses them to construct a smaller (coarse) system from which an approximate solution on the original simulation grid is recovered.
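To make the basis-function idea concrete, the sketch below builds a heterogeneous fine-scale 1D pressure system, restricts it to a small coarse system through a prolongation operator whose columns act as basis functions, and prolongates the coarse solution back to the fine grid. This is a minimal algebraic illustration only: the basis functions here are simple piecewise-constant partitions, whereas true multiscale methods compute them by solving local flow problems.

```python
import numpy as np

# Fine-scale 1D "pressure" system: -d/dx( k(x) dp/dx ) = q, discretized
# with a two-point flux approximation on n cells (zero-pressure boundaries).
n, nc = 100, 10                      # fine cells, coarse blocks
rng = np.random.default_rng(0)
k = 10 ** rng.uniform(-2, 2, n + 1)  # heterogeneous permeability at interfaces

A = np.zeros((n, n))
for i in range(n):
    A[i, i] = k[i] + k[i + 1]
    if i > 0:
        A[i, i - 1] = -k[i]
    if i < n - 1:
        A[i, i + 1] = -k[i + 1]
b = np.ones(n)                       # uniform source term

# Prolongation P: each column is a "basis function" mapping one coarse
# unknown onto the fine grid. Piecewise-constant partitions are used here
# purely for illustration; multiscale methods derive these from local solves.
m = n // nc
P = np.zeros((n, nc))
for j in range(nc):
    P[j * m:(j + 1) * m, j] = 1.0

# Galerkin coarse system: restrict, solve the small system, prolongate back.
Ac = P.T @ A @ P                     # nc x nc instead of n x n
pc = np.linalg.solve(Ac, P.T @ b)
p_approx = P @ pc                    # approximate solution on the fine grid
p_fine = np.linalg.solve(A, b)       # reference fine-scale solution
```

By construction, the coarse solve is exact in the coarse space (the restricted residual vanishes), while the quality of `p_approx` on the fine grid depends entirely on how well the basis functions capture the fine-scale heterogeneity.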
While it is too early to say whether this broader notion of multiscale (numerical and geometrical) will provide a single, unifying model for engineers, it is possible that this, or some other such method, may strike that elusive balance between refinement (accuracy) and surrogacy (speed). For those interested in reading up on this topic, the peer-reviewed papers SPE 119183 and SPE 163649 provide more detail and clarify the status of some ongoing research.
This Month's Technical Papers
Recommended Additional Reading
SPE 169063 Application of Multiple-Mixing-Cell Method To Improve Speed and Robustness of Compositional Simulation by Mohsen Rezaveisi, The University of Texas at Austin, et al.
SPE 177634 Multiscale Geomechanics: How Much Model Complexity Is Enough? by Gerco Hoedeman, Baker Hughes
SPE 174905 Experimental Design or Monte Carlo Simulation? Strategies for Building Robust Surrogate Models by Jared Schuetter, Battelle Memorial Institute, et al.
SPE 169357 Reduced-Order Modeling in Reservoir Simulation Using the Bilinear Approximation Techniques by Mohammadreza Ghasemi, Texas A&M University, et al.
William Bailey, Schlumberger
01 July 2016
Enhancing Model Consistency in Ensemble-Based History Matching
The aim of this work is to demonstrate the effectiveness of a fully integrated approach for ensemble-based history matching on a complex real-field application.
Ensemble-Based Assisted History Matching With 4D-Seismic Fluid-Front Parameterization
An ensemble-based 4D-seismic history-matching case is presented in the complete paper. Seismic data are reparameterized as distance to a 4D anomaly front and assimilated with production data.
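The distance-to-front idea can be illustrated with a short sketch: given a binary 4D anomaly map, each grid cell is assigned its signed Euclidean distance to the anomaly boundary, and those distances replace raw seismic amplitudes as the assimilated quantity. This is a generic illustration of the parameterization, not the paper's implementation; the grid, anomaly geometry, and threshold are all invented here.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy 2D map with a circular 4D anomaly (e.g., a waterflood front)
# on a 50 x 50 grid. Geometry is illustrative only.
ny, nx = 50, 50
y, x = np.mgrid[0:ny, 0:nx]
anomaly = (x - 20) ** 2 + (y - 25) ** 2 < 10 ** 2   # boolean anomaly mask

# Signed distance to the anomaly front: positive outside the anomaly,
# negative inside. These distances become the parameters to assimilate
# alongside production data, instead of raw seismic amplitudes.
signed_dist = distance_transform_edt(~anomaly) - distance_transform_edt(anomaly)
```

Working with distances rather than amplitudes turns a sharp, binary mismatch into a smooth one, which is generally friendlier to ensemble-based updates.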
Rapid S-Curve Update Using Ensemble Variance Analysis With Model Validation
In the complete paper, the authors propose a novel method to rapidly update the prediction S-curves given early production data without performing additional simulations or model updates after the data come in.
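The general idea of refreshing forecast percentiles without new simulations can be sketched as follows: reuse an existing prior ensemble, weight each member by how well its early-time response matches the observed data, and recompute the S-curve (P10/P50/P90) under those weights. Note that this is a generic likelihood-reweighting illustration, not the authors' ensemble-variance-analysis method; all quantities below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ens = 500

# Hypothetical prior ensemble: each member pairs an early-time observable
# (e.g., first-year rate) with a long-term forecast (e.g., recovery).
# Both are synthetic; in practice they come from already-completed runs.
early = rng.normal(100.0, 20.0, n_ens)
forecast = 5.0 * early + rng.normal(0.0, 50.0, n_ens)

# Observed early production and its assumed measurement error.
obs, obs_std = 120.0, 10.0

# Likelihood weights from the early-data misfit -- no new simulations.
w = np.exp(-0.5 * ((early - obs) / obs_std) ** 2)
w /= w.sum()

def weighted_percentile(v, w, q):
    """Percentile of values v under weights w (q in [0, 100])."""
    order = np.argsort(v)
    cdf = np.cumsum(w[order])
    return np.interp(q / 100.0, cdf, v[order])

# Updated S-curve: P10/P50/P90 of the forecast under the new weights.
p10, p50, p90 = (weighted_percentile(forecast, w, q) for q in (10, 50, 90))
```

Because the observation (120) sits above the prior mean (100), the reweighted P50 shifts upward relative to the unweighted ensemble median, which is exactly the kind of rapid S-curve update the synopsis describes.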
12 June 2018