Proxy-Based Metamodeling Optimization of Gas-Assisted-Gravity-Drainage Process
Unlike continuous gas injection and water-alternating-gas injection, gas-assisted gravity drainage (GAGD) takes advantage of the natural segregation of reservoir fluids to provide gravity-stable oil displacement. The feasibility of carbon dioxide (CO2) GAGD was investigated for immiscible injection through equation-of-state compositional reservoir simulation with design of experiments (DOE) and proxy modeling to obtain the optimal future-performance scenario. After history matching, Latin-hypercube sampling (LHS) was used as a low-discrepancy and more-uniform DOE approach to create hundreds of simulation runs to construct a proxy-based optimization approach.
Many enhanced-oil-recovery studies have been conducted for CO2-flooding optimization in real oil fields; however, to the best of the authors’ knowledge, no study has addressed GAGD implementation and optimization in a real oil field. To implement the optimization process, a full compositional reservoir simulation was constructed to evaluate the reservoir performance through CO2-GAGD flooding for 10 years of future reservoir prediction. Then, proxy-model optimization was conducted by manipulating the operational decision factors that influence the CO2 flooding through GAGD by means of DOE. More specifically, DOE and proxy modeling were combined to create a simplified alternative (metamodel) to the compositional reservoir simulation to optimize the operational decision factors affecting GAGD. Four proxy models were adopted and validated as metamodels for the compositional reservoir simulator: polynomial proxy model, multivariate adaptive regression splines (MARS), fuzzy logic/genetic algorithm, and generalized boosted modeling (GBM).
The GAGD concept involves placing horizontal producers at the bottom of a pay zone. Then, gas is injected in a gravity-stable mode, either immiscible or miscible, through vertical wells from the top of the formation. Because of gravity segregation resulting from the distinct fluid densities at reservoir conditions, the injected gas accumulates at the top of the pay zone to form a gas cap, providing gravity-stable oil displacement in which oil drains down toward the horizontal producers, leading to better sweep efficiency and higher oil recovery. Fig. 1 illustrates the basic concepts of GAGD.
The main-pay reservoir in the South Rumaila oil field was selected for a full, detailed compositional reservoir simulation to enhance oil recovery through GAGD. The main pay has only three lithology types—sand, shaly sand, and shale—with distinct areal permeability distributions. A high-resolution geostatistical reservoir model was reconstructed for lithofacies and petrophysical properties considering multiple-point geostatistics.
The geostatistical reservoir model was then upscaled for GAGD flow simulation. The upscaled reservoir model was exported to build the compositional reservoir-flow simulation.
For future-field-development evaluation, excellent history matching was obtained through trial and error with respect to field cumulative oil production, water injection, and fluid-flow rates. Production and injection matching is a good indicator of reservoir and fluid behavior because it reflects the matching of water cut and saturation distributions. The entire production history for the simulation period in this study is approximately 56 years.
DOE is a systematic statistical tool that creates a proper set of experiments for simulation. DOE is used for the purpose of identifying the most sensitive factors that affect the response through the sensitivity-analysis procedure. For this study, LHS was adopted with proxy modeling to determine the optimal values of the operational production decision factors for GAGD optimization.
LHS. LHS is a statistical sampling tool used to create samples of the input factors for constructing many computer experiments from a multidimensional distribution. To capture many levels of variation for each factor with a minimum number of experiments, the technique spreads a limited number of data points uniformly through the design domain by means of a space-filling design. LHS is an efficient design that produces uniform, low-discrepancy observations.
LHS generates more-efficient experiments for K parameters than simple Monte Carlo sampling. More specifically, it spreads the design points regularly because it maintains the maximum distance between each design point and all other points. Sampling K variables in LHS is performed by dividing the range of each factor into equally probable intervals and drawing exactly one sample from each interval. LHS also supports augmentation, generating a new set of experiments at random if the original data set does not represent the problem adequately. There is no exact procedure to determine the number of experiments that should be created.
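As an illustration only (not the authors' implementation), a basic LHS in Python stratifies each factor's unit range into n equal intervals, draws one point per interval, and then shuffles the strata independently for every factor so the dimensions are decorrelated:

```python
import numpy as np

def latin_hypercube(n_samples, n_factors, rng=None):
    """Stratified LHS: each factor's [0, 1) range is split into
    n_samples equal intervals, and exactly one point is drawn per
    interval; per-factor shuffling decorrelates the dimensions."""
    rng = np.random.default_rng(rng)
    # One random point inside each of the n_samples strata.
    points = (np.arange(n_samples)[:, None]
              + rng.random((n_samples, n_factors))) / n_samples
    # Shuffle the strata independently for every factor.
    for j in range(n_factors):
        rng.shuffle(points[:, j])
    return points

# Example: 100 designed runs over three operational factors.
design = latin_hypercube(100, 3, rng=42)
```

Because every interval of every factor is sampled exactly once, each one-dimensional projection of the design is uniform, which is what gives LHS its low-discrepancy, space-filling character.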
Proxy Modeling. Adding the proxy model to the LHS method enables design-quality optimization by incorporating the training data obtained from the LHS-designed simulations. More specifically, the training data are the simulation jobs created from LHS-designed experiments, and they are used to build the proxy model. To increase the chance of reaching global optima and to improve prediction accuracy of the proxy model, the training data need to be validated by verification-simulation jobs. These verification-simulation jobs are created iteratively to ensure a 95% confidence interval between the proxy-predicted and simulation-actual objective functions.
The proxy model is updated frequently after adding a new set of simulation jobs (verifications) to obtain the true optimal solution. The LHS-plus-proxy model was adopted for optimization of production-control parameters through the GAGD process.
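The train/verify/update loop described above can be sketched in Python. Everything here is a hypothetical stand-in: run_simulator replaces a compositional-simulation job, and a plain linear least-squares proxy replaces the actual metamodels used in the study; only the loop structure mirrors the workflow.

```python
import numpy as np

def run_simulator(x):
    # Hypothetical stand-in for one compositional-simulation job.
    return 3.0 * x[0] - 2.0 * x[1] + 1.0

def fit_proxy(X, y):
    # Simple least-squares linear proxy; any metamodel could be plugged in.
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda x: coef[0] + coef[1:] @ np.asarray(x, dtype=float)

rng = np.random.default_rng(0)
X = rng.random((50, 2))                      # LHS-style training design
y = np.array([run_simulator(x) for x in X])
proxy = fit_proxy(X, y)

# Verification loop: run fresh simulation jobs, check proxy agreement,
# fold the new points into the training set, and refit until the
# mismatch on unseen runs falls below tolerance.
for _ in range(20):
    X_new = rng.random((10, 2))
    y_new = np.array([run_simulator(x) for x in X_new])
    err = np.max(np.abs(np.array([proxy(x) for x in X_new]) - y_new))
    X = np.vstack([X, X_new])
    y = np.concatenate([y, y_new])
    proxy = fit_proxy(X, y)
    if err < 0.05:
        break
```

The key point the sketch captures is that the error check is made on fresh verification runs before they are added to the training set, so agreement is measured on points the proxy has not yet seen.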
The resulting polynomial proxy model is then included in the response-surface methodology (RSM) for modeling the response factor as a function of the input variables. Unlike linear models, RSM considers either a polynomial or an ordinary kriging model to obtain the expected value of the response factor.
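A minimal sketch of a quadratic response surface, assuming two input factors and synthetic data: the full-quadratic basis (intercept, linear, interaction, and squared terms) is fit by ordinary least squares, which is the standard construction behind a polynomial RSM proxy.

```python
import numpy as np

def quadratic_design_matrix(X):
    """Full quadratic basis for two factors:
    [1, x1, x2, x1*x2, x1^2, x2^2]."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

def fit_response_surface(X, y):
    # Ordinary least squares on the quadratic basis.
    beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
    return beta

# Toy response with known quadratic structure.
rng = np.random.default_rng(1)
X = rng.random((30, 2))
y = 2.0 + 1.5 * X[:, 0] - 0.5 * X[:, 0] * X[:, 1] + X[:, 1] ** 2
beta = fit_response_surface(X, y)
```

With noise-free quadratic data, the fit recovers the generating coefficients exactly; with real simulation outputs, the residuals quantify how well a second-order surface can stand in for the simulator.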
Additional Validation and Comparison
From the DOE and polynomial proxy modeling, 643 simulation jobs were created for training and validation runs. These runs then were adopted for the comparison of three other proxy models: MARS, fuzzy logic/genetic algorithm, and GBM.
MARS. MARS is a nonparametric regression procedure that automatically fits the relationship between variables, accounting for nonlinearity by use of piecewise-linear segments called splines. In MARS, a set of coefficients and basis functions, derived from the experimental data, is used to build the relationship between the response parameter and the predictors. MARS is suitable for high-dimensional predictors because the basis functions partition the input data into regions, each with its own set of coefficients, which limits the influence of possible outliers.
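The hinge (spline) basis at the heart of MARS can be illustrated with a toy one-dimensional fit. Here the knot is fixed in advance purely for illustration, whereas real MARS searches over knots and variables greedily and then prunes:

```python
import numpy as np

def hinge(x, knot, sign):
    """One MARS basis function: max(0, x - knot) if sign = +1,
    max(0, knot - x) if sign = -1."""
    return np.maximum(0.0, sign * (x - knot))

rng = np.random.default_rng(2)
x = rng.random(200)
# Target built from a known pair of mirrored hinges at knot 0.5.
y = 1.0 + 3.0 * hinge(x, 0.5, +1) - 2.0 * hinge(x, 0.5, -1)

# With the knot fixed, the fit reduces to ordinary least squares
# on the hinge basis [1, (x-0.5)+, (0.5-x)+].
B = np.column_stack([np.ones_like(x), hinge(x, 0.5, +1), hinge(x, 0.5, -1)])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
```

Each hinge is zero on one side of its knot, which is why the basis functions partition the input space into regions with their own local coefficients.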
Fuzzy Logic/Genetic Algorithm. Fuzzy logic is a form of knowledge representation suitable for notions that cannot be defined precisely but that depend upon their contexts. Fuzzy logic is a convenient way to construct a fuzzy model of the input and output data. A fuzzy-logic system consists of three stages—fuzzifier, fuzzy inference system, and defuzzifier. The mechanism of fuzzy-logic systems is as follows. The crisp inputs to the system are transformed into fuzzy inputs in the fuzzifier stage. Then, fuzzy inputs are propagated to the inference system where the actual computation is performed. The rule base, where the expert knowledge is contained, is combined with fuzzy inputs and the inference engine to produce fuzzy outputs for each rule in the rule base. These fuzzy outputs form a fuzzy set, which is transformed into a crisp value in the defuzzifier stage. The genetic algorithm, however, is a random search tool to generate potential solutions that compete with each other in order to find an optimal solution.
The fuzzy-logic/genetic-algorithm approach is an evolutionary algorithm in which a population of fuzzy systems, generated randomly by the genetic algorithm, is evolved for use as a prediction model.
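As a generic illustration of the genetic-algorithm component only (independent of the fuzzy systems it evolves in the study), a minimal real-coded GA with tournament selection, blend crossover, and Gaussian mutation can be sketched as follows; all operator choices here are assumptions for the sketch:

```python
import numpy as np

def genetic_maximize(fitness, bounds, pop_size=40, generations=60, rng=None):
    """Minimal real-coded GA over a scalar decision variable."""
    rng = np.random.default_rng(rng)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=pop_size)
    for _ in range(generations):
        fit = fitness(pop)
        # Tournament selection: each slot keeps the fitter of two candidates.
        a, b = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where(fit[a] > fit[b], pop[a], pop[b])
        # Blend crossover with a shuffled partner, then Gaussian mutation.
        partners = parents[rng.permutation(pop_size)]
        w = rng.random(pop_size)
        pop = w * parents + (1.0 - w) * partners
        pop += rng.normal(0.0, 0.05 * (hi - lo), pop_size)
        pop = np.clip(pop, lo, hi)
    return pop[np.argmax(fitness(pop))]

# Example: the maximum of a smooth 1-D objective on [0, 4] is at x = 2.
best = genetic_maximize(lambda x: -(x - 2.0) ** 2, (0.0, 4.0), rng=3)
```

The competing-solutions idea is visible in the tournament step: candidate solutions are compared pairwise, and only the fitter of each pair propagates to the next generation.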
GBM. GBM is a powerful machine-learning tool designed to capture complex nonlinear function dependencies. GBM has been applied efficiently in many data-driven tasks with high accuracy in modeling and prediction of response variables. In GBM, accurate modeling is obtained by consecutively fitting new models to reduce the variance between predicted and observed responses. The main idea of GBM is to fit each new model so that it is maximally correlated with the negative gradient of the loss function. The loss functions in GBM penalize large deviations from the target outputs and neglect small residuals.
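The residual-fitting idea can be sketched with regression stumps under squared loss, where the negative gradient is simply the current residual. This is a toy one-dimensional illustration, not the GBM configuration used in the study:

```python
import numpy as np

def fit_stump(x, residual):
    """Best single-split regression stump under squared loss."""
    best = None
    for split in np.unique(x):
        left, right = residual[x <= split], residual[x > split]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = (((left - left.mean()) ** 2).sum()
               + ((right - right.mean()) ** 2).sum())
        if best is None or sse < best[0]:
            best = (sse, split, left.mean(), right.mean())
    _, split, lval, rval = best
    return lambda z: np.where(z <= split, lval, rval)

def gbm_fit(x, y, n_trees=100, lr=0.1):
    """Each stage fits a stump to the negative gradient of squared
    loss (the residual) and adds it with a small learning rate."""
    pred = np.full_like(y, y.mean())
    trees = []
    for _ in range(n_trees):
        tree = fit_stump(x, y - pred)   # residual = negative gradient
        pred = pred + lr * tree(x)
        trees.append(tree)
    base = y.mean()
    return lambda z: base + lr * sum(t(z) for t in trees)

rng = np.random.default_rng(4)
x = np.sort(rng.random(80))
y = np.sin(2.0 * np.pi * x)
model = gbm_fit(x, y)
```

Because each stump targets whatever error remains after the previous stages, the ensemble progressively corrects its largest deviations first, which is the penalizing behavior the loss function encodes.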
The proxy-modeling work flow includes generating simulation jobs as training runs to build the proxy model, which was iteratively validated through four sets of validation tests. To create an accurate proxy model that truly represents the compositional reservoir simulator, polynomial regression and three other approaches were used to construct the proxy models. The four models are polynomial (quadratic) regression, MARS, fuzzy logic/genetic algorithm, and GBM. The GBM model was the most accurate metamodel for the GAGD process. The fuzzy-logic/genetic-algorithm proxy model was the second-best-matching model. The polynomial and MARS proxy models showed significant mismatch between the cumulative oil production calculated by the simulator and that predicted by the two proxy models. Additionally, the simulator- and proxy-based cumulative-oil-production outputs from the GBM and fuzzy-logic/genetic-algorithm models have better scatter-point matching than those from the polynomial-regression and MARS models. Consequently, the GBM and fuzzy-logic/genetic-algorithm models can be used as simplified alternative metamodels to the full-resolution compositional reservoir simulator for GAGD evaluation and prediction.
01 October 2017