Rapid S-Curve Update Using Ensemble Variance Analysis With Model Validation
In the complete paper, the authors propose a novel method to rapidly update the prediction S-curves given early production data without performing additional simulations or model updates after the data come in. The approach has been successfully applied in a Brugge waterflood benchmark study, in which the first 2 years of production data [rate and bottomhole pressure (BHP)] were used to update the S-curve of the estimated ultimate recovery. To the authors’ knowledge, the proposed work flow, including the model validation and the denoising techniques, is novel. The proposed work flow is also general enough to be used in other model-based data-interpretation applications.
As surveillance data are obtained from the field, the S-curves of the key metrics need to be updated accordingly. This is normally accomplished by a two-step approach. First, the data are assimilated through history matching to calibrate the model parameter uncertainties to obtain their posterior distributions. Then, a probabilistic forecast is performed on the basis of the posterior distributions of the parameters to update the S-curve of the key metrics. However, obtaining an S-curve update with the traditional approach can take weeks or months after the data come in. There is a need for rapid interpretation of the incoming data and update of the S-curve without going through a full-blown history-matching and probabilistic-forecast process.
Recently, the approach called direct forecast (also called data-space inversion) has been a focus of attention. In direct forecast, the statistical relationship between the measurement data and the business objective is established on the basis of simulation-model responses before the data acquisition. This direct relationship can then be used to rapidly update the prediction of the objective once the data become available.
In previous work, a process called canonical functional component analysis was proposed to map the data and forecast variables into a low-dimensional space, with multilinear regression implemented in the reduced space to establish the data/objective relationship. The authors explore the use of a simpler and more-intuitive method called ensemble variance analysis (EVA) for rapid update of the S-curve. The idea of EVA is to exploit the covariance between data and objectives and then use an analytical formula to calculate the posterior S-curve. In the complete paper, the authors adapt the formulation for rapid S-curve update after data come in.
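The reduced-space regression idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the ensemble is synthetic, plain PCA stands in for canonical functional component analysis, and all sizes and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prior ensemble standing in for 200 simulation runs:
# each row of D is a simulated data response (e.g., 24 monthly values),
# and h is the corresponding forecast objective (e.g., cumulative oil).
n_runs, n_data, n_latent = 200, 24, 3
latent = rng.normal(size=(n_runs, n_latent))
D = latent @ rng.normal(size=(n_latent, n_data))
h = latent @ rng.normal(size=n_latent) + 0.05 * rng.normal(size=n_runs)

# Reduce the data space with PCA (a simple stand-in for canonical
# functional component analysis).
mu_d = D.mean(axis=0)
_, _, Vt = np.linalg.svd(D - mu_d, full_matrices=False)
k = 3                                   # retained components (assumption)
scores = (D - mu_d) @ Vt[:k].T

# Multilinear regression in the reduced space: objective ~ scores.
A = np.column_stack([np.ones(n_runs), scores])
coef, *_ = np.linalg.lstsq(A, h, rcond=None)

# Once field data arrive, predict the objective directly, with no
# further simulation runs.
d_obs = D[0] + 0.05 * rng.normal(size=n_data)   # synthetic "observed" data
z_obs = (d_obs - mu_d) @ Vt[:k].T
h_pred = coef[0] + z_obs @ coef[1:]
```

Because the regression is fitted before the data acquisition, the prediction step after data come in is essentially instantaneous.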
While the direct forecast has attracted attention, less attention has been paid to validating the consistency between observed data and the simulation model after data come in. Blindly applying the direct-forecasting formula without identifying unmodeled features can lead to an incorrect posterior S-curve and misinformed decisions. The authors propose a procedure that detects and removes features in the measurement data that are inconsistent with the simulation responses. The complete paper describes formulation of the problem of updating S-curves with measurement data.
Basic Assumptions. In EVA, the relationship between the objective function and the observation data is directly modeled as a multivariate Gaussian distribution. Under the multi-Gaussian assumption, analytical formulae are available for the posterior mean and variance of the objective function given a realization of observation data. The reduction of variance and the shift in mean depend on how correlated the data and the objective function are. The stronger this correlation, the more informative the data, and the larger the update to the mean and variance of the objective-function S‑curve. It can also be noted that the mean shift is a linear function of the deviation of the observed data from the expected value of the simulated data. With the posterior mean and variance estimated, the posterior S-curve can be obtained by scaling the previous S-curve. The work flow to calculate the expected uncertainty reduction from simulation results is summarized in the complete paper.
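Under the multi-Gaussian assumption, the update reduces to the standard Gaussian-conditioning formulas, which can be sketched on a toy ensemble. The ensemble, data dimension, and error covariance below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ensemble: h is the objective (one scalar per run) and
# each row of D is the matching simulated data response.
n, p = 200, 4
h = rng.normal(size=n)
D = np.outer(h, np.linspace(0.5, 2.0, p)) + 0.3 * rng.normal(size=(n, p))

mu_h, mu_d = h.mean(), D.mean(axis=0)
var_h = h.var(ddof=1)
C_dd = np.cov(D, rowvar=False)                  # data auto-covariance
C_hd = np.array([np.cov(h, D[:, j], ddof=1)[0, 1] for j in range(p)])
R = (0.05 ** 2) * np.eye(p)                     # measurement-error covariance (assumption)

d_obs = D[0] + 0.05 * rng.normal(size=p)        # one "observed" realization

# Gaussian conditioning: the mean shift is linear in (d_obs - mu_d),
# and the variance reduction does not depend on the observed values.
K = C_hd @ np.linalg.inv(C_dd + R)
mu_post = mu_h + K @ (d_obs - mu_d)
var_post = var_h - K @ C_hd
```

The posterior S-curve is then the cumulative distribution of N(mu_post, var_post): the prior S-curve shifted by the mean update and rescaled by the variance reduction.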
Model Setup. The application of the proposed S-curve-update work flow is illustrated with a waterflood benchmark case. The reservoir model considered in this example is the Brugge model. The structure has an elongated half-dome shape with an internal fault in the oil column. The simulation model contains nine layers that are divided into two units: Unit 1 includes the upper five layers, and Unit 2 includes the lower four layers.
Table 1 of the complete paper summarizes the 11 uncertainty parameters and their respective ranges considered in this model. The uncertainty parameters include imbibition parameters such as relative permeability exponents and endpoints, as well as static parameters such as permeability and porosity multipliers.
The objective function is the field cumulative oil production after 18 years of production with waterflooding. The waterflood-development scenario consists of drilling 20 producers in the oil zone and 10 injectors along the oil/water contact.
The pilot project is planned to start 2 years before the commencement of full-field development and involves one injector (Well I7) and one producer (Well P15). The data to be collected from the pilot include monthly BHP and water cut (WCT) from Well P15. The standard deviations for the BHP and the WCT measurements are defined as 50 psi and 2%, respectively. The data collected are divided into three sets, and uncertainty reduction is considered for each set separately. The first data set contains monthly BHP data from P15 for 2 years, which amounts to 24 data points. The second data set contains monthly WCT data from P15 for 2 years, which also amounts to 24 data points. The third data set is a combination of the first two data sets and therefore has 48 data points.
For both the pilot and the full-field-development periods, producers are controlled at a constant liquid-production rate of 3,000 std m3/d, and injectors are controlled at a constant liquid-injection rate of 4,000 std m3/d.
Benchmark Setup. To validate the updated S-curves from EVA, the authors benchmark them against results from rejection sampling, a rigorous technique for obtaining the posterior distribution.
While rejection sampling is theoretically rigorous, it suffers from several major drawbacks for practical application. First and foremost, a large number of samples are needed in order to generate enough accepted samples to characterize the posterior S-curve. The computational cost is often prohibitive when the samples are evaluated by use of reservoir simulation. One approach to avoid this drawback is to construct a numerical proxy from simulation samples and then evaluate posterior distribution based on proxy samples.
Even with the use of a proxy, rejection sampling can still be expensive when the error is small or when the number of data points to be assimilated is large. This is because, in such cases, the acceptance probability can be arbitrarily low. There is no guarantee that there will be enough accepted samples to characterize the posterior distribution.
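The mechanics of rejection sampling, and why its acceptance rate collapses, can be sketched on a toy one-parameter problem. The "proxy" function, prior range, and noise level below are illustrative assumptions, not the Brugge setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for a simulator (or a trained proxy): the data response
# is a simple function of one uncertain parameter x.
def proxy(x):
    return np.array([x, 2.0 * x, x ** 2])

sigma = 0.1                                   # measurement-error std (assumption)
d_obs = proxy(0.3) + sigma * rng.normal(size=3)

def log_like(x):
    r = proxy(x) - d_obs
    return -0.5 * float(r @ r) / sigma ** 2

# Rejection sampling: draw from the prior, accept with probability
# L(x) / L_max. Acceptance collapses as sigma shrinks or the number
# of data points grows -- the drawback noted above.
x_prior = rng.uniform(-1.0, 1.0, size=50_000)
logL = np.array([log_like(x) for x in x_prior])
accept = np.log(rng.uniform(size=x_prior.size)) < logL - logL.max()
x_post = x_prior[accept]
acceptance_rate = x_post.size / x_prior.size
```

The empirical CDF of the accepted samples is the posterior S-curve; with a smaller `sigma` or more data points, `acceptance_rate` drops sharply and the accepted set may become too small to characterize it.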
The EVA method, on the other hand, does not share this limitation of rejection sampling.
EVA Result. The EVA S-curve-update study is based on 200 simulation runs sampled from the uncertainty parameter space with a space-filling design. In this case, the EVA result matched the rejection-sampling result very well.
For the second data set, rejection sampling failed because of insufficient accepted samples: of the 50,000 samples, only seven were accepted. The EVA method, on the other hand, still provides a reasonable posterior S-curve.
Fig. 1 shows the result of an exhaustive validation study. Each point in this figure is generated by an S-curve update run by taking one of the 200 simulated data realizations as the observed data. Shown on the x-axis are the posterior means calculated from rejection sampling, and on the y-axis are the posterior means calculated from the EVA method. The solid red line indicates the 45° line. It is clear that the mean shift predicted by EVA is comparable to that predicted by the rejection-sampling method.
Model Validation for EVA
EVA is shown to work reasonably well for synthetic observed data. In a real-field application, however, the observed data may not be usable “as is” for various reasons. For example, physics or events may occur in the field that were not modeled in the simulation. There might also be problems in the uncertainty characterization; for instance, the ranges of the uncertainty parameters might be too narrow, or key uncertainties might be missing, so that the simulated responses and objective function fail to capture the true behavior. In such situations, directly applying the EVA S-curve-update method could yield misleading results.
In the complete paper, the authors propose two diagnostic procedures to address two of the common problems: unmodeled physics and event detection by use of principal-component analysis, and model validation with a hypothesis test.
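The flavor of the first diagnostic can be sketched as follows: project the observed data onto the principal components of the simulated responses, and flag data whose residual lies largely outside that subspace. This is only an illustrative sketch; the ensemble, the number of retained components, and the percentile-based threshold (standing in for the paper's hypothesis test) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical ensemble of simulated data responses (rows = runs).
n, p = 200, 24
D = rng.normal(size=(n, 3)) @ rng.normal(size=(3, p)) + 0.1 * rng.normal(size=(n, p))

# PCA basis spanned by the simulated responses.
mu = D.mean(axis=0)
_, _, Vt = np.linalg.svd(D - mu, full_matrices=False)
basis = Vt[:3]                      # retained components (assumption)

def unexplained_fraction(d):
    """Fraction of the signal outside the simulation PCA subspace."""
    r = d - mu
    resid = r - (r @ basis.T) @ basis
    return np.linalg.norm(resid) / np.linalg.norm(r)

# Calibrate a threshold on the ensemble itself (a simple stand-in for
# the paper's hypothesis test): flag observed data whose residual
# fraction exceeds the ensemble's 99th percentile.
threshold = np.quantile([unexplained_fraction(d) for d in D], 0.99)

d_ok = D[0]                                             # consistent data
step = np.concatenate([np.zeros(p // 2), np.full(p - p // 2, 5.0)])
d_bad = D[0] + step                 # unmodeled event, e.g., a step change

flag_bad = unexplained_fraction(d_bad) > threshold
```

A flagged feature (here, the step change in the second year) is exactly the kind of unmodeled component that would be detected and removed before applying the EVA update.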
The authors present the use of EVA for rapid S-curve update. EVA establishes the statistical relationship between measurement data and objective uncertainty on the basis of precomputed simulation runs. This relationship can then be used to update the S-curve of the objective uncertainty instantly after data come in. EVA thus provides a much-more-efficient alternative to the traditional history-matching-and-prediction work flow and can reduce project turnaround time significantly.
01 April 2018