Producers face a number of decision-making challenges. Specifically, they must optimize field development and operational decisions in light of the complex interplay of fiscal, market, and reservoir variables.
Data analytics is enabling new and better solutions for handling these problems. Tachyus’ Data Physics technology combines machine learning and reservoir physics to rapidly integrate relevant data sources in real time. The prescriptive analytics of the new technology enables operators to efficiently compare possible decision scenarios to balance short- and long-term trade-offs, such as ultimate oil recovery vs. production targets. This methodology, combined with a scalable cloud-based computational platform, enables closed-loop reservoir optimization, in which reservoir and surface models are frequently updated and continuously identify new optimal operational decisions.
The work flow is not intended for geological characterization of a subsurface reservoir. Ideally, the models would be used to explore the “possibility space” of millions of potential field development scenarios and to identify a few candidate scenarios that are optimal against one or more objectives.
The models use a novel approach that integrates machine-learning techniques with the underlying reservoir-physics equations, limiting solutions to those consistent with the physics. The models produce accurate predictions of the key output variables even when field conditions change dramatically between the training period and the prediction period.
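The article does not disclose Tachyus's formulation, but one common way to constrain a data-driven model with physics is to add a physics-residual penalty to the fitting objective. The sketch below is purely illustrative: a flexible proxy model (a polynomial in time) is penalized when it violates a simple exponential-decline ODE, dq/dt + D·q = 0, pulling fits toward physically consistent behavior.

```python
import numpy as np

# Hypothetical sketch, not Tachyus's actual method: combine a data-mismatch
# term with a penalty on the residual of a simple decline ODE so that the
# optimizer favors parameters consistent with the assumed physics.

def predict(coeffs, t):
    """Flexible data-driven proxy: a low-order polynomial in time."""
    return np.polyval(coeffs, t)

def physics_informed_loss(coeffs, D, t, q_obs, lam=1.0):
    """Data mismatch plus a penalty on the decline-ODE residual dq/dt + D*q."""
    q = predict(coeffs, t)
    data_term = np.mean((q - q_obs) ** 2)
    dqdt = np.gradient(q, t)                    # numerical time derivative
    physics_term = np.mean((dqdt + D * q) ** 2) # zero for a true decline curve
    return data_term + lam * physics_term
```

A curve that both matches the data and honors the decline ODE scores lower than one that merely passes near the data, which is the sense in which solutions are limited to those consistent with the physics.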
Data Physics models leverage the inherent continuity of reservoir behavior. Spatially accurate models can be built with orders-of-magnitude speedup, allowing local well-level optimization both for injection redistribution and infill-drilling purposes. The models are generated using a well-defined, automated algorithmic process.
The core modeling and optimization work flow begins with an extract-transform-load process of surface and subsurface data. Actual field data are divided into two sets: training data, which are used to identify model parameters, and validation data, which are intentionally withheld from the training process and used to validate predictive power and statistical accuracy.
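The split described above can be sketched as a chronological partition of each well's history. This is a minimal illustration; the cutoff fraction and array layout are assumptions, not details from the deployment.

```python
import numpy as np

# Minimal sketch of the chronological train/validation split described above.
# The 80/20 cutoff is an illustrative assumption.

def split_history(dates, values, train_frac=0.8):
    """Split a well's history in time order: the earlier portion trains the
    model, and the withheld later portion validates predictive power."""
    order = np.argsort(dates)
    cut = int(len(order) * train_frac)
    train, valid = order[:cut], order[cut:]
    return (dates[train], values[train]), (dates[valid], values[valid])
```

Splitting chronologically, rather than randomly, matters here: the withheld later data test genuine forecasting power rather than interpolation between known points.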
Next, the data assimilation step assigns and fits the parameters of the model, inferring from the history of each well an ensemble of fit parameters that satisfy both past data and physical laws. Once training data are fit and an ensemble is produced, the predictive accuracy of each member of the ensemble can be measured against the validation data set.
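The ensemble idea can be sketched as follows. Everything here is an illustrative stand-in: the model form (exponential decline), the randomized-search fitting, and the RMSE metric are assumptions, not Tachyus's actual data-assimilation algorithm.

```python
import numpy as np

# Illustrative sketch of ensemble fitting and validation scoring. The model
# (q = qi * exp(-D * t)), the parameter ranges, and the search strategy are
# assumptions for demonstration only.

rng = np.random.default_rng(0)

def fit_ensemble(t_train, q_train, n_samples=500, keep=10):
    """Sample candidate parameter sets (qi, D) and keep those that best
    reproduce the training history, forming an ensemble of plausible fits."""
    qi = rng.uniform(50.0, 150.0, n_samples)
    D = rng.uniform(0.1, 1.0, n_samples)
    rmse = np.array([
        np.sqrt(np.mean((qi[i] * np.exp(-D[i] * t_train) - q_train) ** 2))
        for i in range(n_samples)
    ])
    best = np.argsort(rmse)[:keep]
    return list(zip(qi[best], D[best]))

def validate(ensemble, t_valid, q_valid):
    """Score each ensemble member's predictions against the withheld data."""
    return [np.sqrt(np.mean((qi * np.exp(-D * t_valid) - q_valid) ** 2))
            for qi, D in ensemble]
```

Keeping an ensemble, rather than a single best fit, preserves a spread of parameter sets that all honor the training data, so the downstream optimization can account for predictive uncertainty.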
An example application was a deployment with a mid-sized independent that operates 150,000 B/D of production. The project was implemented on a 13,000-B/D steamflood field to optimize cyclic-steam candidate selection. The field, located in the San Joaquin Valley of California, covers approximately 10 sq miles. As is typical for the region, the oil in the field is heavy, with a gravity of 14 °API, a viscosity of approximately 4,000 cp at 100°F, and a gas/oil ratio of less than 50 scf/STB. The field has approximately 1,150 producers and 150 steam injectors injecting approximately 100,000 B/D of steam.

The technology identified the wells with the highest incremental production potential for steam cycling at any point in time, given current oil price and steam cost. Engineers increased fieldwide production by 1,000 B/D by optimizing 10% of the field's production (the cyclic-only portion) over 12 months. The results in the figure are actual field results over the pilot period, not model output.
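The candidate-selection step described above amounts to ranking wells by the expected economics of a steam cycle. In the sketch below, the incremental-oil estimates would come from the predictive model; the well records, oil price, and steam cost shown are illustrative assumptions, not figures from the deployment.

```python
# Hedged sketch of economic ranking of cyclic-steam candidates. The field
# names "incr_oil_bbl" and "steam_bbl", and the price/cost defaults, are
# hypothetical.

def rank_candidates(wells, oil_price=45.0, steam_cost=2.5):
    """Rank cyclic-steam candidates by the expected net value of a cycle:
    incremental oil revenue minus the cost of the injected steam."""
    scored = [(w["incr_oil_bbl"] * oil_price - w["steam_bbl"] * steam_cost,
               w["name"]) for w in wells]
    return sorted(scored, reverse=True)
```

Because the ranking depends on current oil price and steam cost, the same wells can rise or fall in priority as market conditions change, which is why re-ranking "at any point in time" matters.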
Prescriptive-Analytics Modeling Technology Captures Reservoir Physics
Chris Carpenter, JPT Technology Editor
02 December 2016