Eliminate Decision Bias in Facilities Planning


The complete paper holds that traditional facilities-planning methodologies, heavily based on design-basis documents and biased toward the most-conservative conditions, fail to recognize the full range of operational conditions encountered over the oilfield life cycle, leading to significant residual risk and the waste of resources in the operations stage. An integrated stochastic approach is proposed that accounts for both subsurface and surface uncertainties, and their interrelations, throughout field life.

Introduction

The authors discuss an unbiased, data-driven stochastic work flow addressing the effect of subsurface uncertainties on surface-facilities design and operational decisions. Unlike classical design approaches, in which the most-conservative values are typically used as design input variables and assembled into design-basis documents, the stochastic work flow accounts for design-input-variable distribution and combination throughout the entire system life cycle. An example case is provided in which a flow-assurance risk is managed and chemical consumption optimized in a wet-gas field development.

Theory and Definitions

Oil and gas engineering projects are typically high-variety, low-volume processes with intermittent productivity and a high degree of diversification and complexity. Conversely, oilfield-facilities operations are expected to be continuous, characterized by high volumes and low variety. This expectation is reflected in the approach taken to facilities design, in which single-point, “conservative” design conditions are proposed and assembled into facilities design-basis documents. This approach frequently fails to recognize the risks and uncertainties associated with oilfield developments.

In the proposed work flow, deterministic models are established to account for the dependencies between design input variables [static variables, e.g., bottomhole pressure (BHP) and bottomhole temperature (BHT)] and the desired objective [static results, e.g., chemical-injection rate]. In the provided example, the analyzed variables change because of subsurface and surface events with different levels of uncertainty (e.g., condensate banking, lean-gas injection, water breakthrough). Stochastic algorithms are used to create probability-distribution functions (PDFs) for all analyzed design input variables (stochastic variables). The deterministic model is then run repeatedly, sampling from the previously defined probability distributions. The stochastic results are assembled into insightful charts and used to identify the variables and correlations that most affect the objective function.
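The work flow described above (define PDFs for the stochastic variables, sample them, and propagate each sampled scenario through the deterministic model) can be sketched as follows. The paper's modeling was performed in R; this minimal illustration uses Python with NumPy, and the distributions, parameter ranges, and stand-in model are hypothetical rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # number of stochastic scenarios

# Hypothetical PDFs for the stochastic input variables
# (distribution shapes and ranges are illustrative only).
bht = rng.normal(loc=180.0, scale=15.0, size=N)    # bottomhole temperature, deg F
bhp = rng.triangular(2000.0, 3500.0, 5000.0, N)    # bottomhole pressure, psi
gg = rng.uniform(0.60, 0.75, N)                    # gas gravity

def deterministic_model(bhp_i, bht_i, gg_i):
    """Stand-in for the deterministic design model: maps one
    sampled input set (scenario) to the objective, e.g. QMEG."""
    # Illustrative monotone relation only; the real model solves
    # hydrate-formation and mass-balance equations.
    return max(0.0, 100.0 + 0.02 * bhp_i - 1.2 * bht_i + 50.0 * gg_i)

# Propagate every sampled scenario through the deterministic model.
qmeg = np.array([deterministic_model(p, t, g)
                 for p, t, g in zip(bhp, bht, gg)])

print(f"mean QMEG = {qmeg.mean():.1f} gal/D, "
      f"fraction with no injection = {(qmeg == 0).mean():.0%}")
```

Because the objective is clipped at zero, even this toy model reproduces the qualitative result discussed later: a large fraction of scenarios require no injection at all.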

Example Case: Description and Application of Equipment and Processes

All modeling work, both deterministic and probabilistic, is conducted in R, a system for statistical computation and graphics. A baseline deterministic model is created to calculate the monoethylene glycol (MEG) injection rate (QMEG) to mitigate the hydrate-formation risk in a wet-gas field development. The model uses design-basis input data (static variables) deemed “conservative” and capable of covering all operational scenarios expected during field life and assumes chemical injection as a mass/continuous process. The input and output data are presented in Appendix A1 of the complete paper, and the main baseline results are summarized as follows (static results):

  • QMEG of 118.4 gal/D
  • Life-cycle operating expenditure (OPEX) of $3 million
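For context, a common deterministic screening calculation for hydrate-inhibitor dosing is the Hammerschmidt correlation. The Python sketch below converts a required hydrate-suppression temperature into an MEG weight percent and an injection rate; the paper's actual model is not disclosed in this summary, and the constant and density ratio used here are illustrative assumptions (published Hammerschmidt constants vary by inhibitor):

```python
# Hammerschmidt correlation: dT = K * W / (M * (100 - W)), where dT is
# the required hydrate-suppression temperature (deg F), W the inhibitor
# weight percent in the aqueous phase, and M the inhibitor molar mass.
K_MEG = 2335.0   # Hammerschmidt constant; illustrative value only
M_MEG = 62.07    # molar mass of MEG, g/mol

def meg_weight_percent(delta_t_f):
    """Solve Hammerschmidt for W given the required suppression dT:
    W = 100 * M * dT / (K + M * dT)."""
    return 100.0 * M_MEG * delta_t_f / (K_MEG + M_MEG * delta_t_f)

def qmeg_gal_per_day(water_rate_gal_d, delta_t_f):
    """MEG rate needed so the aqueous phase holds W wt% MEG.
    Assumes pure MEG injection and a simple mass balance."""
    x = meg_weight_percent(delta_t_f) / 100.0   # MEG mass fraction in water phase
    # MEG mass = x / (1 - x) times the water mass; convert the mass
    # ratio to a volume ratio with an assumed MEG/water density ratio.
    return water_rate_gal_d * x / (1.0 - x) / 1.11

print(f"{qmeg_gal_per_day(100.0, 20.0):.1f} gal/D MEG "
      f"for 100 gal/D water and 20 deg F suppression")
```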

However, in real-life situations, the values for the input variables change over time because of subsurface and surface events with different levels of uncertainty. The values for such variables are usually available as contingent information as outcomes of existing field studies or field data. To account for input-data variability, stochastic algorithms are used to create PDFs for all analyzed design input data (stochastic variables). Similar distributions are constructed for all variables deemed relevant for the desired outcome.

A stochastic algorithm randomly samples the distributions of variables and assembles them into input data sets that are transferred as scenarios to the deterministic model. On run completion, the deterministic-model output is assembled into stochastic-results data sets.
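Assembling the sampled input sets into a scenario table and appending the model output, as described, might look like the following Python/pandas sketch (the distributions and the stand-in model are illustrative placeholders, not the paper's):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
N = 1_000

# Assemble sampled input sets into a scenario table.
scenarios = pd.DataFrame({
    "BHP": rng.triangular(2000.0, 3500.0, 5000.0, N),   # psi
    "BHT": rng.normal(180.0, 15.0, N),                  # deg F
    "GG":  rng.uniform(0.60, 0.75, N),                  # gas gravity
})

# Run each scenario through a stand-in deterministic model and append
# the output, forming the stochastic-results data set.
scenarios["QMEG"] = (100.0 + 0.02 * scenarios["BHP"]
                     - 1.2 * scenarios["BHT"]
                     + 50.0 * scenarios["GG"]).clip(lower=0.0)

print(scenarios.head())
```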

Presentation of Data and Results

The stochastic work flow, accounting for input-data variability and uncertainty over the field life cycle, provides the discerning facilities engineer with superior insights into a system’s behavior throughout its lifetime.

Fig. 1 presents a summary overview of the relations between six parameters [BHP, BHT, wellhead pressure (WHP), and wellhead temperature (WHT) as input data, and total water rate (QWT) and QMEG as results]. The number of parameters is limited here for legibility, but any number or combination of parameters can be analyzed if necessary.

Fig. 1—Results summary matrix (selective).


The diagonal of the summary matrix consists of histograms of the parameters under investigation. Below the diagonal are the scatter plots for each combination of parameters; for example, at the intersection of Row 5 and Column 2 is the BHT/QWT scatter plot. The scatter plot is an intuitive way to visualize the correlation between two parameters. In the BHT/QWT scatter plot, a strong positive correlation between BHT and QWT is evident. This is an important insight because it provides an indication of the amount of water condensation in the system. This information, correlated with surface-water-rate measurements, can signal water breakthrough. Another strong correlation is noted between WHT and QMEG.

Above the diagonal are the correlation factors for each combination of parameters, expressed in absolute terms (i.e., both positive and negative correlations are shown as positive values). The stronger the correlation, the larger the numeric value and the larger the font in which it is displayed. Correlation factors vary between –1 and 1: a factor of 0 indicates no correlation, while –1 or 1 indicates a perfect negative or positive correlation, respectively. For example, the correlation factor at the intersection of Row 2 and Column 5 is 0.77, confirming the strong correlation between BHT and QWT. Similarly, the correlation factor between WHT and QMEG is high (0.72), confirming the finding from the scatter plots.
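The absolute Pearson correlation factors shown above the diagonal can be computed directly from a stochastic-results data set. A brief Python/pandas sketch, using synthetic data with a built-in positive BHT/QWT relation, is:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 5_000

# Synthetic stochastic-results data set (values are illustrative):
# QWT is constructed to correlate positively with BHT, as observed
# in the paper's results.
bht = rng.normal(180.0, 15.0, n)
qwt = 0.5 * bht + rng.normal(0.0, 5.0, n)   # total-water-rate proxy
wht = rng.normal(60.0, 8.0, n)

df = pd.DataFrame({"BHT": bht, "QWT": qwt, "WHT": wht})

# Pearson correlation factors in absolute terms, as displayed in the
# upper triangle of the results summary matrix.
corr = df.corr().abs()
print(corr.round(2))
```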

Another interesting insight is provided by the QMEG histogram. Unlike the deterministic model, which calculated a QMEG of 118.4 gal/D, most cases require no MEG injection at all. Further analysis of the results indicates that injection is not required in approximately 76% of cases, the mean injection rate is 33.8 gal/D, and the median is 0 gal/D. There are also cases in which QMEG must be higher than the deterministic value, up to 820 gal/D; a QMEG in excess of 118.4 gal/D is expected in 9% of scenarios.
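The summary statistics quoted above follow directly from the stochastic QMEG sample. The Python sketch below computes the same quantities from a synthetic sample shaped to resemble the reported distribution (a spike at zero plus a long right tail); the numbers it produces are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic QMEG sample: ~76% of cases need no MEG, the rest follow
# a right-skewed (gamma) tail. Parameters are illustrative only.
no_injection = rng.random(n) < 0.76
qmeg = np.where(no_injection, 0.0, rng.gamma(1.5, 95.0, n))

print(f"no injection required : {(qmeg == 0).mean():.0%}")
print(f"mean QMEG             : {qmeg.mean():.1f} gal/D")
print(f"median QMEG           : {np.median(qmeg):.1f} gal/D")
print(f"P(QMEG > 118.4 gal/D) : {(qmeg > 118.4).mean():.0%}")
```

With a majority of zero-valued cases, the median is pinned at 0 gal/D while the mean is pulled upward by the tail, exactly the pattern reported for the field case.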

These are important findings that invalidate the assumption that “conservative” input data in the design basis can cover the entire operating range. Accounting for uncertainties not only reveals that a conservative approach leads to waste in 76% of cases, but also exposes the system to the risk of underinjection in 9% of cases.

A scatter plot representing QMEG/WHT is generated and split into facets, corresponding to different ranges for WHT and gas gravity (GG). It is demonstrated that there is a strong correlation (0.72) between QMEG and WHT. Also, it is noted that no injection is required for WHT values greater than 55°F. This is another important finding, because it allows for designing a smart logic for the MEG-injection control loop.

An additional insight provided by the split in the scatter plot is that even for WHT less than 55°F, there are cases in which no MEG injection is required. Thus the MEG-injection control-loop logic can be improved further for the combination of WHT and GG.

For WHTs between 50 and 55°F and GG greater than 0.65, the required QMEG ranges between 0 and 200 gal/D. For WHTs less than 50°F, the QMEG range doubles, to 0 to 400 gal/D. There are also outlier cases (1%) in which QMEG ranges between 400 and 820 gal/D. This finding suggests that a split-range controller arrangement is suitable: the low control range (0 to 200 gal/D) is triggered when WHT is between 50 and 55°F and GG is greater than 0.65, and the high control range (0 to 400+ gal/D) is triggered when WHT is less than 50°F.
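The split-range logic described above can be captured in a few lines. The following Python function encodes the reported thresholds; the behavior for WHTs between 50 and 55°F with GG at or below 0.65 is not specified in this summary, so the fallback branch here is an assumption:

```python
def meg_control_range(wht_f, gg):
    """Split-range MEG-injection logic sketched from the reported
    findings (function name and fallback branch are illustrative)."""
    if wht_f > 55.0:
        return "off"     # no injection required above 55 deg F
    if 50.0 <= wht_f <= 55.0 and gg > 0.65:
        return "low"     # 0-200 gal/D control range
    if wht_f < 50.0:
        return "high"    # 0-400+ gal/D control range
    # Remaining cases (50-55 deg F, GG <= 0.65) are not covered by
    # the summary; defaulting conservatively to the low range.
    return "low"

print(meg_control_range(60.0, 0.70))   # -> off
print(meg_control_range(52.0, 0.70))   # -> low
print(meg_control_range(45.0, 0.60))   # -> high
```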

The scatter plot representing the relationship between the WHP and QMEG is again split into facets, corresponding to WHT ranges as previously defined and grouped by injection/no-injection status. It is noted that for WHPs less than 280 psi, the QMEG is 0 gal/D.

The mean OPEX for the stochastic approach is approximately $1 million, which represents a significant reduction (66%) from the baseline OPEX estimate ($3 million).

This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 187283, “Eliminate Decision Bias in Facilities Planning,” by Z. Cristea, Stochastic Asset Management, and T. Cristea, Consultant, prepared for the 2017 SPE Annual Technical Conference and Exhibition, San Antonio, Texas, USA, 8–11 October. The paper has not been peer reviewed.

01 December 2017

Volume: 69 | Issue: 12
