
Analytics Solution Helps Identify Rod-Pump Failure at the Wellhead


Industrial-Internet-of-Things (IIoT) architecture provides an opportunity to improve asset uptime and maintenance, reduce safety risks, and optimize operational costs. However, to turn data into meaningful insights, the industry must take full advantage of machine-learning (ML) models. This paper presents an analytics solution for identifying rod-pump failure, capable of automated dynacard recognition at the wellhead, that uses an ensemble of ML models. The proposed solution does not require Internet connectivity to generate alarms and meets confidentiality requirements.

Rod-Pump-Control (RPC) Architecture

Recent progress in microelectronics makes it possible to embed ML models at remote sites with scarce connectivity, an approach known as edge computing. Because the insights are generated at the oil field itself, onsite maintenance teams can apply immediate corrective responses and work efficiently and safely.

RPC architecture enables automated control of sucker-rod pumps. A variable-speed drive (VSD) controls the pump by adjusting the speed of the motor to downhole conditions. The VSD is controlled by a remote terminal unit (RTU) that provides the speed reference for improved rod-pump control. The RTU also collects sensor measurements. In the work covered in the paper, two sensors, namely the proximity sensor and the load cell, are of particular interest. The proximity sensor is mounted near the crank arm, while the load cell is a transducer mounted between the polished-rod clamp and the bridle. The RTU sends the measurements to a touchscreen human/machine interface for a local user. It also communicates the measurements to a supervisory control and data-acquisition (SCADA) system. The communication between the RTU and the SCADA host is established through external wireless communication devices such as radios or cellular modems. Furthermore, in the example discussed in the paper, a local edge-computing gateway is added to the architecture. Edge gateways have significant computing power and can run ML-based applications. The gateway retrieves the data from the RTU, performs onsite analytics, generates alarms, and communicates with the cloud sporadically.
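
As an illustration of this data path, the sketch below simulates a gateway-side acquisition loop that collects load samples between crank-proximity pulses to assemble one stroke of data. The protocol, register map, and sampling rate are placeholders for illustration, not details taken from the paper.

```python
# Hypothetical gateway-side acquisition loop. The RTU read is simulated;
# a real deployment would poll the RTU over its fieldbus protocol.
import time
import random

def read_rtu_sample():
    """Stand-in for a real RTU read: returns a simulated crank-proximity
    pulse and polished-rod load (klbf)."""
    return {"crank_pulse": random.random() < 0.02,
            "load_klbf": 8.0 + 4.0 * random.random()}

def acquire_stroke(max_samples=500, period_s=0.02):
    """Collect load samples until the next crank-proximity pulse,
    i.e., roughly one full pump stroke."""
    samples = []
    for _ in range(max_samples):
        s = read_rtu_sample()
        samples.append(s["load_klbf"])
        if s["crank_pulse"] and len(samples) > 10:  # stroke completed
            break
        time.sleep(period_s)
    return samples

if __name__ == "__main__":
    stroke = acquire_stroke()
    print(f"captured {len(stroke)} load samples for one stroke")
```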

Using the measurements from the two sensors, the RPC produces a surface card (or dynacard) and, from it, calculates a downhole card, from which downhole pump conditions may be inferred. The downhole card is, in essence, a translated surface card with oscillatory harmonics removed. An experienced operator is able to infer from the downhole card whether a pump is operating normally or has failed in some way.
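
To make the card concrete as data, the sketch below constructs a synthetic surface card as paired arrays of polished-rod position and load over one stroke. The shapes are illustrative placeholders, not field data and not the paper's surface-to-downhole transformation.

```python
# Synthetic surface card: position versus load over one stroke. Plotting
# load against position traces the closed loop an operator reads.
import numpy as np

def synthetic_surface_card(n=200):
    phase = np.linspace(0, 2 * np.pi, n, endpoint=False)
    position = 0.5 * (1 - np.cos(phase))        # 0..1 stroke fraction
    load = 10 + 3 * np.sign(np.sin(phase))      # idealized upstroke/downstroke loads
    load += 0.3 * np.sin(6 * phase)             # crude stand-in for rod harmonics
    return position, load

position, load = synthetic_surface_card()
```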

Solution Description

ML solutions essentially consist of two main phases: training and inference. The training phase is when the model is created. The model is a set of parameters adjusted to provide accurate responses on the basis of input data. In the supervised or self-supervised learning applied in this work, the model is given both the input data and the corresponding output, and its parameters are adjusted to capture the relationship between the two. The parameter adjustment is performed automatically by a training algorithm such as backpropagation. Training is an iterative process that usually needs a significant number of input/output examples to obtain a good-quality model. The inference phase is when the already-trained model is applied to new data to provide useful responses, such as abnormal-state identification.
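
The toy example below walks through the two phases with a small fully connected network trained on synthetic labeled data and then used for inference. The input size, the number of pump states, and the framework (Keras) are assumptions made only for illustration.

```python
# Toy illustration of training followed by inference; all data are synthetic.
import numpy as np
import tensorflow as tf

x_train = np.random.rand(256, 16).astype("float32")   # 256 labeled examples
y_train = np.random.randint(0, 4, size=256)           # 4 hypothetical pump states

model = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Training phase: backpropagation iteratively adjusts the parameters.
model.fit(x_train, y_train, epochs=5, verbose=0)

# Inference phase: the trained model is applied to new, unseen data.
x_new = np.random.rand(1, 16).astype("float32")
print(model.predict(x_new, verbose=0))                # class probabilities
```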

An important part of building ML models is data preprocessing. Dynacard images may have different sizes, line widths, or interpolation methods, and those parameters should be tuned to balance processing and model complexity against overall performance and accuracy. Defining the data-preprocessing procedure, selecting the models, and training them form an iterative process that leads to the final model to be validated. This process is primarily carried out on a cloud-based computing infrastructure. The study detailed in this paper identified multiple smart techniques that improve the odds of a single ML model generalizing properly to new installations.
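
A minimal sketch of such preprocessing follows: the position/load loop is rasterized into a fixed-size grayscale image so every card reaches the models with the same shape. The image size and line width are exactly the kind of parameters described above as needing tuning; the values used here are arbitrary.

```python
# Rasterize a (position, load) loop into a fixed-size grayscale image.
import io
import numpy as np
import matplotlib
matplotlib.use("Agg")              # headless rendering, suitable for a gateway
import matplotlib.pyplot as plt
from PIL import Image

def card_to_image(position, load, size=(64, 64), linewidth=2.0):
    fig, ax = plt.subplots(figsize=(2, 2), dpi=size[0] // 2)
    ax.plot(position, load, color="black", linewidth=linewidth)
    ax.axis("off")
    buf = io.BytesIO()
    fig.savefig(buf, format="png", bbox_inches="tight", pad_inches=0)
    plt.close(fig)
    buf.seek(0)
    img = Image.open(buf).convert("L").resize(size)
    return np.asarray(img, dtype="float32") / 255.0   # normalized to 0..1
```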

Data augmentation is one of the applied techniques. Existing labeled dynacards are used to create new dynacards and increase the size of the training data set. Another technique is called ensembling. This allows for the use of multiple models at the same time in a collaborative mode. Furthermore, application of transfer learning provides the capability to improve already-trained ML models using a locally labeled data set, which is generated onsite using customer feedback when an alarm corresponding to an abnormal state is triggered.
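
The sketch below shows one possible form of dynacard augmentation: small, label-preserving perturbations applied to existing cards before rasterization. The perturbation types and magnitudes are assumptions, not the paper's recipe.

```python
# Create new labeled cards by slightly perturbing existing ones; each
# augmented card keeps the label of the card it was derived from.
import numpy as np

def augment_card(position, load, rng=None):
    rng = rng or np.random.default_rng()
    scale = rng.uniform(0.95, 1.05)                  # slight load scaling
    shift = rng.uniform(-0.02, 0.02)                 # slight position offset
    noise = rng.normal(0.0, 0.01, size=load.shape)   # mild measurement noise
    return np.clip(position + shift, 0.0, 1.0), load * scale + noise
```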

The combination of ensembling, transfer learning, and data augmentation helps the ML models generalize to previously unseen data sets.

Ensemble Models Overview

Each of the models selected for the ensemble has specific characteristics that suit the target application. The ensemble model is then capable of combining the responses of all the models into the best final response.

Convolutional Neural Network (CNN). CNNs are the leading models for image recognition and probably the most influential in the field of computer vision. They are, therefore, also the authors' first choice for dynacard recognition. By their construction, CNNs somewhat mimic human experts in identifying rod-pump failures.
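
A minimal CNN classifier along these lines might look like the Keras sketch below; the input size (64×64 grayscale cards) and the number of failure classes are assumptions.

```python
# Small CNN that maps a rasterized dynacard to failure-class probabilities.
import tensorflow as tf

def build_cnn(num_classes=5, input_shape=(64, 64, 1)):
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

cnn = build_cnn()
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```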

Siamese Neural Network (SNN). Using a unique structure that ranks similarity between input images, SNNs are presented with two images at their input. The output from the model is equal to 1 when the two images belong to the same category and 0 when the two images belong to different categories. During the training phase, combinations of dynacards are used instead of using one card at a time, and a label (1 or 0) is provided. During the inference phase, the dynacard to be classified is evaluated against other dynacards with known labels and the failure can be identified on the basis of the output of the model.
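
One common way to build such a network is sketched below: a shared encoder embeds both card images, and a sigmoid head scores whether they belong to the same category (1) or not (0). The layer sizes and the comparison by concatenated embeddings are assumptions, not the paper's exact design.

```python
# Siamese sketch: the same encoder processes both cards, and a dense head
# outputs the probability that the two cards share a category.
import tensorflow as tf

def build_encoder(input_shape=(64, 64, 1)):
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(32, activation="relu"),
    ])

def build_siamese(input_shape=(64, 64, 1)):
    encoder = build_encoder(input_shape)             # shared weights
    card_a = tf.keras.Input(shape=input_shape)
    card_b = tf.keras.Input(shape=input_shape)
    merged = tf.keras.layers.Concatenate()([encoder(card_a), encoder(card_b)])
    same = tf.keras.layers.Dense(1, activation="sigmoid")(merged)  # 1 = same class
    return tf.keras.Model(inputs=[card_a, card_b], outputs=same)

siamese = build_siamese()
siamese.compile(optimizer="adam", loss="binary_crossentropy")
```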

Autoencoder Neural Network. The autoencoder is a self-supervised trained model, which means it does not use labels provided by human experts during training. Instead, it uses the same image at its input and output to construct an internal representation (encoding) of the most-relevant features. In the authors' application, the autoencoder becomes an efficient feature extractor that pulls the most-relevant information from dynacards and passes those features to a simpler, fully connected network trained on the remaining labeled data.
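
A minimal version of this pattern is sketched below: a small convolutional autoencoder is trained to reproduce its input, and its encoder half is then reused as the feature extractor in front of a fully connected classifier trained on the labeled cards. The layer sizes and class count are assumptions.

```python
# Self-supervised autoencoder whose encoder is reused for classification.
import tensorflow as tf

input_shape = (64, 64, 1)
inp = tf.keras.Input(shape=input_shape)
x = tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
x = tf.keras.layers.MaxPooling2D()(x)
code = tf.keras.layers.Conv2D(8, 3, activation="relu", padding="same")(x)   # encoding
x = tf.keras.layers.UpSampling2D()(code)
out = tf.keras.layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)

autoencoder = tf.keras.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")   # trained with input == target

encoder = tf.keras.Model(inp, code)                 # reused as a feature extractor
classifier = tf.keras.Sequential([
    encoder,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(5, activation="softmax"), # trained on the labeled cards
])
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```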

Histogram of Oriented Gradients (HOG). The other models belonging to the ensemble use a data-transformation technique known as HOG. HOG is a feature descriptor used in computer vision to characterize image content. It is well suited to extracting information from dynacard images because the magnitude of pixel-intensity gradients is large around the edges.
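
The sketch below shows how such a branch might be assembled with scikit-image's HOG descriptor feeding a classical classifier; the descriptor parameters and the choice of classifier are assumptions.

```python
# HOG features from a rasterized dynacard feeding a classical classifier.
from skimage.feature import hog
from sklearn.linear_model import LogisticRegression

def hog_features(card_image):
    # card_image: 2-D grayscale array, e.g., the 64x64 rasterized dynacard
    return hog(card_image, orientations=8, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

clf = LogisticRegression(max_iter=1000)
# Hypothetical training on preprocessed, labeled card images:
# X = [hog_features(img) for img in card_images]
# clf.fit(X, labels)
```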

Ensemble Model. The previously mentioned models do not provide the same responses when fed with the same dynacard; they excel at identifying different types of failures. To provide the best failure identification, all the models are used and the best of their individual identifications is selected.

The rod-pump-analytics solution discussed in the paper uses an ensembling technique known as stacking, or stacked generalization. This approach consists of training a metamodel on top of the already-trained models. The metamodel takes as inputs the outputs of the first stage of models and combines them to provide the final failure identification. The whole ensemble architecture is depicted in Fig. 1.

Fig. 1—The analytics architecture. Each model provides its own probability distribution related to failure identification. Those distributions are input to the ensemble metamodel that combines them and provides the final failure identification.
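
A minimal stacking sketch follows: each base model's class-probability output is concatenated into a single feature vector, and a simple metamodel is trained on those vectors using held-out predictions. The choice of logistic regression as the metamodel is an assumption.

```python
# Stacking: the metamodel learns from the base models' probability outputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

def stack_probabilities(prob_lists):
    """prob_lists: list of (n_samples, n_classes) arrays, one per base model."""
    return np.concatenate(prob_lists, axis=1)

meta_model = LogisticRegression(max_iter=1000)
# Hypothetical training on held-out base-model predictions:
# meta_X = stack_probabilities([cnn_probs, snn_probs, ae_probs, hog_probs])
# meta_model.fit(meta_X, true_labels)
#
# Inference: concatenate the base models' outputs for a new card the same way,
# then meta_model.predict(...) yields the final failure identification.
```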

Deploying an ML Model

Because prediction of pump problems is performed locally (i.e., without any required connectivity to the cloud), the trained models are deployed at commissioning time and are directly connected to the RTU. Model inference requires fewer hardware resources than the training phase; whereas training can take hours, a single prediction takes seconds.

In the context of rod pumps, a dynagraph card is generated every few seconds (dependent on pump-stroke speed); hence, card shape must be inferred at every stroke to provide real-time pump-performance indication. Extensive testing is required to ensure that dynagraph-card inference will work within process time constraints.
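
The sketch below illustrates one way such a check might be expressed on the gateway: time a single card prediction and confirm it fits comfortably within one stroke period. The stroke period and the model handle are placeholders.

```python
# Confirm that one dynacard inference completes within a single pump stroke.
import time
import numpy as np

STROKE_PERIOD_S = 6.0   # e.g., roughly 10 strokes per minute (assumed value)

def timed_inference(model, card_image):
    start = time.perf_counter()
    probs = model.predict(card_image[np.newaxis, ..., np.newaxis], verbose=0)
    elapsed = time.perf_counter() - start
    assert elapsed < STROKE_PERIOD_S, "inference slower than one pump stroke"
    return probs, elapsed
```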

The architecture described in the paper uses Microsoft software. In addition to management containers, two application containers are deployed, and possibly remotely managed, to support communication with the RTU and to execute the ML prediction on the basis of the pump-stroke data. The containers used in the case of dynagraph-card prediction are described in the complete paper.

Conclusion

The development described in this paper allows for IIoT analytics that improve artificial-lift systems through automated dynagraph recognition. It enables autonomous diagnostics and operations through a unique combination of smart ML techniques and edge-computing architecture. A high level of asset management is possible even with a limited workforce.

The smart techniques detailed in this paper are not limited to one type of asset. The principles can be extended to other artificial-lift systems, for example, electrical-submersible or progressive-cavity pumps.

For a limited time, the complete paper SPE 192513 is free to SPE members.

This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 192513, “Industrial Internet of Things Edge Analytics: Deploying Machine Learning at the Wellhead To Identify Rod-Pump Failure,” by Bartosz Boguslawski, Matthieu Boujonnier, Loryne Bissuel-Beauvais, Fahd Saghir, SPE, and Rajesh D. Sharma, SPE, Schneider Electric, prepared for the 2018 SPE Middle East Artificial Lift Conference and Exhibition, Manama, Bahrain, 28–29 November. The paper has not been peer reviewed.


01 May 2019

Volume: 71 | Issue: 5
