ORIGINAL RESEARCH article

Front. Earth Sci., 24 August 2022
Sec. Environmental Informatics and Remote Sensing
This article is part of the Research Topic: Applications of Artificial Intelligence in the Oil and Gas Industry

Simultaneous prediction of multiple physical parameters using gated recurrent neural network: Porosity, water saturation, shale content

  • School of Geosciences, China University of Petroleum (East China), Qingdao, Shandong, China

Reservoir parameter prediction is of significant value to oil and gas exploration and development, and artificial intelligence models for this task are developing rapidly. Unfortunately, current research has focused on multi-input, single-output prediction models; that is, these models use a large amount of logging or seismic data to predict a single petrophysical property of the reservoir. Another prominent problem is that most machine learning studies have used logging data (e.g., gamma ray and resistivity) to predict reservoir parameters. Although these studies have achieved promising accuracy, a major shortcoming is that such logging data cannot be obtained by seismic inversion. The value of our work is to achieve a complete description of the reservoir using the elastic parameters available from seismic inversion. We developed a deep learning method based on the gated recurrent neural network (GRNN) that is suitable for simultaneously predicting porosity, water saturation, and shale content in the reservoir. GRNN is based on the Gated Recurrent Unit (GRU), which can automatically update and reset its hidden state. The input parameters of the model are compressional wave velocity, shear wave velocity, and density. The model is trained to fit the nonlinear relationships between the input parameters and the multiple physical parameters. We employed two wells: one for training and the other for testing, with 20% of the data in the training well used as the validation set. In preprocessing, we applied z-score standardization to the input data. During the training phase, the model hyperparameters were optimized based on box plots of the mean absolute error (MAE) on the validation set. Experiments on the test data show that the model has superior robustness and accuracy compared with the conventional recurrent neural network (RNN). In the GRNN prediction results on the test set, the MAE is 0.4889 and the mean squared error (MSE) is 0.5283. Because of the difference in input parameters, our predictions are weaker than those of methods that use logging data. However, the proposed method has higher practical value in exploration work.

Introduction

Prediction and analysis of porosity, water saturation, and shale content play an important role in reservoir delineation, geological modelling, and well placement during oil exploration and development (Eberhart-Phillips et al., 1989; Pang et al., 2019). These reservoir parameters are generally derived from logging or core data (Segesman, 1980). However, the physical parameters measured in logs cannot characterize the distribution of reservoir parameters in unsampled intervals or across the entire work area, and logging is subject to cost constraints (Gyllensten et al., 2004). Therefore, empirical formulas or models are needed for calculation and prediction (Goldberg and Gurevich, 2008). Because geological conditions are complex, logging data often exhibit strong nonlinear relationships among themselves, and these relationships may never be obtained exactly at the theoretical level (Ballin et al., 1992). Traditional forecasting methods have their own limitations for accurate prediction of reservoir parameters (Hamada, 2004; Chatterjee et al., 2013). Compressional and shear wave velocities are thought to correlate with porosity and shale content, but the relationship is difficult to quantify for prediction (Han et al., 1986). Continuous velocity logging can only qualitatively identify water-saturated rock masses (Hicks and Berry, 1956). Laboratories can measure physical properties with high quality, but at additional time and cost (Gomez et al., 2010). Therefore, numerical models of the petrophysical properties of the formation need to be established from measurable data for reservoir characterization (Tian et al., 2012). Predicting the distribution of reservoir quality (i.e., porosity, water saturation, shale content, etc.) has important implications for estimating the development value of reservoirs (Timur, 1968).

Modern reservoir characterization has evolved from experts analyzing and interpreting log data to artificial intelligence automatically identifying and predicting (Soleimani et al., 2020). Machine learning methods such as artificial neural networks, K-nearest neighbors, support vector machines, and functional networks have been widely used in petroleum exploration because of their efficiency and accuracy. They help solve many technical problems, such as reservoir identification (Aminian and Ameri, 2005; Wang et al., 2017), reservoir fluid prediction (Oloso et al., 2017; Mahdiani and Norouzi, 2018), permeability prediction (Tahmasebi and Hezarkhani, 2012; Al-Mudhafar, 2019; Al Khalifah et al., 2020), and prediction of rock strength and geomechanical properties (Tariq et al., 2016; Tariq et al., 2017). Computer-based machine learning methods can effectively handle nonlinear problems and predict reliable reservoir property values (Kaydani et al., 2011).

Deep learning is an important branch of machine learning and has become a research hotspot in recent years (Lecun et al., 2015; Guo et al., 2016; Zhang et al., 2020). Research has shown that deep learning techniques excel at handling complex structures in high-dimensional spaces (Michelucci, 2018) and have achieved excellent results in image recognition (Shuai et al., 2016), speech recognition (Saon et al., 2021), and language translation (Jean et al., 2014). In recent years, many scholars have applied deep learning to seismic exploration (Lin et al., 2018; He et al., 2020), for example, reservoir fracture parameter prediction (Xue et al., 2014; Wang et al., 2021; Yasin et al., 2022), lithology identification (Al-Mudhafar, 2020; Alzubaidi et al., 2021; Saporetti et al., 2021), and seismic inversion (Li et al., 2019; Cao et al., 2021).

Classical back-propagation (BP) neural networks have been studied for prediction on well logging data (Al-Bulushi et al., 2012; Verma et al., 2012). Because the information extracted by neurons is not passed between units within the same hidden layer, fully connected neural networks are not well suited to effective prediction on sequence data. For data with sequential structure, the Recurrent Neural Network (RNN) (Mikolov et al., 2010; Zhang et al., 2017) can learn the relationships and patterns within the data and make better predictions. RNNs consist of high-dimensional hidden states with nonlinear dynamics (Sutskever et al., 2011).

Among the different variants of RNNs, one that has attracted tremendous attention is the network built from the Gated Recurrent Unit (GRU): the Gated Recurrent Neural Network (GRNN) (Chung et al., 2014). The learning curves of GRNN on many datasets demonstrate significant advantages over the standard RNN, and GRNN is able to avoid the vanishing and exploding gradient problems of the standard RNN when dealing with long sequences (Bengio et al., 1994). GRNN has been applied to log parameter prediction and lithology identification (Zeng et al., 2020) and to batch production prediction in conglomerate reservoirs (Li et al., 2020). Some scholars have combined GRNN with convolutional neural networks to predict porosity (Wang and Cao, 2021).

We have noticed two neglected problems in research on predicting reservoir physical parameters with machine learning. One is that the proposed schemes can predict only a single reservoir parameter (Rui et al., 2019; Chen et al., 2020). The other is that the investigations are limited to using log-specific parameters as model inputs (Ahmadi and Chen, 2019; Okon et al., 2021). This means that reservoir physical parameters cannot be predicted for the whole work area, which limits the practical value.

In this paper, the GRU-based GRNN model is used to predict the physical parameters of the logging data simultaneously. The structure and principle of the GRU are analyzed, and a prediction model is established using the nonlinear relationships between the elastic parameters [compressional wave velocity (Vp), shear wave velocity (Vs), and density (Den)] and the physical parameters [porosity (Por), water saturation (SW), and shale content (Vsh)]. The error is compared with that of the traditional RNN method, and the influence of the correlation between elastic and physical parameters on the model's prediction results is analyzed.

Methods

Figure 1 depicts the study's workflow. We first normalize the model's input data. Well A is used as training data, while Well B is used to test the model's generalizability. We then build a GRNN model suited to regression tasks and tune its hyperparameters. After being tested on Well B, the trained model is used to predict the physical properties of the reservoir.


FIGURE 1. Workflow. The log data are standardized and separated into training, validation, and test sets; the GRNN model is constructed and its hyperparameters are optimized; the model's generalization is then tested and the reservoir physical properties are predicted.

RNN

Before introducing the GRU, we need to introduce the classic RNN. An RNN is a neural network with a hidden state and an output sequence, as shown in Figure 2. The input to the network is a variable-length sequence $x = (x_1, \ldots, x_T)$. The hidden state update of the RNN at each time step is given by:

$h_t = f(h_{t-1}, x_t)$  (1)

where $f$ is a nonlinear activation function. It can be a simple function such as a sigmoid, or a more complex gated structural unit (Graves, 2012).
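
To make the recurrence concrete, here is a minimal sketch of Eq. 1 with tanh as the activation $f$; the weight shapes, names, and toy data are illustrative assumptions, not the paper's code:

    import numpy as np

    def rnn_step(h_prev, x_t, W_h, W_x, b):
        # One time step of Eq. 1: h_t = f(h_{t-1}, x_t), with f = tanh
        return np.tanh(W_h @ h_prev + W_x @ x_t + b)

    rng = np.random.default_rng(0)
    W_h = 0.1 * rng.normal(size=(8, 8))  # hidden-to-hidden weights (8 hidden units)
    W_x = 0.1 * rng.normal(size=(8, 3))  # input-to-hidden weights (3 input features)
    b = np.zeros(8)
    h = np.zeros(8)                      # initial hidden state h_0
    for x_t in rng.normal(size=(5, 3)):  # a toy sequence of 5 time steps
        h = rnn_step(h, x_t, W_h, W_x, b)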


FIGURE 2. RNN structure. Each structural unit accepts both the input data and the hidden state of the previous unit, and outputs the predicted value at the current time step.

The RNN learns the probability distribution $p(x_t \mid x_{t-1}, \ldots, x_1)$ at time step $t$ by training on the sequence of the previous $t-1$ time steps. For example, when using the softmax activation function, the resulting multinomial distribution (1-of-K encoding) is:

$p(x_{t,j} = 1 \mid x_{t-1}, \ldots, x_1) = \frac{\exp(w_j h_t)}{\sum_{j'=1}^{K} \exp(w_{j'} h_t)}$  (2)

where $j = 1, \ldots, K$ and $w_j$ is the $j$-th row of the weight matrix $W$. From these probabilities, the probability of the sequence $x$ can be calculated:

$p(x) = \prod_{t=1}^{T} p(x_t \mid x_{t-1}, \ldots, x_1)$  (3)

By iteratively sampling at each time step, new sequences can be drawn from $p(x)$.

GRU

Figure 3 describes the specific structure of the GRU. The GRU has two important gating units: a reset gate and an update gate. In the $j$-th hidden unit, the reset gate $r_j$ is calculated by:

$r_j = \sigma([W_r x]_j + [U_r h_{t-1}]_j)$  (4)

where $\sigma$ is the sigmoid function, $W_r$ and $U_r$ are weight matrices of the hidden units, $x$ and $h_{t-1}$ are the input sequence and the previous hidden state, and $[\cdot]_j$ denotes the $j$-th element of a vector.


FIGURE 3. Computation of the hidden state in the GRU model. The update gate $z$ chooses whether to update the output hidden state $h_t$ with the candidate hidden state $\tilde{h}_t$. The reset gate $r$ decides whether to ignore the hidden state $h_{t-1}$ of the previous time step.

The update gate $z_j$ is calculated similarly:

$z_j = \sigma([W_z x]_j + [U_z h_{t-1}]_j)$  (5)

The final hidden state $h_j^t$ is calculated by:

$h_j^t = z_j h_j^{t-1} + (1 - z_j)\tilde{h}_j^t$  (6)

where

$\tilde{h}_j^t = \phi([Wx]_j + [U(r \odot h_{t-1})]_j)$  (7)

where $\tilde{h}_j^t$ is the candidate hidden state, $\phi$ is the tanh activation function, and $\odot$ denotes element-wise multiplication. This formula shows that as the reset gate $r_j$ approaches 0, the hidden state gradually forgets the previous hidden state $h_{t-1}$. This allows the hidden state to discard irrelevant information, making the information transfer more efficient.

The update gate controls the ratio of the previous hidden state to the candidate hidden state. When the update gate $z_j$ tends to 1, the model tends to keep only the hidden state of the previous time step; the information in $x_t$ is almost ignored, so the input at this time step is effectively skipped. Conversely, when $z_j$ is close to 0, the new hidden state $h_j^t$ is close to $\tilde{h}_j^t$. If the update gate stays close to 1 over the entire sequence, the hidden state at the starting time step can be preserved and passed to the end of the sequence. Owing to the additive form of Eq. 6, the GRU learns new information while retaining important features. In addition, this additive form creates shortcut paths that bypass multiple time steps, which makes it easier to back-propagate errors and reduces the difficulties associated with vanishing gradients. The model parameters (weights and biases) are tuned in a similar way to traditional feedforward neural networks: during training, an optimization algorithm minimizes the objective function on the training set. GRNN proceeds through forward and backward propagation during training, and during backpropagation the weights and biases are updated with the corresponding error terms.
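
As a sketch of how Eqs. 4–7 fit together, the function below performs one GRU step for all hidden units at once (vectorized, so the per-unit index $j$ disappears); the parameter names mirror the equations, and the shapes are illustrative:

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def gru_step(h_prev, x, W_r, U_r, W_z, U_z, W, U):
        r = sigmoid(W_r @ x + U_r @ h_prev)          # reset gate, Eq. 4
        z = sigmoid(W_z @ x + U_z @ h_prev)          # update gate, Eq. 5
        h_tilde = np.tanh(W @ x + U @ (r * h_prev))  # candidate state, Eq. 7
        return z * h_prev + (1.0 - z) * h_tilde      # new hidden state, Eq. 6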

GRNN uses the GRU to regulate information transfer in the RNN, so that the sequence model can retain effective information over long ranges, forget irrelevant information, and effectively avoid vanishing and exploding gradients. In this research, GRNN is used to predict reservoir physical parameters. Logging curves can be regarded as sequence information, and the ability of GRNN to extract effective information adaptively makes it well suited to the task of reservoir parameter prediction.

Data description and model

The experimental data we use come from a sandstone reservoir work area. The datasets used to predict reservoir Por, SW, and Vsh are from Well A and Well B, whose log data are shown in Figures 4, 5. The Well A data are split into 80% for training and 20% for validation; Well B is the test set. Well A covers a depth range of 2,100–2,275 m and includes 1,597 sample points. Well B covers a depth range of 2,150–2,325 m and includes 2,298 sample points. During training, the model is updated with gradients computed only on the training set, and performance on the test set is used as the measure of quality. We focus on the prediction performance on the test set.


FIGURE 4. Well A logging curve.


FIGURE 5. Well B logging curve.

Before model training, to ensure that the model can effectively adjust the weights and biases, we normalize the data with z-score (mean 0, variance 1):

$x_{scaled} = \frac{x - \bar{x}}{x_{std}}$  (8)

where $x_{scaled}$ is the normalized data, $x$ is the actual data, $\bar{x}$ is the mean of the actual data, and $x_{std}$ is its standard deviation. This preprocessing limits the range of data variation, thereby improving the model's convergence speed and training accuracy.
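
A minimal sketch of Eq. 8 follows. We assume, as is standard practice, that the scaling statistics are computed on the training well only and reused on the test well; the arrays are random stand-ins for the real logs:

    import numpy as np

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(1597, 3))  # stand-in for Well A samples (Vp, Vs, Den)
    X_test = rng.normal(size=(2298, 3))   # stand-in for Well B samples

    mean, std = X_train.mean(axis=0), X_train.std(axis=0)
    X_train_scaled = (X_train - mean) / std  # Eq. 8
    X_test_scaled = (X_test - mean) / std    # training statistics reused on test data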

Establishing the GRNN model

In this study, we used Python 3.7.0 to implement the model on the TensorFlow platform.

Figure 6 presents the design framework of the GRNN model. The input to the model is $X_i = \{x_{Vp}, x_{Vs}, x_{Den}\}$, the sequence composed of the Vp, Vs, and Den values at each logging sample point. Since in actual development we can generally only obtain elastic values by seismic inversion, these elastic parameters were chosen as the input data. $X_i$ is normalized and passed to the hidden layer. The GRU structure of the hidden layer uses two activation functions (see Figure 3), sigmoid and tanh. The sigmoid function maps its output into the interval (0, 1); it is monotonic and continuous, stable to optimize, and easy to differentiate. The tanh output has zero mean, and its convergence is faster than that of the sigmoid. Overall, GRNN replaces each hidden unit of a typical RNN with a GRU, realizing the memorization and forgetting of the information flow. Each GRU updates from the hidden state of the previous unit and the input data, so the first GRU initializes its state with $h_0$; $h_1$ is the hidden state output to the next GRU after the first GRU finishes processing its data. Information flows this way until the last GRU. Finally, the predicted sequence $Y_i = \{y_{Por}, y_{SW}, y_{Vsh}\}$ is output through a fully connected layer with three output units corresponding to the prediction targets Por, SW, and Vsh.
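
A minimal TensorFlow/Keras sketch of this architecture is shown below, using the hyperparameters selected in the Parameter setting section (64 hidden units, dropout 0, Adamax with learning rate 0.001); the windowed input shape and exact layer stack are our assumptions, not the authors' published code:

    import tensorflow as tf

    model = tf.keras.Sequential([
        # GRU hidden layer over input windows of shape (timesteps, 3): Vp, Vs, Den
        tf.keras.layers.GRU(64, input_shape=(None, 3), dropout=0.0),
        # Fully connected output layer with three units: Por, SW, Vsh
        tf.keras.layers.Dense(3),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adamax(learning_rate=0.001),
                  loss="mse", metrics=["mae"])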


FIGURE 6. GRNN architecture suitable for property parameter prediction.

Compared with the classical BP neural network, GRNN is better at mining the information within sequences. Owing to the GRU structure, GRNN can filter out unimportant information to a certain extent, retain the characteristic information of the logging data, and improve the prediction accuracy of the physical parameters.

Results and discussion

Fitting and generalization

When a trained model performs well on the training set but cannot predict accurately on new test data, the model has no real value. In machine learning, this problem is called overfitting (Ying, 2019). Conversely, if there is not enough data or the model is less complex than the fit requires, the model cannot predict accurately even on the training set and is equally useless; this problem is called underfitting (Jabbar and Khan, 2015). In supervised learning, both overfitting and underfitting must be guarded against when tuning the model.

Generalization ability is very important for any machine learning model. We are always interested in how well a model predicts unknown data, because this reflects the model's practical value; therefore, all evaluation metrics are computed on test data. Among the techniques for mitigating overfitting, a common one is dropout (Mianjy and Arora, 2020), i.e., randomly turning off neurons at a certain rate (between 0 and 1) during each iteration. In the GRNN network, the dropout rate can be set in the hidden layer.

Parameter setting

We use the Well A logging data to train the model and analyze the effect of different parameter settings on the results. Three network parameters of the GRNN are considered: the learning rate ($\alpha$), the number of hidden units ($n_{unit}$), and the dropout rate ($r_{drop}$). We set a series of values and, when one parameter is varied, keep the others fixed. We record model performance for $\alpha \in \{0.001, 0.005, 0.01, 0.02\}$, $n_{unit} \in \{16, 32, 64, 128\}$, and $r_{drop} \in \{0, 0.2, 0.4, 0.8\}$. In addition, we try different optimization algorithms: RMSprop (Zou et al., 2019), Adam (Bock and Weiß, 2019), Adamax (Kingma and Ba, 2014), and Adagrad (Duchi et al., 2011). A set of suitable hyperparameters is obtained in this manner.
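
The search varies one hyperparameter at a time while holding the others fixed; a sketch of that loop is below, where the baseline configuration and the train_and_validate routine are hypothetical placeholders:

    search_space = {
        "lr": [0.001, 0.005, 0.01, 0.02],
        "units": [16, 32, 64, 128],
        "dropout": [0.0, 0.2, 0.4, 0.8],
        "optimizer": ["rmsprop", "adam", "adamax", "adagrad"],
    }
    baseline = {"lr": 0.001, "units": 64, "dropout": 0.0, "optimizer": "adamax"}

    results = {}
    for name, values in search_space.items():
        for value in values:
            config = {**baseline, name: value}  # vary one parameter, hold the rest
            # results[(name, value)] = train_and_validate(config)  # hypothetical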

We train the model with different hyperparameters and choose mean absolute error (MAE) and mean squared error (MSE) as metrics:

$\mathrm{MAE} = \frac{\sum_{i}^{N} |y_{label}(i) - y_{pred}(i)|}{N}$  (9)
$\mathrm{MSE} = \frac{\sum_{i}^{N} (y_{label}(i) - y_{pred}(i))^2}{N}$  (10)

where $y_{label}(i)$ is the true label value of the $i$-th sample, $y_{pred}(i)$ is the model's prediction for it, and $N$ is the number of samples.
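
Both metrics are straightforward to compute; a minimal sketch:

    import numpy as np

    def mae(y_label, y_pred):
        # Eq. 9: mean absolute error over N samples
        return np.mean(np.abs(y_label - y_pred))

    def mse(y_label, y_pred):
        # Eq. 10: mean squared error over N samples
        return np.mean((y_label - y_pred) ** 2)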

Boxplots of the prediction error on the validation set are shown in Figure 7. The boxplots show the 25% and 75% quantiles, the median, outliers, and the maximum and minimum values. Outliers exist in most cases; they may come from the initial stage of training, when the model's fit is still poor and the gap between predicted and true values is large. The optimal hyperparameters, chosen by the smallest median, are $\alpha = 0.001$, $n_{unit} = 64$, and $r_{drop} = 0$, with Adamax as the optimization algorithm.


FIGURE 7. Box plot. The model is trained using the well A data, and the network hyperparameters are adjusted according to the MAE values of the validation set. (A) learning rate, (B) hidden units, (C) dropout rate, (D) algorithm.

Comparison with RNN

To evaluate the accuracy of the GRNN model in simultaneously predicting Por, SW, and Vsh for Well B, we conduct comparative experiments with an RNN. GRNN evolved from the RNN by adding the GRU structure to the hidden layer; a typical RNN lacks the ability to extract useful information and forget irrelevant information. For a fair comparison, the parameters of the GRNN and the RNN are set the same. The mean MAE and mean MSE of the experimental results are reported in Table 1. The average MAE of the GRNN prediction model is 0.4889 and the average MSE is 0.5283, slightly lower than those of the RNN. This shows that the error of simultaneously predicting the physical parameters is acceptable. Figures 8A,B show the evolution of MAE and MSE as GRNN and RNN predict Well B; after 20 iterations, GRNN is at a lower level overall on both metrics.
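
For reference, a sketch of the RNN baseline under the same settings: only the recurrent cell changes (SimpleRNN in place of GRU), with the layer sizes and optimizer held equal. As with the GRNN sketch above, the exact stack is our assumption:

    import tensorflow as tf

    rnn_baseline = tf.keras.Sequential([
        tf.keras.layers.SimpleRNN(64, input_shape=(None, 3)),  # plain RNN cell
        tf.keras.layers.Dense(3),                              # Por, SW, Vsh
    ])
    rnn_baseline.compile(optimizer=tf.keras.optimizers.Adamax(learning_rate=0.001),
                         loss="mse", metrics=["mae"])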


TABLE 1. Comparison of prediction results for Well B.


FIGURE 8. Comparison of prediction errors of GRNN and RNN for well B during iteration. (A) MAE, (B) MSE.

Fitting and prediction of logging data

We use the trained GRNN model to predict Well A and observe how well the model fits the training set, as shown in Figure 9. For the Por predictions, the fit is poor mainly in the zero-valued interval at about 2,200–2,225 m in the middle section; this segment was assigned a value of 0 for low Por during expert interpretation. For the SW predictions, the low SW at 2,175–2,200 m and the high SW at 2,200–2,225 m are not fitted ideally. The fitting error of Vsh is mainly distributed in the depth range of 2,265–2,275 m. Overall, the model's fit to the target curves is acceptable. We hypothesize that the magnitude of the error is related to the correlation between the training attributes and the predicted attributes; the Pearson product-moment correlation coefficient (PPMCC) of these attributes was therefore calculated (Emerson, 2015).


FIGURE 9. Fitting effect of GRNN on well A.

Figure 10 presents the PPMCC matrix calculated from the Well A data. The PPMCC quantifies the degree of correlation between variables: the closer the coefficient is to 1, the stronger the positive correlation; the closer to −1, the stronger the negative correlation; and a coefficient near 0 indicates little connection between the variables. As the figure shows, the correlation coefficients of Por with Vp, Vs, and Den are −0.36, −0.13, and −0.54, respectively: porosity is most sensitive to density, with a moderate negative correlation; it has a low negative correlation with P-wave velocity and only a weak negative correlation with S-wave velocity. The correlation coefficients of SW with Vp, Vs, and Den are 0.08, −0.06, and 0.30, respectively: water saturation is essentially uncorrelated with the P-wave and S-wave velocities and only weakly positively correlated with density. The correlation coefficients of Vsh with Vp, Vs, and Den are −0.38, −0.59, and 0.40, respectively: shale content is negatively correlated with both the compressional and shear wave velocities, more strongly with the shear wave velocity, and positively correlated to a certain degree with density. In general, Vp, Vs, and Den each have some correlation with Por, SW, and Vsh, but the correlations are not high. For SW in particular, density is almost the only input from which useful predictive information can be extracted.
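
A PPMCC matrix like the one in Figure 10 can be computed from the well logs with a single pandas call; the DataFrame here is a random stand-in for the Well A curves:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    cols = ["Vp", "Vs", "Den", "Por", "SW", "Vsh"]
    logs = pd.DataFrame(rng.normal(size=(1597, 6)), columns=cols)  # stand-in for Well A
    ppmcc = logs.corr(method="pearson")  # 6 x 6 PPMCC matrix as in Figure 10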


FIGURE 10. PPMCC matrix for different logging parameters in the raw data of well A.

Finally, we present the prediction results for the test-set Well B, as shown in Figure 11. The predicted Por is basically in line with the logging curve. The overall SW prediction is inferior; as the preceding PPMCC analysis suggests, the weak correlation makes accurate prediction difficult. Nevertheless, the trend of variation can be predicted to a certain extent. The prediction of Vsh is better, again owing to its stronger correlation with the elastic parameters.


FIGURE 11. Prediction effect of GRNN on well B.

Prediction of reservoir physical parameters

Through the inversion method, we obtained the elastic parameters of the actual work area, as shown in Figure 12. Figures 12A–C are the compressional wave velocity, shear wave velocity, and density of the reservoir, respectively. We use the trained GRNN model to predict the physical parameters of the actual work area, as shown in Figure 13. Figures 13A–C are the prediction results for the Por, SW, and Vsh of the reservoir, respectively. To verify the reliability of the predicted results, we inserted Well C into the profile. The extrapolation effect is acceptable: the prediction results next to the well are fairly consistent with the true physical property parameters from the well logs. Even SW, which has the lowest correlation with the elastic attributes, is still predicted accurately. The prediction results of this profile show high Por, low SW, and low Vsh at a depth of about 1.18–1.20 s in the time domain, suggesting that this depth range has potential for oil and gas development.


FIGURE 12. Reservoir elastic parameters from inversion. (A) P-wave velocity, (B) S-wave velocity, (C) density.


FIGURE 13. Prediction of reservoir physical properties. (A) Por, (B) SW, (C) Vsh.

Conclusion

The prediction of physical parameters such as porosity, permeability, and saturation is an important task in reservoir prediction. Many scholars now try to apply deep learning to physical property prediction. However, most current work uses highly correlated logging parameters (such as electrical measurements) as inputs, rather than the elastic parameters that can actually be obtained across a work area, and uses its models to predict only a single parameter. This limits the application value for reservoir prediction.

This paper describes how to apply the GRNN model to the simultaneous prediction of reservoir physical parameters (porosity, water saturation, and shale content). GRNN is a special recurrent neural network in which sequence information is transmitted within the same hidden layer. Its gated units can store and forget information, realizing the extraction and control of the information flow. GRNN can simultaneously predict the physical parameters from compressional wave velocity, shear wave velocity, and density, with smaller prediction error than the RNN. On the test-set Well B, the model achieves an average MAE of 0.4889 and an average MSE of 0.5283. The Pearson product-moment correlation coefficients between the elastic and physical parameters used in the model show that their correlations are not high; in particular, water saturation is weakly positively correlated almost only with density. Nonetheless, the model can still fit their nonlinear relationships in high dimensions. The GRNN model can also predict and reconstruct missing sections of a log. In future work, we will try changing the network structure and the properties of the input data to improve prediction accuracy. The prediction of reservoir physical parameters has guiding significance for oil and gas exploration, and using elastic parameters to achieve physical property prediction has general value for actual exploration work in different work areas.

Our suggested method for predicting reservoir parameters still has several limitations. The accuracy of the seismic inversion determines the effectiveness of the prediction, and model training may converge to a local rather than a global optimum.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.

Author contributions

JZ: Conceptualization, methodology, writing-original draft preparation. ZL: Supervision, software. GZ: Validation, investigation. BY: Visualization. XN: Data curation. TX: Writing-Reviewing and editing.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Ahmadi, M. A., and Chen, Z. (2019). Comparison of machine learning methods for estimating permeability and porosity of oil reservoirs via petro-physical logs. Petroleum 5 (3), 271–284. doi:10.1016/j.petlm.2018.06.002

Al Khalifah, H., Glover, P., and Lorinczi, P. (2020). Permeability prediction and diagenesis in tight carbonates using machine learning techniques. Mar. Pet. Geol. 112, 104096. doi:10.1016/j.marpetgeo.2019.104096

Al-Bulushi, N. I., King, P. R., Blunt, M. J., and Kraaijveld, M. (2012). Artificial neural networks workflow and its application in the petroleum industry. Neural comput. Appl. 21 (3), 409–421. doi:10.1007/s00521-010-0501-6

Al-Mudhafar, W. J. (2019). Bayesian and LASSO regressions for comparative permeability modeling of sandstone reservoirs. Nat. Resour. Res. 28 (1), 47–62. doi:10.1007/s11053-018-9370-y

Al-Mudhafar, W. J. (2020). Integrating machine learning and data analytics for geostatistical characterization of clastic reservoirs. J. Pet. Sci. Eng. 195, 107837. doi:10.1016/j.petrol.2020.107837

Alzubaidi, F., Mostaghimi, P., Swietojanski, P., Clark, S. R., and Armstrong, R. T. (2021). Automated lithology classification from drill core images using convolutional neural networks. J. Pet. Sci. Eng. 197, 107933. doi:10.1016/j.petrol.2020.107933

Aminian, K., and Ameri, S. (2005). Application of artificial neural networks for reservoir characterization with limited data. J. Pet. Sci. Eng. 49 (3-4), 212–222. doi:10.1016/j.petrol.2005.05.007

Ballin, P. R., Journel, A. G., and Aziz, K. (1992). Prediction of uncertainty in reservoir performance forecast. J. Can. Petroleum Technol. 31 (04). doi:10.2118/92-04-05

Bengio, Y., Simard, P., and Frasconi, P. (1994). Learning long-term dependencies with gradient descent is difficult. IEEE Trans. Neural Netw. 5 (2), 157–166. doi:10.1109/72.279181

Bock, S., and Weiß, M. (2019). “A proof of local convergence for the Adam optimizer,” in 2019 International Joint Conference on Neural Networks (IJCNN) (Budapest, Hungary: IEEE), 1–8.

Cao, D., Su, Y., and Cui, R. (2021). Multi-parameter pre-stack seismic inversion based on deep learning with sparse reflection coefficient constraints. J. Pet. Sci. Eng. 209, 109836. doi:10.1016/j.petrol.2021.109836

Chatterjee, R., Gupta, S. D., and Farroqui, M. Y. (2013). Reservoir identification using full stack seismic inversion technique: A case study from cambay basin oilfields, India. J. Pet. Sci. Eng. 109, 87–95. doi:10.1016/j.petrol.2013.08.006

Chen, W., Yang, L., Zha, B., Zhang, M., and Chen, Y. (2020). Deep learning reservoir porosity prediction based on multilayer long short-term memory network. Geophysics 85 (4), WA213–WA225. doi:10.1190/geo2019-0261.1

Chung, J., Gulcehre, C., Cho, K., and Bengio, Y. (2014). Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.

Duchi, J., Hazan, E., and Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res. 12 (7), 2121–2159. doi:10.5555/1953048.2021068

Eberhart-Phillips, D., Han, D., and Zoback, M. D. (1989). Empirical relationships among seismic velocity, effective pressure, porosity, and clay content in sandstone. Geophysics 54 (1), 82–89. doi:10.1190/1.1442580

Emerson, R. W. (2015). Causation and Pearson's correlation coefficient. J. Vis. Impair. Blind. 109 (3), 242–244. doi:10.1177/0145482x1510900311

Goldberg, I., and Gurevich, B. (2008). A semi-empirical velocity-porosity-clay model for petrophysical interpretation of P- and S-velocities. Geophys. Prospect. 46 (3), 271–285. doi:10.1046/j.1365-2478.1998.00095.x

Gomez, C. T., Dvorkin, J., and Vanorio, T. (2010). Laboratory measurements of porosity, permeability, resistivity, and velocity on Fontainebleau sandstones. Geophysics 75 (6), E191–E204. doi:10.1190/1.3493633

Graves, A. (2012). “Long short-term memory,” in Supervised sequence labelling with recurrent neural networks (Berlin, Heidelberg: Springer), 37–45.

Guo, Y., Liu, Y., Oerlemans, A., Lao, S., Wu, S., and Lew, M. S. (2016). Deep learning for visual understanding: A review. Neurocomputing 187, 27–48. doi:10.1016/j.neucom.2015.09.116

Gyllensten, A., Tilke, P., Al-Raisi, M., and Allen, D. (2004). “Porosity heterogeneity analysis using geostatistics,” in Abu dhabi international conference and exhibition (OnePetro).

Hamada, G. M. (2004). Reservoir fluids identification using Vp/Vs ratio? Oil Gas Sci. Technol. - Rev. IFP 59 (6), 649–654. doi:10.2516/ogst:2004046

Han, D., Nur, A., and Morgan, D. (1986). Effects of porosity and clay content on wave velocities in sandstones. Geophysics 51 (11), 2093–2107. doi:10.1190/1.1442062

He, M., Gu, H., and Wan, H. (2020). Log interpretation for lithology and fluid identification using deep neural network combined with MAHAKIL in a tight sandstone reservoir. J. Pet. Sci. Eng. 194, 107498. doi:10.1016/j.petrol.2020.107498

Hicks, W. G., and Berry, J. E. (1956). Application of continuous velocity logs to determination of fluid saturation of reservoir rocks. Geophysics 21 (3), 739–754. doi:10.1190/1.1438267

Jabbar, H., and Khan, R. Z. (2015). “Methods to avoid over-fitting and under-fitting in supervised machine learning (comparative study),” in Computer science, communication and instrumentation devices, 70. doi:10.3850/978-981-09-5247-1_017

Jean, S., Cho, K., Memisevic, R., and Bengio, Y. (2014). On using very large target vocabulary for neural machine translation. arXiv preprint arXiv:1412.2007.

Kaydani, H., Mohebbi, A., and Baghaie, A. (2011). Permeability prediction based on reservoir zonation by a hybrid neural genetic algorithm in one of the Iranian heterogeneous oil reservoirs. J. Pet. Sci. Eng. 78 (2), 497–504. doi:10.1016/j.petrol.2011.07.017

Kingma, D. P., and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Lecun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature 521 (7553), 436–444. doi:10.1038/nature14539

Li, S., Liu, B., Ren, Y., Chen, Y., Yang, S., Wang, Y., et al. (2019). Deep-learning inversion of seismic data. arXiv preprint arXiv:1901.07733.

Li, X., Ma, X., Xiao, F., Wang, F., and Zhang, S. (2020). Application of gated recurrent unit (GRU) neural network for smart batch production prediction. Energies 13 (22), 6121. doi:10.3390/en13226121

Lin, N., Zhang, D., Zhang, K., Wang, S., Fu, C., Zhang, J., et al. (2018). Predicting distribution of hydrocarbon reservoirs with seismic data based on learning of the small-sample convolution neural network. Chin. J. Geophys. 61 (10), 4110–4125.

Mahdiani, M. R., and Norouzi, M. (2018). A new heuristic model for estimating the oil formation volume factor. Petroleum 4 (3), 300–308. doi:10.1016/j.petlm.2018.03.006

Mianjy, P., and Arora, R. (2020). On convergence and generalization of dropout training. Adv. Neural Inf. Process. Syst. 33, 21151–21161.

Michelucci, U. (2018). Applied deep learning: A case-based approach to understanding deep neural networks. Berkeley, CA: Apress. doi:10.1007/978-1-4842-3790-8

Mikolov, T., Karafiát, M., Burget, L., Cernocký, J., and Khudanpur, S. (2010). Recurrent neural network based language model. Makuhari: Interspeech, 1045–1048.

Okon, A. N., Adewole, S. E., and Uguma, E. M. (2021). Artificial neural network model for reservoir petrophysical properties: Porosity, permeability and water saturation prediction. Model. Earth Syst. Environ. 7 (4), 2373–2390. doi:10.1007/s40808-020-01012-4

Oloso, M. A., Hassan, M. G., Bader-El-Den, M. B., and Buick, J. M. (2017). Hybrid functional networks for oil reservoir PVT characterisation. Expert Syst. Appl. 87, 363–369. doi:10.1016/j.eswa.2017.06.014

Pang, M., Ba, J., Carcione, J. M., Picotti, S., Zhou, J., and Jiang, R. (2019). Estimation of porosity and fluid saturation in carbonates from rock-physics templates based on seismic Q. Geophysics 84 (6), M25–M36. doi:10.1190/geo2019-0031.1

Rui, J., Zhang, H., Zhang, D., Han, F., and Guo, Q. (2019). Total organic carbon content prediction based on support-vector-regression machine with particle swarm optimization. J. Pet. Sci. Eng. 180, 699–706. doi:10.1016/j.petrol.2019.06.014

Saon, G., Tüske, Z., Bolanos, D., and Kingsbury, B. (2021). "Advancing RNN transducer technology for speech recognition," in ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE), 5654–5658.

Saporetti, C. M., Goliatt, L., and Pereira, E. (2021). Neural network boosted with differential evolution for lithology identification based on well logs information. Earth Sci. Inf. 14 (1), 133–140. doi:10.1007/s12145-020-00533-x

Segesman, F. F. (1980). Well-logging method. Geophysics 45 (11), 1667–1684. doi:10.1190/1.1441057

Shuai, B., Zuo, Z., Wang, B., and Wang, G. (2016). “Dag-recurrent neural networks for scene labeling,” in Proceedings of the IEEE conference on computer vision and pattern recognition, Las Vegas, NV. 3620–3629.

Soleimani, F., Hosseini, E., and Hajivand, F. (2020). Estimation of reservoir porosity using analysis of seismic attributes in an Iranian oil field. J. Pet. Explor. Prod. Technol. 10 (4), 1289–1316. doi:10.1007/s13202-020-00833-4

Sutskever, I., Martens, J., and Hinton, G. E. (2011). Generating text with recurrent neural networks. Bellevue, WA: ICML.

Tahmasebi, P., and Hezarkhani, A. (2012). A fast and independent architecture of artificial neural network for permeability prediction. J. Pet. Sci. Eng. 86, 118–126. doi:10.1016/j.petrol.2012.03.019

Tariq, Z., Elkatatny, S., Mahmoud, M., and Abdulraheem, A. (2016). “A new artificial intelligence based empirical correlation to predict sonic travel time,” in International Petroleum Technology Conference (Abu Dhabi, UAE: OnePetro).

Tariq, Z., Elkatatny, S., Mahmoud, M., Ali, A. Z., and Abdulraheem, A. (2017). “A new technique to develop rock strength correlation using artificial intelligence tools,” in SPE Reservoir Characterisation and Simulation Conference and Exhibition (Wuhan, China: OnePetro).

Tian, Y., Zhang, Q., Cheng, G., and Liu, X. (2012). “An application of RBF neural networks for petroleum reservoir characterization,” in 2012 third global congress on intelligent systems (Wuhan, China: IEEE), 95–99.

Timur, A. (1968). “An investigation of permeability, porosity, and residual water saturation relationships,” in SPWLA 9th annual logging symposium (New Orleans, Louisiana).

Verma, A. K., Cheadle, B. A., Routray, A., Mohanty, W. K., and Mansinha, L. (2012). “Porosity and permeability estimation using neural network approach from well log data,” in SPE Annual Technical Conference and Exhibition, Calgary, AB. 1–6.

Wang, J., and Cao, J. (2021). Deep learning reservoir porosity prediction using integrated neural network. Arab. J. Sci. Eng., 1–15. doi:10.1007/s13369-021-06080-x

Wang, J., Cao, Y., Liu, K., Liu, J., and Kashif, M. (2017). Identification of sedimentary-diagenetic facies and reservoir porosity and permeability prediction: An example from the eocene beach-bar sandstone in the dongying depression, China. Mar. Pet. Geol. 82, 69–84. doi:10.1016/j.marpetgeo.2017.02.004

Wang, S., Qin, C., Feng, Q., Javadpour, F., and Rui, Z. (2021). A framework for predicting the production performance of unconventional resources using deep learning. Appl. Energy 295, 1–15. doi:10.1016/j.apenergy.2021.117016

Xue, Y., Cheng, L., Mou, J., and Zhao, W. (2014). A new fracture prediction method by combining genetic algorithm with neural network in low-permeability reservoirs. J. Pet. Sci. Eng. 121, 159–166. doi:10.1016/j.petrol.2014.06.033

Yasin, Q., Ding, Y., Baklouti, S., Boateng, C. D., Du, Q., and Golsanami, N. (2022). An integrated fracture parameter prediction and characterization method in deeply-buried carbonate reservoirs based on deep neural network. J. Pet. Sci. Eng. 208, 109346. doi:10.1016/j.petrol.2021.109346

Ying, X. (2019). "An overview of overfitting and its solutions," in Journal of Physics: Conference Series (Beijing, China: IOP Publishing), 022022.

Zeng, L., Ren, W., and Shan, L. (2020). Attention-based bidirectional gated recurrent unit neural networks for well logs prediction and lithology identification. Neurocomputing 414, 153–171. doi:10.1016/j.neucom.2020.07.026

Zhang, X., Yin, F., Zhang, Y., Liu, C., and Bengio, Y. (2017). Drawing and recognizing Chinese characters with recurrent neural network. IEEE Trans. Pattern Anal. Mach. Intell. 40 (4), 849–862. doi:10.1109/tpami.2017.2695539

Zhang, Z., Cui, P., and Zhu, W. (2020). Deep learning on graphs: A survey. IEEE Trans. Knowl. Data En. 34, 249–270. doi:10.1109/TKDE.2020.2981333

Zou, F., Shen, L., Jie, Z., Zhang, W., and Liu, W. (2019). “A sufficient condition for convergences of adam and rmsprop,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, 11127–11135.

Keywords: deep learning, physical parameters, reservoir, recurrent neural network, regression prediction, multiple outputs

Citation: Zhang J, Liu Z, Zhang G, Yan B, Ni X and Xie T (2022) Simultaneous prediction of multiple physical parameters using gated recurrent neural network: Porosity, water saturation, shale content. Front. Earth Sci. 10:984589. doi: 10.3389/feart.2022.984589

Received: 02 July 2022; Accepted: 01 August 2022;
Published: 24 August 2022.

Edited by:

Qinzhuo Liao, King Fahd University of Petroleum and Minerals, Saudi Arabia

Reviewed by:

Buraq Adnan Al-Baldawi, University of Baghdad, Iraq
Qiang Guo, China Jiliang University, China

Copyright © 2022 Zhang, Liu, Zhang, Yan, Ni and Xie. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Zhuofan Liu, Z20010007@s.upc.edu.cn
