Mohamed Hassan El Dabbour, Amr Labib, Ali Soliman, Ayman Fadel Said, Hany Shalaby, Khaled Mohamed Mansour, Mohamed Nagy Negm, Oliver C Mullins, Mohamed Ahmed Elfeel, Hassan Elsayed Diaa, Mahmoud Ali Shihab, Abdelrahman Agam, Farah Adel Seifeldin, and Mohamed Hashem Mostafa
Reservoir simulation is required to aid decision-making for high-impact projects. It is the culmination of geophysical, geological, petrophysical, and engineering assessments of sparse, uncertain, and expensive data. History matching is the process of elevating trust in numerical models as they are calibrated to mimic the behaviour of the real-life asset. Traditional history matching relies on direct parameter assignment based on flat files used as input to the reservoir simulator. This provides a convenient method for perturbing uncertain parameters and assigning their values during the history matching process. Given the nature of the input files, the scope of uncertainty parameters is limited to the original petrophysical properties and their derived simulation properties in a specified group of grid blocks, occasionally extended to include fluid and multiphase flow properties. However, there are key, influential model-building steps prior to reservoir simulation that relate to data interpretation. These steps control not only the values of petrophysical properties but also their spatial correlation, cross-correlation, and variability. The limited scope of parameterization adds bias to the model calibration process and hence negatively impacts its outcome. In an era where ML/AI algorithms are shaping data interpretation methods, key modelling decisions can be revisited to realize the maximum value of subsurface data. A framework is therefore required in which these important model-building steps are captured in history matching, eliminating bias and ensuring the geological consistency of the subsurface model during and after history matching.

This paper demonstrates a workflow, liberated from these parameterization constraints, that calculates the recommended parameters achieving the minimum mismatch score. The workflow is executed on a cloud platform whose compute elasticity expedites history matching. It is composed of three main steps. The first step is data loading, where simulation results and parameters are extracted from the submitted ensemble(s). The second step involves data preparation and cleaning: wells devoid of data are removed, and scaled metrics are created to calculate the mismatch score. The data are then grouped by simulation ID to obtain a field-level aggregation, and the aggregated, cleaned simulation results are merged with the parameter list to create the input dataset for the final step, where several machine learning models are trained and evaluated in parallel.

In the final step, the data are split into training and testing datasets. The target variable is the mismatch score, as the models attempt to predict the mismatch for a given set of parameters. Supervised learning regression algorithms were used; the best-performing ones were found to be random forest and gradient boosted trees. After fine-tuning the machine learning models and evaluating them based on their coefficient of determination (R² score), the best-fitting model is used to calculate the optimized parameters. This happens iteratively: new series of parameters are generated within a range, and the machine learning model predicts the mismatch for each until the lowest mismatch is found. The parameters resulting in the minimum mismatch are the recommended parameters. This workflow is implemented on a simulation model built for a mature gas condensate field in the Mediterranean offshore Egypt.
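To make the data preparation and aggregation step concrete, the following is a minimal Python sketch assuming the ensemble results and the parameter list arrive as pandas DataFrames. The column names (sim_id, well, quantity, observed, simulated) are hypothetical placeholders, not the platform's actual schema.

```python
import pandas as pd

def prepare_mismatch_table(results: pd.DataFrame, params: pd.DataFrame) -> pd.DataFrame:
    """Clean simulation results, compute a scaled mismatch score per
    simulation, and merge with the parameter table.

    Assumed (hypothetical) columns:
      results: sim_id, well, quantity, observed, simulated
      params : sim_id plus one column per uncertain parameter
    """
    # Remove rows for wells devoid of observed data
    results = results.dropna(subset=["observed"])

    # Scale each quantity (rates, pressures, ...) by its observed range so
    # mismatch contributions are comparable across measurement types
    scale = results.groupby("quantity")["observed"].transform(lambda s: s.max() - s.min())
    results = results.assign(mismatch=((results["simulated"] - results["observed"]) / scale) ** 2)

    # Aggregate to a single field-level score per simulation ID
    field_scores = results.groupby("sim_id", as_index=False)["mismatch"].mean()

    # Merge with the parameter list to form the ML input dataset
    return field_scores.merge(params, on="sim_id")
```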
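The training and selection of the mismatch proxy could then look like the sketch below, using scikit-learn implementations of the two algorithms the paper names as best performing. The 80/20 split, hyperparameters, and ranking by test-set R² are illustrative assumptions rather than the authors' exact configuration.

```python
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

def train_mismatch_proxies(dataset):
    """Train candidate regressors that predict the field-level mismatch
    score from the uncertain parameters; return the best model by R^2."""
    X = dataset.drop(columns=["sim_id", "mismatch"])
    y = dataset["mismatch"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    candidates = {
        "random_forest": RandomForestRegressor(n_estimators=300, random_state=42),
        "gradient_boosting": GradientBoostingRegressor(random_state=42),
    }
    scores = {}
    for name, model in candidates.items():
        model.fit(X_train, y_train)
        scores[name] = r2_score(y_test, model.predict(X_test))

    best = max(scores, key=scores.get)
    return candidates[best], scores
```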
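Finally, the iterative search for the recommended parameters can be sketched as a random search over the uncertainty ranges, scored by the trained proxy model. The uniform sampling scheme, sample count, and function names are assumptions; the paper does not specify how the new parameter series are generated.

```python
import numpy as np
import pandas as pd

def recommend_parameters(model, param_ranges, n_samples=100_000, seed=0):
    """Sample parameter sets within their uncertainty ranges and return
    the set the proxy model predicts to have the lowest mismatch.

    param_ranges: dict mapping parameter name -> (low, high); uniform
    random sampling is an assumed scheme, not the paper's exact method.
    """
    rng = np.random.default_rng(seed)
    names = list(param_ranges)
    lows = np.array([param_ranges[n][0] for n in names])
    highs = np.array([param_ranges[n][1] for n in names])

    # Draw candidate parameter sets and score them with the proxy model
    samples = rng.uniform(lows, highs, size=(n_samples, len(names)))
    predicted = model.predict(pd.DataFrame(samples, columns=names))

    best = samples[np.argmin(predicted)]
    return dict(zip(names, best)), float(predicted.min())
```

The cheapness of the proxy is what makes this loop practical: evaluating one candidate takes microseconds versus hours for a full simulation run, so very large candidate sets can be screened before any simulator verification.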
The field comprises three anticlines with a spill-fill petroleum system; the majority of the wells are located in one anticline, while the other two have few wells and are candidates for appraisal. Moreover, there is high uncertainty in the sand distribution and reservoir properties, spill-point depths, and depletion, along with an unexplained phenomenon: a sustained gas-water contact in the new anticline even after 30 years of production from the old anticline. This uncertainty in the understanding of the relationship between the two anticlines makes the selection of drilling locations challenging. To assess remaining reservoir volumes and identify potential infill targets, we used ML to study all combinations of uncertainties in a full-loop approach, from the static to the dynamic model, and to generate multiple representations that honour the geological understanding. The cloud-based agile reservoir modelling approach, enriched with ML/AI algorithms, enabled us to generate multiple realizations that match 30 years of historical production and pressure profiles, capturing many possible combinations of uncertain geological parameters and concepts. In addition, several forecast scenarios for three new appraisal wells were optimized based on the ensemble of history-matched models, minimizing the risk of drilling dry wells. Beyond describing the work process and results, this paper highlights the method's practical effectiveness and the common issues encountered in application. The use of cloud-based technology delivered substantial cost savings and efficiency improvements: the existing on-premises infrastructure would have needed 1-2 years to achieve results that were obtained in 1-2 months, and approximately USD 1 million in cluster hardware purchases was avoided. Moreover, cloud-based technology enables collaborative, iterative working styles for integrated teams and access to scalable technologies that are developed only on the cloud.
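As a hedged illustration of how an ensemble of history-matched models can de-risk appraisal locations, the sketch below ranks candidate wells by their chance of exceeding an economic cutoff across ensemble members. The data structure, cutoff value, and ranking rule are hypothetical and are not taken from the paper.

```python
import numpy as np

def rank_appraisal_targets(ensemble_forecasts, cutoff=1.0e9):
    """Rank candidate appraisal wells across an ensemble of
    history-matched models.

    ensemble_forecasts: dict mapping well name -> array of forecast
    cumulative gas volumes, one value per ensemble member (hypothetical
    structure). A member counts as 'dry' below the economic cutoff
    (illustrative default, in scf).
    """
    ranking = []
    for well, volumes in ensemble_forecasts.items():
        volumes = np.asarray(volumes)
        p_success = float((volumes >= cutoff).mean())  # fraction of members above cutoff
        p50 = float(np.percentile(volumes, 50))        # median recovery across the ensemble
        ranking.append((well, p_success, p50))
    # Prefer a high chance of success, then a higher median recovery
    return sorted(ranking, key=lambda t: (t[1], t[2]), reverse=True)
```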