High-performance Workflow Automation with Big Loop on AWS
Future digital evolution of reservoir development and management will come from the full automation of multidisciplinary subsurface workflows. The output of these workflows is a set of reservoir models used for forecasting reservoir behavior, systematically and automatically updated by incorporating the latest petrophysical, geophysical, geological, and production data. The incorporation of all these data provides an evolving view of the asset and, critically, of forecast uncertainty.
This presentation will show a history matching study using an ensemble-based workflow that tightly integrates the static and dynamic domains. Subsurface uncertainties, captured at every stage of the interpretive, modeling, and predictive processes, are used as inputs to an automated, repeatable workflow. By adjusting these inputs, an ensemble of models is created, and their likelihoods are constrained by observations within an iterative loop. The result is multiple calibrated models that are consistent with both the underlying geology and the observed production data. The ensemble is effectively a digital twin of the reservoir, with improved predictive ability and a realistic assessment of the uncertainty associated with production forecasts.
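The iterative loop described above can be sketched in the style of an ensemble smoother with multiple data assimilation (ES-MDA), a common choice for this class of workflow. This is a minimal illustration only, not the implementation used in the study: the two-parameter `forward` function is a hypothetical stand-in for a reservoir simulator, and the observation values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(m):
    # Hypothetical stand-in for a reservoir simulator: maps an
    # uncertain parameter vector to predicted "production" data.
    return np.array([m[0] + m[1], m[0] * m[1]])

d_obs = np.array([3.0, 2.0])   # observed data (example values)
obs_err = 0.1                  # observation noise std dev
n_ens, n_iter = 200, 4         # ensemble size, assimilation steps
alpha = n_iter                 # ES-MDA error inflation (sum of 1/alpha = 1)

# Prior ensemble: each row is one model realization's parameters
M = rng.normal(loc=[0.5, 1.0], scale=1.0, size=(n_ens, 2))

for _ in range(n_iter):
    D = np.array([forward(m) for m in M])    # predicted data per member
    dM = M - M.mean(axis=0)
    dD = D - D.mean(axis=0)
    C_md = dM.T @ dD / (n_ens - 1)           # parameter-data covariance
    C_dd = dD.T @ dD / (n_ens - 1)           # data covariance
    R = alpha * obs_err**2 * np.eye(len(d_obs))
    K = C_md @ np.linalg.inv(C_dd + R)       # Kalman-like gain
    noise = rng.normal(0, np.sqrt(alpha) * obs_err, size=D.shape)
    M = M + (d_obs + noise - D) @ K.T        # constrain members by observations

# Posterior ensemble mean prediction should sit close to the observations
mean_pred = np.mean([forward(m) for m in M], axis=0)
```

Each pass through the loop nudges every ensemble member toward the observed data while the spread that survives assimilation carries the remaining forecast uncertainty.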
We illustrate this workflow with data from Volve, a North Sea oil field on the Norwegian continental shelf. The study is conducted in an AWS cloud-hosted environment. The impact of the AWS cloud is profound: overall project time is reduced from weeks to days, without the typical limitations on the number of simultaneous runs. Results are available within hours instead of days, allowing same-day evaluation for faster decisions and improved operational efficiency. Moreover, enhanced stochastic analyses are now possible for teams without high-performance on-premises clusters.