The problem of efficient utilization of computational resources becomes more pressing given the growing demands on the quality and detail of 3D reservoir models. Nowadays, reservoir engineers typically have to upscale geological models of medium and large reservoirs in order to meet deadlines with the available computational resources. The low resolution of the dynamic model, coupled with upscaling errors, leads to questionable production forecasts.
The efficiency of modern computer systems exhibits continuous growth due to the increasing number of computational cores. High-performance hardware costs are decreasing every day, and hardware-software systems that were extremely expensive just a couple of years ago can now be purchased at a reasonable price even by small service companies. Given the availability of multi-CPU computers, more attention should be paid to the software side in order to utilize all computational resources in parallel simulations. Due to outdated software architecture, most common reservoir simulators leave most of the available computing power unused.
Our technology allows simulations on the finest grids by exploiting the most recent software developments. In our view, this approach not only improves model detail, but also assists collaboration between geologists and reservoir engineers.
The following novel approaches were implemented in tNavigator in order to increase the efficiency of parallel computations on multi-core workstations:
- All calculations are carried out in parallel, including linear system solution, well equations, matrix operations, etc.
- Within each CPU involved in the simulation, all data exchange is handled directly by system threads (boost of ~30-40%)
- NUMA (Non-Uniform Memory Access) is supported for multi-CPU workstations (boost of ~50%)
- Hyperthreading is supported within each CPU (boost of ~15%)
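The NUMA support mentioned above matters because, on a multi-CPU workstation, a core accesses memory attached to its own CPU much faster than memory attached to another CPU. A common way to exploit this (a minimal sketch under our own assumptions, not tNavigator's actual code) is the "first touch" idiom: on Linux, a memory page is physically placed on the NUMA node of the thread that first writes to it, so each worker thread initializes the slice of data it will later process. The function name `numa_friendly_sum` and the two-phase structure are illustrative only.

```cpp
#include <cstddef>
#include <memory>
#include <numeric>
#include <thread>
#include <vector>

// Hypothetical sketch of the "first touch" NUMA idiom.  Each worker
// initializes its own slice, so the OS places those pages on that
// worker's local NUMA node.  On a single-socket machine the code still
// runs correctly; the placement effect simply has nothing to act on.
double numa_friendly_sum(std::size_t n, unsigned nthreads) {
    // Uninitialized allocation: no page is touched yet.
    std::unique_ptr<double[]> data(new double[n]);

    auto slice = [&](unsigned t, std::size_t& lo, std::size_t& hi) {
        lo = n * t / nthreads;
        hi = n * (t + 1) / nthreads;
    };

    // Phase 1: each thread first-touches (initializes) its own slice.
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < nthreads; ++t)
        pool.emplace_back([&, t] {
            std::size_t lo, hi;
            slice(t, lo, hi);
            for (std::size_t i = lo; i < hi; ++i) data[i] = 1.0;
        });
    for (auto& th : pool) th.join();
    pool.clear();

    // Phase 2: the same thread layout processes the now-local slices.
    std::vector<double> partial(nthreads, 0.0);
    for (unsigned t = 0; t < nthreads; ++t)
        pool.emplace_back([&, t] {
            std::size_t lo, hi;
            slice(t, lo, hi);
            for (std::size_t i = lo; i < hi; ++i) partial[t] += data[i];
        });
    for (auto& th : pool) th.join();

    return std::accumulate(partial.begin(), partial.end(), 0.0);
}
```

The key design point is that the same thread-to-slice mapping is used in both phases, so the data a thread computes on lives on its local node.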
The basic parallel algorithm in tNavigator is designed for multicore PCs. It is based on direct utilization of system threads, which appears to be optimal for distributing tasks between the cores of a single CPU. This approach achieves an almost linear speedup on a modern PC.
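To illustrate the kind of kernel such thread-level parallelism targets (a hedged sketch, not the simulator's actual implementation), consider a sparse matrix-vector product in CSR format, the dominant operation inside an iterative linear solver. Rows are split into contiguous blocks, one per system thread; the `Csr` struct and `spmv_parallel` name are our own illustrative choices.

```cpp
#include <cstddef>
#include <thread>
#include <vector>

// Illustrative CSR (compressed sparse row) matrix storage.
struct Csr {
    std::vector<std::size_t> row_ptr;  // size nrows+1; row i spans [row_ptr[i], row_ptr[i+1])
    std::vector<std::size_t> col;      // column index of each nonzero
    std::vector<double> val;           // value of each nonzero
};

// Row-wise thread parallelism for y = A*x.  No synchronization is needed
// inside the loop: each thread writes a disjoint range of y and only
// reads from A and x.
void spmv_parallel(const Csr& a, const std::vector<double>& x,
                   std::vector<double>& y, unsigned nthreads) {
    const std::size_t nrows = a.row_ptr.size() - 1;
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < nthreads; ++t)
        pool.emplace_back([&, t] {
            const std::size_t lo = nrows * t / nthreads;
            const std::size_t hi = nrows * (t + 1) / nthreads;
            for (std::size_t i = lo; i < hi; ++i) {
                double s = 0.0;
                for (std::size_t k = a.row_ptr[i]; k < a.row_ptr[i + 1]; ++k)
                    s += a.val[k] * x[a.col[k]];
                y[i] = s;
            }
        });
    for (auto& th : pool) th.join();
}
```

Because the work per row is independent, this pattern scales almost linearly with the core count as long as the matrix and vectors fit in fast memory, which is exactly the regime described above.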
It should be emphasized that PC computational efficiency is expected to keep growing in the near future due to the progressive growth in the number of cores. Thus, each reservoir engineer can effectively get a supercomputer on his desk, and sensible hardware utilization yields a substantial performance boost. However, the acceleration factor cannot be maintained as the model dimension increases: memory access speed starts playing the major role. Thus, even on more powerful computers, the simulation time cannot be reduced any further because of memory bandwidth limitations.
This restriction can be removed with distributed-memory CPU clusters, which require MPI-based algorithms for data exchange between the nodes. This approach is implemented in most reservoir simulators. tNavigator contains a novel hybrid algorithm for parallel computations: it uses MPI for task distribution between the cluster nodes, and system threads between the cores within each node. This approach removes the restriction by utilizing all resources as efficiently as possible, and gives a tenfold greater acceleration factor compared to the market leaders on multicore CPU clusters.
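The two-level decomposition behind such a hybrid scheme can be sketched as follows. This is our own minimal illustration, not the actual tNavigator algorithm: in real hybrid code the outer level consists of MPI ranks, one per cluster node, exchanging boundary data through MPI calls; here the ranks are modeled as a plain loop so the sketch runs on a single machine. The inner level splits each node's subdomain across system threads, exactly as on a standalone workstation. The function name `axpy_hybrid` is illustrative.

```cpp
#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical sketch of a hybrid two-level decomposition for the vector
// update y += alpha * x.  Outer level: per-node subdomains (MPI ranks in
// a real cluster code, modeled here as a loop).  Inner level: system
// threads splitting each subdomain across the node's cores.
void axpy_hybrid(double alpha, const std::vector<double>& x,
                 std::vector<double>& y,
                 unsigned nodes, unsigned threads_per_node) {
    const std::size_t n = x.size();
    for (unsigned rank = 0; rank < nodes; ++rank) {   // "MPI" level (modeled)
        const std::size_t nlo = n * rank / nodes;
        const std::size_t nhi = n * (rank + 1) / nodes;
        std::vector<std::thread> pool;                // thread level
        for (unsigned t = 0; t < threads_per_node; ++t)
            pool.emplace_back([&, nlo, nhi, t] {
                const std::size_t len = nhi - nlo;
                const std::size_t lo = nlo + len * t / threads_per_node;
                const std::size_t hi = nlo + len * (t + 1) / threads_per_node;
                for (std::size_t i = lo; i < hi; ++i)
                    y[i] += alpha * x[i];
            });
        for (auto& th : pool) th.join();
    }
}
```

The point of the hybrid layout is that message passing is needed only between subdomains on different nodes, while within a node the threads share memory directly, avoiding MPI overhead where it is not required.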
The RFD office in Moscow uses a compact 20-node cluster with two six-core Intel Xeon 5650 CPUs per node for reservoir simulations. This software-hardware configuration exhibits a 94x acceleration factor on a model of the Samotlor Field (West Siberia) with about 4.7 million active grid blocks and nearly 13,000 wells.
We are actively working on more efficient parallel computations and believe that even higher performance is achievable. According to recent tests, acceleration factors of about 200 can be expected on clusters with the new generation of Intel CPUs, the Xeon E5-2600 series.