
Multiple-GPU accelerated high-order gas-kinetic scheme for direct numerical simulation of compressible turbulence.

Authors :
Wang, Yuhang
Cao, Guiyu
Pan, Liang
Source :
Journal of Computational Physics, Mar 2023, Vol. 476.
Publication Year :
2023

Abstract

The high-order gas-kinetic scheme (HGKS) has become a workable tool for the direct numerical simulation (DNS) of turbulence. In this paper, to accelerate the computation, HGKS is implemented on graphics processing units (GPUs) using the compute unified device architecture (CUDA). Because the available memory limits the computational scale on a single GPU, a multiple-GPU HGKS is developed for large-scale DNS of turbulence using the message passing interface (MPI) together with CUDA. Benchmark cases for compressible turbulence, including the Taylor-Green vortex and turbulent channel flows, are presented to assess the numerical performance of HGKS on Nvidia TITAN RTX and Tesla V100 GPUs. For single-GPU computation, compared with the parallel central processing unit (CPU) code running on an Intel Core i7-9700 with open multi-processing (OpenMP) directives, a 7x speedup is achieved by the TITAN RTX and a 16x speedup by the Tesla V100. For multiple-GPU computation, the multiple-GPU accelerated HGKS code scales properly with the number of GPUs. The computational time of the parallel CPU code running on 1024 Intel Xeon E5-2692 cores with MPI is approximately 3 times that of the GPU code using 8 Tesla V100 GPUs with MPI and CUDA. The numerical results confirm the excellent performance of multiple-GPU accelerated HGKS for large-scale DNS of turbulence. Besides reducing memory access pressure, single-precision floating-point arithmetic is also exploited to accelerate HGKS on GPUs. As expected, compared with FP64 computation, FP32 precision improves efficiency and reduces memory cost, but differences in accuracy appear in the statistical turbulent quantities. For turbulent channel flows, the difference in long-time statistical turbulent quantities between the FP32 and FP64 solutions is acceptable, while an obvious discrepancy in instantaneous turbulent quantities can be observed, which shows that FP32 precision is not safe for DNS of compressible turbulence. The choice of precision should depend on the required accuracy and the available computational resources.

• To conduct DNS of turbulence, HGKS is developed on a single GPU with CUDA and on multiple GPUs with MPI+CUDA.
• For multiple-GPU computation, the multiple-GPU accelerated HGKS code scales properly with the number of GPUs.
• The efficiency of a single Tesla V100 GPU is comparable with that of the MPI code using 300 CPU cores.
• The efficiency of the GPU code with 8 Tesla V100 GPUs approximately equals that of the MPI code with 3000 cores.
• Efficiency is improved with FP32 precision, but differences in accuracy appear in long-time computations. [ABSTRACT FROM AUTHOR]
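The abstract's observation that FP32 precision drifts in long-time statistics while remaining acceptable over short horizons can be illustrated with a minimal sketch (not the authors' code; all names are illustrative). The snippet emulates FP32 arithmetic in Python by rounding every intermediate sum to single precision and compares the accumulated error against native FP64:

```python
import struct

def to_fp32(x: float) -> float:
    """Round an FP64 value to the nearest FP32 value (round-trip through 4 bytes)."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Emulate a long-time running statistic: repeatedly accumulate a small increment,
# as a time-averaged turbulent quantity would over many time steps.
n = 100_000
inc = 0.1
inc32 = to_fp32(inc)

acc64 = 0.0
acc32 = 0.0
for _ in range(n):
    acc64 += inc                    # native FP64 accumulation
    acc32 = to_fp32(acc32 + inc32)  # correctly rounded FP32 accumulation

exact = n * inc
err64 = abs(acc64 - exact)
err32 = abs(acc32 - exact)
print(f"FP64 error: {err64:.2e}, FP32 error: {err32:.2e}")
```

The FP32 accumulator loses roughly seven significant digits' worth of resolution per step, so its error grows orders of magnitude faster than the FP64 one; this is the mechanism behind the precision trade-off the abstract reports, independent of the HGKS discretization itself.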

Details

Language :
English
ISSN :
00219991
Volume :
476
Database :
Academic Search Index
Journal :
Journal of Computational Physics
Publication Type :
Academic Journal
Accession number :
161488510
Full Text :
https://doi.org/10.1016/j.jcp.2022.111899