
Leveraging GPUs for matrix-free optimization with PyLops

Authors :
M. Ravasi
Source :
Fifth EAGE Workshop on High Performance Computing for Upstream.
Publication Year :
2021
Publisher :
European Association of Geoscientists & Engineers, 2021.

Abstract

The use of Graphics Processing Units (GPUs) for scientific computing has become mainstream in the last decade. Applications ranging from deep learning to seismic modelling have benefitted from the increase in computational efficiency compared to their equivalent CPU-based implementations. Since many inverse problems in geophysics rely on similar core computations (e.g. dense linear algebra operations, convolutions, FFTs), it is reasonable to expect similar performance gains if GPUs are also leveraged in this context. In this paper we discuss how we have been able to take PyLops, a Python library for matrix-free linear algebra and optimization originally developed for single-node CPUs, and create a fully compatible GPU backend with the help of CuPy and cuSignal. A benchmark suite of our core operators shows that an average 65x speed-up can be achieved when running computations on a V100 GPU. Moreover, by careful modification of the inner workings of the library, end users can obtain such a performance gain at virtually no cost: minimal code changes are required when switching between the CPU and GPU backends, mostly consisting of moving the data vector to the GPU device prior to solving an inverse problem with one of PyLops’ solvers.
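The abstract states that switching between the CPU and GPU backends mostly amounts to moving the data to the device before calling a solver. The sketch below illustrates what that workflow could look like, assuming a PyLops release with CuPy support installed; the problem size, the matrix G, and the choice of solver are illustrative and not taken from the paper's benchmark, and the solver's module path may differ between PyLops versions.

```python
# Minimal sketch: same operator and solver, CPU vs. GPU, assuming PyLops + CuPy.
import numpy as np
import cupy as cp
import pylops

# Illustrative problem: a dense matrix acting as a matrix-free linear operator
n = 1000
G = np.random.normal(0, 1, (n, n)).astype(np.float32)
x = np.ones(n, dtype=np.float32)

# CPU backend: operator built from a NumPy array, applied to a NumPy vector
Gop_cpu = pylops.MatrixMult(G, dtype="float32")
y_cpu = Gop_cpu * x

# GPU backend: the main change is moving the underlying arrays to the device
# with CuPy; the operator construction and application look the same
G_gpu = cp.asarray(G)
x_gpu = cp.asarray(x)
Gop_gpu = pylops.MatrixMult(G_gpu, dtype="float32")
y_gpu = Gop_gpu * x_gpu

# Inversion with an iterative PyLops solver (CGLS shown here); exact entry
# point and return values may vary across PyLops versions
xinv_gpu = pylops.optimization.solver.cgls(
    Gop_gpu, y_gpu, x0=cp.zeros(n, dtype=np.float32), niter=100
)[0]
```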

Details

Database :
OpenAIRE
Journal :
Fifth EAGE Workshop on High Performance Computing for Upstream
Accession number :
edsair.doi...........5b099055bfd201e1b57533d4ae5bd5c2
Full Text :
https://doi.org/10.3997/2214-4609.2021612003