1. CAVLCU: an efficient GPU-based implementation of CAVLC
- Authors
- Nicolás Guil, Antonio Fuentes-Alventosa, Juan Gómez-Luna, José María González-Linares, and Rafael Medina-Carnicer
- Subjects
- CAVLC, GPU, CUDA, H.264, Parallel implementations, Data compression, Variable-length encoding, Memory bandwidth, Parallel computing, Encryption, Image compression, Context-adaptive variable-length coding
- Abstract
CAVLC (Context-Adaptive Variable Length Coding) is a high-performance entropy method for video and image compression. It is the most commonly used entropy method in the video standard H.264. In recent years, several hardware accelerators for CAVLC have been designed. In contrast, high-performance software implementations of CAVLC (e.g., GPU-based) are scarce. A high-performance GPU-based implementation of CAVLC is desirable in several scenarios. On the one hand, it can be exploited as the entropy component in GPU-based H.264 encoders, which are a very suitable solution when GPU built-in H.264 hardware encoders lack certain necessary functionality, such as data encryption and information hiding. On the other hand, a GPU-based implementation of CAVLC can be reused in a wide variety of GPU-based compression systems for encoding images and videos in formats other than H.264, such as medical images. This is not possible with hardware implementations of CAVLC, as they are non-separable components of hardware H.264 encoders. In this paper, we present CAVLCU, an efficient implementation of CAVLC on GPU, which is based on four key ideas. First, we use only one kernel to avoid the long-latency global memory accesses required to transmit intermediate results among different kernels, and the costly launches and terminations of additional kernels. Second, we apply an efficient synchronization mechanism for thread-blocks (in this paper, to prevent confusion, a block of pixels of a frame will be referred to simply as a block, and a GPU thread block as a thread-block) that process adjacent frame regions (in horizontal and vertical dimensions) to share results in global memory space. Third, we fully exploit the available global memory bandwidth by using vectorized loads to move the quantized transform coefficients directly to registers. Fourth, we use register tiling to implement the zigzag sorting, thus obtaining high instruction-level parallelism.
An exhaustive experimental evaluation showed that our approach is between 2.5x and 5.4x faster than the only state-of-the-art GPU-based implementation of CAVLC.
- Journal
- The Journal of Supercomputing, 78 (6), ISSN:0920-8542, ISSN:1573-0484
- Published
- 2021