In recent years, interest in Generative Adversarial Networks (GANs) has grown considerably. Thanks to their outstanding performance in image translation and generation, they play an increasingly important role in computer vision applications. Most GAN-based approaches propose task-specific auxiliary modules or loss functions tailored to the challenges of a single application, but these often fail to generalize when applied to other image generation tasks. Moreover, the basic ResNet- and U-Net-based GAN generators reach their limits in many image restoration and enhancement use cases. Therefore, in this paper, we propose a generic GAN referred to as the Multi-Kernel Filter-based Conditional Generative Adversarial Network (MFGAN). We develop a new GAN generator with multiple CNN streams that extract features more relevant and discriminative for the task at hand. The proposed MFNet generator consists of two CNN modules, feature extraction and feature compression, which together connect the GAN encoder and decoder. It exploits the strengths of convolutional layers at different scale levels, using multi-kernel filtering to capture high- to low-frequency features that reflect both complex image degradations and structural image details. Extensive experiments on five challenging applications spanning image enhancement, image restoration, and infrared image translation demonstrate the superiority and effectiveness of the proposed MFGAN in removing image degradation and generating visually appealing synthetic images. Our MFGAN quantitatively outperforms both state-of-the-art GANs and other CNN-based architectures on all tested benchmarks.