1. Deepfake Media Forensics: State of the Art and Challenges Ahead
- Authors
Amerini, Irene, Barni, Mauro, Battiato, Sebastiano, Bestagini, Paolo, Boato, Giulia, Bonaventura, Tania Sari, Bruni, Vittoria, Caldelli, Roberto, De Natale, Francesco, De Nicola, Rocco, Guarnera, Luca, Mandelli, Sara, Marcialis, Gian Luca, Micheletto, Marco, Montibeller, Andrea, Orrù, Giulia, Ortis, Alessandro, Perazzo, Pericle, Puglisi, Giovanni, Salvi, Davide, Tubaro, Stefano, Tonti, Claudia Melis, Villari, Massimo, and Vitulano, Domenico
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
AI-generated synthetic media, also called Deepfakes, have significantly influenced numerous domains, from entertainment to cybersecurity. Generative Adversarial Networks (GANs) and Diffusion Models (DMs) are the main frameworks used to create Deepfakes, producing highly realistic yet fabricated content. While these technologies open up new creative possibilities, they also bring substantial ethical and security risks due to their potential misuse. The rise of such advanced media has given rise to a cognitive bias known as Impostor Bias, in which individuals doubt the authenticity of multimedia content because they are aware of AI's capabilities. As a result, Deepfake detection has become a vital area of research, focusing on identifying subtle inconsistencies and artifacts with machine learning techniques, especially Convolutional Neural Networks (CNNs). Research in forensic Deepfake technology encompasses five main areas: detection, attribution and recognition, passive authentication, detection in realistic scenarios, and active authentication. This paper reviews the primary algorithms that address these challenges, examining their advantages, limitations, and future prospects.
- Published
2024
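
The abstract above names CNN-based artifact detection as one of the core techniques the paper surveys. Below is a minimal, illustrative sketch of that idea in PyTorch, not the authors' method: a small convolutional classifier that maps a face crop to a single real/fake logit. The architecture, input resolution, and toy training step are assumptions chosen for brevity.

```python
# Minimal sketch of CNN-based Deepfake detection (illustrative only; the
# network shape and 128x128 input size are assumptions, not from the paper).
import torch
import torch.nn as nn

class SimpleDeepfakeCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Three conv blocks downsample a 3x128x128 face crop into a feature map.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Global pooling plus a linear layer yields one logit: >0 means "fake".
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = SimpleDeepfakeCNN()
    criterion = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Placeholder batch standing in for preprocessed face crops and labels;
    # a real pipeline would load a labeled real/Deepfake dataset instead.
    images = torch.randn(8, 3, 128, 128)
    labels = torch.randint(0, 2, (8, 1)).float()

    optimizer.zero_grad()
    logits = model(images)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    print(f"toy training step loss: {loss.item():.4f}")
```

In practice, surveyed detectors typically start from pretrained backbones and train on labeled Deepfake datasets; this sketch only shows the basic classification setup the abstract alludes to.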