
Unveiling the Impact of Image Transformations on Deepfake Detection: An Experimental Analysis

Abstract: With the recent explosion of interest in visual Generative AI, the field of deepfake detection has attracted considerable attention. In fact, deepfake detection might be the only measure to counter the potential proliferation of generated media in support of fake news and its consequences. While many of the available works limit detection to a direct classification of fake versus real images, this does not translate well to a real-world scenario. Indeed, malevolent users can easily apply post-processing techniques to generated content, changing the underlying distribution of fake data. In this work, we provide an in-depth analysis of the robustness of a deepfake detection pipeline, considering different image augmentations, transformations, and other pre-processing steps. These transformations are only applied in the evaluation phase, thus simulating a practical situation in which the detector is not trained on all the possible augmentations that can be used by the attacker. In particular, we analyze the performance of a k-NN and a linear probe detector on the COCOFake dataset, using image features extracted from pre-trained models such as CLIP and DINO. Our results demonstrate that while the CLIP visual backbone outperforms DINO in deepfake detection with no augmentation, its performance varies significantly in the presence of transformations, whereas DINO proves more robust.
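To make the evaluation protocol described above more concrete, the sketch below illustrates one possible way to reproduce its structure: detectors are fit on features of clean images and evaluated on features of transformed images. The choice of backbone (an open_clip ViT-B/32), the specific transformations (JPEG re-compression and Gaussian blur), and the helper names are illustrative assumptions, not the authors' exact configuration; loading COCOFake itself is omitted.

```python
# Sketch of the eval-only-transformation protocol (assumed details, not the paper's exact setup).
import io

import torch
from PIL import Image, ImageFilter
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

import open_clip  # assumed backbone; a DINO model from torch.hub could be used instead

model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-32", pretrained="openai")
model.eval()


def extract_features(images):
    """Encode a list of PIL images with the frozen visual backbone."""
    with torch.no_grad():
        batch = torch.stack([preprocess(img) for img in images])
        feats = model.encode_image(batch)
        feats = feats / feats.norm(dim=-1, keepdim=True)
    return feats.cpu().numpy()


def jpeg_compress(img, quality=30):
    """Simulate an attacker re-saving the image at low JPEG quality."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")


def gaussian_blur(img, radius=2):
    """Simple blur transformation applied only at evaluation time."""
    return img.filter(ImageFilter.GaussianBlur(radius))


def evaluate(train_imgs, train_labels, test_imgs, test_labels, transform=None):
    """Fit k-NN and linear-probe detectors on clean features, test on transformed images."""
    X_train = extract_features(train_imgs)  # detectors only ever see clean training data
    test_views = [transform(im) if transform else im for im in test_imgs]
    X_test = extract_features(test_views)   # transformation applied at evaluation time only

    knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, train_labels)
    probe = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    return {
        "knn_acc": knn.score(X_test, test_labels),
        "linear_probe_acc": probe.score(X_test, test_labels),
    }
```

Running `evaluate(...)` once with `transform=None` and once with, e.g., `transform=jpeg_compress` gives the clean-versus-transformed comparison that the robustness analysis is built on.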


Citation:

Cocchi, Federico; Baraldi, Lorenzo; Poppi, Samuele; Cornia, Marcella; Baraldi, Lorenzo; Cucchiara, Rita. "Unveiling the Impact of Image Transformations on Deepfake Detection: An Experimental Analysis." Image Analysis and Processing, ICIAP 2023, Part II, vol. 14234, Udine, Italy, pp. 345-356, September 11-15, 2023. DOI: 10.1007/978-3-031-43153-1_29

