
European Lighthouse on Secure and Safe AI

About ELSA

The European Lighthouse on Secure and Safe AI (ELSA) aspires to define the research agendas and lead the research effort in three important areas of Artificial Intelligence: technical robustness and safety, privacy, and human agency and oversight. These topics address core European values, and it is of strategic importance that Europe takes the lead in this research effort. Progress in these areas is key to enabling AI researchers and practitioners to design systems that can detect, prevent, mitigate and recover from harm and threats.

To advance in these research areas, important research questions must be addressed: how to define robustness guarantees in connection with certification of AI systems, how to scale up private and robust collaborative learning to real-life scenarios, and how to efficiently introduce human-in-the-loop decision making in AI systems. These research questions form the basis for the three Grand Challenges that ELSA puts forward.

In order to ensure real-life impact, the ELSA Grand Challenges, which address basic research, will be coupled with six Use Cases that define real-life scenarios where these research results are urgently needed and have the potential to create widespread commercial and social impact. The ELSA Use Cases focus on Health, Autonomous Driving, Robotics, Multimedia, Cybersecurity and Document Intelligence.



Planned Activities in the Multimedia Use Case

Machine-generated images are becoming increasingly common in the digital world, thanks to the spread of Deep Learning models that can generate visual data, such as Generative Adversarial Networks and Diffusion Models. While image generation tools can be employed for lawful goals (e.g., to assist content creators, generate simulated datasets, or enable multi-modal interactive applications), there is a growing concern that they might also be used for illegal and malicious purposes, such as the forgery of natural images and the generation of images in support of fake news, misogyny or revenge porn. While images generated in the past few years contained artefacts that made them easy to recognize, today's results are far harder to distinguish from real photographs from a purely perceptual point of view. In this context, assessing the authenticity of images becomes a fundamental goal for security and for guaranteeing a degree of trustworthiness of AI algorithms. There is a growing need, therefore, for automated methods that can assess the authenticity of images (and, in general, multimodal content) and keep pace with generative models, which become more realistic over time.

The ELSA Use Case on Multimedia focuses on the development of benchmarks and tools for fake data understanding and detection, with the final goal of protecting against visual disinformation and the misuse of generated images, and of monitoring the progress of existing and proposed detection solutions. It will investigate novel ways of understanding and detecting fake data through new machine learning approaches capable of combining syntactic and perceptual analysis. The Use Case also promotes a competition on deepfake detection, connected to the ELSA Grand Challenge on “Human in the loop decision making”. The competition will monitor and evaluate the development of deepfake detection algorithms in terms of efficacy, explainability and human oversight, by enabling domain experts to validate and improve results in a human-in-the-loop fashion. The Use Case will be connected to existing initiatives and will include the creation of new datasets for the aforementioned topics.

The collection and generation of data is a crucial step in the development of the benchmark. We will leverage existing datasets for deepfake detection and generate new data as part of the Use Case. A first result in this direction is the COCOFake dataset, generated by UNIMORE using the CINECA supercomputing facilities. The dataset consists of more than 1.2M images generated with Stable Diffusion v1.4 and v2.0, using textual prompts taken from the COCO image captioning dataset. Since each COCO image is paired with five captions, the dataset contains clusters of five generated images that share the same semantics but originate from five different textual prompts. Compared with existing datasets for deepfake detection, it features greater diversity and uniform coverage of semantic classes, and it can easily be expanded to a larger scale.
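For a sense of how such data is produced, the generation recipe described above is easy to reproduce on a handful of captions. The following is a minimal sketch, assuming the Hugging Face diffusers library and a local copy of the COCO captions annotations; the file path and the caption-selection logic are illustrative, not the official COCOFake pipeline.

```python
# Minimal sketch: generate fake images from COCO captions with Stable Diffusion.
# Assumes the `diffusers` library and a local COCO captions file; paths are
# illustrative, not the official COCOFake generation pipeline.
import json
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# COCO pairs each image with five human-written captions; generating one image
# per caption yields a cluster of five fakes sharing the same semantics.
# (A faithful reproduction would group the annotations by image_id first.)
with open("annotations/captions_val2014.json") as f:
    captions = [a["caption"] for a in json.load(f)["annotations"][:5]]

for i, prompt in enumerate(captions):
    image = pipe(prompt).images[0]
    image.save(f"fake_{i:02d}.png")
```

Scaling the same loop over the full set of COCO captions, across the two Stable Diffusion versions, is what yields a collection at the 1.2M-image scale mentioned above.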

ELSA Multimedia Benchmark - Track 1 opens

Join our thrilling deepfake detection competition and put your skills to the test. As the rise of deepfake technology poses unprecedented challenges, we invite individuals and teams from all backgrounds to showcase their expertise in identifying and debunking manipulated media. The first track of the competition is now open! It consists of detecting fully generated images: a binary classification task in which machine learning and deep learning approaches must distinguish fake images from real ones. The competition will run in periodic evaluation rounds, and new versions of the dataset will be released progressively, improving the quality, quantity and variety of generated images so that the benchmark remains as representative as possible of real-world image manipulation scenarios in the media.
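To make the task concrete, the sketch below shows one plausible baseline for the binary real-vs-fake classification described above: fine-tuning an ImageNet-pretrained ResNet-50 with PyTorch. The train/ folder layout with fake/ and real/ subdirectories is our assumption; this is a baseline illustration, not the official competition pipeline.

```python
# Baseline sketch for Track 1 (real vs. generated images) with PyTorch.
# Assumes a folder layout train/{fake,real}/*.jpg; not the official pipeline.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("train", transform=tfm)  # classes: fake, real
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)  # two logits: fake vs. real
model = model.to("cuda")

opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:  # one epoch shown for brevity
    opt.zero_grad()
    loss = loss_fn(model(images.to("cuda")), labels.to("cuda"))
    loss.backward()
    opt.step()
```

A real submission would of course add held-out validation, data augmentation, and robustness to common post-processing such as resizing and compression.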

More details about the Multimedia track are available here.

 

ELSA D3 Benchmark - New training dataset released


Current deepfake detection datasets lack diversity in terms of image generators and are limited in size. To address these limitations, we have developed and released a new dataset, the Diffusion-generated Deepfake Detection (D3) dataset. It comprises almost 2.3 million records and 11.5 million images. Each record includes a prompt, a genuine image, and four images produced by different generators. Prompts and authentic images are sourced from LAION-400M, while the fake images are generated using different text-to-image generators. The dataset is intended to support training deepfake detection methods from scratch.
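Since D3 counts images in the millions, streaming access is the natural way to explore the records. Below is a minimal sketch using the Hugging Face datasets library; the dataset id elsaEU/ELSA_D3 and the record schema are our assumptions and may differ from the released version.

```python
# Minimal sketch: stream D3-style records without downloading ~11.5M images.
# The dataset id "elsaEU/ELSA_D3" and the record schema are assumptions here.
from itertools import islice
from datasets import load_dataset

ds = load_dataset("elsaEU/ELSA_D3", split="train", streaming=True)

for record in islice(ds, 2):
    # Each record is expected to carry a LAION-400M prompt, one genuine image,
    # and four generated images; inspect the keys to see the actual schema.
    print(sorted(record.keys()))
```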

More details about the D3 training dataset are available here.

Stay tuned for the release of a new test set on the ELSA platform.

Project Info


Staff:

Duration:

01/09/2022 - 01/09/2025

Project Web Site

https://elsa-ai.eu/

Project Number

HORIZON-CL4-2021-HUMAN-01-03

Funded by:

European Union (EU)

Project type:

Horizon Europe