Média Animation and the CSEM (Conseil Supérieur de l’Education aux Médias) collaborated to produce a series of webinars on the question ‘Is AI redefining the challenges of disinformation?’ As part of this, Florian Dauphin gave a presentation on his research about the challenges of critical image education in the AI era. Following his presentation, a round table discussion between media education professionals was held, using a collaborative working method to propose educational approaches.
Presentation by Florian Dauphin
Florian Dauphin is a sociologist and senior lecturer in information and communication science at the University of Picardie Jules Verne. His research focuses on analysing the formation of public opinion in the digital age. In particular, he examines the social and political dynamics of online information, analysing the mechanisms of disinformation, from the circulation of rumours to fake news and conspiracy theories.
Myth vs Reality
The public discourse surrounding generative artificial intelligence is often shaped by preconceived notions. Among the most widespread is the fear of hyper-realistic deepfakes that would imply the end of visual evidence. While such productions do exist, their direct effects remain limited: they are still relatively rare, have a short lifespan, and are often quickly debunked.
Another common myth is the idea that AI is deceiving everyone on a massive scale. However, the most effective visual disinformation today does not always rely on sophisticated technology. Cheapfakes (crudely modified content) and, above all, the recontextualization of real images taken out of their original context are much more effective and widespread.
Finally, the hope persists for a perfect detector capable of definitively separating truth from falsehood. This vision rests on a technosolutionist approach that tends to reduce the issue to a purely technical challenge.
A deeper crisis of confidence
As researcher Florian Dauphin points out, focusing exclusively on generative AI risks masking deeper causes. The current mistrust of images stems not only from deepfakes, but from a broader erosion of trust: in the media, institutions, authority figures, and dominant narratives. Certain high-profile cases have reinforced this sense of upheaval, fuelling the idea that “anything can be fake”.
In this context, blaming AI sometimes amounts to shifting responsibility, avoiding questioning the social, economic, and political dimensions that structure the flow of information. Technical detection solutions can be useful, but they are only a partial response to a much more complex problem.
Reframing the issue: practical educational approaches
Rather than seeking a technological miracle, the challenge lies in critical media education. Learning to question the context in which images are produced and disseminated, and understanding why certain images circulate, who benefits from them and how they are interpreted, becomes central. Generative AI is then no longer perceived solely as a threat, but as revealing a fragile relationship with information and reality.
Drawing on his research, Florian Dauphin offers educational ideas based on concrete examples. You can download the tool sheet for his presentation directly from the EDMO Belux website.
Presentation of the tool
Seeing is no longer believing
This tool sheet offers an educational application of Florian Dauphin’s research on AI-generated images and mistrust of information. Through concrete ideas for classroom activities, it encourages students to go beyond the simple opposition between true and false and question the context in which images circulate today.
This tool is written in French. The examples and case studies are mainly based on situations relevant to French-speaking students and their cultural context, but they can be adapted to other languages and cultural backgrounds.
Download here.