
fAIr Media

A media literacy training project to debunk mis/disinformation and anticipate the impact of generative AI on the media landscape

What you find here

The main goal of the fAIr Media project is to debunk disinformation, promote media literacy and strengthen citizens’ critical approach to media (including social media) through an active and participatory methodology involving artificial intelligence, professionals (journalists, designers), artists and young citizens.

The activities

The fAIr Media project encourages the construction of a transparent and safe media landscape by stimulating participation and debate among more than 700 people across three rounds of training events in four European countries. During the workshops, participants develop new skills for interacting with AI tools and gain a fuller understanding of how generative artificial intelligence is revolutionising the media industry and reshaping the way we consume and engage with media content.

Following the workshops, an illustrated glossary of debunking tactics will be produced, drawing on the insights from the previous activities. It will be an innovative manifesto, co-created by citizens and artists, with the aim of getting the concerns of non-experts across to policy makers in a visually appealing and understandable way.

The glossary will be disseminated through a third cycle of public events involving over 300 citizens, held in Italy and at the Ars Electronica Festival 2026 in Austria.

The workshop "Culture and Media for Debunking"

A total of 138 professionals from the fields of art, culture, journalism, and research took part in a workshop focused on the functioning of Large Language Models (LLMs) and the ethical issues surrounding the use of artificial intelligence.

The workshop, held in February 2025 at the DAMA Tecnopolo in Bologna, included a session featuring The Models, an art installation created by dmstfctn that explores the tendency of language models to display occasionally deceptive and antagonistic behaviors. The installation uses characters inspired by Commedia dell’Arte masks, created within a 3D engine. Its goal is to experiment with the “personality” traits of AI that usually go unnoticed, engaging the public in understanding these characteristics and moving beyond a merely passive critique of artificial intelligence.

The second session featured historian of illusionism Mariano Tomatis and journalist Federico Ferrazza, editor of a magazine dedicated to AI. They discussed the connection between wonder and truth from the Enlightenment to the present day, illustrating how wonder strategies were used in the 18th century and how illusionistic machines were employed to amaze audiences and challenge Enlightenment rationalism.

The session also addressed relevant contemporary dilemmas, such as how we perceive automatically generated images in a time when distinguishing between true and false is increasingly difficult. Is AI eroding our trust in reality? Or, as with illusions in the past, is the issue not with the technology itself but with how we perceive and decode it? With the journalist’s contribution, the session also explored debunking, a valuable tool for revealing the mechanisms behind illusions, and pre-bunking, which strengthens people’s critical thinking to help prevent them from falling into deceptive traps.

Participant engagement was made possible thanks to several project stakeholders: ART-ER, Cineca, and the Emilia-Romagna Region, which promoted the initiative through their own channels.

The workshop "Art for Debunking"

High school students from four Bologna-based institutes — Aldini Technical Institute (informatics), IIS Laura Bassi (linguistics), IIS Giordano Bruno (mechanics), and Belluzzi-Fioravanti Technical Institute (informatics) — took part in a workshop led by artists Francesco Tacchini and Oliver Smith of the duo dmstfctn. The workshop focused on exploring how Large Language Models (LLMs) function and the ethical issues surrounding the use of artificial intelligence, through the lens of art.

Held in February 2025 at the DAMA Tecnopolo in Bologna, the workshop included a session in which students interacted with The Models, an art installation by dmstfctn that highlights the tendency of LLMs to display at times deceptive and antagonistic behaviors. The installation features characters inspired by the masks of Commedia dell’Arte, created using a 3D engine. Its goal is to explore the “personality” traits of AI that often go unnoticed, engaging the audience in understanding these characteristics and moving beyond a purely passive critique of AI.

The second part of the workshop was a hands-on activity in which students explored in practice how LLMs work. Through targeted exercises, they learned how these models are trained, analyzed existing biases, and critically discussed the ethical and social implications of their use. In particular, students had the opportunity to use a Hugging Face tool to create stories based on prompts, along the lines of the sketch below.
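
The following is a minimal, illustrative sketch of this kind of exercise: generating a short story continuation from a prompt with the Hugging Face transformers library in Python. The model choice, prompt and sampling settings are assumptions made for the example, not details documented by the project.

    # Illustrative sketch only: prompt-based story generation with the Hugging Face
    # transformers library. The model and prompt are assumptions, not the exact tool
    # used in the workshop.
    from transformers import pipeline

    # Load a small, openly available text-generation model.
    generator = pipeline("text-generation", model="gpt2")

    prompt = "In a newsroom of the future, an AI assistant notices a viral headline that"

    # Generate a continuation; the sampling parameters control length and variety.
    result = generator(
        prompt,
        max_new_tokens=120,
        do_sample=True,
        temperature=0.9,
        num_return_sequences=1,
    )

    print(result[0]["generated_text"])

Comparing the continuations produced for slightly different prompts (for example, swapping names, places or professions) is one simple way to surface the kinds of bias the students analyzed during the exercise.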

Student participation was facilitated by the Marconi TSI Service of the Regional School Office for Emilia-Romagna, which is responsible for communication with schools in the region.

Credits

fAIr Media is a project led by Fundación Zaragoza Ciudad del Conocimiento in collaboration with Baltan Laboratories, Sineglossa and Ars Electronica, co-funded by the European Union within the CERV (Citizens, Equality, Rights and Values) programme.
