
Debunking requires art

Art and Technology
June 26, 2023


What is debunking?

We are living in what has been called the “age of fake news”. Fake news itself is nothing new, but it spreads faster today than in the past: the media industry no longer puts up barriers to entry, and it is far easier to publish news and information online without having to prove its veracity.

Since anyone can access the means of content creation and dissemination, fake news is produced and spread not only by those who work for the official media but by anyone with an Internet connection. The fight against fake news has also recently become more complex because of multichannel and multimedia content: a true news story may be accompanied by a fake image, or a true video by a fake caption.

Debunking is a response to the growth of this phenomenon: disinformation is now seen as a problem we must defend ourselves against, and figures, tools and skills are emerging to “debunk”, i.e. to distinguish true news and content from false.

As the following paragraphs describe, the term debunking actually derives from a practice of ancient origin: de-bunk is an English coinage composed of the prefix de-, meaning “to remove”, and bunk, meaning “meaningless speech”, so the debunker is literally one who removes the nonsense. A debunker today is a person who disproves false, dubious or anti-scientific news: dismantling conspiracies, identifying hoaxes.

What does this have to do with Sineglossa?

Sineglossa’s interest in debunking comes from reading the novel The Q in Conspiracy. QAnon and Its Surroundings: How Conspiracy Fantasies Defend the System by Wu Ming 1, published in March 2021 by Edizioni Alegre. The volume is an “unidentified narrative object” – as Enrico Manera’s review for Doppiozero puts it, given the different types of writing it combines – that tells the difficult, disturbing and twisted story of QAnon, but not only that.

Starting from an investigative reportage on QAnon, the book proposes a semantic and linguistic redefinition of the terms used to describe conspiracy theories, composing a phenomenology of the mythical workings of conspiracy fantasies. Finally, the author highlights the characteristics that debunking practices need in order to be effective: the appeal of stories and a kernel of truth.

If, as Wu Ming 1 states, these two elements are necessary for an effective form of debunking, what would happen if an artist were to take on this challenge?

An excerpt from The Q in Conspiracy, Wu Ming 1, Edizioni Alegre, 2021

From this question stems the project Sineglossa presented for European Digital Deal, an initiative co-financed by the Creative Europe programme and led by Ars Electronica, of which Sineglossa is the only Italian partner. Fourteen European cultural organisations have come together to investigate how the rapid and sometimes naïve adoption of new artificial intelligence technologies can alter or undermine democratic processes. Through an international call for artists, European Digital Deal will select 12 projects that, after a phase of artistic residencies in the cultural centres involved, will become interactive installations exposing the risks and opportunities of artificial intelligence in the everyday life of European citizens. In September, Sineglossa will launch a call for artists on the themes of debunking, conspiracy and combating fake news, in collaboration with Cineca’s Visit Lab and with the patronage of the Emilia-Romagna Region.

5 experts for artist mentoring

The European Digital Deal project is developed around 5 main objectives, through a series of actions ranging from the organisation of public events to the production of dissemination content, from the production of artistic works to capacity building:

  1. Build and validate: each partner identifies a group of technology, art and digital experts who form a “Local Expert Group (LEG)”. The group defines the challenge to be set in the call for artists, makes resources, tools and knowledge available for the artistic residency programme that the selected artist will join, and supports the artist through online and offline consultancy and mentoring, accompanying them in prototyping the pilot project.
  2. Innovate: through an Open Call launched in each of the member states (12 in total across the network of partners), artists will be identified to work on 12 different challenges related to the topics of technology, justice and the digital, individually developing 12 different artistic projects with the support of some of the partners involved (FZC, ZK, CY and CPN) for research and development activities, and of planned meetings and exchanges between the artists for knowledge sharing.
  3. Learn: the project implements a series of non-formal educational activities for schools and students, through artistic and creative methodologies, and capacity-building opportunities for artists, on the research topics of the member organisations (AI transparency, art and democracy, art and debunking, plant/animal/human AI interaction).
  4. Showcase: the works and research and study content that emerged during the project will be the subject of events, festivals and exhibitions in each of the member states.
  5. Guide: the final step of the project will be to present the results to stakeholders outside the cultural sector, raising awareness among a non-expert audience of the capacity of the arts to attract and develop innovation. The results will be disseminated through the Digital Future Action Plan through the arts.

As the list above shows, training artists is one of the objectives of the European Digital Deal project. It is pursued both through free capacity-building courses open to various groups of artists and through the identification of 5 figures who will provide support and mentoring to the artist selected by the call for artists. The group of experts identified by Sineglossa consists of:

  • Ilaria Bonacossa
  • Wu Ming 2
  • Luca Baraldi
  • Sara Tonelli
  • Barbara Busi
Photo of the first meeting of the Local Experts Group promoted by Sineglossa for the European Digital Deal project, Bologna, Palazzo Vizzani Sanguinetti, June 2023

From the art of debunking to the art for debunking

At the group’s first meeting, held on 13 June in Bologna and hosted by the cultural association Alchemilla at the historic Palazzo Vizzani Sanguinetti, we discussed the indications, limits, stimuli and objectives to include in the call for artists to be issued in September 2023, which will select the artist who, during 2024, will develop the project curated by Sineglossa on art and debunking. Below is a summary of the most significant reflections that emerged during the meeting.


In the 1990s, the Luther Blissett collective launched a strange practice to counter disinformation, later renamed “horrorism”: an artistic terrorism based on inoculating false news into the media. The group’s first real debunking exercise with this strategy came during the Beasts of Satan case, and aimed to demonstrate how easily the press republished news without proper verification. Strictly rationalist debunking, which confronts a person fascinated by conspiracy fantasies with the inconsistencies of what they believe, is ineffective. To understand what might work better, the collective investigated what is commonly called conspiracy, aiming to deconstruct its characteristic elements. Thus in The Q in Conspiracy the difference between “conspiracy theories” and “conspiracy fantasies” emerges: the former have a specific social and political purpose, are real and imperfect, come to an end, and those responsible are sooner or later identified, by justice or by collective knowledge. The latter, on the other hand, concern an unlimited number of people, have no definite purpose, are perfect and internally consistent, and have no end, so they transcend history.
A person who believes in conspiracy fantasies is simply someone who cannot resist the fascination of stories, on whom the “narrative” explanation of a fact has more hold than the scientific one. The rational debunker in this scenario appears as a party pooper, spoiling the feeling of “being the repository of a particular truth” felt by those fascinated by conspiracy fantasies. Instead, the debunker should satisfy the desire for wonder that gives conspiracy fantasies their hold on their audience. Between debunker and audience, then, something like the practice of the magic duo Penn & Teller should take place: by questioning the power hierarchy that the rules of illusionism impose between performer and spectator (never reveal the trick, never reveal the rules of the game), while keeping the ability to perform and arouse wonder, the two artists activate a process of empowerment in the audience, giving them the tools to understand what is happening on stage. For more: Penn and Teller’s “Lift Off”; Penn & Teller Explain Ball & Cups on Jonathan Ross 2010.07.09 (Part 2)

AI today has a capacity for wonder similar to the magic trick, because it answers people’s questions much as people themselves would. It would be interesting, then, if the debunking of this magic – the explanation of the trick behind it – could be done with the same tools as artificial intelligence. Could the artist be the one to “show the suture”, explaining how artificial intelligence works without destroying the magic that AI brings?

Natalia Trejbalova is an artist from Bratislava who has been working on the flat-earth conspiracy. In her video work she starts from the fact that, despite Google Earth and all the tools now available for observing the Earth as a sphere, more and more people are becoming “flat earthers”. She imagines living on a flat Earth, where the rules of three-dimensional physics do not apply, riding on the common experience of everyone who, in daily life, experiences the Earth as flat.

Cecilie Waagner Falkenstrøm is a Danish artist, currently supported by a factory for the development of tech-art projects, who achieved fame with the project Tech for Democracy for the United Nations. Even before the birth of Alexa, her research was visionary and pioneering: she had invented “Frank”, a virtual subject she would talk with on stage during performances, and who at a certain point would enter the show by calling members of the audience on the phone. Today, Frank has become a Faustian hall where the audience enters and talks to the AI as if it were an oracle, after consenting to the processing of all their personal data.

Today, the information system relies on debunking agencies, entities that analyse the veracity of news by comparing it against existing databases. Artificial intelligence intervenes in this chain at different stages and with different types of mechanisms:

  1. Initially, a claim detection component identifies “claims” – factual statements that can be checked – as opposed to opinions and interpretations, which do not need to be debunked.
  2. Having identified the claim, the AI proceeds to evidence retrieval: it looks for material to support or refute the claim, relying on search engines smarter and more reliable than average (an example is the WeVerify toolbox, where the user can enter a link and see its metadata, accessing various tools and information to decide whether the news is true or false).
  3. Finally comes verdict prediction: using machine learning and more or less sophisticated algorithms, the machine decides, on the basis of all the previous information and a reliability ranking of the sources, whether the news is true or false (e.g. Media Bias/Fact Check is a service that ranks the likelihood that a given site publishes conspiracy content). The “justification generation” component is now getting a big push: once the system has made up its mind, the algorithm can be asked – for instance with a version of ChatGPT – to explain why it made that decision. With generative AI it is indeed easier to put fake news on the web, but there are also applications of generative AI that help in debunking and countering fake news.
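The three-stage pipeline above can be sketched in a few lines of toy Python. Everything here is illustrative – the heuristics, the `knowledge_base`, and the function names are invented stand-ins; real fact-checking systems use trained NLP models for each stage:

```python
# Toy sketch of a fact-checking pipeline: claim detection ->
# evidence retrieval -> verdict prediction. Illustrative only.

def detect_claim(sentence):
    """Claim detection: keep factual statements, skip opinions
    (a trivial keyword heuristic stands in for a trained classifier)."""
    opinion_markers = ("i think", "in my opinion", "probably")
    return not any(m in sentence.lower() for m in opinion_markers)

def retrieve_evidence(claim, knowledge_base):
    """Evidence retrieval: look the claim up in a (toy) database,
    where a real system would query search engines and archives."""
    return [fact for fact in knowledge_base if fact["topic"] in claim.lower()]

def predict_verdict(evidence):
    """Verdict prediction: weight each piece of evidence by the
    reliability ranking of its source, then decide."""
    if not evidence:
        return "unverified"
    score = sum(e["reliability"] * (1 if e["supports"] else -1) for e in evidence)
    return "true" if score > 0 else "false"

# Invented example data: one reliable source supporting claims about "earth".
knowledge_base = [
    {"topic": "earth", "supports": True, "reliability": 0.9},
]

def fact_check(sentence):
    if not detect_claim(sentence):
        return "opinion"
    return predict_verdict(retrieve_evidence(sentence, knowledge_base))
```

For example, `fact_check("I think the earth is beautiful")` is filtered out as an opinion before any retrieval happens, while a factual claim with no matching evidence comes back `"unverified"` rather than forcing a verdict.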

 

A major limitation of AI in this area is operating in the grey zone: some claims are not identifiable as true or false, belonging to a kind of “grey zone” in which the AI should not behave in a binary way but demonstrate that it can abstain – admit that it does not know whether a claim is true or false.
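This abstention behaviour can be sketched as a simple confidence threshold on a model's score: below the threshold, the system returns "unknown" instead of forcing a binary verdict. The function name and threshold value are illustrative, not taken from any real system:

```python
def verdict_with_abstention(p_true, threshold=0.75):
    """Return a verdict only when the model's confidence clears the
    threshold; otherwise abstain, handling the grey zone explicitly."""
    if p_true >= threshold:
        return "true"
    if p_true <= 1 - threshold:
        return "false"
    return "unknown"  # grey zone: the system admits it does not know
```

With this scheme, a claim scored at 0.5 probability of being true yields "unknown", whereas a hard binary classifier would still have to pick a side.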

Talking about the ethics of datasets first requires some methodological clarifications, linked to:

  • ambiguity and semantic emptying of terms: different academic schools chase newsworthiness, often using terms and words in imprecise, incorrect and inadequately researched ways, so reading articles and institutional resources on these topics often means comparing concepts that share a name but refer to different issues;
  • feeding the tension between humans and technology: it is often forgotten that technologies are tools created by human evolution. Today’s reflections pit humans against machines, but the epistemological potential of the relationship emerges only if one thinks in terms of integration and interpenetration rather than difference;
  • institutional slowness: there is a structural fragility that cannot keep up with the speed of technological progress.


Having acquired these methodological notes, the topic of data ethics could be opened up on three different fronts:

    1. The problem of datasets and data quality: the epistemological issue of the data that trains the algorithm must be addressed with normative will; the AI Act – the European legislation on artificial intelligence – is not enough. The process around the Data Governance Act, which introduces the issue of data altruism, is much more delicate and complex: data can be used, aggregated and valued either for the common good (who can give it away, and to whom?) or put up for sale through transaction mechanisms between private parties. The principle of data altruism is at odds with market physiology on one side and the GDPR on the other.
    2. The problem of data accountability: should the traceability and provenance of the data be guaranteed by the end user – the one who buys that data – or by the one who collects it at the initial stage of aggregating information on its customers?
    3. The problem of the truth of digitised data: profiling for e-marketing on social profiles does not intercept a person’s authentic need but the need that person wants others to perceive. Most people’s presence on social networks is based not on “authentic” data but on self-representation: if digital marketing profiling believes it is responding to the user’s need, it is actually responding to how that user wants their needs to be seen by others.
    4. The problem of institutional truth: alleged truth systems compete for dominance in managing the power of truth on a global level.
    5. The problem of the truth of scientific data: which data is more truthful than other data? Generative AI systems never speak of truth but of “credible and consistent content”. The hierarchy and institutionality of sources were called into question by a well-known investigation by Naomi Oreskes and Erik M. Conway, which became a book and a movement, Merchants of Doubt, revealing the commissioned publication of studies by private companies in authoritative scientific journals. This manipulation of information flows leads to the problem of the unreliability of institutional sources.

In this scenario, the automation of content production through generative AI further complicates matters, because AI is based on the past, digested according to linear logic. What is the relationship between the construction of memory and the automation of prediction based on the past? The imagination artificial intelligence can have about the future is limited by the linear mathematics with which it has digested the past. An interesting experiment is the European Association for Jewish’s chatbot: a cyborg rabbi from the 2600s who can be interrogated, from that future, about Judaism’s past. It would then be interesting to ask: how do we educate the “possible imagination”?

Furthermore, how does access to knowledge relate to transforming information-verification flows into code? And how much does the creation of steps in the verification process open the system to hacking?

The artist selected by the call for artists will have the opportunity to work in residence at the Tecnopolo of Bologna, with the tools of CINECA’s Visit Lab at their disposal. What does it mean for the artist to be immersed in an ecosystem such as the Tecnopolo? And what could be the mutual benefits of this encounter?

The Tecnopolo is a 120,000-square-metre, publicly financed regeneration area in Bologna. It was created to give back to the city a space that adds value, where technological innovation takes place, and to position Bologna and the Emilia-Romagna region as globally important players in the data sector. Many different bodies are currently being set up there: the European Weather Centre, whose data centre sits inside a protected military area because of the geopolitical importance of the data studied; the Leonardo supercomputer, one of the three most powerful in Europe, managed by Cineca; a new institute of the United Nations University (UNU) dedicated to Big Data and Artificial Intelligence for the Management of Human Habitat Change – IBAHC; ENEA – the National Agency for New Technologies, Energy and Sustainable Economic Development, which will have its headquarters at the Tecnopolo; and other research institutes, technological innovation enablers, and incubators for new enterprises.

In this panorama, there is a need to pursue a cultural discourse as well, through programmes, actions, and bodies that can translate the enormous supply of technology into meaning for citizens and the territory.

How can reflection on debunking fit into this institutional space?

