AI was already being used to produce and consult information, but in 2022 it made a significant leap forward with the advent of freely accessible generative AI. As this technology makes it easy to produce and widely disseminate plausible-looking content (whether real or fabricated), journalists would like to see its individual and collective use regulated.
OpenAI made ChatGPT available online for free in November 2022, and in the three years since, the use of generative AI has become widespread. It can be used to produce all types of content from information found online, combining text, image, audio and video formats. It is an incredible, low-cost production tool, but what is its relevance for an audience seeking information, and for journalists whose role is to deliver an accurate reflection of reality?
Whatever the model (America's OpenAI, China's DeepSeek, Switzerland's Apertus, etc.), the responses provided by generative AI depend mainly on the question asked, the data available, and the algorithm that guides its selection from that data. In an online environment that favours dominant cultures, one is as likely to receive accurate information as a plausible but false picture of reality. As early as 2023, multiple disinformation campaigns were using generative AI to deceive: to discredit individuals and organisations and to influence election results.
Journalists quickly realised the extent of this phenomenon, its potential and the threat it posed to the credibility of their work. NGOs, notably Reporters Without Borders, took the lead in creating charters and practical frameworks for AI use that ensures reliable and transparent information. This was an important first step, but can AI be used to go further? While it has the potential to generate an endless stream of deepfakes, might it also be capable of fostering the production and wide dissemination of quality information accessible to the largest possible audience? What technical conditions would be necessary for this to happen? What private and public investment would be required, and at what energy cost?
These ambitious questions broadly reflect those addressed by regulation that is still in its infancy. Should they remain unanswered, the outlook is grim: AI will be fed on various forms of disinformation and will deepen the spiral of information chaos, eventually creating social rifts that will have an even greater impact in fragile societies. To avoid this outcome, we decided to bring together, in this issue of Mediation, the reflections of Fondation Hirondelle and of experts on the use of AI for public interest information, in the global South as well as the North.
This piece is taken from the 16th issue of Mediation, titled ‘Information in the age of AI’.
