
Fondation Hirondelle has adopted a directive defining how AI can, and cannot, be used in its newsrooms. Jacqueline Dalton, our Head of Editorial, reflects on the organization’s stance and the principles guiding it.
Why did Fondation Hirondelle feel the need to reflect on AI?
Jacqueline Dalton: The discussion began when conversational agents like ChatGPT and generative image and video tools started proliferating. Our first step was to understand both the opportunities these technologies offer and the risks they pose to journalism, especially in fragile contexts. Because AI can amplify misinformation, we wanted to ensure that any use within our media would be transparent and responsible. Trust is the cornerstone of our relationship with audiences, so clarity about how and when AI is used is essential.
What were the main concerns?
AI tools promise efficiency and cost savings, but they also carry significant risks that clash with our commitment to independent, credible journalism rooted in local communities.
For instance, we considered using AI-generated voices to read the “community announcements” slot on our airwaves, which features messages written by individuals and small businesses. While this seemed practical and time-saving, we realized it would erode the authenticity that defines our media. Our audiences connect with real people—human voices they know and trust. An artificial voice would weaken that bond.
Beyond this, AI systems are shaped by biases embedded in their data and design. They aim to produce convincing answers, not necessarily accurate ones, and their sources often reflect cultural and informational imbalances, particularly in the countries where we work. For these reasons, AI can only assist our teams, never replace them. Journalists remain fully responsible for editorial choices and for verifying every piece of content.
This piece is taken from the 16th issue of Mediation, titled ‘Information in the age of AI’, which you’ll find attached at the top of this article.
