How to Reconcile Artificial Intelligence and Journalistic Work? Fondation Hirondelle’s Directive

Photo: Gwenn Dubourthoumieu / Fondation Hirondelle

Between hopes of greater efficiency and concerns about bias, the integration of Artificial Intelligence (AI) into journalistic work raises many questions. In this context, Fondation Hirondelle has adopted a directive that clearly defines how AI may be used within its newsrooms. Jacqueline Dalton, Editorial Director at Fondation Hirondelle, discusses the questions AI raises for journalism and the organization's position on the matter.

Why did the question of AI arise within Fondation Hirondelle?

Discussions began a year and a half ago, with the boom in conversational agents such as ChatGPT and in generative AI tools for images and video. Initially, we wanted to understand the opportunities and risks these tools could pose for journalistic work, particularly in fragile contexts. Because AI is a major vector for the spread of misinformation, we wanted to ensure that, if our newsrooms used AI tools, we would communicate clearly and precisely about that use, in order to maintain the relationship of trust our media have with their audiences.

What were the main concerns regarding its use?

It can be tempting to embrace these tools wholesale to gain efficiency, reduce costs, and spare ourselves mental effort! However, there are significant risks that run counter to our commitment to producing independent, reliable, community-rooted information. To give a concrete example, we sometimes receive press releases from the public, which we summarize and read on air. Because this work is time-consuming, the idea of using AI-generated voices to read these bulletins came up. On reflection, such a use would undermine the authenticity of our media, whose strength lies in their closeness to their audiences and the trust they have built: replacing a human voice with an artificial one could break that bond.

Moreover, AI tools carry many inherent biases. First, the goal of a conversational agent is to produce a plausible answer, not necessarily a true one. Second, the tool reflects the content it was trained on, much of it scraped from the web, which can carry significant cultural biases, especially concerning the countries where we operate. In this context, AI is meant to support our teams, not replace them. Journalists remain fully responsible for editorial decisions and for validating content.

How are teams being trained and made aware of AI use?

We are currently at the beginning of this process. So far, we have identified tools that could be useful and have established a usage guide—a directive—for our teams. This document has been well received, as it provides needed guidance amid the uncertainty over what is acceptable or not. However, this is not enough; we must also implement training sessions with concrete examples that journalists can relate to. Our goal is to train all journalists in our newsrooms by the end of the year. The directive will also be updated based on team feedback.

What are the key points of Fondation Hirondelle’s AI directive?

Four principles are fundamental to our directive:

  • A human-centered approach: AI supports our teams but does not replace them. Journalists are fully responsible for editorial decisions and content validation.
  • Transparency with the public: When content is primarily generated by an AI tool—whether text, images, audio, or video—this is clearly disclosed to the public.
  • Respect for quality and integrity: All content must continue to meet the rigorous standards defined in our Code of Ethics and Professional Conduct. The use of AI must never compromise factual accuracy or editorial quality.
  • Respect for ethics and responsibility: All AI-generated content must be verified using independent sources. Sensitive, confidential, or personally identifiable information is never shared with AI tools.

At Fondation Hirondelle, we want to reaffirm that trustworthy journalism is grounded in human fact-checking, field presence, and ethical responsibility. We therefore reject any careless use of AI that could undermine public trust.