Between innovation and integrity: Fondation Hirondelle’s approach to Artificial Intelligence in journalism

Gwenn Dubourthoumieu / Fondation Hirondelle

Between hopes for greater efficiency and concerns about inherent biases, the integration of Artificial Intelligence (AI) into journalism raises many questions. In response, Fondation Hirondelle has adopted a directive that clearly defines how AI may—and may not—be used within its newsrooms.
Jacqueline Dalton, Head of Editorial at Fondation Hirondelle, shares her reflections on AI’s role in journalism and the organization’s position on the issue.

Why did the question of AI arise within Fondation Hirondelle?

Discussions began about a year and a half ago, when conversational agents like ChatGPT and generative AI tools for images and videos started booming. Our initial goal was to understand both the opportunities and the risks these technologies pose to journalistic work—especially in fragile contexts.

Because AI can be a major driver of misinformation, we wanted to ensure that if our newsrooms used such tools, they would do so transparently and responsibly. Maintaining a trust-based relationship between our media and their audiences is fundamental, so clarity about how and when AI is used is essential.

What were the main concerns regarding its use?

It’s easy to be drawn to these tools for their promise of efficiency, cost reduction, and convenience. But they also carry serious risks that run counter to our commitment to independent, reliable, and community-rooted journalism.

For example, we regularly receive community notices and messages from the public or from small businesses, which we edit and adapt for radio broadcast. Because this process can be time-consuming, the idea of using AI-generated voices came up. On reflection, however, we realized this would undermine the authenticity that defines our media. Our connection with audiences is built on human voices—real people they can relate to. An artificial voice would weaken that bond of trust.

Furthermore, AI systems carry inherent biases. They are designed to produce plausible responses, not necessarily accurate ones, and their training data often reflects cultural and informational biases—particularly regarding the countries where we work. For these reasons, AI can only ever support our teams, never replace them. Journalists remain fully responsible for editorial decisions and for verifying every piece of content.

How are teams being trained and made aware of AI use?

We’re still at the early stages of this process. So far, we’ve identified potentially useful tools and created a directive to guide their use. The document has been well received, offering clarity amid uncertainty about what’s appropriate.

However, guidelines alone aren’t enough. We now need to offer practical training—real-world examples that resonate with journalists’ day-to-day work. Our goal is to train key newsroom staff by the end of the year. The directive will also evolve as we gather feedback from teams and adapt to technological developments.

What are the key points of Fondation Hirondelle’s AI directive?

Our directive is based on four fundamental principles:

  1. A Human-Centered Approach
    AI supports our teams but never replaces them. Journalists retain full responsibility for editorial choices and for validating content.
  2. Transparency with the Public
    Whenever content is primarily generated by an AI tool—whether text, image, audio, or video—this is clearly disclosed to the audience.
  3. Respect for Quality and Integrity
    All content must meet the high standards set out in our Code of Ethics and Professional Conduct. AI use must never compromise factual accuracy or editorial quality.
  4. Ethics and Responsibility
    All AI-generated material must be verified using independent sources. Sensitive, confidential, or personally identifiable information must never be shared with AI tools.

At Fondation Hirondelle, we reaffirm that trustworthy journalism is grounded in human fact-checking, on-the-ground reporting, and ethical accountability. We therefore reject any careless use of AI that could erode public trust—the very foundation of our work.

This article was translated into English by AI and reviewed by a human.