Christina Wille and Clara de Solages are, respectively, director and research analyst at Insecurity Insight, an organization that supports the aid sector with technical expertise in data collection and analysis to address violence. It provides relevant information for humanitarian workers and organizations, and for those working to protect health workers, educators, internally displaced persons, and refugees.

What role does AI play in shaping hostile narratives against humanitarian actors in the Sahel specifically?
Clara: First of all, it’s important to mention that our analysis does not provide a comprehensive view of all the harmful narratives that affect aid agencies, because we only monitor public platforms and groups. […] On the public groups in Mali, Niger and Burkina Faso, we’ve seen examples of AI-generated images shared by ordinary users or influencers. These are not directly anti-aid narratives, but pro-military narratives that accompany anti-aid narratives. […] The situation is evolving very quickly: it’s only in the past six months that we’ve been seeing such AI-generated images circulating. […] I haven’t yet seen examples of AI-generated anti-aid videos, but I’m sure they are coming very soon.
Christina: More generally, algorithms can also have a harmful effect by clustering certain opinions or by allowing some individuals to insert themselves into specific conversations.
What recommendation would you make to humanitarian organizations working in the Sahel to better navigate the risks posed by AI on social media?
Christina: Get acquainted with it. […] I think the recommendation is really to adopt a critical posture, to follow it, to engage with it and to try to understand how it is shifting […]. This work is extremely challenging because of the human resources it requires. Greater collaboration is therefore needed among specialized organizations that can help humanitarian agencies navigate this labyrinth of complex interpretations.
Has your monitoring of AI use online revealed other applications of AI that perhaps you did not expect at all?
Christina: When we started this work, we sort of assumed there was maybe a paid company trying to place certain stories within certain media. In fact, I expected that we might find a deliberate disinformation campaign. But we found nothing of the sort. […] What we observed is that the conversation is fueled by comments rather than posts. So it is not so much a case of disinformation, where someone publishes false facts that people start to believe; rather, factually neutral and correct information is published, and then it is the comments and the way people engage, interpret, and connect this subject to other issues that create associations harmful to organizations.
Clara: […] Conversations are very different on private platforms and groups such as Facebook, WhatsApp, Telegram or Signal. However, we don’t have access to these groups, or to encrypted spaces more generally, so we do not have data to substantiate this. Still, as we are currently observing a decrease in online anti-aid engagement in these countries, probably due to censorship, it is likely that the conversation is shifting to these private platforms. […] More generally, what surprises me is that the use of AI is not yet more widespread in the data we are analyzing.
Insecurity Insight monitors narratives about the aid sector across the Sahel and has summarized its findings in the report ‘The Shrinking Humanitarian Space on Social Media: Insight from Burkina Faso, Mali and Niger’. Read it here.
This interview is from our Sahel Newsletter of September 2025.
If you would like to receive our newsletters, sign up here.
