Fondation Hirondelle      +41 21 654 20 20


Young people watch videos on the app TikTok on their mobile phones in Mumbai, India, in November 2019. ©Indranil Mukherjee / AFP

The complex fabric of social media

How are information and misinformation disseminated through digital social networks? Jens Koed Madsen, researcher at the Complex Human-Environmental Systems Simulation Laboratory (CoHESyS) at Oxford University, explains the main biases induced by users and algorithms. An interview published in the 4th issue of our biannual publication "Mediation": "Informing despite social networks".

59% of people worldwide say that it is getting harder to tell if a piece of news was produced by a respected media organization. Would you say that widespread use of social networks in the last 10 years has introduced confusion in what can be considered trustworthy information?

Jens Koed Madsen: Digital social networks are increasingly becoming a significant source of news and information for most citizens. This has fundamentally changed our information structures: classic news outlets have editorial oversight, whereas social networks largely do not. In other words, we have gone from top-down mass media to a landscape of both top-down and bottom-up information sharing.
This has significant advantages, as it democratises who can participate in public discourse, enables citizens to speak out against powerful social entities or persons, and makes it easier to expose wrongdoings (e.g. social media gave #MeToo more impact and reach). However, it also has serious disadvantages: it is easy to generate false or misleading accounts, it clouds accountability (it is difficult to know where a rumour or a piece of misinformation starts).
Given the ease with which fake accounts and disinformation can be created, it is no wonder that many people find it increasingly difficult to know what is credible. As these systems are bottom-up, we need to understand how information travels through them in order to design social networks in ways that protect citizens from deliberate misinformation while retaining their freedom of expression.

How do social networks "work" psychologically? Can you give some examples of mental bias they may foster?

Psychology has identified numerous biases related to how we seek out and process the information we get on social networks. Two in particular are worth acknowledging: confirmation bias and the continued influence effect. Confirmation bias is the tendency to search for, interpret, and recall information in ways that confirm what one already believes. Clearly, as the amount of data on social networks rises, it becomes easier for all citizens to find information that confirms their prior beliefs. The continued influence effect shows that information initially presented as true continues to influence people's beliefs even when they see corrections they deem clear and credible. That is, even when misinformation is corrected, it can continue to do damage. People who wish to disseminate misinformation on social networks can exploit these biases.
In addition to personal biases, the structure of the network influences the dissemination of correct and incorrect information. Networks are dynamic systems where people follow and un-follow each other and where underlying algorithms promote or suppress content. What users see depends on how the network is designed by the company in question. For example, a company may decide to promote polarising statements (if they elicit more user activity), which in turn may contribute to polarisation and the dissemination of misinformation. In a study, we have shown that echo chambers can arise as a consequence of the structure of the network even in conditions where people have no biases.
We have to understand the psychology of citizens, the structure of the network, and how people engage with each other on these platforms, as all of these influence how misinformation can spread and be maintained. It is not enough to just understand user biases, as this puts undue weight on the users and ignores the role of system design and interactivity.

If you were a media editor, how would you use social networks in order for your media to be acknowledged as a trustworthy source of information?

As information systems have become bottom-up, the number of people who produce content has increased. This puts pressure on media outlets, as they risk being conflated with any other entity that provides opinions or news, such as individual citizens, bots, and politicians. In order to be credible on social networks, media need to set themselves apart from opinionated or misleading contributors.
As many opinions and claims on social networks are unsubstantiated or have little to no backing, news media can differentiate themselves through their source material: they can highlight and identify the sources behind claims or statements, they can make clear the evidential reasoning that leads to a specific conclusion, and they can interrogate hearsay and conjecture. By providing thorough critical journalism and source material, news media can set their content apart and substantiate their claims. Furthermore, it might be prudent to stop reporting what is trending on social media, as this puts a media outlet's reporting on a par with Twitter chatter.

Do you think social networks should be more regulated? If so, what should be done to prevent their use in disseminating misinformation?

Any country with libel laws, consumer protection agencies, or punishment of hate-speech or verbal threats imposes societally agreed restrictions on what can and cannot be said. Given increasingly complex information systems where everyone can participate (including malevolent actors), it is paramount that we consider how speech can be (or should be) regulated on social networks. In particular, regulatory frameworks should seek to limit the deliberate dissemination of misinformation without punishing citizens for accidentally doing so. This will involve citizens, journalists, regulators, and network providers.
These interventions can come in the form of fact checking, warning labels, algorithmic promotion of trusted news outlets, and so forth. Critically, however, we don't know how ordinary citizens, purveyors of misinformation, and network providers will adapt to regulatory interventions. For example, will citizens switch to competing social networks if a network decides to impose communal standards and norms? Until we understand the complex fabric of bottom-up communication on social networks, solutions from politicians, media people, pundits and social network providers will be inadequate.

This interview was published in January 2020 in our biannual publication Mediation Nº4, "Informing despite social networks".