The problem

At the start of Russia's invasion of Ukraine in February 2022, major tech platforms did not have enough information about who the trustworthy local voices in the country and the wider region were. Although platforms had collaborated with international partners on numerous news trust initiatives and indicators, they lacked reliable mechanisms to identify professional and trusted local news sources in smaller markets. The Ukrainian Public Broadcaster and Euromaidan Press, for example, were neither verified on Twitter nor recognised by Facebook as news publishers, whereas Russia Today continued to earn advertising revenue on Facebook and YouTube until February 2022.

While technology companies have made significant efforts to combat the spread of dis- and misinformation, very few systems are in place to distinguish credible and trusted content creators, such as high-quality journalists and media organisations. At the time of SembraMedia's Inflection Point International report, 62% of the media organisations interviewed [1] were not verified on Twitter and 64% were not verified on Facebook. Overall, only 35% of the media organisations said they had a point person they could speak with at the social platforms. Small and local media, investigative journalism organisations, journalists, NGOs, and other professional content creators often face prohibitive content moderation practices when reporting on current events and topics of public interest. “The Chilling: A global study of online violence against women journalists” report found that 39% of the women journalists surveyed reported online violence to Facebook, 26% to Twitter, and 16% to Instagram, and that at least 20% of online gendered violence incidents resulted in physical violence. Furthermore, many news outlets and journalists struggle to protect their accounts, appeal bogus account suspensions, or quickly restore wrongly removed or restricted content.

Because they lack this recognition, their content and accounts are often negatively affected by the platforms' current moderation systems and by malicious actors. A recent series of articles by Forbidden Stories partners exposes the insidious black market for silencing journalists and their stories. Few easily accessible procedures exist to provide early warnings, urgent responses, or crisis communication channels for trusted content and accounts. Without such a mechanism, attacks on journalists will persist and mis- and disinformation will continue to thrive.

On the other hand, some users of digital platforms enjoy privileges. Recommender systems and the algorithms behind content moderation benefit those who have already achieved privileged status or recognition. Facebook's XCheck programme has provided special treatment to celebrities, politicians, and other high-profile users, shielding millions of VIP users from the company's normal enforcement process. Yet while XCheck grew to at least 5.8 million users by 2020, only selected media companies in the most lucrative markets are designated as news publishers on the platforms.

Finally, the majority of content moderation efforts today focus on online speech-related harms and on algorithmic moderation of all content, with only sporadic measures addressing affirmative action, safety protection, and online recognition of credible actors and accounts. Maria Ressa explained it perfectly with a metaphor: “When we focus only on content moderation – it’s like there’s a polluted river. We take a glass. We scoop out the water. We clean up the water, and dump it back in the river. However, what we have to do is to go all the way to the factory polluting the river, shut it down, and then resuscitate the river.”