Meta Platforms, Inc. is overhauling its approach to fact-checking, adopting a community-driven system known as Community Notes, modeled on the volunteer approach behind Wikipedia. The transition entrusts nearly a million unpaid contributors with identifying and correcting misinformation on the platform. Designed to address the limitations of traditional fact-checking, the system lets users append notes to posts they believe are false or misleading. Research indicates that more than 90% of proposed community notes are never used; even so, the system shows promise, with one study of notes on sensitive topics such as Covid-19 finding a 98% accuracy rate.
The potential impact is significant: research suggests community notes can produce hundreds of fact checks daily, that appending a note to a misleading post can organically reduce its viral spread by more than half, and that the original poster is 80% more likely to delete the tweet after receiving a note. The shift comes as Meta relaxes its rules around politically charged topics such as gender and immigration, replacing its professional fact-checking program with a community model similar to the one overseen at X by Keith Coleman.
"I see stories that say 'Meta ends fact checking program'" – Keith Coleman
Meta's decision has sparked debate about the effectiveness of community-driven fact-checking versus professional fact-checkers. Mark Zuckerberg, Meta's CEO, has been at the forefront of this discussion, defending the switch as a move towards scalability and political neutrality.
"Meta replaces existing fact checking program with approach that can scale to cover more content, respond faster and is trusted across the political spectrum" – Mark Zuckerberg
While community notes have their advocates, some experts remain skeptical. Baybars Orsek, managing director of Logically Facts, argues that professional fact-checkers are essential for targeting the most dangerous misinformation and identifying emerging harmful narratives. By way of comparison, an article by Jonathan Stray of the UC Berkeley Center for Human-Compatible AI and journalist Eve Sneider notes that Meta's expert fact-checkers manage fewer than ten assessments per day.
"Crowd-sourcing can be a useful component of [an] information moderation system, but it should not be the only component." – Professor Tom Stafford
Meta will continue employing thousands of moderators responsible for removing millions of pieces of content daily, including graphic violence and child sexual exploitation material. However, the community notes system demands agreement among contributors with diverse perspectives to prevent biased ratings, akin to X's model.
X's system, originally known as "Birdwatch", began in 2021 and drew inspiration from Wikipedia, which is written and edited by volunteers.
This transition hasn't been without controversy. Alexios Mantzarlis suggests that Zuckerberg's decision may have been influenced by political pressures.
"Mark Zuckerberg was clearly pandering to the incoming administration and to Elon Musk." – Alexios Mantzarlis
Critics argue that traditional fact-checkers have become politically biased and have eroded trust rather than building it.
"Fact checkers have just been too politically biased and have destroyed more trust than they've created, especially in the US" – Mark Zuckerberg
Despite these challenges, supporters view the community notes system as a way to scale content moderation efficiently while remaining politically even-handed.
"Fact checkers started becoming arbiters of truth in a substantial way that upset politically-motivated partisans and people in power and suddenly, weaponised attacks were on them" – Alexios Mantzarlis