Meta ends fact-checking
Should Meta have discontinued its third-party fact-checking program? Viewpoints from multiple sides.
Enjoying Framechange? Forward to a friend to help spread the word!
New to Framechange? Sign up for free to see multiple sides in your inbox.
Learn more about our mission to reduce polarization and how we represent different viewpoints here.
What’s happening
This week, Meta CEO Mark Zuckerberg announced that Meta will end its third-party fact-checking program across Facebook, Instagram, and Threads in the US and replace it with a “Community Notes” model similar to that used by X. The company will also relax its policies for moderating posts, allowing more content related to topics previously restricted such as immigration and gender. The moderation changes will not impact the screening of content related to drugs, terrorism, child exploitation, fraud, or scams.
Justification: In its announcement, Meta said the fact-checking program was susceptible to the biases of its fact-checking partners and flagged too much content that many would view as legitimate political speech and discourse. It also emphasized a desire to “return to the commitment to free expression,” saying that its existing moderation policies had gone “too far” in response to political and societal pressure.
How fact-checking currently works at Meta: Meta’s third-party fact-checking program was first instituted in 2016. Fact-checking partners review content identified by Meta’s platform or their own monitoring and, when a post is found to be misleading, Meta applies a label to it and reduces its distribution. The program uses 10 independent fact-checking organizations in the US and close to 100 globally across 60 languages.
How X’s Community Notes work: X’s Community Notes system relies on approved contributors, who can be anyone who has been on the X platform for 6+ months, applies with a valid phone number, and has no existing violations.
When a contributor sees a post they deem misleading, they can add a “note” underneath it that provides context and additional information. Before the note is shown publicly, enough contributors with ideologically diverse perspectives must review the note and vote it as “helpful.” The ideological profiles of contributors are determined by an algorithm that examines how users have rated notes historically.
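To make the cross-ideology requirement concrete, here is a toy sketch of the "bridging" idea. This is a deliberate simplification and not X's actual algorithm, which infers rater perspectives via matrix factorization over the full rating history; the leaning scores and thresholds below are hypothetical.

```python
# Toy sketch of the "bridging" consensus behind Community Notes.
# Assumption: each rater already has a leaning score in [-1, 1]
# (negative = one ideological side, positive = the other), derived
# elsewhere from how they rated past notes.

def should_publish(ratings, min_per_side=2):
    """ratings: list of (leaning, helpful) pairs.
    Publish only if enough raters on BOTH sides found the note helpful."""
    left_helpful = sum(1 for leaning, helpful in ratings if helpful and leaning < 0)
    right_helpful = sum(1 for leaning, helpful in ratings if helpful and leaning > 0)
    return left_helpful >= min_per_side and right_helpful >= min_per_side

# A note rated helpful only by one side stays hidden:
one_sided = [(-0.8, True), (-0.5, True), (-0.3, True), (0.6, False)]
# A note rated helpful across the spectrum is shown:
cross_spectrum = [(-0.8, True), (-0.4, True), (0.5, True), (0.7, True)]
```

The key design choice this illustrates: a note's visibility depends not on how many raters like it, but on agreement across raters who usually disagree, which is intended to filter out purely partisan notes.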
With 3B+ users across its family of apps, Meta’s shift away from third-party fact-checking is likely to have significant implications for the way people consume information online. This week, we bring you the viewpoints from multiple sides. Let us know what you think.
Notable viewpoints
More opposed to Meta’s decision:
Meta’s changes will increase the spread of harmful information on its platforms.
As Zuckerberg admitted in his announcement of the policy shift, removing third-party fact-checkers and loosening content restrictions will inevitably increase the proliferation of harmful content on the platform. It will also open greater opportunities for disinformation and propaganda campaigns.
Meta’s new policy could proliferate hate speech toward minority communities and lead to real-world violence, particularly if the policy expands to countries that are more susceptible to misinformation online. The United Nations, for example, found that content on Facebook played a role in spreading hate speech that fueled alleged genocidal acts in Myanmar in 2017.
“Social networks such as Facebook and Instagram are not neutral. They are driven by algorithms that maximize interactions. The more polarizing or emotional the content, the greater its reach. Without moderation, this effect will explode. The result? A flood of disinformation, hate speech and radicalization.” (Torsten Beeck, Editor-in-chief of Heise and former Strategic Partner Manager at Meta.)
Fact-checking is not censorship.
Fact-checking is not the same as censoring or removing content. The process focuses on adding context to controversial claims and debunking hoaxes or conspiracy theories with reputable information. Meta’s fact-checkers have followed the International Fact-Checking Network’s (IFCN) Code of Principles that require transparency and nonpartisanship.
The ultimate decision to remove posts has always rested with Meta. Fact-checking partners review content in line with Meta’s policies, rating its accuracy and/or providing context, while Meta’s systems decide whether to restrict or remove the content.
The Community Notes model on X has exhibited shortfalls.
Some analyses have found that Community Notes are not applied frequently enough to misleading posts that are fact-checkable and are sometimes applied to posts that are not fact-checkable (e.g., opinions or predictions). For instance, researchers associated with the Poynter Institute found that, in a sample of posts about the election on Nov 5, 2024, only 29% of fact-checkable posts had Community Notes with a “helpful” community rating, while 33% of Community Notes rated “helpful” appeared on posts that were not fact-checkable.
The Community Notes model can be slow and ineffective. A 2023 Bloomberg analysis of 400 posts on X deemed to contain misinformation found that a typical Community Note took more than 7 hours (and as long as 70 hours) to become public, enough time for a post to go viral before it was addressed.
Because a Community Note must be rated “helpful” by enough X users with different ideological perspectives before it appears on a misleading post, and such consensus can be hard to reach on polarizing issues, many accurate Community Notes may never be shown. The Center for Countering Digital Hate found that 74% of accurate notes on false or misleading claims about US elections were never made public to users, allowing many of the false claims to spread.
Meta’s changes are politically motivated.
Meta’s policy changes are the latest indication that Zuckerberg is attempting to court President-elect Donald Trump in the wake of his reelection. In December, Meta donated $1M to Trump’s inauguration fund and has recently made a slew of moves including hiring Republican-leaning Joel Kaplan as its Chief Global Affairs Officer and appointing Trump-ally Dana White to its board.
Meta is pre-emptively bowing to potential policy pressure from the incoming Trump administration. For instance, Trump’s pick to chair the Federal Communications Commission, Brendan Carr, sent a letter to Meta and other Big Tech companies in November implying that their current fact-checking practices could cost them Section 230 protections.
More supportive of Meta’s decision:
Fact-checking is inherently biased.
Fact-checking organizations tend to have a left-leaning bias. For instance, 6 out of the 10 fact-checking organizations Meta partners with in the US are characterized as having a Left or Lean Left bias by AllSides, a media bias rating agency, and none have a Right or Lean Right bias.
Meta’s existing fact-checking approach labeled more claims by Republican officials as “false” than it did claims by Democratic officials. In a sample of posts labeled “false” by fact-checking partner PolitiFact during 2024, 88 of the labels were applied to posts from Republicans and 31 to posts from Democrats.
Meta’s fact-checking program has made mistakes. Roughly two out of every ten restricted posts may not have actually violated Meta’s policies, and removing fact-checking will reduce the number of posts that are wrongly removed. (Summarized from Meta announcement.)
Meta’s decision prioritizes free speech over content policing.
Removing its fact-checking program will help insulate Meta from the potential influence of political powers, which have an outsized role in shaping narratives in traditional media and a growing influence on social media. According to Zuckerberg, the Biden administration pressured Meta to restrict certain speech during the COVID pandemic.
Meta has too often flagged content that should have been welcomed in public discourse rather than filtered as potential misinformation, including articles by doctors about COVID information that was not misleading and a review of a book analyzing the extent of human-driven climate change.
“Before the woke mind virus took control, before he was a multibillionaire with global influence, Zuckerberg was an idealistic kid who believed in creating a platform where open discourse could flourish. All Americans, liberal or conservative, should be proud that he’s eventually found his way home.” (Gage Klipper, Daily Caller.)
Community Notes is an effective model for curtailing false information.
A crowd model for content moderation may scale better than groups of professional fact-checkers, while maintaining accuracy, given the sheer size of the crowd on social platforms. One 2021 study found that an information review system using groups of politically balanced laypeople produced accuracy ratings similar to those of professional fact-checkers.
Crowdsourcing the accuracy of information within certain fields such as health and science may be more effective than relying on professional fact-checkers. A 2024 study found that, from a sample of Community Notes on posts about the COVID pandemic, 96% of the information shared was accurate and 87% of the cited sources were of high quality.
Community Notes could mitigate trust issues regarding the veracity of content on social media platforms. For instance, a 2024 study indicated that community-driven notes were perceived across the political spectrum as more trustworthy than simple fact-checking.
Some studies have shown that community-driven notes can help reduce the spread of misinformation. For example, a 2024 study found that adding context below an X post reduces its likelihood of being re-shared by almost 50% and increases the probability the post’s creator deletes it by 80%.
Other viewpoints:
Given the declining credibility of X after the Elon Musk acquisition and changes that included its shift toward Community Notes moderation, Meta’s move toward a similar model could plausibly damage its business by reducing credibility, hurting the quality of the user experience, and driving advertisers away from its platforms.
Meta’s decision to loosen its fact-checking approach may have more to do with the shifting tide of power in Washington than a genuine policy stance and could just as easily shift again should Democrats win back power in the future.
Zuckerberg likely never wanted to police content on Meta’s platforms, given the extra cost, effort, and distraction of doing so, and the Washington power shift toward Trump gave him an excuse to abandon it.
Be heard
We want to hear from you! Comment below with your perspective on Meta’s decision to end third-party fact-checking and we may feature it in our socials or future newsletters. Below are topic ideas to consider.
Do you agree with Meta’s decision? Why or why not?
What are some arguments or supporting points you appreciate about a viewpoint you disagree with?
Snippets
During oral arguments, the Supreme Court appeared likely to uphold a ban on TikTok in the US should its parent company not sell it to a non-Chinese buyer. With the ban scheduled to take effect on Jan 19, it is unclear whether any further legal action will delay or block its enforcement. (See our previous coverage of the key arguments.)
Wildfires have continued across Los Angeles County after breaking out earlier this week, killing at least 13 and destroying 12,000+ structures. The wildfires, driven by extremely dry conditions and a rare wind event, are the most destructive in LA history and are estimated to have caused $50B+ in damages.
Dockworkers on the East Coast and Gulf Coast reached a new, six-year labor agreement with port operators to avert a strike that would have started on Jan 15.
A 6.8-magnitude earthquake hit Tibet, killing at least 126 people and injuring 337. Rescue efforts are ongoing.
President-elect Donald Trump reiterated his desires to gain control of Greenland and the Panama Canal during a news conference at his Florida estate. He implied he would not rule out using military force to do so and also suggested using “economic force” to absorb Canada as the 51st state.
Give us your feedback! Please let us know how we can improve.
Music on the bottom
Pulling from the eclectic selection of Bob Dylan songs featured in “A Complete Unknown” released over the holidays, enjoy this classic sung by the man himself, “It Ain’t Me Babe.”
Listen on Spotify, Apple Music, or Amazon Music.