Meta Will End Fact-Checks in U.S.
MENLO PARK, CA (Reuters) – On January 7, Meta announced a significant shift in its content moderation policies, including the end of its U.S. fact-checking program on Facebook and Instagram. The move, which aligns with the priorities of President-elect Donald Trump, was described by Meta founder and CEO Mark Zuckerberg as an effort to address political bias in fact-checking.
“We’re going to get rid of fact-checkers who have just been too politically biased and have destroyed more trust than they’ve created, especially in the U.S.,” Zuckerberg said in a post. In their place, Meta’s platforms will introduce a community notes system, similar to the one on X (formerly Twitter), starting in the U.S.
The decision echoes longstanding criticism from Trump’s Republican Party and X owner Elon Musk, who have argued that fact-checking disproportionately targets right-wing voices and amounts to censorship. Such complaints, which many conservatives frame as a defense of free speech, have fueled legislative proposals in states such as Florida and Texas to limit content moderation.
Zuckerberg, in a nod to Trump’s victory, suggested that the recent elections mark a cultural tipping point toward prioritizing free speech over content moderation. The shift comes as Zuckerberg has sought to mend relations with Trump since the November election, including donating $1 million to Trump’s inaugural fund.
Trump, who has long accused Meta and Zuckerberg of bias, was banned from Facebook following the January 6, 2021, Capitol attack. His account was restored in early 2023. More recently, Meta has taken several steps likely to resonate with Trump’s team, such as naming Joel Kaplan, a former Republican White House official, as its global affairs chief, and appointing Dana White, the Ultimate Fighting Championship (UFC) chief executive and a Trump ally, to the Meta board.
Additionally, Zuckerberg announced that Meta will move its trust and safety teams out of California, relocating its U.S. content review operations to Texas, a change he framed as a response to concerns about bias within the company’s teams. “That will help us build trust to do this work in places where there is less concern about the bias of our teams,” Zuckerberg explained.