Meta surrenders to the right on speech

“I really think this is a precursor for genocide,” a former employee tells Platformer

Zuckerberg announces the changes in an Instagram Reel on Tuesday. (@zuck)

I. The past

Donald Trump’s surprising victory in the 2016 US presidential election sparked a backlash against tech platforms in general and against Meta in particular. The company then known as Facebook was battered by revelations that its network dramatically amplified the reach of false stories about Trump and his opponent, Hillary Clinton, and was used as part of a successful effort by Russia to sow division in US politics and tilt the election in favor of Trump.

Chastened by the criticism, Meta set out to shore up its defenses. It hired 40,000 content moderators around the world, invested heavily in building new technology to analyze content for potential harms and flag it for review, and became the world’s leading funder of third-party fact-checking organizations. It spent $280 million to create an independent Oversight Board to adjudicate the most difficult questions about online speech. It disrupted dozens of networks of state-sponsored trolls who sought to use Facebook, Instagram, and WhatsApp to spread propaganda and attack dissenters.

CEO Mark Zuckerberg had expected that these moves would generate goodwill for the company, particularly among the Democrats who would retake power after Trump lost in 2020. Instead, he found that disdain for the company remained strongly bipartisan. Republicans scorned him for policies that disproportionately punished the right, which posts more misinformation and hate speech than the left does. Democrats blamed him for the country’s increasingly polarized politics and decaying democracy. And all sides pilloried him for the harms his apps cause to children — an issue that 42 state attorneys general are now suing him over.

Last summer, the threats against Zuckerberg turned newly personal. In 2020, Zuckerberg and his wife had donated $419.5 million to fund nonpartisan election infrastructure projects. (Another effort that had seemingly generated no goodwill for him or Meta whatsoever.) All the money had done was help people vote safely during the pandemic. But Republicans twisted Zuckerberg’s donation into a scandal; Trump — who lost the election handily but insisted it had been stolen from him — accused Zuckerberg of plotting against him.

“We are watching him closely,” Trump wrote in a coffee-table book published ahead of the 2024 election, “and if he does anything illegal this time he will spend the rest of his life in prison.”

By the end of 2024, Zuckerberg had given up on finding any middle path through the polarized and opposite criticisms leveled against him by Republicans and Democrats. His rival Elon Musk had spent the past year showing how Republican Party support can be bought — cheaply.

In business and in life, Zuckerberg’s motivation has only ever been to win. And a doddering, transactional Trump presented Meta with a rare opportunity for a fresh start.

All they would have to do is whatever Trump wanted them to do.

II. The announcements

On Tuesday, Meta announced the most significant changes to its content moderation policies since the aftermath of the 2016 election. The changes include:

  • Ending its fact-checking program, which funds third-party organizations to check the claims in viral Facebook and Instagram posts; posts found to contain falsehoods are downranked. It will be replaced with a clone of Community Notes, X's volunteer fact-checking program.
  • Eliminating restrictions on some forms of speech previously considered harmful, including some criticisms of immigrants, women, and transgender people.
  • Re-calibrating automated content moderation systems to prioritize only high-severity violations of content policy, such as those involving drugs and terrorism, and reviewing lower-severity violations only when users report them. (This sounds boring, but it might be the most important change of all, as we'll get to.)
  • Re-introducing discussion of current events, which the company calls "civic content," into Facebook, Instagram, and Threads.
  • Moving content moderation teams from California to Texas to fight the perception that Meta's moderation reflects a liberal Californian bias. (Never mind that the company has always had content moderation teams based in Texas, or that it was Zuckerberg and not the moderators who set the company's policies.)

Zuckerberg announced these changes in an Instagram Reel; Joel Kaplan, a Republican operative and longtime Meta executive who last week replaced Nick Clegg as the company's president of global affairs, discussed the changes in an appearance on "Fox & Friends." (See transcripts of both here.)

One way to understand these changes is as a marketing exercise, intended to convey a sense of profound change to an audience of one. In this, Meta appears to have succeeded; Trump today called the company's changes "excellent" and said that the company has "come a long way." ("Mr. Trump also said Meta’s change was 'probably' a result of the threats he had made against the company and Mr. Zuckerberg," dryly noted the Times' Mike Isaac and Theodore Schleifer.)

Whether this will be enough to get Trump to end the current antitrust prosecution against Meta, or otherwise advocate for the company in regulatory affairs, remains to be seen. By the cynical calculus of the company's communications and policy teams, though, one assumes that Trump's comments inspired a round of high-fives in the company's Washington, DC offices.

But these changes are likely to substantially increase the amount of harmful speech on Meta's platforms, according to 10 current and former employees who spoke to Platformer on Tuesday.

Start with the end of Meta's fact-checking partnerships, which generated perhaps the most headlines of the company's changes on Tuesday. While the company has been gradually lowering its investment in fact-checking for a couple of years now, Meta's abandonment of the project will have real effects: on the fact-checking organizations for which Meta was a primary source of revenue, but also in the Facebook and Instagram feeds of which Meta is an increasingly begrudging steward.

Alexios Mantzarlis, the founding director of the International Fact-Checking Network, worked closely with Meta as the company set up its partnerships. He took exception on Tuesday to Zuckerberg's statement that "the fact-checkers have just been too politically biased, and have destroyed more trust than they've created, especially in the US."

What Zuckerberg called bias is a reflection of the fact that the right shares more misinformation than the left, said Mantzarlis, now the director of the Security, Trust, and Safety Initiative at Cornell Tech.

"He chose to ignore research that shows that politically asymmetric interventions against misinformation can result from politically asymmetric sharing of misinformation," Mantzarlis said. "He chose to ignore that a large chunk of the content fact-checkers are flagging is likely not political in nature, but low-quality spammy clickbait that his platforms have commodified. He chose to ignore research that shows Community Notes users are very much motivated by partisan motives and tend to over-target their political opponents."

And while Community Notes has shown some promise on X, a former Twitter executive reminded me today that volunteer content moderation has its limits. Community Notes rarely appear on content outside the United States, and often take longer to appear on viral posts than traditional fact checks. There is also little to no empirical evidence that Community Notes are effective at harm reduction.

Another wrinkle: many Community Notes currently cite as evidence fact-checks created by the fact-checking organizations that Meta just canceled all funding for.

III. The harms to come

While fact-checking dominated the discussion on Tuesday, employees I spoke with were much more concerned about the end of restrictions on many forms of speech previously considered harmful by Meta.

For example, the new policy now allows “allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like ‘weird.’”

So in addition to being able to call gay people insane on Facebook, you can now also say that gay people don't belong in the military, or that trans people shouldn't be able to use the bathroom of their choice, or blame COVID-19 on Chinese people, according to this round-up in Wired. (You can also now call women household objects and property, per CNN.) The company also (why not?!) removed a sentence from its policy explaining that hateful speech can “promote offline violence.”

Much more consequential than the policy changes, though, may be the radical reduction in Meta's plans to enforce its existing policies.

"We used to have filters that scanned for any policy violation," Zuckerberg explained in his Reel. "Now, we're going to focus those filters on tackling illegal and high-severity violations, and for lower-severity violations, we're going to rely on someone reporting an issue before we take action."

Today, imperfect as they are, Meta's systems accurately identify tons of misogyny, homophobia, bullying, harassment, and other forms of abuse, and prevent its users from seeing them. What Zuckerberg is saying is that it will now be up to users to do what those automated systems were doing before — a giant step backward for a person who prides himself on having some of the world's most advanced AI systems.

Zuckerberg's rationale is that the systems make too many mistakes — and they do. Most people I know have been thrown in "Facebook jail" once or twice, typically for something quite innocuous. I once posted a tweet as an Instagram story that was wrongly designated as showing support for Al Qaeda. Taylor Lorenz reported this week that the company had accidentally prevented teens from searching for LGBTQ content for months.

A younger, more capable Zuckerberg once worked to improve those systems: to build better machine-learning classifiers and reduce the error rates. Today he has chosen to largely throw those systems out, because they have become a political liability.

A Meta spokesman I spoke with today pointed out that over-enforcement complaints span the political divide. Meta has received sustained criticism from the left for removing posts supportive of Palestine, for example, just as it has received sustained criticism from the right for removing their hate speech.

But in its panicked retreat from the system it spent years building, Meta may now be putting billions of users at risk. Just because Meta deleted that line about hateful speech causing offline violence doesn't mean it isn't true. And now the company has all but declared open season on immigrants, transgender people, and whatever other targets Trump and his allies find useful in their fascist project.

"I can't tell you how much harm comes from non-illegal but harmful content," a longtime former trust and safety employee at the company told me. The classifiers that the company is now switching off meaningfully reduced the spread of hate movements on Meta's platforms, they said. "This is not the climate change debate, or pro-life vs. pro-choice. This is degrading, horrible content that leads to violence and that has the intent to harm other people."

In 2018, the United Nations found that Facebook and social media had played a key role in accelerating the Rohingya genocide in Myanmar. "Facebook has been a useful instrument for those seeking to spread hate, in a context where, for most users, Facebook is the Internet," the UN concluded.

The former employee I spoke with feared that whatever consequences Meta's surrender to the right on speech issues might have in the United States, its effect in the rest of the world could be even more dire.

"I really think this is a precursor for genocide," they said. "We've seen it happen. Real people's lives are actually going to be endangered. I'm just devastated."

Ethics (a love story)

At the end of 2023 I met a man who, among his many other wonderful qualities, worked at a company I had never heard of. Over the next year, we fell in love. And I might not have mentioned any of this except that today he started a new job, at a company that I sometimes write about. Starting today, my boyfriend is a software engineer at Anthropic.

Journalists strive to avoid any entanglements with people employed on their beats, so as to avoid any conflicts of interest, real or perceived. For more than a decade, I swiped left on every profile I saw from people who worked at my beat companies. I take seriously my independence and my readers’ trust, and so to the extent I could avoid any undue intersection between my personal and professional lives, I did.

But sometimes life has other plans for us. I love my boyfriend and I love my job, and so my challenge now is to meet my responsibilities to both. To my readers, that means disclosing this information prominently and continuously as I continue to report on Anthropic, its rivals, and artificial intelligence more broadly.

I have been writing about Anthropic in Platformer and discussing it on Hard Fork for most of the company's short life. (You can find my past columns about it here.) I’ve interviewed more than a dozen of its employees, including its CEO. Because of its importance to the broader story of AI, I plan to continue writing about Anthropic, particularly as it pertains to policy and AI safety, while always prominently disclosing my relationship.

With that in mind, here are some things you should know:

  • My boyfriend found and applied for this job independently, and Anthropic was unaware of our relationship during the application process.
  • While he works at Anthropic, my boyfriend won’t disclose confidential information about the company to me, nor will he be a source for me in my reporting. His own work lies outside my core focus on product and policy.
  • We maintain separate finances and intend to continue doing so. We do not currently live together. Should that change, I will update this disclosure.

And here are the steps I'm taking in an effort to maintain your trust:

  • Publishing this disclosure in the newsletter, at platformer.news/ethics, and updating it whenever circumstances warrant it.
  • Adding a permanent link to my ethics disclosure in every edition of Platformer.
  • Linking to the disclosure at the top of any column that primarily concerns Anthropic, its competitors, or the AI industry at large. (In the latter cases, I intend to do this even when the column does not specifically mention Anthropic.) 

If you see this disclosure enough that you are personally annoyed by it, I will consider this effort a success.

I welcome reader questions and comments on this disclosure and how it might be improved. Paid subscribers can leave them in the comments; anyone can send them to casey@platformer.news.


Those good posts

For more good posts every day, follow Casey’s Instagram stories.


Talk to us

Send us tips, comments, questions, and good old-fashioned content moderation: casey@platformer.news. Read our ethics policy here.