
Everything You Need to Know About Meta’s Change in Content Rules

Meta sparked the first major social media controversy of 2025 last week, when it announced that it would be getting rid of fact-checking and loosening its rules around what people can say in its apps.

Many view this as an attempt to appease incoming U.S. President Donald Trump, who’s been highly critical of Meta in the past. But why exactly are Zuck and Co. doing this, and will it actually be a positive or a negative for Facebook and Instagram (and Threads) users?

Here’s a look at all the key questions around Meta’s apparent backflip, and why Zuck and Co. have chosen to take things in a new direction at this stage.

What exactly is changing?

Meta’s updating its rules around what people will be allowed to say in their posts, while it’s also getting rid of its third-party fact-checking program, in favor of an X-style, crowd-sourced Community Notes system. It’s also bringing more content deemed “political” back to people’s feeds, after steadily reducing the presence of such content over the past four years.

On the first element, Meta is specifically moving to allow more kinds of speech on topics “that are frequently subject to political debate”, beginning with discussion related to immigration and gender identity.

The Intercept has viewed examples of Meta’s updated moderation guidelines, which are currently being circulated to Meta staff, and it’s shared some of the notes provided by Meta relating to characterizations and comments that had been against its rules, but will now be acceptable.

Examples of now acceptable comments include:

“Immigrants are grubby, filthy pieces of shit.”

“Gays are freaks.”

“Trans people are mentally ill”

All of these are now acceptable, and will not be penalized in the same way (if at all), which will open the door for more hate speech in Meta’s apps, while expanded characterizations of immigrants and the LGBTQ+ community will also get more leeway.

The wording of Meta’s update also suggests that further changes could be coming, based on whatever is subject to political debate at any given time.

The removal of fact-checkers, meanwhile, will lessen Meta’s defenses against the spread of misinformation, based on Meta’s own evidence (as discussed below), while the re-introduction of political content could see broader exposure to divisive debate across Meta’s apps.

Why get rid of fact-checkers?

According to Zuckerberg, the fact-checking partners that Meta has used are inherently politically biased.

As Zuckerberg explained to Joe Rogan last week:

“Some of the people whose job is to do fact-checking, a lot of their industry is focused on political fact-checking, so they’re kind of veered in that direction. We kept on trying to basically get it to be what we had originally intended, which is not to judge people’s opinions, but to provide a layer to help fact-check some of the stuff that seems the most extreme. But it was never accepted by people broadly. I think people just felt like the fact-checkers were too biased, and not necessarily even so much in what they ruled, but a lot of the time it was just what types of things they chose to even go and fact-check in the first place.”

So, there are a couple of key tells here.

First off, Zuckerberg explained this in a three-hour interview with Joe Rogan, who has long held right-wing views. Of course, getting rid of fact-checkers also aligns with the right-wing view that freedom of speech should be absolute, and that social platforms should not play any role at all in dictating what can and cannot be shared in their apps. But the fact that Zuckerberg chose to announce these updates on Rogan’s podcast, while also sending his chief public policy officer to do the same on Fox News, is relevant.

The message is clear: Meta is making these changes to appease right-wing supporters, and align with the views of incoming President Donald Trump. There can be no other way to view this, and that’s also a relevant aspect in identifying fact-checkers as politically compromised.

But is it true? Are Meta’s fact-checking partners politically biased in their efforts?

It’s impossible to know without assessing the full scope of Meta’s fact-checking program, but given the statistical context that we do have, it’s hard to see how removing fact-checks entirely is going to be beneficial overall.

Back in 2018, Meta noted that when fact-checkers rate an article as false, its future views are reduced by over 80% on average, while various academic studies have shown that fact-checks significantly reduce false beliefs, as well as the amount of redistribution fact-checked posts get.

And when you also consider that misinformation sees six times more engagement than factual news on Facebook specifically, that seems like a significant safeguard to be taking away.

The next question, then, is whether Community Notes, which have been a success in some ways, and a failure in others on X, can replace the responsiveness and performance of third-party fact-checks.

The major flaw in Community Notes remains its reliance on political consensus to display a note (i.e. Notes contributors of opposing political viewpoints need to agree that a note is necessary), in order to ensure neutrality in which notes are displayed.

Independent analysis shows that on many of the most divisive issues, such agreement never comes, and thus the majority of notes on these critical topics are never displayed.

That could mean that political misinformation, which is likely to gain more momentum under Trump, could be spread a lot further in Meta’s apps than it ever could be on X.

Zuckerberg says the changes are about getting the company back to its original mission of making everybody more connected. But has that always been Meta’s aim?

Part of Zuckerberg’s justification for revising Meta’s moderation rules is, as he describes it, “getting back to our original mission of giving people the power to share and make the world more open and connected.”

That isn’t exactly what the company was founded upon, nor its original focus, but it has, technically, been a part of Meta’s stated approach for over a decade.

Back in 2014, Zuckerberg announced that Meta’s mission statement would now be “Connect the World”, switching from its first corporate motto of “Move Fast and Break Things”. Before that, nobody had any real idea of how significant, or influential, Facebook/Meta would become, with the company only going public in 2012.

So while it hasn’t been the central focus of the company forever, it has been a key aim, even if it was arguably being applied in a different context in those early years.

In 2014, Zuck and Co. had set their sights on branching into every nation, and building new systems of connectivity to link more people into Facebook’s ever-growing userbase. So, yes, the aim was to connect people, but seemingly this was in a more literal sense, of connecting more people to Facebook.

In other words, it is somewhat disingenuous of Zuckerberg to suggest that connecting all people of all political viewpoints has always been his aim, but he can lean on these past mission statements to suggest that this was a central goal.

But really, Zuckerberg’s main aim, now and always, is business growth, and maximizing Meta’s capacity to dominate the competition.

When you view these latest moves through that prism, not the narrative that Zuckerberg would prefer, the announced changes make more sense.

Doesn’t this go against everything that Meta’s been telling us for the last ten years?

Kind of.

Looking back over Zuckerberg’s announcements on moderation and political speech, Meta has made some significant commitments that would seemingly run counter to this new approach.

In 2015, after the U.S. Supreme Court legalized same-sex marriage, Zuckerberg took the opportunity to celebrate the role that Facebook had played in enhancing LGBT connection.

[Image: Facebook LGBT groups]

Easing the company’s rules around hate speech in this context does seem contradictory, particularly in regards to implementing specific exclusions for commentary that could be used to attack members of the LGBTQ+ community.

In 2017, after the deadly white nationalist rally in Charlottesville, Zuckerberg committed to making Facebook a place “where everyone can feel safe”. You could argue that these new rules also go against this.

In 2018, following the controversy of the 2016 U.S. election, and the suggestion that Russian bot farms may have interfered with the democratic process, Zuckerberg outlined a new approach, while also explaining how Meta had “fundamentally altered our DNA to focus more on preventing harm in all our services.”

Meta’s big focus was misinformation, and limiting the distribution of content that comes close to breaking the platform’s rules, but doesn’t quite do so.

According to Zuckerberg, this is a key problem, because the closer content gets to breaking the rules, the more engagement it sees.

[Image: Zuckerberg engagement curve]

Zuckerberg’s answer to this was to focus on training Meta’s AI systems to detect borderline content, so that the company could proactively reduce its distribution. So it was less about removing or limiting that content, and more about addressing the incentive for posting it, as the engagement wouldn’t be as high.

Also important, Zuckerberg also shared this note:

“In the past year, we have prioritized identifying people and content related to spreading hate in countries with crises like Myanmar. We were too slow to get started here, but in the third quarter of 2018, we proactively identified about 63% of the hate speech we removed in Myanmar, up from just 13% in the last quarter of 2017.”

The role that Facebook played in political unrest in Myanmar has been well-documented, and Meta has worked hard in the years since to improve its systems to limit political polarization and misinformation, stemming largely from this incident.

The relevance in today’s context is that the U.S. is not the only nation that uses Meta’s apps, and these rule changes could also lead to harm in other regions.

But the bottom line is that hate speech, and the spread of misinformation, was a key focus for Meta in 2018, with Zuckerberg also noting that:

“We are also making progress on hate speech, now with 52% identified proactively. This work will require further advances in technology as well as hiring more language experts to get to the levels we need.”

Again, Meta’s approach has been about safety, and ensuring users feel safe in using its apps.

This approach seems to have been largely upheld by all of Meta’s announcements and policy shifts over the past six years, including its decision to suspend Donald Trump’s account in the wake of the Capitol riot in 2021, as well as its move away from political content entirely, as a means to combat the division and angst that had been impacting Facebook usage.

Indeed, the message from Meta more recently has been that politics is simply bad for business, and that it doesn’t need political discussion anymore anyway, because Facebook and Instagram engagement has been increasing based on AI-recommended content, primarily Reels, which now makes up more than 50% of the content that users see in their feeds.

So no more negotiating with news publishers over rights deals, no more promoting politically-aligned posts that can spark anger, and no need to get Zuckerberg himself entangled in congressional inquiries into the role that social apps play in social division.

Meta seemed to be moving on, and was keen to distance itself from politics entirely.

But then last year, Meta started to change its tune, with Zuckerberg penning a letter to Congress in which he expressed regret over his company’s decision to censor COVID vaccine misinformation, at the behest of Biden administration officials, and the mistaken blocking of a New York Post story about Hunter Biden’s laptop.

At that stage, Trump looked to be gaining in the polls, on the way to his subsequent re-election. That could be coincidental timing, but it did seem like Zuckerberg may have been setting the table for last week’s switch-up based on the poll projections.

Also of note, Trump had threatened to jail Zuckerberg for life if he were ever re-elected, due to what he viewed as political overreach by Facebook in suspending his account.

Which leads into the next query:

What does Meta (and Zuckerberg) stand to gain from siding with President Trump?

A lot. Here are just a few ways in which the U.S. Government can play a part in improving Meta’s business opportunities:

  • Foreign tariffs – Trump has vowed to increase tariffs on global imports to the U.S., including a 60% jump in tariffs on Chinese imports. Meta is reliant on Chinese components to build its VR and AR headsets, and recently shifted elements of the production of its AR glasses to China. As such, any tariffs on Chinese imports could end up costing Meta billions of dollars, while also reducing its capacity to make its AR and VR devices affordable enough to secure mass adoption.  
  • EU regulation – Meta has been fined more than $2.5 billion by European regulators over the past two years alone, based on various violations of increasingly stringent EU consumer protection codes relating to online entities. Having the U.S. Government in its corner could reduce the EU Commission’s propensity to resort to fines, due to fears of retaliatory penalties in U.S. trade.     
  • AI regulation – Meta also needs U.S. regulators to stay out of its business on AI advancement, in order to ensure that it can push ahead with its various AI projects. Many have raised concerns about the impacts that AI may have, and the need for more stringent security and regulation to limit potential harm. Meta doesn’t want that, so it’ll need to lean on its Washington connections to oppose such. Also, with Trump’s new best friend Elon Musk pushing his own AI projects, there’s a risk that new rules could be implemented that penalize Meta in favor of xAI. A better relationship with Trump could mitigate this.
  • Keeping TikTok out of the U.S. – Who benefits most from TikTok being banned in the U.S.? With TikTok gone, more people will turn to Instagram and Facebook, so Meta clearly wins out if the Trump Administration decides against pushing to keep the app available to Americans.
  • Keeping the FTC off Meta’s back – Finally, Meta has come under constant scrutiny from the FTC, which is still threatening to force the company to divest both Instagram and WhatsApp to reduce its market dominance. Less time spent battling the FTC means more time, and money, to invest in its technological development, while also reducing the risk of impacting Meta’s bottom line.

So clearly, Zuck and Co. have a lot to gain from being in partnership with Trump, and nothing to gain from maintaining opposition on ideological grounds.

Zuckerberg has said that Meta’s policy revision is based on the political mood of the people, but “people” in this context is really only the people in charge, whom Zuckerberg knows he needs on his side to maximize Meta’s opportunities.

So what’s actually going to happen as a result of this shift?

Here’s the thing: The impact of this change could be significantly smaller on Facebook and IG this time around, because hardly anybody posts to Facebook or Instagram anymore anyway.

That’s not to dilute the responsibility that Zuck and Co. have, as any platform that’s used by 3 billion people is going to play a role in shaping opinions and political discourse. But the main difference caused by Meta’s AI recommendation shift is that people are relying on Meta’s apps less and less for political content, or for sharing their personal opinions.

Back in 2022, Instagram chief Adam Mosseri noted that “friends post a lot more to stories and send a lot more DMs than they post to Feed”. Facebook has seen the same, with the influx of recommended Reels now shifting the platform away from its “social” roots, and more towards entertainment. Which is better for driving engagement, and keeping people in the app longer (so they can view more ads). But in effect, it also means that Meta now has far less influence than it had back in 2016, when it was first identified as a legitimate political force.

So while Meta is going to show people more political content, if they want to see it (you’ll still be able to opt out if you choose), I’m not sure that the impact will be as significant as it has been on X this time around. That may be optimistic, but again, people just aren’t posting to Facebook as much as they once were, while private groups have always had certain exemptions from scrutiny, due to people simply not being able to see and report them.

Now, more engagement happens within messaging groups, and that still seems like the most viable vector for the sharing of political information. And as with private FB groups, that activity won’t be detected, so Meta’s impact in this respect may not be as significant as it might seem.

There will still be impacts, though, and certain groups are going to feel the brunt of these changes. The amplification of misinformation is also a major concern, but maybe Meta’s algorithms simply won’t allow for the same level of spread as they did in the past.

It doesn’t seem like Trump is going to return to Facebook either way, due to his contractual ties with Truth Social. And with his right-hand man Elon also tied to X, I assume those will be the primary propaganda focus of Trump’s supporters.

So maybe, Meta feels safer in making this change because it’s not in the same position as it once was for news distribution, and maybe, the impact of these changes won’t be the same either.

We’ll have to wait and see, but maybe, Meta has actually set the groundwork to minimize such a change in approach, enabling Zuckerberg to appeal to Trump and his supporters, while also reducing the actual effects either way.

I don’t think that was by design, necessarily, as again, Zuckerberg’s decisions are based on winning for his business, not on social harms. But maybe, this won’t lead to Facebook becoming an all-out misinformation bullhorn, as X now is.

The only proviso to this is that Facebook is still used by many middle-aged people (30-49), and is also where this cohort gets a significant amount of its news:

[Image: Pew Research social media news usage]

This group is also more likely to turn out to vote in the U.S., so Facebook’s influence is still notable in this respect.

Overall, the changes seem like a negative for social discourse, and do seem to be aligned with the whims of the incoming president, as opposed to what might be for the greater good. We won’t know until we see the full impact of any resulting spread of misinformation or harm, and we only ever know those impacts in retrospect.

But the bottom line, based on the evidence presented, is that Zuck and Co. are keen to win favor for their business, over any potential impacts. And when Zuckerberg changes his tune again, when the next president is voted in, we’ll go through all of this once more.    
