Where Social Media (Likely) Goes From Here


Yesterday, Mark Zuckerberg announced that Meta would be kowtowing to the pressures of the incoming Trump administration and rolling back its efforts to fact-check its platforms: Facebook, Instagram, Threads, and WhatsApp. The policy update, which reportedly was shared with the Trump camp ahead of time, was initially announced on Fox News by Meta’s new head of policy, Joel Kaplan (who replaced Nick Clegg as of last week), and was not communicated via email, push notification, or in-app notification to Meta’s billions of users.

In effect, Zuckerberg and Meta were really only trying to reach one person with this announcement. Trump, during a press conference later in the day yesterday, said Zuckerberg was “probably” responding to his threats against Zuckerberg personally and Meta as a company. It is clear, then, that the policy, safety, and moderation changes that were initially made in large part because of posts from Trump, far-right extremists, and fascist groups are being removed to welcome hate speech back onto Meta’s platforms and apps.

If history is any guide, it is highly unlikely that Meta will be the last company to alter its policies in the face of the transition of power in the coming weeks. Everyone in the industry will be keeping a close eye on Google, Microsoft, Apple, and Reddit to see what they do. Nobody wants to be the first mover, but now that Meta has cleared the way, there is little to prevent other platforms and social networks from following suit.

Facebook and Instagram existed before active third-party fact-checking came to the platforms, and even some of the partners in the fact-checking program criticized how it was implemented (unclearly, too slowly, or otherwise). The fact of the matter is that Meta was, in effect, helping keep a lot of these fact-checking organizations in operation, paying some of them hundreds of thousands of dollars or more a year for their services. That money will now stop, and it is unclear whether those organizations will be able to continue. While the removal of third-party fact-checkers and the transition toward community-notes-style moderation was the headline story, there are other, deeper policy changes that read to me as far more serious threats to the long-term health and usability of Meta’s platforms.

In response to “recent elections” that “feel like a cultural tipping point towards … prioritizing speech,” Meta is loosening its content policies around topics Zuckerberg described as “out of touch with mainstream discourse.” (I would love someone to ask Mark what he meant by this!) These include gender, gender presentation, sexuality, immigration, religion, and “civic content,” which is how Meta sometimes refers to content about politics. Updates to the company’s Hateful Conduct policies specifically call out that it is now fully permissible to refer to lesbian, gay, trans, and queer people as “mentally ill” on the company’s platforms. Additionally, the policies now state that you are allowed to say LGBTQ people should not be allowed to serve in the military, teach in schools, or use public restrooms. You are now also allowed to refer to women as household objects, per the revised policies. A provision that forbade targeting people based on race, gender, religion, or ethnicity as “spreading the coronavirus” has been removed, so users may now freely blame the next pandemic on the group of their choosing.

Prior to yesterday, there were additional restrictions on the kinds of content that could be placed in paid ads or boosted posts to ensure that one could not easily create hate speech and then amplify it to a massive audience. Those additional restrictions appear to have been removed.

The opening of Meta’s hateful content policies page used to say that hateful content “may promote offline violence.” That sentence has also been removed.

In case there was any doubt of the intention of these policy shifts, Zuckerberg also stated in his announcement video that “we're going to move our trust and safety and content moderation teams out of California, and our US-based content review is going to be based in Texas.” (Never mind that there were already trust and safety teams based in Texas.) Why? “I think that will help us build trust to do this work in places where there is less concern about the bias of our teams,” said Zuckerberg. People in California are more “biased” than those in Texas, apparently. I wonder what he could mean by that…

It is clear what kind of speech Meta is choosing to prioritize here, and it is not the kind of speech a decent or civil society particularly enjoys.

The alteration of these policies, in addition to the shift toward community-notes-style fact-checking (which, to note, has no evidence of working better or more effectively than the third-party fact-checking system previously in place), has led a former Meta employee to say that it is likely a “precursor for genocide.”

Without a change in course, we are very likely going to see an increase in hate speech (one only needs to look at the increase in hate speech on Twitter/X after Musk’s takeover and gutting of the company’s Trust and Safety team), as well as an increase in politically radicalizing content. On top of that, disinformation and misinformation will flourish, especially since a system like community notes can be easily gamed or brigaded and will do nothing to prevent the algorithmic spread of content designed to go as wide and reach as many people as possible. The industry watched the exodus from Twitter as users, especially those from minority or other protected groups, left the platform for online spaces where they felt safer. I do not think it is unforeseeable that a similar exodus could occur from Meta’s platforms.

As someone who has spent the majority of my career working on the business end of social media marketing, I would also flag that this change in direction and policy creates a far higher likelihood of advertising content appearing next to content that is not brand-safe. The Coca-Colas and Nikes of the world do not want their ads appearing next to posts calling a gay person mentally ill, so it is possible we see advertising dollars allocated away from Meta’s platforms as well.

We are perhaps entering a new chapter in social media: one where AI profiles appear in your feeds, hate speech is openly welcome, and we are left to self-moderate the results. We have watched as internet activity has moved behind closed doors, with Discord groups and private DM chats surging in popularity over the last decade, especially over the last six years. We will likely see further moves toward more closely moderated spaces. With Twitter/X’s slip from mainstream relevance, we saw social activity fragment as different groups moved to different platforms. We will likely see another surge in new signups to Bluesky, and possibly new platforms will emerge to offer an alternative to Instagram or Facebook, where videos and photos are prioritized over text posts (something that most other smaller or alternative platforms can’t do or don’t intend to do).

As with every seismic shift in the social media landscape, those of us dependent on these platforms for our careers and livelihoods are left in the lurch, waiting to see what happens next. This time around, that likely means we’ll be exposed to an increase in hate speech, some of it directly attacking those who run the accounts of brands and companies on Facebook, Instagram, and Threads. We will have to once again explain to our bosses and leadership that because Meta is now “prioritizing free speech,” there is likely little we can do to stem hateful comments on our posts besides turning comments off entirely or walking away from the platform. We will be asked to justify our ad spend, or to advocate against it, knowing that we are, or would be, giving money directly to the host of said hate speech.

It isn’t going to be fun. But hey, at least we get to mess around on Facebook and Instagram all day as a job, right?

Mitch Goldstein
Jan. 8, 2025



Further reading:
Meta to End Fact-Checking Program in Shift Ahead of Trump Term - The New York Times

Mark Zuckerberg’s Political Evolution, From Apologies to No More Apologies - The New York Times

The X-ification of Meta - WIRED

Mark Zuckerberg’s Eternal Apology Tour - New York Magazine

Meta is getting rid of fact checkers. Zuckerberg acknowledged more harmful content will appear on the platforms now - CNN Business

Meta surrenders to the right on speech - Platformer

Zuckerberg officially gives up - Garbage Day