Some people insist hate speech doesn’t really exist. “One man’s vulgarity is another’s lyric,” wrote Justice John Marshall Harlan II in Cohen v. California. “What constitutes ‘hate speech’ is inherently subjective,” argues the Free Speech Center at Middle Tennessee State University. Others claim the term is simply used to shut down ideas people disagree with.
In September 2023, the Center for Countering Digital Hate reported 300 posts to X through official channels—posts they documented as violating the platform’s stated policies against hateful conduct. X failed to remove 86% of them. Here’s a sampling of what stayed up:
- A post labeling Hitler “a hero who will help secure a future for white children!”
- “Blacks don’t need provoking before becoming violent. It’s in their nature.”
- Encouragement to “Stop Race mixing” and “break up with your non-white gf today”
- Conspiracy content claiming Jews “control the blacks” and orchestrate mass migration
- Posts mocking Anne Frank and denying the Holocaust
- Racist caricatures depicting Black and Jewish people as subhuman
So let’s stop pretending hate speech doesn’t exist. It absolutely does. It’s also legal in the United States, and I’m not arguing that it shouldn’t be. But the First Amendment restricts government censorship, not private platforms. X can host whatever it wants. But “can” isn’t “must”—and X’s own policies explicitly prohibit this content. These posts were reported through official channels. X chose to leave 86% of them up.
This article isn’t a free speech debate. It’s an examination of what Elon Musk’s platform actively protects—and what it suppresses.
That same month, the main Canadian Active Club channel on Telegram celebrated. The neo-Nazi group noted that X now allowed “extreme right messaging to flourish on the platform for the first time since the 2015 to mid 2017 era.” They weren’t subtle about what this meant: X let them escape their online silos. Mainstream platforms meant mainstream recruitment.
Meanwhile, researchers and journalists documenting this exact activity faced systematic suppression. Accounts suspended through coordinated abuse of platform policies. Reach throttled despite exceptional engagement metrics. Analysis of extremism flagged by the same automated systems that allowed actual extremist content to proliferate.
This pattern—amplify the extremists, suppress the activists and analysts—isn’t an enforcement error.
The weaponization of platform policies specifically against researchers predates Musk’s ownership. In December 2021, NPR and the Washington Post both reported that far-right activists were coaching followers on how to abuse Twitter’s new privacy policy to force removal of photos shared by anti-extremism researchers and journalists. Researcher Gwen Snyder had documented a public Proud Boys march in Philadelphia, identifying extremists alongside people affiliated with the Republican Party. At 2:30 AM she received a notification that her thread had been reported and that she had to delete it, appeal, or face permanent suspension.
Twitter later acknowledged the policy had been misapplied—documentation of a public event in a public space should never have been flagged. But the damage was done. When the platform was “overwhelmed with these reports, these coordinated reports,” its trust and safety team “messed up” and suspended researchers’ accounts. The system couldn’t distinguish between legitimate privacy concerns and coordinated abuse campaigns designed to silence documentation of extremist activity.
Multiple researchers described the result as a “chilling effect” on their ability to publish real-time reports and reach press and politicians. Neo-Nazis and far-right activists had discovered something useful: the platform’s own policies could be turned into weapons against anyone tracking their movements.
That was under the old management. Under Musk, the dynamic shifted from exploitation to alignment.
After Elon Musk’s October 2022 takeover, the platform took several steps that empowered extremists directly. The Anti-Defamation League documented both an increase in antisemitic content and a decrease in moderation of antisemitic posts—a trend that accelerated with reported cuts to Twitter’s content moderation staff. A USC study published in February 2025 found that hate speech increased 50% over the study period, with transphobic slurs up 260%, homophobic tweets up 30%, and racist tweets up 42%. Engagement grew too: daily likes on hate speech posts rose 70%, versus 22% for a random sample of English-language tweets.
Musk’s “amnesty” for suspended accounts reinstated prominent extremists like QAnon figures Romana Didulo and Brian Cates. The decision to sell verified blue checkmarks allowed white supremacists like Richard Spencer and Jason Kessler—organizers of the deadly 2017 Unite the Right rally in Charlottesville—to regain verified status alongside figures like Lauren Southern, who had promoted “white genocide” conspiracy theories, and Chaya Raichik’s anti-LGBTQ account Libs of TikTok.
The Counter Extremism Project’s monitoring has consistently found extreme-right and neo-Nazi content on X, including accounts affiliated with white supremacist Active Clubs operating internationally, videos glorifying terrorist attacks, and manifestos from mass shooters. In one documented case, a verified account posted a notorious antisemitic video that received nearly one million views within two weeks.
The acceleration of virulent antisemitism on the right wasn’t coincidental. It occurred after Musk reinstated Nick Fuentes, an antisemitic white supremacist influencer, in May 2024. Fuentes remains banned on other major platforms, including Meta and YouTube. But on X, he’s welcome—and the algorithm rewards engagement.
In February 2023, President Biden tweeted support for the Philadelphia Eagles during the Super Bowl. His post generated nearly 29 million impressions. Musk also posted in support of the Eagles. His tweet got about 9 million impressions before he deleted it. According to Platformer’s reporting, Musk flew to the Bay Area that night to demand answers from his team. At 2:36 AM, his cousin James Musk sent an urgent Slack message to company engineers: “We are debugging an issue with engagement across the platform. Any people who can make dashboards and write software please can you help solve this problem. This is high urgency.”
Eighty engineers were pulled into the project. The “fix” they implemented: code that excluded Musk’s tweets from filters designed to improve timeline quality and artificially boosted them by a factor of 1,000 using a tool internally called the “power user multiplier.” A tool applied only to Musk.
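To make the mechanics concrete, here is a minimal, purely illustrative sketch of what a per-author “power user multiplier” in a ranking pipeline could look like. This is not X’s actual code; every name in it is hypothetical, and the only details carried over from Platformer’s reporting are the 1,000x factor and the quality-filter bypass.

```python
# Illustrative sketch only -- not X's code. All names are hypothetical; the 1,000x
# multiplier and the quality-filter bypass are the details reported by Platformer.

AUTHOR_BOOSTS = {"elonmusk": 1_000.0}  # per the reporting, applied to a single account

def passes_quality_filter(post: dict) -> bool:
    # Stand-in for the heuristics that normally demote low-quality posts.
    return post.get("quality_score", 0.0) >= 0.5

def rank_score(post: dict) -> float:
    base = post.get("engagement_score", 0.0)
    boost = AUTHOR_BOOSTS.get(post["author"], 1.0)
    # Boosted authors skip the quality filter; everyone else can be zeroed out by it.
    if boost == 1.0 and not passes_quality_filter(post):
        return 0.0
    return base * boost

posts = [
    {"author": "elonmusk", "engagement_score": 9.0, "quality_score": 0.4},
    {"author": "someone_else", "engagement_score": 29.0, "quality_score": 0.9},
]

# The boosted account tops the timeline regardless of how the raw engagement compares.
timeline = sorted(posts, key=rank_score, reverse=True)
print([p["author"] for p in timeline])  # ['elonmusk', 'someone_else']
```

The point of the sketch is that a change like this doesn’t tune relevance; it overrides it. A single hard-coded constant decides whose posts every user sees first.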
Users across the platform complained about seeing an abundance of his posts. Musk acknowledged the situation with a meme showing “Elon’s Tweets” force-feeding “Twitter” a bottle of milk. He later posted: “Please stay tuned while we make adjustments to the uh… ‘algorithm.'”
The boost was eventually reduced below 1,000x—but not eliminated. A current Twitter employee told Platformer: “He bought the company, made a point of showcasing what he believed was broken and manipulated under previous management, then turns around and manipulates the platform to force engagement on all users to hear only his voice.”
This wasn’t a one-time tantrum. To encourage subscription sign-ups, Musk made blue-check replies show up first under all posts. He was amplifying the speech of his paid-up fans while demoting everyone else’s. It also ensured that the most visible replies under any popular post—especially his own—would be largely from people who shared his increasingly reactionary worldview. As Jacob Silverman writes in Gilded Rage, “More than any other change, this one act cemented Twitter’s new identity as a right-wing media platform that acted as an extension of Musk’s own political beliefs, paranoid suspicions, midnight musings and personal interests.”
Research confirms what users experience. A Harvard Kennedy School study found that after Musk’s acquisition, contentious actors on the platform saw a sizeable boost in post engagement. The author noted that “another explanation for the results I observed is that there were some unobserved changes to the Twitter algorithm, granting certain users more visibility. This would align with recent research that finds a modest right-wing bias in the Twitter amplification algorithm.”
A 2025 study analyzing the 2024 U.S. presidential election found that the algorithm amplifies conservative figures more heavily in right-leaning timelines while de-amplifying left-leaning voices. The pattern was consistent: right-wing influencers received disproportionate visibility, while liberal-leaning accounts were systematically suppressed. As the authors put it, “X’s algorithm skews exposure toward a few high-popularity accounts across all users, with right-leaning users experiencing the highest level of exposure inequality.”
The Brookings Institution’s analysis was blunter: in six of seven countries studied, right-wing political content received higher algorithmic amplification than left-wing content. Germany was the only exception.
While extremist content flourished, those documenting it faced escalating hostility—now from the platform itself.
In July 2023, X sent a letter to the Center for Countering Digital Hate threatening legal action over their research documenting hate speech on the platform. Imran Ahmed, the center’s founder, told the Associated Press that his group had never received a similar response from any tech company despite years of studying social media, hate speech, and extremism. Typically, companies responded by defending their work or promising to address identified problems.
“This is an unprecedented escalation by a social media company against independent researchers,” Ahmed said. “Musk has just declared open war. If Musk succeeds in silencing us, other researchers will be next in line.”
The pattern extended beyond legal threats. Elizabeth Blakey documented in Contexts how X cut off free academic access to its data—access that had existed for 17 years as a social contract enabling research on elections, hate speech, and other crucial public issues. The USC researchers studying hate speech on the platform had their access cut off mid-study due to a policy change replacing free academic access with prohibitively expensive API pricing.
Republican Representative Jim Jordan launched investigations targeting researchers like Kate Starbird of the University of Washington, whose work focused on identifying dangerous rumors in social media and protecting electoral integrity. The message was clear: document extremism on X and face consequences from both the platform and its political allies.
Edward Perez, a former Twitter director for civic integrity, stopped using X entirely in April 2024. Before he left, he wrote a postmortem. He called Musk “a poster child for divisive racist, sexist, and plutocratic tendencies that undermine democracy’s commitment to equality for all.” But the line that stuck with me: “Musk’s willingness to burn down what he purchased suggests that he’s motivated by a perverse righteousness, not profit.”
Jacob Silverman, summarizing Perez’s argument in Gilded Rage, put it bluntly: “Musk’s free-speech absolutism was a fiction perpetuated by a pliant media.” In practice, Musk bowed to authoritarian governments and banned critical journalists when their reporting annoyed him. He told advertisers to fuck off because they didn’t like his posts and the Nazi-friendly environment he’d created. It wasn’t about principle. It was about Elon Musk.
Since August 2025 I’ve been publicly documenting extremist rhetoric and democratic backsliding—timestamped, evidence-based analysis applying frameworks from Hannah Arendt and genocide studies to contemporary politics. In September and October 2025, I got loud about the dangers of extremism growing on the Right. I pissed off a lot of people.
In mid-October my reach plummeted to roughly 95% below what accounts of my size and content type typically see (based on industry-reported benchmarks), despite an engagement rate 2.6 to 6 times the standard for micro-accounts (a conservative estimate), and it has never recovered. When people do see my content, they engage at rates that should signal the algorithm to amplify it. Instead, my top post of the past seven days, which outperformed my average post by an order of magnitude, reached only 17.5% of my four thousand followers: roughly 700 people.
I can’t prove causation. Maybe it’s mass reporting from coordinated bad actors exploiting the same policies documented since 2021. Maybe it’s related to my blocking nearly 200 bots and suspicious accounts in a single day. Maybe it’s the Grok-powered algorithm rollout in October. Maybe it’s some combination. But documented research shows other anti-extremism accounts experiencing the same suppression pattern. Researchers and activists face legal threats, data access cutoffs, and political investigations for doing exactly the work I’m doing at smaller scale. I’m just one data point, but I fit the larger pattern.
And that makes me raise an eyebrow. Or three.
This isn’t algorithmic incompetence. It isn’t unintended consequences of legitimate policy decisions.
Musk gutted trust and safety infrastructure. He reinstated banned extremists including neo-Nazis. He sold verification to anyone with a credit card. He ordered engineers to boost his own posts by a factor of 1,000 and ensure paid subscribers’ replies dominated every conversation. He threatened legal action against researchers documenting hate speech. He cut off academic data access. He promoted far-right figures and conspiracy theories to his 200+ million followers. His platform’s algorithm systematically amplifies right-wing content while suppressing left-leaning voices—or even politically neutral anti-extremists.
Right-wing extremists understand what’s happening. Active Clubs celebrate that X finally lets extreme-right messaging flourish. Nick Fuentes is welcome on X but banned everywhere else. The grooming gangs discourse that Musk amplified in January 2025—51 posts generating 1.2 billion engagements—was dominated by Islamophobic and racist content promoting hatred against Muslims and immigrants.
The researchers understand what’s happening. They’re being silenced, investigated, threatened, and defunded for documenting the patterns the platform creates.
When platforms amplify extremist content while suppressing the people documenting it, they’re not neutral. They’re doing the work the extremists can’t do alone—carrying the message beyond the silos, into the mainstream, while kneecapping anyone who tries to expose what’s happening. The “free speech” framing is cover. This is platform capture for a specific political project.
And it isn’t a bug. It’s the system working precisely as its owner built it to work.