The FTC’s politically motivated inquiry into “tech censorship” has managed to prove exactly the opposite of what it intended: the government agency is now actively censoring public comments from people complaining about being censored by tech platforms.
It’s almost too perfect. The FTC, under Chair Andrew Ferguson, launched what it called an investigation into “tech censorship” back in March. The investigation was based on the repeatedly debunked idea that social media platforms were unfairly silencing conservative voices. (This was already odd timing, given that Elon Musk had long ago turned ExTwitter into a non-stop Trump rally and Zuckerberg was eagerly aligning Meta with Trump, but consistency has never been the point here — unconstitutional coercion of anyone not sucking up to Trump is the point.)
But Ferguson seems committed to the bit, effectively making it clear that all platforms are expected to promote pro-Trump content… or else.
As the comment period continues through May 21st, Daphne Keller spotted something remarkable: The FTC itself is actively censoring submissions about censorship. And yes, this time "censoring" is the right word, because it's the government doing it.
Take Michael Dukett’s submission. This self-described “Concerned American Patriot” complained about TikTok “constantly removing comments and censoring my free speech,” attaching screenshots of his removed comments as evidence. The irony? Many of those screenshots were themselves removed by the FTC for containing “profanity” or being “inappropriate.”
Of Dukett’s twenty submitted screenshots, the FTC blocked nearly half — five for being “inappropriate” and four for “profanity.” The very same kinds of moderation decisions he was complaining about TikTok making. Even more telling? The screenshots the FTC did allow through included threats about shooting home intruders and various personal attacks — content that clearly violated TikTok’s community guidelines.
It’s almost as if the FTC is learning in real time what every platform eventually discovers: open systems need moderation, including the ability to remove “otherwise objectionable” content.
And Dukett's case is far from unique. A scan of the more than 2,000 comments in the docket reveals a pattern: the FTC practicing exactly the kind of content moderation it's supposedly investigating.
One commenter railed against “horrific censorship” after supposedly losing 34 Instagram and 33 TikTok accounts — only to have their FTC submission partially blocked for sharing personally identifiable information.
Even more striking is “Jo Sullivan,” who requested an FTC investigation into platform moderators while calling for “the right to express ourselves within reason.” The FTC’s response? Blocking or redacting 16 of their 20 attachments for — you guessed it — inappropriate content and personal information.
Gosh.
The crazy thing here is that while private platforms have a First Amendment right to moderate as they see fit, the federal government does not. The FTC hiding these comments on its platform is, quite possibly, a First Amendment violation. Comments blocked merely for "profanity" are almost certainly protected speech.
But the FTC's actions inadvertently prove what platforms have known all along: Any open system needs moderation rules and enforcement, or it quickly fills up with inappropriate content, profanity, and personal information. The FTC's own comment system demonstrates the Masnick Impossibility Theorem in action — content moderation at scale is impossible to do well, and someone will always complain about the decisions made. Those complaints don't mean there is unfair bias. Sometimes they just mean that some users are assholes.
The ultimate irony? Apparently the best way to determine if Andrew Ferguson thinks certain content moderation practices are acceptable is to submit specific decisions to his “tech censorship” investigation and see if the FTC itself censors them.
At this rate, Ferguson might just need to investigate himself for all this anti-conservative bias censorship.