X Pulled The Plug On Parody — Has Elon Musk Lost His Sense Of Humor?


The move is meant to crack down on fake accounts, but it could be argued this is a problem that X brought upon itself by removing the old verification system

White House Senior Advisor, Tesla, and SpaceX CEO Elon Musk speaks during a Town Hall event at the KI Convention Center on March 30, 2025 in Green Bay, Wisconsin. (Photo by Joshua Lott/The Washington Post via Getty Images)

It's not a joke: X has begun cracking down on parody accounts as part of a policy to bring greater transparency to the platform. Earlier this month, X's @Safety account announced it would roll out new updates on the social media platform formerly known as Twitter.



The update was aimed at "Parody, Commentary, and Fan (PCF) accounts." Beginning on Thursday, all such accounts were required to include "PCF-compliant keywords" to help distinguish them from official accounts, and PCF accounts were further told to avoid using identical avatars. This was meant to address impersonation issues on X, which have persisted since the platform dismantled the legacy verification system in favor of subscription-based verification.

The problem has been ongoing for a while, but as Mediaite reported, there has been an increase in an "impersonation free-for-all that spawned everything from fake corporate announcements to Musk lookalikes running crypto scams." It could be argued this is a problem that X brought upon itself and is only now addressing with an ad hoc solution. Even if it solves the problem in the long term, which is far from certain, it could create bigger issues in the near term.

"The new labeling requirements for parody accounts on X have seemingly straightforward consequences: they aim to reduce confusion by making impersonation more obvious and humor more clearly identifiable. But there's a deeper concern involving the unintended consequences," warned Rob Lalka, professor at Tulane University's Freeman School of Business and author of The Venture Alchemists: How Big Tech Turned Profits Into Power. Accounts are now required to comply, but enforcement could be difficult, given that X has seen its staff gutted and relies heavily on algorithms.

"It seems as if he is trying to fix a problem he created when he bought the platform and monetized the verified badge. It used to be easier to identify a fake account when the platform was called Twitter, because celebrities and notable brands were given a blue checkmark to authenticate their accounts," explained Tamara Buck, J.D., professor of mass media and chair at Southeast Missouri State University. "When Musk moved the verification program to a paid subscription model for anyone with an active account, it created a surplus of verified account holders who pirated more recognizable names for parody and more nefarious activities," said Buck. That didn't really seem to be an issue, at least until the number of parody accounts related to a certain tech entrepreneur and billionaire increased.

"A flurry of Musk parody accounts popped up after he first took over Twitter, and that's when he created the new verification system," added Buck. "We see them increasingly now that he has entered the political realm, and these accounts are either mocking him or using his name to defraud people. It's very likely this is contributing to the new lockdown."

The order to clearly distinguish what is parody on the platform won't just be hard to enforce; it also leaves unresolved the question of why this wasn't done before – or whether it is truly a case of Musk not caring until he was the subject of so much satire and parody. Now the proverbial genie is out of the bottle, and Musk is trying to put it back in. The problem may run even deeper still, as the policy doesn't address "fake" accounts at all. "There have been several studies done in the last five years that indicate the vast majority of social media accounts, and especially those on X, are not human," suggested Buck.

"Instead, these accounts are managed by bots, which are software programs that mimic human activity like posting, liking, sharing, and following." In many cases, all a human programmer has to do is make their subscription payments on time, and the bot does all of the work. "Even though many social media users are aware of the bots, this hasn't stopped people from interacting with them in ways that cause anger, frustration, and even loss," said Buck.

"It's an interesting phenomenon, because it can be difficult but not impossible for people on the receiving end of the bot communication to distinguish the bot from a human."

The other notable aspect of this new directive is that it is focused on accounts that are seen to impersonate an individual. So would obvious humor accounts – like those of the Babylon Bee and The Onion – fall into the same category? What about someone who posts a meme that is satirical, or a comment that is meant to be sarcastic? Then there is the question of what happens when accounts could be confused with another person or organization, whether intentionally or not. "When only certain content is labeled as parody, some users might assume that everything without a label is more trustworthy," said Lalka.

That could risk creating a false sense of certainty in an ecosystem where generative AI already makes it difficult to tell fact from fiction, especially when verification is simply a matter of paying a monthly fee to X. "In this kind of environment, users will need to slow down, reflect, discern, and ask harder questions, including where does this come from? Who's funding it? Does it align with my values, or just confirm my biases? The ways misinformation spreads only deepen this challenge," warned Lalka. Even if labeling works at the source, content on platforms like X just as often travels through screenshots, quote posts, and third-party sharing.

All of that further strips away the original context. "That means even clearly marked 'parody' posts can still mislead once they're detached from the label, especially when amplified by verified accounts whose credibility is bought, not earned," said Lalka. As a result, it remains an open question whether these new labels will help users on X draw clearer lines between what's meant to be funny, what's purposefully deceptive, what's sincere truth-seeking, and what's just entertainment, said Lalka.

He added, "As always, the real-life outcomes of our activities online remain both a corporate responsibility and a personal one. Drawing the line between fact and fiction requires more than labels, and without critical thinking and discernment, that line will remain dangerously blurry, potentially with serious results." Whether this makes the spread of misinformation worse isn't clear, but it is possible.

"It actually may help, even though it's not as good a fix as the one that was in place previously," Buck continued. "The onus is on the real people on the platform to pay closer attention to the accounts they follow and interact with, and to avoid implausible opportunities presented by people they don't know personally. Doubt everything, and verify, verify, verify."