Why social media content moderation is so difficult

Content moderation is a neglected backstop for businesses designed to draw in big audiences, and extract the strongest possible reactions from them in the process.


Shortly after Hamas launched its surprise attack on Israel last October, channels used by the terrorist group on Telegram lit up with images of the ensuing atrocities. The response? A restriction of those channels that reportedly didn’t do much restricting. Nearly a year later, Telegram’s steady cultivation of a reputation as a free-speech bastion, sometimes to an extreme degree, has hit a glitch with the arrest of its founder and CEO in Paris.

Pavel Durov has been compelled to remain in France, where he’s under investigation for alleged complicity in criminal activity on his popular platform. “I got interviewed by police for 4 days,” he posted on his channel. “No innovator will ever build new tools if they know they can be personally held responsible for potential abuse of those tools.”

The sudden jolt of legal scrutiny raises the question: How are these services normally moderated? That is, how are they expected to be moderated? Like so many other aspects of our increasingly automated existence, humans still play a central role. Office cubicles filled with contractors staring at screens and making snap decisions on content may have given way to artificial intelligence and machine learning, but frequently overworked and underappreciated people remain an essential last line of defense. Patience with the current way of doing things is waning, however, according to Institute for Strategic Dialogue CEO Sasha Havlicek.

“You have consistent, systemic failures in content moderation across the platforms,” she said. Content moderation has been called the hardest problem on the internet for a reason. In many ways it’s a neglected backstop for businesses designed to draw in big audiences, and extract the strongest possible reactions from them in the process.

Moderators may be relied on to instantly referee right-to-expression issues that philosophers have pondered for centuries. Of course, some calls are easier than others. There’s the “sharp tip” of the problem, as Havlicek calls it, in the form of clearly illegal and harmful activity – which sits alongside an “awful but lawful” grey area that’s more difficult to police.

Still, in much of the world, particularly the non-English-speaking parts, even the sharp tip frequently gets a pass, Havlicek said. “You now have an interesting situation where you’ve got the Five Eyes and the EU, minus the US, essentially moving towards regulation,” she said, referring to the five anglophone countries (Australia, Canada, New Zealand, the UK, and the US) that coordinate intelligence gathering. Yet, “there’s a danger in over-stoking” free-speech absolutists who already equate efforts to regulate social media with repressive censorship, Havlicek warned.

“So it’s going to be a tricky few years ahead.”

One readily addressable issue seems to be a dearth of skilled content moderators. Disclosures made by the biggest platforms to comply with the EU’s Digital Services Act (nascent but potentially “game-changing,” according to Havlicek) include tallies of moderators fluent in local languages.

X’s most recent report, for example, shows 1,535 moderators listed as fluent in English, 29 in Spanish, two in Italian, and one in Polish. “Even 20 people per country feels a little bit slim,” Havlicek said. “These are the best-served markets in terms of content moderation that exist, so you can only imagine what that means elsewhere.”

The psychological toll often paid by content moderators is now widely understood. It’s even provided the basis of a Broadway play – the protagonist of JOB is a troubled woman who “must eliminate some of the most incomprehensibly egregious content from the internet,” according to Playbill. “People doing this work now are not getting the pay or protections needed,” Havlicek said.

Moderators have filed related lawsuits in multiple countries. Eye-opening accounts in news reports have been published for at least a decade. Havlicek’s organization, which works with governments and platforms to chart a safer and more stable way forward, has its own team performing related research; strict rules are in place governing the hours anyone can spend on that research, and counselling is mandatory.

All of that seems to make content moderation a prime candidate for automation. But skilled people are still necessary for more complex matters – judgements on the use of irony, for example, or scrutiny of in-depth takes on a particular political situation. Bigger picture, Havlicek said, it’s not just about removing or labelling problematic pieces of content.

It’s really about curation systems. That is, how an environment can be distorted through algorithms designed to keep users engaged, and ad revenue flowing – which pushes people into more “extreme spaces.” That’s a particular concern as geopolitical intrigue mounts and elections hang in the balance.

What if all it takes to spread disinformation is to simply amplify voices that would exist anyway – as the US Department of Justice alleges one company did in a recent indictment? By this point, platforms have a lot of practice identifying “state actor information operations,” Havlicek said, though they struggle to keep pace with constantly evolving tactics. Measures like a fully fledged EU Digital Services Act could be immensely helpful, she added. A key aspect of the legislation is enabling access to platforms’ data for independent researchers.

That’s meant to increase transparency and enable better assessment of risk. “We’re going to need to show that something like the DSA can work,” she said. The act’s most demanding obligations apply only to online platforms deemed “very large.”

Services should be accountable for user safety “especially at the point that they’ve got user bases big enough to have impacts on society writ large,” Havlicek said. Telegram isn’t quite there yet, but it’s getting close. The service has been one of many to argue that when moderation works well, no one notices.

“The claims in some media that Telegram is some sort of anarchic paradise are absolutely untrue,” Durov has written on his channel. “We take down millions of harmful posts and channels every day.” A recent posting for a content-moderator job at Telegram seeks applicants with strong analytical skills and “quick reaction.”

About five months prior to his arrest, Durov had mused that “all large social media apps are easy targets for criticism” of their moderation efforts. Telegram, he pledged, would approach the problem with “efficiency, innovation and respect for privacy and freedom of speech.” Following the arrest, he wrote that “establishing the right balance between privacy and security is not easy,” though his company would stand by its principles.

The following day, accompanied by a celebratory emoji, he announced that Telegram had reached 10 million paid subscribers.

The author, John Letzing, is Digital Editor at the World Economic Forum. This article first appeared on the World Economic Forum website. Read the original piece here.
