Social platforms’ ‘language blind spots’ in content moderation raise brand safety concerns
Millions of social media users in the EU are posting in languages without any human moderation oversight.
That is according to a new study of recently mandated transparency data under the EU’s Digital Services Act (DSA), conducted by researchers from the Oxford Internet Institute, University of Copenhagen, University of Michigan and Manning College of Information & Computer Sciences.
It found that 16m EU-based users of X “do not have moderators for their national language” — equivalent to 14% of the platform’s EU user base.
The same is true for 8m LinkedIn users (16% of EU user base) and 7m Snapchat users (7%).
Meanwhile, across platforms, languages primarily spoken in the Global South — including Spanish, Portuguese and Arabic — were found to “consistently receive proportionally fewer moderators than English”.
Researchers examined six major social platforms that are subject to the DSA: LinkedIn, Meta (Facebook and Instagram), Snapchat, TikTok, X and YouTube. Data on content moderation was drawn from transparency reports published between the summer of 2023 and the autumn of 2024.
While larger platforms like YouTube, Meta and TikTok were found to employ moderators for nearly all official EU languages during that period, X, LinkedIn and Snapchat were found to “have several language blind spots” with no reported human moderators.
More recently, TikTok has moved to cull human content moderation efforts, at least in the UK. The company has put hundreds of UK content moderation jobs at risk as it pivots to an AI-led moderation strategy.
The study is significant for leveraging DSA data; prior estimates of platforms’ content moderation practices have generally relied on leaked internal documents, as platforms have tended to keep their strategies opaque.
Still, most platforms only provided moderator counts for EU languages, meaning other moderation blind spots are likely to exist, according to the report.
Minimal Italian moderation at X
Elon Musk-owned X was singled out by researchers as having particularly severe underinvestment in moderation relative to the amount of content on the platform in non-English languages.
For example, there were only two languages with more moderators than English relative to their content volume: Bulgarian and German. However, researchers found that just one human moderator was responsible for Bulgarian’s relatively “low tweet volume”, which averaged 92,000 tweets a day.
Strikingly, Italian had “a similar number of moderators” to Bulgarian, despite posts in Italian being 78 times more prevalent than posts in Bulgarian.
Other underserved languages on X include Portuguese (which receives 9% of English’s allocation), Arabic (7%) and Spanish (7%).
In contrast, YouTube was found to allocate proportionately more moderation resources than it does to English for the majority of its covered languages.
“Our analysis reveals that millions of users operate on social media platforms that do not have human moderators fluent in their country’s official language,” the report concludes. “Among languages that have professional moderators, we also find stark disparities in workforce allocation.
“The disparities are particularly salient on X/Twitter, with most non-English languages receiving proportionally much less moderators relative to English. This includes widely spoken global languages like Arabic and Spanish.”
Notably, while researchers used the employment of English-language moderators as a baseline for comparison in their study, it’s not clear whether even that baseline level of staffing is sufficient.
For example, X was found to employ 27 moderators for every 1m daily posts in English — a figure that “tells us little about whether this level of staffing is sufficient”, the report stressed.
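For illustration, the relative-allocation comparison the researchers describe amounts to a simple calculation: moderator headcount per million daily posts in a language, expressed as a share of the English rate. The sketch below uses the report’s figure of 27 moderators per 1m daily English posts on X; the Portuguese moderator and volume numbers are hypothetical, chosen only so that the resulting ratio lands near the 9% the article cites.

```python
# A minimal sketch of the relative-allocation comparison described in the
# study: moderator headcount normalised per million daily posts, then
# expressed as a share of the English rate.

def moderators_per_million(moderators: int, daily_posts: int) -> float:
    """Moderators per 1m daily posts in a given language."""
    return moderators / (daily_posts / 1_000_000)

def relative_allocation(language_rate: float, english_rate: float) -> float:
    """A language's staffing rate as a fraction of the English rate."""
    return language_rate / english_rate

# English on X: 27 moderators per 1m daily posts, per the report.
english_rate = 27.0

# Hypothetical Portuguese figures (not from the report), chosen only so the
# resulting ratio lands near the 9% cited in the article: e.g. 12 moderators
# covering roughly 5m daily posts.
portuguese_rate = moderators_per_million(12, 5_000_000)  # ~2.4 per 1m posts

share = relative_allocation(portuguese_rate, english_rate)
print(f"Portuguese allocation relative to English: {share:.0%}")  # ~9%
```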
Brand safety concerns
Researchers speculated that a variety of factors could account for the language discrepancies.
Some platforms have opted to ensure they employ one moderator per language. Content volume per language on each platform, the prevalence of harmful content in each language and uneven regulatory pressure across markets are all likely to shape how platforms invest in this area.
Additionally, the authors suggested that “higher advertising revenues in wealthier regions may incentivise moderation in those languages” but noted that, equally, “elevated labour costs could also deter such investment”.
The implication for advertisers, particularly global brands that want to advertise in brand-safe, locally relevant environments, is that ads in non-English languages could run against unmoderated or loosely moderated content.
It’s also likely that content moderation issues have worsened since the period covered by the study.
This year, Meta moved to significantly curtail its moderation efforts, replacing them with an X-inspired strategy. The move prompted widespread concern from agency leaders and marketers, but did not lead to sustained boycotts as occurred in the case of X. TikTok has since followed suit.
Andy Burrows, CEO of the Molly Rose Foundation, this week criticised social media platforms’ efforts to improve user safety as a demonstrably ‘performative’ exercise in placating lawmakers and advertisers.
He told The Media Leader: “Social media companies continue to be in the business of performative gestures rather than really rolling up their sleeves and getting on top of this problem decisively.”
