Social media platforms linked to human trafficking, UN report finds
Social media platforms are “primary pathways” for organisers of human trafficking to recruit victims as part of a wide-ranging online scam industry centred in South-East Asia.
This is according to a report published last month by the Office of the United Nations High Commissioner for Human Rights (OHCHR). The report found that Facebook, in particular, was “the most widely used platform” for trafficking recruitment.
Other platforms used in recruitment include TikTok, YouTube, Snapchat, Instagram, Xiaohongshu, Zalo, Viber and WeChat.
Victims of trafficking describe being pulled into schemes via false advertisements circulated on social media platforms. Once involved, victims are themselves coerced into perpetrating online fraud across a range of schemes, from fake gambling websites and cryptocurrency investment platforms to impersonation scams, online extortion and other forms of financial fraud.
According to the report, cybercrime headquartered in South-East Asia has “taken on industrial proportions”, with at least 300,000 people originating from 66 countries involved in scam workforces.
Scam operations are “entrenched and well-resourced”, and while accurate assessments of the scale of the market are challenging, the report notes that the industry’s annual global profits have been estimated at $64bn. That would make the South-East Asian scam market roughly equivalent to Myanmar’s GDP.
In the Mekong region alone, the scam industry is estimated to be worth over $43.8bn annually.
The UN report concludes the compounds used by the scam operations are “sites of serious human rights abuses”, including through the perpetration of a “shockingly high level of physical and psychological violence”.
Victims describe being refused permission to leave, while independent investigations have confirmed that the majority of victims have experienced or witnessed torture or other serious human rights abuses, including death.
Facebook’s scam issue
Victims of trafficking operations report first encountering recruiters through job postings in employment groups on social media platforms, particularly Facebook.
Recruiters then shift conversations from public social media posts to private messaging apps, including Telegram, WhatsApp, Signal, or other country-specific platforms that use end-to-end encryption.
In interviews conducted by researchers, victims said they trusted the job postings on social media platforms because the posts were often first shared by people they knew. One Thai woman, for example, says her partner forwarded her a Facebook post advertising employment in a restaurant that was in reality a front for a trafficking operation.
It is unclear whether scam recruiters are paying for advertising on social platforms or relying solely on organic efforts. However, the UN report notes that job postings are often “familiar and aligned with the aspirations of the victims”, suggesting that recruitment efforts could be tailored to potential victims based on some level of user data.
Meta’s own internal estimates suggest it derives a substantial proportion of its revenue from scam ads. A November report from Reuters reveals the tech giant internally projected that it would earn about 10% of its overall annual revenue in 2024 (equivalent to $16bn) from such ads.
At the LEAD conference in London last month, the company’s security policy manager for community defence, Rima Amin, disputed the reporting, arguing instead that “real scams” on Meta platforms “might” have accounted for 3-4% of Meta’s total annual revenue in 2024 (between $5bn and $7bn).
Amin also revealed, however, that just 55% of Meta’s 2024 ad revenue was derived from verified advertisers. In 2025, she said, this figure rose to 70%. Meta is therefore knowingly taking money from a large number of unverified accounts, including potential scammers.
The issue is likely to worsen as AI is deployed for scam operations. This includes generating scripts and multilingual content, identifying targets for scams, producing deepfakes for impersonation, and facilitating money laundering.
The UN report also found evidence that indicates criminal actors are deploying AI models to scrape social media platforms in search of individuals who display indicators of financial difficulties or employment-related distress, with the aim of exploiting these vulnerabilities during fraudulent recruitment efforts.
A lack of moderation and support
The platforms have largely failed to address the issues raised by trafficking victims.
The UN report notes that TikTok Philippines last summer signed a memorandum of understanding with the Philippine Department of Migrant Workers to bolster its content moderation systems so that they flag and remove scam accounts.
However, multiple survivors told researchers that recruiters continue to operate on social media platforms despite personal efforts to report them.
Indeed, one Bangladeshi survivor said that scam recruiters continue to operate on the Facebook group through which he was recruited, despite his attempts to flag the operation to moderators.
Worse still, the report notes that civil society groups and victim advocates have reported challenges in using the same social media platforms to warn their communities about scam operations. While reported trafficking recruiters remain online, multiple accounts set up to disseminate counter-trafficking messaging have been taken down.
Advocates allege that inadequate content moderation and unbalanced moderation algorithms are likely the culprits.
Content moderation is challenging for several reasons. Not only are non-English content moderation efforts generally understaffed, but platform policies may also prevent moderation within private groups.
Likewise, the status quo of legitimate job postings on social media in these regions is already so exploitative that it can be challenging to distinguish between what is and is not a scam based solely on the job advertisement.
Rather than improving human-led content moderation, Meta has intentionally weakened its content moderation policy. In January 2025, CEO Mark Zuckerberg said the company would “catch less bad stuff” on its platforms as it halted its third-party fact-checking programme and updated its hateful conduct policy.
This month, the advertising giant announced it was further reducing its reliance on third-party content moderators, opting instead to use AI systems for these tasks.
The UN report stresses the importance of taking a multi-stakeholder approach to cooperatively tackling the interconnected issues of human trafficking and scams perpetrated online via social platforms, one that includes victims, law enforcement, civil society groups and tech platforms.
Meta did not respond to a request for comment. Spokespeople for TikTok and Snap declined to comment on the record.
