
Inside the Grok CSAM scandal and how brands have faced ‘weaponised political pressure’ to spend with X


Advertisers, influencers, publishers, and other organisations have once again been placed in a bind about whether to leave or disinvest from the microblogging platform X due to content moderation concerns. This time it’s over xAI’s chatbot, Grok, which has been posting AI-generated nude and sexual imagery — based on photos of real individuals — in response to user queries on the platform.

In several instances, Grok has generated nude AI images of minors. Nell Fisher, the 14-year-old actress who stars in the latest season of Stranger Things, was one such high-profile victim.

Another, Ashley St Clair, the mother of one of Musk’s children and a victim of users’ attempts at nudification on X, said she felt “violated” and that Grok amounts to “another tool of harassment”.

Speaking to The Guardian, St Clair said she complained to X, which failed to remove an AI-generated image of her as a child that had been “undressed by Grok.”

In a further example, shared by Bellingcat founder Eliot Higgins, one user asked Grok to create a series of sexualised images of Swedish deputy prime minister Ebba Busch.

One example of how Grok is being used to target women. Swedish Deputy Prime Minister Ebba Busch being sexualised, degraded, and humiliated step-by-step by Grok. All the images accurately reflect the prompts provided.


— Eliot Higgins (@eliothiggins.bsky.social) January 5, 2026 at 5:37 PM

Other users, Higgins posited, have begun posting images of celebrities to drive engagement, anticipating that others will ask Grok to sexualise them. Common prompts for doing so include asking the chatbot to “put her in a string bikini” and “make her breasts larger”.

In response to the public backlash and concerns over the legality of hosting a chatbot that actively creates and disseminates child sexual abuse material (CSAM) and other pornographic imagery on the platform, X owner Elon Musk posted last week: “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

When asked for further comment, an X spokesperson referred The Media Leader to a post from X’s safety account published over the weekend. It reads: “We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.”

Ofcom to ‘undertake a swift assessment’

However, X does not appear to have sufficiently intervened thus far, with users continuing to ask Grok to create a swathe of AI-generated nude or lewd images of both public and private figures.

UK regulator Ofcom is investigating the matter. An Ofcom spokesperson told The Media Leader: “Tackling illegal online harm and protecting children remain urgent priorities for Ofcom.

“We are aware of serious concerns raised about a feature on Grok on X that produces undressed images of people and sexualised images of children. We have made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK.

“Based on their response, we will undertake a swift assessment to determine whether there are potential compliance issues that warrant investigation.”

The spokesperson declined to provide further details “about ongoing operational matters”.


It is illegal in the UK to create or share non-consensual images or CSAM, including “sexual deepfakes” created with AI.

Under the Online Safety Act, tech companies are required to assess and reduce the risk of users encountering illegal content on their platforms; pornographic content must also have “robust age checks” to prevent under-18s from accessing that material. However, the required age-verification technology is questionable in its reliability, may infringe upon user privacy, and can be skirted with VPNs.

The issue has spread to other platforms beyond X. On Reddit, for example, the subreddit r/Grok_Porn currently has tens of thousands of users, and a pinned post provides prompting tips.

New Isba director general Simon Michaelides told The Media Leader the “wave of misogynistic and CSAM content being generated by Grok in recent days is extremely concerning”, and welcomed Ofcom’s move to “swiftly” contact X.

He continued: “This is, of course, illegal content under the UK’s still relatively new Online Safety Act, which requires platforms to be transparent about harmful material. That transparency is exactly what advertisers require in order to make their individual decisions about where to place adspend and campaigns.

“Every platform, including X, must be clear about their content policies — including monetisation — and provide assurances that they are being effectively implemented. Advertisers want to operate in a brand and user safe environment.”

AI creates an uphill content moderation battle

Ruben Schreurs, CEO of media investment analysis firm Ebiquity, told The Media Leader that the content moderation issue involves two separate companies: xAI, which operates Grok, and X, which hosts the content Grok creates on behalf of its users. Musk owns both.

“For any company, it is an uphill battle to moderate, especially with these generative AI platforms,” Schreurs said. “All of them struggle with people trying to circumvent content moderation limits.”

Yet X has taken an especially laissez-faire stance on content moderation, gutting its Trust and Safety teams and apparently changing its algorithm to better reflect Musk’s far-right ideology.

According to Schreurs, Grok also appears more “lenient” than ChatGPT or Gemini in allowing users to create harmful content.


While Ofcom is looking into the matter, Schreurs believes that there is an “utter lack of accountability and enforcement when it comes to these AI companies,” adding that they are “all, at a massive scale, clearly infringing human copyright laws”, including by creating nude images of individuals based on photos.

Part of the issue is, who should be held accountable for such illegal behaviour? Is it the user who directed the chatbot to create the material? The person in charge of the chatbot’s model? The AI company as a business entity? The CEO of that company?

“There is a huge amount of legal uncertainty at the moment that is clearly overwhelming the entire world’s legal systems,” Schreurs said.

‘It’s horrific and abhorrent and needs to be stopped’

Regardless of the potential legal consequences for X and its users, the ethical imperative for business leaders is less questionable.

David Wilding, a former planning director at Twitter who now works as a strategist for WPP Media, responded to the news with a post on LinkedIn in which he described the situation as “Vile, grotesque little weirdos doing vile, grotesque things.”

He added: “One man could stop this happening relatively easily, no?” referring to Musk.

Reached by The Media Leader for further comment, Wilding said he “defied anyone not to be appalled” by Grok’s nude image generation and its impact on women and children.

“It’s horrific and abhorrent and needs to be stopped,” he said. “And a lot of the ‘as the dad of daughters’ guys from a few years back seem to have gone very quiet. […] It’s (another) signal that we’re at a real risk of sleepwalking into an AI world being controlled by tech bros without nearly enough consideration of the consequences for society, for women and for all of us.”

Bruce Daisley, Twitter’s former EMEA VP and an outspoken critic of Musk, said he hopes “2026 will be the year that his foul behaviour catches up with him”, including through potential action by Ofcom.

When asked whether he thinks brands should leave the platform or halt advertising, Daisley added: “If there’s any brand still using X in 2026, I don’t think they’re going to listen to what reasonable [people] think.”


For Schreurs, the calculation for brands is simple: “If I’m a brand advertiser, I just simply do not want my ads to appear next to CSAM, illegal or hateful content that doesn’t align with my brand. I want to prevent that negative halo from influencing my brand.”

The simplest business decision a brand can therefore make, Schreurs concludes, is to avoid spending on any platform, including and especially X, that does not have active and sufficient content moderation standards.

Taking it a step further, businesses and public figures should also consider the risk of merely remaining active on the platform, he continued. Some individuals and brands have removed their profiles or stopped posting on X in the years since Elon Musk took over what was once known as Twitter, with many active on alternatives like Bluesky or Meta’s Threads.

Advertisers under ‘weaponised political pressure’ to spend with X

On the other side, Schreurs admitted, the platform still retains “a huge amount of eyeballs” that some brands might care to reach.

Perhaps more importantly, making a show of leaving X risks putting business leaders “in the crosshairs of Musk and his circle”, particularly for organisations with limited legal resources.

In 2024, Musk sued the World Federation of Advertisers (WFA), the Global Alliance for Responsible Media (GARM), and several individual brands for allegedly engaging in an “illegal conspiracy” to boycott the platform.

Musk has remained close to US President Donald Trump even after their apparent feud last year: he dined with Trump at the president’s Florida club and residence, Mar-a-Lago, last week and said he intends to “go all in” supporting the Republican Party ahead of this November’s midterm elections. He has also used his leverage with the administration to his own benefit.

The Wall Street Journal reported last year that Musk has used his influence to “strongarm” advertisers into spending on X, including by directly threatening to sue more advertisers if they don’t.

Interpublic Group (IPG) and Publicis Groupe subsequently yielded to pressure, agreeing to spending deals with X, even though buying ads on the platform may not have been in their clients’ best interests.

The issue was potentially exacerbated last year when the US Federal Trade Commission (FTC) approved Omnicom Group’s acquisition of IPG on the condition that Omnicom would not collude or coordinate “to direct advertising away from media publishers based on the publishers’ political or ideological viewpoints”.

While the FTC did not mention X or any specific media owner by name, the language appeared to address similar grievances as Musk’s allegation that “disfavoured political or ideological viewpoints” have received less advertising support.

As Schreurs explained, the newly merged Omnicom “can’t decide on behalf of clients anymore, or offer any kind of syndicated inclusion or exclusion list,” which could be construed as directing advertising away from media owners like X because of politicised hate speech.

“Each advertiser would have to explicitly demand from Omnicom where they do not want to run [ads] and where they do, and Omnicom technically cannot in any way influence that decision or provide free set-up inclusion or exclusion products,” he said.


Schreurs called such a development “weaponised political pressure” aimed at forcing “advertisers to spend money on platforms that they may not want to advertise on due to brand risk and reputational risk.”

Schreurs shared that, due to his outspoken opposition to Musk and the Trump administration, he “can’t travel to the US right now” after his then-valid ESTA was suddenly cancelled in November.

While Schreurs denied he had been explicitly banned from entering the country (unlike the five European social media critics who were banned last month), renewing his ESTA would require him to provide access to his social media posting history.

Speaking more generally, Schreurs said every company and individual must decide for themselves whether to cave to threats to remain on X, even amid the latest concerns over Grok.

He added: “But every company, every individual should be allowed to decide for themselves, without vindictive, weaponised litigation over a pretext of it being a conspiracy against Musk or the political right wing in the US.”


Editor’s note: This article has been updated after publication to include additional comment from Isba.
