
Government plans new powers to tackle online harms: ‘No platform gets a free pass’

Prime Minister Sir Keir Starmer today announced new measures aimed at cracking down on “vile illegal content created by AI” and other online harms.

“Today we are closing loopholes that put children at risk, and laying the groundwork for further action,” Starmer said. “We are acting to protect children’s wellbeing and help parents to navigate the minefield of social media.”

He said the Government is taking “new legal powers” to “lay the groundwork for immediate action” following the conclusion of its forthcoming consultation on children’s online wellbeing, which will consider whether to ban social media for under-16s.

The powers will allow the Government to act on the consultation findings “within months,” rather than waiting years for primary legislation to be enacted.

Policy changes under consideration include:

  • Setting a minimum age limit for social media, such as an under-16 ban.
  • Restricting features, including infinite scrolling.
  • Requiring tech companies to safeguard underage users from sending or receiving nude images of children (which are already illegal to create or distribute).
  • Options to age-restrict or limit children’s VPN usage if it undermines proposed safety protections.
  • Ensuring vital data is preserved, rather than deleted, following a child’s death where online activity may have been relevant to that death.

At an event in south-west London, Starmer said he was “open-minded” about what specific actions the Government would take. “I can see the argument for a ban, I can see the argument [for] a much more restrictive content-based approach, which doesn’t necessarily cut out news and useful information. I think we need to test those options in the consultation, but whatever the outcome, it can’t be nothing. It can’t be the status quo.”

The Government will also move to close a “legal loophole” by requiring all AI chatbot providers to comply with the Online Safety Act’s illegal content obligations, or face legal consequences. Under the Act, companies can be fined up to £18m or 10% of their qualifying worldwide revenue, whichever is greater.

The move follows widespread backlash against xAI’s chatbot Grok, which created and disseminated nude and lewd images of people on X. In a statement announcing the Government’s new effort, a spokesperson called Grok’s non-consensual image generation “abhorrent”.

An Ofcom investigation of X remains ongoing. The UK’s data protection regulator, the Information Commissioner’s Office (ICO), this month also launched an investigation into X and xAI over Grok’s creation and circulation of sexualised content. X’s French offices were meanwhile raided by the Paris prosecutor’s cybercrime unit earlier in February.

“Technology is moving really fast, and the law has got to keep up,” Starmer added. “With my government, Britain will be a leader not a follower when it comes to online safety.”

Open debate

Ministers have spent recent months engaging with parents, teens and civil society groups on issues of online harm, and say the message from parents has been “consistent and clear”: they want government intervention. The Government will launch a digital wellbeing consultation next month.

In the short term, the Department for Science, Innovation and Technology has launched the “You Won’t Know Until You Ask” campaign, which provides practical guidance to parents on safety settings on online platforms and age-appropriate advice for teens dealing with harmful content.

Public perception that social media is directly responsible for the growing teen mental health crisis has built over years, driven in part by high-profile suicides of young people, such as the 2017 death of Molly Russell, that were found to be linked to social media use. A new Channel 4 documentary about Russell’s death, Molly vs The Machines, is released on 1 March and has already been screened to advertising industry leaders.

More recently, a number of lawsuits have alleged that AI chatbots encouraged users’ suicides, compounding concerns about under-regulation.

Public outcry accelerated in 2024 with the publication of Jonathan Haidt’s book The Anxious Generation, which argued that increased smartphone use has been detrimental to children’s development. However, in a review of the book for the peer-reviewed journal Nature, Candice Odgers, a professor of psychological science and informatics at UC Irvine, said, “the book’s repeated suggestion that digital technologies are rewiring our children’s brains and causing an epidemic of mental illness is not supported by science. Worse, the bold proposal that social media is to blame might distract us from effectively responding to the real causes of the current mental-health crisis in young people.”

The book, nevertheless, was a driving factor in Australia becoming the first country to ban social media for under-16s last year. France’s lower house in January also passed legislation to ban social media for under-15s.

Australia’s ban led to Meta and Snap removing 500,000 and 415,000 accounts from their services, respectively.

Andy Burrows, CEO of the Molly Rose Foundation, the charity set up in the wake of Molly Russell’s death, commented that the charity “strongly welcome[s] the Government’s ambition to move quickly and decisively to tackle appalling and preventable harm”.

Burrows and Ian Russell, father of the late Molly Russell, have both argued, however, that banning under-16s from social media would be the wrong approach and lacks a basis in the scientific understanding of social media’s impact on teens’ mental health.

In a joint statement, the Molly Rose Foundation, children’s charity NSPCC, and dozens of other organisations said that, “Though well-intentioned, blanket bans on social media would fail to deliver the improvement in children’s safety and wellbeing that they so urgently need. They are a blunt response that fails to address the successive shortcomings of tech companies and governments to act decisively and sooner.”

The statement warns that under-16 bans risk “an array of unintended consequences”, including causing threats to “migrate to other areas online” and creating a “dangerous cliff edge” for 16-year-olds. The groups further noted that children, particularly LGBTQ+ and neurodiverse children, also “require platforms for connection, self-identity, peer support and access to trusted sources of advice and help”.

Clive Green, director of strategic planning at independent media agency Generation Media, told The Media Leader he agreed an under-16 social media ban “risks oversimplifying a complex issue”, given that social media is “deeply embedded in how young people communicate, learn and form identity”.

He added: “Experience in Australia suggests bans can shift behaviour rather than solve it, with patchy implementation and reports of teens bypassing checks via VPNs, fake birthdays or shared IDs. The most effective approach combines evidence-based regulation with safer platform design, including robust age-appropriate experiences, reduced addictive mechanics and clearer accountability, alongside improved digital literacy for children and parents.”

Bans have also been criticised by privacy advocates for requiring users to upload personal identification data, including images or IDs, to platforms without strict scrutiny of how such data could be used or sold.

NSPCC CEO Chris Sherwood agreed that the Government’s proposal “mirrors what we have been pressing for: proper age-limit enforcement, an end to addictive design, and stronger action from platforms, devices and AI tools to stop harmful content at the source.

“Delivered swiftly, these measures would offer far better protection than a blanket ban.”

Need to tackle the business model

While scientific research on the psychological impact of social media on children remains inconclusive, that has not stopped lawmakers around the world from seeking to force changes to social media platforms’ design.

For Harriet Kingaby, co-chair of the Conscious Advertising Network (CAN), the Government’s proposed efforts are insufficient if they don’t also address the root cause of platforms recommending harmful content to young users.

“We strongly urge the prime minister to look into the business model of the platforms, which is creating unhealthy incentives for the production and distribution of extremely harmful content,” she told The Media Leader.

An ongoing US trial is one of several that will decide whether companies such as Meta, Snap, TikTok and YouTube intentionally designed their apps to addict users, especially teens, through infinitely scrolling feeds and opaque recommendation algorithms. TikTok and Snap settled ahead of the trial.

Testifying last week, Instagram head Adam Mosseri argued that 16 hours of daily Instagram use was “problematic” but not “addictive”.

According to internal company emails uncovered during discovery, Meta CEO Mark Zuckerberg decided in the first half of 2017 that the company’s “top priority” was teens. “Our overall company goal is total teen time spent,” one exhibit reads. In other words, Meta (then Facebook) designed its business model around whatever drove the largest number of teens to engage with the platform for the longest amount of time.

The Conscious Advertising Network, in partnership with the Molly Rose Foundation, released research last year that found one in 10 pieces of suicide and self-harm content on TikTok and Instagram was monetised. Kingaby said this highlights “the link between advertising, platform greed and harmful content”.

Meta also admitted this month it “might” have derived 3-4% of its total annual revenue in 2024 from ads for scam or banned goods.

Kingaby continued: “These dynamics need to be stopped, advertisers need to have transparency about exactly where their advertising is going across platforms, AI surfaces, games, CTV and apps in order to create the accountability necessary for platforms to end these horrific practices.

“We need to end the practice of platforms serving extremely harmful content in order to keep people scrolling, so they can serve people more ads.”
