
Molly Russell charity CEO: Social media’s user safety efforts have been ‘performative’

Burrows (left) and Kingaby

Last week, a new report by the Molly Rose Foundation found that social media algorithms are continuing to “bombard” young users with “a tsunami of harmful content” through recommendation algorithms.

The Pervasive-by-Design study was conducted in the weeks leading up to the implementation of the UK’s Online Safety Act. It found that, for teenage accounts that had previously engaged with suicide, self-harm and depression-related posts, the vast majority of recommended content served by both Instagram Reels (97%) and TikTok (96%) could be classified as harmful.

Over half (55%) of such posts served to those accounts on TikTok’s For You page contained references to suicide and self-harm, with 16% referencing suicide methods.

Notably, ads from major brands, including fast-fashion retailers, fast-food restaurants, UK universities, charities (including mental health organisations), cosmetics brands and local youth projects, were found adjacent to such content.

The findings prompted Andy Burrows, CEO of the Molly Rose Foundation, and Harriet Kingaby, co-chair of the Conscious Advertising Network (CAN), to call on advertisers to apply pressure on Meta and TikTok to demand changes in their user safety policies and greater transparency over what content their ads run against.

The Molly Rose Foundation was founded in 2017 following the death of Molly Russell, who the coroner concluded died from an act of self-harm while suffering from depression and “the negative effects of online content”.

Burrows and Kingaby spoke to The Media Leader to discuss the report and why advertisers have thus far failed to apply enough pressure to platforms to effect substantial policy change since Russell’s death.

This Q&A has been edited for clarity and brevity.


For anyone who hasn’t read Pervasive-by-Design yet — especially brands, marketers and agency leaders — what would you like them to take away from it?

Burrows: The takeaway for advertisers is pretty clear: if you are a brand that is likely to appeal to young teenagers, and young teenage girls in particular, then as it stands right now you should not have confidence that your ads are not appearing next to just the most appalling types of content.

We really can’t emphasise enough the nature of some of these posts and the type of harm that they are generating.

We were founded by the family and friends of Molly Russell after her death almost eight years ago. One of the key takeaways from Molly’s death was the way in which algorithms bombarded her with material. And what this research shows is, eight years on, exactly the same thing is happening.

It’s the most appalling forms of content. In the report, we had to be exceptionally careful about the examples we could show. Those are not posts that could be shown on broadcast because of Ofcom’s code and yet teenagers are being algorithmically recommended them and advertisers are appearing next to these posts with a deeply worrying frequency — something like one in 10 posts.

Kingaby: This is not a new problem. And it’s not just a moral issue. I feel weird talking about it in these terms, but it’s necessary: it’s also a huge brand safety and reputational risk for these brands.

What we know is verification tools aren’t working. The platforms are still pushing this content. And they’re pushing it in order to serve our ads. So we as the advertising community have a real responsibility here and we have real power because we are the platforms’ customers.

Frankly, we should be leaning in to that power and talking to the platforms about why the algorithms are designed in this way and why they’re not prioritising child safety by design.

We don’t know where our ads are going. We collectively as an industry really need to be demanding transparency from platforms, from adtech providers, about where these ads are going.

Do you think brands actually care about brand safety? Because the sense is that they will talk a good game, such as with news publishing, but don’t act consistently when it comes to social platforms, where they feel they have to be to reach large audiences.

Kingaby: As with any supply chain, people will manage risk until the risk becomes existential. Are brands going to wait for that to really take this seriously? I don’t think that they should be.

It’s not just about brand safety here; it’s also about using the power that we have, as customers of the platforms, to demand different practices.

Burrows: In terms of C-suite decision-making in this space, we should just be clear: there will be teenage children, most likely teenage daughters, of people working in these companies who are being served this kind of content.

This isn’t something that can be easily dismissed as worst-case scenarios. In the UK, we lose a young person aged 10-19 by suicide every week where technology plays a role. And algorithmically driven harm is a very significant component of that.

But, more broadly, we can all see the mental health impacts that are being driven by social media.


The platforms make the vast majority of their money from small and medium-sized businesses, which don’t seem to care as much as big brands about adjacency. Small brands just want to get to audiences, because their business survives based on that. How would you approach those types of business leaders?

Kingaby: Things like the CAN principles are easy to use. Tiny businesses can use them in the same way that large businesses can.

I think we need a movement of not only the big brands. But if you’ve got a mom-and-pop business appearing next to this kind of content, that’s quite different from some of the extremely large brands that we found.

Whilst [the platforms] might make a lot of money from smaller organisations, it’s a bit harder to rally hundreds of voices rather than just a few. But the reputational risk is much greater for these larger brands. We’re not naming and shaming any of the ones that we found, but these are significant brands with significant spend.

I do think large brands coming together and asking for this is still powerful.

When platforms discuss user safety, they argue that they’re working hard to improve by introducing new features, parental controls etc. Do you believe them?

Burrows: I think what this research demonstrates is that so much of what the large companies have pledged by way of safety improvements is essentially a performative exercise to try and demonstrate to lawmakers and advertisers that they’re on top of the problem. But, really, they are just paying lip service to it.

If you take Meta — they proclaim that they have over 50 safety tools, yet at no point have they provided meaningful data, subject to independent scrutiny, that would allow us to understand the impact and efficacy of those measures.

What this report shows is those measures are not having a demonstrable, meaningful impact on the experience of children’s feeds. Those safeguards have not materially shifted in the eight years since Molly died — despite everything that has happened in terms of legislation, in terms of regulation, in terms of the growing public concern.

There’s a really clear disconnect between the glitzy PR launches that have been targeted at the ad industry — and I’m thinking particularly of Meta’s teen accounts, where there was a significant push to try and demonstrate that Meta was determined to tackle these issues — and the reality on the ground.

It could not be clearer from these findings that social media companies continue to be in the business of performative gestures rather than really rolling up their sleeves and getting on top of this problem decisively.

Kingaby: The proof is in the pudding. If we can find [this harmful content], the measures aren’t working, the tools aren’t working — despite the promises to the industry.

It’s happening because those algorithms are optimised for engagement. This is why we need child safety by design. It needs to be designed into the way these platforms work. Otherwise we’re tinkering around the edges.

Burrows: At the Molly Rose Foundation, we’re not supportive of the calls for smartphone bans or social media restrictions. But we should all accept that if we don’t see meaningful improvements, then the potential that policymakers and governments decide to pull up the drawbridge increases.

I don’t think that’s a good thing for young people, nor for advertisers that legitimately want to advertise their products to young people.

We all lose if that scenario happens. And the status quo is not sustainable. The incentives should be aligned for everybody, including the advertising industry, to start demanding better from the tech companies now.

It has been one month since the Online Safety Act was implemented and it has proven controversial. VPN use has skyrocketed. The Children’s Commissioner for England found exposure to pornography has increased since the law was passed.

A lot of policies are well-intentioned but difficult to implement, particularly around age-gating. What types of regulatory intervention would you like to see?

Burrows: Regulation is the most effective way to change the incentives for these companies to start addressing child safety, and broader product safety, in a meaningful way.

The Online Safety Act is an imperfect piece of legislation. I’m glad that it’s there and starting to make an impact. It was watered down during the parliamentary process and it doesn’t go as far as we think it would need to. So we would like to see the legislation strengthened so that it can provide the incentives more effectively. It absolutely can be done.

You’re absolutely right that in the last few weeks there has been quite predictable immediate backlash from tech libertarians and free expression voices.

We’ve seen some issues around implementation. Those are regrettable but they largely stem from unforced errors from Ofcom; they are not a sign that this cannot be done.

A lot of this is quite predictable, performative rage. On the point about VPNs, it’s absolutely the case that in the first week or two VPNs were topping the app store charts and there was a song and dance about it.

If you look at the charts now, it’s settled down to how things were before. It’s like at the start of the year when we were told everyone was going to leave X for Bluesky, but that initial growth wasn’t sustained.


From a privacy standpoint, is it safe for children to upload biometric data to unregulated overseas third parties? That is a legitimate concern.

Burrows: We know that it is possible to implement highly effective, privacy-preserving forms of age assurance. Models like Yoti’s have been shown to be privacy-preserving and have a very high level of reliability.

What we’ve seen in the first few weeks are examples of providers that have started to roll out age assurance, but there’s a question mark over whether those implementations meet the specification of being highly effective and as privacy-preserving as the market can provide.

So I think there are some questions for the regulator about how actively it is going after providers that don’t meet the standard the act requires. But again, I think those are implementation issues.

For all the talk about Ashley Madison-style hacks, as long as the regulator is ensuring that highly effective, privacy-preserving measures are the ones that are getting deployed, then there should not be substantive privacy issues that arise.

Another argument from platforms is that it shouldn’t be their responsibility to manage age restrictions; rather, app stores should limit children’s access to them. What do you make of that framing?

Burrows: It’s the Spider-Man meme, isn’t it? Everyone’s pointing at each other and saying it’s their responsibility.

There’s certainly a discussion to be had about whether app stores should also be age-assuring their user base. But I think that conversation should be happening in addition to rather than instead of the responsibility of the platforms.

Let’s make this a responsibility that’s shared across the stack, rather than trying to pass the parcel.

Let’s delve deeper into the transparency issue. A brand likely doesn’t know it is inadvertently funding content that it wouldn’t want to sit against.

But how do you go about reporting? Everyone’s feed is different. Are you asking for every platform to report every instance of where an ad was served to each individual user? Is there a technical problem in achieving that level of transparency?

Kingaby: I do know that it’s possible in other markets. The financial markets are highly regulated: they require detailed transparency data about the trades that are made, who makes them and so on. That is possible.

Data to advertisers has got to be meaningful; it’s got to allow them to make judgements about whether that platform is safe, whether the content they’re appearing next to is appropriate. And, trust me, none of the brands that are in this report would have signed off on being next to this content.

Burrows: The idea that this would be overly burdensome for platforms just doesn’t stack up. If you look at the Digital Services Act (DSA) in the EU, platforms are now required to submit a record every single time they make a content-moderation decision. That amounted to something like 12m instances relating to self-harm and suicide alone in the first nine months after the DSA came into effect.

Platforms are eminently capable of producing automated reporting flows at scale. But clearly it’s in their interest right now to be operating under this cloak of opacity.

What’s at stake if platforms aren’t forced to change quickly, either by advertisers or regulators?

Burrows: Whether we’re talking about children who die because of the negligence of the tech platforms or a much larger cohort of children who are being exposed to poorer mental health and wellbeing because of the design and commercial choices the platforms are taking, this is inherently preventable harm.

The scale of this is really substantial. I work with far too many bereaved families who would never have expected or envisaged finding themselves in this situation. At this time of year, they are the families who will not be thinking about the return to school. They will be the families with an empty seat at the Christmas dinner table.

That is the really stark reality of the large tech companies failing to deliver a basic level of safeguarding.
