How can brands avoid advertising against CSAM?

Brands should maintain up-to-date inclusion lists, rather than rely solely on third-party verification tools, to avoid inadvertently advertising against brand-unsafe content through programmatic spending, according to experts.

Concerns around brand safety and the opaque nature of digital programmatic advertising flared up again last week after a report from adtech transparency startup Adalytics found that a wide range of major advertisers have, over the past four years, had ads placed on a website that hosts significant amounts of child sexual abuse material (CSAM).

The site, imgbb.com (and its affiliate ibb.co), operates as a free, ad-supported image-sharing service that does not require users to register before uploading and sharing photos. According to Adalytics’ report, it receives more than 40m page views per month.

Advertisers affected included Adidas, Adobe, Amazon Ads and Amazon Prime, Google Pixel, HBO Max, Honda, Interpublic, L’Oréal, Mars, MasterCard, McAfee, Nestlé, Paramount+, PepsiCo, Sony, Starbucks, the Texas state government, The Wall Street Journal, Uber Eats, Unilever, the US Department of Homeland Security and Whole Foods.

Adtech vendors including Amazon, Criteo, Google, Microsoft, Nexxen, Outbrain, Quantcast, TripleLift and Zeta Global have reportedly supported the site with advertising, despite it being flagged dozens of times in recent years by the US National Center for Missing & Exploited Children.

Phil Smith, director-general of Isba, told The Media Leader: “Isba is obviously extremely alarmed by these findings. We are seeking a full understanding of the implications for advertisers and we await responses from all the adtech companies named in the report.”

He continued: “Over the better part of the last decade, Isba has supported regulation to strengthen the protection of children and young people, and has been at the forefront of work to improve brand safety. We work with our members to help explain the tools, products and processes which are available to them to better deliver their ads in a suitable environment.

“We continue to constructively challenge the adtech industry to improve transparency throughout the programmatic supply chain. Clearly, there is much more to do to ensure that no advertising revenues fund sites that host such horrific material.”

In response to the report, two US senators (Republican Marsha Blackburn and Democrat Richard Blumenthal) last week launched a probe into Amazon, Google, Integral Ad Science (IAS) and DoubleVerify, as well as the Media Rating Council and Trustworthy Accountability Group, over the scandal.

“The dissemination of CSAM is a heinous crime that inflicts irreparable harm on its victims,” wrote Blackburn and Blumenthal. “When digital advertising technologies place advertisements on websites that are known to host such activity, they have in effect created a funding stream that perpetuates criminal operations and irreparable harm to our children.”

A spokesperson for IAS told The Media Leader: “IAS has zero tolerance for any illegal activity and we strongly condemn any conduct related to child sexual abuse material. We are reviewing the allegations and remain focused on ensuring media safety for all of our customers.”

Brands feeling insecure

The latest scandal is far from the first time brands have been in hot water for inadvertently advertising against unsavoury material online. In 2018, then-UK home secretary Sajid Javid instructed the Internet Watch Foundation to examine how brands were funding exploitation online. Misinformation-tracking company NewsGuard has also repeatedly highlighted that brands continue to accidentally fund Russian propaganda and health misinformation sites through programmatic advertising.

In August 2024, a separate report from Adalytics found ads from hundreds of brands have appeared alongside inappropriate content, including racial slurs and explicit material, on sites like Fandom wiki pages.

One media strategist suggested to The Media Leader that revelations of this type now “come around every few months and it’s leaving brands feeling even less secure about the quality of media they’re buying”.

They added that while ad-verification tech and responsible marketing initiatives like the (since-disbanded) Global Alliance for Responsible Media were once considered “silver bullet[s]” for brand-safety efforts online, advertisers are now “realising they need to more proactively manage brand safety/suitability and not leave it to chance”.

According to the Adalytics report, several brand marketing executives said that adtech vendors have failed to provide page URL-level reporting, despite such transparency being critical to their understanding of where their ads are running.

Such uncertainty will “definitely put more pressure on agencies and publishers moving forward” to demonstrate brand-safety standards are being met, the strategist concluded.

Don’t make assumptions

According to independent ad fraud researcher Dr Augustine Fou, ads ended up on imgbb.com and ibb.co because the third-party brand-safety tools most brands use to keep their ads out of unseemly places online, such as IAS and DoubleVerify, were unable to detect the problem in the first place.

“The legacy vendors couldn’t detect the problem because there were no keywords in the page URLs or in the page content,” Fou explained. “And these vendors don’t scan the images, audio or video.”

Fou suggested that advertisers “should not assume that legacy vendors are sufficiently protecting them from fraud and brand-safety issues”, instead advocating that the “simple solution” is to institute strict inclusion lists of greenlit websites that brands are happy to programmatically advertise against online.

However, he warned that “even then bad guys can pretend to be the sites in the inclusion list”, so monitoring ad delivery is key to ensuring ads show up where they should.
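To make that approach concrete, below is a minimal sketch of what an inclusion-list workflow might look like in practice, assuming a simplified programmatic setup; the domain names, allowlist contents, helper functions and log format are hypothetical illustrations, not any vendor’s real API.

```python
# Minimal sketch of the inclusion-list approach described above, under a
# simplified programmatic setup. Domains, allowlist and log format are
# illustrative only.
from urllib.parse import urlparse

# Strict inclusion list: only bid on domains the brand has explicitly vetted.
INCLUSION_LIST = {"example-news.com", "example-sport.co.uk"}  # hypothetical

def eligible_to_bid(declared_page_url: str) -> bool:
    """Pre-bid check: only bid if the declared page is on the inclusion list."""
    domain = urlparse(declared_page_url).netloc.lower().removeprefix("www.")
    return domain in INCLUSION_LIST

def audit_delivery(delivery_log: list[dict]) -> list[dict]:
    """Post-delivery check: flag impressions whose measured (rendered) domain
    is off-list or differs from the declared one, since spoofed domains can
    slip past the pre-bid filter."""
    flagged = []
    for impression in delivery_log:
        declared = impression["declared_domain"].lower()
        measured = impression["measured_domain"].lower()
        if measured not in INCLUSION_LIST or measured != declared:
            flagged.append(impression)
    return flagged

if __name__ == "__main__":
    print(eligible_to_bid("https://example-news.com/story"))      # True
    print(eligible_to_bid("https://unvetted-imagehost.example"))  # False
    log = [{"declared_domain": "example-news.com",
            "measured_domain": "unvetted-imagehost.example"}]
    print(audit_delivery(log))  # spoofed impression is flagged
```

The second step reflects Fou’s caveat: an allowlist only constrains what a brand intends to buy, so delivery logs still need to be reconciled against it to catch domains pretending to be approved sites.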

‘Two extremes’

Emily Roberts, head of digital at the Responsible Marketing Advisory, agreed that inclusion lists are “essential to ensure your ads are running in safe environments”. She added that tech companies and brands have been over-reliant on AI to address brand safety, an approach that has proven fallible at identifying brand-safe digital environments.

She told The Media Leader: “For anyone running programmatic advertising campaigns, relying solely on exclusion lists puts brands at risk of inadvertently funding exploitative content and misinformation.

“As a brand, you can’t just depend on Google and third-party verification tools to block ads from running on non-brand-safe content. As DoubleVerify has acknowledged, there are challenges in classifying every URL across the web, especially when traffic is minimal on certain sites.”

According to Roberts, equally important to developing inclusion lists is inking direct deals with trusted online publishers.

As The Media Leader has reported, publisher articles are regularly, and often wrongly, demonetised due to overzealous keyword blocklists. This is despite recent studies from Stagwell and Teads/Lumen showing that ads placed next to stories on topics brands often deem “unsafe” performed just as effectively as those next to more “brand-friendly” stories.
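As a rough illustration of how that over-blocking happens, the sketch below shows naive, context-blind keyword matching of the kind described above; the blocklist terms and headlines are hypothetical examples, not any vendor’s actual rules.

```python
# Minimal sketch of how a naive keyword blocklist over-blocks news content.
# The blocked terms and headlines below are hypothetical examples.
BLOCKLIST = {"attack", "abuse", "war"}  # typical "unsafe" terms, illustrative

def is_blocked(article_text: str) -> bool:
    """Flag the page if any blocklisted term appears anywhere in the text,
    with no regard for context -- the 'overzealous' behaviour described above."""
    text = article_text.lower()
    return any(term in text for term in BLOCKLIST)

# A legitimate news report is demonetised because it mentions a blocked word...
print(is_blocked("Charity praised for tackling online abuse of children"))  # True
# ...and crude substring matching even trips on unrelated words ('war' in 'award').
print(is_blocked("Local bakery wins national award"))  # True
```

Matching keywords without context is what demonetises legitimate reporting alongside genuinely unsafe pages.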

“I’m noticing two extremes in the industry: brands so focused on brand safety that they won’t run ads even on premium news sites; others are overly reliant on exclusion lists across the entire web,” said Roberts.

Damon Reeve, CEO of digital publisher ad platform Ozone, told The Media Leader that, from the perspective of news publishing, the persistence of “advertisers’ misuse and overuse” of third-party brand-safety tools is “doubly frustrating given the same technologies continue to disproportionately penalise regulated and editorially governed environments without any oversight of their own”.

He continued: “While this incident on its own is worrying, it’s symptomatic of a brand-safety challenge across the wider digital landscape. With social platforms rolling back their consumer protections and content moderation, will advertisers apply the same brand-safety principles and penalties? Unlikely. Will technology solutions on these platforms fail in their ability to block the very content they were designed to combat?

“It will come as no surprise that many of these challenges could be reversed with an industry-wide commitment to full transparency. There is no reason why any advertiser shouldn’t know exactly where their ads are running.”
