Imagine launching a stellar ad campaign, only to see your brand’s message displayed next to harmful content that could tarnish your reputation and erode your audience’s trust. A recent Adalytics study has exposed this troubling reality, revealing that ads from hundreds of brands have appeared alongside inappropriate content, including racial slurs and explicit material, on sites such as Fandom wiki pages.
The Adalytics report details the various pre-bid and post-bid brand safety technologies and keyword-blocking techniques that brands rely on; some advertisers go as far as blocking user-generated content entirely. Verification vendors such as Integral Ad Science (IAS) and DoubleVerify, well known for their brand safety tools, face criticism for apparently failing to block ads on offensive content. The report shows that some ad placements loaded JavaScript served from DoubleVerify’s domain, while others loaded code from IAS’s domain, adsafeprotected.com, yet those ads still appeared on pages containing the very keywords advertisers had excluded. This points to a significant industry-wide problem in brand safety technology and practices.
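To make the failure mode concrete, here is a minimal sketch of how pre-bid keyword blocking is commonly described as working at auction time. All function names and fields are illustrative assumptions, not any vendor’s actual API; the point is that the check can only see the signals the bid request exposes.

```python
# Minimal sketch of pre-bid keyword blocking (illustrative only; not any
# vendor's real implementation). The advertiser's blocklist is checked
# against whatever page signals are available in the bid request.

BLOCKLIST = {"slur_example", "explicit_example"}  # advertiser-supplied terms


def is_safe_prebid(bid_request: dict, blocklist: set) -> bool:
    """Return False if any blocked keyword appears in the signals
    exposed at auction time (URL, declared page keywords)."""
    text = " ".join([
        bid_request.get("page_url", ""),
        " ".join(bid_request.get("page_keywords", [])),
    ]).lower()
    return not any(term in text for term in blocklist)


# The gap the report highlights: if the offensive term lives only in
# user-generated content rendered after the auction, or the request
# exposes just a domain rather than a full URL, the check passes and
# the ad serves anyway.
request = {
    "page_url": "https://example-wiki.com/wiki/Some_Page",
    "page_keywords": ["games", "characters"],
}
print(is_safe_prebid(request, BLOCKLIST))  # True: nothing visible to block on
```

In other words, a keyword blocklist is only as good as the page signals it is checked against, which is exactly where user-generated content slips through.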
This situation raises critical questions about the effectiveness of AI systems in brand safety. DoubleVerify’s Universal Content Intelligence, for instance, claims to leverage advanced AI to give advertisers the most accurate content classification, ensuring broad coverage and protection at scale. If these tools nonetheless fail to keep ads off harmful content, their real value is in serious doubt. Are these technologies worth the millions of dollars brands invest in them?
Some advertisers who spoke to Adalytics suggested that one potential solution lies in greater URL transparency. If delivery reports listed the full page URLs where ads actually ran, rather than just domains, brands could audit their placements, make more informed decisions, and avoid risky content more effectively.
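As a rough illustration of what that would enable, the sketch below audits a hypothetical delivery log against an exclusion list. The log format, field names, and path terms are assumptions made for the example; the takeaway is that a full URL carries risk signals that a bare domain does not.

```python
# Hypothetical sketch of auditing full placement URLs (assumed log
# format; not a real reporting API). With domain-only reporting, every
# row below would collapse to "example-wiki.com" and look identical.

from urllib.parse import urlparse

EXCLUDED_PATH_TERMS = {"forum", "comments", "user_blog"}  # illustrative


def audit_placements(log_rows: list) -> list:
    """Flag logged impressions whose URL path suggests user-generated
    or otherwise excluded content."""
    flagged = []
    for row in log_rows:
        url = row.get("placement_url", "")
        path = urlparse(url).path.lower()
        if any(term in path for term in EXCLUDED_PATH_TERMS):
            flagged.append(url)
    return flagged


logs = [
    {"placement_url": "https://example-wiki.com/wiki/Main_Page"},
    {"placement_url": "https://example-wiki.com/forum/thread/123"},
]
print(audit_placements(logs))  # ['https://example-wiki.com/forum/thread/123']
```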
The question remains: Can the industry rise to the challenge and protect brands more effectively in the digital age?