Brands must wake up to algorithmic harm
Opinion
The spread of misinformation, disinformation and hate isn’t just a tech problem. It’s a business model problem. And advertisers must be part of the solution.
The UK Parliament's Science, Innovation and Technology Committee's recent report on social media, misinformation and harmful algorithms makes a bold statement: social media companies "often enable or even encourage" the viral spread of misinformation and hateful content through their advertising and engagement-based business models, endangering the public.
The lies that fuelled the 2024 Southport riots were a tragic consequence of this ecosystem.
This report highlights deep flaws in the Online Safety Act’s ability to address the harms caused by algorithmically promoted lies and hate, and points to the powerful role of digital advertising in incentivising these harms.
At a time when the United Nations’ Global Principles for Information Integrity are calling on governments, platforms and advertisers to act, this inquiry strengthens the case for bold, co-ordinated reform.
This isn’t just a tech problem. It’s a business model problem. And advertisers must be part of the solution.
The financial engine behind online harm
Much of the internet is paid for by advertising. Every scroll, swipe and click is an opportunity to serve an ad, and every ad generates revenue. This is the economic foundation of the online ecosystem.
Programmatic advertising, which now accounts for a majority of digital spend in the UK, has created a highly automated, largely unregulated supply chain. Data is fed into real-time bidding systems, where thousands of intermediaries compete to place ads.
And the content those ads are placed beside is promoted by algorithms optimised for attention, not accuracy.
This creates a powerful incentive to publish content that provokes strong emotional responses, drives engagement and fuels outrage, regardless of its truthfulness or impact. As a result, entire corners of the internet are now shaped to attract ad revenue through sensationalism and clickbait, not public interest or ethical standards.
Advertisers are often completely unaware of the harmful environments, content and dynamics they’re funding. A single campaign may run across tens of thousands, if not hundreds of thousands, of websites. In many cases, neither the brand nor the agency knows where the money ends up.
Disinformation networks and criminal operations thrive, slipping under the radar of the checks advertisers have put in place to protect against fraud and unsuitable placements.
Now is the time to act
Despite growing concern from the public and experts, industry self-regulation is failing to address the crisis.
Existing tools designed to protect brands, such as keyword blocking or AI-driven verification technology, are widely regarded as insufficient. Worse, they can have unintended consequences.
The ad industry has for too long treated this as a technical issue to be managed behind the scenes.
But it’s a structural issue that requires a systemic response. We have overwhelming evidence that the digital advertising system is contributing to real-world harm.
The Science, Innovation and Technology Committee’s report rightly calls for new regulation of algorithmic systems, including fines for platforms that fail to limit the spread of harmful content. But the advertising system enabling that harm must also be addressed. Transparency across the full supply chain is essential.
The UN Global Principles for Information Integrity provide a clear framework. All stakeholders are urged to adopt human rights-based policies, conduct end-to-end audits of ad campaigns and demand disclosure of ad placements and partners.
These are not abstract ideals. As seen in Brazil’s Mutirão initiative — promoting transparency of placement and data, as well as disclosure across the ad supply chain to counter climate disinformation and greenwashing — they are practical steps towards a safer, more trustworthy online environment.
Advertisers can lead the way
Advertising spend shapes the internet. When ads fund content that results in real-world harm, they help normalise and sustain it. But if brands insist on transparency and integrity — conducting thorough audits of where their ads appear and aligning their media buying practices with their stated values — they could shift the system.
This isn’t just good ethics. It’s good business. Brands that take greater control of their supply chains are more likely to avoid reputational risk, reduce media waste and increase return on investment.
Has anyone truly demonstrated that buying media across 44,000 unknown websites is more effective than investing in 2,000 high-quality, trusted environments? We need to rethink what success looks like in digital media and design incentives that reward integrity and effectiveness, not exploitation.
Marketers, chief financial officers and CEOs should all have visibility into where their ad budgets go and what those investments support. Just as companies are now expected to avoid unethical labour practices or environmental destruction in their supply chains, they must also ensure their advertising spend isn’t fuelling harm.
A transparent supply chain is not a big ask.
The winners here will be quality content producers, such as news outlets. More transparency encourages better decision making about what content is effective — and placement in trusted environments drives business results.
A turning point
The UK committee’s report is a critical moment in the national conversation about online safety. It exposes a system where viral hate and lies aren’t just tolerated but monetised, where content that creates harm is promoted and brands are sometimes wilfully ignorant of the consequences of their spending.
This doesn’t have to be the norm. With political will, regulatory clarity and industry-wide commitment to transparency, we can dismantle the incentives that currently reward harm and rebuild a system that protects users and promotes the integrity of our information systems.
The result will be that more people feel safe expressing their opinions online, strengthening freedom of speech. It will rebalance the system so that public safety, ethical media and democratic resilience are no longer sacrificed for clicks and revenue.
We have the tools. We have the evidence. Now we must have the courage to act.

Jake Dubbins is co-founder of the Conscious Advertising Network; Dr Karen Middleton is senior lecturer in marketing at the University of Portsmouth
