
UN and CAN warn of AI-driven global information integrity crisis

“The integrity of the global information ecosystem is at a crisis point, and the rapid adoption of AI technologies is intensifying this deterioration at a pace that demands urgent attention.”


That is the opening to a briefing, issued today by the United Nations (UN) and written in partnership with the Conscious Advertising Network (CAN), which warns that unchecked adoption of AI in advertising is creating systemic risks to the UN’s Sustainable Development Goals, the health of democracies and the viability of independent, pluralistic journalism, while accelerating the spread of mis- and disinformation.

The briefing, Strengthening Information Integrity: Advertising, Artificial Intelligence and the Global Information Crisis, argues that advertisers are uniquely situated to demand greater transparency and guardrails from AI companies. It makes a moral and business case for marketers to use this leverage to improve information integrity.

As the primary funding model of the digital information ecosystem, advertising “funds the systems that shape what people see, trust and believe,” says Charlotte Scaddan, UN senior advisor on information integrity.

She warns that “without swift action and guardrails, AI risks accelerating the breakdown of information ecosystem integrity”, but that “advertisers have the power to help fix it.”

However, as the UN and CAN point out, advertisers are still funding content “regardless of quality, accuracy, potential harms or effectiveness”.

Instead, the organisations suggest that media buying practices, which are themselves increasingly driven by opaque automated processes, be reconsidered and made more transparent. Doing so, they argue, would improve long-term commercial prospects, since degraded information ecosystems tend to produce economic and geopolitical disruption and, therefore, worse business outcomes.

AI is exacerbating disinformation, and the system that funds it

The briefing warns that generative AI’s swift development is leading to mass harms, including scaled disinformation and online fraud, which are currently supported by the advertising industry.

“Some AI tools have incorporated design features considered addictive, manipulative or harmful to users,” it reads. “Advertisers who place their advertising within these environments are, whether intentionally or not, providing the revenue underwriting these design choices.”

It comes as users increasingly rely on AI for information and decision-making, “without the literacy or information needed to assess its safety and reliability”.

The rise of AI search summaries is also systematically undermining the business models of more accountable online publishers by reducing click-through rates and referral traffic, leading to a decline in ad-funded online businesses.

As The Media Leader reported last year, one-third of publishers in the UK’s Independent Publishers Alliance could be driven out of business by the end of 2026 due to AI search taking and surfacing content from their sites without their consent.

The report explains: “AI is integrated into aspects of everyday life, often without users’ knowledge or meaningful consent, consolidating control of the information ecosystem among the few large technology companies.”

Harriet Kingaby, co-founder of CAN, explains that “brands are under pressure to move fast on AI, but doing so without guardrails risks undermining the very environments their marketing depends on.”

AI companies, such as OpenAI, are likewise beginning to come under pressure to embrace advertising as they look to raise revenue amid ballooning compute costs.

Meta and Snap this month both announced layoffs of 10% and 16% of their employees, respectively, as part of efforts to offset rising AI-related costs.

But the UN and CAN warn that the technology is “contributing to a broader erosion of trust in information sources” by clouding consumers’ understanding of reality.

AI tools are capable of creating and distributing false and hateful content at scale, including nudification and child sexual abuse material (CSAM), as demonstrated by X’s Grok, as well as state propaganda, such as Iran’s Lego videos amid its war with the US and Israel or US President Donald Trump’s image depicting himself as Jesus.

Further muddying the waters are AI-driven media-buying practices, now implemented not just by digital platforms but also by media agencies. The UN-CAN report warns that this “risks worsening fraud and inefficiency” by increasing the opacity of the buying decision-making process.

The online advertising ecosystem, it notes, is already largely opaque. Advertisers are rarely given a complete view of where their ads appear, what they cost, and who in the supply chain receives what proportion of the expenditure.

Without proper oversight, ad revenues thus “flow indiscriminately”, including by funding content that is fraudulent or is otherwise “best able to attract and retain user attention regardless of quality or accuracy”.

This is particularly true of social platforms, where billions of people ostensibly go for social connection and information but are also algorithmically shown content designed to maximise their time on the platform and, with it, the number of ad exposures served. This model “underpins their profitability”, the report notes.

This business logic is part of what led a Los Angeles jury to conclude last month that both Meta and Google had intentionally designed harmfully addictive platforms. It has also, in part, led to a decline in support for independent journalism on those platforms, in favour of more sensationalist and polarising disinformation.

“Opacity reduces both advertisers’ and audiences’ capacity to make informed choices about the information they engage with and finance,” the report reads. “Content that keeps people engaged generates revenue, whether or not it is accurate, reliable or safe.”

Attempts at self-regulation have failed

That business model, combined with tech platforms’ growing share of global ad revenue, has led to what the UN and CAN describe as a general failure of self-regulation.

As the report notes, there is a large body of evidence that platforms frequently fail to moderate illegal content or content that violates their own standards and policies, “in some cases prioritising revenue retention over user safety”.

This is exacerbated in non-English-language markets, where content moderation teams are typically under-resourced, if they exist at all. Such geographies are also more likely to lack ad verification infrastructure and the legal frameworks needed to intervene on behalf of victims, including those of human trafficking.

Meanwhile, the UN and CAN describe platforms’ ad libraries, intended to reveal who is advertising on platforms and which users they are targeting, as “insufficiently transparent”.

As the organisations further note, while voluntary industry standards have been developed to address issues such as “fraud, brand safety, supply-chain transparency and sustainability”, such efforts have received “inconsistent uptake” and carry “limited accountability mechanisms”.

For example, an investigation by The Media Leader found that ad agencies’ rush to develop AI tools has outpaced sustainability considerations, such as carbon accounting.

Brand safety efforts, meanwhile, are also now actively being undermined by the very media agencies that had once championed them on behalf of their clients.

This month, ad agencies Publicis, WPP, and Dentsu settled with the Trump administration’s Federal Trade Commission (FTC), promising to stop alleged “brand safety collusion” and, in effect, disallowing the agencies from providing their clients with exclusion lists that could be construed as “politically or ideologically motivated”.

Omnicom had previously reached the same agreement with the FTC in order to receive the green light to acquire Interpublic.

Industry standards are also arguably failing to keep up with emerging technology, such as agentic media buying. The UN and CAN warn that “poor quality of data, short-term corporate decision-making, opacity in agent decision-making, and loopholes that already affect media buying practices” could exacerbate ongoing concerns over the health of the online ecosystem.

“Opacity compounds when multiple autonomous systems interact,” the brief reads. “It becomes nearly impossible to trace how disinformation or scams propagate through the ecosystem. Where algorithms and AI systems curate content without disclosing commercial incentives or decision-making logic, the distinction between organic content and paid influence becomes difficult — or in some cases impossible — to establish.”

Recommendations for advertisers

Given the reliance of tech platforms, and especially nascent AI companies, on ad revenue, the UN briefing urges advertisers to use the “current window of opportunity” to “make information integrity a condition of AI uptake”.

That includes prioritising support for AI tools developed with clear guardrails and safety measures “at the design and development stage, rather than retrofitted after the fact”.

It could, for example, mean that advertisers begin requiring greater transparency standards that enable end-to-end supply chain validation and independent third-party audits.

Likewise, advertisers could require agentic AI companies to make the decision-making behind automated media buys transparent and auditable.

The same standards, the briefing notes, could be applied to existing relationships with social platforms, which could be required to produce audit trails or provide full disclosure of AI-facilitated content output.

In addition, the UN and CAN recommend advertisers “make information integrity a core component of media placement strategies and support measures that ensure quality content creators and publishers are adequately remunerated by AI companies for the content on which their systems depend.”

This could include prioritising media spend with accountable media outlets or establishing dedicated funds to support journalism and independent creators.

CAN’s Kingaby argues such efforts are “not about slowing innovation” but rather are “about making sure it works for business and society.”

She concludes: “Advertisers can either fund the problem or help build a more transparent, trustworthy and effective digital ecosystem.”

The Media Leader’s senior reporter Jack Benjamin will be discussing the issue brief and its implications in more detail with the United Nations’ Charlotte Scaddan and Conscious Advertising Network’s Harriet Kingaby at the Future of Brands later today.
