The Media Leader investigation: AI push outpaces carbon transparency in advertising
The advertising industry is scaling AI integration fast, but despite evolving measurement tools, supply-chain opacity and cloud providers' non-disclosure mean the industry still lacks a full picture of its carbon impact, a Media Leader investigation has found.
While the efficiency of AI in workflows has been widely discussed, its carbon and net-zero impacts have not been as central to the conversation.
Laura Wade, a prominent strategic advisor for regenerative media and marketing, echoes this, noting that AI tools are being implemented without “any real consideration or forecasting of the environmental impact.”
The carbon cost of AI
Research estimates that training OpenAI’s GPT-3 model, a predecessor of ChatGPT, emitted around 500 metric tonnes of carbon dioxide, equivalent to driving a car from New York to San Francisco 438 times.
Data centres, which power AI systems, currently consume around 1-2% of global electricity, with AI being responsible for around 15% of that.
However, as AI systems become more advanced, their energy demand will increase. The International Energy Agency has predicted that data centre electricity demand will roughly double by 2030, driven largely by AI.
Google’s carbon emissions, for instance, rose by nearly 50% over a five-year period, largely due to increased demand from AI.
Online advertising is already resource-intensive, with estimates that the energy and infrastructure required to support online ads account for as much as 2.1-3.9% of global emissions.
As AI usage increases, so will its energy footprint and environmental costs, especially where cloud data centres are powered by a mix of energy sources, including fossil fuels.
Jake Dubbins, co-founder and co-chair of the Conscious Advertising Network, highlights the concerning and growing convergence between big tech and the fossil fuel industry.
He says: “If the US is building a fossil fuel energy mix, then that is terrifying.
“For the advertising industry, the carbon in the supply chain will go through the roof, and initiatives like Ad Net Zero will either be blown up or totally blind.
“As far as I know, there is no transparency on the energy mix used by the big LLMs.”
This is why accurate data on the carbon costs of AI tools, data that agencies and clients rely on for reporting, is essential.
A look at the industry
The concern has already been raised that accurate AI emissions data would reveal net-zero targets to be unachievable.
One holding company indicated it has been working from “guesstimates” and highlighted that AI suppliers have not been able to provide accurate data on AI-related emissions.
The problem, the company said, lies in obtaining transparent data from data centres and hyperscalers.
There have been various efforts across the industry to address this issue.
For example, independent agency the7stars underscores its engagement with the issue and has worked with a consultancy called Beyondly to measure its Scope 1, 2 and 3 emissions. From this, the company has set science-based targets and a carbon reduction plan.
The Trade Desk highlights its focus on improving measurement tools, but reveals that when it joined the Ad Net Zero initiative, it struggled to obtain transparent data.
A spokesperson says: “In 2022, we were trying to get a sense of our carbon emissions data as part of our Ad Net Zero commitment, and it was impossible to get that data and transparency from Amazon Web Services (AWS).”
The spokesperson caveated this, stating that AWS may have improved its reporting tools since then, but the issue of transparency remains a key industry concern.
When several holding companies were asked about their strategy for obtaining transparent data from the likes of AWS, all declined to comment.
Why is the data missing?
Currently, there are different ways to measure emissions that account for AI’s impact. As established, one limiting factor is the lack of transparent data disclosure by cloud providers and data centres.
Matt Anderson, technical advisor at Carbon Trust, an independent, non-profit climate consultancy firm, explains: “Big hyperscalers — Microsoft, Google, AWS — they have a tool that provides an emissions report to cloud customers that says here are the emissions associated with the cloud services that you get.
“It’s not completely clear to me across the hyperscalers and those tools, to what level of detail they’re able to capture about AI-related services.
“In Google’s documentation, it specifically calls out capturing AI services within the footprinting tool, but it’s not completely clear across the board, really, the exact methodology that those emissions may or may not capture.”
Bob Burgoyne, associate director at Carbon Trust, echoes this: “In terms of the numbers provided by those big cloud providers, there’s a lot of critique about not being sufficiently transparent about the assumptions or the boundary that underpins these numbers.”
Daniel Schien, associate professor for sustainable ICT at the University of Bristol, also highlights the issue of using averages in this context.
“The providers have been very slow, and they are only sharing average numbers. There’s sufficient variability between an individual prompt for just one type of query versus other media types, such as video or images,” he says.
AWS, Google and Microsoft were contacted for comment, but The Media Leader did not receive a response before publication.
How this opacity limits measurement progress
The absence of transparent, workload-level emissions data from cloud providers is now one of the biggest structural barriers to quantifying and understanding AI’s true environmental impact.
This lack of disclosure of energy load around AI training, inference or optimisation means the advertising industry has to rely on proxy modelling rather than direct measurement.
As AI becomes more embedded across the digital supply chain, these blind spots may compound.
For instance, Scope3, a technology company which provides data to brands, agencies and publishers to measure, visualise and reduce carbon emissions associated with their digital advertising supply chain, depends on aggregated, representative datasets which model the energy use of publishers, SSPs, DSPs, exchanges, CDNs, devices and networks.
Although the benchmarks are refreshed monthly, they are not based on raw infrastructure-level data.
Currently, the difference between a standard programmatic impression and one delivered through AI-enhanced bidding or optimisation cannot be captured.
The energy required to train AI or to deliver AI inferences also cannot be isolated, meaning AI’s energy use is instead at risk of being absorbed into broad averages that treat all impressions as equal.
This risks the emissions from AI’s exponential growth within the industry being systematically underreported.
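As a rough illustration of why proxy modelling blurs this distinction, consider a per-impression estimate built from aggregated benchmarks. All figures and category names below are hypothetical, not Scope3’s actual data or methodology:

```python
# Hypothetical per-impression carbon estimate from aggregated benchmarks.
# None of these figures are Scope3's real numbers; they illustrate how a
# proxy model applies the same average footprint to every impression.

BENCHMARK_G_CO2E = {            # assumed grams CO2e per impression
    "ad_selection": 0.20,       # SSPs, DSPs, exchanges
    "creative_delivery": 0.15,  # CDNs and networks
    "device_use": 0.25,         # consumer device share
}

def impression_footprint(uses_ai_bidding: bool) -> float:
    """Return grams CO2e for one impression under a proxy model."""
    total = sum(BENCHMARK_G_CO2E.values())
    # The model has no term for AI inference, so the flag changes
    # nothing: AI-optimised and standard impressions get the same average.
    return total

standard = impression_footprint(uses_ai_bidding=False)
ai_enhanced = impression_footprint(uses_ai_bidding=True)
assert standard == ai_enhanced  # AI energy is absorbed into the average
```

Because the AI-specific energy term is missing from the benchmarks, the model returns an identical figure whether or not AI was involved, which is exactly how AI workloads disappear into broad averages.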
There are other examples of measurement, such as Ad Net Zero’s Global Media Sustainability Framework, which aims to provide a consistent, standardised framework across the industry.
However, the same data limitations apply, with the framework taking a tiered approach: the lowest level relies on aggregated data, while the highest relies on “contributed” data from suppliers or precise bidding volumes from publishers, SSPs, and DSPs.
The framework remains voluntary and not universally adopted, and without this granularity, it cannot differentiate between conventional programmatic delivery and ads served through AI-intensive processes.
Mike Hopkins, senior consultant at Carbon Trust, underscores the risk this poses to net-zero commitments. He says: “Without an understanding of emissions specific to a company’s AI use, and the key drivers of this, organisations lack the insight required to make informed decisions on how amending AI use or considering different suppliers may help reduce emissions.
“This may allow AI emissions to continue to rise unchecked as adoption continues, posing a material challenge to achieving Net Zero.
“This is particularly the case for service organisations without significant physical infrastructure, where AI use is likely to be more material.”
AdGreen, under Ad Net Zero’s Action 2, recently released the 11th iteration of its carbon calculator, which now measures AI impact across production.
Based on HiiLi’s methodology, it utilises open-source models to simulate the electricity required for a single AI inference and converts this into CO2 based on the assumed or, in certain cases, known location of the data centre.
This approach is designed to work around the non-disclosure of real energy data from AI and cloud providers.
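The estimation pattern described above, simulating the electricity for one inference and converting it via the grid intensity of the data centre’s assumed location, can be sketched as follows. The energy and intensity figures are illustrative assumptions, not AdGreen’s or HiiLi’s actual values:

```python
# Sketch of inference-level carbon estimation: assume (or simulate) the
# electricity for one AI inference, then convert to grams CO2e using the
# carbon intensity of the grid at the data centre's assumed location.
# All figures are illustrative assumptions.

GRID_INTENSITY_G_PER_KWH = {  # assumed grid carbon intensities
    "sweden": 40,
    "france": 55,
    "us_average": 370,
    "unknown": 480,           # conservative fallback when location is unknown
}

def inference_co2e_grams(energy_wh: float, location: str = "unknown") -> float:
    """Convert one inference's energy use (watt-hours) to grams CO2e."""
    intensity = GRID_INTENSITY_G_PER_KWH.get(
        location, GRID_INTENSITY_G_PER_KWH["unknown"]
    )
    return (energy_wh / 1000) * intensity  # Wh -> kWh, then apply intensity

# e.g. an assumed 3 Wh image-generation inference in an unknown region:
print(round(inference_co2e_grams(3.0), 2))  # → 1.44
```

The location lookup is the key step: the same 3 Wh inference would score roughly twelve times lower on a low-carbon grid such as Sweden’s, which is why the assumed data centre location matters so much when real energy data is undisclosed.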
Despite being a significant step toward quantifying carbon emissions both before and after a campaign, the calculator does not capture training emissions and excludes other material impacts, such as water used to cool systems or the footprint of cloud storage.
The calculator is also limited to production emissions and is not mandated: although it is used across the industry, with superusers including Oliver and Craft, wider adoption varies.
The experts from Carbon Trust also warn that cloud providers’ claims of “green” or “renewably powered” data centres can be misleading.
Many rely on market-based Scope 2 accounting, which allows companies to buy renewable energy certificates to offset grid emissions. This means the reported carbon footprint of AI workloads may not reflect the actual energy mix powering the data centre.
Burgoyne notes that while this is a recognised accounting method, it is “contested” and “overly relied upon” by big providers, and the method is currently under review.
He argues that genuinely renewable-powered data centres are “much more defensible”. However, advertisers do not necessarily have access to information on where their AI workloads are processed, making true oversight of emissions even harder to attain.
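The gap between the two Scope 2 accounting methods can be shown with a simple worked example. The consumption and grid-intensity figures are hypothetical:

```python
# Why "renewably powered" claims can diverge from physical reality:
# market-based Scope 2 lets a company net off purchased renewable energy
# certificates (RECs), while location-based accounting uses the actual
# grid mix. All numbers are hypothetical.

def location_based_t_co2e(consumption_mwh: float,
                          grid_kg_per_mwh: float) -> float:
    """Emissions from the physical grid actually powering the workload."""
    return consumption_mwh * grid_kg_per_mwh / 1000  # kg -> tonnes

def market_based_t_co2e(consumption_mwh: float, grid_kg_per_mwh: float,
                        rec_mwh: float) -> float:
    """Emissions after matching consumption against purchased RECs."""
    uncovered = max(consumption_mwh - rec_mwh, 0.0)
    return uncovered * grid_kg_per_mwh / 1000

# A 10,000 MWh AI workload on a 400 kg CO2e/MWh grid:
print(location_based_t_co2e(10_000, 400))        # → 4000.0 tonnes
print(market_based_t_co2e(10_000, 400, 10_000))  # → 0.0 tonnes reported
```

Under market-based accounting, buying certificates covering the full consumption lets the workload report zero emissions even though the local grid physically emitted around 4,000 tonnes, which is the crux of the critique from the Carbon Trust experts.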
The efficiency assumption
A common theme across the industry is that AI will reduce waste and increase efficiency, particularly in creative production.
While it is true that traditional production can be resource-intensive, and measurement solutions such as AdGreen’s carbon calculator and others across the industry may help substantiate this, experts warn that the assumption is not yet proven and cannot be relied upon.
For instance, Wade highlights how, while there is a lot of waste in creative production with traditional methods, “the assumption that AI is the solution has been oversimplified.”
Wade points out how sustainability teams are being tasked with “trying to figure this out as these tools are being established.”
She further emphasises that “environmental and social impact is 80% baked in at design”, meaning there is a “risk these tools are being integrated based on assumptions that AI is more efficient than traditional processes and practices.”
Hopkins also warns that efficiency gains can be misinterpreted as gains in sustainability. He states: “The idea that making technology more efficient simply drives more overall usage is a well-established concern, known as Jevons’ Paradox.
“To put this effect into context, it is important to consider whether the AI activities are truly incremental or substituting for something else and, if the latter, then what is the comparative environmental impact of the prior activities.
“We need to put things in context in this way, supported by greater transparency from model vendors, to make an overall judgement of AI’s positive vs negative impacts.”
The impact on net-zero
Over-reliance on proxy modelling and efficiency assumptions could significantly affect organisations’ ability to achieve net-zero targets.
Without granular data, companies risk making decisions based on incomplete or misleading information.
These issues speak to a widening gap between the industry’s sustainability ambitions and its need to measure the environmental impact of AI.
Without greater transparency and disclosure from cloud providers and AI vendors, the industry risks further entrenching higher-carbon systems into its workflows and further inflating its contribution to greenhouse gas emissions.
