AA publishes best practice guide for responsible use of generative AI in advertising
The Advertising Association (AA) has published a guide for the responsible use of generative AI in advertising.
The Best Practice Guide for the Responsible Use of Generative AI in Advertising was unveiled at the AA’s annual LEAD conference earlier today. It includes eight principles to consider, as well as actionable steps for incorporating them into working standards.
The principles cover issues regarding transparency, data use, fairness, human oversight, harm prevention, brand safety, environmental considerations and continuous monitoring.
The framework is designed to complement existing UK laws, including UK GDPR and the Equality Act, though it is a non-binding guide aimed at providing recommendations.
The language used is generalised; on environmental issues, for example, the guide suggests “consider[ing] environmental implications alongside business objectives” and “favouring energy-efficient options where practical”, rather than explicitly telling practitioners to build environmental impact considerations, including measuring increased carbon emissions, directly into their AI use.
The guide was developed under the auspices of the Government and its industry-led Online Advertising Taskforce. It was created in collaboration with an expert industry working group consisting of advertising leaders and the Advertising Standards Authority (ASA). It builds upon Isba and the IPA’s 2023 principles for the use of generative AI in advertising.
“This new guide is designed to support responsible adoption of AI in advertising to ensure that the work of our industry can continue to be trusted by the public,” commented AA CEO Stephen Woodford. “In the words of the ASA, all advertising must be ‘legal, decent, honest and truthful’, and it must remain so as our industry embraces AI and the many benefits it can bring.”
The use of generative AI in advertising has become increasingly commonplace. Many major platforms and broadcasters now offer generative AI creative tools to streamline ideation and development and make it more accessible for smaller brands to advertise.
Separately from the main guide, a version has also been published that is tailored for small- and medium-sized enterprises (SMEs), which are generally perceived as the most likely to use AI creative tools for advertising. The SME version takes a “more proportionate approach” to the principles, focusing on those the AA deemed “most relevant” to SMEs.
The Labour government has taken a consistently friendly stance on AI development, viewing it as core to its efforts to stimulate macroeconomic growth. However, many advocates in the creative industries have repeatedly urged the government to take a harsher stance on AI companies, primarily by enforcing existing copyright protections, which they allege have been breached by AI companies scraping their IP for use in language, visual, and audio models.
Minister for the Creative Industries Ian Murray called the guide “a key output” of the Online Advertising Taskforce’s AI Working Group. “This work supports the Government’s ambitions to ensure advertising remains trusted and makes the most of the opportunities AI can offer, helping the sector innovate responsibly,” he said.
Eight core principles
The eight main principles expressed in the guide are:
1. Ensuring Transparency: “Practitioners are encouraged to determine disclosure of AI-generated or AI-altered advertising content using a risk-based approach that prioritises prevention of consumer harm.”
2. Ensuring Responsible Use of Data: “When using personal data for GenAI applications including model training, algorithmic targeting and personalisation, practitioners need to ensure compliance with data protection law whilst respecting individuals’ privacy rights.”
3. Preventing Bias and Ensuring Fairness: “Practitioners can help prevent discrimination by designing, deploying, and monitoring GenAI systems to ensure fair treatment of all individuals and groups.”
4. Ensuring Human Oversight and Accountability: “Implement appropriate human oversight before publishing AI-generated advertising content, with oversight levels proportionate to potential consumer harm.”
5. Promoting Societal Wellbeing: “Avoid using GenAI to create, distribute, or amplify harmful, misleading, or exploitative content. Where possible, leverage AI to enhance consumer protection and advertising standards.”
6. Driving Brand Safety and Suitability: “Assess and mitigate brand reputation risks from AI-generated content and AI-driven ad placement, ensuring GenAI systems align with your brand values and safety standards.”
7. Promoting Environmental Stewardship: “When selecting GenAI tools and approaches, consider environmental implications alongside business objectives, favouring energy-efficient options where practical.”
8. Ensuring Continuous Monitoring and Evaluation: “Implement ongoing monitoring of deployed GenAI systems to detect performance issues, bias drift, compliance gaps or other concerns requiring intervention.”
The Media Leader is today live blogging the LEAD conference.
