Industry AI taskforce seeks to balance risk and reward in outlining best practices

Early adopters of AI in the advertising industry are seeing significant time and cost savings, primarily in tasks such as idea generation, content creation, campaign optimisation and ad monitoring.

That is according to the first report released by the Advertising Association (AA) AI Taskforce, which explores the current state of AI use and best practices from across the industry.

Konrad Shek, the AA’s public policy and regulation director, told The Media Leader that the report, which he co-authored, is meant as a “useful reference point” to benchmark one’s own use of AI, as well as to consider additional responsible and effective frameworks for AI utilisation.

The stakes could hardly be higher. AI companies promise to remake professional industries, including the creative sector, in new and efficient — but also potentially dangerous — ways.

Weighing benefits against risks

In the report, a representative from advertiser trade body Isba recognised that AI could be deployed through the ad campaign journey to the benefit of brands “not simply in the name of selling products, but in promoting the kind of behaviour change that could literally save the planet”.

But they noted that “AI’s power and capacities could also lead to the mass generation of low-quality ads, be used for more convincing scams or worsen the industry’s climate impact with its high energy demand”, adding that there is an additional question of whether AI will give already-dominant Big Tech even more power.

Given the potential risks, Isba, alongside the IPA, has developed 12 principles for its members to follow when using AI. They include disallowing use of AI that could undermine public trust in advertising, such as the use of deepfakes or otherwise fraudulent advertising.

Other guidance includes ensuring AI use is transparently communicated to audiences, does not infringe on the rights of individuals and their personal data, and avoids the potential for job displacement.

“A key theme within the report is that we want to work with AI as a co-pilot,” said Shek. “I know people will equate greater efficiency, doing [work] quicker, faster, cheaper, at greater scale to job losses. But I don’t think that’s necessarily true.”

Shek subscribes to the idea that AI is unlikely to take individuals’ jobs, but rather that individuals who know how to use AI will become most in demand. He suggested that most AI use in the workplace is coming organically from individual experimentation rather than from employees following corporate guidelines on best-use cases.

“We all collectively have an individual responsibility to understand this tool, how to use it, how to enhance your own productivity and use it for your own personal gain as well as corporate objectives,” Shek added.

Copyright infringement remains a concern

Advertisers remain wary of using AI models for fear that doing so could result in copyright infringement.

“To be frank, there is still concern around IP [intellectual property] from the advertiser point of view and this probably is one of the things that creates a bit of friction for the wholesale take-up [of AI],” said Shek. “Because what you don’t want to do is invest in a particular tool and then that platform is subject to a massive lawsuit because of copyright infringement.

“So advertisers also have to think about these things and protect their own reputation and minimise their liability to these situations. It is something on the mind of advertisers [and] also agencies that are using these tools on behalf of clients.”

While publishers are represented on the AI Taskforce, Shek argued that the issue of copyright infringement is not one the ad industry can solve on its own. He said the AA will stick “firmly in the advertising lane” — in other words, remaining neutral until court decisions are made regarding the fair use of online material on which AI large language models are trained.

In the meantime, most advertisers are using AI in the ideation phase through closed models that are fine-tuned to a marketer’s needs, Shek suggested, and are therefore unlikely to run into direct concerns around copyright infringement.

“If you have a vanilla model and then you feed it context on brand guidelines and stuff, the output that comes out is actually very closely aligned with the client’s brand guidelines,” Shek explained. “In some ways, then, you’re reducing the risk of copyright.

“If you’re then saying ‘Write me an essay in the style of William Shakespeare’, you’re clearly going to get into trouble with copyright issues. I think this is where the agencies and advertisers are looking at a different perspective and trying to then minimise that exposure to legal risk.”

Current use cases

The AI Taskforce lays out a number of different ways in which the ad industry has been using AI to its benefit. However, some use cases are clearly not yet mature.

For example, M&C Saatchi described using AI to reduce time spent on certain tasks, improve quality consistency and reduce third-party costs. It has also tested the use of AI to create “synthetic focus groups” that simulate real-life focus group discussions about consumer desires or responses to draft copy.

While the synthetic focus groups were found to save time and potentially be “largely consistent” with real ones, the agency said they feel like “a step too far, for now” given the importance of the human element: “With its inherent messiness and indirect responses, [it] has often been the source of memorable anecdotes and insights that inspire creatives and engage clients.”

Various other use cases were highlighted by the report, including the creation of an “artificial female candidate” for Fifa president named “Hope” who advocated for women’s rights on behalf of those afraid to speak up about misogyny in football. Another use was a custom-made image generator in Stable Diffusion used to help generate images featuring O2’s robot mascot Bubl.

AI has also been used to aid the Advertising Standards Authority (ASA) in its remit to review online advertising at scale. The ASA’s Active Ad Monitoring System is now processing over 1m ads a month to help support its regulatory teams.

Will self-regulation be sufficient?

Should the ad industry welcome additional government regulation on the use of AI in the media, especially given calls for the new Labour government to set out guardrails for AI use and the EU’s own passage of the Artificial Intelligence Act?

Shek suggested that new legislation may not be necessary given the UK already has what he described as robust ad regulations and strong self-regulatory bodies that help to hold the industry accountable.

“People say there’s no regulation [in the UK], but there is regulation out there,” Shek argued. “One of the key things that underpins advertising law is unfair commercial practices. It’s all about not misleading consumers.

“People go on about deepfakes and things like that, but actually the law as it exists basically says you can’t create an image that is misleading. So, in that sense, we have a legal framework that exists.”