IAA roundtable: AI, brand safety, and the changing rules of media
Opinion
What happens to trust when AI sits at the centre of how we communicate? The IAA UK’s executive director distils the key discussion points from this recent debate.
At the latest International Advertising Association roundtable, senior voices from media, publishing, and advertising came together to untangle one of the industry’s biggest questions: what happens to trust when AI sits at the centre of how we communicate?
Created and chaired by Deborah Gbadamosi, VP of Inclusion at IAA UK, and co-chaired and hosted by Marko Johns, UK MD at Seedtag, the discussion balanced optimism and caution, reflecting an industry that knows change is here and wants to shape it, not chase it.
AI is long past something to prepare for. It’s here, reshaping how the media ecosystem operates, rewriting its foundations, and forcing everyone to rethink what safety, creativity, and accountability really mean. It can shape recommendations, analyse sentiment, and plan campaigns in seconds. But it also brings an uncomfortable truth: no one fully agrees on how to keep it fair, creative, or even human.
Now is the time to align on shared principles, define what responsible innovation means, and agree on how the industry can maintain public trust as it embraces change through 2026 and beyond.
Finding promise in progress
Contextual targeting is gaining significant ground. Many saw it as a practical step forward, a more ethical route to brand safety that avoids muting minority voices.
Rather than relying on blanket restrictions, advertisers are starting to think more carefully about context, meaning, and nuance. That shift marks a deeper understanding of what brand safety truly entails.
“Context matters more than ever,” said Seedtag’s Johns. “Our recent neuroscience research proves that quality context drives performance. Neuro-contextual advertising, which matches ads to an article’s interest, intent, and emotional tone, delivers 3.5 times higher neural engagement.”
“With neuro-contextual technology, we can understand the real meaning and emotional impact of content rather than just the words on a page. That’s how brands can stay safe while still reaching audiences in authentic, culturally relevant ways.”
Quality journalism matters now more than ever, and brands have a responsibility to invest in credible reporting. The endless race for viral reach has come at the cost of real, trustworthy storytelling.
The value of verified, trusted content reaches beyond commercial gain. It shapes public understanding and helps hold the media ecosystem together.
Creativity brought a different kind of energy. Gbadamosi’s phrase “a duet, not a duel” summed it up neatly.
When used thoughtfully, AI can handle the heavy lifting of analysis and organisation, leaving people with more time to imagine and tell better stories. The future belongs to those who make technology feel human, and the advantage will always sit with the brands and creators who use AI to elevate imagination rather than replace it.
The takeaway here is: prioritise contextual understanding and invest in trusted journalism. Use AI to enhance creativity, not replace it, and keep culture, empathy, and authenticity at the heart of every decision.
Where the tension lies
Beneath the optimism, there were quieter worries. Many systems still rely on outdated data, which can reproduce the biases baked into it.
What appears efficient can end up repeating the past. Several participants called for more attention to how AI is trained and to who decides what “normal” looks like. Bias reflects who trains the machine and what stories are represented, and the industry cannot let the past write the future.
Automation raised another concern. Agentic AI tools that automatically buy and plan media may change how agencies operate, but could also eliminate junior and entry-level roles where people learn the craft.
If those early steps disappear, so does the kind of understanding that only develops through doing the work yourself: the judgment, the intuition, the feel of it.
Creativity was another area of tension. Leaning too heavily on automation could make campaigns sound alike: slick, efficient, and entirely forgettable. AI can sharpen the process, but it cannot replicate the empathy or lived experience of specific communities.
The takeaway here is: keep humans in the loop. Ensure diversity in AI training, preserve entry-level learning opportunities, and protect creativity from becoming mechanical.
Training as the turning point
Training came up repeatedly throughout the discussion. Not just technical training, but broader education that includes ethics, bias awareness, and cultural understanding. The industry needs to map this across entire careers, from developers and graduates to strategists and senior executives.
“We have to treat training as the bridge between innovation and integrity,” said Gbadamosi. “AI will only be as ethical as the people guiding it. The more we embed cultural awareness and critical thinking into training across entire organisations, the stronger the industry becomes.”
Across education and business, that bridge becomes tangible when AI training focuses on curiosity, scrutiny, and shared responsibility.
In schools, it means integrating AI literacy into everyday subjects, so students learn not just how to use tools, but how to question them – tracing where information comes from, spotting bias, and comparing machine reasoning with their own.
Classrooms can use co-creation projects, where students refine AI-generated essays or designs, and structured debates on real-world issues like misinformation or automation ethics. Teachers themselves need hands-on mentoring and shared lesson resources so they can guide students safely through this new terrain rather than policing its use.
In business, tangible training looks like short, practical workshops that build AI literacy for all staff – understanding what models can and can’t do, how to verify outputs, and when human judgment must intervene.
Cross-functional “critical thinking labs” enable teams to test AI tools on real-world tasks, reflect on unintended bias or tone, and document lessons learned.
For leaders, immersive ethics sessions connect innovation decisions to cultural and social consequences, ensuring integrity scales alongside technology.
In both classrooms and workplaces, this kind of training turns AI from a passive tool into an active learning partner. It grounds innovation in reflection and empowers people to question, adapt, and use AI responsibly, ensuring the human mind remains the true source of integrity.
The takeaway here is: build structured, lifelong training pathways. Pair technical skills with cultural intelligence and ethics, and make critical thinking the foundation of AI literacy.
Rebuilding trust in real time
Accountability was another recurring theme. Ali Hannan, founder of Creative Equals, called the EU’s AI Act an encouraging first step, though it was agreed that rules on paper would not be enough.
Real accountability has to be shared across advertisers, publishers, platforms, and the people who use them. AI systems do not remain neutral on their own. They need regular checking, auditing, and adaptation as social expectations evolve.
“Local journalism keeps communities connected and informed,” said Johns. “When it disappears, misinformation fills the gap. Advertisers have real power here. Directing spend toward credible outlets helps rebuild that connection and strengthens public trust.”
The roundtable called for shared verification frameworks to help guide investment towards credible sources. Others suggested putting more pressure on platforms to improve the transparency and safety of their products.
Trust is not a permanent state. It needs maintenance. Brands must constantly ask where their money lands and what kind of media environment they are shaping.
The takeaway here is: make accountability active. Support local journalism, push for platform transparency, and create shared verification systems that keep the media ecosystem trustworthy.
Lessons to take forward into 2026 and beyond
The clearest takeaways were practical. Continuous education must become standard. Transparent reporting and verification frameworks should be industry-wide. Funding models must reward quality and local journalism. Contextual targeting should be prioritised as a more ethical and inclusive route to brand safety, helping brands reach audiences meaningfully without silencing diverse voices. And every innovation in AI should come with equal investment in ethics and inclusion.
The next year and a half will likely bring new tools, models, and unexpected partnerships. The direction they take will depend on how well the industry manages the balance between automation and human oversight. Progress must be guided by principle.
AI is not the enemy of creativity or safety. It will, however, test how seriously the industry holds on to those values.
The industry should use AI as a creative partner that accelerates insight but always leaves humans in charge of meaning, empathy, and imagination. That begins with better education, more transparent systems, and a commitment to inclusion that keeps pace with innovation.
“This is the moment for the industry to lead from the front,” said Johns. “If we keep combining innovation with accountability, AI can help us build a media landscape that’s more intelligent, more inclusive, and more trusted than the one before it.”
Kirsty Giordani is executive director of IAA UK
