
Behave: Significant gaps exist between C-suite and employees over AI implementation

Mediaplus UK group CEO Tom Laranjo

The AI revolution can be a “moment of profound change”, according to Mediaplus UK group CEO Tom Laranjo — but only if industry leaders “recognise the uniquely human dimension of behaviour change and transformation, and don’t just look at this as a moment of limited efficiency and cost-cutting”.

“It needs humility and open dialogue,” he told a standing-room-only crowd at Mediaplus’ offices in Soho on Wednesday morning.

Yet according to research from Behave, the agency group’s global behavioural science consultancy, cost savings are top of mind for C-suite as they advance AI-driven business goals.

According to the Unlocking the RenAIssance report, published earlier this year, 26% of C-suite executives say that “improving operational efficiency” is the primary objective driving their organisation’s AI strategy.

At SXSW London last month, outgoing WPP CEO Mark Read admitted he has “no doubt” that “there will be fewer people involved” in media companies in order to produce the same level of output, though he firmly believes AI will eventually create new jobs within the industry.

It’s little surprise, then, that all but 3% of employees said they were at least “slightly concerned” about their job security in the AI era. In fact, 77% responded they were either extremely, very, or moderately concerned.

Exacerbating a creative crisis, or solving one?

Dr Alexandra Dobra-Kiel, author of the RenAIssance report and innovation and strategy director at Behave, remarked that “currently, everyone wants to party with AI”, but they are doing so for the purpose of achieving “myopic efficiency”.

According to Dobra-Kiel, “a failure of imagination” and “risk aversion” are leading business leaders to focus their AI strategies around short-term, incremental gains in profit, rather than consider the big picture for how AI can transform their business and lead to product innovation.

During a panel held later in the morning, Ed Birth, head of brand marketing at Hiscox, agreed that creative jobs are likely to be affected. He warned that advertising already faces a “crisis of creative quality”, bluntly assessing that 95% of the ads we see “are crap”, and that AI could worsen the problem if brands use it irresponsibly.

He suggested that, rather than envisioning AI as a tool that can replace creative jobs, it needs to be reframed by leaders as a tool for creatives to “be more creative with better insights”. To be additive, rather than oppositional.

That requires a “safe space” in which teams can experiment with AI tools to discover how they can best benefit from them and unlock new use cases. Supriya Dev-Purkaystha, head of media and adtech solutions at Microsoft UK, advised leaders to create such “playgrounds” where staff can toy with AI using non-enterprise data, so that data security is never put at risk.

According to Laranjo, 99% of data used by large-language models (LLMs) comes from publicly available sources (or is otherwise taken from sources without regard for copyright). Despite enterprise-level data often being more useful for most businesses, it is seldom sufficiently integrated into organisational workflows, Laranjo said.

Dev-Purkaystha added that leaders will also need to get comfortable with being in permanent “pilot” mode: continually iterating on how they use AI and integrating the technology into new products that can receive incremental updates.

‘Ethics is a muscle, not a checkbox’

The report, which surveyed 1,200 employees in the UK, including over 200 in C-suite, also found gaps between executives and their staff over AI skill proficiency and ethical considerations.

For example, 38% of C-suite respondents said their biggest barrier to effectively leveraging AI is a lack of skilled personnel, yet a majority (52%) of staff consider themselves “expert” or “advanced” in using AI tools for work, compared to just 20% who described themselves as “novice” or “basic” in their understanding of using AI.

There is also a lack of consensus on who should be driving ethical standards for AI use, with more than one in four (27%) believing senior management should be responsible, while a similar proportion (25%) preferred to punt the responsibility to external bodies.

According to Laranjo, while ethics is “the most important part” of Behave’s framework for AI transformation, there is not a “one-size-fits-all when it comes to ethics”.

Dobra-Kiel added: “Ethics is a muscle, not a checkbox”, implying the need for leaders to develop and commit to strongly considered ethical frameworks for AI use.

Behave recommended the C-suite take various steps to codify AI adoption, addressing gaps in motivation (“embracing AI”), proficiency (“harnessing AI”) and ethics (“safeguarding AI”) between organisational leaders and their staff.

To address motivation, “the key foundation is vulnerability”, according to Laranjo: leaders must acknowledge that employees’ fears over job losses and the erosion of their skills aren’t unwarranted.

“We’ve all seen those sessions where you’ve seen [an executive] stand up and say, ‘This is fantastic. We’re going to be implementing AI. There are hundreds of millions of savings that can be generated from that. It’s going to be great,’ whilst staring out at a sea of blank faces who recognise that they are that cost saving that person is talking about”, said Laranjo.

Greater transparency around how AI will transform the business, as well as acknowledgement of its downside risks and challenges, is key, he suggested. Humility and “preserving humanity”, too, are needed.

“It’s not enough for a CEO to stand up and point in a direction; it’s really important that people follow as well.”

During the panel, Julia Zimmerman, executive partner at Future Marketing, added it is incumbent upon leaders to “also change” by setting, adhering to, and communicating the same transformational standards they expect of their staff.

Otherwise, she warned, a top-down approach to transformation is likely to backfire if employees don’t believe it is in their best interest to design or work with tools that could replace them or lead to lower-quality outputs.
