It’s already happening: AI-led journalism is just a ‘basic use case’
Was this article written by a human journalist or a generative AI bot? Does this publication have a responsibility to tell you how much of it, if any, was created by a machine?
These are the questions that media owners must answer with the advent of AI, an audience hosted by Insider, Politico and the World Media Group heard at Cannes Lions last week, amid a flurry of ethical concerns and disruptive forces that the technology is set to unleash.
Earlier this month, the global consulting firm Accenture, which in recent years has been growing its own marketing services agency group, Accenture Song, pledged to invest $3bn in AI capability, such is the perceived opportunity to achieve “greater growth, efficiency and resilience”.
Speaking at Cannes, Amir Malik, Accenture’s managing director of growth marketing, did not seem fazed as he told an audience full of publisher staff that AI is coming for traditional editorial functions.
“Written articles online are now really a basic use case. That’s the elephant in the room,” Malik said. “The graver question is: do we have a responsibility to flag the content that has been artificially produced? Because we’re losing the ability to tell.”
The same is true for audio, Malik explained: AI-generated voices from companies like Synthesia and ElevenLabs are becoming so realistic that customers speaking to an AI customer-service agent on the phone cannot tell it is not human. Increasingly, brands will be asked whether they need to declare this, too, in the interests of transparency.
In terms of content production, AI is perhaps even more advanced than is popularly thought, despite the widespread awareness of ChatGPT since the end of last year. Malik explained how he had created a book about how to play poker, essentially from scratch, in about four hours.
“[The AI] wrote me a 5000-word mini-book. I published it, I put it on Shopify. And now I sell it through Meta apps on my phone. It’s actually just trying to get the verification from Amazon marketplace that was the most annoying delay. But the production was there; I didn’t have to change anything.”
Malik cited this as an example of the often-quoted 80/20 rule that has become a fixture of any “future of AI” talk: the tech will be used to carry out 80% of work previously done by humans, with humans then responsible for refining, curating and checking the AI’s output.
Evan Bretos, director of Special Newsroom Initiatives and Partnerships at The Washington Post, was more sceptical about how much of media’s output would end up being produced by AI. Nevertheless, he acknowledged AI presented a “huge opportunity” not only to produce content differently, but to give journalists better ways of presenting it in formats audiences want, as well as of analysing data to tell stories.
“Data was said to be the North Star and then we didn’t know how to use it. We didn’t know how to read it, how to measure it, implement it ethically, or responsibly,” Bretos said. “Those are areas where, on average, not a lot of places are doing a great job.”
Panel (L-R): Bretos and Malik.
The bigger concern, according to Bretos, is that readers may be duped en masse by bad online actors using AI irresponsibly to pose as journalistic institutions. Scam ads bought and sold on self-serve platforms like Meta’s Facebook and Google’s YouTube have been a concern for many years.
Just this week, The Media Leader’s columnist Nick Manning flagged a scam ad featuring the UK’s celebrity “money saving expert” Martin Lewis. Lewis had settled a court dispute with Facebook in 2019 over its repeated failure to prevent scam ads featuring his image and name from appearing.
Then there are the obvious copyright issues, a major concern for publishers, as WMG CEO Belinda Barker discussed in a recent interview with The Media Leader. Given that generative AI tools rely on quality information to create new content, the content created by publishers is potentially being significantly undervalued and undercompensated.
Bretos warned: “All businesses, not just newsrooms, are going to continuously be concerned about transparency and how much of their IP is being swept up into the data sets and the language models that are being spun up… Think about also, the implications of search, which is a whole other thing that will affect everybody, but publishers significantly.”
These rapid advances in AI may very quickly create a media system which has room for fewer and fewer publishers, Bretos concluded.
But then, do consumers actually care? The way humans consume content has fundamentally changed, Malik said, while acknowledging he might be “shattering the dreams of journalists” with his pronouncements.
“People just look at the headlines, the lead image, read the first few lines, and then they want to discuss it… There’s [only] a smaller segment of the population that will read the whole answer, unfortunately. And so, then, what’s the opportunity cost of producing content with the AI if people can’t tell the difference?”
There would be little point, then, in a disclaimer reading “this article was created using AI” if consumers are not reading to the bottom of the article. There is also the potential for the online ad model to come under greater pressure, as less reading means fewer valuable ad inventory opportunities.
What was not discussed, however, was how the publishing industry has already been deformed by the prevalence of algorithm-led platforms, which may automatically promote or downgrade content in various ways, whether for a Facebook news feed, Google Search or a demand-side platform buying across the open web.
While this article was not written by AI, whether you’re reading it has, for some time already, been dependent on machines. And no one, judging by its market dominance, has stopped using Google because it has never revealed how its algorithm works.