Music produced by generative AI (gen AI) currently demonstrates a 20% accuracy rate when composing music for specific commercial briefs.
That is according to a study by sonic branding agency Stephen Arnold Music (SAM) and sonic testing company SoundOut. The research found that humans can still compose more emotionally accurate and appealing music than gen AI. However, in its present state, AI can be used effectively to help in the ideation phase.
Researchers gave Stable Audio’s gen-AI platform four briefs: to produce music that is sentimental/compassionate, inspirational, funny/quirky and bold/daring. The platform generated five iterations for each brief, which the researchers then examined.
Results varied greatly depending on the brief. In four of the five iterations for the “bold/daring” brief, for example, the AI produced results the researchers deemed commercially acceptable. However, it reached that threshold in only one of the 15 iterations it created across the other three briefs.
In other words, in 80% of cases the music fell short of the researchers’ benchmark for what would be acceptable for commercial usage.
Short briefs work best
Overall, researchers said the AI did “reasonably well” and that, directionally, its performance was “largely successful for most compositions”.
In particular, the AI was most likely to succeed when given short, “consistent” and “well-aligned” briefs asking for music that evokes correlated emotional attributes. When presented with more complex or nuanced briefs, it was more likely to fall short.
“While humans still outperform AI on the emotional front, this study has revealed that AI ‘composing by numbers’ is already not far behind,” said SoundOut CEO David Courtier-Dutton. “The AI was not bad; it was just not as good [as humans]. With some emotional fine-tuning, we expect that AI will at some point in the not-too-distant future match the majority of human composers.”
Courtier-Dutton added that the AI does not need to understand emotions; it merely needs to know how to evoke them in humans.
“AI can compose music to move us emotionally,” he continued. “It now just needs a little more technical empathy to be able to do this with sufficient precision for commercial use.”
AI uses in audio
AI is already being put to use in the audio space in numerous ways. Spotify, for example, has launched its “AI DJs”. AI can also support advanced targeting, hyper-personalised creative, synthetic voice development and numerous other use cases.
As Jason Brownlee, founder of Colourtext, told The Media Leader last week: “If clever audio creatives can find a way to bottle their knowledge with AI and scale it into a super-efficient, self-learning and self-serve ad production platform, the sky really is the limit.”
At present, SAM and SoundOut recommend using AI in the ideation phase when creating sonic branding. Chad Cook, SAM’s president of creative and marketing, added: “When developing commercial-ready music for leading brands […] there are additional considerations for evoking the proper emotion at the proper time. Performance, emotional timing, production quality, mixing and mastering are all elements in which the human touch makes a distinct impact.
“Combining the capabilities of humans and AI has real potential for sonic branding in terms of efficiency and quality.”
The use of AI in audio will be one of the key themes at The Future of Audio and Entertainment on 18 April.