
The biological compass: How modern tech reveals our ancient human truths

Opinion

It’s not that artificiality is inherently dangerous. It is that people can tell when something is being offered honestly and when it is trying to pass itself off as the real thing. Phil Rowley explains.


When did people first discover jet lag? 

It’s a question I asked myself a few weeks back when I found myself in my living room, annoyingly wide awake at 2.14 am, after a long-haul return flight from San Francisco. 

The clue is in the name. It took the invention of the jet engine and the democratisation of fast transatlantic crossings to reveal to people how their circadian rhythms worked and could be disrupted. The term itself wasn’t coined until 1966.

Jet lag, then, is not even 100 years old, and it took a technological breakthrough to reveal an evolutionary truth about humans that had previously been hidden: the body knows when machines are no longer aligned with reality. 

In the case of jet lag, the machine is the clock. But this is not the only instance of technologies revealing dormant senses and unseen biological limitations. 

In fact, age-old human attributes may cut in like a circuit breaker as we move further into a technologically disruptive era of AI and VR.

Let me show you how.

Peering into the Uncanny Valley

In his book Blink, Malcolm Gladwell writes about how subconscious processing can clue us in to something that ‘just isn’t right’. Not only can our body tell when a clock lies to us, but we also seem to naturally possess advanced ‘fraud detection software’ for words and pictures that appear to be ‘wrong’ somehow. 

Today, that cognitive immune response is being tested more frequently with the rise of AI.

Our ability to detect artificiality is in the spotlight, particularly when we encounter a human face that has been generated, edited or altered. 

There’s even a term for it: The Uncanny Valley. It’s named after the dip in affection felt toward creations or characters who try too hard to be human. So, yes, we love Woody in Toy Story because it’s an obvious caricature. But we seem less comfortable with the artificially ‘realistically de-aged’ Jeff Bridges in the 2010 Tron sequel because there’s something non-humanoid about the animation of his lips. ‘Realistic’ doesn’t always mean things feel real. 

Again, this likely has an evolutionary explanation – and a terrifying one at that. It’s possible that 200,000 years ago, our ancestors developed a need to discern humans from something pretending to be human. Just think about that for a second.

Failing the Turing test

It’s not just ‘non-real’ faces either. I recently encountered someone using an AI comment tool to auto-reply to waves of LinkedIn posts. I doubt they were even reading the content; they were just repeatedly hitting ‘generate response’ to appear in multiple conversation threads. This was synthetic participation, or the appearance of engagement without the substance of it.

The point is that it was detectable, and a dramatic failure of the famed Turing test. It’s a fascinating test to revisit in 2026. Today, we’re seeing that, yes, an AI can pass the Turing test in short bursts, but it struggles to pass it in sustained conversation. 

And that’s a key point. We see so much AI-generated text, and spend so much time conversing with it, that we’re all quickly learning to sniff out its use. 

These ‘AI-isms’ are a giveaway: paragraphs constructed from a series of neatly stacked taglines, like “This isn’t a mistake. It’s by design”, or the famed overuse of em-dashes. 

Think of this as a ‘Cognitive Uncanny Valley’, maybe. We seem primed to know when someone is not communicating in a human voice.

Putting The V in VR 

How about the ultimate medium for facsimile and emulation – Virtual Reality? The R in VR seems to make a bold claim: you are entering a world that will seem almost ‘real’. Or take the early Metaverse experiment ‘Second Life’. That was even more brazen in its labelling: an experience so realistic that it will feel like ‘life’.

Yet, despite the framing, Virtual Reality has struggled to bring people this new reality. Why? For Jaron Lanier, the man perhaps most instrumental in the development of VR, the reason is clear. 

First, 40 years after VR’s inception, there are lingering technological challenges with frame rates, which create a seasickness-inducing delay between the movement of the head and the virtual environment. The human eye can still distinguish between an image and reality. Granted, that may yet be overcome in time. 

Second, however, Lanier says we enter VR in the same way we buy tickets to a magic show: we desire to be tricked whilst knowing it is not real. Humans do not merely crave immersion. We crave immersion with safety rails. 

Moreover, says Lanier, once ‘inside’, Virtual Reality amplifies the ‘real’ reality ‘outside’, because participants are subconsciously assessing how close their experience is to the real world. 

And again, when something feels off, our ability to detect the ‘unreal’ sets our Spidey-senses tingling. 

In short, for Jaron Lanier, the co-inventor of VR, The Matrix can never happen. Sorry, Keanu.

Responsible artificiality

As marketers, we can take heart from this. Humans still have primacy in our relationships with machines, as we have an in-built bias for reality and can still use age-old skills to sense-check AI’s capabilities.

Brands and businesses can also gain insight: customers are adept at detecting inauthenticity and over-automation. 

To be clear, we shouldn’t fear artificiality. Otherwise, we wouldn’t have Toy Story, Star Wars and an entire computer games industry. We just don’t like being misled.

In a world of AI creativity, we must be honest and transparent about its use. Thus, brands and businesses should use artificiality responsibly – and many are: 

H&M was overt about its use of AI to create ‘digital twins’ from real-life models – even compensating them for their digital likenesses. It has also started clearly labelling AI-generated product descriptions to reassure customers.

Meanwhile, VMO2’s Cannes award-winning campaign turned the tables on those who would deceive us, using Daisy, an AI-powered granny, to converse with scam callers and waste their time. 

Technology extends us, but also calibrates us

All this is to say that we should remember that we are still in a human-powered world, populated by citizens with truth-detecting abilities refined over thousands of years. 

The lesson is not that artificiality is inherently dangerous. It is that people can tell when something is being offered honestly and when it is trying to pass itself off as the real thing.

The future will not belong to the most synthetic brands, but to the most trustworthy ones. 

In short, don’t use AI to replace humans. Use it to reveal the human.


Phil Rowley is head of futures at Omnicom Media Group UK and author of Hit the Switch: The Future of Sustainable Business. He writes for The Media Leader about the future of media.
