By Phil Rowley, Head of Futures, Omnicom Media Group UK
Beyond media, generative AI and its many capabilities have led some to ask: where is this all heading?
No one can predict the future, but we can draw on the work of experts in the field for clues – Max Tegmark, Kevin Kelly, Nick Bostrom, Jaron Lanier – about how AI’s integration into society might unfold, and what that means for consumers. This piece will illustrate likely outcomes of AI’s future impact on consumers and brands, using three popular sci-fi films as reference points.
This is a tour of AI’s possible endgame – and what brands can do to prepare.
The Matrix (1999) – The Enslavement
Scenario: AI becomes ‘conscious’ and decides to enslave humans to harvest their bodies for power.
There is concern in some circles that business experimentation with, or funding of, AI could bring about the end of the world. Because this fear is so embedded in popular culture, and fuels so much current commentary, we need to address it first: The Matrix is probably not going to happen.
In the film, bad guy Agent Smith is tasked with taking out our protagonist Neo, who has discovered that reality is in fact an AI-powered hallucination. At one point Agent Smith, an AI programme, refers to The Matrix and says he hates “this place”.
AI expert Max Tegmark says that AI cannot ‘hate’ anything. It has no feelings, only goals, existing to optimise towards a given objective – something it cannot set either. Only humans set the objective and have the power to judge whether it has been met successfully. Remember, the ‘P’ in ChatGPT stands for ‘pre-trained’.
As for the implausibility of enslavement, tech legend Jaron Lanier puts it simply: AIs will never eliminate us, because AI exists only to crunch data. With no humans to produce data, it cannot exist – like a hotel without any guests.
Likelihood: Very Unlikely.
Brands dabbling in AI are unlikely to bring about enslavement for the human race. Amid arguments from Elon Musk and other experts for a ‘pause’ in AI research, Meta’s CTO Andrew Bosworth has stated that a pause would be difficult to enforce, and that ‘investing in responsible development’ is the way forward.
I think in marketing we’re OK to continue responsibly investigating AI for now. But, as we shall see, there are aspects that need further reflection…
2001: A Space Odyssey (1968) – The IT Meltdown
Scenario: A ship’s AI goes rogue after misinterpreting its objectives, killing its crew.
In 2001: A Space Odyssey, astronauts Bowman and Poole decide to abort their mission to Jupiter, fearing artificial intelligence HAL 9000 is malfunctioning. But HAL’s objective is to complete the mission at all costs, so he cuts the crew’s life support and refuses to let Bowman back on to the ship, saying:
“This mission is too important for me to allow you to jeopardise it” … and then the famous exchange: “Open the pod bay doors, HAL.” / “I’m sorry, Dave. I’m afraid I can’t do that.”
Experts are already talking about AI ‘hallucinations’: responses that sound confident and authoritative but are wrong, built on a misinterpretation of the underlying data. Google CEO Sundar Pichai is worried: “No one in the field has yet solved the hallucination problems. All models do have this as an issue.”
Compared with the apocalyptic predictions of The Matrix, this is a far more likely AI danger. How many of us have suffered the blue screen of death, or an update that takes down our phone?
AI is not immune. According to the Consortium for Information and Software Quality, poor software quality can cost US companies $2.1 trillion a year. We’ve seen high-profile IT disasters befall blue-chip companies. And that’s before we’ve even discussed cyberhacking.
Likelihood: Everyday AI meltdown: Highly Likely. Existential AI meltdown: Unlikely.
Brands, don’t endanger your customer relationships through poor IT. Modernise, cautiously test and learn, but do not replace your humans just yet. Don’t neglect your customer experience: always keep a human as your safety net.
Blade Runner (1982 and 2017) – The Tool
Scenario: A human and a replicant – an AI designed for hazardous labour – produce a child.
In the original Blade Runner, Deckard hunts down rogue replicants: AIs built with a four-year lifespan to prevent them forming emotional attachments. In the sequel, Blade Runner 2049, it is revealed that Deckard and Rachael (the replicant love interest from the original) have had a child: a union of Homo sapiens and AI.
Ray Kurzweil talks of the Singularity: man and machine finally merging. Kevin Kelly writes of ‘centaurs’: half-human, half-machine pairings, each complementing the other’s strengths and weaknesses – a cyborg ‘yin and yang’.
Historically, we have always used tools for human augmentation – an axe is an extension of the arm, a telephone an extension of the voice. AI is merely an extension of the brain.
Humans have used tools for 200,000 years. We’re not about to stop now that we are close to inventing the ultimate tool. We cannot ignore Midjourney and ChatGPT, we cannot ignore the role that data and algorithms play in everyday life, and we cannot uninvent AI.
If AI is inevitable, then, Tegmark asserts, we should safeguard our future by ensuring human goals and AI goals are perfectly aligned. The most obvious way is via a merger – like marrying into a rival family so that the offspring unify the kingdoms. We may need to embrace AI, quite literally, to become a hybrid of biology and technology.
Sound implausible? Well, Elon Musk is already working on Neuralink…
Brands can no more ignore these tools than a tribe could ignore the stone axe. Your competitors will embrace them. Instead, learn to experiment and innovate responsibly.
In time, copyright and intellectual property issues will need to be tackled; until then, they remain one of the most significant risks for advertisers.
Over the mid-to-long term, find out what AI can do that you can’t, and vice versa.
Legal, moral and technological challenges aside, we will need to form a union to make the best of both worlds.