Sean Betts is Chief Product & Technology Officer, Omnicom Media Group UK and NDA’s monthly columnist.
I’ve recently done a few talks and panels on AI in marketing, with some really good audience Q&A sessions afterwards. One topic has come up repeatedly in these Q&As: trust – at what point do we start to trust technology, and more specifically AI, to perform tasks without human oversight?
Sitting behind this question is, I think, a deep-seated cultural issue that we have with technology – by which I mean any machine or software that humanity has created. We want technology to be 100% reliable, 100% correct, 100% of the time. I might be wrong, but I think there is a general assumption in society that this is not only achievable but already the case with many of the technologies we’ve invented.
Every technology humans have ever invented has been a tool, i.e. something a human uses to do a task more effectively. With AI now maturing, we’re faced for the first time in history with a technology that isn’t used by a human but can accomplish human tasks without any supervision – one designed not to help us do a task more effectively, but to do the entire task itself. No human supervision, no human in control. I think this lack of supervision and control is the real emotional bedrock behind the questions I’ve had around AI and trust.
Last week I was watching the BBC series “Inside The Factory” with my son, in an episode where Gregg Wallace visited the Axminster carpet factory. Carpets have been made in Axminster since the 1750s, and over the last 275 years the technology has evolved considerably. The current generation of automated weaving looms, however, has been around since the 1980s – 40 years or so. This is a very mature technology; carpets are now mass-produced and virtually every home in the UK has one.
It would be easy to think that carpet factories could now churn out a huge volume of faultless carpets with very little human oversight.
However, towards the end of the episode, the carpet being followed through the factory goes through a final quality assurance (QA) check by a human. As I’m sure is common, a fault is found, and it’s quickly fixed by hand before the carpet is shipped. So even an incredibly mature and scaled technology like a weaving loom is still not 100% reliable: it still needs a human to QA the output and fix any problems by hand. This is true of all technologies, no matter how mature or reliable we think they might be.
So how does this all relate to AI? Well, we’re very soon going to need to answer the question of when we start to trust AI to perform critical tasks that humans have historically performed. Take self-driving cars: before long they’ll match the ability of the average human driver, but will the public trust them fully, especially when the stakes are high and involve the safety of loved ones? The same goes for medicine, where AI and robotics are making strides in improving surgical success rates. Will people trust a robotic surgeon as much as they do a human one, or does the AI need to surpass human capabilities – which, let’s remember, are not 100% correct either – before it fully gains our trust?
These are the questions we’re going to have to confront over the coming years as AI becomes more capable, starts augmenting a wide variety of tasks, and touches everyday life. But there is one big lesson we can take from the history of technology – no matter how advanced or dependable a technology becomes, there will always need to be an element of human oversight for us to feel comfortable with it.
Our technologies are at their best when there is a human in the loop. The same will be true for AI – it may take on many human tasks over the coming years, but I believe there will always be a need for human involvement, and that will create new roles and opportunities, just as every technology before it has. However, as AI’s capabilities grow, we’re going to have to start getting comfortable with less supervision and less control. More trust in the technologies we build.