ChatGPT content: is emotional intelligence the missing link?

by Lottie Namakando, Head of Paid Media and Planning, iCrossing 

With the release of ChatGPT and Google’s Bard, it feels like we have reached a real step-change in automation and the use of AI in our everyday lives. This raises, again, the ever-hovering question: with these possibilities, what is the role of humans? From my point of view, AI and automation are still not at a point where we can realistically step back from the process entirely – human intervention and guidance are still needed. It also raises the question of whether we really want to have zero accountability in these areas. Just because we can do something, does it mean we should?

Artificial intelligence is a topic which has interested humans for years. As a big fan of Terminator, I have always found the concept of machines taking over from humans a blend of apprehension and fascination. In my role in Paid Media, the use of AI and machine learning is firmly at the centre of both what we do and how we do it. However, there are still many areas in which using technology at the expense of humans does not necessarily make for a better outcome. To me, AI and machine learning are a valuable part of enhancing our lives and careers, but they will not result in a world where we can entirely sit back and let the computers run things.

A question which comes up on a regular basis from colleagues and clients alike is whether we need humans to work on our media at all when AI can do it for us, given the increasing availability of AI-based tools for marketers. “Surely, we can just plug it in and let it run, and it will make the best decisions for us. Right?” Wrong!

In short, we need humans because AI can’t perform all elements of a marketer’s role. Marketing is inherently a combination of creativity and data analysis. It requires the ability to take a brief and transpose it into a campaign concept and strategic vision; to develop the imagery or copy that consumers will see and engage with; and to understand and translate the often-complex interplay between different audiences which may not have an immediately obvious connection. Finally, there is the need to analyse the data and amend the setup in response. Of these areas, the role AI plays in data analysis and optimisation is clear and straightforward, and there are many valuable opportunities for AI to step in – although, as I will discuss later, there are still limitations even in this context. The analytical elements of marketing arguably can and should involve AI for efficiency purposes; however, AI can’t employ the more social and emotional elements which are so important in marketing.

What exactly is AI and what can it do? 

IBM defines AI as combining computer science with robust data sets to enable problem-solving: using computers to understand human intelligence, without the biological limitations, and recognising that intelligence is not limited to book learning.

AI can process data and analyse problems in a functional way, which makes it well placed to process huge data sets and evaluate data-based problems and challenges – something many of us already know well. What AI (currently) cannot process or produce are the elements surrounding human cognition which make us fundamentally different from computers: social awareness, ethics, comprehension of context, empathy and emotional intelligence. In the context of media and marketing, emotional intelligence is a hugely important consideration in what we do. Defined as the ability to understand and manage your own emotions, emotional intelligence also includes the ability to recognise and influence the emotions of those around you. It comprises self-awareness, self-management, social awareness and relationship management. Do we really believe that emotion can be attributed to a data set? And if AI can’t do emotion, what impact does that have on its usability?

What does this mean for the use of AI? 

In the media industry, emotions are hugely influential in a successful marketing campaign, and this is achieved through both brand creative (images, videos, audio) and content. Establishing an emotional connection between consumers and brands through their marketing can result in greater advocacy, trust and loyalty (The Drum).

The increased use of user-generated content (UGC) is further testimony to the impact that ‘real’ and ‘genuine’ content and adverts have on consumers. As Forbes states, UGC does so well because people like to feel part of a community. Can you develop and convey the same sense of community using artificial intelligence, which has no experience of, or relationship with, anyone? Adjectives like “authentic”, “genuine”, “trustworthy” and “honest” are constantly flying around when we discuss marketing, from both a branding and a performance perspective. Sprout Social talks about building brand authenticity and its importance. Thinking about it, can AI, or the output it produces, ever be considered real or genuine? It can’t call on life experiences or emotions, so if a brand uses AI to develop copy or content, can it still be considered authentic?

This also prompts the question of how it feels when you know something is AI-generated: would you still have the same opinion of an advert or piece of content if you knew it was AI-generated as you did when you thought it was created by a human being? Again, this will largely depend on context. If something is purely factual, perhaps using AI isn’t an issue; but if it is supposed to represent something ‘human’, like an expression of trust, opinion or emotion, then AI probably isn’t the best route. It is unlikely to come as a surprise that studies have shown that 86% of people prefer talking to a human being rather than to AI. So, when we look at the role of humans in marketing, there remains an important role in creative and content development as well as in the strategic elements.

But what about in the context of data? 

As mentioned earlier, AI does have a recognised and established role in data processing and analysis, but there is still value in human intervention and in the impact humans have on the effectiveness of AI. The basis of this is the necessity for solid data input. AI still requires input, be that in the form of numerical data or guidance through instructions; it is not yet capable of taking a problem with no context and delivering a result.

A great analogy for this is cooking. A recipe tells you how to complete a set of steps to create a meal, but the quality and type of ingredients you use will fundamentally affect the outcome: its taste, and possibly its look. So whilst what you produce might be edible, when you are using up whatever ingredients you have knocking about, or using non-premium ingredients, the result is seldom as good as it would be with top-quality ingredients (or maybe that says more about my culinary skills). The same principle applies to using AI in data processing and optimisation. To get the best output, you need to use the most relevant and suitable data, and it must reach the necessary quality thresholds. So even when using AI in data processing, there is a valid role for humans in evaluating data quality, to ensure the data is relevant and of high enough quality before it goes into the system. Any data set will produce an output; the question is whether it was the right data in the first place – and if you are not sure of that, how do you trust the quality of the output?

Evaluating data input relies on a variety of elements, including evaluating context, to establish data quality and to judge whether that data is the right data to use in the first place. This can be based on factors such as where the data came from and the processes used to collect it. As we have already discussed, context is not something AI is good at comprehending or evaluating, so again there is arguably a necessity for human intervention to ensure that the technology is being fed the right data. And finally, accountability poses yet another consideration: if the AI is left to its own devices and issues or challenges occur, who is accountable for its output?
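To make that human-in-the-loop role concrete, here is a minimal sketch of the kind of data-quality gate an analyst might put in front of an automated optimiser. It is written in Python, and the field names and thresholds (MAX_MISSING_RATE, MAX_AGE_DAYS) are hypothetical – the specific checks are illustrative, not a prescribed standard.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical thresholds a human analyst would set and periodically review.
MAX_MISSING_RATE = 0.05   # reject if more than 5% of rows lack a value
MAX_AGE_DAYS = 30         # reject data older than the campaign window

@dataclass
class Record:
    clicks: int | None
    conversions: int | None
    collected_at: datetime

def passes_quality_gate(records: list[Record]) -> tuple[bool, list[str]]:
    """Return (ok, reasons) so a human can see *why* a data set was rejected."""
    reasons: list[str] = []
    if not records:
        return False, ["empty data set"]

    # Completeness: how many rows are missing the fields the optimiser needs?
    missing = sum(1 for r in records if r.clicks is None or r.conversions is None)
    if missing / len(records) > MAX_MISSING_RATE:
        reasons.append(f"{missing}/{len(records)} rows have missing fields")

    # Freshness: stale data can describe an audience that no longer exists.
    oldest = min(r.collected_at for r in records)
    if datetime.now() - oldest > timedelta(days=MAX_AGE_DAYS):
        reasons.append("data set contains stale records")

    return (not reasons), reasons
```

The point is not these particular checks, but that a person with context chooses the thresholds and reads the rejection reasons before any automated optimisation is allowed to run.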

But what does AI think about it? 

To wrap things up, I wanted to share ChatGPT’s own thoughts on the topic, and this is what it had to say…

While AI can provide many benefits, it is essential to recognize the value of emotion, empathy, and experience that humans bring to the table. In situations that require emotional intelligence, such as counselling or customer service, humans are better equipped to provide empathy and understanding to the person on the other end. Additionally, humans have experiences that cannot be replicated by AI, such as cultural background, upbringing, and personal biases. These experiences can play a crucial role in decision-making, problem-solving, and innovation. 

This is a fascinating topic and one that will only continue to evolve. We may find in years to come that AI is able to develop self-awareness and empathy, but that is a bit of a scary thought. I am a big advocate of the use of AI in data processing and the benefits it brings to analysing and evaluating large data sets; however, I am also a strong believer in the value and importance of human direction and guidance to maximise the output AI can deliver.