
AI & data privacy: the uncomfortable truth

By Lucas Galan, Head of Data Science Product, CODE Worldwide

Marketers might be forgiven for thinking much of their remit entails staying on top of data privacy laws that seem to be in constant flux. From new technology and forthcoming changes to the use of cookies, to the UK Government’s recent vow to replace GDPR regulations, privacy issues come thick and fast.

As if getting to grips with opt-ins, data privacy notices and expanding digital marketing options isn’t enough, AI is moving the goalposts again. That doesn’t just mean a shift in mindset for marketers. Consumers too must get used to the way their data is managed.

AI underpins many of the tech-based changes to society that we’ve witnessed over the past decade, and which are sure to continue into the future. Smart assistants, spam filters, voice recognition: three innovations that have altered brand-customer interactions, and they feel like only the tip of the iceberg.

It’s fair to say that if data is the new oil of the Information Age, ubiquitous in business operations, then AI is the optimal way to refine it into something valuable.

But the use of AI also has significant implications for privacy. We can view this issue from two main perspectives:

  • privacy implications in the data used for training modern AI;
  • infringement of privacy through advanced AI implementation.

Learning to live with AI’s use of data

From a learning perspective, as more advanced AI technologies are built, they will require increasing amounts of data to learn from. It’s no surprise that the owners of social media platforms are among the biggest players in AI, sometimes utilising vast amounts of user data for training purposes.

Training is the process by which AI ‘learns’ to complete a task, requiring a large quantity of reliable examples to do so; the more complex the task, the more data is required. As such, learning to – for example – dynamically caption images online requires AI to be trained on millions of image and caption pairs. Places where humans have already labelled countless images, like social media platforms, are incredible sources of training data.
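
To make that pattern concrete, here is a minimal, hypothetical sketch of a supervised training loop, assuming PyTorch. The 64-dimensional “image features” and ten “caption classes” are invented toy stand-ins, not a real captioning system, but the learn-from-labelled-examples mechanism is the same one described above.

```python
# A toy supervised training loop, assuming PyTorch is available.
# Feature sizes and "caption classes" are invented for illustration;
# real captioning models are vastly larger and train on millions of pairs.
import torch
from torch import nn

images = torch.randn(1000, 64)          # pretend image features
labels = torch.randint(0, 10, (1000,))  # pretend human-provided labels

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)  # how wrong is the model on the examples?
    loss.backward()                        # work out what to change
    optimiser.step()                       # nudge the model towards the labels
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```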

Although the data is processed by the machine in aggregate rather than viewed on an individual basis, this still raises important questions about consent. While some direct applications, such as facial recognition, are prohibited in certain jurisdictions, others are tolerated – often under the guise of leveraging data anonymously.

AI allows for the synthesis of data to a degree that was previously impossible, giving companies an almost supernatural power to understand human behaviour. In everyday usage, these capabilities are simply described as ‘algorithms’. How does TikTok know exactly what you’ll like? How does Amazon predict what you’ll buy next? How does Instagram know to promote and advertise the products and services you’re most likely to engage with?

By leveraging and connecting thousands of data points and online behaviours (anything from actively liking a post to passively consuming a video all the way to the end) companies are able to build accurate AI models of human behaviour. 
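
As a deliberately simplified illustration of the principle – the signal names and weights below are invented, not any platform’s real system – connecting active and passive behaviours into a preference model can be as straightforward as weighting and aggregating events:

```python
# Hypothetical sketch: turning implicit behavioural signals into
# per-topic preference scores. All signals and weights are invented.
SIGNAL_WEIGHTS = {
    "liked_post": 1.0,       # active signal: the user chose to act
    "watched_to_end": 0.8,   # passive signal: consumption to completion
    "scrolled_past": -0.2,   # passive signal: apparent disinterest
}

def preference_scores(events: list[dict]) -> dict[str, float]:
    """Aggregate weighted behavioural events into per-topic scores."""
    scores: dict[str, float] = {}
    for event in events:
        weight = SIGNAL_WEIGHTS.get(event["signal"], 0.0)
        scores[event["topic"]] = scores.get(event["topic"], 0.0) + weight
    return scores

history = [
    {"signal": "watched_to_end", "topic": "cooking"},
    {"signal": "liked_post", "topic": "cooking"},
    {"signal": "scrolled_past", "topic": "finance"},
]
print(preference_scores(history))  # {'cooking': 1.8, 'finance': -0.2}
```

Scale that from three events to thousands of data points per user, and from hand-picked weights to learned ones, and the “supernatural” accuracy described above starts to look unremarkable.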

For this reason, a growing number of people suspect that companies including Google and Apple are listening to their conversations.

The truth is they aren’t; they don’t need to. It’s simply down to the impressive power of data-driven AI. 

AI’s positive privacy possibilities

Organisations argue that neither the use of someone’s data for training nor the development of advanced algorithms is strictly a breach of personal privacy. This is an uncomfortable truth in the field of AI research: advances in the technology are often built on vast quantities of data that must be sourced at scale from public repositories.

So, what would constitute a breach of privacy? The question is both legal and ethical. Legally, the notoriously long and indigestible legalese of the user agreements that we all accept without reading safeguards companies from any actual breach.

Leveraging large repositories of human data from the internet is a no-brainer, and these datasets are in large part the reason for AI’s rapid evolution. But it’s hard to argue this has been done transparently or with consent. Future endeavours on this front should remedy the situation and even empower users to reap benefits from their data.

In fact, AI need not be wielded exclusively in the way described above and can be a powerful tool for good – provided it is used in the right way.

In terms of content moderation, it is already showing great promise. An AI can police content and interactions ceaselessly and, if trained correctly, without bias. In so doing, the technology could be a potent ally of privacy and individual empowerment.
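
A simple sketch of that idea: a trained classifier scores every post at a scale no human team could match, with people kept in the loop only for borderline cases. The keyword scorer below is a crude, invented stand-in for a real trained model, and the thresholds are illustrative.

```python
# Hedged sketch of AI-assisted moderation. The scorer and thresholds
# are invented; a real deployment would use a trained classifier.
def keyword_toxicity_model(post: str) -> float:
    """Stand-in scorer returning a 0..1 toxicity estimate."""
    flagged = {"scam", "abuse"}
    hits = sum(word in flagged for word in post.lower().split())
    return min(1.0, hits / 2)

def moderate(post: str, score_fn=keyword_toxicity_model) -> str:
    score = score_fn(post)
    if score >= 0.9:
        return "remove"    # confidently harmful: act immediately
    if score >= 0.5:
        return "escalate"  # uncertain: route to a human reviewer
    return "allow"         # the model polices tirelessly at scale

print(moderate("great recipe, thanks for sharing"))  # allow
print(moderate("this is a scam"))                    # escalate
```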

AI could also constitute a big step forward in interactivity and functionality, doing away with the need for traditional data capture – like cookies or CRM records. Cookies and algorithms are currently used to optimise the user experience. By having AI act as a conduit for information retrieval, we might prevent or greatly reduce the need for personal data gathering.

Instead, AI could be leveraged to serve users without needing any previous knowledge of them, assisting their queries smartly and anticipating desires by engaging with them directly.
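
A minimal sketch of that stateless idea, with an invented serve_query helper: everything needed to answer the user travels in the query itself, so there is no profile to build and no cookie to set. Real systems would use far richer retrieval and language models, but the privacy property is the same.

```python
# Hypothetical "stateless" serving: ranking is driven entirely by the
# query in hand, with no stored history or behavioural profile consulted.
def serve_query(query: str, catalogue: list[str]) -> list[str]:
    """Rank catalogue items by word overlap with this query alone."""
    terms = set(query.lower().split())

    def overlap(item: str) -> int:
        return len(terms & set(item.lower().split()))

    return sorted(catalogue, key=overlap, reverse=True)[:3]

print(serve_query(
    "waterproof running shoes",
    ["waterproof trail running shoes", "leather office shoes", "running socks"],
))
```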

Few people who want to change society for the better would argue against a more transparent, efficient internet. One where we do away with the scattergun approach of cookies, ads, competing content, echo chambers and noisy strategies, and replace it with a symbiotic experience where users’ desires are matched by AI with the correct content. What an impact that might make for brands.

Although it’s possible this may never happen to the extent I’ve described, it does point to the neutrality of AI and raise doubt about the clamour that often accompanies its use. When all is said and done, AI’s impact on privacy is largely in the hands of those who wield it.
