Interviews, insight & analysis on digital media & marketing

How we defended privacy… and ended up with bad ads and no privacy.

By Andrius Misiūnas, Senior product manager, Adform

These articles have been written by the latest cohort of the Practice Makes UnPerfect programme – a course that helps people find and finesse their public voices.

I love fishing. I post about my fishing trips every time I’m lucky enough to go. And before I do, I tend to buy a few new hooks and nets online. This makes me an absolute dream for fishing retailers and their marketing departments. 

What does my propensity to brag about my fishing trips mean from a data perspective? Well… when I post my pictures, I share this ‘data’ knowing it will be seen by other people. In fact, I expect it. But when I browse for hooks and nets, those marketers likely profile me as a fishing hobbyist, and that data is later used to show me targeted – or personalised – ads.

On the surface, it doesn’t sound too bad if I get an ad personalised to my hobbies every now and then. So why would I ever object to tracking? It’s for a good reason, isn’t it?

This is the very basic argument for online data usage, while the wider privacy debate rumbles on across the industry and society. Very simply, the data a company gathers from consumers can take three forms:

  1. Technical information that is sent automatically by simply using a browser.
  2. The data you create to share yourself – your photos, videos or texts. 
  3. And then there are the advertising cookies – those that enable tracking of a user across the web.

This is all very straightforward when I browse for simple products or services, and when all the technical components work as intended. It’s simple when a fishing company wants to sell me some rods before my next trip. But the internet is more complicated than that, and fishing is not the real target of privacy policies and regulation. When profiling is based on mere product or service preferences, the user is likely to see ads that cater to their interests – and, as a downside, may never learn about products for hobbies other than fishing.

Unfortunately, it is just as easy to profile a user on their religious, political, or social views as it is to track their interest in fishing hooks and rods, and this is the darker side of advertising, enabled primarily by precise targeting. Political advertising has made headlines in recent years. As researcher S. Milano and colleagues found, accurate targeting of people creates “filter bubbles” or “echo chambers” – meaning every ad you see online tends to communicate a unified message, thus reducing your chances of seeing alternative views.

Let’s take an example. Say you wanted to impart imperialistic ideas to your audience. You might choose an audience that already reads right-wing, perhaps even radical, articles. Then you’d target these audiences with ads that promote such values, leading people to read more articles on a new world order and ultimately convincing the audience that your truth is the only truth there is, as all the information they see is geared towards supporting the same ideology. That’s when things start to get really dangerous. All of a sudden I’m no longer fishing – I’m hunting endangered sharks in the Pacific Ocean!

Researchers have suggested a method to balance this. In short: less accurate targeting. This would ensure that the audience reached includes people outside the intended target group, so the messages (ads) are seen by a wider variety of people. This, in turn, would make the audience itself part of the regulatory process, in the hope that harmful messages would meet opposition and be reported to regulators. That’s a lot of responsibility to place on real people!

And what this lacks is a way to convey the risks and possibilities to the actual consumers who own and share their data. Thus, what we see is a growing focus on regulating the sale and use of data, while choice and control sit entirely on the consumer’s side. We have yet to see a functional, informative way for users to choose and control how their data is or will be used that doesn’t try to downplay the power of that choice.

We’ve all heard of GDPR and CCPA, and more recently of CPRA, VCDPA, and CPA. Lots of acronyms that all mean the same thing: legislators are finally working to define what counts as personal data, how companies are allowed to handle it, and what control the user should have.

But we need an urgent reality check. The laws that are created will quickly be worked around so the ad industry can maintain the status quo. This has already happened with the cookie consent choices you see every time you visit a webpage. Even the initiatives from browser makers (Chrome) are designed to preserve the existing functionality of showing you relevant ads, and ads for products you have just browsed. For as long as the ad industry moves faster than regulators, this will be the case.

And that is how we end up with bad ads and no privacy – at least in the eyes of everyday internet users. What the ad industry must do instead is insist on a more transparent approach to targeting, educate, regain the trust of the end user, and deliver on the promise of ads that are useful rather than repetitive and corrupt.