
Adapting to a new “performance” normal

By Adam Chugg, Head of Big Tech Activations, the7stars

Traditionally, advertising has been about identifying the target audience for your products and services, identifying their key media behaviours, and then buying ads in these places to reach them. Measure, rinse and repeat.

However, increasingly, ad delivery systems driven by machine learning are doing this work for us. Using hundreds of real-time data signals, these platforms serve ads to the right users, in the right place, at the right time, using predictive modelling to drive specific outcomes, such as viewing a video or buying a product. The priority for advertisers is now to provide these platforms with rich outcome data (i.e. conversion tracking) and effective creative assets aligned to the barriers and motivations that drive potential customers to act.

In theory, it should now be simpler than ever to run effective, measurable and accountable marketing activity. At the7stars we've estimated that 74% of media is now ad-tech driven, and ad-tech driven media means access to data and feedback loops that previously didn't exist. Paradoxically, however, these same developments have produced a media landscape that is more complex and difficult to navigate.

At the same time, there are almost too many metrics available to us. The risk of using the wrong metrics, and of connecting data incorrectly or not at all, is high. Such data overload can breed complacency around experimentation and advanced measurement. The temptation to optimise towards the best-looking, most accessible outcomes is always strong, regardless of whether those KPIs are linked to real-world business outcomes.

Whilst working at Meta, behavioural economist Julian Runge conducted research that found only 12.6% of a sample of 6,777 businesses had run randomised controlled experiments (Harvard Business Review). What's more, the same research highlighted that businesses running 15 experiments a year achieve 30% higher ad performance, so the size of the prize is large.

This is where the new challenges of a cookieless future and data privacy can be a force for good: they should prompt advertisers into a broader reassessment of effectiveness and measurement.

Google estimates that 70% of conversion journeys are now lost and unavailable for deterministic attribution. Advertisers are seeing inflation across all platforms, particularly in paid social, as algorithms lose the volume of data signals required to find new prospects as cost-effectively as before.

The infrastructure and frameworks for effective measurement and experimentation therefore need to be at the forefront of campaign planning and execution. There's never been a better time to get this right.

So, with degrading performance in-platform, how do advertisers adopt more robust and holistic measurement frameworks that include controlled experiments? 

Here are my top 5 tips on addressing inflation and a cookieless future in the biddable space:

  1. You absolutely must adopt the privacy-first tracking solutions of the platforms you invest heavily in. For Meta, this means the Conversions API; for Google, consent mode and enhanced conversions.
  2. Get your first-party data in order and your consent policies up to date. If you're not in a position to upload your customer data to the platforms in which you invest heavily, you are at a huge competitive disadvantage. It's all about using your first-party data to plug measurement gaps and improve modelling.
  3. Focus on creative. Estimates from Meta and Google place the impact of creative on campaign performance at between 50% and 70%.
  4. Create a measurement framework that correctly assigns short- and long-term solutions. Don't under-value the impact of brand awareness and consideration by relying on attribution as a long-term measure. A robust framework should use attribution for short-term, in-channel optimisation; cross-channel effectiveness can only be assessed with a combination of attribution, controlled experiments, lift studies and MMM.
  5. Run controlled experiments with every campaign, or as frequently as possible. Beyond the obvious benefit of A/B tests in proving the effectiveness of different targeting and creative, geographic hold-out groups are a great way of assessing the uplift from activity where attribution doesn't give you the full picture. They're also a great way of validating in-platform measurement and lift studies against your own business data.
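On tips 1 and 2: both Meta's Conversions API and Google's enhanced conversions expect customer identifiers such as email addresses to be normalised and SHA-256 hashed before upload. A minimal sketch of that preparation step in Python (the "em" key follows Meta's convention for hashed email; Google uses its own field names, and real uploads involve more fields and consent checks):

```python
import hashlib

def normalize_email(email: str) -> str:
    """Lowercase and strip whitespace, per the platforms' matching guidelines."""
    return email.strip().lower()

def hash_identifier(value: str) -> str:
    """SHA-256 hex digest — the hashed format the platforms expect."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

def prepare_user_data(email: str) -> dict:
    # "em" is Meta's field name for hashed email; illustrative only.
    return {"em": hash_identifier(normalize_email(email))}

payload = prepare_user_data("  Jane.Doe@Example.com ")
```

Hashing on your own infrastructure before sending means the raw identifier never leaves your systems, which is the point of these privacy-first integrations.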
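On tip 5: the arithmetic behind a geographic hold-out test is simple — compare the conversion rate in exposed regions against matched, held-out regions and express the difference as relative lift. A simplified sketch (real geo tests also need matched-market selection and significance testing; the figures below are illustrative):

```python
def incremental_lift(test_conversions: float, test_population: float,
                     control_conversions: float, control_population: float) -> float:
    """Relative uplift of exposed regions versus held-out control regions."""
    test_rate = test_conversions / test_population
    control_rate = control_conversions / control_population
    return (test_rate - control_rate) / control_rate

# Exposed regions: 1,300 conversions per 100,000 people;
# held-out regions: 1,000 per 100,000 -> 0.30, i.e. 30% incremental lift.
lift = incremental_lift(1300, 100_000, 1000, 100_000)
```

Because the control regions saw no activity at all, this measures incrementality directly from your own business data, independent of any platform's attribution model.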

Measurement should not be merely retrospective, an afterthought, or left unchallenged. It's something we should aim to get better at all the time, especially in such a fast-moving landscape. If we're not learning something new about media effectiveness with every campaign we run, then something is wrong.