Liam Brennan, Global Director of Innovation at MediaCom, is NDA’s new monthly columnist. One of our industry’s preeminent thinkers in innovation, Liam works with the agency’s biggest clients on innovation and digital transformation strategies.
How long is a piece of string?
“Well, it’s about 1mm thick…”
“This string is 20% longer than that piece of string over there…”
“I’ve previously seen a piece of string 30cm long, so it should be around the same length…”
“I disregard your traditional ways of measuring string, and have developed my own framework for string measurement…”
This question may be rhetorical, but I’m sure you would agree that these are all terrible answers.
Assuming one can measure said piece of string, it’s easy to express its length in metric units (or imperial for our American friends) by using a ruler or tape measure and quantifying it through the standardised and trusted International System of Units (SI).
Using a suitable, quantifiable metric that is understood by the broader populace also allows us to compare the string’s length with other objects, or indeed with other pieces of string past and present.
So why in New Digital Age am I ranting about how to correctly measure a piece of string? Well, it’s because many digital practitioners have a measurement problem. Or to be more specific, an inability to quantify appropriate success metrics.
I used the string example to draw parallels with recent conversations I’ve had with digital practitioners across the media and marketing spectrum — agencies, brands and digital platforms — regarding how they report the success of activity.
Swap ‘length’ for a typical marketing outcome such as brand awareness, leads or sales and the simple string problem becomes a multi-million-pound marketing problem. How we measure and quantify our work affects how media and marketing activity is planned, deployed and optimised. Ultimately, it dictates the partners we work with and where large brand budgets flow.
‘Digital’ is at a critical crossroads right now. Consumption is high, platform spend is higher than many analogue channels, and the increasing opportunity around ‘digitised’ ATL is huge. But much digital activity has gone relatively unscrutinised for a long time, and brands are understandably pulling back activity in some areas.
As a fifteen-year ‘digital’ veteran, I witnessed a great shift at the end of the last decade towards an over-reliance on a scientific approach to advertising. Data, measurement and mathematics have grown in both quantity and importance, and the ability to measure at a much more granular level — in terms of both the volume and the frequency of trackable data being passed back — is a big selling point.
But we must never forget that quality beats quantity when it comes to measurement. Just because something can’t be tracked to a desired level doesn’t mean that it isn’t working.
This is best exemplified by the view harboured by many digital practitioners that TV doesn’t ‘work’ because it can’t be measured to the same level of granularity as, say, digital video. This is a Luddite (and untrue) belief, and somewhat ironic given that a great deal of digital activity is measured in the same panel-based fashion by brands and agencies alike.
Prioritising whatever gives the greatest data volume and granularity not only risks championing underperforming partners and channels; it also encourages a culture of optimising to internal processes rather than outcomes, grading work on improvements to anchored specialist metrics and particulars of delivery rather than on the value being generated.
For example, a typical ‘consumer data targeting’ strategy may produce a strong 15% uplift in conversion rates when compared with broadly targeted buys. But when factoring in the cost of the data, technology and other fees, there may well be a 30% lift in base costs to deliver that smaller conversion improvement.
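The arithmetic behind that example is worth making explicit, because a headline conversion uplift can hide a worse unit cost. Here is a rough back-of-the-envelope sketch; the figures are the illustrative ones above, indexed to a baseline of 100, not real campaign data:

```python
# Back-of-the-envelope comparison of cost per conversion.
# Figures are illustrative, matching the hypothetical example above.

def cost_per_conversion(total_cost, conversions):
    """Total spend divided by conversions delivered."""
    return total_cost / conversions

# Broadly targeted baseline: index both cost and conversions to 100.
base_cpc = cost_per_conversion(100, 100)

# Data-targeted buy: +15% conversions, but +30% base costs
# once data, technology and other fees are included.
targeted_cpc = cost_per_conversion(130, 115)

change = (targeted_cpc / base_cpc - 1) * 100
print(f"Cost per conversion change: {change:+.0f}%")  # roughly +13%
```

In other words, despite the 15% conversion uplift, each conversion ends up costing around 13% more once the full fee stack is counted.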
Of course, ‘programmatic’ approaches can increase efficiencies and enable improved creative effectiveness, but looking solely at a metric like conversion improvement only tells a small part of a larger story.
In an ideal world we could measure desired campaign outcomes in the same fashion as we measure clicks and impressions. Sadly, this is generally not the case, but we can most certainly measure to a proxy that has been found to either correlate with, or heavily influence, the outcome that needs to be tracked.
All channels, digital or otherwise, that play the same strategic role should ideally be measured to the same KPI. If one can provide deeper levels of tracking or insight, then fantastic, and if metrics need to be adjusted to factor quality (e.g. attention-based metrics) then that can be easily factored in when everyone speaks the same language.
It is time to drop ‘digital metrics’, and to stop prioritising whatever gives the greatest data volume or granularity at the expense of suitability. Digital practitioners need to be better at measuring what matters, whether that is reach, sales, or indeed, the length of a piece of string.