These articles have been written by the latest cohort of the Practice Makes UnPerfect programme – a course that helps women find and finesse their public voices.
By Gizem Unal, Marketing Operations Manager at Permutive
According to a study by HBR, only 3% of companies’ data meets basic quality standards, even though they’re collecting more information about their customers than ever before, and powerful data analysis and visualization software is widely accessible. This is alarming, because decisions made from bad data are extremely costly – a Gartner study found that the average financial impact of poor data quality on organizations is $9.7 million per year.
When data is unreliable, people quickly lose faith in it and fall back on their intuition to make decisions. According to research from Columbia Business School, states of uncertainty increase people’s reliance on their gut feelings to make judgments and decisions. However, when you put gut feeling at the center of the decision-making process, it becomes dangerous very quickly. We take our idea and run with it without testing our assumptions first. We think we know, but quite often we don’t know – we just assume. We fall into the trap of believing what we want to believe. We design a campaign that we assume our target market wants to see, or implement a strategy or process that we think will work, without digging for more evidence or validating our ideas first.
No matter how skilled we are, we can’t rely purely on our knowledge and intuition to come up with the best ideas. Looking at our strategies, websites and campaigns every day, we can’t see them the same way our customers do. Analysing historical data (where data quality is acceptable) gives us invaluable insights into patterns and trends. But can the data we already have tell us how our customers will react to innovations? Not really. So, how do we actually make good decisions when we only know the past behaviour of our customers? This is where experimentation comes into the picture.
Experimentation is the process of asking questions, trying out new ideas, allowing those ideas to fail, and trying again. It is about gathering data. It is the engine that drives innovation. Most companies get value from running experiments and embedding an experimentation culture.
Experimentation helps us gather reliable data at speed. We can use experimentation to develop and run simple tests of our hunches to see if we’re on the right track. We still need to try to uncover useful patterns by analysing data, but we need to do it fast and take an experimental approach. Test any findings against a control group to see whether the impact is worth building on. And as one director at Booking.com said, “If the test tells you that the header of the website should be pink, then it should be pink. You always follow the test.” (HBR).
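Comparing a variant against a control group usually comes down to a simple statistical test. As a minimal sketch (the conversion numbers below are hypothetical, and the two-proportion z-test is just one common choice for this kind of comparison):

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    conv_a / n_a: conversions and visitors in the control group
    conv_b / n_b: conversions and visitors in the variant group
    Returns the z statistic and an approximate p-value (normal approximation).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of "no difference"
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF (math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: control converts 50/1000 visitors, variant 80/1000
z, p = two_proportion_z_test(50, 1000, 80, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

If the p-value comes out below your chosen significance level (0.05 is conventional), the variant’s lift is unlikely to be noise – that is the “you always follow the test” moment.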
Uncertainty is unavoidable in marketing, whether you’re working at a startup or a large corporation. There will always be times when you have to hedge your bets with a strategy or campaign. Experimentation helps us navigate the unavoidable uncertainty that is part of any innovative process and mitigate risk. You need to think like a scientist – identify the question, outline your hypothesis, design an experiment, evaluate the results, optimise and repeat.
There are various easily accessible models (A/B tests, multivariable tests, heat maps, user testing, etc.) and tools out there, and inspirational experimentation stories from companies like Amazon and Booking.com. Unfortunately, most of us don’t have access to that kind of high-volume website traffic to test every single variation on our website, or a team dedicated to experimentation. I once designed a website experiment to understand the impact of a page headline. The sample size calculator told me that I needed to run the experiment for 389 days or increase my sample size. Running a webpage test that long doesn’t make sense for various reasons, and in most cases you won’t be able to increase your sample size. Most people think that if you have a small sample size you can’t run experiments, but this is a common misconception. The key limitation is that you can only detect large differences: the larger the difference, the smaller the sample size you need. So, instead of playing with a single variable, I established a proof of concept and changed several variables at once, in the combination I believed was most likely to get the result. Fortunately, in a business context we are often most concerned about these big differences – differences our customers are likely to notice.
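The relationship between effect size and sample size is easy to see with a standard power calculation. As a rough sketch (using the normal-approximation formula for a two-proportion test, with the usual z-values for 5% significance and 80% power; the conversion rates are hypothetical):

```python
import math

def sample_size_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per group to detect a change from
    conversion rate p1 to p2 (two-sided alpha=0.05, 80% power,
    normal approximation)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    effect = abs(p1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# A small lift (2% -> 2.5%) needs a very large sample per group...
small_lift = sample_size_per_group(0.02, 0.025)
# ...while a big lift (2% -> 4%) is detectable with far fewer visitors.
big_lift = sample_size_per_group(0.02, 0.04)
print(small_lift, big_lift)
```

Doubling the conversion rate needs roughly a tenth of the traffic that a modest lift does, which is why low-traffic sites are better served by bold, combined changes than by tweaking one small variable at a time.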
In his book ‘Experimentation Works: The Surprising Power of Business Experiments‘, Stefan Thomke reminds us that even big companies like Google, Amazon and Booking.com all started small. Even as startups, they started out with experiments and built their capabilities gradually over time. The more you experiment, the more hypotheses you generate from your experiments’ findings. The largest experimenters began by running only a few experiments per year – and most companies still run very few.
Experimentation is not a magic wand. Most experiments fail, and that’s how it should be. But the ones that work can make a big difference to our key performance metrics. You mitigate the risk of failed experiments and limit their potential negative consequences by defining a clear purpose and scope at the experiment design stage.
And don’t forget, conducting the experiment is just the beginning – this is an iterative process. We’ll always make optimistic predictions about it, but it will take longer and cost more money than we anticipated. This is hard work and there are no shortcuts. We’ll learn from each iteration, act on what we learned, and reach our goal in the end.
Even where there isn’t sufficient data, decisions should not be based on gut feeling alone. If they can be put to a test, they should be.