Failure and loss come in many different forms. The silver lining is that once you’ve identified the problem, you can usually find the solution. Ergo, this is mostly within your control (emphasis here on mostly).
Here are six key lessons we’ve learnt from failed tests and low performance that you can use to lead your CRM team to future victory. So let’s start with the good stuff…
1) Adopt a clinical approach
No test is devoid of risk, but a test is so much more than a simple pass or fail. The scientific method you learnt in class still rings true today, even in the arena of digital marketing. Conduct research, form a hypothesis, execute experiments (aka test), analyse, and draw your conclusions. Conduct your tests in the right way and you will be able to draw actionable behavioural insights from your analysis.
The question you must always endeavour to answer is: why did this fail? Data, and a hypothesis to prove or disprove, is at the heart of understanding the behaviour that drove the testing outcome.
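To make the hypothesis step concrete, here is a minimal sketch of how you might check whether an A/B test result is statistically meaningful, using a standard two-proportion z-test. The open counts and sample sizes below are entirely hypothetical.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Z-test for the difference between two conversion proportions."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Hypothetical example: opens for a control vs variant subject line
p_a, p_b, z = two_proportion_z_test(conv_a=1800, n_a=10000,
                                    conv_b=1950, n_b=10000)
print(f"control: {p_a:.1%}, variant: {p_b:.1%}, z = {z:.2f}")
# |z| > 1.96 roughly corresponds to significance at the 95% level (two-sided)
```

If the z-score falls short of your threshold, the honest conclusion is “no detectable difference”, which is itself a learning, not a failure.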
2) Use evolutionary and revolutionary testing methodologies
To extract as much knowledge as possible, you have to clearly define whether the test is evolutionary or revolutionary. There are pros and cons of both.
Evolutionary tests mean iterating through small changes for marginal gains. You get smaller incremental improvements, but a far deeper understanding of subscriber behaviour, because you can determine the relationships between success metrics (e.g. open rate up but click-through rate down).
Revolutionary testing, by contrast, can produce larger gains but less insight into what caused the shift in user behaviour. Learn when it is best to use each method, and use the pass and fail insights from evolutionary sprints to inform your revolutionary tests. This creates a more calculated risk with a higher chance of success.
3) Define success
Deciding on success criteria up front allows you to fail fast. Identifying a failing test early minimises its revenue impact and reduces the opportunity cost of delaying your next testing initiative. Run retrospectives and record your successes and failures; this will ensure mistakes are not repeated. An agile methodology lends itself to all of the above.
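One way to make “fail fast” operational is to encode your success criteria as explicit thresholds that get checked at every readout. The criteria names and numbers below are hypothetical; yours should come out of your own planning session.

```python
# Hypothetical pre-agreed success criteria for an email test
CRITERIA = {
    "min_click_through_rate": 0.02,
    "max_unsubscribe_rate": 0.005,
}

def evaluate_test(click_through_rate, unsubscribe_rate):
    """Return a list of criteria breached; empty means the test may continue."""
    breaches = []
    if click_through_rate < CRITERIA["min_click_through_rate"]:
        breaches.append("click-through rate below target")
    if unsubscribe_rate > CRITERIA["max_unsubscribe_rate"]:
        breaches.append("unsubscribe rate above ceiling")
    return breaches

# A mid-flight readout: decide whether to stop early
breaches = evaluate_test(click_through_rate=0.012, unsubscribe_rate=0.007)
if breaches:
    print("Stop early:", "; ".join(breaches))
```

Because the thresholds are agreed before the test starts, the stop decision becomes a check rather than a debate.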
4) Identify relationships
Measuring KPIs is meaningless unless you can identify the relationships, and the causation, between metrics. For example: you are losing 10% of your database per month to unsubscribes. You know the problem, but you don’t yet know the solution, so you have to look for other indicators within your KPI reporting that could explain this pattern of subscriber behaviour. If your open rate has remained the same but your click-through rate has fallen, it would appear that your content strategy is no longer relevant or engaging for your target audience. Could this be the root cause of your spike in unsubscribes?
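A simple way to surface these relationships is to compare month-on-month movement across metrics side by side, so a flat open rate, a falling click-through rate and a rising unsubscribe rate are visible as one pattern. The snapshot figures below are hypothetical.

```python
# Hypothetical monthly KPI snapshots (rates as fractions of delivered emails)
kpis_last_month = {"open_rate": 0.22, "click_through_rate": 0.035,
                   "unsubscribe_rate": 0.004}
kpis_this_month = {"open_rate": 0.22, "click_through_rate": 0.021,
                   "unsubscribe_rate": 0.010}

def kpi_deltas(before, after):
    """Relative change per metric, to show which KPIs moved together."""
    return {k: (after[k] - before[k]) / before[k] for k in before}

for metric, change in kpi_deltas(kpis_last_month, kpis_this_month).items():
    print(f"{metric}: {change:+.0%}")
```

Seen together, the deltas tell the story from the example above: people still open, but they no longer click, and more of them leave.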
5) Churn isn’t always bad
Question whether loss is always a bad thing. Your unsubscribe numbers might be up but your complaint rate might have decreased. We know which action we would prefer.
When your database is growing, net growth matters more than the raw number of subscribers lost. Look at your metrics and your communications and consider: frequency, relevancy and - the acid test - ‘would you want to receive this?’
6) Record and share
Failing to record is planning to fail. If you don’t record your losses and share them with your team, how can you prevent them from happening again? Instil a collaborative culture within your team so that sharing is uninhibited and there are no barriers to knowledge.
Continuous improvement relies on loss, failure and mistakes - not just your own, but those of others too. Failure doesn’t have to be your own for you to learn from it. Look to the wider industry and business press; you can learn from other brands’ losses to shape your future strategy without incurring the cost.