To err is human; to test and uncover what works, divine.
We’re all human, but our testing platforms are not. They’re built on statistical calculations and standard deviations. Even so, humans do the interpreting, and mistakes can still creep in, even on the mathematical side of things, if we let them.
Mistake #1
Don’t Let Your Testing Platform Be Smarter Than You
It’s easy to default to the numbers our testing platform gives us once a test is live. Oftentimes a test shows great improvement right away. In fact, a “winner” at a 95% confidence level after 3 days is not uncommon in my experience.
When this happens, resist the urge to end your test.
Anything less than a full week, or even two weeks, is too short a run to truly declare one version a winner. There is simply too much variation in traffic over a week’s time that you’d miss if you didn’t let your test run a full week, then another just to make sure the results still hold.
Don’t always trust your testing tool’s data. Sometimes even the best tools take only confidence level into account and ignore sample size.
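To see why sample size matters as much as confidence level, here’s a minimal sketch of the standard normal-approximation formula for how many visitors each variation needs before a test can reliably detect a given lift. The function name and default parameters are my own for illustration, not from any particular testing platform:

```python
from math import ceil, sqrt
from statistics import NormalDist

def required_sample_size(baseline_rate, min_detectable_lift,
                         alpha=0.05, power=0.8):
    """Visitors needed per variation to detect a relative lift in
    conversion rate (two-sided z-test, normal approximation).
    Illustrative helper, not a testing-platform API."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_detectable_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 at 95%
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 at 80% power
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return ceil(n)

# 3% baseline conversion, hoping to detect a 10% relative lift:
# the answer is tens of thousands of visitors per variation,
# far more than most tests see in 3 days.
print(required_sample_size(0.03, 0.10))
```

Note how the required sample shrinks as the lift you’re hunting gets bigger, which is exactly why small improvements need long, patient tests.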
Mistake #2
Conversion Is Different From Revenue
One of the most expensive mistakes a conversion rate optimization expert can make is to focus entirely on conversion rate and forget to tie in actual revenue and profit. This mistake often steers conversion rate insights in the wrong direction and toward bad conclusions that cost you serious money.
All testing platforms are set up to measure conversion rate improvement, because it’s the one common denominator metric for success that fits a broad mold, but it’s usually not the most important metric to optimize for.
Adding an element to a test treatment that increases conversion rate by 40% may look like the clear winner, but if average order value was negatively affected by the winning version, what you thought was the winner was not.
Some other relationships in data to watch out for include:
- Price testing: More people bought, but at a lower price – profit did not increase
- Call-to-action cart button testing: More people initiated checkout, but abandonment stayed the same because the real problem lies in the funnel
- Merchandising testing: More people bought, but items per sale were lower – profit did not increase
- Pre-checked email opt-in testing: More people signed up for email, but reported your messages as spam because they signed up unwittingly
- Email price, promotions or coupon testing: More people purchased, but a percentage would have purchased anyway without the discount
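The fix for the 40%-winner trap above is to measure revenue per visitor, which blends conversion rate and average order value into one number. A minimal sketch, with a hypothetical helper and made-up figures:

```python
def revenue_per_visitor(visitors, orders, total_revenue):
    """Revenue per visitor = conversion rate x average order value.
    Hypothetical helper for illustration, not a platform metric API."""
    conversion_rate = orders / visitors
    avg_order_value = total_revenue / orders
    return conversion_rate * avg_order_value

# Control: 2% conversion at a $100 average order
control = revenue_per_visitor(10_000, 200, 20_000)    # $2.00 per visitor
# Treatment: conversion up 40%, but average order dropped to $65
treatment = revenue_per_visitor(10_000, 280, 18_200)  # $1.82 per visitor
print(control, treatment)
```

By conversion rate alone the treatment wins by 40%; by revenue per visitor, the “winner” loses 9 cents on every visitor.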
Mistake #3
Thinking Your Results Will Hold Over a Long Period of Time
Yes, all the experts will tell you that the increase from your test will have a much larger impact over time, resulting in more revenue from the lift gained in one test.
Unfortunately that’s not true.
The increases in tests don’t usually hold over the long run. Over time, test gains regress toward the mean. Sometimes that’s because the test never truly reached a winner: either there’s too much overlap in conversion rate between the control and the treatment, or the sample size wasn’t large enough and the test was stopped before enough people had passed through each test funnel.
The biggest reason, however, is wide variation in traffic, or a change in the traffic mix between the time the test ran and the time after it.
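One quick way to check the overlap problem is to put a confidence interval around each variation’s conversion rate and see whether the intervals overlap. A minimal sketch using the normal approximation (the helper and the numbers are illustrative, not from any testing tool):

```python
from math import sqrt
from statistics import NormalDist

def conversion_ci(conversions, visitors, confidence=0.95):
    """Normal-approximation confidence interval for a conversion rate.
    Illustrative helper, not a testing-platform API."""
    rate = conversions / visitors
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    margin = z * sqrt(rate * (1 - rate) / visitors)
    return (rate - margin, rate + margin)

control = conversion_ci(200, 10_000)    # roughly 2.0% +/- margin
treatment = conversion_ci(225, 10_000)  # roughly 2.25% +/- margin
overlaps = treatment[0] < control[1]
print(control, treatment, overlaps)
```

Here the treatment’s rate looks 12.5% better, yet the intervals still overlap, a sign the “win” may well regress once more traffic flows through.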
Most tests pool all traffic sources into one test, and sometimes you have no choice because you need all the traffic to run the test. But here’s what’s happening in a case like this. Imagine several small streams of water, each one a different traffic source or channel: one might be PPC, another could be house email traffic. All of these smaller streams connect to a large river, your test, and get stirred up and mixed together when they enter it.
The mix of traffic channels varies across 2 dimensions, both during the test and after it, when you expect the test winner to hold, which is why it rarely does.
These 2 dimensions are volume and motivation.
The volume, or amount, of each type of traffic is never consistent. There may be more organic visits at a certain time of year because of higher demand, or because of some outside influence like a major news story or a spike in word of mouth or general buzz around your product, offer, or category.
Motivation also differs depending on where the traffic is coming from. Not all traffic is the same quality, and motivation level is also affected by time of year and seasonality. Some products are simply more sought after at certain times of the year, like an Elmo doll around Christmas or a box of chocolates in February.
The best way to counter this is to segment your test traffic: create or duplicate tests and run each one on a different traffic source. Not only will you adjust for traffic mix, but you’ll also gain more insight into how your hypothesis is affected by keeping each traffic source clean and not mixing the streams.
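To make the stream-mixing point concrete, here’s a minimal sketch that computes conversion lift per traffic source instead of pooling everything. The data structure and numbers are made up for illustration:

```python
def lift_by_segment(results):
    """Relative conversion lift per traffic source.
    `results` maps segment name to
    (control_visitors, control_conversions, test_visitors, test_conversions).
    Illustrative structure, not a testing-platform export format."""
    lifts = {}
    for segment, (cv, cc, tv, tc) in results.items():
        control_rate = cc / cv
        test_rate = tc / tv
        lifts[segment] = (test_rate - control_rate) / control_rate
    return lifts

data = {
    "ppc":   (5_000, 100, 5_000, 130),  # 2.0% -> 2.6%: a +30% lift
    "email": (3_000, 150, 3_000, 135),  # 5.0% -> 4.5%: a -10% drop
}
print(lift_by_segment(data))
```

Pooled together, these two streams would show a modest overall win and hide the fact that the treatment actively hurts email traffic, exactly the kind of insight that segmenting preserves.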
Now that you know the top 3 mistakes even the conversion rate experts make, how will this change your testing strategy for the year?
Knowing these 3 mistakes has obvious implications for how you buy traffic, which tests you run and when, and the hypothesis behind each test.
Happy converting.
Discover the 3 funnels that can help your health supplement business succeed.
Listen to the Health Supplement Business Mastery Podcast for dietary supplement entrepreneurs and marketers.