11 A/B testing mistakes and how to avoid making them

Poorly done A/B testing will not only yield inaccurate data on what your customers prefer, but it may actually cause you to lose customers. Here are 11 classic mistakes and ways you can avoid them.


Most companies (if not all) have at some point conducted an A/B test, testing two versions of something related to content, design or price to help them improve their landing pages and conversion rates. However, not all A/B tests are created, or conducted, equally.

Indeed, design your A/B test poorly and not only will you not get an accurate assessment of what your customers prefer, but you may actually wind up losing customers.

So what can companies do to ensure their A/B tests are well-designed and yield positive results? Following is a list of the 11 most common (and serious) A/B testing mistakes and what your organization can do to avoid them.

Mistake No. 1: Testing too many elements, or variables, at a time.

“One of the top A/B testing mistakes is having too many test variables,” says Corinne Sklar, CMO, Bluewolf, a global business consulting firm. “For example, having split subject lines and different calls to action in the email body makes it impossible to determine which of the two was the success factor in driving leads.” To avoid this problem, “keep your testing to one variable at a time.” That way, she explains, you will “gain a better understanding of what content strategy is working most with your intended audience.”
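Sklar's single-variable rule is also what makes the results measurable: when only one element differs between the two versions, a standard two-proportion significance test tells you whether the difference in conversions is attributable to that element or to chance. A minimal sketch (the conversion counts below are illustrative, not from the article):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))         # two-sided p-value
    return z, p_value

# Hypothetical numbers: variant B (new call to action) vs. control A
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=158, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05: unlikely to be chance
```

If both the subject line and the call to action had been varied at once, this arithmetic would still produce a p-value, but it could no longer tell you which change drove the difference.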

Mistake No. 2: Testing something that’s obvious or has already been proven to be more effective.

“There’s no reason to test something as simple as ‘Dear Customer’ vs. ‘Dear First Name,’” says Joshua Jones, managing partner, StrategyWise, a global provider of data analytics and business intelligence solutions. “This has already been tested time and again and the results are clear: customization [or personalization] is better. No reason to waste time and resources on the basics when you can test more critical elements, such as price elasticity or feature preferences.”

Mistake No. 3: Testing something insignificant that’s hard to quantify.

“Test apples against oranges first,” says Justin Talerico, founder & CEO, ion interactive, an interactive content software and services provider. “Find the big winner. Then iterate with smaller variations — but never so small that the lift isn't worth the effort. Testing is justified with big wins.”
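Talerico's point that a tiny lift may not be worth the effort can be made concrete: the smaller the lift you are trying to detect, the more traffic the test needs before the result is trustworthy. A rough sample-size sketch using the standard normal approximation (the baseline rate and lifts below are illustrative assumptions):

```python
import math

def sample_size_per_variant(base_rate, relative_lift, z_alpha=1.96, z_power=0.84):
    """Approximate visitors needed per variant to detect a relative lift
    at ~95% confidence with ~80% power (normal approximation)."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    effect = abs(p2 - p1)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / effect ** 2)

# A big win (30% lift) needs far fewer visitors than a marginal one (2% lift)
for lift in (0.30, 0.10, 0.02):
    n = sample_size_per_variant(base_rate=0.05, relative_lift=lift)
    print(f"{lift:.0%} lift: ~{n:,} visitors per variant")
```

With a 5% baseline conversion rate, detecting a 2% relative lift takes hundreds of thousands of visitors per variant, which is why "apples against oranges" tests that chase big wins pay off first.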

Mistake No. 4: Testing something you can’t actually deliver.

“A/B tests can give you eye-opening results with lots of potential,” says Jess Jenkins, digital analyst, LYONSCG, an ecommerce digital agency. “But those insights are useless if you can’t act on what you’ve learned,” she points out. So “make sure you are testing items that are actionable. For example, video content might be your ticket to better engagement but do you have the resources and plan to [do this]?” Before you test something, “ensure that you can deliver what you learn from your tests.”

Mistake No. 5: Testing the wrong thing, or making (false) assumptions.

Sometimes, companies assume something is a problem when it isn’t – or test the wrong thing. “For example, an ecommerce manager may test a $50 vs. a $100 threshold for free shipping, but [he] may never test whether or not the current customers even value free shipping at all,” says Dan Hutmacher, senior digital consultant, LYONSCG. Or they might “test the color of a button without considering its size, shape or location.”
