TESTING: WHYS AND HOWS
First published on RetailOnlineIntegration.com blog February 2014
© 2014 Susan J. McIntyre
PATIENT: “Doc, testing in a catalog is a pain and it takes a long time to get the results back. Do I need to actually test, or can I just go with my best marketing instincts or common industry wisdom?”
DOCTOR: “It's very tempting to skip testing in a catalog. But I've seen too many ‘great marketing instinct’ ideas and ‘common wisdom’ ideas crash and burn. It's much better to test first.”
TRIM SIZE TEST DELIVERED SURPRISING RESULTS
Common wisdom is that a full-size catalog (around 8x10.5") will always beat a slim-jim (around 6x10.5"). Not always so. One actual test (and it was a well-constructed test) showed absolutely no difference in results between the two trim sizes, but a big difference in cost in favor of the slim-jim. So don't always believe common wisdom—you need to test to know for sure what works for your catalog and your customers.
WHAT ELSE TO TEST?
Tests I've seen that paid off big: Covers. Offers. Page count. Order form. Photos. Density. Creative. And of course, lists. “Creative” and “density” are the most expensive to test because you need to create two entirely separate designs, and pay for plate changes on every page. A page count test can be done by adding an insert (like 4 more pages) that offers marginal products that didn't make it into the main book.
It's easy and affordable to test covers—it's a single plate change on one side of the sheet. And it can result in a big sales increase. One cover test pitted a beautiful image with messages against the same image without messages. Messages won by 40%. Retested. Same results. That's not an unusual difference—other types of cover tests have had similar big-difference results.
EASY OFFER TESTS
If you're just testing offers—which can often (but not always) be done in a small amount of space—check with your printer for the cost difference at your print quantities between plate changes, dot whacks, bind-in cards and dry-release cards to see what's most economical for you. For example, if you can fit your entire offer on a dot (“Save 20% through 3/31/14 - enter code SAVE20”) dots could be your best way to test.
DON'T ROLL OUT WITHOUT STATISTICALLY SIGNIFICANT RESULTS
Any test needs to deliver a minimum of 50 orders per test segment (100 is better). If not, you're just looking at random results that aren't predictive. True story: a cataloger made a massive list test mailing to (despite many cautions) very small quantities of many lists. Because the mailings to each list were small, so were the resulting order counts. Ignoring the statistical insignificance of the results, the “winners” were rolled out. Roll-out results bore no relation to test results, ending in big losses.
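The arithmetic behind that rule of thumb is simple: a panel's expected order count is its mail quantity times its expected response rate, so you can back into the minimum quantity before you mail. A minimal sketch (the 50- and 100-order thresholds come from the rule above; the 2% response rate in the example is an assumed figure, not from the article):

```python
import math

def min_mail_quantity(expected_response_rate, min_orders=50):
    """Mail quantity a test panel needs to expect at least
    `min_orders` orders (50 is the floor; 100 is safer)."""
    if not 0 < expected_response_rate <= 1:
        raise ValueError("response rate must be a fraction between 0 and 1")
    # quantity * rate >= min_orders  =>  quantity >= min_orders / rate
    return math.ceil(min_orders / expected_response_rate)

# At an assumed 2% response rate, a panel needs 2,500 names to
# expect 50 orders, or 5,000 names for the safer 100-order target.
print(min_mail_quantity(0.02))       # 2500
print(min_mail_quantity(0.02, 100))  # 5000
```

Run the numbers the other way on the cautionary tale above: mail only 1,000 names per list at 2% and you expect just 20 orders per panel—well under the 50-order floor, which is exactly how "winners" turn out to be noise.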
TEST TWICE BEFORE ROLLING OUT
In real life, rolling out to large quantities doesn't always deliver results that track with the test quantities. It's safer to retest to confirm the first test. If you're fairly confident in the results, you can retest at bigger quantities than in your first test.
DON'T BET THE FARM ON A TEST
It's not necessary to do “true A/B” splits where half gets the test and half gets the control. Especially if management is worried about the test, it's fine to test to, say, 10% and send the control to 90% as long as the test panel is big enough to generate statistically significant results.