THE CATALOG DOCTOR™
ARE YOUR CATALOG TEST RESULTS WORTHLESS? OR WELL WORTHWHILE?
First published on RetailOnlineIntegration.com blog June 2014
© 2014 Susan J. McIntyre
Testing is an excellent way to learn how to maximize sales and minimize costs. But it's critical to construct your tests to deliver results that are both accurate and actionable. A poorly designed test can instead deliver unreadable results and send money down the drain. Here are tips to get clear, actionable results every time.
FOCUS THE TEST ON ONE ELEMENT
A cataloger with "selling" covers (products sold directly from the cover) decided to test a "lifestyle" cover to see if it could lift response. An attractive room setting was photographed for the lifestyle test, then the test and control catalogs were mailed. The results? The control "selling" cover appeared to win roundly.
Wait, there's more.
The "selling" control cover also had a great offer...but the "lifestyle" cover did not. Why not? Because "the offer would have hurt the graphic integrity of the beautiful photo"!
So look again at those results. Did selling products on the cover win? Or did the great offer win? Or did selling from the cover plus the great offer win? No one will ever know. The test results were worthless.
MAKE SURE YOUR SYSTEMS CAN EXECUTE THE TEST
New management at the cataloger questioned whether "all these offers" were really necessary. So they decided to test catalogs without offers against the normal control catalogs with offers. Creative was identical on both versions except that the "control" had the offer and the "test" didn't. Mail quantities were selected to deliver statistically significant order quantities. In other words, this was a well-constructed test.
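How big is big enough? Here's a minimal sketch of the standard two-proportion sample-size arithmetic, in Python. The baseline response rate, the drop worth detecting, and the significance and power settings below are all hypothetical assumptions, not the cataloger's real numbers.

    import math

    def names_per_cell(p_control, p_test, z_alpha=1.96, z_beta=0.84):
        """Names needed in each mail cell to tell p_control from p_test
        (two-sided alpha = 0.05, power = 0.80 by default)."""
        p_bar = (p_control + p_test) / 2
        top = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
               + z_beta * math.sqrt(p_control * (1 - p_control)
                                    + p_test * (1 - p_test))) ** 2
        return math.ceil(top / (p_control - p_test) ** 2)

    # Hypothetical example: a 2.0% control response rate, and we want to
    # catch a drop to 1.6% if the no-offer catalog really does worse.
    print(names_per_cell(0.020, 0.016))  # -> 17322, roughly 17,000 per cell

The point of the arithmetic: the smaller the difference you need to detect, the more names each cell requires, so the "statistically significant quantities" above have to be sized before the mail file is pulled, not after.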
Results? Just about even. Conclusion? "Our customers don't need offers...they'll buy just as much without one." A puzzling conclusion. What was up?
A closer look revealed that anyone who ordered via the web (whether they'd received the offer catalog or the no-offer catalog) saw — and used — the offer.
How did that slip-up happen? Partly poor internal communication. And partly that hiding the offer from customers in the test group was hard to program, while showing the offer to everyone was easy, so IT took the easy route. The test results were worthless due to poor channel communication and internal oversight.
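For what it's worth, the gating itself is usually a small piece of code once the test flag travels with the customer record. A minimal sketch in Python, with hypothetical field and function names (nothing here is from the cataloger's actual system):

    # Suppress the offer online for customers mailed the no-offer test
    # catalog. The "test_cell" field is a hypothetical flag written to
    # the customer record when the mail file was selected.

    def should_show_offer(customer: dict) -> bool:
        """Show the offer only to customers outside the no-offer cell."""
        return customer.get("test_cell") != "NO_OFFER_TEST"

    def render_checkout(customer: dict) -> str:
        if should_show_offer(customer):
            return "Checkout page WITH promotional offer"
        return "Checkout page without offer"

    print(render_checkout({"id": 1, "test_cell": "CONTROL"}))
    print(render_checkout({"id": 2, "test_cell": "NO_OFFER_TEST"}))

The hard part isn't the code; it's making sure the test flag actually flows from the circulation file to the web platform, which is exactly the communication step that failed here.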
A FREQUENCY TEST THAT WORKED
"Do customers really need to get all these catalogs, or can we mail less often?" was the question the new owners asked. So a test was set up to mail some customers all catalogs on the "regular" schedule, and other customers only half as often. The catalog team made sure that the half-as-many-catalogs group didn't miss out on any sales or special offers. This was a well-constructed test.
The catalog team also realized that to make this test work right, the name selection and data processing had to be done differently than normal. That is, the exact same names needed to be flagged and then kept in either the "regular" or "half-as-many" group throughout the entire six months of the test, even if individual names switched their RFM segments. The team worked with IT to make sure that happened.
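One way teams make that stick, sketched below in Python with hypothetical field names: assign the group once at test kickoff (here with a deterministic hash) and honor the stored flag at every mail selection, no matter how the RFM segments shift. This illustrates the idea; it is not the cataloger's actual process.

    import hashlib

    def assign_group(customer_id, test_name="frequency_test"):
        """Assign 'REGULAR' or 'HALF' once, deterministically, at kickoff."""
        digest = hashlib.sha256(f"{test_name}:{customer_id}".encode()).hexdigest()
        return "REGULAR" if int(digest, 16) % 2 == 0 else "HALF"

    def names_to_mail(customers, drop_number):
        """Honor the stored flag at every drop: the HALF group gets
        only every other mailing."""
        for c in customers:
            group = c.setdefault("test_group", assign_group(c["id"]))
            if group == "REGULAR" or drop_number % 2 == 0:
                yield c["id"]

    customers = [{"id": "A100"}, {"id": "B200"}, {"id": "C300"}]
    print(list(names_to_mail(customers, drop_number=1)))  # HALF group skipped
    print(list(names_to_mail(customers, drop_number=2)))  # everyone mailed

If instead the groups were re-selected fresh at each drop from whatever the RFM segments looked like that week, names would drift between cells and the six-month comparison would be meaningless.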
Results? The ROI (Return On Investment) was better with the old regular catalog mailing schedule. The "half-as-many-catalogs" test did reduce costs, but reduced sales and profits even more.
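To see how "lower cost but lower profit" plays out, here's a worked example in Python. Every figure is invented purely for illustration; these are not the cataloger's results.

    def contribution(sales, margin_rate, mail_cost):
        """Gross contribution: product margin minus the cost of mailing."""
        return sales * margin_rate - mail_cost

    # Invented figures for illustration only.
    regular = contribution(sales=500_000, margin_rate=0.45, mail_cost=120_000)
    half = contribution(sales=340_000, margin_rate=0.45, mail_cost=60_000)

    print(f"Regular schedule: ${regular:,.0f}")  # $105,000
    print(f"Half schedule:    ${half:,.0f}")     # $93,000
    # Mailing half as often saved $60,000 in mail cost but gave up
    # $160,000 in sales, so contribution fell despite the savings.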
Trustworthy results? Yes, because the catalog team had thought through all the issues and had worked with the channels and systems teams to make sure no unexpected glitches would occur. And the test quantity was high enough for statistical significance, but low enough to minimize the risk of sales loss during the test period.