Retail catalog marketing is an inexact, imprecise science.
Let's assume that a major American retail brand sends you a catalog on April 1. Let's also assume that your small business purchases from this major American retail brand on the 15th of every month, regardless of marketing activity.
Did the catalog cause you to purchase merchandise?
The answer is probably "no".
The catalog may have influenced the merchandise you purchased. The catalog may have caused you to spend more than you normally would have. The catalog may have caused you to spend less than you normally would have.
But you would have purchased merchandise anyway, no matter what. You always buy something from this brand on the 15th of the month.
Now let's pretend you are the Database Marketing Executive at this major American retail brand. Your job is to measure the effectiveness of this retail catalog marketing effort. Using the tools and techniques available to the database marketers, let's see if you would decide to mail this sample customer future catalogs.
Methodology = Mail And Holdout Groups: Do Not Mail This Customer A Catalog
This is a classic direct marketing strategy, practiced for more than a century (and maybe for centuries). When measuring effectiveness by mail and holdout groups, we'd learn that this customer would purchase regardless of catalog marketing. Therefore, the segment this customer belongs to is not considered a "responder".
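The mail/holdout arithmetic can be sketched in a few lines. The group sizes and response counts below are hypothetical, chosen only to illustrate how incremental lift is computed from the two groups:

```python
# Hypothetical mail/holdout test: compare the response rate of a randomly
# assigned mailed group against a held-out (unmailed) control group.

def response_rate(buyers: int, group_size: int) -> float:
    """Fraction of the group that purchased during the test window."""
    return buyers / group_size

# Assumed test results -- illustrative numbers only.
mailed_rate = response_rate(buyers=800, group_size=100_000)   # 0.8%
holdout_rate = response_rate(buyers=500, group_size=100_000)  # 0.5%

# Incremental lift attributable to the catalog: purchases that would
# NOT have happened without the mailing. Our sample customer, who buys
# on the 15th regardless, contributes equally to both groups and so
# washes out of this calculation.
lift = mailed_rate - holdout_rate
print(f"Incremental response rate: {lift:.2%}")
```

The key design choice is random assignment: only then does the holdout group's behavior estimate what the mailed group would have done without the catalog.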
Methodology = Pattern Detection: Do Not Mail This Customer A Catalog
Pattern detection suggests that this customer buys on the 15th of every month. The database marketing executive learns that marketing doesn't influence this customer. Therefore, this individual customer would not be considered a responder.
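A simple version of this kind of pattern detection can be sketched as a check over a customer's purchase history. The helper name, the six-month threshold, and the sample data are all assumptions for illustration:

```python
from collections import Counter
from datetime import date

def habitual_purchase_day(purchase_dates, min_purchases=6):
    """Return the day of month a customer habitually buys on, if any.

    Hypothetical rule: flag a customer as habitual when one day of the
    month accounts for every purchase in a history of at least
    `min_purchases` transactions; otherwise return None.
    """
    if len(purchase_dates) < min_purchases:
        return None
    days = Counter(d.day for d in purchase_dates)
    day, count = days.most_common(1)[0]
    return day if count == len(purchase_dates) else None

# Hypothetical history: a purchase on the 15th of each month, Jan-Jun 2008.
history = [date(2008, month, 15) for month in range(1, 7)]
print(habitual_purchase_day(history))  # 15
```

A customer flagged this way buys on schedule with or without marketing, which is exactly why the database marketing executive would not count this individual as a responder.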
Methodology = Matchback Analytics: Mail This Customer A Catalog
Matchback analytics, the kind offered by major list processing corporations, co-ops, and data compilers, match purchases within a window of time to a marketing activity. Let's say that the matchback window is three weeks (oftentimes, the matchback window is something silly, like ninety days or six months). Any retail purchase within three weeks of the catalog mailing is attributed to the catalog mailing. Therefore, this individual customer would be considered a responder. Here's a little secret: matchback analytics grossly overstate the effectiveness of most retail marketing activities. You've been warned!
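The matchback logic above reduces to a window check. A minimal sketch, assuming the three-week window from the example (the function name and dates are hypothetical):

```python
from datetime import date, timedelta

MATCHBACK_WINDOW = timedelta(weeks=3)  # often much longer in practice

def attributed_to_catalog(mail_date, purchase_date):
    """Credit the catalog with any purchase inside the matchback window."""
    return mail_date <= purchase_date <= mail_date + MATCHBACK_WINDOW

# The sample customer: catalog mailed April 1, habitual purchase April 15.
print(attributed_to_catalog(date(2008, 4, 1), date(2008, 4, 15)))  # True
# The purchase would have happened anyway, yet matchback credits the catalog.
```

Notice the sketch never asks whether the purchase was incremental; that omission is the source of the overstatement the article warns about.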
Methodology = Brand Marketing: Mail This Customer A Catalog
All too often, retail catalog marketing falls into the brand marketing arena. In other words, a budget is set, say $1,000,000. The database marketing team is asked to mail a million customers, to use up the entire budget. The database marketing team executes the strategy. In this case, if our sample customer buys every month, the customer is a "good" customer, and will receive this catalog. This is the most common scenario in retail catalog marketing --- the CMO determines a budget, the CMO determines the marketing tactics that will be employed, and the database marketing executive picks the best customers for any given strategy. In some instances, rogue database marketers set up tests to determine if the strategy actually worked or not. I've executed this rogue strategy myself --- I wanted to understand how much money my company was losing. For the most part, however, the effectiveness of the mailing isn't even measured.
Retail catalog marketing is an inexact, imprecise science. The corporate culture, the quality of information captured in the customer database, and the measurement technique used by the database marketing team determine whether you will receive a retail catalog from your favorite American retail brand.
How does your company execute measurement of retail catalog marketing activities?
Helping CEOs Understand How Customers Interact With Advertising, Products, Brands, and Channels
April 26, 2008
Retail Catalog Marketing
Most knowledgeable CMOs use the holdout as the measurement of choice. But even this method has its problems.
For example, if the hold out segment responds at .5% and the catalog segment responds at .8%, then the logical conclusion is that .3% was the actual lift attributable to the catalog.
But if the hold out received a catalog one or two months ago, then the customers in this segment are still getting some lift over time. So the actual attribution may be a lot higher.
So the best hold out is the one that receives no catalogs for at least 12 months. Then the attribution to the catalog mailings should be more reliable.
How do you set aside your hold outs to get the most accurate reading?
I don't agree with the argument, because both the holdout group and the mailed group received catalogs equally one to two months ago, thereby offsetting the effect of previous catalogs.
That being said, I prefer longer-term tests.
True... to a point. You won't see what impact not mailing the catalog has on the holdout group over time. All you can conclude is what mailing a single catalog did relative to a non-mailed group over a very short period.
Your evaluation here may be based on short term lift without looking at the cumulative effect of mailing versus abandoning the catalog.
Short term evaluations mask cumulative impact. The conclusion might be to not mail to a given group ever again based on a single catalog wave.
I think your logic matches my comment about preferring long-term tests.
For folks reading this series of comments, it is important to pay attention to what Mr. Grigg suggests.
The most valuable information you'll obtain from testing comes from tests executed over a long period of time.