April 21, 2014

Gold, Green, Blue, and Red

I am always amazed at how infrequently catalogers analyze the performance of individual spreads.

Here's something I was introduced to at Lands' End, back in 1992 (yes, 1992). We calculated the profitability of every spread, and assigned each spread a color code based on variable profit:

  • GOLD = 30% variable profit or greater.
  • GREEN = 20% to 29% variable profit.
  • BLUE = 10% to 19% variable profit.
  • RED = below 10% variable profit.
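If you want to replicate the rule, here is a minimal sketch in Python, assuming you have already computed each spread's variable profit percentage; the spread names and figures below are hypothetical.

```python
# Minimal sketch of the color-coding rule described above.
# Assumes variable profit percent per spread is already computed;
# spread names and figures are hypothetical.

def color_code(variable_profit_pct: float) -> str:
    """Map a spread's variable profit percentage to a color code."""
    if variable_profit_pct >= 30:
        return "GOLD"
    if variable_profit_pct >= 20:
        return "GREEN"
    if variable_profit_pct >= 10:
        return "BLUE"
    return "RED"

# Hypothetical spreads, keyed by page range.
spreads = {
    "pages 2-3": 34.0,
    "pages 28-29": 6.5,
    "pages 46-47": 8.0,
}

for pages, pct in spreads.items():
    print(f"{pages}: {pct:.1f}% variable profit -> {color_code(pct)}")
```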

You give the image above a quick review, and you know immediately what worked and what did not work in the catalog.

  • Major success on pages 2-27.
  • A disaster on pages 28-35.
  • Another disaster on pages 46-55.

We learned that the first twenty pages were critical to the success of the catalog. If those pages worked (GOLD, GREEN), then the catalog worked. Series of spreads that did not work frequently aligned with terrible creative, inventory problems, or categories that were poorly merchandised.

This type of analysis takes all the pressure off the circulation team. You can't argue that the circulation team "mailed the wrong customers" - the wrong customers don't explain why pages 2-27 did well while pages 28-35 were simply awful. We put accountability squarely on the individuals responsible for the merchandise presented, and for the creative presentation of that merchandise.

Share with the audience the measurement technique you use to identify good/bad spreads. Leave a comment - or send me an email message (kevinh@minethatdata.com).

4 comments:

  1. Like this article a lot, thanks Kevin - we've only ever measured product/page turnover. Just wondered if you have any tips for measuring revenue for products that feature in catalogues and are sold in store (we don't know who made the purchase off the back of receiving a catalogue, as we don't run a loyalty system)? Traditionally our mailing has incorporated anyone within a close distance to a store. Any pointers on measuring the data, or do we look to roll out a full-blown loyalty system in the hope of capturing better data? And which of your books do you recommend I start with? Just seen they're available to download on Kindle ;-)

  2. Most retailers find some way to identify store purchases and loyal customers - it does not have to be a full-blown loyalty system.

    Small retailers will test mailing catalogs - one store gets normal catalog mailings, while another store gets no catalog mailings for a month or a quarter. At the end of the test period, measure the difference in comp store sales performance between stores where customers were mailed catalogs, and stores where customers were not mailed catalogs. That incremental difference is the amount of sales the catalog drove to the store.
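    To make the arithmetic concrete, here is a hypothetical sketch of that comparison; the store groups and comp sales figures are invented for illustration.

    ```python
    # Hypothetical sketch of the mail / no-mail store test described above.
    # Comp sales indices (this year vs. last year) are invented for illustration.

    mailed = {"Store A": 1.06, "Store B": 1.04}    # stores whose customers received catalogs
    holdout = {"Store C": 1.01, "Store D": 0.99}   # stores with catalog mailings suppressed

    avg_mailed = sum(mailed.values()) / len(mailed)
    avg_holdout = sum(holdout.values()) / len(holdout)

    # The incremental difference is the amount of comp-store sales the catalog drove.
    incremental_lift = avg_mailed - avg_holdout
    print(f"Incremental comp-store lift attributable to the catalog: {incremental_lift:.1%}")
    ```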

  3. Kevin, you should not be shocked that when building mobile apps (Android or iPhone), we have something similar. They are called "heat maps". If we see lots of "page turns" (or taps on the 'next' button), you know you've got a loser. Surprisingly, startup tech companies look at "page viewed rankings", which are useless. (Yeah, they go under.) We can measure linger time on a page, but that is unreliable, because many Android tools do not measure the angle of the device - like when you tilt the screen to show someone something. (Does this make sense to you?)

  4. Yup, know all about heat maps, good thoughts!


