April 09, 2007

Making Decisions In The Catalog/Online Business

One of the best comments I've received on this blog came yesterday from Graham Hill. Here is his comment about my rejection of the hypothesis that marketing bloggers are largely negative:

"But, as one statistician to another. Are you not in danger of making unfounded generalisations about blogging based upon a hugely inadequate number of observations."

The answer to Graham's question is "YES"! I'm absolutely in danger of doing this! Graham's comment is insightful and correct.

Early in my career, when I was a statistician, my job was to be "right". I made sure that my work was perfect, that my conclusions were rock-solid and air-tight. I was given months to complete a project. Those were good times.

In 1998, I became Director of Circulation at Eddie Bauer. I was a member of the "Catalog Business Team", a group of Directors and VPs responsible for meeting or exceeding budgeted sales and profit goals for the Catalog/Online division.

We met as a team every Wednesday morning.

Questions would come up, questions that required rapid answers. For instance, the Merchandise executive might say "We're killing this business by running Men's merchandise in the first twenty pages of the catalog. Let's stop this practice, and run best-selling Women's merchandise in the first twenty pages."

Maybe we ran Men's merchandise in the front of the past two catalogs, and maybe those two catalogs were ten percent below our expectations, whereas the prior five or six catalogs met expectations. On the surface, the merchandising executive seemed to have a point.

As a statistician, you'd like to run a series of experiments to prove whether Men's merchandise was killing the performance of the book. However, these experiments required many folks in print production and creative to create various versions of the catalog. Once created, it would be close to two months before the print production process was completed and the catalogs were mailed to customers. Another month needed to go by before a proper statistical analysis could be completed.

So, sitting in this meeting, my choices were to recommend a three-month process to test the hypothesis in just one catalog, or to quickly review the past eight catalogs as a team in an ad-hoc, unscientific manner, and make a decision before leaving the room.
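The "three-month" option amounts to a classic split-mail experiment. As a rough sketch of the analysis that would sit at the end of it (the mail quantities and response counts below are hypothetical illustrations, not Eddie Bauer figures), a two-proportion z-test comparing response rates between the two catalog versions might look like this:

```python
import math

def two_proportion_z(resp_a, mailed_a, resp_b, mailed_b):
    """Z statistic for the difference between two response rates,
    using a pooled estimate of the common rate under the null."""
    p_a = resp_a / mailed_a
    p_b = resp_b / mailed_b
    p_pool = (resp_a + resp_b) / (mailed_a + mailed_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / mailed_a + 1 / mailed_b))
    return (p_a - p_b) / se

# Hypothetical split: 100,000 households get Men's merchandise up front,
# 100,000 get best-selling Women's merchandise up front.
z = two_proportion_z(2300, 100_000, 2500, 100_000)
print(round(z, 2))  # |z| > 1.96 would be significant at the 95% level
```

With these illustrative numbers the difference clears the conventional 95% threshold, but the point of the post stands: getting real numbers like these takes three months, while the meeting ends in an hour.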

It requires a lot of patience to learn the balance between making ad-hoc, gut-feel decisions and doing a thorough, accurate statistical analysis. You never fully perfect the balance; you make mistakes, and sometimes you make the right decisions.

The key factor in this is that you "make decisions". Decisions, positive or negative, move a business forward, increase accountability, and reduce red tape.

When I was a statistician, I once met with my marketing Vice President. I wanted a lot of time to do an analysis "the right way". He told me that he'd rather make five decisions with 80% accuracy than one decision with 100% accuracy: at the end of the day, the fast approach yields four correct decisions and one incorrect decision, whereas the "right" approach yields only one correct decision.
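The VP's arithmetic is simply an expected-value calculation, which a few lines make explicit (the decision counts and accuracy figures are his; the framing as expected correct decisions per day is mine):

```python
# Expected number of correct decisions per day under each approach.
fast_decisions, fast_accuracy = 5, 0.80  # five quick, gut-feel calls
slow_decisions, slow_accuracy = 1, 1.00  # one fully analyzed call

fast_correct = fast_decisions * fast_accuracy  # expected correct: 4.0
slow_correct = slow_decisions * slow_accuracy  # expected correct: 1.0
fast_wrong = fast_decisions - fast_correct     # expected wrong: 1.0

print(fast_correct, slow_correct)  # 4.0 1.0
```

The trade, in other words, is one wrong decision per day in exchange for three additional right ones.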

I've tried to balance his viewpoint with my statistical heritage of "being right". I've always admired the leader who is decisive, makes four right and one wrong decision, and takes accountability for the wrong decision.

3 comments:

  1. Kevin

    Despite my challenge to your earlier post, I agree with you 100%.

    As a CRM consultant who is also an interim CRM manager, I am in the rare position of having to eat my own campaign dog-food. That leads to some interesting internal struggles.

The CRM consultant side of me pulls me towards robust marketing experiments with careful designs, control groups and 95% confidence intervals. But the interim CRM manager side wants to run lots of campaigns, to adapt them on the fly and to, well, just get on with it.

My own way around the internal struggle is to develop as much of an understanding as I can of the customers, the client's marketing capabilities and the results it is expecting, and to use these systemic insights to drive forward a number of 'best-guess' activities. Each of the activities is a small real option that I can use to learn more about what works and to improve it the next time round.

    As you say, it is often better to do something approximately right (and to learn from it) than to do nothing precisely wrong. But it is important to at least understand how the world works a bit before venturing out into it too boldly.

    Graham Hill
    Independent CRM Consultant
    Interim CRM Manager

    ReplyDelete
  2. Fantastic post Kevin and Graham, very interesting.

How would you handle analytic integrity when living by the 80% rule? In this numbers-savvy industry, it sometimes feels like everyone wants to be an analyst. Each department has employees who routinely analyze data, reach conclusions, and make recommendations -- oftentimes incorrect conclusions or casual assumptions that are quickly dismissed by your stats women and men.

In our analytics group we've built a reputation of producing a scientifically valid and statistically correct basis for action. When we move towards the 80% rule (believe me, I like the premise), we risk compromising the integrity and trust that has been established. We move towards becoming another "I think" instead of an "I know."

How do you balance the two? What if we asked fewer questions but whole-heartedly went after the statistically sound answers we have?

    ReplyDelete
If your job is in the analytics field, you have no choice but to do great work, with integrity, and to provide an analytical answer to a problem that is accurate and reliable.

As you move from analyst to manager to director to executive, you have to slowly give up some of the analytical rigor in order to make more decisions. You trust that your analytical people applied the rigor necessary to have a good answer --- but a leader must make a lot of decisions on gut feel, experience, and instinct. The leader will take a small sample of data that may not be valid from a statistical standpoint, and will draw conclusions from that limited set of biased and imperfect information.

    So, I'd require the analyst to do a thorough job. As a leader, I trade-off some of the rigor, in order to be able to make more decisions that move the business forward, realizing that some of my decisions will be wrong, and will set the business back a bit.

    ReplyDelete