This week's question comes at us from a different direction. I spoke at #Vircomm2014 in London last week. A great experience. Mind changing. Strategy enhancing. One of the questions was fascinating: it came from a college professor who spoke earlier in the day, and he asked it of another researcher who was wrapping up his presentation. Here was the question, from one researcher to another:
- "You mentioned that customers who joined a community at "Brand X" spend eight times as much as other customers. And yet, Mr. Hillstrom shared earlier that when you measure the incremental value of any marketing tactic, like joining a community or joining a loyalty program, customer value seldom increases by more than 10%. How do you explain the difference in measurement philosophy and outcome? Who is right?"
Ah HA!
I love it when we statisticians use methods that appear to satisfy all necessary assumptions, and yet yield highly biased results. I only know this because I made mistakes for eons before I figured out just how wrong I was. And having said that, I don't have a right answer. I just know that the accepted best practice is wrong.
When measuring the value of an activity, researchers typically rely on one of three measurement techniques, each carrying a different level of bias.
- High Bias = Compare those who are part of a group or marketing tactic against those who are not. Terribly biased. This would be like measuring the political beliefs of those who call in to Rush Limbaugh's radio show and comparing them against everybody else. This is the methodology that is used 90% of the time.
- Moderate Bias = Compare those who join a program at the time they join vs. those who look equal at that time and do not join. This is the method used by smart researchers (like the individual who was presenting at the conference). You equalize all customers, then you measure the "self-selected" activity against all other customers. This method is very good. It is also biased. There is something about the customer who self-selects into a community or a loyalty program that the act of joining, by itself, does not capture. When using this style of measurement, we always show a positive outcome - always, and often a highly positive outcome. I have yet to see this method applied and result in a negative outcome. Are we suggesting that we, as marketers, never make mistakes, and never do anything that fails to deliver a sales increase? The measurement tactic is biased because we don't accurately capture the motivations behind why a customer joins a community or a loyalty program, and as a result, we overstate the importance of the behavior.
- Low Bias = My method (hopefully; maybe I'm wrong, too). My method is still biased, but I've removed much of the self-selection problem. I don't analyze customers at the exact point in time when they do something; I freeze/segment customers at a later point in time, then compare each customer against similar customers who are not in a program. Let's say that we start a community program or a loyalty program on July 1, 2013. I will freeze my file as of January 1, 2014, create a segmentation variable for the customers who joined the program during 2013, along with other RFM-centric variables as of 1/1/2014. Then these variables are entered into a model measuring spend during January 2014 (see the sketch after this list). This method usually shows that the incremental value of any marketing program is in the 10% range, maybe lower. Yes, there are a million biases here, but you avoid the self-selection bias that occurs in the other two methodologies.
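To make the freeze-and-model idea concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: a hypothetical orders DataFrame of transactions (customer_id, order_date, demand columns) and a hypothetical joiners_2013 set of customers who joined the program during 2013. The model could be any regression you trust; ordinary least squares is just the simplest stand-in.

```python
import pandas as pd
import statsmodels.formula.api as smf

FREEZE = pd.Timestamp("2014-01-01")  # file is frozen here, six months after launch

def build_frozen_file(orders: pd.DataFrame, joiners: set) -> pd.DataFrame:
    """One row per customer: RFM variables as of the freeze date, a flag for
    2013 program joiners, and spend during January 2014. Column names are
    hypothetical -- substitute your own."""
    past = orders[orders.order_date < FREEZE]
    future = orders[(orders.order_date >= FREEZE) &
                    (orders.order_date < FREEZE + pd.DateOffset(months=1))]

    # RFM-centric variables, computed only from behavior before the freeze.
    rfm = past.groupby("customer_id").agg(
        recency_days=("order_date", lambda d: (FREEZE - d.max()).days),
        frequency=("order_date", "count"),
        monetary=("demand", "sum"),
    )
    # Segmentation variable: did this customer join the program during 2013?
    rfm["joined_2013"] = rfm.index.isin(joiners).astype(int)
    # Outcome: spend during January 2014 (zero if the customer didn't buy).
    rfm["jan_2014_spend"] = future.groupby("customer_id")["demand"].sum()
    rfm["jan_2014_spend"] = rfm["jan_2014_spend"].fillna(0.0)
    return rfm.reset_index()

frozen = build_frozen_file(orders, joiners_2013)

# Model January 2014 spend as a function of the program flag plus the RFM
# variables; the joined_2013 coefficient is the incremental estimate, net of
# how recent, frequent, and valuable the customer already was.
model = smf.ols(
    "jan_2014_spend ~ joined_2013 + recency_days + frequency + monetary",
    data=frozen,
).fit()
print(model.params["joined_2013"])
```

The point of the setup is that the joined_2013 coefficient is estimated against customers who look the same on recency, frequency, and monetary value at the freeze date, not against everybody. Whatever the RFM variables fail to capture about why a customer joined is the bias that remains.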
Now, let's be perfectly honest: the person at the session did a great job of reducing bias in his analysis; at least 90% of the work I see out there doesn't go as far as this individual did. He tried very hard to eliminate bias. Very hard.
Here's our challenge, folks. With the high-bias and moderate-bias methodologies, we will always show that any marketing activity - any activity - has a positive return on investment, and frequently, a grossly overstated return on investment. The outcome does not pass the smell test: if all these programs worked, then all companies would use the programs, and all companies would show sales gains of +20% or +50% per year.
That never happens.
That's how we know that the methods are biased.