So, my fellow e-mail marketers, the vast majority of whom act honestly and run opt-in campaigns with integrity, let's consider the following:
Let's pretend that 100 customers abandon a shopping cart on Monday. On Tuesday, you send a targeted e-mail campaign, and you observe the following statistics:
- 30 customers click through the e-mail campaign, and 50% of those individuals buy something, meaning that 15 of the 100 customers (15%) purchased because of the campaign.
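For those who like to see the arithmetic spelled out, here is a minimal sketch of that example; the quantities are simply the hypothetical figures above.

```python
# Hypothetical cart-abandonment campaign from the example above.
abandoners = 100                # customers who abandoned a cart on Monday
clickers = 30                   # customers who clicked through Tuesday's e-mail
buy_rate_among_clickers = 0.50  # half of the clickers buy something

buyers = clickers * buy_rate_among_clickers  # 15 customers
purchase_rate = buyers / abandoners          # 0.15, i.e. 15% of abandoners
non_buyers = abandoners - buyers             # the 85 we rarely analyze

print(f"{buyers:.0f} buyers ({purchase_rate:.0%}); {non_buyers:.0f} customers did not purchase")
```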
One of the challenges of e-mail marketing is that e-mail marketers like you and me are used to measuring "positives". We are driven to measure positive outcomes. Our metrics are calibrated to highlight anything we do that is good.
But what about the 85% who did not purchase? What if we angered 25 of those 85 customers, and they never come back and buy from us again because of our marketing program? Are we measuring this important KPI? Probably not ... because it is truly hard to measure negatives, isn't it?
There are three things we can do to prove that shopping cart abandonment e-mail campaigns are good for us, and good for the customer.
- Execute mail/holdout group tests for the e-mail campaign (a quick sketch of the math follows this list). If 15 of 100 customers purchase in the shopping cart abandonment e-mail group, and 11 of 100 customers purchase in the holdout group, then we got an incremental 4 customers to purchase. 4 is still better than 0, right? But we do need to measure the incrementality of our marketing activities, don't we? We cannot take credit for orders that would have happened anyway.
- Follow the mail/holdout groups for a year. See if, at the end of twelve months (or even three months), the group that received this type of marketing campaign spent any additional money. If so, good, it means that as a whole, the campaigns are working. But what if the groups have equal performance when measured over the long term? If this happens, then we are simply shifting demand, we're not actually creating demand.
- Quickly identify customers who do not interact with these campaigns, and create a field in your database so that you don't necessarily send these campaigns to that audience.
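Here is a quick sketch of the mail/holdout math from the first bullet, using the example numbers above.

```python
# Mail/holdout incrementality, using the illustrative counts from the bullet above.
mail_size, mail_buyers = 100, 15        # received the abandonment e-mail
holdout_size, holdout_buyers = 100, 11  # received no e-mail at all

incremental_rate = mail_buyers / mail_size - holdout_buyers / holdout_size  # 0.04
incremental_buyers = incremental_rate * mail_size                           # 4 customers the campaign created

print(f"Incremental buyers per 100 mailed: {incremental_buyers:.0f}")
print(f"Buyers we cannot take credit for: {holdout_buyers} (they would have purchased anyway)")
```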
If these campaigns are truly creating demand, the long-term mail/holdout comparison should show improvements like these:
- An increase in the annual customer retention rate, say from 44% to 47%.
- An increase in the annual customer reactivation rate, say from 13% to 15%.
- An increase in orders per retained/reactivated customer, from 2.25 to 2.35 as an example, measured annually.
- An increase in average order value, from $125 to maybe $132, measured annually.
- An increase in new customers, measured on an annual basis.
- An increase in customer profitability, measured on an annual basis.
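As a simplified illustration of how just two of those lifts compound, using the example numbers in the list (and setting aside the retention and reactivation improvements for the moment):

```python
# Annual spend per retained/reactivated customer = orders per year x average order value.
orders_before, orders_after = 2.25, 2.35
aov_before, aov_after = 125.00, 132.00

spend_before = orders_before * aov_before  # $281.25 per customer per year
spend_after = orders_after * aov_after     # $310.20 per customer per year
lift = spend_after / spend_before - 1      # roughly a 10% annual lift

print(f"${spend_before:.2f} -> ${spend_after:.2f} per customer per year ({lift:.1%} lift)")
```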
And guess what? The long-term testing is just as likely to prove that this marketing program is worth more than traditional metrics suggest as it is to prove that it is worth less. When you convert a customer to a purchase, their future value increases significantly, so the testing may show that this style of marketing is essential.
Let's have a balanced perspective ... marketing works positively for some customers, and negatively for others. The sum of the two can be measured via testing. That is what I'm advocating in this article: summing the positive, negative, and incremental outcomes. Focusing on only half of the metric set is misleading.
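In sketch form, that summing is nothing more than a long-term spend comparison between the mail group and the holdout group. The twelve-month spend figures below are hypothetical placeholders, not real results.

```python
# One number captures the customers we helped AND the customers we angered:
# average twelve-month spend, mail group vs. holdout group (placeholder values).
mail_spend_per_customer = 92.00     # customers who received the abandonment campaigns
holdout_spend_per_customer = 88.50  # customers held out of the campaigns

net_value = mail_spend_per_customer - holdout_spend_per_customer

if net_value > 0:
    print(f"Campaigns create ${net_value:.2f} per customer: positives outweigh negatives.")
elif net_value < 0:
    print(f"Campaigns destroy ${-net_value:.2f} per customer: negatives outweigh positives.")
else:
    print("Equal performance: we're shifting demand, not creating it.")
```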
We can do this kind of testing!