Well, I never imagined how popular this thing would be!
No, not from a sales standpoint ... but from a CEO standpoint, as the projects are rolling in! If you're a consultant out there, you can work Twitter to death, or you can try old-school techniques like books and direct mail campaigns and e-mail campaigns. The old-school stuff still works!
The techniques in this book still work, too! If you're an average $100,000,000 cataloger that generates $5,000,000 of profit per year, the techniques in this book will help you get to $6,000,000 of profit, per year, plus/minus.
So buy the booklet, and get busy implementing the techniques outlined in the text.
If you don't want to implement this stuff yourself, join the long line of catalogers (50+ in the past four years) who've asked me to execute this type of work for them --- contact me now!
Ever wonder if your Twitter community is headed for unfettered growth? Been concerned that your Twitter community is slumping toward oblivion?
Hillstrom's Hashtag Analytics helps you answer those questions. The methodology allows you to understand what the future trajectory of your Twitter community looks like, and helps outline ways for you to increase engagement among your followers.
This is a crisp, quick 44-page read. You'll also get two FREE spreadsheets that help you quantify the future trajectory of your Twitter-based social media community. Pick your format!
Seems like now is as good a time as any to secure your copy of Hillstrom's 2011 Almanac. Honestly, you don't want to order the book in April and miss out on daily tidbits for three or four months, do you?
We've talked a lot about merchandise in the past month, and for good reason ... so few people talk about it!
Here's a problem we all face. We want customers to buy from us, so we offer free shipping or 20% off, and we marvel at the 15% or 20% increase in demand we generate from the promotion. What we don't see is the impact this has on the average price per item purchased.
Say a customer buys three $40 items after you offer the customer 20% off on orders > $100.
Instead of a $40-per-item customer, you transform the customer to a $32-per-item customer.
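Here's that math as a quick sketch; the three-item order, $40 price point, and 20% discount are the example's numbers.

```python
# The promotion's effect on average price per item, using the example above.
items = 3
full_price = 40.00
discount = 0.20

order_total = items * full_price              # $120 ... qualifies for 20% off orders > $100
promoted_price = full_price * (1 - discount)  # $32 per item
print(f"Average price per item: ${full_price:.2f} at full price, "
      f"${promoted_price:.2f} after the promotion")
```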
Look at the table at the start of this post ... I took price points, and categorized them from cheap to high-end, within a client. Then, I looked at the price points that customers buy from in the future, cheap to high-end.
It doesn't take a rocket scientist to tell you that if a customer buys cheap items in the past, the customer will buy cheap items in the future, and vice versa. This trend holds for customers who have only one historical item in their purchase history ... the trend holds for customers who purchased twenty-five items in their purchase history.
When we measure everything in terms of campaigns or response to catalogs, we miss the subtleties associated with longer-term customer behavior. We make decisions that appear good today, then we can't understand why we can't sell higher-priced, high-margin items in the future!
From time to time, I'm asked to talk about designing tests.
I can honestly say that I wouldn't have my own business, had I (or the teams I led or worked with) not executed somewhere between five hundred and a thousand tests during my career at Lands' End, Eddie Bauer, and Nordstrom. I learned more from 500 tests than I learned from 7,000 days of work at the companies I worked for.
All of the secrets of your business are embedded in tests, not in KPIs or metrics or dashboards or analytical work. Few people actually talk about this fact ... you learn the most when you change things, when you vary your strategy. It turns out that customers react differently, depending upon the marketing plan you implement.

Type of Test: The most common test is the A/B test. You test a control strategy, the strategy you are currently executing, against a new strategy. There are more complex designs. Many folks use Fractional Factorial Designs. My favorite is a Factorial Design, something I learned in college and used at the Garst Seed Company prior to taking a job in 1990 at Lands' End. My favorite test of all time is a 2^7 design we executed at Lands' End in 1993-1994. We tested every combination of seven different business unit mailings for a year (main catalogs yes/no by home catalogs yes/no by men's catalogs yes/no by women's catalogs yes/no by clearance catalogs yes/no by sale inserts in catalogs yes/no by targeted inserts into main catalogs yes/no). We executed this test for a full year ... yes, a full year. The results were mind-blowing. I still have the results of the analysis sitting on my desk; the results were that important, that revolutionary, that spectacular. If you want to divide a room of Executives against each other, provide them the results of a 2^7 factorial design that does not yield test results congruent with existing "best practices".
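For the curious, here's a sketch of what the cell structure of a 2^7 full factorial looks like; the factor names are abbreviated versions of the seven mailing decisions above.

```python
from itertools import product

# A 2^7 full factorial: every yes/no combination of seven mailing
# decisions, 128 test cells in all.
factors = ["main", "home", "mens", "womens", "clearance",
           "sale inserts", "targeted inserts"]
cells = list(product([0, 1], repeat=len(factors)))

print(len(cells))                     # 128 test cells
print(dict(zip(factors, cells[5])))   # one example cell assignment
```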
Sample Size: I'll focus on A/B tests here, as the concepts apply to more complex designs, but become harder to explain. The most commonly asked question is "how big should my sample size be?" The Google Analytics Generation likes to focus on "conversion rate". Most web analytics tools can measure almost anything, but strongly guide the user toward measuring conversion rate. This is a problem. You see, "response" or "conversion" are "1/0" variables, binary in nature. Binary variables don't have a lot of variability associated with them, as the value can only be 1 or 0. As a result, you don't need a lot of customers/outcomes in your sample in order to achieve statistical significance.

However, and especially for those of us in catalog/retail/e-commerce, we're not interested in measuring conversion or response ... no, we want to measure something more important, like $/customer. This is more problematic. Spending varies much more than response (1/0). A customer could spend $50, a customer could spend $100, a customer could spend $200. Now, there are tons of sample size calculators available on the internet, so do a Google search and find one. What is important is that you pre-calculate the variance of customer spend. Here's what you can do. Take your file of 12-month buyers as of, say, October 31. Then, for this audience, calculate the average amount spent, per customer, in November. Most customers will spend $0. Some will spend $50, some $100. Calculate the variance of this $/customer metric. The variance will be used in your sample size calculations. It has been my experience that you need about 4x as many customers in a test sample to detect spending differences as you need to detect response/conversion differences.
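If you'd rather script the calculation than hunt for a calculator, here's a minimal sketch. The November spend distribution below is hypothetical, and the formula is the standard two-sample comparison of means at 95% confidence and 80% power.

```python
import numpy as np

def sample_size_per_group(variance, detectable_difference,
                          z_alpha=1.96, z_beta=0.84):
    """Customers needed per group to detect a given difference in mean
    $/customer, given the pre-calculated variance of spend."""
    return int(np.ceil(2 * variance * (z_alpha + z_beta) ** 2
                       / detectable_difference ** 2))

# Hypothetical November spend for a 12-month buyer file: most customers
# spend $0, a few spend $50, $100, or $200.
spend = np.array([0] * 9000 + [50] * 600 + [100] * 300 + [200] * 100)
print("Mean $/customer:", spend.mean())
print("Variance of spend:", spend.var())
print("Needed per group to detect a $1 difference:",
      sample_size_per_group(spend.var(), 1.0))
```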
Unequal Samples: It is acceptable to have 10,000 in one test group and 25,000 in another test group. Surprisingly, many Executives have a hard time understanding this concept; they cannot conceive of differences between test groups unless the sample sizes are equal. I tend to use equal sample sizes for just this reason ... no need to get into an argument about why one group has 10,000 and another 25,000, even if you were trying to minimize the opportunity cost of a test and generate more profit for your company.
Opportunity Cost: I once had a boss who gave me a "testing budget" ... I could execute any test I wanted, as long as I didn't hurt annual sales by more than 3%. This is a very reasonable request from a forward-thinking CEO. Your best bet is to calculate the amount of demand/sales you'll lose by not executing the normal strategy. If you are going to hold out 50,000 e-mail addresses for one campaign, you will likely lose 50,000 * $0.20 = $10,000 demand, and maybe $3,500 profit, by not sending the e-mail campaign to 50,000 customers. Be honest and up-front about this. At the end of the year, show your Executive team how much sales and profit were lost by your various testing strategies, then show your Executive team how much you learned, quantifying the increase in sales and profit on an annual basis, given what your tests taught you. Make sure that what you learned offsets what you spent to execute tests!
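Here's the holdout math as a tiny sketch. The $0.20 of demand per e-mail comes from the example above; the 35% profit flow-through is an assumed rate.

```python
# Quantifying the opportunity cost of an e-mail holdout test.
holdout_size = 50_000
demand_per_email = 0.20   # demand per e-mail, from the example above
profit_rate = 0.35        # assumed flow-through of demand to profit

lost_demand = holdout_size * demand_per_email   # $10,000
lost_profit = lost_demand * profit_rate         # $3,500
print(f"Lost demand: ${lost_demand:,.0f} ... lost profit: ${lost_profit:,.0f}")
```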
Statistical Significance: I hate this concept, and I'm a trained statistician! Everything in marketing is a probability, not a certainty. Executives will teach you this. I once watched a statistician tell a Chief Marketing Officer not to implement a strategy because it failed a significance test at a 95% level. What was interesting is that the test passed at a 94% level. Focus on the probabilities associated with your test. If there is a 94% chance that one sample outperformed another sample, be willing to take a stand ... who wouldn't want to be "right" 94 times out of 100? Executives are used to making decisions that have a 52% chance of success, so when you present them with something that will be better than an existing strategy 88 times out of 100, you're going to find a receptive audience.
Testing and Timing, Part 1: The Google Analytics Generation have been trained to stop after sampling 2,200 customers (or 3,800 customers or however many) because at that stage, a statistically significant outcome occurs ... this is a problem, because you want to be able to measure more things than one simple result. In other words, if you let the test go through 10,000 customers instead of the 2,200 needed to detect statistical significance, you can learn "more". For instance, I like to measure results of tests across various customer segments within a test. When I have 50,000 customers in my sample, I can slice and dice the results across great customers, good customers, lousy customers, new customers! When I have 2,200 customers in my sample, I can only measure results at a macro-level, and that's unsatisfying. So often, test results differ across different customer segments. Set up your tests to measure many customers, and you can measure results across many customer segments!
Testing and Timing, Part 2: The Google Analytics Generation have been taught to iterate rapidly, meaning that they test until they achieve statistical significance, then they move on to another test, rapidly iterating toward an optimized environment. That's a good thing, don't get me wrong. I prefer to complement this strategy by extending the testing window. For instance, if you executed an e-mail holdout test for one of your one-hundred campaigns, you only truly learn what happened in that one test over a three-day window. The results of the test may not hold up in different timeframes, the results of the test may be highly influenced by variability. The longer your test is conducted, the less variability you have in the results, and therefore, the more confident you are in the outcome of your test.
Controlling For Other Factors: I can't tell you how many people get this concept wrong. As long as each test group is randomly selected from the same population, and your random number generator hasn't gone bonkers, you don't have to control for other factors. I'll give you an example. I met a marketer who wanted to do an A/B test in e-mail, and wanted to exclude customers who recently received a 20% off promotion because they would "skew the results of the test". Each test group had an equal number of these customers, so the groups cancel each other out! This e-mail marketing expert, however, disagreed 100%, hypothesizing that the interaction between the previous discount and the new strategy would yield an unexpected outcome that would bias the test. Don't get trapped by this form of lizard logic. As long as your groups are randomly selected from the same population, you're fine.
Sharing Results: Back in 1993 at Lands' End, we "typed-up" all test results, saving them in a black binder. All test results were circulated to all Director/VP/SVP/CXO level individuals, using the same template, so that the results could be evaluated on a common platform. Today, folks use wikis and internal blogs/websites and files on network drives to store results. Storing and sharing results is important. And avoid a mistake I commonly made 20 years ago --- keep your political thoughts out of the test writeup! Don't tell the Creative Director that his idea was "stupid", as verified by the results of the test. Let the data speak for itself!
Retest: Just because you learned in 2001 that catalog marketing to marginal customers yielded poor results doesn't mean that it will yield poor results today. Retest your findings; when you do, you're going to learn a lot about sampling error!
Geek Speak: Don't use it. SAS Programmers from 1990, Business Intelligence Analysts from 2000, and Web Analysts in 2010 all make/made the same mistake. Stay away from the geeky, mathematical details of your test, and focus on the sales, profit, staffing, and workflow outcomes of your test. Your Executive Team cares that you use best practices in your test design; they don't care that a paired t-test yielded a one-tailed outcome that is significant at a p<0.02 level. Your Executive Team does care that the findings of your test yield $129,000 of incremental, annual profit. Your Executive Team does care that the results of the test suggest that 8% of the staff be downsized. So focus on what is important to your Executive Team, using language that your Executive Team understands.
Get Out Of Analytics/Marketing: Study what folks in academia and agriculture and clinical trials do. This is a good way to see the impact of Geek Speak: you'll find their writeups to be incomprehensibly obtuse. Then imagine how an outsider would feel reading your writeups!
Conflict: Not everybody is going to embrace your analysis. You have employees who get to keep their job, as is, by not testing any new strategies. Anytime you prove that the current "best practice" doesn't work as well as a "new practice", you are going to be disliked, doubted, disrespected, dishonored, demeaned, you name it. You will be pelted with every conceivable and illogical argument on the planet. You'll be told you executed the test incorrectly, you'll be told your analysis is wrong, you'll be told that the test was tested at the wrong time of the year to the wrong audience, you'll be told that you are right but that you've been wrong in the past, so maybe you really should stop testing altogether. You'll be banned from meetings. You'll be censured (it happened to me) by an Executive. You may even be fired. When people get political and angry with you, don't get defensive (I've gotten defensive; no good, folks, no good); focus instead on being the "voice of the customer". Just remind everybody that this is how the customer responded; it's as plain and simple as that.
Test Things With A Long Half-Life: This is an important concept that few understand. People will test a black Arial font on a yellow background vs. a blue Times New Roman font on a white background. I'm not saying you shouldn't test this; I'm saying that if you have a limited testing budget, focus your efforts on things that have significant strategic value. Test things that yield outcomes that last for years, not weeks.
Future, Not Past: Instead of talking about the results of the test, talk about what the results mean to the future of your business. The Google Analytics Generation have not been given the tools to forecast five years into the future. Use your test results to illustrate what your business looks like in 2014 because you implemented the findings of a test you executed last week.
Ok, your turn. Use the comments section to publish your tips for test design and analysis!
It may well be that discounts and promotions are what's needed to stimulate business. Unfortunately, the tools needed to analyze whether a promotion is profitable or not aren't always available to the Google Analytics Generation. The savvy Web Analyst needs to go a step further, in order to determine if a promotion is likely to generate profit.
In our example, we're going to pretend the following:
Our promotion is "Take 20% Off Of Your Order, Today Only".
Average order value = $100.
35% of demand converts to profit.
Step 1 = Execute A Test: Ok, I realize almost none of you are going to do this. But if you had done this, you'd know exactly how much business would have happened "organically", without the need of a promotion.
Step 2 = Talk To Finance: Since you didn't execute a test, you'll need to guess how much demand would have happened. Somebody in the Finance department has a forecast for total demand on the day of your promotion. Let's pretend that amount is $100,000.
Step 3 = Measure Sales on Promotion Day: Let's pretend that demand was $140,000 on the day of the promotion.
Step 4 = Calculate Incremental Profit: Here, we measure the difference in profit between $140,000 at 20% off vs. $100,000 at full price.
The $140,000 demand yields $140,000 * 0.35 = $49,000 profit. However, we gave up 20% of the $140,000 revenue, or $28,000, yielding $21,000 profit. Compare that to the full-price scenario: $100,000 * 0.35 = $35,000 profit. The promotion generated $14,000 less short-term profit.
Now, honestly, the CFO folks are going to jump all over me, telling me that there are hundreds of subtleties involved in calculating profit. Go ahead, jump all over me. This is an example, folks, the idea here is to stimulate thought among the Google Analytics Generation. Savvy Web Analysts will work with their CFO to do this analysis and calculate profit.
Another thing to note here. In many cases, companies offer a promotion, and the customer chooses not to use it ... the customer fails to enter the promo code, for instance. So the Savvy Web Analyst will apply a "utilization rate" here, saying that, for instance, 88% of customers utilized the promotion.
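Here's a sketch of Steps 2 through 4 with the utilization rate layered in; all figures come from the example, and the 88% utilization is the illustrative number mentioned above.

```python
# Incremental short-term profit of the promotion, per Steps 2-4,
# adjusted for the share of customers who actually used the promo code.
baseline_demand = 100_000   # Finance's forecast without the promotion
promo_demand = 140_000      # observed demand on promotion day
flow_through = 0.35         # share of demand that converts to profit
discount = 0.20
utilization = 0.88          # share of customers who actually used the code

baseline_profit = baseline_demand * flow_through            # $35,000
promo_profit = (promo_demand * flow_through
                - promo_demand * discount * utilization)    # $49,000 - $24,640
print(f"Full-price profit: ${baseline_profit:,.0f}")
print(f"Promotion profit:  ${promo_profit:,.0f}")
print(f"Incremental short-term profit: ${promo_profit - baseline_profit:,.0f}")
```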
Step 5 = Calculate Incremental New Customers, And Incremental Existing Buyers: This is important. Let's pretend that our average order value was $100 in each case. This means we had 1,400 customers purchase via discount, and we had 1,000 customers who would have purchased at full price. Carefully measure how many customers are new vs. existing.
Discount Example: 400 new customers, 1,000 existing customers.
Full-Price Example: 100 new customers, 900 existing customers.
Step 6 = Know 12-Month Profit By Customer Type: People have been arguing for lifetime value analyses for decades. For the Google Analytics Generation, it's hard to calculate lifetime value with standard web analytics software. The savvy Web Analytics analyst exports data out of existing Web Analytics platforms and analyzes long-term value. I like to use 12-month profit. You use whatever you want to use.
Discount Newbies = $10 of 12-month profit.
Discount Existing Buyers = $15 of incremental, additional 12-month profit. This is the profit you get by converting, say, a three-time buyer into a four-time buyer.
Full-Price Newbies = $15 of 12-month profit.
Full-Price Existing Buyers = $17 of incremental, additional 12-month profit. This is the profit you get by converting, say, a three-time buyer into a four-time buyer.
In this case, the discount/promotion strategy yielded less short-term profit, more long-term profit, but not enough total profit.
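Here's a sketch that ties Steps 4 through 6 together, using the example's customer counts and 12-month profit figures (utilization ignored, as in Step 4).

```python
# Short-term plus 12-month profit for each scenario, using the example's
# figures: (customer count, 12-month profit per customer) by type.
scenarios = {
    "Discount":   {"short_term": 140_000 * 0.35 - 140_000 * 0.20,  # $21,000
                   "new": (400, 10), "existing": (1_000, 15)},
    "Full-Price": {"short_term": 100_000 * 0.35,                   # $35,000
                   "new": (100, 15), "existing": (900, 17)},
}

for name, s in scenarios.items():
    long_term = s["new"][0] * s["new"][1] + s["existing"][0] * s["existing"][1]
    total = s["short_term"] + long_term
    print(f"{name}: short-term ${s['short_term']:,.0f}, "
          f"12-month ${long_term:,.0f}, total ${total:,.0f}")
```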
Again, there are countless experts out there who will take exception with the methodology outlined here. That's ok, those experts should publish their take on this, letting everybody see how they would approach the topic. I'm trying to create a framework here for the Google Analytics Generation to see how one might measure whether discounts and promotions yield profitable outcomes. In this case, there's no denying that the promotion yielded a significant sales increase, but the promotion does not appear to generate enough total profit, short-term plus long-term, to pay for itself.
I can't tell you how many times I've heard catalog-based business leaders and catalog consultants utter this sentence:
"Paid Search doesn't work."
Oh.
It's been my experience that Paid Search performance is directly correlated with the mathematical expertise of the in-house staff managing Paid Search, or the mathematical expertise of the vendor chosen to manage Paid Search (or usually, both). In fact, can't we say that about every one of the micro-channels that we now manage?
Everything works, based on the following conditions.
Talent: The catalog industry seems bent on not acquiring talent, of late. It's one thing to be frugal and thrifty. It's quite another thing to not invest in talent.
Audience: An iPad app is not going to work if it is targeted to a 67-year-old customer. A catalog is not going to work if it is targeted to a 27-year-old customer. Contrary to the punditocracy, channels don't work if they are not aligned with the buying habits of the target audience.
Commitment: Here's something that is sorely missing in the catalog industry. You have to go "all-in", to use a poker analogy. I've heard "... we tried a blog, it didn't work", I've heard "... we tried paid search, it didn't work", I've heard "... we tried e-mail marketing, it doesn't work." I've never heard a cataloger say "... we've tried catalogs, they don't work." Catalogers go "all-in" on catalogs, just spend a day in the corporate office of a catalog brand, and you'll know what I mean. Every meeting is "all-in" on catalog marketing. Staffing levels for the catalog are at a 50-1 or 100-1 ratio compared to other channels. Every channel requires commitment. Channels don't succeed just because you have "multiple channels".
Back to Paid Search. Where do you stand on each axis (talent, audience, commitment)?
In other words, when you offer a specific type of content, you cause sales in sales channels to change. If you speak to a digital audience in a way that is congruent with their use of information, you'll sell merchandise digitally. If you speak to an offline audience in a way that is congruent with their use of information, you'll sell merchandise offline.
By the way, I executed one other creative strategy, just to see what might happen. For the Catalog Marketing PhD booklet, I wrote the following text in the blog post announcing it.
WARNING: DO NOT BUY THIS BOOKLET UNLESS YOU CONSIDER YOURSELF A CATALOG MARKETING EXPERT!
This "warning" caused the following outcome.
An approximate 50% reduction in traffic to the page on Amazon.com.
An approximate doubling of the conversion rate on Amazon.com.
It should come as no surprise to anybody who follows this blog that messaging and content result in changes in response by channel. If this happens in my humble little business, imagine what happens in your sophisticated business. You're analyzing these dynamics, right?!
As I mentioned, Dear Catalog CEOs, I have not forgotten about you!
Today, I am going to offer you one of the biggest profit opportunities you'll find in our industry ... for just $7.95 in print, and digitally for $2.99. But there is a warning ...
WARNING: DO NOT BUY THIS BOOKLET UNLESS YOU CONSIDER YOURSELF A CATALOG MARKETING EXPERT!
Hillstrom's Catalog Marketing PhD offers a doctorate program in multi-channel catalog mailing strategy for highly advanced catalog marketers ... that's you, right?!
This 42-page booklet accurately describes the highly popular methodology I use with my catalog clients to determine the optimal number of catalogs to mail to any customer on an annual basis.
If you take the garden-variety $100,000,000/year catalog/online business that generates $5,000,000 of annual profit, and you apply the techniques outlined in this book, you'll have the potential to increase annual profit by about $1,000,000, plus/minus. There's nothing wrong with that, now is there?!
Here's what you get:
An overview of Digital Profiles, and an illustration of how they are incorporated into the scoring process.
Explanation of the Regression procedure used to score the probability of buying merchandise on an annual basis.
Explanation of the Regression procedure used to predict how much each customer will spend, on an annual basis.
An overview of the Regression procedure used to predict the "Organic Percentage", the most important metric in the catalog industry.
An overview of the Algorithm used to determine the optimal number of catalogs to send to a customer on an annual basis.
An overview of the actual computation used to assign the optimal number of catalogs to send to each individual customer in your database.
Tables that illustrate how we calculate the overall increase in profitability using this methodology.
Ask your friends at Abacus, Experian, your favorite Co-Op, or your favorite List Organization if they are willing to sell you all of their secrets for just $7.95 in print, $2.99 via digital download. They'd never do that!
And yet, I'm making that opportunity available to you today. Pick up your copy now:
We've spent the better part of a month chatting about "Class Of" reporting.
We're looking at merchandise reporting, not customer reporting. Unfortunately, this might be the only place in the direct marketing world where we'll spend time talking about merchandise. You, fortunately, already know that merchandise means everything. Without merchandise, customer service is meaningless. Without merchandise, channels are meaningless. Without merchandise, you don't have any customers. You'd think everybody would spend time talking about merchandise.
Here is another business, evaluated via "Class Of" reporting.
What do you observe, when you review the report? First of all, nearly 45% of sales come from new items ... this is a very different business model than the business we previously analyzed. This business cannot move forward without new items!
Second, there is an 81% correlation, a 65% r-squared, between the productivity of each item and the number of items offered to the customer on an annual basis. As expected, when more items are offered to the customer, productivity per item decreases ... 6,000 items yields us about $12,000 per item, while 12,000 items yields us about $9,500 per item.

Third, there is an 85% correlation, a 72% r-squared, between the productivity of a new item in its first year and its productivity in the second year it is offered. Some of this is due to the number of new items; some of this is due to the fact that if product development suffers, future product productivity suffers.
This business is growing because Management chose to dramatically ramp up the number of items offered to a customer ... this business is not growing because of the inherent productivity of the items offered. This is not uncommon; it is hard to increase the productivity of individual items. The problem, of course, is managing individual items from an inventory perspective. When skus proliferate, lots and lots of things happen to a business, many of which are unpleasant. At some point, you have too many skus, resulting in liquidation problems and significantly reduced profit. Inventory is like produce ... when it goes bad, it's bad!
The analysis was part of a larger series on what I call "Hillstrom's Hashtag Analytics".
You get a lot of feedback when you write content for close to 5,000 people, across the blog and for the folks following on Twitter. As one might expect, you get a lot of positive feedback.
You also get a lot of indifference, and you get a lot of negative feedback.
Here's a generalized view of the feedback:
Classic Database Marketers = Positive.
Classic Direct Marketers = Positive.
Social Media Advocates = Indifference.
Social Media Haters = Indifference and Negative.
My Catalog Marketing Audience = Indifference.
Social Media Agencies = Positive.
Analytics Agencies = Positive and Negative.
Digital Marketers = Negative.
Academia = Negative.
Purveyors of Twitter Analytics = Indifference and Negative.
Web Analysts = Indifference and Negative.
One person simply said the following: "... why are you even doing this?"
I don't care if you think that Social Media is a vapid expression of digital extroversion.
I don't care if you think that Social Media is a self-evident expression of the concept that markets are conversations.
I only care about the data, about the interaction of customers within this channel.
For all of the feedback I've received, all of the criticism and praise, almost nobody commented on the three most important findings:
No direct link was made between engagement and profit.
All anybody wanted to argue about was whether Social Media was a crock of hoo-ha or whether it was the single greatest invention of all time or whether web analysts have already invented the tools necessary to analyze customer/user information.
I did not receive any critical feedback about the data and the findings, the comparison of the two communities, or about profitability.
That's what is wrong with marketing, and analytics, in 2010.
That's why I am doing this.
We are blinded by Social Media theories, hypotheses, and opinions, to the point that we cannot even look at the data and offer an unbiased analysis of the findings. Heck, we don't even want to look at the data, do we? We'd rather have an argument than an evaluation of the findings.
I lived through this, from 1995 - 2005. I watched the catalog industry implode, not because of the viability of the business model, but because of a contempt for the online channel. Online was a religion: you either had faith in the online channel, or you had faith in offline marketing.
What I learned from that experience was that the existing set of tools were incapable of communicating to either audience ... existing tools failed to convince online marketers that offline marketing worked, existing tools failed to convince offline marketers of the new realities and possibilities of online marketing. In short, both parties went their separate ways, both disciplines suffered as a result.
I had to link both disciplines via Multichannel Forensics. By and large, both sides rejected Multichannel Forensics, rejecting the concept that customers interact with products, brands, and channels in ways that are not easily measured by existing tools. Several dozen marketers did figure out that this was important ... I was able to help those companies become more profitable, and I was able to find a way to make a good living in the process.
The same problems exist today. Social Media advocates cannot be convinced of anything other than the fact that Social Media is glorious, and they have their set of metrics to prove their worldview. Social Media critics cannot be convinced of anything other than the fact that Social Media is nothing more than an ego-centric version of digital extroversion, and they have sales metrics to prove their worldview.
I created a framework for having an honest discussion about how customers/users actually behave. I realize that Social Media advocates and Social Media critics will pan the methodology because the methodology doesn't fit their worldview. That won't stop me from continuing my research.
My job is to get you, the marketing expert, to ignore the hype and the criticism and the opinions and the theories and the hypotheses ... my job is to get you to simply focus on actual customer behavior.
Just as important, my job is to offer a roadmap for data integration. In ten years, Social Media data will be fully integrated with our current customer database infrastructure. I realize this is terribly hard to envision today ... go back to 1997, it wasn't easy to conceive that web analytics data and e-mail data could be integrated with the customer data warehouse, yet today, it's commonplace. The same type of data integration will happen with Social Media. Why not try to provide a roadmap for how this will happen? What's so bad about that?
If you read Twitter, you realize there are maybe five things that matter anymore.
Social Media (e.g. Nine Reasons Why Social Brands Capture Increased Mindshare, With Examples From Dell, Comcast, Zappos, and Southwest Airlines ... never mind that only a couple dozen companies generate more than 3% of their sales from social media).
Mobile Marketing (never mind that 72% of cell phone owners don't even own a smart phone, you'll be out of business if you don't have a mobile strategy in place by December 20, 2010).
Google Analytics (Seven Easy And Quick Ways To Segment Visitors And Set Goals That Result In A 2,439% Increase In Conversion Rates ... ever read something like that?).
Conversion and Engagement (Four Critical Tactics That Increase Engagement And Drive Up Conversion Rates, never mind that in so many cases, neither metric is correlated with profit).
Groupon. These folks seemingly invented the concept that if you offer merchandise at half-price, more customers will buy the merchandise, with background cheering supplied by those who haven't ever had the benefit of having to deliver a year-end profit and loss statement.
That's it. According to so many of the Twitterati, that's all that matters anymore. You do anything outside of that narrow box of goodies, and you're a Luddite. Everything else has been banished to the Island of Misfit Toys. Let's look at a few examples:
E-Mail Marketing: In the real world, e-mail doesn't exist anymore. But on the Island of Misfit Toys, residents eagerly await personal and relevant messages from gurus who specialize in targeting and segmentation.
Catalog Marketing: Oh boy. Oh boy. On the Island of Misfit Toys, customers still possess physical mailboxes. Can you believe it? Mailboxes. And people actually walk around in uniforms delivering catalogs to physical mailboxes. Wow.
Television Commercials: In the real world, this is called "unaccountable advertising", which is a fancy way of saying "... my web analytics tool doesn't lead me directly from a paid search click to a purchase when trying to measure television advertising, so measuring it accurately is really hard work, work that is more complicated than three clicks with a mouse, therefore, it is old school and therefore, the advertising couldn't possibly work." On the Island of Misfit Toys, DVRs have yet to be invented, causing the population of the island to respond to Burger King commercials that air during re-runs of Family Ties.
Programming Code: In the real world, Google Analytics solves any imaginable problem, and if Google Analytics cannot solve the problem, the problem never mattered in the first place. On the Island of Misfit Toys, SAS programmers churn out old-school reporting using "proc summary" to identify trends that aren't easily measured by Omniture, Coremetrics, or Google Analytics.
Radio: In the real world, you plug your iPod into your home network and you can hear any song you stole off of the internet. On the Island of Misfit Toys, there are all sorts of over-the-air radio stations that air 23 minutes of commercials per hour. People actually pay the radio stations money to air their commercials, and SAS programmers set up matched-market control groups that allow them to measure the effectiveness of the commercials. Even more curious, a subset of the Island of Misfit Toys play compact discs and long-play records that sound wonderful.
Billboards: In the real world, these are called "Display Ads", and even though only 8% of the population clicks on them, CRM experts are considered rocket scientists if they can squeeze an incremental 5% gain in response out of Display Ads. On the Island of Misfit Toys, there's this ancient technology called a "billboard" ... this ancient technology "displays ads" to folks who drive by the billboards in cars that consume fossil fuels.
Wal-Mart: In the real world, customers eagerly await their daily, personalized, customized discount via Twitter from Groupon. The customer never leaves her seat; she just clicks on the ad and scores a major discount. On the Island of Misfit Toys, there's this big box retailer called "Wal-Mart". Customers have to get into a car and drive to this magical store that offers merchandise that is often 50% cheaper than the prices found at urban stores that are about to go out of business because of Wal-Mart.
Yahoo! and MySpace: In the real world, Google, Facebook and Twitter control users. On the Island of Misfit Toys, people love their Yahoo! e-mail accounts, and aren't afraid to exchange Christmas greetings via MySpace. In the real world, folks secretly worry that Facebook becomes MySpace in 2014, never sharing their opinions publicly, of course.
Profit: In the real world, profit is an irrelevant and inconvenient concept focused on only after one achieves "scale", cashes in all available stock options, and pays off angel investors who footed the bill required to achieve "scale". On the Island of Misfit Toys, businesses routinely generate profit, often 10% of annual sales. Businesses re-invest this profit in growth opportunities, make capital investments, or (gasp) give the profit back to the employees responsible for generating the profit via 401k plans or annual bonuses. On the Island of Misfit Toys, employees spend their annual bonuses at Wal-Mart, in an effort to save money.
Sales: In the real world, you only generate sales in three ways ... you either "join the conversation" and monetize the magical world of social media, or you embrace the Apple or Android app stores, or you partner with Groupon. That's it. The experts will tell you that's all you need to be successful, that everything else is old-school folly, or worse, is "dead". On the Island of Misfit Toys, companies simply do what is right for the customer, regardless of channel. I know, that's a really old-fashioned concept; no wonder it has been banished to the Island of Misfit Toys.
As you can see, the Island of Misfit Toys is an odd place, a relic in a modern world where a friend mentions a Groupon promotion on her Facebook wall, causing a "fan" to tweet the message to "followers" who are highly "engaged" in "relevant" content, resulting in purchases that are easily measured via Google Analytics.
You cannot deny @marthastewart ... a global media empire and 2.0 million followers on Twitter, to boot.
But what about her community ... the folks who respond to @marthastewart or include #marthastewart in their tweets?
Let's use the magic of Hashtag Analytics to explore her community over a recent four week period of time!
7,168 users communicated via @marthastewart or #marthastewart.
5,419 users tweeted just one message.
1,045 users tweeted twice.
704 users tweeted 3+ times.
198 users (2.8%) were classified as "Mega Participants", with a tweet in the past week and tweets in 3+ of the past four weeks.
Only 24% of Mega Participants created content that was re-tweeted. Obviously, these folks are active because they love Martha Stewart, not because of the rewards of having their content shared.
Even among Mega Participants, the median number of tweets over this four week period of time is just five.
Let's classify Martha's active community via Digital Profiles. Remember, we have eight Digital Profiles that describe how participants use Twitter within a community. Here we go (the analysis window is pushed back one week so that I can analyze engagement rates).
Shaping The Conversation: 187 participants, 19.3% re-engagement rate.
May Be Interested: 92 participants, 9.8% re-engagement rate.
Making A Statement: 338 participants, 24.6% re-engagement rate.
Dipping A Toe: 1,359 participants, 5.1% re-engagement rate.
Joining The Conversation: 1,321 participants, 10.1% re-engagement rate.
One Topic Experts: 2,309 participants, 5.9% re-engagement rate.
Spreading The Word: 210 participants, 22.4% re-engagement rate.
The Ignored: 892 participants, 4.5% re-engagement rate.
Remember, yesterday, we analyzed the @nordstrom community. Here's what their data looked like:
Shaping The Conversation: 132 participants, 19.7% re-engagement rate.
May Be Interested: 103 participants, 3.9% re-engagement rate.
Making A Statement: 503 participants, 12.7% re-engagement rate.
Dipping A Toe: 3,443 participants, 2.2% re-engagement rate.
Joining The Conversation: 694 participants, 2.5% re-engagement rate.
One Topic Experts: 806 participants, 2.0% re-engagement rate.
Spreading The Word: 111 participants, 7.2% re-engagement rate.
The Ignored: 329 participants, 1.8% re-engagement rate.
Clearly, these are two very different communities, with two different user bases. Martha Stewart's community is about twice as likely to engage next week as is the Nordstrom community. This isn't good or bad, it's simply a different community. For Martha Stewart, those in "Shaping The Conversation", "Making A Statement", and "Spreading The Word" Digital Profiles are the most valuable, in terms of subsequent engagement.
Engagement rates (probability of tweeting next week using #marthastewart or @marthastewart) by weeks since last tweet look shockingly like classic e-commerce, retail, or catalog trends:
Recency = 1 Week: 14.8% Re-Engagement Rate.
Recency = 2 Weeks: 7.1% Re-Engagement Rate.
Recency = 3 Weeks: 5.4% Re-Engagement Rate.
Recency = 4 Weeks: 4.0% Re-Engagement Rate.
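If you want to build this recency table for your own community, here's a minimal sketch, assuming a tweet log with one row per user per week; the column names and sample data are hypothetical.

```python
import pandas as pd

# Weeks 1-4 are the analysis window; week 5 is the "next week" used to
# measure re-engagement.
tweets = pd.DataFrame({
    "user": ["a", "a", "b", "c", "c", "d", "a"],
    "week": [  1,   4,   2,   3,   4,   1,   5 ],
})

pre = tweets[tweets["week"] <= 4]
post_users = set(tweets.loc[tweets["week"] == 5, "user"])

recency = 5 - pre.groupby("user")["week"].max()          # weeks since last tweet
engaged = recency.index.isin(post_users).astype(float)   # tweeted in week 5?
summary = pd.DataFrame({"recency": recency, "engaged": engaged})
print(summary.groupby("recency")["engaged"].mean())      # re-engagement by recency
```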
Here's another great tidbit. Remember that early in our analysis series, we noted that those who were "loved" by the #blogchat community experienced far greater engagement rates than those who were not loved. We looked at those who had values of recency = 1 and weeks = 1 and tweets = 1 ... if these folks were simply acknowledged in some way for their single tweet, they were 10x more likely to engage in the future.
Within the Martha Stewart community, we observe a similar trend for recency = 1 / weeks = 1 / tweets = 1:
Those who were loved had a 14.8% re-engagement rate.
Those who were "not loved" had a 5.9% re-engagement rate.
Granted, these are small numbers for a small snapshot in time. But I see the same trends in every analysis I run, so there is something to this. If a community "loves" those who participate in the community, especially newbies, the community thrives.
Let's see if the same thing holds true for Mega Participants:
Those who were loved had a 66.7% re-engagement rate.
Those who were "not loved" had a 52.4% re-engagement rate.
A little love matters, even to Mega Participants!
Make This Actionable!
Ok, I'll make this information actionable! I created a model that predicts next week's engagement rate by participant. If you like math, then you'll enjoy the Logistic Regression equation ... otherwise, skip ahead!
Logit = -3.399 - 0.670*SQRT(Recency) + 1.046*(Weeks Participated In Past Four Weeks) + 0.235*(Average Tweets Per Week) - 0.148*(Shaping The Conversation) + 0.457*(May Be Interested) + 0.678*(Making A Statement) + 0.150*(Dipping A Toe) + 0.165*(Joining The Conversation) + 0.271*(One Topic Experts) + 0.426*(Spreading The Word) + 0.000*(The Ignored).
Probability = EXP(Logit) / (1 + EXP(Logit)).
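If you'd rather score participants in code, here's the equation above translated directly into Python; the Digital Profile flags are 1/0 indicators, so the function simply looks up the one coefficient that applies.

```python
import math

def engagement_probability(recency, weeks, avg_tweets, profile):
    """Probability of engaging next week, per the Logistic Regression above."""
    coef = {"Shaping The Conversation": -0.148, "May Be Interested": 0.457,
            "Making A Statement": 0.678, "Dipping A Toe": 0.150,
            "Joining The Conversation": 0.165, "One Topic Experts": 0.271,
            "Spreading The Word": 0.426, "The Ignored": 0.000}
    logit = (-3.399 - 0.670 * math.sqrt(recency) + 1.046 * weeks
             + 0.235 * avg_tweets + coef[profile])
    return math.exp(logit) / (1 + math.exp(logit))

# Example: recency = 1 week, active all four weeks, 3 tweets/week on average.
print(engagement_probability(1, 4, 3.0, "Making A Statement"))
```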
Now that we are past the geeky math, we can move forward!
It turns out that there are 163 of more than 7,000 participants who are forecast to have a 40% or greater chance of using @marthastewart or #marthastewart next week. That's a whopper of a percentage, don't you think?! These folks were hyper-active in the past month, and are likely to be active next week. They averaged 11.4 tweets in the past four weeks, vs. 1.37 for the remainder of the population. They were re-tweeted 1.06 times in the past four weeks after issuing a statement via @marthastewart or #marthastewart vs. 0.10 times re-tweeted for the rest of the audience. They re-tweet other comments 2.67 times vs. an average of 0.39 for everybody else.
In other words, these folks are influential!
If you are responsible for the Martha Stewart community, you use this process to identify these highly valuable community members. My model identified 163 folks who are likely to engage next week, and are likely to spread the message to others. Each week, I can compile a list of users possessing these characteristics. And if I am part of Martha Stewart's marketing team, this is the audience that I'm going to communicate to ... I'm going to give them insider information and I'm going to make them feel special for their unwavering kindness.
In essence, we use Hashtag Analytics to create a database of Twitter users who evangelize our message. It's classic Database Marketing, folks, applied to Twitter.
Let's look at an example of Hashtag Analytics in action.
Today, we'll focus on Nordstrom, a former employer of mine, a company that many admire for their focus on customer service.
I pulled data for five recent weeks. I used four weeks to segment users, and then I used one week to see if I could predict whether there are folks out there who simply cannot help but mention #nordstrom or Nordstrom, or re-tweet messages from @nordstrom! I identified 6,121 individuals who said something about #nordstrom/nordstrom/@nordstrom in my four week "pre" period.
85% only issued one tweet during the four week analysis period.
10% issued just two tweets during the four week analysis period.
In other words, this isn't a highly engaged audience. Recall that I created a segment called "Mega Participants". These are folks who tweeted in the last week, and tweeted in 3 or 4 of the past four weeks.
73 out of 6,121 participants were classified as "Mega Participants".
1.2% of the audience can be called "Mega Participants".
And as one might expect, Mega Participants are likely to "engage" next week:
41.1% of Mega Participants engaged the following week.
3.1% of all other participants engaged the following week.
3.5% of all participants engaged the following week.
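Here's a minimal sketch of the Mega Participant rule, assuming a tweet log with one row per user per week; the sample data are hypothetical.

```python
import pandas as pd

# Mega Participant = tweeted in the most recent week of the four-week
# window, AND active in 3+ of the four weeks.
tweets = pd.DataFrame({
    "user": ["a", "a", "a", "a", "b", "c", "c"],
    "week": [  1,   2,   3,   4,   4,   1,   2 ],
})

weeks_active = tweets.groupby("user")["week"].nunique()
tweeted_last_week = tweets.groupby("user")["week"].max() == 4
mega = (weeks_active >= 3) & tweeted_last_week
print(mega)  # a: True, b: False, c: False
```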
Recall that I created eight "Digital Profiles" to describe Twitter community behavior. Here's the distribution of the Nordstrom community during the four week "pre" period.
Shaping The Conversation: 132 participants, 19.7% re-engagement rate.
May Be Interested: 103 participants, 3.9% re-engagement rate.
Making A Statement: 503 participants, 12.7% re-engagement rate.
Dipping A Toe: 3,443 participants, 2.2% re-engagement rate.
Joining The Conversation: 694 participants, 2.5% re-engagement rate.
One Topic Experts: 806 participants, 2.0% re-engagement rate.
Spreading The Word: 111 participants, 7.2% re-engagement rate.
The Ignored: 329 participants, 1.8% re-engagement rate.
More than half of the audience is in the "Dipping A Toe" Digital Profile, with very low re-engagement rates in the next week.
What does a "Dipping A Toe" participant look like? Here's one example from @amandalustbuser:
John Mayer is playing in Nordstrom. This is a sign. I should buy everything.
Here's another example from @alyssafrazier:
Having a shoe shopping black pump dilemma @nordstrom.
Regardless, the participant is truly "Dipping A Toe"; the participant isn't terribly engaged. It shouldn't be a huge surprise that more than half of the audience exhibits this type of behavior, and isn't terribly likely to tweet again in the near future.
"May Be Interested" is a more engaged Digital Profile. Take a peek at a tweet that is somewhat representative of this audience, from a participant, and you'll see why.
@nordstrom always carry a mirror. I've been to several cocktail parties where women have lipstick on their teeth.
Those who are "Shaping The Conversation" are the most likely to be engaged next week. Take a peek at a tweet that is somewhat representative of this audience, from a participant:
I got the @nordstrom job! I start my training for the lingerie dept on Monday.
Yup, it shouldn't be a surprise that this person was "engaged"!
It can be fun to review individual participants by Digital Profile. Let's profile a few of the participants.
First, here's @nordstrom, the corporate presence. During this four week stretch:
25 tweets.
4 statements.
8 retweets.
13 amplifications.
0 conversations (responses).
13 links.
Re-tweeted 150 times by others.
Answered 215 times by others.
Digital Profile = Spreading The Word.
Contrast that with @nordstrombeauty:
19 tweets.
8 statements.
4 retweets.
4 amplifications.
3 conversations (responses).
10 links.
Re-tweeted 72 times by others.
Answered 4 times by others.
Digital Profile = Shaping The Conversation.
And here is @nordstrombvue, the store manager from the Bellevue, WA store.
18 tweets.
6 statements.
9 retweets.
1 amplification.
3 conversations (responses).
6 links.
Re-tweeted 45 times by others.
Answered 14 times by others.
Digital Profile = Joining The Conversation.
The store manager in Bellevue is more likely to re-tweet content from other folks. The Nordstrom Beauty twitter presence is more likely to be directive, to tell the audience what to think. The Nordstrom corporate presence is more likely to tell folks what's going on.
There are participants who are highly engaged. There's @daliamacphee, for instance, a participant who is actively selling her merchandise, merchandise offered in Nordstrom stores.
158 tweets.
142 statements.
1 retweet.
8 amplifications.
7 conversations (responses).
150 links.
Re-tweeted 12 times by others.
Answered 5 times by others.
Digital Profile = Making A Statement.
And there are folks who are in the top Digital Profile, called "Shaping The Conversation", like aka_kristin. She tweeted her audience every time she was in a Nordstrom store, and every time she was in a unique department at Nordstrom.
24 tweets.
22 statements.
0 retweets.
0 amplifications.
2 conversations (responses).
12 links.
Re-tweeted 0 times by others.
Answered 1 time by others.
Digital Profile = Shaping The Conversation.
How Does This Become Actionable?
By using Digital Profiles and by identifying Mega Participants, I can predict which participants are likely to be "engaged" next week. I simply maintain a database of all Twitter members engaged with @nordstrom, score each participant, and feed my predictions back to you, the marketer. You then tweet your message to your heart's content to the audience most likely to be engaged in the future.
In the case of Nordstrom, I identified more than 100 Twitter users who evangelize the brand. If I were at Nordstrom, I would communicate directly to this audience, as if they were part of my e-mail marketing list (to draw a parallel).
I realize this is big-company type work, and companies like Nordstrom are probably already compiling databases of Twitter evangelists, but it is worth sharing so that you can start thinking about how you apply Twitter to your Database Marketing initiatives.
I have a booklet for you, too, available in the next seven days, called "Hillstrom's Catalog Marketing PhD"! This is a 42-page Christmas Treat that shows you every step of the customer selection process I go through, from A to Z, in determining how many catalogs a customer should receive.
If you have a statistical analyst on-staff, you'll be able to replicate my process and immediately begin generating hundreds of thousands or millions of dollars of additional, incremental profit. What's not to like about that?
If you don't have the bandwidth to handle a project of that magnitude, then read the booklet, and hire me instead; I'll be able to help you earn an Honorary PhD, and you'll generate hundreds of thousands or millions of dollars of additional, incremental profit. What's not to like about that?
Now go ahead, and ask your favorite catalog vendor if they are willing to sell you their secrets for just $7.95 via print, or $2.99 via Kindle/Nook, and see what answer you get from them!
This booklet will likely be available in the next three to eight days, so be ready!!
When we focus on the success of a direct business, we like to think about things like channels and customers.
We almost never read about merchandise. What a shame.
Last week, we talked about "Class Of" reporting, analyzing items that were "born" during a certain year.
In this example, take a look at how new items perform in the year they were born, and then look at how those items perform one year later.
2005 New Items = $4,883 per item. 2006 Performance = $8,501 per item.
2006 New Items = $7,650 per item. 2007 Performance = $12,243 per item.
2007 New Items = $7,566 per item. 2008 Performance = $6,780 per item.
2008 New Items = $7,275 per item. 2009 Performance = $8,070 per item.
2009 New Items = $5,043 per item. 2010 Performance = $5,722 per item.
The relationship isn't perfect, obviously, but there is a 45% correlation in average performance, an R-squared of about 20%. The results early in the cycle are clearly influenced by a dramatically inflated bubble-based economy.
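You can reproduce that correlation directly from the "Class Of" table above: year-of-birth performance vs. performance one year later, for the 2005-2009 classes.

```python
import numpy as np

new_year = np.array([4883, 7650, 7566, 7275, 5043])   # $/item, year born
year_two = np.array([8501, 12243, 6780, 8070, 5722])  # $/item, one year later

r = np.corrcoef(new_year, year_two)[0, 1]
print(f"Correlation: {r:.2f}, R-squared: {r ** 2:.2f}")  # ~0.45 and ~0.20
```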
There are two really important pieces of this puzzle.
New items must perform at an acceptable level.
There must be enough high-performing new items to drive future productivity.
Your turn: Do you have the reporting in place to understand this important merchandise dynamic?
And interestingly, sales of Online Marketing Simulations are 100% ahead of forecast since the two new books were released. Finally, click-thrus to the B&N link are significantly greater than to the Amazon Kindle link, but conversions are 5x to 10x greater via Amazon than via B&N.
I sell three products to the same audience, and each product sells differently in each channel, with one product experiencing a significant sales increase when the product gets significantly more competition than it previously had.
If this happens to little old me, imagine what is happening across your customer base, across your channels, across your various product lines! Are you measuring it? Do you understand the dynamics?