You probably don't want to read three-hundred-plus pages, so if that's you, Pablo Torre has a podcast interview with the author that is fabulous (click here). And if an hour of your time is too big a commitment, here are a couple of minutes (click here).
But if you want all of the details at a nominal cost, buy the book.
Last week a long-time friend sent me one of my posts from 2009. He said, "It is incredible to see how much has not changed since 2009." Well, one of the reasons things don't change is that the way we keep score has not changed.
In email marketing, it is open rates, click-through rates, and occasionally conversion rates. This is how most email marketers keep score. If I suggest that the email marketer keep score via profit generated, I'm speaking a bizarre language that does not fit within their scoring/ranking system. Once the scoring system is set, it is incredibly difficult to make changes.
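To make the contrast concrete, here is a minimal sketch of the two scorecards ranking the same campaigns ... every campaign name, rate, margin, and dollar figure below is invented for illustration:

```python
# Hypothetical illustration: ranking the same email campaigns two ways.
# All campaign names and figures are invented for the example.
campaigns = [
    # (name, open_rate, revenue, cost)
    ("Campaign A", 0.28, 18_000, 2_000),
    ("Campaign B", 0.15, 31_000, 2_000),
    ("Campaign C", 0.22, 12_000, 2_000),
]

GROSS_MARGIN = 0.40  # assumed 40% margin on merchandise

def profit(revenue, cost):
    return revenue * GROSS_MARGIN - cost

print("Ranked by open rate:")
for name, open_rate, revenue, cost in sorted(campaigns, key=lambda c: -c[1]):
    print(f"  {name}: {open_rate:.0%}")

print("Ranked by profit:")
for name, open_rate, revenue, cost in sorted(campaigns, key=lambda c: -profit(c[2], c[3])):
    print(f"  {name}: ${profit(revenue, cost):,.0f}")
```

The campaign with the best open rate isn't the campaign generating the most profit ... the two scoring systems rank identical work differently.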
In search or paid social, it is ROAS ... return on ad spend. Amazingly, the digital marketing folks figured out how to create something seemingly new albeit no different from the past ... they took the old-school "ad-to-sales ratio" and inverted it to get ROAS.
- ROAS = 1 / (ad-to-sales ratio).
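A quick worked example of that identity, using invented numbers:

```python
# Invented numbers: $10,000 of ad spend driving $50,000 of sales.
ad_spend, sales = 10_000, 50_000

ad_to_sales_ratio = ad_spend / sales   # 0.20 ... the old-school metric
roas = sales / ad_spend                # 5.0 ... the modern metric

assert roas == 1 / ad_to_sales_ratio   # same information, just inverted
```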
Once ROAS became entrenched as the scoring system for digital marketing, you were crazy to suggest that the marketer rank activities by profitability. You have one simplified metric that works across industries and across the professionals who work within those industries.
In old-school catalog marketing, the ranking score was "dollars per book". Two generations of catalog marketers used either "dollars per book" or a version of "dollars per book" fueled by matchback analytics. Regardless, there was a way to keep score, to rank segments/customers, and if you ever suggested anything different (e.g., applying the organic percentage to each segment) you were viewed as being insane. Ask any catalog agency, vendor, or paper-centric professional if a catalog brand should work with me and you'll get a hearty "noooooooo" from the individual/organization. You'll get a "no" because I use a different scoring mechanism than they use.
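For readers who haven't lived through that debate, here is a sketch of the gap between the two scores, assuming "dollars per book" means demand divided by catalogs mailed ... the segment figures and the organic percentage are invented:

```python
# Hypothetical segment ... all figures invented for illustration.
catalogs_mailed = 10_000
demand = 35_000      # dollars attributed to the mailing (e.g., via matchback)
organic_pct = 0.50   # assumed share of demand that happens without the catalog

dollars_per_book = demand / catalogs_mailed                          # $3.50 ... the classic score
incremental_per_book = demand * (1 - organic_pct) / catalogs_mailed  # $1.75 ... the heretical score
```

Under the classic score the segment looks twice as productive as it actually is, which is why the two camps reach opposite mail/no-mail decisions on the same segment.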
I had the unfortunate experience of rating pickleball players at my club of 1,800+ members. I computerized the process ... your rating (2.0 - 5.0) was based on whether you won matches. If you won a match against somebody you should beat, your rating increased from 3.50 to 3.51. If you won a match against somebody you should never beat, your rating increased from 3.50 to 3.59. The same dynamic happened in reverse if you lost. People HATED the system. HATED IT!!! I essentially changed the scoring/ranking process from a human saying "you're good, the other player isn't good" to a scoring/ranking process based on winning. Shouldn't the better player win more often? That was a fact that many players simply couldn't accept. They wanted a different ranking system.
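The update rule I described behaves like an Elo-style system. Here is a minimal sketch ... the K-factor and the scaling constant are my assumptions for illustration, chosen so the examples above roughly hold, not the exact parameters the club system used:

```python
# Minimal Elo-style rating update, mirroring the examples in the text.
K = 0.10      # maximum rating movement per match (assumption)
SCALE = 0.50  # rating gap at which the favorite wins ~91% of the time (assumption)

def expected_win(rating, opponent):
    """Probability the player beats the opponent, logistic in the rating gap."""
    return 1 / (1 + 10 ** ((opponent - rating) / SCALE))

def update(rating, opponent, won):
    """Move the rating toward the result; upsets move it further."""
    return rating + K * ((1.0 if won else 0.0) - expected_win(rating, opponent))

print(round(update(3.50, 3.00, won=True), 2))  # beat someone you should beat -> 3.51
print(round(update(3.50, 4.50, won=True), 2))  # beat someone you should never beat -> ~3.60
```

The heresy wasn't in the math ... it was that the score moved because of wins and losses rather than because of a human's opinion.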
Our scoring/ranking systems influence our flexibility with change. When business conditions change, do we change with business conditions, or do we try to fit the change within our scoring/ranking system?