November 21, 2024

Upsets

On Saturday night, long after most of you went to bed, New Mexico scored what would become the game-winning touchdown with twenty-one seconds left against ranked Washington State. New Mexico struggles to beat teams like Wyoming; they're not supposed to beat the former Pac-12 team from Pullman. And yet, they played a gritty game and came from behind in the second half. Games like this are why you play the games.

Aside: If you ever get the chance, take US-195 from Pullman to Lewiston/Clarkston. There's a view of the Snake River from high above the cities that is one of the neatest views you'll ever see.

The best way to describe an upset is as a "result that was not supposed to happen", or one that was "unlikely to happen".

Upsets happen in business all the time. You don't test putting socks on the home page; you just do it on May 18, it "works", and from there it becomes a "best practice". Six months later, you wonder why conversion rates are down. Maybe it is time to fire the marketing team ... they're generating bad traffic.

Even in a testing environment, upsets happen all the time. They happen for two reasons (the first reason is sketched in code right after this list).

  1. Random variability.
  2. A sample size that is too small.
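
To make reason #1 concrete, here's a minimal sketch in Python, using made-up numbers (a 10% buyer rate and a $100 average order are my assumptions, chosen purely for illustration). Two groups of 1,000 customers receive the identical treatment, yet the observed averages differ on every trial.

```python
import random

random.seed(42)

# Hypothetical numbers: 10% of customers buy, buyers spend $100 on average.
def simulate_group(n_customers):
    return [random.expovariate(1 / 100) if random.random() < 0.10 else 0.0
            for _ in range(n_customers)]

for trial in range(1, 6):
    group_a = simulate_group(1000)  # identical treatment ...
    group_b = simulate_group(1000)  # ... in both groups
    mean_a = sum(group_a) / len(group_a)
    mean_b = sum(group_b) / len(group_b)
    print(f"Trial {trial}: A = ${mean_a:5.2f}, B = ${mean_b:5.2f}, "
          f"difference = ${mean_a - mean_b:+5.2f}")
```

Neither group is "better", yet one of them "wins" every single trial. That's random variability.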

My favorite sample size error happened a while back. The analyst would take a segment of maybe 8,000 customers, split it into two groups (7,000 mailed, 1,000 not mailed), then measure results (mailed group = $20.00 average, not-mailed group = $18.40 average). The analyst would run a statistical significance test, realize the results were not statistically significant, then say "we shouldn't mail this segment, because the not-mailed group performed the same as the mailed group, and therefore we simply wasted $0.60 sending print to this customer."

Had the analyst split the segment 4,000 mailed and 4,000 not-mailed, the same difference would have been statistically significant, and the analyst would have recommended mailing the audience. The analyst made a mistake, an apparent "upset" happened, and the brand lost money because the analyst acted on the upset (the lack of statistical significance) instead of crafting a credible experiment.
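
Here's a sketch of why the split matters, via a simple two-sample z-test. The $20.00 and $18.40 averages come from the example above; the $30 per-customer standard deviation is my assumption (customer spend is wildly variable), chosen only to illustrate the mechanic.

```python
import math

def z_test(mean_a, mean_b, sd, n_a, n_b):
    """Two-sample z-test on group means, assuming a common standard deviation."""
    se = sd * math.sqrt(1 / n_a + 1 / n_b)
    z = (mean_a - mean_b) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p

MEAN_MAILED, MEAN_HOLDOUT = 20.00, 18.40   # averages from the example above
SD = 30.0                                  # assumed per-customer std. deviation

for n_mail, n_hold in [(7000, 1000), (4000, 4000)]:
    z, p = z_test(MEAN_MAILED, MEAN_HOLDOUT, SD, n_mail, n_hold)
    verdict = "significant" if p < 0.05 else "NOT significant"
    print(f"{n_mail:,} / {n_hold:,} split: z = {z:.2f}, p = {p:.3f} -> {verdict}")
```

Under these assumptions, the 7,000 / 1,000 split is not significant (p around 0.11) while the 4,000 / 4,000 split is (p around 0.02). The same $1.60 difference passes with the balanced split because the standard error shrinks: sqrt(1/n_a + 1/n_b) is smallest when the two groups are equal in size.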

Worse, the mistake had "cascading consequences". Because control groups had too few customers, random outcomes happened constantly, and each of a couple-hundred mailing segments was repeatedly flagged as "not profitable" and dropped from consideration. Had the analyst been given another three years to pursue this approach, the entire department would have been fired, because nobody would have been deemed "mailable" anymore.
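
Here's a sketch of that cascade, again under assumed numbers: two hundred segments, each with a genuinely profitable $1.60 lift from mail, each evaluated with a 7,000 / 1,000 split and the same assumed $30 standard deviation as above.

```python
import math
import random

random.seed(7)

SEGMENTS = 200          # "a couple-hundred mailing segments"
TRUE_LIFT = 1.60        # assume every segment genuinely benefits from mail
SD = 30.0               # assumed per-customer standard deviation
N_MAIL, N_HOLD = 7000, 1000

se = SD * math.sqrt(1 / N_MAIL + 1 / N_HOLD)
flagged = 0
for _ in range(SEGMENTS):
    observed_lift = random.gauss(TRUE_LIFT, se)   # true lift + sampling noise
    if abs(observed_lift / se) < 1.96:            # "not significant" ...
        flagged += 1                              # ... segment stops being mailed

print(f"{flagged} of {SEGMENTS} profitable segments flagged 'not profitable'")
```

With these assumptions, roughly two-thirds of the genuinely profitable segments fail the significance test, and every flag removes a segment from the mail plan. Run that process long enough and nobody is "mailable" anymore.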

Upsets yield all sorts of unusual decision-making down the road, all in response to the "upset".

How many unusual decisions have you, or your Management Team, made in response to an "upset" ... a business result that wasn't supposed to happen?

