November 09, 2020

Lots of Variability

Yesterday I showed you what happens when you sample 25% of customers from a full population with a known outcome ($14.42 per customer, average) ... you end up with noise.

That's what happens when you aren't dealing with the entire population.

What happens when you deal with an even smaller fraction of the original population? Look at the x-axis and compare the values in the histogram to the image above.

Uh oh.

Notice that the overall average is "similar" across scenarios: $14.32 when sampling 5% of customers, $14.39 when sampling 25% of customers, and $14.42 when using 100% of customers.

But the spread is huge!


Some of the outcomes are around $13.75 ... and you CANNOT know if that outcome is the real outcome or a blip due to sampling variability. One of the outcomes was > $15.50 and you cannot know if that outcome is the real outcome or a blip due to sampling variability.

Look at the standard deviation metric ... 0.412 ... so much higher than the 0.113 we saw when sampling 25% of the original population. Since roughly 95% of outcomes land within two standard deviations of the average, it means that your results will vary by +/- $0.82 instead of +/- $0.23.
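You can see this effect for yourself with a quick simulation. The sketch below is a hypothetical illustration, not the actual customer file behind the histograms: it invents a population of 10,000 customers whose spend averages about $14.42 (the figure from the post), then repeatedly draws 5% and 25% samples and measures how much the sample averages bounce around. The population size, spend distribution, and trial count are all assumptions; the point is the pattern, not the exact numbers.

```python
import random
import statistics

random.seed(42)

# Hypothetical population: 10,000 customers with average spend near $14.42.
# The $9 spread per customer is an invented assumption for illustration.
population = [random.gauss(14.42, 9.0) for _ in range(10_000)]

def spread_of_sample_means(fraction, trials=1_000):
    """Draw many samples of the given fraction of customers and return
    the standard deviation of the resulting sample averages."""
    n = int(len(population) * fraction)
    means = [statistics.mean(random.sample(population, n))
             for _ in range(trials)]
    return statistics.stdev(means)

sd_5 = spread_of_sample_means(0.05)   # sampling 5% of customers
sd_25 = spread_of_sample_means(0.25)  # sampling 25% of customers

print(f"spread of averages, 5% samples:  {sd_5:.3f}")
print(f"spread of averages, 25% samples: {sd_25:.3f}")
```

The 5% samples always produce a noticeably wider spread of averages than the 25% samples ... the same story the two histograms tell.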

The Virtual Chief Performance Officer has a job ... and that job is to STEER YOU AWAY from situations where you are measuring something with +/- $0.82 of variability ... steering you toward outcomes that are within +/- $0.23 of expectations.

Here's the problem we all face. We're supposed to TEST outcomes, aren't we? We're told to be "data-driven".

But if you are "data-driven" and you make one little mistake ... testing without enough customers to measure properly ... well, you provide your company with GARBAGE, don't you?

This sin happens repeatedly in business. Smart, data-centric folks make bad choices because they don't understand variability, creating chaos in a company and then defending the chaos.

The Virtual Chief Performance Officer steps the company away from these situations. We'll talk more about these situations tomorrow.
