## May 24, 2017

### Test Outliers

I've been recording my blood pressure readings. Let me show you the last seven systolic readings.
• 135
• 162
• 120
• 110
• 127
• 117
• 124
One number sticks out, right? It's that 162 number.

Here is the average including the 162 number.
• 128
Here is the average excluding the 162 number.
• 122
Which number is "right"?
• 128 or 122
If you have a credible reason for throwing out the 162 figure, then 122 is right. If you have a credible reason for keeping it, then 128 is right.
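The arithmetic above can be checked in a few lines. This is just a sketch using the seven readings from the post:

```python
# The seven systolic readings from the post.
readings = [135, 162, 120, 110, 127, 117, 124]

# Average including the 162 reading.
mean_with = sum(readings) / len(readings)

# Average excluding the 162 reading.
without = [r for r in readings if r != 162]
mean_without = sum(without) / len(without)

print(round(mean_with))     # 128
print(round(mean_without))  # 122
```

One outlier moves the average by six points, on a list of only seven numbers.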

The same logic applies to the tests you perform.

We all see our tests ruined by outliers. Here's a common one. Here are average order values for customers who purchased in a test.
• \$119.
• \$84.
• \$99.
• \$79.
• \$143.
• \$21,477.
Why in the name of Snedecor and Cochran would you include the \$21,477 order in your results?

Well, you'd keep it in there if 15% of all orders were \$21,477 or greater.

But if 0.1% of all orders are \$21,477 or greater? You adjust it down ... change it to \$150 or whatever the 99th percentile is for average order values.
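If you want to see what that adjustment looks like mechanically, here's a minimal sketch of capping order values at the 99th percentile (sometimes called winsorizing). The order data is made up; a simple nearest-rank percentile stands in for whatever your reporting tool uses:

```python
import math
import random

def percentile(values, pct):
    # Nearest-rank percentile: the value at position ceil(pct/100 * N)
    # in sorted order. A rough stand-in for fancier definitions.
    s = sorted(values)
    k = math.ceil(pct / 100 * len(s)) - 1
    return s[k]

def cap_outliers(values, pct=99):
    # Pull anything above the pct-th percentile down to that percentile.
    cut = percentile(values, pct)
    return [min(v, cut) for v in values]

# Hypothetical data: 990 ordinary orders plus 10 monster orders.
random.seed(1)
orders = [round(random.uniform(20, 150), 2) for _ in range(990)]
orders += [21477] * 10

capped = cap_outliers(orders, pct=99)
print(max(orders))   # 21477 before capping
print(max(capped))   # at most 150 after capping
```

The \$21,477 orders still count as orders; they just stop dragging the average around.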

I'm confident few of you are adjusting for outliers.

And then you wonder why your test results are all over the board?

I know, I know, you don't have the coding chops to remove outliers, and you don't want to invest a half-year learning how to code, so you want a rule-of-thumb that you can apply. Ok, try this one on for size. If you are concerned about large orders influencing your test results, analyze response-rate / conversion-rate. If response/conversion results are significantly different from spending results, you have an outlier problem. If you have an outlier problem?
• Measure the difference in response/conversion rate between test and control groups. Say it is 6%.
• Average your average order values between test/control groups.
• Apply the "average" average order value to both groups.
• This leaves you with a 6% difference in spend between the two groups.
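The steps above can be sketched in a few lines. The group numbers here are made up for illustration:

```python
# Hypothetical test results: conversion rates and average order values.
conv_test, conv_control = 0.053, 0.050   # a 6% relative difference
aov_test, aov_control = 132.0, 118.0     # AOVs distorted by an outlier, say

# Step 1: measure the conversion difference.
lift = conv_test / conv_control - 1
print(f"conversion lift: {lift:.1%}")

# Steps 2-3: average the two AOVs, apply the blended AOV to both groups.
aov_blended = (aov_test + aov_control) / 2

# Step 4: spend per visitor now differs only by the conversion lift.
spend_test = conv_test * aov_blended
spend_control = conv_control * aov_blended
print(f"spend lift: {spend_test / spend_control - 1:.1%}")
```

Because both groups get the same blended AOV, the spend difference collapses to exactly the 6% conversion difference, which is the point of the rule-of-thumb.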
Thoughts?