August 29, 2021

\$750

About sixteen years ago we started a program where if a customer spent \$750 or more in a year the customer got to pre-select Anniversary Sale merchandise in our Nordstrom stores a week before the sale actually began.

We weren't allowed to create a holdout group.

We weren't allowed to perform a typical A/B test or any other form of experimental design.

We were expected to measure the incremental impact of the decision.

My team performed a reasonably simple analysis. They measured how much the \$750+ group spent in the prior year, and how much that group spent during the prior year's Anniversary Sale. They compared this to the same figures for the \$650 - \$749 group. The change in performance was "attributed" to the new Anniversary Sale program.

Here's what the analysis looked like:

Anniversary Group:

• 2005 Pre-Spend = \$1,500.
• 2005 Anniversary Spend = \$300.
• 2006 Pre-Spend = \$1,550.
• 2006 Anniversary Spend = \$380.
Lower-Spending Group:

• 2005 Pre-Spend = \$700.
• 2005 Anniversary Spend = \$150.
• 2006 Pre-Spend = \$725.
• 2006 Anniversary Spend = \$155.

We created an index for the Anniversary Group.

• (380/1550) / (300/1500) = 1.226.

We created an index for the Lower-Spending Group.

• (155/725) / (150/700) = 0.998.

We created a "lift index".

• (1.226 / 0.998) = 1.228.
In other words, the lift during Anniversary Sale among the \$750+ group was 22.8%.

We multiply \$380 by 0.228 and get \$86.64. The likely lift in performance was \$86.64 per customer.

Then we multiply that increase ... \$86.64 ... across, say, 500,000 participants ... and we get roughly \$43,000,000 of incremental sales during the sale.

These obviously aren't the actual numbers, but this is the process we went through.
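If you want to replicate the process, the index math above is easy to script. Here's a minimal sketch in Python using the illustrative numbers from this example (remember, they aren't the actual figures):

```python
# Illustrative spend figures from the example above (not actual data).
anniv = {"pre_2005": 1500, "sale_2005": 300, "pre_2006": 1550, "sale_2006": 380}
lower = {"pre_2005": 700, "sale_2005": 150, "pre_2006": 725, "sale_2006": 155}

def sale_ratio_index(g):
    # Year-over-year change in the Anniversary-to-pre-spend ratio.
    return (g["sale_2006"] / g["pre_2006"]) / (g["sale_2005"] / g["pre_2005"])

anniv_index = sale_ratio_index(anniv)   # roughly 1.226
lower_index = sale_ratio_index(lower)   # roughly 0.998
lift_index = anniv_index / lower_index  # roughly 1.228

# Apply the lift percentage to observed Anniversary spend per customer,
# then scale across an assumed 500,000 participants.
lift_per_customer = anniv["sale_2006"] * (lift_index - 1)  # roughly $87
total_lift = lift_per_customer * 500_000                   # roughly $43 million

print(round(lift_index, 3), round(lift_per_customer, 2), round(total_lift))
```

The small gap between this script's output and the \$86.64 in the text comes from rounding the indices to three decimals before multiplying, as the write-up above does.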

When you launch your own loyalty program, you can do something similar if you aren't allowed to execute a controlled experiment. Compare participants to the group of customers who don't quite qualify, create a handful of indices, and see what your analysis tells you.