Yesterday I discussed a paid search test (https://blog.minethatdata.com/2022/08/channels-arent-what-they-appear.html).
The test/post yielded questions from you, the loyal reader.
“Why would organic search revenue increase when paid search is turned off?”
“Why would customers spend more via email marketing when search is turned off?”
“Why would our vendor reporting overstate the importance of paid search?”
“Our attribution vendor shows us something very different and they are using AI and you are using an A/B test so their science is better than yours, correct?”
“Are you honestly telling me that in this case paid search is causing the company to lose money? Because if that is true, you have no idea what you are talking about.”
“You tested Boise and Salt Lake City - those cities don’t represent the America my customers live in, thereby invalidating the test, correct?”
“You are evaluating success via profit. ROAS is a better metric, don’t you think?”
“We ran this post by the Executive Team at an agency we respect and they said this post is utter gibberish.” (that is a comment, not a question).
All of these responses/questions avoid the point of the post. Not one question dealt with the reality that a controlled test produced results different from what typical reporting produces. When that happens, ask yourself instead what it means if the test results are right and you and your vendor partners are wrong.
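On the ROAS question specifically, here's a minimal sketch of why attributed ROAS and test-based incremental profit can tell opposite stories. Every number below is hypothetical, invented purely for illustration; nothing here comes from the actual test.

```python
# Hypothetical figures, invented for illustration only: a sketch of how a
# channel can report a healthy ROAS while a controlled holdout test shows
# the channel loses money.

ad_spend = 100_000.0            # paid search spend in the test markets
attributed_revenue = 400_000.0  # revenue the vendor credits to paid search
profit_margin = 0.30            # assumed gross margin on merchandise

# Vendor view: attributed ROAS looks great.
roas = attributed_revenue / ad_spend  # 4.0

# Test view: with paid search turned off, much of that revenue shows up
# anyway via organic search and email, so only a slice is truly incremental.
incremental_revenue = 120_000.0  # revenue that actually disappears when ads stop

# Incremental profit = margin on incremental revenue, minus the ad spend.
incremental_profit = incremental_revenue * profit_margin - ad_spend

print(f"Attributed ROAS: {roas:.1f}")            # Attributed ROAS: 4.0
print(f"Incremental profit: ${incremental_profit:,.0f}")  # Incremental profit: $-64,000
```

With these made-up numbers, a 4.0 ROAS and a $64,000 loss describe the same campaign; ROAS simply never subtracts the spend from incremental margin, which is the point the reader questions kept missing.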