Thoughts about media in Canada

The impact of puppy deaths on global warming

Did you know that each time a puppy dies, the planet’s temperature rises? You read correctly. It has been proven numerous times. See Figure 1.1.

Figure 1.1: The impact of puppies dying on global warming since 1950

As you can see, the causality is obvious. I even ran a correlation test. Correlation coefficient: 0.68 (!)

  • For more information about correlation coefficients, please refer to Wikipedia (fun bedtime read with your kids!)
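
As an aside, a correlation coefficient is trivial to compute, and any two series that merely trend in the same direction will produce an impressively high one. Here is a minimal sketch in Python (the numbers are invented purely for illustration, not real data):

```python
# Two upward-trending but causally unrelated series.
# All numbers are made up for this illustration.
temperature_anomaly = [0.10, 0.20, 0.25, 0.30, 0.45, 0.50, 0.60, 0.70]
puppy_deaths = [100, 130, 120, 160, 170, 200, 190, 230]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    std_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    std_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (std_x * std_y)

print(round(pearson(temperature_anomaly, puppy_deaths), 2))  # 0.96 -- high, yet meaningless
```

A coefficient of 0.96 from two hand-picked rising trends: the "causality" practically proves itself.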

Now what does this have to do with Digital Media, Analytics and Measurement? Everything! Of course… because it has also been proven that puppy fatalities are due to botched research methodology. Please refer to Figure 1.2 (correlation coefficient: -0.66!)

Figure 1.2: The impact of botched research methodology on puppy deaths per year


What’s the lesson here? Well, it’s a dual one.

  1. Our very bad use of methodology kills innocent puppies (this one is my dog, by the way)
  2. The death of cute puppies then leads to global warming, which will eventually eradicate Earth’s population, including ALL the poor, cute, innocent puppies!

This is terribly sad and atrocious. But there is hope!

Indeed, if we all start improving our methodology when conducting research, the incidence of puppy casualties will drop dramatically, which will in turn slow down global warming. With a bit of luck, by the time the Earth’s temperature reaches a critical point, we will have built a gigantic A/C and saved the world (and the puppies).

How to do it

As I said, it’s all a matter of improving research methodology. “But what should we work on first?” you will ask. Excellent question! I have conducted a survey of 1,003 of my own opinions and came up with the most frequent methodology mistakes:

Figure 1.3: Survey of the most common methodology mistakes

So, there you have it, “Not controlling all variables” is considered the most common methodology mistake. This means that if we can improve on that level, we might save the world.

What does “controlling your variables” mean?

While this may not be the most interesting subject, I’ll still try to make it somewhat enjoyable for you to read. To help me in this venture, here’s a picture of a cute puppy.

Strictly speaking, controlling your variables means having two identical environments/situations, running a test in one but not the other, and then seeing whether the outcomes differ.

Is this feasible?

No. It is not. Not until “time rewind” is invented. For instance, if you run an online video campaign for ShamWow in one market and not in another, you may see an increase in brand awareness in the first. This doesn’t necessarily mean the lift is attributable to online video. It may be because someone in that market saved the city from a flood because he had a lot of ShamWow lying around in his backyard. This made the local news and everyone heard about the brand that way. To truly see whether online video is the main factor, you would have to “time rewind,” not run the online video campaign, and see if the results differ.

  • Note that the term “time rewind” is used instead of “time travel” for a reason. If I were to run a test with online video and then travel back in time to change the past, my presence in the past could create a butterfly effect. That effect could itself be the cause of the ShamWow awareness change, hence the need for time rewind.

If controlling all variables is not possible, what do we do?

The goal here is to improve the methodology, not to fix it. To do so, we need to:

  1. Try to keep all variables as stable & identical as possible
  2. Monitor environments / situations during the campaign to see if external variables are changing
  3. Ideally, don’t revise your campaign 57 times while it runs
  4. Do not draw erroneous conclusions (“X influenced Y”) when there is a strong chance that the influence is due to another factor
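
To see why step 2 matters, here is a toy simulation of the ShamWow scenario (all numbers are invented): an unmonitored external event in the test market inflates the measured lift far beyond the campaign’s true effect.

```python
import random

random.seed(42)  # deterministic toy data

def simulate_market(n, baseline, campaign_lift=0.0, external_shock=0.0):
    """Share of n surveyed people aware of the brand (Bernoulli draws)."""
    p = baseline + campaign_lift + external_shock
    return sum(random.random() < p for _ in range(n)) / n

# Hypothetical numbers: 30% baseline awareness, the campaign truly adds
# 3 points, but the local ShamWow-flood news adds 8 points in the test market.
control = simulate_market(10_000, 0.30)
test = simulate_market(10_000, 0.30, campaign_lift=0.03, external_shock=0.08)

print(f"measured lift: {test - control:.3f}")  # roughly 0.11 -- nearly 4x the true 0.03
```

Without monitoring the environments, the 0.03 campaign effect and the 0.08 news effect arrive bundled together, and the bundle gets credited to the campaign.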

A quick, non-media-related example

As you may or may not have noticed, I’ve been cleverly fooling your mind since the beginning of this article! Let’s assume the figures above are true (some genius has probably conducted research on these subjects anyway). I fooled you by completely omitting external variables and making flawed observations and recommendations. In other words:

  • Is the Earth’s temperature rising? Yes.
  • Are more puppies dying every day? Yes.
  • Is research methodology an art that is more neglected than ever? Yes.
  • Does this mean one is causing the other? Of course not!

What I did was plunk three completely unrelated yet similar-looking trends next to one another and disregard the rest of the world. Seeing the trends holding hands like this may have fooled you into thinking they made total sense, that they were “related”, but they were not.

  • Increasing research methodology flaws may be due to an underfunding of universities…
  • Global warming may be due to people eating more and more beans…
  • And puppies may be dying more because people are becoming insensitive due to a very hedonistic and individualist culture, therefore abandoning their pets that starve to death…
  • This doesn’t mean the trends are related!


Media-related examples

Here are a few examples of media-related studies that did a poor job of controlling their variables:

  • A/B testing a landing page, but using two different vendors, each sending traffic to a different iteration of the landing page
  • Assuming the 2011 media mix was more efficient than the 2010 one because it generated more sales, without taking into consideration that your client’s biggest competitor went out of stock for 8 months in 2011
  • Having 10,000 TV GRPs + 2 newspaper insertions in market A, generating $100 in sales; having 100 TV GRPs in market B, generating $1 in sales; then assuming newspapers were the main sales driver because “there were no newspaper insertions in the market that performed poorly”
  • Setting up a test campaign to measure the ROI of behavioral targeting; changing the campaign to search retargeting, then to contextual targeting, then to ROS; changing vendors; adding online video, cutting it, redeploying it, cutting it again; going into RTB; changing the landing page; cancelling the whole campaign; then calculating the overall campaign ROI at 0.8 and stating, “Behavioral targeting delivers an ROI of 0.8 and is therefore not recommended…”

The list could go on and on.
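
The newspaper example above can be checked with grade-school arithmetic. Using those hypothetical figures, the data fit “TV alone drove sales” just as well as the newspaper story:

```python
# Hypothetical figures from the two-market example: the markets differ
# in BOTH TV weight and newspaper presence, so the tactics are confounded.
market_a = {"tv_grps": 10_000, "newspaper_insertions": 2, "sales": 100}
market_b = {"tv_grps": 100, "newspaper_insertions": 0, "sales": 1}

# Sales per TV GRP are identical in both markets...
print(market_a["sales"] / market_a["tv_grps"])  # 0.01
print(market_b["sales"] / market_b["tv_grps"])  # 0.01
# ...so the two newspaper insertions explain nothing by themselves:
# a 100x difference in TV weight accounts for the entire sales gap.
```

When two variables change between markets at the same time, the data cannot tell you which one did the work.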


Controlling your variables is a generic term I use that means: Think first, act later.


  • Before the campaign: How should I set my research test to avoid problems?
  • During the campaign: If I change XYZ during the campaign, will it modify end results from a research perspective?
  • After the campaign: I have noticed a trend that I think is due to ABC, but could this actually be attributable to “something else”?
  • All the time: think of the puppies!

