Setting a baseline – Designing Marketing Experiments – Part 2 of 4

In Part Two of our series on designing and implementing a successful marketing experiment, we’re going to explain the importance of control groups and how to effectively use them in your own marketing experiments.
In our previous post, we discussed how to construct a well-defined hypothesis. In our hypothetical marketing department, we’ve decided to conduct an experiment to determine whether sending a discount email to our customers is an effective way to increase sales. After a few clarifications and refinements, the hypothesis we settled on was this:

Emailing customers a 20% discount increases the likelihood that they will make a purchase in the following week.

Imagine that we are eager to test our hypothesis and see our results. We decide to send emails to our entire customer base, and then observe open and click rates on the emails. After that, we perform a funnel analysis, and find that 50% of our customers opened the email, 10% clicked on the embedded link, and 2% made a purchase. We show our results to our skeptical boss, who looks at us and says, “And how can we tell that some of these people weren’t going to make purchases anyway, even without the discount email?” The answer is: we can’t.
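To make that funnel concrete, here is a minimal sketch of how the rates might be computed, assuming we have a per-customer table of email events (the column names and the toy data are invented for illustration, not taken from a real campaign):

```python
import pandas as pd

# Hypothetical event table: one row per customer who received the email.
# The columns "opened", "clicked", and "purchased" are illustrative names.
emails = pd.DataFrame({
    "customer_id": range(1, 10_001),
    "opened":    [True] * 5_000 + [False] * 5_000,
    "clicked":   [True] * 1_000 + [False] * 9_000,
    "purchased": [True] * 200   + [False] * 9_800,
})

funnel = {
    "open rate":       emails["opened"].mean(),
    "click rate":      emails["clicked"].mean(),
    "conversion rate": emails["purchased"].mean(),
}
for step, rate in funnel.items():
    print(f"{step}: {rate:.1%}")   # 50.0%, 10.0%, 2.0%
```

The numbers themselves are easy to compute; the problem, as our boss points out, is what they can and cannot tell us.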

The problem with a funnel analysis is that it implies a degree of causality that may not actually exist. To build an experiment that produces useful, meaningful results, we need a clear picture of what we’re comparing our results against. This is why we need a control group.

Let’s return to our hypothesis for a moment. Pay particular attention to the phrase: “increases the likelihood that [customers] will make a purchase.” What this means is that we expect a customer to be more likely to make a purchase if we send him a discount email than if we don’t. However, quantum physics aside, we can’t simultaneously send and not send a discount to the same customer. That means we need to find another way to quantify the change we’re looking for. There are a few good ways to go about doing this, and quite a few bad ways. We’ll begin with some of the more common experimental design fallacies we see.

One (bad) way that we might attempt to include some sort of control is by comparing repeat purchase rates during the two weeks following the email to the previous two weeks, when we did not send an email. Unfortunately, this ignores the inherent week-to-week variability in sales, which for many retailers tends to be quite high. As a result, we could incorrectly call our email campaign a success or a failure because of larger seasonal trends.
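To see why this is risky, here is a small simulated sketch. The weekly baseline and the size of the swings are invented purely for illustration, but they show how ordinary week-to-week noise can look like a campaign effect even when no email is sent at all:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate four weeks of purchase counts with no campaign running.
# A baseline of 2,000 purchases/week with a ~10% standard deviation is an
# assumption chosen only to illustrate natural week-to-week variability.
weekly_purchases = rng.normal(loc=2000, scale=200, size=4)

before = weekly_purchases[:2].sum()   # the two weeks "before the email"
after = weekly_purchases[2:].sum()    # the two weeks "after the email"

print(f"before: {before:.0f}, after: {after:.0f}")
print(f"apparent lift: {(after - before) / before:+.1%}")
# Even with no email, the "lift" can easily swing several percent either way.
```

A before/after comparison simply cannot distinguish that noise from a real effect of the email.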

Another (also bad) strategy would be to run our experiment on one subset of customers and compare the results to another subset. For example, we might send our discount email to all of our international customers and then compare their revenue to that of our US customers over the same period. While this may appear to control for week-to-week variability, it does not do so completely, since that variability is not consistent between countries.

What we need, then, is a way to control for natural variability over time and between groups. Our solution will require two steps. First, we’ll create two groups. One group, which we’ll designate our control group, receives the “status quo.” In our case, we’ll say that we’re not currently sending an email of any kind, which means our control group will receive no email. If, however, you were already sending out a weekly newsletter and wanted to test the effect of including a discount in the newsletter, then your control group would receive the regular newsletter while your experimental group would receive the newsletter with the discount.

Now that we’ve defined our control and experimental groups, we’re going to assign customers to each group at random. Random assignment is a powerful control against natural variability, because those effects show up equally in both the control and experimental groups. Any remaining difference between the groups can then be attributed directly to our treatment (the discount email). This type of study is referred to as a randomized controlled study.
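The assignment itself is only a few lines of code. Here is a minimal sketch, assuming we have a list of customer IDs to split evenly between the two groups (the group labels, the seed, and the even 50/50 split are our own illustrative choices):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=7)

# Hypothetical customer list; in practice this would come from your customer database.
customers = pd.DataFrame({"customer_id": range(1, 20_001)})

# Build an exactly even set of labels, then shuffle it so that each customer's
# group depends only on the random draw, never on anything we know about them.
labels = np.repeat(["control", "treatment"], len(customers) // 2)
customers["group"] = rng.permutation(labels)

print(customers["group"].value_counts())   # 10,000 customers in each group
```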

For our marketing experiment, let’s take 20,000 customers and divide them evenly between our control and experimental groups. We find that our control group has a 2% conversion rate while the experimental group has a 2.5% conversion rate. With randomized control groups we can quantify the difference between the two responses and figure out how likely we are to see similar results going forward.
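We’ll cover the statistics properly later in the series, but as a preview, here is a hedged sketch of one common way to quantify that difference: a two-proportion z-test. The choice of test and the use of statsmodels are ours; the counts simply restate the 2% and 2.5% conversion rates above.

```python
from statsmodels.stats.proportion import proportions_ztest

# 10,000 customers per group; the counts restate the 2.5% and 2.0% rates above.
conversions = [250, 200]          # treatment, control
group_sizes = [10_000, 10_000]

z_stat, p_value = proportions_ztest(conversions, group_sizes)
print(f"z = {z_stat:.2f}, p-value = {p_value:.4f}")
# A small p-value means a lift this large would be unlikely to appear by
# chance if the email truly had no effect.
```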

With our randomized control groups, we’re able to control effectively for things like natural variability in sales over time and between different groups. However, we’re going to need additional control groups to determine exactly what component of our discount email caused the increase in conversion rates. We’ll also need statistical tests to figure out if our results are likely to hold in the future. We’ll be addressing both of these issues in future blog posts.

Every customer has a story. Make the most of it.

Formulating A Hypothesis – Designing Marketing Experiments – Part 1 of 4

The job of any marketing department is to develop effective ways of reaching customers. To do that, however, you must first figure out who your customers are and what they like. You may have learned the standard tactics: upselling, cross-selling, targeted messages, discounts, loss leaders, and so on. What you haven’t necessarily learned is how best to apply these tactics to your brand and your customers. To identify and refine an effective marketing strategy, you have to find ways to test it. Controlled experiments let you determine which tactics work, measure how well a given tactic works, and then develop ways to improve it further.

The benefits of running controlled marketing experiments can be direct and tangible. This can help marketing departments stand out in companies where many people (the CEO included) have only a vague notion of what marketing has to do with the brand, much less how it contributes to the bottom line. By running controlled experiments, marketers can figure out what strategies work, measure their impact on profits, and deliver consistent results. Over the next few weeks, we’ll be discussing how to design a marketing experiment and make sense of the results. We’ll begin our series with how to create a well-defined hypothesis.

Imagine for a moment that we’re the marketing department at a large company. We’re developing a new marketing strategy with the goal of improving customer retention. One of our ideas is to send our customers an email, perhaps with a discount of some kind. Our boss, who is not the savviest of technology users, says he doesn’t think customers would respond to emails and instead wants to go with an expensive direct-mail advert. One way we might convince our boss to join the digital age is by designing an experiment to gauge the effect of email marketing on sales. At the heart of our experiment is our hypothesis. We can start with something simple, such as:

Email marketing increases sales.

Notice that our hypothesis takes the same form as the conclusion we are trying to prove.[1]

Our particular hypothesis describes a cause-and-effect relationship, as in, “Action A leads to Result B.” We could formulate our hypothesis to test other types of relationships, but for now we’ll stick with cause and effect.

At the moment, however, both our cause and our effect are only vaguely defined. A good hypothesis has to be specific enough to actually test, and it would take a massive number of experiments to determine whether all email marketing increases sales. Furthermore, “sales” is itself a rather difficult outcome to quantify. Right now we haven’t defined either a time frame or a target audience, which will make it difficult to measure how effective our email marketing has been.

In order to create a useful and manageable experiment, we need to narrow our focus. Rather than testing “email marketing,” let’s test something more concrete, such as “Emailing customers a 20% discount.” And rather than looking for an increase in sales, we’ll look to see if customers who received the discount made a purchase sometime during the following week. Our revised hypothesis might look something like:

Emailing customers a 20% discount increases the likelihood that they will make a purchase in the following week.

Now we have a well-formed hypothesis that is specific enough to test. Our next step will be to test this hypothesis and quantify how much more likely a customer is to make a purchase after receiving our discount email. In our next post, we will discuss how to set up the proper control groups and make these measurements.

Every customer has a story. Make the most of it.

Notes:

  1. Technically, we are trying to disprove our hypothesis, and after sufficient failure to do so, we accept it to be true. This subtlety, while interesting, is not especially relevant for our purposes.