WETIKI, or the challenges of a growing team

Here at Custora, we’ve been fortunate to double in size since the beginning of the year. With growth come challenges – one of them is communication among different teams and individuals.
Dan, Dave, Martin, and Aubrey came up with a short video to promote team communication, knowledge sharing, and healthy eating. Here it is for your viewing pleasure (not recommended while eating).

 

 

P.S. – here’s how you too can join our growing, WETIKIing team.

How leading retailers implement customer segmentation [webinar replay]

Customer segmentation is one of the most powerful tools for retailers. But there are significant challenges marketers need to overcome before they can feel confident they’ve built out a solid segmentation program. The marketing teams that we work with often describe a balance they must strike across three fronts: customer demands, creative bandwidth, and merchandise needs.

[Image: Venn diagram of customer demands, creative bandwidth, and merchandise needs]

Creative teams are often stretched thin, and finding the resources to write and design the additional, high-quality emails needed to ramp up customer segmentation can be a struggle. Merchandise teams often make their own demands on marketers’ campaigns, while each customer has feelings of their own as to what they want to buy.

We’ve heard this challenge described as threading the needle, and recently hosted a webinar where our co-founder Corey Pierson talked through some of the ways leading retailers can pull off that marketing feat.

 

Customer demands
In theory, what the customer wants is the most powerful motivating force for marketers. As a marketer, you know that some customers are interested in certain product categories while others are drawn to different ones, and you want to send each segment the most relevant content. One common (and very effective) segmentation method is sorting by predicted lifetime value (or customer spend). Understanding who your better, best, and top 1% (platinum) customers are will help inform specific product recommendations.

[Image: average order value (AOV) segmentation chart]

See more on how leading teams are using Average Order Value (AOV) and predictive persona analysis to segment customers by their wants and personalize emails accordingly.
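For readers who like to see the mechanics, here is a minimal sketch (in Python, with invented customer data and arbitrary cut points) of how a predicted lifetime value score could be bucketed into value tiers like the ones described above:

```python
import pandas as pd

# Hypothetical input: one row per customer with a predicted lifetime value.
# Column names and values are illustrative, not from any specific system.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    "predicted_clv": [50, 820, 120, 300, 45, 1500, 210, 95, 60, 400],
})

# Rank customers by predicted CLV and cut into tiers. The cut points are
# arbitrary here: bottom 50% "good", next 40% "better", next 9% "best",
# top 1% "platinum".
pct_rank = customers["predicted_clv"].rank(pct=True)
customers["tier"] = pd.cut(
    pct_rank,
    bins=[0, 0.50, 0.90, 0.99, 1.0],
    labels=["good", "better", "best", "platinum"],
)

# Each tier can then receive its own campaign (e.g., early access for platinum).
print(customers.sort_values("predicted_clv", ascending=False))
```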

 

Creative
However, customer needs are not the only factor to take into account. Having enough great creative is often a big challenge when teams look to increase personalization. Where before they were sending one email, teams are now tasked with sending two or more versions to cater to specific customer demands.

[Image: modular email content example]

See how Backcountry uses modular content to decrease the creative lift required for dynamic email segmentation.
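As a loose illustration of the modular idea (the content blocks and segment names below are invented for the example, not Backcountry's actual setup), the creative team writes a handful of reusable blocks once and each segment's email is assembled from them rather than written from scratch:

```python
# Reusable content blocks written once by the creative team (illustrative copy).
HERO = {
    "ski":  "New backcountry skis just dropped.",
    "camp": "Gear up for your next overnight trip.",
}
OFFER = {
    "full_price": "Shop the new arrivals.",
    "discount":   "Take 20% off this week only.",
}
FOOTER = "See you out there."

def build_email(product_affinity: str, price_sensitivity: str) -> str:
    """Assemble one email variant from shared blocks for a given segment."""
    return "\n\n".join([HERO[product_affinity], OFFER[price_sensitivity], FOOTER])

# Four segment variants from only a few blocks of creative.
print(build_email("ski", "discount"))
print(build_email("camp", "full_price"))
```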

 

Merchandise needs
Finally, there are the merchandising needs. For a brand selling its own goods, it’s sometimes an internal merch team saying, “We really need to push a certain category right now.” For companies that sell a variety of brands, it’s contingent on what’s on clearance, or perhaps there is a co-op deal in place with one of those external brands.

Learn how one retailer was able to use predictive product affinity to uncover customers in their database who were likely to want a particular item that marketing needed to push.

 

Watch the replay and get the deck below
Corey goes into greater detail on the relative importance of email vs. other channels and how good A/B testing fundamentals are the backbone of great segmentation, followed by an in-depth question and answer session. You will also hear from Borderfree’s Sr. Director of Client Strategy and Operations, Mike Griffin, who discusses how retailers selling in multiple international markets can use segmentation to great effect.

 

Beyond Batch and Blast: Getting Started with Smart Email Marketing

Let’s say that you want to run an email marketing campaign. You know the importance of setting up an experiment, establishing a control group, and continuing with the holdout until the results have been validated.

But hold on a moment. How do you take that crucial first step towards actually running the marketing experiment — figuring out which customers to mail, when to mail them, and what message you want to reach them with?

For example, imagine that you believe you can stimulate additional repeat purchases by offering customers a 20% discount. One way to do it would be just to send an email to your entire customer base (minus the control, of course) with the offer, and see whether it results in a lift in repeat purchases.

We often hear this referred to as “batch and blast,” or, more recently, “spray and pray.” (This one time we heard “pow and chow,” but we couldn’t figure out what it actually meant.)

Anyway, here’s why an untargeted promotion can be risky:

It’s costly. It’s true that a 20% promotion might help you reconnect with customers who have faded away over time — and stimulate some purchases that they might never otherwise have made. The problem is that plenty of the customers receiving this promotion would have made purchases anyway, with or without the discount. Giving them 20% off is just eating into your margins. What you would really like is a sharper way of targeting those customers who are fading away or at risk — customers for whom any additional purchases will be incremental to what you would have gotten without the promotion.

It leaves money on the table. The flip side to sending a discount to a shopper who would have made a purchase anyway is missing the chance to send the most relevant promotion to a given customer. Your job is to connect with your customers with the right message at the right moment. It’s possible that all of your customers might be interested in a 20% discount regardless of their relationship history, purchase patterns, and current behavior. Possible…but unlikely. Ideally, you would want to figure out a way to email different customer segments with a message that is crafted to appeal specifically to them.


So how can we move towards a smarter approach to email marketing?

1) Tie email triggers to the customer lifecycle and your customers’ “temperature.”

Consider three customers: Jessica, who has bought jewelry from your website every month for the past five years; Vesper, who used to buy new shoes every half a year or so but hasn’t made a purchase in nine months; and Leanne, a new customer who made her first purchase of jeans last week. These customers don’t look too different at first glance. All three are female and have made purchases in the past year. But each is likely to be most responsive to a different kind of campaign.

For Leanne, a follow-up email at the 30-day mark — possibly with a discount — can help your brand remain top-of-mind and trigger a repeat purchase. Jessica, on the other hand, is an active customer who is “hot” and needs little additional prodding to buy; an email with a sneak peek at the new earring collection might be more meaningful (and cost-effective) for her. And Vesper is a customer who is steering off her normal purchase course — “cooling,” so to speak — and might need additional incentives to reconnect with your brand. A welcome-back message and special deal on shoes could help remind her why she loves you before she becomes truly “cold” (inactive and likely gone for good).

Tying email triggers to specific points in the customer lifecycle and aligning your email marketing efforts with your customers’ “temperatures” can help you serve up more relevant messages and offers.
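To make the “temperature” idea concrete, here is a small, hypothetical sketch that flags customers as hot, cooling, or cold by comparing the time since their last order to their own typical time between orders; the thresholds are illustrative, not a recommendation:

```python
def temperature(days_since_last_order: int, typical_days_between_orders: float) -> str:
    """Classify a customer relative to their own purchase cadence.

    The cutoffs below are illustrative; in practice they would be tuned,
    or replaced by a probabilistic model of repeat purchasing.
    """
    ratio = days_since_last_order / typical_days_between_orders
    if ratio <= 1.25:
        return "hot"       # buying on (or close to) their usual schedule
    elif ratio <= 2.5:
        return "cooling"   # overdue: a welcome-back offer may help
    else:
        return "cold"      # likely lapsed; heavier re-activation needed

# Jessica buys monthly and bought 10 days ago; Vesper bought roughly every
# 180 days but has been quiet for 270; Leanne made her first order 7 days ago.
print(temperature(10, 30))    # -> "hot"
print(temperature(270, 180))  # -> "cooling"
print(temperature(7, 30))     # -> "hot" (new customers usually get their own
                              #    lifecycle track, e.g., a 30-day follow-up)
```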

2) Sharpen your messaging and offers with smarter segmentation.
One of the foundations of advanced customer analytics is the premise that your customers are all different — so they shouldn’t be treated the same. If you know that customers who reach your site through the Google adword “carburetor” are fundamentally different from those who reach you through the adword “muffler” (different repeat-purchase likelihood, different profit per order, and ultimately different customer lifetime value), consider running separate email campaigns with different messages and offers for each segment. Doing so will help tie your marketing efforts to the real drivers of your company’s performance.
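As a simple illustration of that idea, the sketch below (hypothetical data and column names) groups customers by the keyword that acquired them and compares repeat rate and average spend across segments, which is the kind of summary that would justify running separate campaigns:

```python
import pandas as pd

# Hypothetical acquisition data: which search keyword brought each customer in,
# whether they ever made a second purchase, and their total spend to date.
df = pd.DataFrame({
    "acquisition_keyword": ["carburetor", "muffler", "carburetor", "muffler",
                            "carburetor", "muffler", "muffler", "carburetor"],
    "made_repeat_purchase": [1, 0, 1, 0, 0, 1, 0, 1],
    "total_spend": [420.0, 80.0, 310.0, 95.0, 150.0, 220.0, 60.0, 500.0],
})

segment_summary = df.groupby("acquisition_keyword").agg(
    customers=("made_repeat_purchase", "size"),
    repeat_rate=("made_repeat_purchase", "mean"),
    avg_spend=("total_spend", "mean"),
)
print(segment_summary)
# If the segments look materially different, each one gets its own campaign
# (different message, different offer) rather than one blast.
```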

3) Keep on refining.
Email marketing is not “one-and-done.” It’s an ongoing and iterative process; today’s exciting new idea is tomorrow’s status quo. A/B testing — the idea of holding a bake-off between two or more competing ideas to see which produces the best results — is sometimes called the “champion-challenger” model when a new idea is being evaluated against an existing favorite. So make sure that you have a robust pipeline of challengers ready to go up against the current champion for supremacy. Observing that a 15% discount leads to revenue lift over an email with no promotional offer? Why not try a dollar-denominated discount instead or a buy one, get one promotion? Why not experiment with a new subject line or new creative? Effective email marketing is about continuous, incremental improvement rather than putting the “right” option on auto-pilot.

 

Ultimately, the promise of email marketing lies in its ability to enable a more personal, individual relationship with the customer. Acknowledging that customers are all different — and linking marketing efforts to the stage of their relationship with your brand — can help ensure that you deliver the right message to the right customer at the right time.

 

Learn more about customer segmentation by taking our free online Customer Segmentation course on Custora U.

Evaluating Significance – Designing a Marketing Experiment Part 4 of 4

Welcome to the fourth and final installment of our series on designing and implementing a successful marketing experiment. We have already covered how to formulate a strong hypothesis, how to control for natural variation between groups, and how to draw valid conclusions from experimental results. Today, we are going to discuss how you can use some statistical tools to gauge how meaningful your results are. But what does it mean for a result to be meaningful?

A result is meaningful if it is likely to hold in the future and was not due to random chance. As you will remember, our hypothetical marketing department has been testing the following hypothesis:

Emailing customers a 20% discount increases the likelihood that they will make a purchase in the following week.

By now, we have designed and run our experiment using two emails, which we will call A and B. Email A is our company’s standard email with no discount, while B contains the 20% discount that we’re testing. Imagine our results are something like the following:

              Idea A    Idea B
Total Sent      1000      1000
Conversions      300       320
Let’s imagine each email was sent to 1000 people. After receiving Email A, 300 customers returned to our store, while 320 customers returned after receiving Email B. That means there is a difference of 2 percentage points between the two emails (30% conversion vs. 32% conversion).

However, response rates are subject to natural variation. We aren’t just interested in which email performed better during this single experiment; we are interested in which email will continue to perform better.

Our test illustrates how ambiguous results can be. Was Email B (which contained the 20% discount) really more persuasive than Email A? Or were its additional conversions a matter of luck that had nothing to do with the email itself? If we can’t answer this question, we can’t call our results meaningful, and thus can’t conclude that our 20% discount actually helps to drive return customers.

Click here to see how to do the experiment calculation by hand, or skip ahead and try out the A/B testing calculator.

                  Idea A    Idea B    Total
Did Not Convert      700       680     1380
Converted            300       320      620
Total               1000      1000     2000

Chi-squared statistic: 0.94 (not significant)


If our results are significant (the typical threshold is less than a 5% probability of observing a difference as large as we did due to random chance alone), then the chi-squared statistic will be greater than 3.84. In our case, 0.94 is much less than 3.84, meaning there is a real chance that the results we saw were due to natural variation rather than the presence (or absence) of a 20% discount in the emails.
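For readers who want to reproduce the calculation, here is a short sketch that builds the 2x2 contingency table from the results above and computes the chi-squared statistic by hand; scipy’s `chi2_contingency` is included as a cross-check and should give essentially the same value:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: converted / did not convert. Columns: Email A / Email B.
observed = np.array([
    [300, 320],   # converted
    [700, 680],   # did not convert
])

# Expected counts under the null hypothesis that both emails convert equally.
row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)
expected = row_totals * col_totals / observed.sum()

chi_squared = ((observed - expected) ** 2 / expected).sum()
print(round(chi_squared, 2))  # ~0.94, well below the 3.84 threshold for p < 0.05

# Cross-check (correction=False matches the plain chi-squared formula above).
stat, p_value, dof, _ = chi2_contingency(observed, correction=False)
print(round(stat, 2), round(p_value, 3))  # ~0.94, p ~ 0.33
```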

You can also play with the A/B testing calculator to get a sense of how big a sample size you need to achieve significance. Try out some of these examples:




You might look at these results and think that the solution is to repeat the same test with a much larger sample size. However, as I’ve written before, there are real problems with using significance testing to decide when to stop an experiment. This means we are going to need to determine in advance how big our groups will be. But what’s the best way to go about doing that?

To figure that out, we first need to decide how big an effect we want to be able to detect. We don’t need a lot of users to detect large, obvious effects. For example, we would only need about 120 users per group to detect the difference between a 10% conversion rate and a 20% conversion rate, whereas we would need about 2200 to detect a difference between 5% and 4.5%.
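A rough way to run that calculation yourself is a standard power analysis for comparing two proportions. The sketch below uses statsmodels; the exact numbers depend on the significance level and statistical power you assume (a 5% two-sided test and 80% power here), so they will not line up exactly with the figures quoted above:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

def users_per_group(p_baseline, p_variant, alpha=0.05, power=0.80):
    """Approximate sample size per group to detect p_baseline vs p_variant."""
    effect = proportion_effectsize(p_baseline, p_variant)
    return NormalIndPower().solve_power(effect_size=effect, alpha=alpha,
                                        power=power, alternative="two-sided")

# A large, obvious effect needs relatively few users per group...
print(round(users_per_group(0.10, 0.20)))   # on the order of a hundred
# ...while a subtle effect needs dramatically more.
print(round(users_per_group(0.05, 0.045)))
```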

Another way to think about all of this is to decide how precise we want our predictions to be. In our case, we are trying to estimate what the long-run average conversion rate is going to be for a given email; we can never be 100% certain what that rate is going to be, but we can be about 95% certain that it will fall within a certain range. The more we sample, the smaller that range is going to be.

[Image: confidence interval shrinking as sample size increases]

Notice how the confidence bands shrink in the plot above. They shrink pretty dramatically between sample sizes of 0 and 200, and then almost imperceptibly between 800 and 1,000. This is because the width of the confidence interval shrinks in proportion to one over the square root of the sample size. So to cut the size of the confidence bands in half, you need to quadruple the number of users in your test.
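The square-root relationship is easy to verify numerically. This short sketch, assuming an underlying conversion rate of about 30% and the usual normal-approximation interval, prints the half-width of a 95% confidence interval at a few sample sizes:

```python
import math

p = 0.30          # assumed underlying conversion rate
z = 1.96          # 95% confidence

for n in [200, 800, 1000, 3200]:
    half_width = z * math.sqrt(p * (1 - p) / n)
    print(f"n={n:>5}: ±{half_width:.3f}")

# Quadrupling n (200 -> 800, or 800 -> 3200) halves the half-width,
# because the interval shrinks in proportion to 1 / sqrt(n).
```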

This brings us to the end of our series. We hope it has given you some insight into just how much goes into developing a quality marketing experiment, along with guidance in case you want to conduct your own.

If you are interested in A/B testing your email marketing, check out our A/B testing interface in Custora.

 

Setting Up Control Groups – Designing a Marketing Experiment Part 3 of 4

Welcome to Part Three of our series on designing and implementing a successful marketing experiment. In our previous posts, we looked at a few strategies for designing an experiment with a well-formulated hypothesis and a way to control for natural variation between groups. Today, we are going to discuss how to identify the most salient effects of a treatment and draw valid, useful conclusions from experimental results.

Let us return to our hypothetical marketing department. As you will remember, our goal is to determine whether a 20% discount in an email is an effective way to get customers to return to our store. First, we formulated our hypothesis as a falsifiable statement which, if confirmed, also serves as our conclusion. In our case:

Emailing customers a 20% discount increases the likelihood that they will make a purchase in the following week.

Imagine that we send all of our customers a 20% discount and see that many of them return to our store. Thus, we conclude that a 20% discount is an effective way to get customers to return to our store and declare the experiment a success. Our boss congratulates us and we all take the rest of the day off.

Based on the success of our experiment, our company decides to run a similar deal with the same 20% discount, only this time we include the discount as part of a Facebook promotion. Much to our surprise, however, very few customers return to our store. Despite substantial investments of time and money, our promotion seems to have gone belly up. Our boss wants to know how this could happen but we are at a loss to explain why. Was something wrong with our experiment?

Actually, the problem was not with our experiment, but rather our conclusions. Let us examine our hypothesis again:

Emailing¹ customers a 20%³ discount² increases the likelihood that they will make a purchase in the following week.

Although our hypothesis seems pretty straightforward, if we look more closely we will see that our 20% discount email actually consists of three different variables rolled together: (1) it is an email, (2) it is a discount email, and (3) it is a 20% discount email. Our challenge, then, is to determine which of these variables (or what combination of them) actually brought our customers back to the store. In order to find our answer, we will need to test each of these variables independently.

We can easily parse the effects of our different variables by dividing our population into groups. In this case, we randomly assign the members of our base to one of four groups. Group I receives no email (readers who have been following our series will recognize this as our control group from Part Two), Group II receives an email with no offer, Group III receives an email with a 5% discount, and Group IV receives an email with the original 20% discount.
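As a minimal sketch of that assignment step (the customer IDs and group labels are purely illustrative), one might shuffle the customer base and split it evenly into the four groups:

```python
import random

# Hypothetical customer base of 40,000 IDs, split evenly into four groups.
customer_ids = list(range(40_000))
random.seed(42)            # fixed seed so the assignment is reproducible
random.shuffle(customer_ids)

groups = {
    "I_no_email":         customer_ids[0:10_000],
    "II_email_no_offer":  customer_ids[10_000:20_000],
    "III_discount_5":     customer_ids[20_000:30_000],
    "IV_discount_20":     customer_ids[30_000:40_000],
}

for name, members in groups.items():
    print(name, len(members))
# Random assignment is what lets us attribute differences in response rate
# to the treatment rather than to pre-existing differences between customers.
```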

After defining our groups, we can compare their responses to evaluate our hypothesis. However, we cannot compare our groups directly. Because we sent emails to only a sample of the population, we cannot say for certain how the entire population would have responded. Using the tools of statistical analysis, though, we can estimate a range for what each group’s response rate would have been, and then compare those estimates.

Suppose we have a sample of 40,000 users, divided evenly among our four groups. After sending each group the appropriate email (or not, in the case of Group I), we measure their responses.

We can imagine a few different plausible scenarios, for example:

 

Message Response Rate
No Email 1%
Email 5%
5% Discount 5%
20% Discount 5%

 

Here we see pretty unambiguously that customers who received an email of any kind were much more likely to return to the store than those who received no email. In this case, the magnitude of the discount (or even the presence of a discount) seems to play little or no role in increasing the response rate. Without the proper controls, however, we could have easily attributed the increase in customer returns to the discount.

Another plausible scenario could look something like this:

 

Message Response Rate
No Email 1%
Email 1%
5% Discount 3%
20% Discount 6%

 

Based on these responses we could conclude that the discounts, rather than a simple email, are what drove customers to return to our store. Even a modest discount increased the response rate somewhat, while a larger discount increased the response rate still further.

Using even these basic tools of statistical analysis, companies can extract more, and more reliable, information from their marketing experiments. This information in turn helps them reach their potential customers through more effective, targeted marketing. As we have seen, controlling for different effects through groups can be a powerful means of identifying the most salient effects in any marketing experiment. In our next post, we will discuss some statistical tools that can help us gauge the significance of the effects we have parsed here.