Marketing Minds - Where Clicks Meet Creativity


Experimentation in Digital Marketing - Part 1: An Introduction to Experiments

Contents

Background

How We Experiment

Case Study

Next Steps


Background

Sales and marketing teams often rely on intuition when designing marketing campaigns. Experienced marketers have learnt from their past campaigns, but there is no way to determine the actual impact of one approach over another unless you measure and experiment.

When experimenting, we pit variations of our campaign against one another until data and statistics can show which variation wins. Some statistical methods even let us estimate the improvement the winning variation could deliver, whether that is the number of potential leads gained or lost, or the dollar value of the revenue at stake.

When running an experiment, we collect user behavioural data and use statistical models (see part 2) to prove or disprove a hypothesis. It is important to select a good, high impact hypothesis (see part 3), as these generally require less data to prove and produce more tangible results. The more variations we test, and the smaller the differences between them, the more data we need to prove or disprove our hypothesis and the longer the campaign has to run - poorly designed experiments may never reach completion.
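As a rough illustration of how the size of the expected lift drives the amount of data required, the sketch below runs a standard power calculation for a two-variation test. This is not our production tooling, and the baseline and target conversion rates are made up:

```python
# Illustrative power calculation: how many visitors per variation we would need
# to reliably detect a lift from a 4% to a 5% lead rate. Rates are hypothetical.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_rate = 0.04   # assumed lead rate under variation A
target_rate = 0.05     # lift we hope variation B delivers

effect_size = proportion_effectsize(target_rate, baseline_rate)
n_per_variation = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,          # 5% false-positive rate
    power=0.8,           # 80% chance of detecting the lift if it is real
    alternative="two-sided",
)
print(f"About {n_per_variation:,.0f} visitors needed per variation")
```

Halve the expected lift and the required sample size grows several times over, which is why a poorly chosen hypothesis can leave an experiment running indefinitely.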


How We Experiment at Marketing Minds

Most people and companies compare variations via an A/B test run directly on an ads platform, which has this feature built in. Ads platforms already let you run multiple variations of an ad and analyse the success of each one. This still leaves a lot to be desired, as the ads platform has no visibility over data it cannot track, such as cross-platform touch points (e.g. email) or offline conversions.

A customer journey is diverse and complicated. By not experimenting, or by only running and assessing an experiment at a single point in the journey, you are leaving learnings, customers and dollars on the table.

At Marketing Minds, we take this a step further. We ingest data from ads platforms, your website, communications and all other customer touch points to understand the end-to-end customer experience. This means we can experiment at any point in the customer journey and assess the impact of the experiment downstream.

A typical experiment run via an ads platform might measure clicks, leads generated or online sales from a single ad. With our data infrastructure, we can assess more complicated things, such as the impact of every touch point across platforms, in-person sales, post-sale behaviour and long customer journeys. This matters because a particular variation might appear to perform better on the ads platform, yet fail to move the needle further down the funnel or on other, more important metrics.
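As a simplified sketch of the idea (not our actual pipeline), the snippet below assumes the joined touch-point data has been flattened into one row per lead, with hypothetical file and column names, and compares each variation at several stages of the funnel rather than just at the click:

```python
# Simplified sketch: compare variations on downstream metrics, not just clicks.
# Assumes touch-point data has been joined into one row per lead; the file name
# and column names are hypothetical.
import pandas as pd

journeys = pd.read_csv("customer_journeys.csv")

funnel = journeys.groupby("ad_variation").agg(
    clicks=("clicked_ad", "sum"),
    leads=("became_lead", "sum"),
    contacted=("contacted_successfully", "sum"),
    signed_up=("purchased_membership", "sum"),
)
# The metric the ads platform never sees: how many leads become paying members
funnel["lead_to_signup_rate"] = funnel["signed_up"] / funnel["leads"]
print(funnel)
```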


Case Study

We were working on lead generation for a Sydney, Australia-based gym. We wanted to experiment with a few different types of promotions: some just mentioned a gym discount (A), while others specifically outlined the gym’s classes and services (B).

While we ran the experiment, the ads platforms told us that variation A was outperforming variation B in both clicks and leads generated. However, after analysing the downstream impact of each variation, we found that B resulted in more successful customer contact (by phone or visiting the gym in person), as well as many more gym membership signups. In fact, our Bayesian statistics model (see part 2) estimated a 31% increase in conversions had we run B instead of A.
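For readers curious how an estimate of that kind can be produced, the sketch below shows a simple Beta-Binomial comparison of two variations, of the sort covered in part 2. The signup and lead counts are hypothetical, not the gym’s actual data:

```python
# Illustrative Beta-Binomial comparison of two variations. The signup and lead
# counts are hypothetical, not the gym's actual data.
import numpy as np

rng = np.random.default_rng(0)

signups_a, leads_a = 40, 500   # variation A: discount-only promotion
signups_b, leads_b = 55, 500   # variation B: classes and services outlined

# Posterior over each variation's signup rate with a flat Beta(1, 1) prior
samples_a = rng.beta(1 + signups_a, 1 + leads_a - signups_a, size=100_000)
samples_b = rng.beta(1 + signups_b, 1 + leads_b - signups_b, size=100_000)

print(f"P(B beats A): {(samples_b > samples_a).mean():.1%}")
print(f"Expected relative lift of B over A: {(samples_b / samples_a - 1).mean():.1%}")
```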


Okay, so how do we run an effective experiment?

In part 2, we will look at how to measure and learn from an experiment.
In part 3, we will look at how to run an effective experiment.