An introduction to A/B testing: A crucial tool to improve your B2B marketing

January 22, 2021

By Chuck Leddy

The days when B2B marketers made important decisions by relying on their “gut instincts,” their experience, rules of thumb, or hunches are long gone, replaced by a decision-making approach built on data and the methodical testing of options. That approach? A/B testing.

A/B testing is a commonly used practice for making decisions by testing hypotheses (i.e., what you think will work) and following where the data leads. A Salesforce blog post defines A/B testing as “a scientific approach of experimentation when one or more content factors in digital communication (web, email, social, etc.) is changed deliberately in order to observe the effects and outcomes for a predetermined period of time. Results are then analyzed, reviewed, and interpreted to make a final decision with the highest yielding results.”

Amazon may be the best-known user of A/B testing. The Seattle-based online retailer changes its website and marketing communications constantly based on continuous experimentation. As founder and CEO Jeff Bezos has said, “Our success is a function of how many experiments we do per year, per month, per week, per day.” Like Amazon, your B2B marketing can be informed by data and the use of A/B testing. Let’s explore how A/B testing works in practice.

The what of A/B testing

In order to do A/B testing, you’ll need a solid data management infrastructure in place. Once you can collect relevant data, you can start using it to run experiments and optimize your marketing efforts. As to what you should A/B test, Amazon is a good guide: the company continuously tests the key elements of its website, including color schemes, the size (and color) of various buttons (especially buy buttons), images, content types, and much, much more. One obvious place for any B2B marketing team to start A/B testing is with email subject lines, which can have a big impact on open rates. From there, you can move on to testing email headlines, sub-heads, color schemes, and beyond.

The how of A/B testing

At its most basic level, A/B testing does exactly what it says: it takes two different options, option A and option B, and tests them both under controlled conditions. You analyze the results of each test, and then pick a “winner.” For instance, when you conduct A/B testing on elements of a webpage, traffic is usually split between users who see the control (for example, the red buy button) and users who see the variation (the green buy button). During the A/B test, data is collected on what users do in each scenario (A and B), which allows marketers to analyze the results and then choose between the control (option A) and the variation (option B).

For the results of any A/B test to be valid, users who are shown the control and the variation (A and B) should be representative of the target set of users – for example, all users should be people who would typically be interested in visiting your website. You’ll also want to split users evenly between the control group and the variation group to make the comparison fair. The selection of users for A/B testing should (of course) be randomized, as random selection helps avoid any bias in the user groups.
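
To make the mechanics concrete, here is a minimal Python sketch of how a testing tool might handle that random-but-stable 50/50 assignment. It isn’t from the Salesforce post or any particular product; the function name, experiment name, and user IDs are all hypothetical.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "buy-button-color") -> str:
    """Bucket a user into 'control' or 'variation' for an experiment.

    Hashing the user ID together with the experiment name yields an
    effectively random, even 50/50 split that is also stable: the same
    user always sees the same version on every visit, and different
    experiments bucket users independently of one another.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "control" if int(digest, 16) % 2 == 0 else "variation"

# Hypothetical visitors being assigned as they arrive
for uid in ["user-101", "user-102", "user-103"]:
    print(uid, "->", assign_variant(uid))
```

As the next quote notes, commercial tools typically build this randomization in, so you rarely need to implement it yourself.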

As a Salesforce blog post explains, “[m]ost commercial software capable of running A/B testing in different marketing channels (including Marketing Cloud) normally has random selection functionality built-in so that marketers and non-technical people can execute tests easily.”

Defining the question is key

Defining the problem – in other words, developing the precise hypothesis you intend to test – is often the biggest challenge in A/B testing. Albert Einstein once said, “If I were given one hour to save the planet, I would spend 59 minutes defining the problem and one minute resolving it.” For example, if your landing page is failing to convert visitors at the rate you expected, the key question becomes which element(s) of the page are causing the problem. You might need to identify every element that could be creating friction for the customer. Moreover, the friction may have multiple causes, requiring multiple experiments.

Suppose that, after carefully reviewing the page analytics, you conclude the biggest “friction factor” is your call to action (CTA). In that case, you could test different wordings for your CTA, analyze the results, and make a change. Your marketing team might brainstorm a half dozen different CTAs (i.e., hypotheses), then carefully test each one. You’ll want to run one test at a time, not all six at once, so you can tell which change drove which result. Follow good scientific practice and let the data lead you.

Your A/B test plan

In its blog post “What is A/B Testing?”, Salesforce helpfully suggests that you include each of the following elements in your A/B testing plan:

  • The problem you’re trying to solve
  • A SMART (Specific, Measurable, Achievable, Relevant, and Time-based) goal
  • The hypothesis being tested
  • Primary success metrics (how the results will be measured)
  • Audience (who you’re testing, and how many)
  • Location (where the test will be conducted)
  • Lever (what changes between the control and the variation)
  • Duration (how long the test must run to reach its predetermined statistical significance)
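
That last item, duration, is worth unpacking: the smaller the lift you want to detect, the more visitors (and therefore the more days) the test needs. Here is a rough Python sketch of the standard two-proportion sample-size calculation; the baseline rate, target rate, and traffic figures are hypothetical, and commercial platforms typically compute this for you.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p_base: float, p_target: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate visitors needed per variant to detect a lift from
    p_base to p_target with a two-sided test at significance level
    alpha and the given statistical power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    n = (z_alpha + z_beta) ** 2 * variance / (p_base - p_target) ** 2
    return ceil(n)

# Hypothetical plan: 4% baseline conversion, hoping to reach 5%,
# with about 1,000 visitors per day split across both variants.
n = sample_size_per_variant(0.04, 0.05)
print(f"{n} visitors per variant, roughly {ceil(2 * n / 1000)} days")
```

With those hypothetical numbers, the test needs roughly 6,700 visitors per variant, or about two weeks of traffic. Stopping early because one variant “looks ahead” is a common way to reach the wrong conclusion.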

Analyzing test results

Once the results of the A/B test are in, it’s time for the team to analyze the data and try to understand what drove the results. If one particular wording of your call to action performed better than the others (e.g., “To learn more, click here” outperformed “Hey dude, hit us up here for more info!”), what made the winning option work better? Was it the brevity of the CTA, the tone, the order of the words, or something else? If your team has different ideas, you can keep testing to find the root causes of success.
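
Part of that analysis is checking whether the winner’s lead is real or just noise. As a simple illustration (the conversion counts below are invented), here is a Python sketch of the standard pooled two-proportion z-test, using only the standard library:

```python
from math import erf, sqrt

def two_proportion_p_value(conv_a: int, n_a: int,
                           conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion
    rates, via the pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Convert |z| to a two-sided p-value with the normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical results: CTA "A" converted 70 of 1,000 visitors,
# CTA "B" converted 45 of 1,000.
p = two_proportion_p_value(70, 1000, 45, 1000)
print(f"p-value: {p:.3f}")  # ~0.016, below 0.05, so unlikely to be chance
```

A p-value below a threshold like 0.05 suggests the difference is unlikely to be luck. It still doesn’t explain why the winner won, which is where the questions above come in.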

There may be times when the resulting numbers jump off the page, leaving the next step clear: do more of this. Other times, the outcomes will be less clear and require further analysis and experimentation. A/B testing is, by definition, an iterative process. Don’t forget: just because your customers prefer blue today doesn’t mean they won’t prefer yellow tomorrow. The optimization process, much like customer preferences, remains dynamic and ongoing.

Are you set up to conduct effective A/B testing in order to improve your B2B marketing efforts? Reach out to us for more information.
