Testing

Why Conduct Tests?

Tests help content authors and website managers optimize communications content and the channel experience for users. This can lead to more conversions on a goal, such as opening an email, reading a story or clicking the Apply button on a page.

Test results can inform strategy and content changes. For example, changing the color of a call-to-action button may seem like a small change, but small changes can add up to a bigger impact when taken at scale.

Units can, and should, consider testing all of their channels. Email, social media and webpage testing are all fair game! Read below for some guidance on designing and conducting communications-related tests.

Experimentation Framework

Remember the scientific method taught in middle school science class? Well, it’s a great method for designing tests on communications channels. Following this framework helps ensure all elements of the test have been considered and reduces the risk of error. Here’s a refresher of the scientific method applied to communications work.

First, define the question to be answered. Testing for the sake of testing is a waste of time. Make sure the test is designed to provide information that can be acted on by first determining what question it should address.

Example questions:

  • Will the story read completion rate increase if a summary of the story is added at the top of the page?
  • Will the email open rate increase if it is sent on a Monday versus a Friday?
  • Will more email recipients click through to read a story if it includes a text snippet versus only a headline?
  • Will informing visitors of the time commitment required to read a story lead to higher story read completion rates?

Next, gather information and resources. What is known about the users? What data points are available to work with?

Examples:

  • What is the current story read completion rate?
  • What is the current email open rate?
  • Is email data available by day of the week?
  • What metrics are available for email and web?
  • Can story read time be calculated? Is there an industry standard?

Form an explanatory hypothesis. Take a stance and be specific about what is projected to happen. It’s fine if the hypothesis turns out to be wrong — the unit will have learned something! The objective is to choose a scenario that the test can clearly support or refute.

Example hypotheses:

  • Adding a bulleted summary to the top of articles that are 1,000 words or longer will increase story read completion rate.
  • Sending the email newsletter on Monday (instead of Friday) will lead to a decrease in open rate. We’ll be sure to control for holidays and avoid running the test over Memorial Day or Labor Day.
  • Adding more text that describes a story, instead of just providing a headline and image, will reduce the click-through rate on email newsletters.
  • Adding an indicator of the story read time on a webpage will increase the story read completion rate.

Set up and run the test. When starting out, it’s best to test one variable at a time and control for any extra variables. Testing for multiple variables (i.e., multivariate testing) can get very complicated and is most suitable with dedicated tools. Many email programs and web content management systems provide A/B testing tools. 
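As an illustration of testing one variable at a time, the sketch below randomly splits an email recipient list into two equal groups so that each variant reaches a comparable audience. The function name and addresses are illustrative examples, not part of any specific email tool.

```python
import random

def assign_variants(recipients, seed=42):
    """Randomly split a recipient list into two equal-sized groups,
    A and B, so each variant is tested on a comparable audience."""
    rng = random.Random(seed)  # fixed seed makes the split reproducible
    shuffled = recipients[:]
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Hypothetical recipient list of 1,000 addresses
recipients = [f"user{i}@example.edu" for i in range(1000)]
group_a, group_b = assign_variants(recipients)
```

Randomizing the split (rather than, say, sorting alphabetically) helps ensure that differences between the groups come from the variant being tested, not from how the groups were formed.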

Analyze the data. Are there trends in the data? Can the hypothesis be supported or refuted? Did the results reach statistical significance? (See the Tools and Resources section below for more information on significance calculations.) Draw conclusions that can serve as a starting point for a new hypothesis.

Document results so they can be referred to in the future. Be clear about how the test was conducted.

Share the results. What was learned and how can it be applied to future work? 

Testing is a cycle. The unit may need to repeat the same test multiple times to confirm accuracy and ensure the initial results were not an anomaly. The test may also suggest a new hypothesis to examine. Return to the hypothesis step and repeat the cycle as needed.

Tools and Resources

Test Template

The University Communications and Marketing analytics group created a PowerPoint template that can be used to support the testing process from start to finish. Use this template to design a test, document the experimentation process and record and present the results.

Access in SharePoint
Image: Cover slide of the A/B Testing Plan Template, with fields for the test name and test start date.

A/B Testing Calculator

SurveyMonkey provides an easy-to-use calculator to determine the statistical significance of an A/B test. Enter the visitors and conversions for the two variants, along with the desired level of confidence. 
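One standard calculation behind this kind of calculator is a two-proportion z-test, which compares the conversion rates of the two variants. The sketch below shows that calculation in Python; it is a generic illustration, not SurveyMonkey's actual implementation, and the sample numbers are hypothetical.

```python
import math

def ab_significance(visitors_a, conversions_a, visitors_b, conversions_b):
    """Two-proportion z-test: returns the z statistic and a two-sided
    p-value for the difference in conversion rates between variants."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical test: 10% conversion for variant A vs. 13% for variant B
z, p = ab_significance(1000, 100, 1000, 130)
significant_at_95 = p < 0.05
```

A p-value below 0.05 corresponds to the commonly used 95% confidence level; a higher desired confidence (such as 99%) requires a smaller p-value before declaring a winner.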

Try it out

Survey Sample Size Calculator

Survey results also need a large enough sample to be statistically meaningful. This tool helps estimate the recommended sample size for a survey invitation.
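A common formula behind sample-size calculators combines a confidence level, a margin of error and a finite-population correction. The sketch below is a generic illustration under the conservative assumption of maximum variability (p = 0.5), not this tool's exact implementation.

```python
import math

def survey_sample_size(population, confidence=0.95, margin_of_error=0.05):
    """Estimate the number of completed survey responses needed,
    using the standard sample-size formula with a finite-population
    correction. Assumes maximum variability (p = 0.5)."""
    # z-scores for common confidence levels
    z_scores = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}
    z = z_scores[confidence]
    p = 0.5  # most conservative assumption about response proportions
    n = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    # Finite-population correction for smaller audiences
    adjusted = n / (1 + (n - 1) / population)
    return math.ceil(adjusted)

# Hypothetical audience of 10,000 people, 95% confidence, 5% margin of error
needed = survey_sample_size(population=10000)  # roughly 370 responses
```

Note that the required sample grows only slowly with population size: a much larger audience does not require a proportionally larger sample, which is why even large surveys often need only a few hundred responses.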

Calculate a sample size

Documentation updated: Nov. 1, 2024