Testing Your Emailing and Improving Results

By SmallBusinessComputing Staff | Posted June 19, 2002
By Mark Sakalosky

Our company advises a variety of clients on email marketing matters. When we speak with prospects for the first time, we are often asked to enlighten them by revealing the "best" way to send email. The simple answer: use personalization and customization as often as possible to ensure that the information delivered to customers is relevant and useful.

That usually fails to cut the mustard. Prospects push on. They want specific tactics.

Questions we receive frequently include:

Are customers more likely to open an email if the sender field is a company or an individual?

Do customers prefer newsletters with a left-hand or right-hand rail?

HTML or text: Which generates better results?

As a rule, I support sending email "from" individuals, using a right-hand rail for newsletters in an HTML format (recipients should have the option of receiving text). That said, rules are made to be broken and probably best ignored.

The correct answer to all email marketing questions is: test it. Employ test versus control methodology for definitive answers to tactical questions. Prospects are never particularly fond of that response. Prospects like certainty. Prospects like consistency. Prospects like absolutes. In email marketing, these don't exist.

Different audiences respond differently. If I were to send the same email campaign to a group of experienced tech professionals and a sampling of Internet newbies, the results would be vastly disparate. Different open rates. Different click-through rates (CTRs). Different conversion rates. Different unsubscribe rates. Clients who serve experienced technology professionals need to approach their market differently than clients targeting newbies. Precisely how the approach should differ can only be determined through testing.

Test versus control methodology analyzes observed customer data generated through actual campaigns to determine the most effective tactic for achieving the desired result. Test versus control is achieved by segmenting the audience into a minimum of two parts. Each segment receives an identical email, except for a single variable. Observed customer data is collected and analyzed to determine which audience segment took the desired action most frequently.

Let's look at test versus control with respect to an e-newsletter. Our client's newsletter provides customers with valuable, relevant information. At the same time, it makes a promotional offer in a right-hand rail. It draws prospects with information to expose them to the offer.
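The "identical except for a single variable" rule is the heart of the methodology, and it's easy to verify programmatically. Here's a minimal sketch in Python; the campaign fields and addresses are invented for illustration:

```python
# Hypothetical campaign definitions: the two versions must differ
# in exactly one field (here, the tested variable is the sender).
base = {
    "subject": "Your June newsletter",
    "body": "Valuable, relevant information...",
}
control = {**base, "sender": "Acme Inc. <news@acme.example>"}
test = {**base, "sender": "Jane Doe <jane@acme.example>"}

# Confirm the versions differ only in the variable being tested;
# any other difference would confound the comparison.
diff = {k for k in control if control[k] != test.get(k)}
assert diff == {"sender"}
```

If the assertion fails, more than one variable changed, and any difference in response can no longer be attributed to a single cause.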

Each issue is treated as an opportunity to optimize future editions. We're never satisfied. We seek improvement. With the upcoming issue, our goal is to learn if a change in design will generate incremental revenue.

The newsletter consists of two columns: a body column approximately two-thirds of the width of the newsletter and a right-hand rail filling the remaining third. This is the control version. The test version also has two columns, each the same width as its counterpart in the control version, but it features a left-hand rail instead of a right one. The rail content in both versions is identical. The only change is position.

We divide the audience into two segments: Control and Test. The test segment is expected to be much smaller than the control segment, but it should be large enough to ensure valid results that can be compared against the control. For the purposes of this example, assume the control segment is 9,000 and the test segment 1,000. The test segment should be randomly selected from the total subscriber base to ensure the characteristics of both groups are identical. It's important to take into account variables such as subscription length and subscription source to ensure the two groups are equivalent.
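The random split can be sketched in a few lines of Python. The subscriber list here is made up, and a fixed seed is used only so the split is repeatable; in practice any unbiased random draw works:

```python
import random

# Hypothetical subscriber base of 10,000 addresses
subscribers = [f"user{i}@example.com" for i in range(10000)]

rng = random.Random(2002)                      # fixed seed: repeatable split
test_segment = rng.sample(subscribers, 1000)   # 1,000 chosen at random
test_set = set(test_segment)
control_segment = [s for s in subscribers if s not in test_set]  # remaining 9,000
```

Because `random.sample` draws without replacement from the whole base, each subscriber has the same chance of landing in the test group, which is what keeps the two segments statistically equivalent.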

The next part's easy. We send the control segment the control version of the newsletter. The test segment gets the test version of the newsletter. We wait for observed data to be collected. In this instance, we're collecting and analyzing CTRs, both total and unique, and conversion rates for the promotional offers in the right rail of the control version and the left rail of the test version. Remember, the content of both is identical. If one version outperforms the other with a higher CTR or conversion rate, it's attributable to the position of the rail. It is important to measure both of these metrics, because the numbers do not always move in the same direction.
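The three metrics are simple ratios over the segment size. A small sketch, with a hypothetical `rates` helper (total CTR counts every click, unique CTR counts each recipient once, conversion counts completed offers):

```python
def rates(sent, click_events, converters):
    """Compute total CTR, unique CTR, and conversion rate for a segment.

    sent:         number of emails delivered to the segment
    click_events: one entry per click, by recipient (repeats allowed)
    converters:   set of recipients who took the promotional offer
    """
    total_ctr = len(click_events) / sent            # every click counts
    unique_ctr = len(set(click_events)) / sent      # each clicker counts once
    conversion_rate = len(converters) / sent
    return total_ctr, unique_ctr, conversion_rate

# Toy example: 1,000 sent, recipient "a" clicked twice, "b" converted
total, unique, conv = rates(1000, ["a", "a", "b", "c"], {"b"})
```

The toy numbers show why both CTRs matter: one enthusiastic recipient clicking repeatedly inflates total CTR without adding a single new prospect for the offer.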

After 48 hours, the results are in. In our first run, the test clearly outperformed the control. For the next mailing, we'll repeat the test to validate the results. If they're valid, the design changes.
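The article leaves "valid" undefined; one common way to judge it (my addition, not part of the original methodology description) is a two-proportion z-test on the click counts, which asks how likely a gap this large would be if rail position made no difference:

```python
import math

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test comparing the click rates of two segments."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)           # rate under "no difference"
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: control 450 clicks of 9,000, test 70 of 1,000
z, p = two_proportion_z(450, 9000, 70, 1000)   # roughly z = 2.7, p = 0.007
```

A p-value this small suggests the lift is unlikely to be chance, but repeating the test on the next mailing, as described above, is still the stronger confirmation.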

This approach can be used to test any newsletter variable: sender address, format (HTML versus text), subject line, and so on. With our client, we use every issue of the newsletter as an opportunity to optimize future mailings. We constantly test variables seeking opportunities for improvement. It never ends - nor should it.

Mark Sakalosky is vice president of marketing strategy at MarketSmart Technologies, an agency specializing in technology-powered marketing solutions that build long-term, personalized customer relationships. He also oversees development of MarketSmart's newsletter, E-mail Strategist. Previously, Mark was vice president of email product development at eTour.com, a consumer web site acquired by AskJeeves. Mark has held a variety of marketing roles at USA Information & Services, Home Shopping Network, and The Golf Channel. Although he earned an MBA in marketing strategy from The College of William & Mary, he remains a devoted Spider from his undergrad days at the University of Richmond.

Reprinted from Click-Z.com.
