April 21, 2026 · 5 min read · Aldus

AB Tests That Actually Move Your Open Rates

Most newsletter AB tests are a waste of time. Here's how to run tests that actually teach you something and move the numbers that matter.

email marketing · newsletter growth · AB testing · open rates · email strategy

Why Most AB Tests Fail Before They Start

AB tests have a reputation for being the responsible, data-driven thing to do. Run a test, pick a winner, repeat. Simple. Except most newsletter creators are running tests that couldn't teach them anything even if they wanted to learn. They're testing button colours with 400 subscribers, declaring a winner after 48 hours, and then wondering why their open rates are flat six months later.

The problem isn't the concept. Split testing is genuinely one of the most useful things you can do as a newsletter operator. The problem is that people treat it like a ritual rather than a tool. They run tests because they feel like they should, not because they have a specific question they're trying to answer.

Before you touch a single variable, write down the question you're actually asking. Not "which subject line performs better" but "does leading with a number increase open rates more than leading with a curiosity gap?" That distinction matters. One gives you a one-time answer. The other builds a model of how your audience thinks.

What's Actually Worth Testing

Subject lines get all the attention, and they deserve some of it. Your subject line is the only thing standing between your email and the bin. But there's a hierarchy here, and most people test the wrong things first.

Subject line format is worth testing. Not just the words, the structure. Does your audience respond better to questions or statements? Numbers or no numbers? First-person or second-person? These are structural patterns you can apply across every issue once you know the answer. Testing "7 things about SEO" against "How we grew to 50k subscribers" isn't just a subject line test. It's a question about whether your readers are drawn to utility or story.

Preview text is the most underused lever in email. It's essentially a second subject line, and most newsletters either ignore it or let it default to whatever the first line of the email happens to be. Test it deliberately. A good preview text can lift open rates by 5 to 10 percentage points on its own, which on a list of 10,000 people is 500 to 1,000 extra opens per send.

Send time gets tested obsessively and produces almost nothing useful. Yes, test it. But don't spend three months on it. Pick the best time from your data and move on. The difference between 9am Tuesday and 11am Thursday is usually a rounding error compared to getting the content right.

Content format is where the real learning happens and almost nobody tests it. Does your audience engage more with a long analytical take or a short punchy brief? Do they click more on embedded links or on a single CTA button at the bottom? These tests take longer to run because you're measuring clicks and replies, not just opens. But the answers shape your entire editorial approach, not just one send.

The Stats Problem With AB Tests

Here's the uncomfortable truth about AB testing for newsletters. Unless you have a list of at least 5,000 to 10,000 subscribers, most of your test results are statistical noise dressed up as insight.

To detect a meaningful difference in open rates, say 3 percentage points off a 20% baseline, with 95% confidence, you typically need around 1,500 people in each group. That's 3,000 total just to test one variable. Most independent newsletter creators don't have that. And even those who do often don't account for the fact that email results vary week to week based on factors that have nothing to do with your test: the news cycle, the day of the week, whether a big sender hit inboxes at the same time.
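If you want to sanity-check that maths against your own list, here's a minimal sketch using the standard two-proportion approximation. The 20% baseline and the confidence and power values are illustrative assumptions, not numbers from any real list:

```python
from statistics import NormalDist

def sample_size_per_group(p_baseline, lift_pp, confidence=0.95, power=0.80):
    """Approximate subscribers needed per group to detect a lift in open rate.

    Standard two-proportion z-test approximation. All inputs are
    illustrative assumptions -- plug in your own list's baseline.
    """
    p1 = p_baseline
    p2 = p_baseline + lift_pp
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided
    z_beta = NormalDist().inv_cdf(power)
    numerator = (
        z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
        + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5
    ) ** 2
    return numerator / (p2 - p1) ** 2

# 20% baseline open rate, looking for a 3-point lift:
print(round(sample_size_per_group(0.20, 0.03)))              # ~2,940 at 80% power
print(round(sample_size_per_group(0.20, 0.03, power=0.50)))  # ~1,440: the ballpark above
```

Note the gap between the two numbers: judging on confidence alone gets you near the 1,500-per-group figure, while also demanding a reasonable chance of detecting a real effect roughly doubles it.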

This doesn't mean small lists shouldn't test. It means they should interpret results differently. Instead of "Subject Line A won with 95% confidence," the honest read is "Subject Line A outperformed B in this send, which is a signal worth watching over time." Run the same structural test three or four times. If the same pattern keeps winning, trust it. One result is anecdote. Four results in the same direction is something you can act on.
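One way to formalise "four results in the same direction": assume the variants are actually equal, treat each send as a coin flip, and ask how likely your streak is. A rough sign-test sketch, with thresholds that are illustrative rather than a rule:

```python
from math import comb

def prob_at_least_k_wins(k, n):
    """Probability variant A wins at least k of n sends if A and B are truly equal.

    Treats each send as a fair coin flip and ignores margin of victory --
    a deliberately conservative sign test for small lists.
    """
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

print(prob_at_least_k_wins(3, 4))  # 0.3125 -- 3 of 4 is weak evidence
print(prob_at_least_k_wins(4, 4))  # 0.0625 -- 4 of 4 is worth acting on
```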

If you're using a platform like Aldus, the send analytics will show you not just open rates but engagement patterns over time, which makes it easier to spot genuine trends rather than one-off flukes.

How to Set Up an AB Test That Teaches You Something

Start with a hypothesis. "I think leading with a specific number in the subject line will increase open rates because my audience is analytics-driven." That's a testable, falsifiable idea. "I want to see which subject line does better" is not.

Test one variable at a time. This sounds obvious and yet people constantly change the subject line, the preview text, and the send time simultaneously, then have no idea which change drove the result. Pick one thing. Change only that thing.

Split your list randomly and equally. Most email platforms do this automatically. If yours doesn't, switch platforms or at least alternate which half gets which version across different sends.
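If you ever need to do the split yourself, a hashed assignment keeps it stable and roughly even without depending on list order. A minimal sketch; the function name and salting scheme are just one reasonable choice:

```python
import hashlib

def assign_variant(email, test_name):
    """Deterministically assign a subscriber to variant A or B.

    Hashing email + test name gives a stable, roughly 50/50 split, so a
    subscriber stays in the same bucket across resends. The test name
    salts the hash so different tests produce independent splits.
    """
    digest = hashlib.sha256(f"{test_name}:{email}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(assign_variant("reader@example.com", "subject-line-numbers"))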

Decide your success metric before you run the test, not after. If you're testing subject lines, the metric is open rate. If you're testing CTAs, it's click rate. Choosing the metric after you see the data is how you accidentally find patterns that aren't there.

Give it enough time. For open rates, 24 to 48 hours is usually sufficient since most people open within the first day. For clicks and replies, give it a week. And resist the urge to check every hour. Looking at results before enough data has come in doesn't speed up the process, it just makes you anxious.

Document everything. Keep a simple log: what you tested, what your hypothesis was, what the result was, and what you're going to do with that information. Most newsletter creators run tests and then forget what they learned six weeks later. Your testing log is your institutional knowledge.
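Here's one way such a log might look, as a minimal sketch built on a plain CSV file. The field names and the example entry are hypothetical, not a prescribed schema:

```python
import csv
from datetime import date
from pathlib import Path

LOG = Path("ab_test_log.csv")
FIELDS = ["date", "variable", "hypothesis", "variant_a", "variant_b",
          "metric", "result", "decision"]

def log_test(**row):
    """Append one test to the log, writing a header on first use."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({"date": date.today().isoformat(), **row})

log_test(
    variable="subject line format",
    hypothesis="leading with a number lifts opens for an analytics-driven list",
    variant_a="3 CTA mistakes that kill click rates",
    variant_b="What most newsletters get wrong about CTAs",
    metric="open rate",
    result="A: 41.2%, B: 38.9%",
    decision="repeat twice more before standardising on numbers",
)
```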

The AB Tests Worth Running Right Now

If you're not sure where to start, run these three tests over the next quarter.

First, test your subject line format. Take your next four issues and alternate between a curiosity-gap subject line and a direct value-statement subject line. Something like "What most newsletters get wrong about CTAs" versus "3 CTA mistakes that kill click rates." Track open rates across all four sends and see if a pattern emerges.

Second, test your preview text. For four sends, write a preview text that adds genuinely new information rather than echoing the subject line. Compare open rates against your historical average. This one almost always wins, which tells you something about how much attention people pay to the inbox view before opening.

Third, test your CTA structure. For a month, alternate between issues that have one single CTA and issues that have two or three options. Track total click rate and see which structure your readers respond to. Some audiences are decision-fatigued by multiple options. Others won't click if the first CTA doesn't appeal to them. You won't know which camp your list falls into until you test it.

Don't test all three at once. Run them sequentially so you know what's driving what.
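Once a few alternating sends are logged, a quick tally per format is enough to show whether a pattern is emerging. A sketch with made-up open rates standing in for your own data:

```python
from collections import defaultdict

# Hypothetical open rates from four alternating sends (not real data).
sends = [
    ("curiosity-gap", 0.38),
    ("value-statement", 0.42),
    ("curiosity-gap", 0.37),
    ("value-statement", 0.44),
]

totals = defaultdict(list)
for fmt, open_rate in sends:
    totals[fmt].append(open_rate)

for fmt, rates in totals.items():
    print(f"{fmt}: mean open rate {sum(rates) / len(rates):.1%} over {len(rates)} sends")
```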

The creators who get genuinely good at AB tests aren't running more tests than everyone else. They're running better ones, staying patient with the data, and actually changing their approach based on what they find. That last part is rarer than you'd think.

Try Aldus free

AI writes your newsletter. You just approve and send.

Get started →