The A/B test tool on ActBlue, which allows you to test out Contribution Form titles and pitches, among other variables, has gotten a significant upgrade, just in time for campaign season.

The old A/B testing tool worked well, but it forced you to wait for both test variations to get enough traffic to reach statistical significance. If one version was performing way better than the other, that meant you were losing out on potential contributions in order to gain valuable insight.

This is how most A/B testing tools work, and it’s a good system. But with the new ActBlue testing tools, which use a more advanced statistical algorithm than typical A/B testing, you can still achieve statistical significance without having to sacrifice a ton of traffic to a losing form.

As the test runs and one variation begins performing better, we’ll start sending more traffic to that form, roughly in proportion to how the variations are trending. You can see the traffic allocation listed just above each variation on the “A/B Test” tab of your Contribution Form. The traffic allocation will change continuously as donations come in. It’s important to note that if a variation is receiving 75% of the traffic, that does not necessarily mean its conversion rate is 3X as high as the other variation(s). If you’re curious what it actually does mean and want to talk complicated stats, you can get in touch with us here.

If there’s a false positive and the losing form starts doing better, the traffic allocation will begin to reverse. The test will continue to run until you click “Make Winner.” If you don’t manually make either version the winner, the A/B testing tool will eventually send 100% of volume to the winner on its own.
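This shifting-traffic behavior is characteristic of multi-armed bandit methods. We haven’t spelled out the exact algorithm here, but a minimal sketch, assuming a Thompson-sampling bandit (one common variant), shows how traffic allocation can follow each variation’s performance while still leaving room for reversals:

```python
import random

def thompson_allocation(conversions, misses, draws=10_000):
    """Approximate each arm's traffic share under Thompson sampling:
    draw a plausible conversion rate from each arm's Beta posterior
    and route the visitor to whichever arm drew highest."""
    wins = [0] * len(conversions)
    for _ in range(draws):
        samples = [random.betavariate(c + 1, m + 1)
                   for c, m in zip(conversions, misses)]
        wins[samples.index(max(samples))] += 1
    return [w / draws for w in wins]

random.seed(1)
# Variation A: 30 conversions out of 1,000 views; B: 20 out of 1,000
share_a, share_b = thompson_allocation([30, 20], [970, 980])
print(share_a > share_b)  # True: the stronger arm gets most of the traffic
```

Note that the split reflects the posterior probability that each arm is best, not the ratio of their conversion rates, which matches the caveat about the 75% figure above.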

The new A/B testing tool makes your tests more efficient, which means you can try out more of them. If you have radically different language you want to try on a form, alongside three more standard pitches, there’s little risk. If it doesn’t work out, we’ll send fewer and fewer people to that losing form.

We wanted to give special thanks to Jim Pugh from ShareProgress for sharing notes on the multi-armed bandit method used in their software and helping us out with building this tool (and for hanging out in the ActBlue office for a week)!

As always, let us know what tests you’re running and what’s working for you at info [at] actblue [dot] com!

Here at ActBlue, we’re always optimizing our contribution form by testing different variations against each other to see which performs best. And, whenever possible, we like to share our results. Needless to say, it’s great to discuss tests that end up winning; every percentage point increase in conversion rate we bring to our contribution form benefits every one of the more than 11,000 active committees that fundraise on ActBlue.

Just as important to this process, however, are the tests that fail to bring about a positive change to our contribution form. Failing to openly discuss and reflect on losing tests belies the experimental nature of optimization. So I’m here to talk about an A/B test we just ran on our contribution form that lost. (Bonus: it lost twice!)

We tried coalescing our “First name” and “Last name” fields into one “Full name” input. The theory was that one fewer input would reduce friction along the contribution path, thereby increasing conversions. Here’s what it looked like:



The control version, it turns out, was actually associated with a higher conversion rate than the “Full name” variation, though not statistically significantly.1 We even tested another slight variation of the “Full name” field with slightly different placeholder text and a more expressive label, but it lost again.

If you’re wondering why it lost, that makes two of us; in a case like this, it’s tough to say what actually happened. Was it aesthetics? An anti-novelty effect? If we speculate like this ad infinitum, we’ll end up with more questions than answers; the world is full of uncertainty, after all. But that shouldn’t discourage this type of reflection. Quite the opposite: it’s the origin story of many new testing ideas.


1: Pr(>|t|) > .05 , n = 63159

Our team is always thinking through ways to make our contribution forms easier to fill out and more streamlined. When donors face too many options, they can freeze up and abandon a form, a phenomenon known as choice paralysis. Eliminating that choice paralysis is a big part of building better contribution forms.

Tandem contribution forms list multiple candidates, which requires donors to make more decisions. But the vast majority of people choose to just split their contribution evenly among all the candidates on the form. That used to look like this:

Too many options and too many boxes for our liking. Do you want to give more to candidate A than organization B? How much do you want to give in total?

We boiled the form down to that last question — how much do you want to give? This made it a lot easier for donors to give (spoiler alert: this A/B test was a huge success).

Now, when you land on a tandem form, you’ll see the normal amount buttons with a note underneath saying who the donation will be split among. You can still click a button to allocate different amounts to each candidate, but donors are less overwhelmed when they land on the page.

Here’s the new form:

So how successful was our A/B test? We saw a 7.16% overall improvement in conversion. That’s unheard-of-huge. We’ve done so many optimizations of our forms that we cheer for a test that leads to a 0.5% increase in conversions.

Part of that overall group consisted of non-Express users (people who haven’t saved their payment information with us) who land on our traditional multi-step form. Among that group we saw a 26% improvement in getting people to move from the first step of the process (choosing an amount to give) to the second step (entering their information).

There are so many candidates and organizations running really thoughtful tandem fundraising campaigns, and this is going to mean a huge bump for them. If you have questions, or want to tell us about a tandem campaign you’ve run, let us know at info AT actblue DOT com. We want to hear from you!

We’re fewer than six weeks from the election. That means, among other things, that optimal fundraising strategies become even more important than usual. Here at ActBlue, we’ve been running tests on a nearly daily basis on all kinds of Express Lane strategies.

Typically, we see the largest (statistically significant) improvements when optimizing factors related to the Express Lane askblock structure like amounts, number of links, and intervals between the links. For our own list, we find that, statistically speaking, the flashier aspects you see in some fundraising emails — emojis in subject lines, e.g. — do not do much (if anything) to improve donation outcomes. Here’s a tactic we recently tested, though, that’s a bit more on the fun side of things and definitely brought in a lot more money.

A little while ago, we started using our weekly recurring feature to great success. (By the way, if you haven’t tried this feature yet, shoot us an email at info [at] actblue [dot] com and we’ll turn it on for you.) After testing which amounts brought in the most money, we landed on this1:

We wanted to see if we could raise more money by asking for “$7 because there are 7 weeks until the election!” Gimmicky? Sure, but we had a hunch that it would perform well.2 Here’s what it looked like:

So what happened? The segment with the “7 for 7” ask performed much better than the control; it brought in 87.6% more money, a statistically and practically significant improvement.3 Cool!

What will be interesting is seeing when this tactic loses its optimality. The key factor is that $7 (with the gimmick) performed better than $10 (the control and previously optimal ask amount) despite being a lower dollar amount. At some point, though, a too-low combination of weeks-to-election and dollar ask will negate the positive ceteris paribus effect of the gimmick. Based on other testing we’ve done, my guess is that that point will be at 4 weeks and $4. We’re doing follow-up testing on this “n weeks until the election!” tactic, so we’ll see!

If you decide to test something similar, send me an email and we can chat! Emails to info [at] actblue [dot] com with my name in the subject line will be directed to me.

P.S. Doing a lot of testing in the election run-up? Want a tool to help you manage your test groups? I wrote something in R for you! I’ll post something on the blog about it soon, but if you want it in the meantime, shoot me a note (emails to info [at] actblue [dot] com with my name in the subject line will be directed to me).


1 Actually, we built a model that predicts how a given Express user will respond to different types of donation requests based on previous donation information. Using those predicted values, we decide what type of donation ask they receive (of one-time, weekly recurring, monthly recurring) and for how much money they are asked. Math! The point: this is what we landed on for a certain subset of our list.

2 Of course, all else equal, it’s tough to distinguish whether any difference was due to the gimmick or because $7 is lower than $10. The theory would be that with a lower amount, more people would give, and even though the mean donation amount would likely be lower, the increase in number of donors would outweigh the decrease in mean donation size. This is definitely possible, but so is the opposite; it’s all about finding the optimal point.

In fact, we included a segment in the test which received an askblock starting with a lower amount and saw this dynamic in action, though the overall treatment effect was not statistically significantly different from the control. This lends support to interpreting the effect in the gimmick segment as the gimmick per se, but a detailed discussion is excluded from the body of the post for the sake of brevity. More rigorous follow-up testing on this “n weeks until the election!” tactic is already in the field, so shoot us an email to chat!
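The rate-versus-amount trade-off described in this footnote can be made concrete with a toy calculation. The conversion rates and mean gifts below are invented for illustration, not numbers from our test:

```python
def revenue_per_recipient(conversion_rate, mean_gift):
    """Expected dollars raised per person emailed."""
    return conversion_rate * mean_gift

# Hypothetical: the lower $7 ask converts more donors at a smaller mean gift
rev_low_ask = revenue_per_recipient(0.020, 7.50)    # 2.0% give, avg $7.50
rev_high_ask = revenue_per_recipient(0.012, 10.80)  # 1.2% give, avg $10.80
print(rev_low_ask > rev_high_ask)  # True: here the rate gain outweighs gift size
```

With other numbers the inequality flips, which is exactly why finding the optimal point is an empirical question.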

3: Pr(>|t|) < .01, controlling for other significant factors, including previous donation history.

We’re less than 8 weeks out from Election Day and are now making the weekly recurring feature available to campaigns and organizations. Just drop us a line at info [AT] actblue [DOT] com and we’ll turn it on for you.

Yep, weekly recurring is exactly what it sounds like. You can ask your donors to sign up to make a recurring contribution that processes on that same day of the week every week until Election Day. After Election Day, the recurring contribution automatically ends.

So, if you get someone to sign up today for a weekly recurring contribution, they’d then have 7 more contributions scheduled to process every Friday.
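The scheduling arithmetic is straightforward. Here’s a sketch with illustrative dates (the helper below is hypothetical, not ActBlue’s actual scheduler):

```python
from datetime import date, timedelta

def remaining_weekly_charges(signup: date, election_day: date):
    """Charge dates after signup: same weekday each week, with the
    final charge landing on or before Election Day."""
    charges = []
    nxt = signup + timedelta(weeks=1)
    while nxt <= election_day:
        charges.append(nxt)
        nxt += timedelta(weeks=1)
    return charges

# A Friday signup roughly eight weeks before a Tuesday Election Day
charges = remaining_weekly_charges(date(2014, 9, 12), date(2014, 11, 4))
print(len(charges))  # 7 more charges, each on a Friday
```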

Election Day is getting closer and closer though, so if you’re going to use weekly recurring, we suggest getting started soon.

Once we turn on the feature for you, create a new contribution form and open the “Show recurring options” section in the edit tab. You will see a new option there for weekly recurring. Make sure you also turn off popup recurring if you have it enabled — these two features aren’t compatible (yet!).

It looks like this:

We’ve run a few tests on weekly recurring this week with our own email list and have had a good deal of success. As always, a donor needs to know exactly what amount and for how long they’ll be charged before they click a link. If you’re going to use weekly recurring with Express Lane (and you should!), here is the disclaimer language we used and recommend you use as well:

Based on our testing, certain segments of your list will respond better than others to a weekly recurring ask (not exactly a shocking revelation). We sort our list into those likely to give to a recurring ask and those who are more likely to give a one-time gift. For the recurring pool, the weekly ask has been performing strongly. Unsurprisingly, the same can’t be said for our one-time folks.

Test it out with the portion of your list that is more likely to give recurring gifts. And try fun things like offering a small package of swag like bumper stickers in return for signing up for a weekly recurring gift.

And if you find an angle that’s working really well for weekly recurring, let us know!

This post is the third in our blog series on testing for digital organizers. Today I’ll be talking a bit about what an A/B test is and explain how to determine the sample size (definition below) you’ll need to conduct one.

Hey, pop quiz! Is 15% greater than 14%?

My answer is “well, kind of.” To see what I mean, let’s look at an example.

Let’s say you have two elevators, and one person at a time enters each elevator for a ride. After 100 people ride each elevator, you find that 15 people sneezed in elevator 1, and 14 people sneezed in elevator 2.

Clearly, a higher percentage of people sneezed in elevator 1 than elevator 2, but can you conclude with any certainty that elevator 1 is more likely to induce sneezing in its passengers? Or, perhaps, was the difference simply due to random chance?

In this contrived example, you could make a pretty good case for random chance just with common sense, but the real world is ambiguous so decisions can be trickier. Fortunately, some basic statistical methods can help us make these judgments.
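To make that judgment less hand-wavy, you can run a standard two-proportion z-test on the elevator numbers. This is a textbook method, not an ActBlue tool; here’s a minimal sketch in Python:

```python
from statistics import NormalDist

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided z-test for whether two observed proportions differ."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # assume equal rates under the null
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value

# Elevator 1: 15 of 100 sneezed; elevator 2: 14 of 100
p = two_proportion_p_value(15, 100, 14, 100)
print(round(p, 2))  # far above 0.05: no evidence of a real difference
```

A p-value this large means a 15-vs-14 split is entirely consistent with random chance at these sample sizes.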

One specific type of test for determining differences in proportions1 is commonly called an A/B test. I’ll give a simple overview of the concepts involved and include a technical appendix for instructions on how to perform the procedures I discuss.

Let’s recall what we already said: we can perform a statistical test to help us detect a difference (or lack thereof) between the action rate in two samples. So, what’s involved?

I’ll skip over the nitty-gritty statistics of this, but it’s generally true that as the number of trials2 increases, it becomes easier to tell whether the difference (if there’s any difference at all) between the two variations’ proportions is likely to be real, or just due to random chance. Or, slightly more accurately, as the number of trials increases, smaller differences between the variations can be more reliably detected.

What I’m describing is actually something you’ve probably already heard about: sample size. For example, if we have two versions of language on our contribution form, how many people do we need to have land on each variation of the contribution form to reliably detect a difference (and, consequently, decide which version is statistically “better” to use going forward)? That number is the sample size.

To determine the number of people you’ll need, there are a few closely related concepts (which I explain in the appendix), but for now, we’ll keep it simple. The basic idea is that as the percent difference between variations you wish to reliably detect decreases, the sample size you’ll need increases. So, if you want to detect a relatively small (say, 5%) difference between two variations, you’ll need a larger sample size than if you wanted to be able to detect a 10% difference.

How do you know the percent difference you’d like to be able to detect? Well, a good rule of thumb to start with is that if it’s a really important change (like, say, changing the signup flow on your website), you’d want to be able to detect really small changes, whereas for something less important, you’d be satisfied with a somewhat larger change (and therefore less costly test).

Here’s what that looks like:

Sample Size Graph

Required sample size varies by the base action rate and percent difference you want to be able to reliably detect. Notice the trends: as either of those factors increases, holding all else equal, the sample size decreases.

For example, if you’re testing two versions of your contribution form language to see which has a higher conversion rate, your typical conversion rate is 20%, and you want to be able to detect a difference of around 5%, you’d need about 26k people in each group.

For instructions on how to find that number, see the appendix below. Once you have determined your required sample size, you’ll be ready to set up your groups and variations, run the test, and evaluate the results of your test. Each of those will be upcoming posts in this series. For now, feel free to email info [at] actblue [dot] com with any questions!

1 Note that this should be taken strictly as “proportions”. Of course, there are many things to be interested in other than the percentage of people who did an action vs. didn’t (e.g., donated vs. didn’t donate), like values of actions (e.g., contribution amounts), but for now, we’ll stick to the former.
2 I.e., the number of times something happens. For example, this could be the number of times someone reaches a contribution form.


Statistics is a big and sometimes complicated world, so I won’t explain this in too much detail. There are many classes and books that will dive into the specifics, but I want you to have a working knowledge of a few important concepts you’ll need to complete an accurate A/B test. I’m going to outline four closely related concepts necessary for determining your sample size, and walk through how to find this number. Even though I’m sticking to the basics, this section will be a bit on the technical side of things. Feel free to shoot an email our way with any questions; I’m more than happy to answer any and all.

Like I said, there are four closely related concepts when it comes to this type of statistical test: significance level, power, effect size, and sample size. I’ll talk about each of these in turn, and while I do, remember that our goal is to determine whether we can reject the assumption that the two versions are equal (or, in layman’s terms, figure out that there is a real statistical difference between the two versions).

Significance level can be thought of as the (hopefully small) likelihood of a false positive: specifically, the probability that you falsely reject the assumption that the two versions are equal (i.e., claim that one version is better than the other when it’s not). When you hear someone talk about a p-value, they’re referencing this concept. The most commonly used significance level is 0.05, which is akin to saying “there’s a 5% chance I’ll claim a real difference when there actually isn’t one.”

Power is the probability that you’ll avoid a false negative. Said another way, it’s the probability that if a real difference exists, you’ll detect it. The standard value is 0.8, meaning there’s an 80% chance you’ll detect a real difference. That’s by no means always the best value to choose; there are good reasons to adjust it if you know exactly why you’re doing so, but 0.8 will work for our purposes. Why not just pick a value of 0.9999, which is akin to saying “if there’s a real difference, there’s a 99.99% chance that I’ll detect it”? Well, that would be nice, but as you increase this value, the required sample size increases. And sample size is likely to be the limiting factor for an organization with a small (say, fewer-than-100k-member) list.

Effect size. Of the two versions you’re testing against each other, you’d typically call one the ‘control’ and the other the ‘treatment’, so we’ll use those terms. Effect size asks: what do you expect the proportion of actions (e.g., contributions) to be for the control, and what do you expect it to be for the treatment? The percent difference between them is the effect size. How this affects sample size is demonstrated in the graph above. But the whole point of running the test is that you don’t know the two proportions in advance, so how can you pick those values? You estimate your base action rate. For example, if your donation rate from an email is typically 5%, use that as your base action rate. Then, for the second proportion, pick the smallest difference you’d like to be able to detect. As with power, you might find yourself asking, “well, why wouldn’t I just pick the smallest possible difference?” Again, the answer is that as you decrease the magnitude of the difference, the sample size you need will increase.

Finally, we have sample size, or the number of people we need to run the test on. If we have values for the above three things, we can figure out how big of a sample we need!

So how do we do that? There are many ways, but one of the easiest, best, and most accessible is R. It’s free, open-source, and has an excellent community for support (which really helps as you’re learning). Some might ask, “doesn’t that have a relatively steep learning curve? And isn’t there some easier way to do this?” The answer to both of those questions is “maybe,” but I’ll give you everything you need in this blog post. There are also online calculators of varying quality that you can use, but R is really your best bet, no matter your tech level.

Doing this in R is actually pretty simple (and you’ll pick up another new skill!). After you download, install, and open R, enter the following command:

power.prop.test(p1 = 0.1, p2 = 0.105, sig.level = 0.05, power = 0.8)

and press enter. You’ll see a printout with a bunch of information, but you’re concerned with n. In this example, it’s about 58k. That number is the sample size for each group you’d need to detect, in this case, a 5% difference at a significance level of 0.05, a power of 0.8, and a base action rate of 10%. So, just to be certain we’re on the same page, a quick explanation:
p1: Your ‘base action rate’, or the value you’d expect for the rate you’re testing. If your donation rate is usually 8%, then p1 = 0.08
p2: Your base action rate plus the smallest percent difference you’d like to be able to detect. If you only care about noticing a 10% difference, and your ‘base action rate’ is 8%, then p2 = 0.088 (0.08 + (0.08 * 0.10))

Of course, your base action rate will likely be different, as will be the percent difference you’d like to be able to detect. So, substitute those values in, and you’re all set! Playing around with different values for these can help you gain a more intuitive sense of what happens to the required sample size as you alter certain factors.
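If you’d rather not install R, here’s a Python sketch of the same calculation, using the standard normal-approximation formula that power.prop.test is based on (expect small rounding differences from R’s output):

```python
from statistics import NormalDist

def sample_size_per_group(p1, p2, sig_level=0.05, power=0.8):
    """Per-group n for a two-sided test of two proportions
    (normal approximation, as in R's power.prop.test)."""
    q = NormalDist().inv_cdf
    z_alpha = q(1 - sig_level / 2)   # critical value for significance
    z_power = q(power)               # critical value for power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_power * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5)
    return (numerator / abs(p1 - p2)) ** 2

print(round(sample_size_per_group(0.1, 0.105)))  # ~58k, matching the R example
print(round(sample_size_per_group(0.2, 0.21)))   # ~26k, matching the example above
```

As in R, plugging in different base rates and detectable differences is a quick way to build intuition for how the required sample size moves.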

Recurring pledges are like gold. There’s a reason why they’re often called sustaining contributions. Building a base of recurring donors can have a huge impact on the sustainability of any organization, including campaigns.

And now we’re making it easier for you to raise more long-term recurring contributions. Introducing: infinite recurring!

You’ve got a choice: ask people for a recurring contribution for a defined number of months (the old standard), or ask them for one with no expiration date (new!). You can also choose not to have a recurring option at all, but we don’t recommend it (I’ll explain why later).

Here’s how you do it: Go to the edit page of any contribution form. Scroll down till you see this:

recurring toggle

Click on it to expand. It’ll look like this:

recurring options expanded

Select your radio button and then scroll down and hit submit. Yep, that’s it.

ActBlue got its start helping candidates raise money for their campaigns, which are built in two-year cycles, so we allowed folks to set up recurring contributions for up to 48 months. The assumption was that donors would feel more comfortable signing up for a recurring contribution that was sure to end at some point. These days, more and more organizations, which are around cycle after cycle, are using ActBlue. Plus, the way people use credit cards has changed, and we have a whole system that lets you extend/edit/add a new card to your recurring contribution, complete with prompts from us. It doesn’t make a ton of sense to have time-limited recurring contributions anymore.

So we tested it. Would forms with an infinite recurring ask perform the same as (or better than) forms with a set number of months? AND would you raise more money if you didn’t have a recurring ask on the form, but instead asked people with a pop-up recurring box after their contribution was submitted?

We’ve got some answers. Several committees have run tests, confirming that conversion rates on time-limited forms and infinite recurring forms are similar. So if you’re around longer than election day, go ahead and turn on infinite recurring.

Generally speaking, making a form shorter and giving people fewer options leads to higher conversion rates. So theoretically, taking the recurring option off of a form should lead to more donations. We have a pop-up recurring box that campaigns can turn on to try and persuade a one-time donor to make their donation recurring, and there seemed to be a reasonable chance that having no recurring ask on the form would raise more money.

Nope! Turns out that we got a statistical tie on conversion rates between having the recurring option on the form or off. Just having pop-up recurring turned on did not generate as many recurring contributions as having it both on the form and as a post-donation action.

There were slightly more contributions processed on forms without a recurring option, but not enough to generate a statistically significant result. Add to that the lost revenue from having fewer recurring donations, and you end up with a pretty clear takeaway: leave the recurring option on the form. Sure, you can turn it off, but you’ll likely lose money. And nobody wants that.

That’s why recurring contributions have been on every ActBlue contribution form since the beginning. These days we run anywhere from 8-14% recurring, and over $11 million is pledged to thousands of campaigns and organizations.

There is one big question we haven’t answered yet: will you raise more money overall from an infinite recurring contribution than say one with a 48 month expiration date? We’re currently working on a long-term experiment to test exactly that.

The answer might seem self-evident, but the truth is nobody really knows. Credit cards expire and people cancel their pledges. You never know for sure how much money you’ll raise from a recurring contribution, but if you pay attention to your long-term data, you’ll be able to figure out your pledge completion rate.
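As a toy model of why this is hard to know in advance, suppose pledges survive month to month at some constant retention rate (a big simplification, since real attrition varies). The numbers below are invented for illustration:

```python
def expected_pledge_total(amount, monthly_retention, max_months=None):
    """Expected total raised from one recurring pledge, assuming a
    constant chance each month that the pledge survives to the next."""
    total, survival, month = 0.0, 1.0, 0
    while max_months is None or month < max_months:
        total += amount * survival
        survival *= monthly_retention
        month += 1
        if max_months is None and survival < 1e-6:
            break  # remaining tail is negligible
    return total

# Hypothetical $10/month pledge with 95% month-over-month retention
capped = expected_pledge_total(10, 0.95, max_months=48)
uncapped = expected_pledge_total(10, 0.95)
print(round(capped), round(uncapped))  # the 48-month cap forfeits only the tail
```

At this retention rate, most of the expected value arrives within the first 48 months anyway; with lower attrition, the uncapped pledge pulls further ahead. That’s exactly the kind of question the long-term experiment is meant to answer with real data.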

If you’re interested in figuring out a recurring donor strategy, we’re more than happy to give you some (free) advice. Just drop us a line at info [at] actblue [dot] com.

