
A couple of weeks ago, Julia unveiled our new mobile-responsive contribution forms to the world. Since the rollout, our mobile contribution numbers have been through the roof, and we’re really excited to share them with you.

Check out this graph, in which the red line represents the release date. Notice anything?

ActBlue mobile donation trends

As we’ve mentioned, our initial A/B test yielded some excellent results: our new mobile-responsive forms led to a 49% boost in conversions (a statistically significant improvement at p < .01). And these forms are already making a marked difference.
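(For the curious, here’s a minimal sketch of how a lift like that can be checked for significance with a two-proportion z-test. The choice of test and every count below are hypothetical placeholders for illustration, not our actual data.)

```python
# Hypothetical sketch: checking whether a conversion lift is
# statistically significant with a two-proportion z-test.
# These counts are made up for illustration; they are not ActBlue data.
from statsmodels.stats.proportion import proportions_ztest

conversions = [450, 302]   # new mobile-responsive form vs. old form
visitors = [5000, 5000]    # visitors shown each version

z, p = proportions_ztest(conversions, visitors)
print(f"z = {z:.2f}, p = {p:.4f}")  # for these counts, p is well below .01
```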

Since the release, 21.9% of sitewide donations have been made by supporters using a mobile device. For ActBlue Express users– those who have saved their credit card information with us– the number’s even higher, at a full 25.9% mobile. According to the stats textbooks I keep on my desk for reference, that number is “insanely high”.1 Seriously though, from the beginning of the year to the day our mobile-responsive contribution forms were released, only 9.0% of donations were made via mobile devices (12.3% for Express users). It’s hard to overstate how big this jump is, and there’s clearly more growth to come.

The importance of mobile donations is increasing inexorably; we all know that. But on one of the busiest days of the year, we topped 30% mobile donations among ActBlue Express users. It’s a whole new world.

Footnotes:
1Just kidding, of course :-)

This week we officially announced Express Lane, and I’m guessing the fact that it can more than triple your money caught your eye. It can, and the way to raise more money is to learn Express Lane best practices and do your own optimization. We’re here to help you with both.

We’ve done a significant amount of Express Lane testing in our email blasts over the past few months to help you get started on what works– and what doesn’t– with Express Lane. Each email list, of course, is different, so you should probably test and expand upon the takeaways below with your own list. And definitely let us know the results; we’d love to hear about them. It’d be especially great if you wanted to share your results here on the blog– just like the fantastic folks at CREDO Action were happy to do for this post– so that others can learn from your test results.

Here’s a little bit of background: our own email list consists entirely of donors, so it’s a pretty diverse group of folks. Also, we always fundraise to support our own infrastructure, not specific issues or candidates. Further, we spend most of our time optimizing for recurring donations because we’ve found them to be best for our organization, but much of what we say here also applies to one-time donation asks. We are, by the way, totally interested in collaborating with you on testing and optimization efforts– just give us a shout.

For this post, we’re going to discuss the gains you can expect from using Express Lane, share results from some of the tests we’ve run on our Express Lane askblocks, and touch on stylistic concerns. Then, we’ll finish up with a summary of our recommendations and where you can go from here.

What to Expect

So, you probably expect to raise a lot more money using Express Lane, but what’s a typical increase? We’ve tested Express Lane vs. non-Express Lane on both recurring and one-time asks among randomly sampled Express users, and seen Express Lane bring in more than triple the money for one-time1 asks and 37.7% more for recurring asks (measured as money donated plus pledged recurring contributions).

That’s quite a big boost, but other partners have seen significant gains, too. For example, here’s a test that was run by our friends at CREDO Action, some of our most sophisticated users. They tested a $5 control ask against a $5, $10, $25 Express Lane askblock. Their Express Lane version brought in 37.4% more donations than the control version. If you don’t see a noticeable increase in your testing, you should definitely reach out.

Results from ActBlue’s April 2013 Express Lane test

Askblock Structure

We have an awesome Express Lane Link Creator tool for you, which you can find by clicking the “Express Lane” tab of your fundraising page. It’s really important that you use the language we provide there so that donors know that they’ll be charged instantly and why that’s possible– if you want to deviate from this, you’ll have to get our approval first. We do think, though, that you should stick with this language since it’s clear and concise.

But how many Express Lane links should you include in the body of your email, and for what amounts? Should the intervals between amounts be equal? The answers will depend on your email list, but here are some suggestions, based on tests we’ve run, that should help get you on your way to optimizing your own Express Lane askblock structures!

One approach we’ve seen used by organizations in different contexts is what we refer to as a jump structure. The basic idea is that you set a large interval between the lowest link amount (which should be a low amount relative to your list’s average donation amount) and second-lowest link amount. Here’s an example we’ve used:

Example jump structure

A relatively low-dollar bottom link (say, $4 instead of the $5 you’d usually use) can encourage a much higher number of donations, both because it’s a lower absolute dollar amount and because it’s low relative to the rest of the structure: the large jump between the lowest and second-lowest amounts makes the first one look small.

We’ve found that, in general, this type of jump structure does indeed lead to a higher number of donations, but less overall money, than the common structures we used as controls. The extra donations simply weren’t enough to outweigh the “cost” of the lower dollar amount. If you’re looking to bring in more low-dollar donations in the hopes of larger-dollar donations in the future, however, this might be a good strategy to try.
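To see why more donations can still mean less money, here’s a toy example (all numbers below are invented for illustration, not our test results) using the identity total raised = number of donations × average amount:

```python
# Toy numbers illustrating the jump-structure tradeoff: more donations
# at a lower average amount can still raise less money overall.
jump = {"donations": 1150, "avg_amount": 9.00}      # more, smaller gifts
control = {"donations": 1000, "avg_amount": 11.00}  # fewer, larger gifts

for name, s in (("jump", jump), ("control", control)):
    print(name, s["donations"] * s["avg_amount"])
# jump 10350.0 vs. control 11000.0: the control wins on total money raised.
```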

We’ve also looked at the effect of changing the lowest dollar amount in your askblock. In July, we tested the following three askblock structures against each other:

Structure "A"

Structure “A”

Structure "B"

Structure “B”

Structure "C"

Structure “C”

Obviously, we were trying to see whether we could increase the total money we raised by increasing the amount of the bottom link2. The risk of this approach is that you might lose a certain number of donations by setting the lowest ask amount to be a little bit higher3.

We found that by number of donations, A > B > C, but by overall money raised, C > B > A. The askblock labeled “C”, in fact, raised 21.1% more money than “A” (“B” raised 12.1% more than “A”), even though “A” brought in 15.3% more donations than “C”!

Results from our July askblock structure test

The “other amount” Link

A great thing about Express Lane is that users’ donations are processed once they click the link in your email body. However, as much as we try to structure our links perfectly, some donors are always going to want to do their own thing, and that’s okay. Enter the “other amount” link.

An “other amount” link doesn’t process the donation right away; it’s simply a normal ActBlue link that takes the user to your contribution page and lets them choose a custom donation amount and/or recurring length. It’s included by default in our Express Lane Link Creator tool.

We at ActBlue focus on recurring donation asks because over the long run– and our goal is to be the best technology both today and years into the future– they bring in more money than one-time donation asks, even taking into account imperfect pledge completion rates. So, we worried at first that adding an “other amount” link might draw too many people toward giving one-time donations instead of more valuable recurring donations. But, we also know that it’s important to give people the option to choose their own donation amount, lest they not donate at all. This is why every ActBlue contribution page allows people to easily choose between a one-time donation and a recurring donation.
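As a rough illustration of why recurring asks can be worth more even with imperfect completion (every number below is a hypothetical placeholder, not an ActBlue figure), a modest pledge can out-earn a larger one-time gift:

```python
# Hypothetical comparison of one-time vs. recurring donation value.
one_time = 25.00          # a single $25 gift
monthly = 10.00           # a $10/month recurring pledge
pledged_months = 6        # future installments pledged
completion_rate = 0.6     # assumed fraction of future installments that clear

recurring_value = monthly * (1 + pledged_months * completion_rate)
print(one_time, recurring_value)  # 25.0 vs. 46.0
```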

So we decided to test two things. First, we wanted to know whether the presence of an “other amount” link in our email body would lead to more/fewer donations. Actually, we were almost positive that getting rid of the “other amount” link would be a big loss, but we wanted to run the test anyway. That way, we could confirm this and make sure no one else has to lose money on the test. The result: don’t try this at home. The version which included the “other amount” link brought in 88.3% more money (90.6% more donations) than the version which did not. We’ll accept your thanks in the form of chocolate or wine. Just kidding! Our lawyers won’t allow that.

Second, we’ve performed several tests (and several variations thereof) of whether an “other amount” link which indicated that users could instead give a one-time donation would lead to more/fewer donations than an “other amount” link that made no mention of one-time donations. This matters to us because, as we mentioned, we focus mostly on recurring donation asks, and wanted to see whether we could retain people who would give a one-time donation, but might not know that it was possible.

Typically, an “other amount” link which mentions one-time contributions leads to a statistically significantly higher number of donations, but less overall money raised. While this setup might draw in some people who otherwise wouldn’t have given, it also pulls some would-be recurring donors into giving one-time donations, which bring in less money. This doesn’t mean that such language is a bad thing, but you should consider your fundraiser’s goals and organizational priorities when choosing your link language. If, for example, your goal is to increase participation rather than raise as much money as possible, then mentioning one-time donations in your “other amount” link might be a good idea during a fundraiser focused on recurring donations.

No mention of one-time donations

With mention of one-time donations

Style

Stylistic elements of an email can often have a huge impact on your ask, and since Express Lane links are new, their presentation hasn’t yet been set in stone. We started out sending emails with our Express Lane askblock as a plain HTML <blockquote> element. We wanted the askblock to stand out and be easily identified, though, so we devised a simple design to make it pop: we put the askblock in a gray box and center-aligned the text4. It looked like this:

Subtle Express Lane askblock styling

We tested the gray box against our original unstyled version across several different link structures, and the results were pretty interesting. Among link structures with 4 or 5 links (including “other amount”), the gray box boosted the amount of money raised by up to 37.7%.

The obvious concern is that some stylistic elements are subject to novelty effects, and the initial boost in action rate will decline or disappear altogether over time. We think the gray box may be an exception, though. First, the gray box is pretty subtle, almost to the point of being dull, so I doubt it caused the fervor of a “Hey” subject line or manic yellow highlighting. Second, the box serves a legitimate function: it identifies this new set of links as a single entity that stands out from the rest of the email content.

Where to go from here

You’ve seen how some slight changes– the link amounts, the intervals between them, the number of links, etc.– can seriously affect the performance of your Express Lane email ask. Hopefully you’ve picked up some tips about how to structure your asks, along with a few ideas for tests that might prove fruitful for your own organization.

As progressive organizers, we all know how important participation and collaboration are. In this light, I encourage you to get in touch with us if you’d like to work together on running a test. Moreover, if you run a test with interesting results, we would love to hear from you so that we can share them with the larger ActBlue community.

Footnotes:

1N.B.: some of this money came from people giving recurring donations from the “other amount” link in our one-time ask.

2There could be an additional effect from having one fewer link in “C”, but our other testing indicates that this isn’t a particularly important factor.

3Think of it as a variation of the classic revenue-maximization problem, where Revenue = Quantity × Donation Amount. Of course, donors can still choose their own amount by clicking the “other amount” link, but the suggested amounts do indeed influence behavior.

4style="background-color:#ECEDF0; padding:1.0em 1.5em; text-align:center;"

If you had hundreds of millions of lines of contribution data, what would you want to know? Well, here at ActBlue, we have an insane amount of data, and we’re always looking to learn more about our donors and how they use our site.

So we recently posed the question:

Who donates more…men or women?

The answer turns out to be women, but only if you approach things from the right perspective.

Before I go on, I’d like to say that I by no means want to perpetuate the gender binary; everyone at ActBlue respects and values people all across the gender spectrum.

We all know some of the basic election gender data – more women went for Obama, more men for Romney. But, political contributions involve personal investment, so I wanted to see how it breaks down on our site, which is obviously exclusive to Democrats. There was just one hiccup in my data-nerd fantasy: we don’t collect any information on our donors’ gender identification.

The easiest way to get around this problem is to use approximate name-gender matching. While many databases available for this purpose are costly, unreliable, or both, I did eventually find a source I felt comfortable using (an academic paper, available for free, in which the authors explained their methodology). So after digging into our database and crunching the numbers, I came out with some answers. I’ll give an overview of my results first, then explain my methodology and some statistical issues in more detail further down.

I found that for individual contributions, women give about 15.0% smaller dollar amounts than men do. I also found, however, that women are 12.4% more likely to make a recurring contribution than men are. (All of these values are statistically significant; if you’re interested, there’s more on that below.)

So the obvious question was: what happens once you factor in future installments of a recurring contribution, and not just the initial dollar amount? I crunched the numbers again, but it turned out not to change anything– women’s donations were still about 16.6% smaller in dollar terms than men’s. This was a big surprise, so I started racking my brain for possible explanations.

You’ve probably already figured it out, but I made quite an oversight in my initial assumptions. It’s well documented that the gender wage gap persists; 77 cents is a popular estimate for how much a woman earns for doing the same work a man is paid one dollar to do. This is incredibly unjust, but it is also directly relevant to my project– women are unfairly earning less income than men, so it makes sense that they’d have less disposable income from which to make political contributions, all else equal.

So I did what every progressive has always dreamed of. I punched a few computer keys and voilà– the gender wage gap disappeared! After this adjustment for equality, women turned out to make about 12.9% higher dollar contributions than men, and when factoring in the entirety of recurring donations, they donated 11.4% more than men. Quite the change from my initial findings, indeed. (This kind of broad and general adjustment is bound to be approximate, but in my opinion it was actually a fairly conservative change. But, see below for some discussion of that.)

Given ActBlue’s focus on grassroots donors, I wondered what would happen if I trimmed my dataset to include only donations that were $100 or less. Well, I did that and was left with about 95% of my original sample, which really does demonstrate the extent to which ActBlue is all about small-dollar donations. After trimming the dataset (and continuing to use adjusted donation amounts), I found that women were donating higher dollar amounts than men to an even greater extent than before, at 21.1%!

As many of you know, ActBlue Express Accounts allow donors to securely store their payment information with us and donate with just one click. I found that women and men in my sample donated using an ActBlue Express Account at a remarkably similar rate– within 1 percentage point. This just goes to show how egalitarian ActBlue Express Accounts are!

Now there are several important takeaways here. It looks like on ActBlue, for example, women tend to donate higher dollar amounts than men (after adjusting for the gender wage gap), and also tend to give recurring contributions more often than men. But for me, the biggest lesson was to stay vigilant about the outside factors that might be shaping what you see in your data.

Before I move on to some nitty-gritty technical comments, I want to say that I really did mean the question that opened this blog post. So, readers, what would you want to know if you had that much data? I really enjoyed sharing these results with you, so please shoot me a note at martin [at] actblue [dot] com to let me know what you’d like our team to dig into for the next post!

My discussion below is a bit more technical and intended for other practitioners or very curious general readers.

As I mentioned above, name-to-gender matching is difficult for several reasons. In “A Name-Centric Approach to Gender Inference in Online Social Networks”, C. Tang et al. combed Facebook pages of users in New York City and, after using some interesting techniques, came up with a list of about 23k names, each associated with the number of times a user with that name identified as male or female. I definitely recommend reading through their study– you might not think it’s perfect, but it could provide some inspiration for the aspiring data miners among you. In any case, I then did some further pruning of their list for suitability reasons, the effects of which were minimal. I combined their name-gender list with an n=500k random sample of contributions made on ActBlue since 2010, matching only names that appear on both lists, for obvious reasons.
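If you’re curious what that matching step might look like in practice, here’s a rough sketch in Python/pandas. The file names and column names are illustrative stand-ins, not our actual schema:

```python
# Sketch of the name-to-gender matching step. File and column names
# are illustrative placeholders, not ActBlue's actual schema.
import pandas as pd

names = pd.read_csv("name_gender_counts.csv")        # name, n_male, n_female
donations = pd.read_csv("contribution_sample.csv")   # first_name, amount, ...

# Assign each name the gender it was most often associated with.
names["gender"] = (names["n_female"] > names["n_male"]).map(
    {True: "female", False: "male"})

# Keep only contributions whose first name appears on the name list.
matched = donations.merge(names[["name", "gender"]],
                          left_on="first_name", right_on="name", how="inner")
print(matched["gender"].value_counts(normalize=True))
```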

At that point, I had a dataset that included, on a contribution-basis, the donor’s name, estimated gender (the authors of the study pegged their matching accuracy at about 95%), and some other information about the contribution. Of the 500k sample, the matching spat out about 50.4% females.

When I say “other information”, I’m specifically referring to factors that I know from past analyses directly affect contribution amount (for instance, whether the donor is an ActBlue Express User or not). I took this extra information since I knew I’d need to control for these factors when evaluating the effect of gender on donation amount. This is a good reminder of why it’s super important to know your data really well by staying current with trends and performing frequent tests– otherwise you might end up omitting important explanatory variables, choosing a misspecified model, or making other common mistakes.

With my dataset ready, I tried a few different types of models, but landed on one in which the dependent variable (contribution amount) was in logarithmic form, so it looked like:

ln(contribution_amount) = β0 + β1female + some other stuff + u

This model was best for a few different, yet boring (even for practitioners) reasons, so I’ll spare you the discussion :)
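For practitioners who want to see the shape of it anyway, here’s a minimal version of such a model with statsmodels, continuing the matching sketch above. The control variables are stand-ins for the unspecified “other stuff”, not our actual regressors:

```python
# Minimal sketch of a log-amount regression; `matched` comes from the
# matching sketch above, and the controls are illustrative assumptions.
import numpy as np
import statsmodels.formula.api as smf

matched["female"] = (matched["gender"] == "female").astype(int)
matched["log_amount"] = np.log(matched["amount"])

model = smf.ols("log_amount ~ female + is_express + is_recurring",
                data=matched).fit()
print(model.summary())
# With a log dependent variable, the coefficient on `female` is roughly
# the proportional difference in contribution amount.
```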

As I noted in my general discussion, all of the results I found were “statistically significant”, but there was an issue I wanted to address. In my case, yes, beta coefficients were significant at p<.0001, as was the overall significance of the regression and joint significance of groups of regressors I thought it important to test. But with n=500k, I think saying certain things were “statistically significant” can be a bit insincere or misleading if not explained properly, unless you’re talking to someone fairly comfortable with statistics. What I mean is pretty obvious if you just think about how a t statistic is actually computed, why it’s done that way, and what that means.
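To spell that reasoning out: a t statistic is just the coefficient estimate divided by its standard error, t = β̂1/se(β̂1), and the standard error shrinks roughly in proportion to 1/√n. At n=500k, then, even a trivially small β̂1 can produce an enormous t statistic and a p-value near zero.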

At huge sample sizes, very small differences can be “significant” at very high confidence levels, which can lead to misinterpreting your results. Moreover, just because something is statistically significant doesn’t mean that it is practically significant. There are a few ways to deal with this, though none of them is perfect. In my case, I saw that the 95% CIs of the regressor coefficients were really tight, and would certainly consider 10%-14% differences practically significant (don’t get me wrong– of course there are times when small differences like 0.3% can be practically significant, but this isn’t one of them). I’m not bashing large sample sizes or saying that hypothesis testing is unimportant (it’s very important!), but rather emphasizing caution and clarity in our reporting.

Further, there’s another important lesson here. Sometimes, no matter how cleverly we choose our models or how carefully we conduct our analysis, the explanatory power of a regression is going to be limited because we simply don’t have enough data. I don’t mean depth of data (i.e., sample size), but rather breadth of data (i.e., categories of information). For instance, personal income is clearly going to be an important factor in determining the dollar amount of a given political contribution. We don’t, however, have that kind of information about donors. Does that mean I should have just thrown away the regression and called it a day? Of course not: partial effects can still be estimated fairly precisely with very large sample sizes, even with relatively large error variance. Again, the lesson is to be judicious in your interpretation and reporting of results.

I also noted that I thought my gender wage gap adjustment was fairly conservative. What I did was simple: for all contributions in the dataset made by females, I calculated an “adjusted” contribution amount by dividing the actual contribution amount by 0.77. This implicitly assumes that if women were paid equally for equal work, they would contribute more overall dollars, but at their current ratio of donations to income. In other words, their marginal propensity to donate would be constant as income increases. In fact, I think this is probably false in reality– women (and men, for that matter) would instead demonstrate an increasing marginal propensity to donate as income increases, so I arguably should have increased the contribution amounts by even more than I did. I haven’t, however, read any study that provides a reliable estimate of marginal propensity to donate, and therefore decided it best to keep things simple.
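In code, the adjustment is a one-liner (continuing the earlier sketches):

```python
# Simulate equal pay by scaling female contributions up by 1/0.77,
# leaving other contributions unchanged. This bakes in the constant
# marginal propensity to donate assumption discussed above.
matched["adj_amount"] = matched["amount"].where(
    matched["gender"] != "female", matched["amount"] / 0.77)
```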

I already asked you to reach out and tell me what you’re interested in knowing, but I’ll double down here: I would love to hear from you so that the next blog post reflects our community’s interests! So shoot me an email at martin [at] actblue [dot] com.
