A gender contribution gap?

If you had hundreds of millions of lines of contribution data, what would you want to know? Well, here at ActBlue, we have an insane amount of data, and we’re always looking to learn more about our donors and how they use our site.

So we recently posed the question:

Who donates more…men or women?

The answer turns out to be women, but only if you approach things from the right perspective.

Before I go on, I’d like to say that I by no means want to perpetuate the gender binary; everyone at ActBlue respects and values people all across the gender spectrum.

We all know some of the basic election gender data – more women went for Obama, more men for Romney. But political contributions involve personal investment, so I wanted to see how things break down on our site, which is obviously exclusive to Democrats. There was just one hiccup in my data-nerd fantasy: we don’t collect any information on our donors’ gender identification.

The easiest way to get around this problem is to use approximate name-gender matching. While many databases available for this purpose are costly, unreliable, or both, I did eventually find a source that I felt comfortable using (an academic paper available for free in which the authors explained their methodology). So after digging into our database and crunching the numbers, I came out with some answers. I’ll give an overview of my results first and then explain my methodology and some statistical issues I want to highlight in a bit more detail further down.

I found that for individual contributions, women give about 15.0% smaller dollar amounts than men do. I also found, however, that women are 12.4% more likely to make a recurring contribution than men are. (All of these values are statistically significant; if you’re interested, read more on that below.)

So the obvious question was: what happens once you factor in future installments of a recurring contribution, and not just the initial dollar amount? I crunched the numbers again, but it turned out not to change anything — women’s donations were still about 16.6% smaller in dollar terms than men’s. This was a big surprise, so I started racking my brain for possible explanations.

You’ve probably already figured it out, but I made quite an oversight in my initial assumptions. It’s well documented that the gender wage gap still persists; 77 cents is a popular estimate for how much a woman earns for doing the same amount of work a man is paid one dollar to do. This is incredibly unjust, but it is also directly relevant to my project — women are unfairly earning less income than men, so it makes sense that they’d have less disposable income from which they are willing and able to make political contributions, all else equal.

So I did what every progressive has always dreamed of. I punched a few computer keys and voilà– the gender wage gap disappeared! After this adjustment for equality, women turned out to make about 12.9% higher dollar contributions than men, and when factoring in the entirety of recurring donations, they donated 11.4% more than men. Quite the change from my initial findings, indeed. (This kind of broad and general adjustment is bound to be approximate, but in my opinion it was actually a fairly conservative change. But, see below for some discussion of that.)

Given ActBlue’s focus on grassroots donors, I wondered what would happen if I trimmed my dataset to include only donations that were $100 or less. Well, I did that and was left with about 95% of my original sample, which really does demonstrate the extent to which ActBlue is all about small-dollar donations. After trimming the dataset (and continuing to use adjusted donation amounts), I found that women were donating higher dollar amounts than men to an even greater extent than before, at 21.1%!

As many of you know, ActBlue Express Accounts allow donors to securely store their payment information with us and donate with just one click. I found that women and men in my sample donated using an ActBlue Express Account at a remarkably similar rate– within 1 percentage point. This just goes to show how egalitarian ActBlue Express Accounts are!

Now there are several important takeaways here. It looks like on ActBlue, for example, women tend to donate higher dollar amounts than men (after adjusting for the gender wage gap), and also tend to give recurring contributions more often than men. But for me, the biggest lesson was to be vigilant about understanding what outside factors might be affecting the internal nature of your data.

Before I move on to some nitty-gritty technical comments, I want to say that I really did mean the question that opened this blog post. So, readers, what would you want to know if you had that much data? I really enjoyed sharing these results with you, so please shoot me a note at martin [at] actblue [dot] com to let me know what you’d like our team to dig into for the next post!

My discussion below is a bit more technical and intended for other practitioners or very curious general readers.

As I mentioned above, name-to-gender matching is difficult for several reasons. In “A Name-Centric Approach to Gender Inference in Online Social Networks”, C. Tang et al. combed Facebook pages of users in New York City and, after using some interesting techniques, came up with a list of about 23k names, each associated with counts of how many users with that name identified as male and as female. I definitely recommend reading through their study– you might not think it’s perfect, but it could provide some inspiration for the aspiring data miners among you. In any case, I then did some further pruning of their list for suitability reasons, the effects of which were minimal. I combined their name-gender list with an n=500k random sample of contributions made on ActBlue since 2010, matching only names that appear on both lists, for obvious reasons.
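To make that matching step concrete, here’s a rough sketch of what the join might look like in pandas. To be clear, the file names, column names, and the 0.95 ambiguity cutoff are all illustrative assumptions on my part, not the paper’s exact format or our actual pipeline:

```python
# A sketch of approximate name-gender matching; file names, column
# names, and the 0.95/0.05 cutoff are hypothetical illustrations.
import pandas as pd

# One row per first name, with counts of users identifying as each gender
names = pd.read_csv("name_gender_counts.csv")  # columns: name, n_male, n_female
names["p_female"] = names["n_female"] / (names["n_male"] + names["n_female"])

# Keep only names strongly associated with one gender
names = names[(names["p_female"] >= 0.95) | (names["p_female"] <= 0.05)]
names["est_gender"] = names["p_female"].ge(0.95).map({True: "F", False: "M"})

# The n=500k contribution sample, with a first_name column
contribs = pd.read_csv("contribution_sample.csv")

# Normalize case and inner-join: only names on both lists survive
names["name"] = names["name"].str.lower()
contribs["first_name"] = contribs["first_name"].str.strip().str.lower()
matched = contribs.merge(names[["name", "est_gender"]],
                         left_on="first_name", right_on="name")
```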

At that point, I had a dataset that included, on a contribution-basis, the donor’s name, estimated gender (the authors of the study pegged their matching accuracy at about 95%), and some other information about the contribution. Of the 500k sample, the matching spat out about 50.4% females.
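If you want the arithmetic for that share, it’s just a mean over the matched frame — a tiny continuation of the sketch above, with the same hypothetical column names:

```python
# Share of matched contributions estimated as female (continuing the
# matching sketch above; est_gender is the hypothetical column name)
female_share = (matched["est_gender"] == "F").mean()
print(f"Estimated female share: {female_share:.1%}")
```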

When I say “other information”, I’m specifically referring to factors that I know from past analyses directly affect contribution amount (for instance, whether the donor is an ActBlue Express User or not). I took this extra information since I knew I’d need to control for these factors when evaluating the effect of gender on donation amount. This is a good reminder of why it’s super important to know your data really well by staying current with trends and performing frequent tests– otherwise you might end up omitting important explanatory variables, choosing a misspecified model, or making other common mistakes.

With my dataset ready, I tried a few different types of models, but landed on one in which the dependent variable (contribution amount) was in logarithmic form, so it looked like:

ln(contribution_amount) = β₀ + β₁·female + some other stuff + u

This model was best for a few different, yet boring (even for practitioners) reasons, so I’ll spare you the discussion 🙂
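If you’re curious what fitting that looks like in practice, here’s a sketch using statsmodels, continuing with the matched dataset from the matching sketch above. The controls (express_user, recurring) are placeholder stand-ins for the real regressor list, not the actual specification:

```python
# A sketch of the log-linear fit; express_user and recurring are
# placeholder controls, not the actual regressors used.
import numpy as np
import statsmodels.formula.api as smf

matched["log_amount"] = np.log(matched["amount"])
matched["female"] = (matched["est_gender"] == "F").astype(int)

fit = smf.ols("log_amount ~ female + express_user + recurring", data=matched).fit()
print(fit.summary())

# With a logged dependent variable, the percentage effect of a dummy
# variable is exp(beta) - 1, not beta itself
print(f"gender effect: {100 * (np.exp(fit.params['female']) - 1):.1f}%")
```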

As I noted in my general discussion, all of the results I found were “statistically significant”, but there was an issue I wanted to address. In my case, yes, the beta coefficients were significant at p<.0001, as were the overall significance of the regression and the joint significance of the groups of regressors I thought it important to test. But with n=500k, I think saying certain things were “statistically significant” can be a bit insincere or misleading if not explained properly, unless you’re talking to someone fairly comfortable with statistics. The reason becomes clear once you think about how a t statistic is actually computed: its denominator is a standard error that shrinks roughly in proportion to 1/√n, so with half a million observations even tiny differences produce enormous t values.

At huge sample sizes, very small differences can be “significant” at very high confidence levels, which can lead you to misinterpret your results. Moreover, just because something is statistically significant doesn’t mean that it is practically significant. There are a few different ways to deal with this, though none of them are perfect. In my case, I saw that the 95% CIs of the regressor coefficients were really tight, and I would certainly consider 10%-14% differences practically significant (don’t get me wrong—of course there are times when small differences like 0.3% can be practically significant, but this isn’t one of them). I’m not bashing large sample sizes here or saying that hypothesis testing is unimportant (it isn’t!), but rather emphasizing caution and clarity in our reporting.
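You can see the large-n effect for yourself with a toy simulation (synthetic data, not ours): a negligible 0.3% true difference between two groups comes out wildly “significant” simply because the sample is huge.

```python
# Toy simulation: a 0.3% true difference in means is highly
# "significant" at n=500k, despite being practically negligible.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 250_000  # per group, 500k total
a = rng.normal(loc=50.00, scale=10.0, size=n)
b = rng.normal(loc=50.15, scale=10.0, size=n)  # means differ by 0.3%

t, p = stats.ttest_ind(a, b)
print(f"t = {t:.2f}, p = {p:.2g}")  # tiny p-value, trivial effect size
```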

Further, there’s another important lesson here. Sometimes, no matter how cleverly we choose our models or how carefully we conduct our analysis, the explanatory power of a regression is going to be limited because you simply don’t have enough data. I don’t mean depth of data (i.e. sample size), but rather breadth of data (i.e. categories of information). For instance, personal income is clearly going to be an important factor in determining the dollar amount of a given political contribution. We don’t, however, have that kind of information about donors. Does that mean I should have just thrown away the regression and called it a day? Of course not, because partial effects can still be estimated fairly precisely with very large sample sizes, even with relatively large error variance. Again, the lesson is to be judicious in your interpretation and reporting of results.

I also noted that I thought my gender wage gap adjustment was fairly conservative. What I did was simple: for all contributions in the dataset made by females, I calculated an “adjusted” contribution amount by dividing the actual contribution amount by 0.77. This implicitly assumes that if women were paid equally for equal work, they would contribute more overall dollars, but at their current ratio of donations to income. In other words, their marginal propensity to donate would stay constant as income increases. In reality, I think this is probably false; women (and men, for that matter) would instead demonstrate an increasing marginal propensity to donate as income rises, which means I should have increased the contribution amounts by even more than I did. I haven’t, however, read any study that provides a reliable estimate of a marginal propensity to donate, so I decided it best to keep things simple.
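In code, the adjustment really is a one-liner. Here’s a self-contained toy version (the data and column names are purely illustrative):

```python
# The wage gap adjustment: divide female contribution amounts by 0.77,
# leaving male amounts unchanged. Toy data; column names illustrative.
import numpy as np
import pandas as pd

df = pd.DataFrame({"amount": [25.0, 10.0, 100.0],
                   "female": [1, 0, 1]})
df["adj_amount"] = np.where(df["female"] == 1, df["amount"] / 0.77, df["amount"])
print(df)
```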

I already asked you to reach out and tell me what you’re interested in knowing, but I’ll double down here: I would love to hear from you and get your input so that the next blog post reflects our community members’ interests! So shoot me an email at martin [at] actblue [dot] com.
