Here at ActBlue, we’re always optimizing our contribution form by testing different variations against each other to see which performs best. And, whenever possible, we like to share our results. Needless to say, it’s great to discuss tests that end up winning; every percentage point increase in conversion rate we bring to our contribution form benefits every one of the more than 11,000 active committees that currently fundraise on ActBlue.
An equally important part of this process, however, is the tests that fail to improve our contribution form. Failing to openly discuss and reflect on losing tests belies the experimental nature of optimization. So I’m here to talk about an A/B test we just ran on our contribution form that lost. (Bonus: it lost twice!)
We tried coalescing our “First name” and “Last name” fields into one “Full name” input. The theory was that one fewer input would reduce friction along the contribution path, thereby increasing conversions. Here’s what it looked like:
The control version, it turns out, was actually associated with a higher conversion rate than the “Full name” variation, though the difference was not statistically significant.1 We even tested another slight variation of the “Full name” field with slightly different placeholder text and a more expressive label, but it lost again.
If you’re wondering why it lost, that makes two of us; in a case like this, it’s tough to say what actually happened. Was it aesthetics? An anti-novelty effect? If we speculate like this ad infinitum, we’ll end up with more questions than answers; the world is full of uncertainty, after all. But far from discouraging this type of reflection, I’m saying we should embrace it: this is the origin story of many new testing ideas.
1. Pr(>|t|) > .05, n = 63159
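For readers curious what a check like the footnoted one involves, here’s a minimal sketch of a two-proportion z-test on conversion rates. This is not ActBlue’s actual analysis pipeline, and the visitor and conversion counts below are made up for illustration; only the overall sample size (n = 63159) echoes the footnote.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: control vs. "Full name" variation, ~63k visitors total
z, p = two_proportion_ztest(conv_a=1650, n_a=31600, conv_b=1601, n_b=31559)
print(z, p)  # here p > .05, so the difference is not significant
```

With a split this close, the test correctly refuses to call a winner; that is exactly the situation the footnote describes.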