We’re less than 8 weeks out from Election Day and are now making the weekly recurring feature available to campaigns and organizations. Just drop us a line at info [AT] actblue [DOT] com and we’ll turn it on for you.

Yep, weekly recurring is exactly what it sounds like. You can ask your donors to sign up to make a recurring contribution that processes on that same day of the week every week until Election Day. After Election Day, the recurring contribution automatically ends.

So, if you get someone to sign up today for a weekly recurring contribution, they’d then have 7 more contributions scheduled to process every Friday.

Election Day is getting closer and closer though, so if you’re going to use weekly recurring, we suggest getting started soon.

Once we turn on the feature for you, create a new contribution form and open the “Show recurring options” section in the edit tab. You will see a new option there for weekly recurring. Make sure you also turn off popup recurring if you have it enabled — these two features aren’t compatible (yet!).

It looks like this:

We’ve run a few tests on weekly recurring this week with our own email list and have had a good deal of success. As always, a donor needs to know exactly what amount and for how long they’ll be charged before they click a link. If you’re going to use weekly recurring with Express Lane (and you should!), here is the disclaimer language we used and recommend you use as well:

Based on our testing, certain segments of your list will respond better than others to a weekly recurring ask (not exactly a shocking revelation). We sort our list into those likely to give to a recurring ask and those who are more likely to give a one-time gift. For the recurring pool, the weekly ask has been performing strongly. Unsurprisingly, the same can’t be said for our one-time folks.

Test it out with the portion of your list that is more likely to give recurring gifts. And try fun things like offering a small package of swag like bumper stickers in return for signing up for a weekly recurring gift.

And if you find an angle that’s working really well for weekly recurring, let us know!

Labor Day has historically marked the start of campaign season. Somebody forgot to tell this cycle’s campaigns: we’ve already passed the $200M mark!

In August 2014 we handled $22,982,206 from 690,488 contributions, placing it among our top three months, alongside October 2012 and July 2014, for most contributions in a single month. Both this month’s incredible number of contributions and the low average donation size (just $32.88) demonstrate how hard campaigns and organizations are working to mobilize a grassroots movement.

                August ’11    August ’12    August ’13    August ’14
Contributions       78,172       309,877       155,524       690,488
Volume ($)      $3,051,815   $12,785,110    $5,674,068   $22,982,206
Mean Donation       $39.04        $41.26        $36.48        $32.88
Committees             916         1,981         1,305         2,251

August 2014’s volume was more than $10M larger than August 2012’s, while the number of contributions increased by 122%. That drove the average contribution size down by 20%. And that means more small-dollar donors are supporting the causes and candidates they care about.

We’ve seen a dramatic increase in the percentage of recurring contributions compared to previous cycles. 15.4% of August 2014’s total volume of money — a sum of $3,523,237 — came from sustaining donations. The chart below shows the growth of recurring volume in the 2010, 2012, and 2014 election cycles:

So far this election cycle, we’ve handled $17.8M in recurring contributions. That’s an increase of 170% from the same point in the 2012 cycle and more than ten times where we were at the end of August 2010.

Building a base of recurring donors can have a huge impact on the sustainability of a campaign or organization. A predictable, steady stream of money helps them better manage their finances. And for organizations, which will continue their important work long after November, a pool of recurring donations can help offset post-election donor fatigue. Recurring donations help donors stay engaged by letting them make regular, small-dollar investments in the causes they care about.

To cap off another massive month, we passed the 10 million contributions mark on August 30th! We were too preoccupied with a crazy amount of donations over the holiday weekend to properly celebrate. Over the last 3 days of the month, we handled 22.7% of August’s total volume of money. For those wondering, we did manage to take a screenshot of our internal metrics page and share it with the team.

It wouldn’t be an ActBlue monthly recap if we didn’t point out some impressive Express and mobile stats. Over 54k Express users signed up this past month, joining our million-plus-strong community of supporters who can donate in an instant. Express users made 63.7% of all contributions in August 2014, which totaled more than $12M. That’s more than half of this month’s total volume, and a testament to how many campaigns are working with the Express tools.

Express users have saved their payment information with us, which increases mobile conversions. They give with mobile devices at a higher rate than non-Express users: 29.6% to 23.9%. Mobile donations continue to increase sitewide. This month saw 26.2% of all contributions made with a mobile device.

Campaigns and organizations are fundraising at a blistering pace. In the past two months, we’ve handled more than 1.3M contributions. And we’re prepared to handle the massive load of small-dollar contributions that are predicted to come between now and November. It’s going to be a crazy ride!

This post is the fourth in our blog series on testing for digital organizers. Today I’ll be talking about implementing your A/B test. This post will be full of helpful, quick tips.

So, we’ve discussed some things you might want to test, and some other things you might not want to test. Then, we walked through a simple way to figure out the number of people you’ll need in each of your test groups, a number that depends on the smallest difference you’d like to reliably detect.1 Now what?

Well, the short answer is “run the test”, but of course it’s never that simple. Your next specific steps depend on what you’re testing, as well as which platform you’re using to run the test. There are too many possibilities for me to go through each one, but I can provide a few quick tips that should apply to you regardless of your specific situation.

First, make sure you have a reliable method of tracking your variations’ performance (like reference codes or an A/B testing tool; here are instructions for using ours), and make sure you actually implement that method. This may sound like a no-brainer, but we’ve seen plenty of people start what would otherwise be an excellently set-up test with nothing in place to measure the variations’ relative performance! Is there a joke here about the “results” of the test?

Groaners aside, we point that error out not to make fun of the people who have committed it. Rather, we’re all busy, and things can get hectic. Having this on your pre-send checklist2 will save you from the realization that all the time spent thinking up a test, creating the content, and so on ad nauseam was for naught.

What’s an example? Well, say you’re testing email content for donations. And of course, you want to use the best online fundraising software in the whole wide world, so you’re using ActBlue. Well, we have a handy feature that allows you to generate reference codes to track donations. We have a full instruction guide for using reference codes in our tutorial, found here. If you’re testing two different versions of your email, you could attach the URL param3 refcode=variation_a to the links in your first email and refcode=variation_b to those in your second email. Then, when you go to https://actblue.com/pages/[YOUR_PAGE_NAME]/statistics, you can measure the performance of each email. The information will also appear in a .csv download of your contribution form donations.

We also allow a handy refcode2 URL param if you want to conveniently subdivide your tracking. Conceptually, it’s the exact same thing as refcode; its value lies in the fact that it’s an extra place to store information. Think of a backpack with an extra divider on the inside for sorting your stuff. This is the internet version of that. For example, we use this for tracking link placement in the email. The need for refcode2, however, indicates that your test might be a bit complicated (i.e., there are more than just two variations, so setup and evaluation of the test is a bit outside the scope of the tips in this testing series). That’s no problem, but you might want to shoot us an email at digital [at] actblue [dot] com to have a chat about test setup and design.
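
If you’re tagging a batch of links at once, a little script can save you from copy-and-paste mistakes. Here’s a minimal sketch in R (the same tool we lean on later in this series); the form URL, refcode values, and placements below are placeholders, not anything special on our end:

# A minimal sketch of generating refcode-tagged links. The base URL and the
# refcode/refcode2 values are placeholders; use your own form's actual URL.
base_url <- "https://actblue.com/page/example-form"

variations <- c("variation_a", "variation_b")  # tracked via refcode
placements <- c("top", "bottom")               # tracked via refcode2

links <- expand.grid(refcode = variations, refcode2 = placements,
                     stringsAsFactors = FALSE)
links$url <- sprintf("%s?refcode=%s&refcode2=%s",
                     base_url, links$refcode, links$refcode2)

print(links$url)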

My second tip is related to groups. Taking your list—or some subset of your list—and dividing it up into smaller, randomized groups is a step that you’ll likely do in your CRM or email tool. Unfortunately, I can’t provide detailed instructions for each one. Chances are, though, that your CRM has an instruction page on how to do this within their software.4 In any case, this step is critical: without at least randomizing before conducting your trial, you’re setting yourself up for failure.

Here’s an example of how to do it wrong: let’s say you’re testing two emails, and even though you’re not sure which one is better, you have a hunch that email B is better than email A. So, not wanting to lose out on money, you decide to assign 20,000 people with the highest previous donations to group B and 20,000 people with the lowest previous donations to group A. That way, you can conduct the test to find out which email is definitely better, but not have to lose too much money along the way, right? Well, that’d be great, but unfortunately it’s all wrong. Assigning your groups that way would all but ensure you draw false conclusions from your test: email B is all but certain to bring in more donations, but that’s because it was assigned high-propensity donors, not necessarily because it’s the better email. Make sure you’re at least randomizing (with a proper algorithm, q.v. footnote 4) before splitting your groups and implementing your test.
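
If your CRM doesn’t handle this for you and you can export your list, here’s a minimal sketch of a proper random split in R. The addresses are made up for illustration; in practice you’d load your actual exported list:

set.seed(42)  # makes the split reproducible

emails <- sprintf("supporter%05d@example.com", 1:40000)  # stand-in for your exported list

shuffled <- sample(emails)        # shuffle the whole list at random
group_a  <- shuffled[1:20000]
group_b  <- shuffled[20001:40000]

length(group_a); length(group_b)  # sanity check: 20,000 people in each group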

My third tip is short and sweet. After you do all of this legwork, how do you know that the right variations were sent to the right number of people? What if you’re working with eight groups instead of just two? Well, the answer is that you don’t really. But, that can (and should!) be remedied. Place your own email address in each of the test groups. This won’t significantly affect the results of the test, but it will allow you to be sure that the right variations were sent. “But, I only have one email address, how can I put myself in multiple test groups without the hassle of creating new emails?”, you ask. Use the old email-campaigner’s trick of adding a “+” to your email address if you have a Gmail-based address. For example, if your email address is janesmith@actblue.com, you can add janesmith+test_email_a@actblue.com to group A and janesmith+test_email_b@actblue.com to group B; they’ll both be delivered to your inbox, and you’ll be able to perfectly spot whether the variations were sent correctly.
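
If you’re juggling more than a couple of groups, you can even generate those seed addresses programmatically. Here’s a quick, hypothetical sketch in R; the address and group labels are just examples:

local  <- "janesmith"    # the part of your address before the @
domain <- "actblue.com"  # the part after the @
groups <- c("test_email_a", "test_email_b")

seeds <- sprintf("%s+%s@%s", local, groups, domain)
print(seeds)  # "janesmith+test_email_a@actblue.com" "janesmith+test_email_b@actblue.com"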

My fourth and last tip of the day is the most important one of all. Remember going through the process of determining your required sample size? Well, we did that for a (lengthily explained) reason. Don’t deviate from that now. What the hell am I talking about? I’m talking about peeking at the results too early (viz., before you reach your necessary sample size).

I get it. You spent a lot of time setting up a test for these awesome variations of, say, a contribution form, and even though you know you need to wait until 15,000 people land on the form to see results, you want to check what’s happening. Has either variation taken an early lead? Etc., etc., etc.

You can check what’s happening along the way, but you should definitely not stop the test early because it looks like one variation is performing better.5 This is a really common mistake, but a deadly one. I can’t stress this enough. The more times you test two variations for significance (which we’ll talk about in a future post) before the required sample size is hit, the more likely you are to detect a false positive. In fact, you can pretty quickly render your test effectively useless. So, if you just have to see what’s going on, fine, but promise yourself and statisticians everywhere that you won’t act on what you see!
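
If you want a feel for how bad peeking can get, here’s a rough simulation sketch in R. It runs an “A/A” test, where both groups are identical by construction (a 10% action rate in each), and checks for significance after every additional chunk of people. The rates, group sizes, and number of peeks are assumptions for illustration only:

set.seed(7)

peek_sim <- function(n_per_group = 10000, rate = 0.10, checks = 10) {
  a <- rbinom(n_per_group, 1, rate)   # group A outcomes (1 = acted, 0 = didn't)
  b <- rbinom(n_per_group, 1, rate)   # group B outcomes, same true rate
  checkpoints <- seq(n_per_group / checks, n_per_group, length.out = checks)
  # Did any interim look produce p < 0.05, even though there's no real difference?
  any(sapply(checkpoints, function(n) {
    prop.test(c(sum(a[1:n]), sum(b[1:n])), c(n, n))$p.value < 0.05
  }))
}

mean(replicate(1000, peek_sim()))             # peeking 10 times: flagged "significant" well above 5% of the time
mean(replicate(1000, peek_sim(checks = 1)))   # checking once at full sample size: close to 5%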

Ok, that’s it for today! Next we’ll talk about evaluating your results and even more importantly, learning from them!

FOOTNOTES:
1 as well as your tolerance for the probability of getting a false positive or a false negative, though using standard values can take some of the difficulty out of this decision

2 which, if you don’t have a pre-send checklist (we prefer old-fashioned paper, big check boxes, and sharpies!), you should make one ASAP

3 A way of passing information to a website through its URL, which the site can use to customize what’s displayed or what data gets recorded.

4 Now, this is generally the most basic possible insurance for proper group setup, as most tools will do nothing more than randomize and divide. Anything more complex than a simple A/B test calls for additional steps, which are usually best done with a statistical tool such as R. If you think something more complicated is in line for your program, don’t hesitate to shoot us an email (digital [at] actblue [dot] com); we’d love to work with you to figure out whether that’s the case, and if so, we’d be glad to help.

5 Saying “definitely” in a conversation about statistics is, delightfully ironic as that may be, a bit misleading. This is actually a really complicated topic with plenty of proffered solutions, which range from minor adjustments in your calculations to an entirely different philosophical approach to statistics (I mean, who knew, right?). Those are all great discussions to have, but for now, it’s probably best to just assume you shouldn’t repeatedly evaluate your test variations before you hit your required sample size. Ok? Cool.

Democrats are fired up.

This month we handled a total volume of $19,812,843 from 694,625 contributions, second only to October ‘12 for the most contributions in a month. To put that in perspective, we handled 685,830 contributions in all of 2011.

Clearly, election years no longer have any dog days of summer. That’s especially true when Republicans’ antics include threatening to sue President Obama, actually suing him, and refusing to rule out impeachment.

In the wake of the GOP’s outrageous actions, major Democratic committees and organizations sent out hundreds of emails. And it worked. They raised millions of dollars from hundreds of thousands of grassroots supporters.

July’s average contribution size was just $28.52. That’s the lowest we’ve seen in the history of ActBlue.

And we couldn’t be more thrilled! A month like this exemplifies ActBlue’s mission: to democratize power by putting fundraising tools in the hands of grassroots donors across the USA. It’s evident that small-dollar donations are increasingly powering the left’s campaigns and organizations.

                  July ’11      July ’12      July ’13      July ’14
Contributions       66,746       200,193       162,935       694,625
Volume ($)      $2,424,679    $8,342,134    $5,750,964   $19,812,843
Mean Donation       $40.12        $41.67        $35.30        $28.52
Committees             862         1,866         1,219         2,153

This month’s total volume was more than twice July ‘12’s total of $8,342,134. This year’s trend of doubling our fundraising numbers from 2012 is impressive in its own right, but it’s even more exciting to see the number of contributions increase by 3.5 times, while the average donation size continues to decrease. And we’re working with even more candidates, organizations, and committees: 2,153 this month, compared with 1,866 in July ‘12.

Recurring contributions continue to play a major role as more and more organizations and campaigns are realizing the value of a reliable stream of money. Last month we introduced infinite recurring to make it easier for campaigns and organizations to raise long-term, sustaining contributions. Recurring contributions accounted for 13.7% of this month’s total number of contributions.

Like we mentioned in last month’s recap, we process all of the recurring contributions early in the morning when most people are asleep. We do this for two reasons: campaigns and organizations wake up to a surplus of cash, and our system is ready to handle the rest of the day’s contributions.

The number of recurring contributions has grown and grown throughout this election cycle, a result of the hard work people put in to get sustaining supporters signed up. Recurring contributions are a convenient way for people to remain engaged with the campaigns and organizations they support. And on July 30, we handled 8,453 contributions from 4-5AM, a new record. Because the number of people signed up for recurring contributions continues to increase, we don’t expect this record to last long!

Despite the incredible volume we saw in the final week of July (one $2M day, five days over $1M, and one day over $900K), it was smooth sailing for the ActBlue technical team. This is a true testament to the work that our tech team does on a daily basis to prepare for surges in volume like these.

Here’s a look at the month’s daily volume of money and number of contributions:

As you can see, the final push to the end of the month was massive: 50.1% of the month’s total volume of money came in the last 7 days. And over one-third of July’s total contributions (35.4%) were made in the final 4 days. Over the course of this month, 29% of the contributions were made with a mobile device. That’s due in large part to Express users: 31% of their contributions were made via mobile.

In case you missed our earlier blog post, the number of Express users climbed to more than one million! What makes this community a game changer for the left? Express users have securely saved their payment information with us, so they can donate to a Democratic campaign or organization in an instant.

In July alone, Express users contributed $10,465,812 (or 52.8% of the month’s total volume of money) and accounted for 61.5% of the total number of contributions. And 16.7% of all newly made contributions were made via Express Lane, our one-click contribution system.

Campaigns and organizations have set a blistering pace for emails sent and money raised in order to win this fall. And at this point in time, it’s safe to say that this cycle’s not slowing down anytime soon. There’s too much at stake not to.

Yep, that’s right. The community that we started building in 2008 has grown to include one million supporters.

What’s so special about Express users? They’ve securely saved their payment information with us, which means they can give in an instant to any candidate, committee, or organization using ActBlue. Whenever you make it easier for people to donate wherever they are, whenever they want, they’re more apt to give and conversion rates go up. That means our work is directly benefitting Democrats and progressives across the country.

In other words, Express users are power donors. But not the Koch-brother type. The small-dollar kind. Check out how many dollars ActBlue Express users have been giving:

Express users also power Express Lane, ActBlue’s one-click payment system. Express Lane increases donation rates anywhere from 40% to 200%. The more Express users an organization has, the more likely they are to bring in those Express Lane donations. The best way to increase your Express user pool? We’ve found that sending Express Lane links to all your users increases donation rates and helps you convert more of your list to Express users. We wrote a post about it here.

The community has been growing rapidly, but we’re really happy to see the current monthly growth rate at about the same level as October 2012. If you remember, there were some big things going on.

As we get further into the election cycle, more and more new donors are emerging and joining the Express Lane community. Every campaign out there is organizing and growing their ranks, all the major campaigns in the country are using ActBlue and adding their donors to this pool, and together the entire left is raising more money. And if we’re seeing this many new donors signing up during summer vacation months… well, we can only imagine what this fall will bring!

If you looked really closely at the Express Lane emails of a number of groups and campaigns recently, you might have noticed a tiny but significant change. Rather than saying: “Because you’ve saved your payment information with ActBlue Express…” the emails now read: “If you’ve saved your payment information with ActBlue Express…”

Why? Well, it turns out that you can raise slightly more money by sending an Express Lane-structured email to your entire membership. Traditionally, list admins send two distinct emails; Express users see Express Lane links, while everyone else gets an email with “regular” links. With Express Lane to all, you can send the same Express Lane email to all of your users, saving you time and opening up the possibilities for groups with smaller lists.

There’s been a lot of testing done both by us and other committees on sending Express Lane emails to everyone. The general consensus is that Express Lane structure to non-Express users does perform slightly better than normal links. We’ve tested sending Express Lane links to non-Express users 4 different times. Consistently, we see more money (the net bump is around 6-7%), but these results aren’t statistically significant. Others are seeing similar gains.

While we’d love to see statistical significance, we think it’s still a great idea because there is a tremendous upside potential for both groups and campaigns that are already using Express Lane and those who have yet to try it out. It’s a time saver for smaller groups and also encourages your members to save their information with ActBlue and become an Express user.

Our recommendation is that groups and campaigns test this with their membership and confirm that they are getting similar results before making this a best practice. There is some reason to believe that we’re seeing a novelty effect, since the new link structure is unusual. We’ll test this again in the future to make sure that the results are still holding, and we urge others do the same.

This tactic works particularly well for groups with smaller lists. We’re confident enough in the testing to tell you that you’re likely to raise more money from sending Express Lane to your entire list, especially with the strong growth in the Express universe (994k users and counting!). However, pay attention to future posts, in case we do find that there is a novelty effect.

This post is the third in our blog series on testing for digital organizers. Today I’ll be talking a bit about what an A/B test is and explain how to determine the sample size (definition below) you’ll need to conduct one.

Hey, pop quiz! Is 15% greater than 14%?

My answer is “well, kind of.” To see what I mean, let’s look at an example.

Let’s say you have two elevators, and one person at a time enters each elevator for a ride. After 100 people ride each elevator, you find that 15 people sneezed in elevator 1, and 14 people sneezed in elevator 2.

Clearly, a higher percentage of people sneezed in elevator 1 than elevator 2, but can you conclude with any certainty that elevator 1 is more likely to induce sneezing in its passengers? Or, perhaps, was the difference simply due to random chance?

In this contrived example, you could make a pretty good case for random chance just with common sense, but the real world is ambiguous so decisions can be trickier. Fortunately, some basic statistical methods can help us make these judgments.

One specific type of test for determining differences in proportions1 is commonly called an A/B test. I’ll give a simple overview of the concepts involved and include a technical appendix for instructions on how to perform the procedures I discuss.

Let’s recall what we already said: we can perform a statistical test to help us detect a difference (or lack thereof) between the action rate in two samples. So, what’s involved?

I’ll skip over the nitty-gritty statistics of this, but it’s generally true that as the number of trials2 increases, it becomes easier to tell whether the difference (if there’s any difference at all) between the two variations’ proportions is likely to be real, or just due to random chance. Or, slightly more accurately, as the number of trials increases, smaller differences between the variations can be more reliably detected.

What I’m describing is actually something you’ve probably already heard about: sample size. For example, if we have two versions of language on our contribution form, how many people do we need to have land on each variation of the contribution form to reliably detect a difference (and, consequently, decide which version is statistically “better” to use going forward)? That number is the sample size.

To determine the number of people you’ll need, there are a few closely related concepts (which I explain in the appendix), but for now, we’ll keep it simple. The basic idea is that as the percent difference between variations you wish to reliably detect decreases, the sample size you’ll need increases. So, if you want to detect a relatively small (say, 5%) difference between two variations, you’ll need a larger sample size than if you wanted to be able to detect a 10% difference.

How do you know the percent difference you’d like to be able to detect? Well, a good rule of thumb to start with is that if it’s a really important change (like, say, changing the signup flow on your website), you’d want to be able to detect really small changes, whereas for something less important, you’d be satisfied with a somewhat larger change (and therefore less costly test).

Here’s what that looks like:

Sample Size Graph

Required sample size varies by the base action rate and percent difference you want to be able to reliably detect. Notice the trends: as either of those factors increases, holding all else equal, the sample size decreases.

For example, if you’re testing two versions of your contribution form language to see which has a higher conversion rate, your typical conversion rate is 20%, and you want to be able to detect a difference of around 5%, you’d need about 26k people in each group.
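
As a preview of how we get that figure, here’s the calculation in R with those same assumed numbers (a 20% base rate and a 5% relative difference, i.e. 20% vs. 21%):

power.prop.test(p1 = 0.20, p2 = 0.21, sig.level = 0.05, power = 0.8)
# n comes back at roughly 25,600 per group, i.e. about 26k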

For a full walkthrough of that calculation, see the appendix below. Once you have determined your required sample size, you’ll be ready to set up your groups and variations, run the test, and evaluate the results. Each of those will be covered in upcoming posts in this series. For now, feel free to email info [at] actblue [dot] com with any questions!

Footnotes:
1 Note that this should be taken strictly as “proportions”. Of course, there are many things to be interested in other than the percentage of people who did an action vs. didn’t (e.g., donated vs. didn’t donate), like values of actions (e.g., contribution amounts), but for now, we’ll stick to the former.
2 I.e., the number of times something happens. For example, this could be the number of times someone reaches a contribution form.

Appendix:

Statistics is a big and sometimes complicated world, so I won’t explain this in too much detail. There are many classes and books that will dive into the specifics, but I want you to have a working knowledge of a few important concepts you’ll need to complete an accurate A/B test. I’m going to outline four closely related concepts necessary for determining your sample size, and walk through how to find this number. Even though I’m sticking to the basics, this section will be a bit on the technical side of things. Feel free to shoot an email our way with any questions; I’m more than happy to answer any and all.

Like I said, there are four closely related concepts when it comes to this type of statistical test: significance level, power, effect size, and sample size. I’ll talk about each of these in turn, and while I do, remember that our goal is to determine whether we can reject the assumption that the two versions are equal (or, in layman’s terms, figure out that there is a real statistical difference between the two versions).

Significance level can be thought of as the (hopefully small) likelihood of a false positive: specifically, the probability that you falsely reject the assumption that the two versions are equal (i.e., claim that one version is better than the other when it actually isn’t). When you hear someone talk about a p-value, they’re referencing this concept. The most commonly used significance level is 0.05, which is akin to saying “there’s a 5% chance that I’ll claim a real difference when there actually isn’t one”.

Power is the probability that you’ll avoid a false negative. Or, said another way, the probability that if there’s a real difference there, you’ll detect it. The standard value to use for this is 0.8, meaning there is an 80% chance you’ll detect it, though there are really good reasons for adjusting this value. 0.8 is by no means always the best value to choose for power; it’s generally a good idea to change it if you know exactly why you’re doing what you’re doing. 0.8 will work for our purposes, though. Why not just pick a value of .9999, which is similar to saying “if there’s a real difference, there’s a 99.99% chance that I’ll detect it”? Well, that would be nice, but as you increase this value, the sample size required increases. And sample size is likely to be the limiting factor for an organization with a small (say, fewer-than-100k-member) list.

Effect Size. Of the two versions you’re testing against each other, typically you’d call one the ‘control’ and the other the ‘treatment’, so we’ll use those terms. Effect size asks: what do you expect the proportion of actions (e.g., contributions) to be for the control, and what do you expect it to be for the treatment? The percent difference between the two is the effect size. How this affects sample size is demonstrated in the graph above. But the whole point of running this test is that you don’t know what the two proportions will be in advance, so how can you pick those values? Well, actually, you estimate what your base action rate will be. For example, if your donation rate from an email is typically 5%, then you can use that as your base action rate. Then, for the second proportion, pick the smallest difference you’d like to be able to detect. As with power, you might find yourself asking “well, why wouldn’t I just pick the smallest possible difference?”. Again, the answer is that as you decrease the magnitude of the difference, the sample size you need increases.

Finally, we have sample size, or the number of people we need to run the test on. If we have values for the above three things, we can figure out how big of a sample we need!

So how do we do that? Well, there are many ways to do it, but one of the easiest, best, and most accessible is R. It’s free, open-source, and has an excellent community for support (which really helps as you’re learning). Some might ask, “well that has a relatively high learning curve, doesn’t it? And, isn’t there some easier way to do this?” The answer to both of those questions is “maybe,” but I’ll give you everything you need in this blog post. There are also online calculators of varying quality that you can use, but R is really your best bet, no matter your tech level.

Doing this in R is actually pretty simple (and you’ll pick up another new skill!). After you download, install, and open R, enter the following command:

power.prop.test(p1 = 0.1, p2 = 0.105, sig.level = 0.05, power = 0.8)

and press enter. You’ll see a printout with a bunch of information, but you’re concerned with n. In this example, it’s about 58k. That number is the sample size for each group you’d need to detect, in this case, a 5% difference at a significance level of 0.05, a power of 0.8, and a base action rate of 10%. So, just to be certain we’re on the same page, a quick explanation:
p1: Your ‘base action rate’, or the value you’d expect for the rate you’re testing. If your donation rate is usually 8%, then p1 = 0.08
p2: Your base action rate plus the smallest percent difference you’d like to be able to detect. If you only care about noticing a 10% difference, and your ‘base action rate’ is 8%, then p2 = 0.088 (0.08 + (0.08 * 0.10))

Of course, your base action rate will likely be different, as will be the percent difference you’d like to be able to detect. So, substitute those values in, and you’re all set! Playing around with different values for these can help you gain a more intuitive sense of what happens to the required sample size as you alter certain factors.
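
And if you find yourself doing this often, a tiny wrapper can handle the arithmetic of turning a relative percent difference into p2 for you. This is just a convenience sketch of our own (the helper name is made up); under the hood it’s the same power.prop.test call as above:

# Hypothetical helper: sample size per group, given a base action rate and the
# smallest relative difference you'd like to reliably detect.
sample_size_for <- function(base_rate, rel_diff, sig.level = 0.05, power = 0.8) {
  res <- power.prop.test(p1 = base_rate,
                         p2 = base_rate * (1 + rel_diff),
                         sig.level = sig.level,
                         power = power)
  ceiling(res$n)
}

sample_size_for(0.08, 0.10)  # the 8% base rate / 10% difference example above: roughly 19k per group
sample_size_for(0.20, 0.05)  # the 20% / 5% example from the post: about 26k per group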
