Increasing Charity Donations: A Bandit Learning Approach



Crowdfunding for charitable causes via platforms such as GoFundMe and DoubleTheDonation has been growing rapidly, especially among millennials and Gen Xers. While some visitors donate to specific campaigns they have already chosen, others donate to causes recommended by the platform. GoFundMe launches an average of 10,000 new campaigns every day supporting a wide range of causes. About 1.3 million visitors view its campaigns every day, often multiple times, but only 1 in 60 visits results in a donation. Given the large number and diverse nature of campaigns, effectively personalizing recommendations to each potential donor’s preferences is crucial for maximizing donations on these platforms.

In recent work, Professor SOMYA SINGHVI and his co-author propose a novel approach to maximizing donations on such platforms by developing a recommendation system that overcomes some of the drawbacks of widely used recommender systems. Most recommender systems that personalize recommendations for charity donations, movies, and products are based on “bandit learning” algorithms, a type of machine learning (AI) method. These models use a dataset of previous charitable donations to train a preference model that predicts the likelihood of a given donor making a donation when offered a set of campaign recommendations. Widely used systems built on typical bandit models assume that a single underlying preference model can capture both a user’s propensity to click and the amount they donate. However, the factors that underlie the propensity to click may differ from the factors that determine the amount donated. For example, prior studies have shown that income is an important predictor of the donation amount but not necessarily of the click-through rate. Most recommendation systems do not take this difference into account. In particular, they ignore the fact that donation amounts are observed only for users who clicked; because only these observed donations are used to estimate donor preferences, the learning model is biased.
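The selection bias described above can be seen in a toy simulation (illustrative only, not from the paper): if a visitor’s latent affinity for a campaign drives both the click decision and the donation amount, then donations are observed only for high-affinity visitors, and averaging the observed donations overstates the typical visitor’s willingness to give.

```python
import random

random.seed(0)

# Toy simulation (not from the paper): each visitor has a latent
# "affinity" for a campaign. Affinity drives BOTH the click decision
# and the donation amount, so donations are observed only for
# high-affinity visitors -- a sample selection problem.
N = 100_000
pop_amounts, observed_amounts = [], []
for _ in range(N):
    affinity = random.gauss(0, 1)
    amount = 20 + 10 * affinity       # what this visitor would donate
    pop_amounts.append(amount)
    clicked = affinity > 1.0          # only high-affinity visitors click
    if clicked:
        observed_amounts.append(amount)

true_mean = sum(pop_amounts) / len(pop_amounts)
naive = sum(observed_amounts) / len(observed_amounts)
print(f"true average willingness to donate: {true_mean:.1f}")  # ~20
print(f"naive estimate from clickers only: {naive:.1f}")       # well above 20
```

A preference model trained only on the observed donations would systematically misjudge donor preferences in exactly this way.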

This work develops a better learning model that overcomes the above bias, captures user preferences more accurately, and can therefore make better recommendations. Their approach, called the Sample Selection Bandit (SSB) algorithm, fits distinct preference models for two user decisions: (i) whether to click on a recommended campaign, learned from campaign selection data; and (ii) how much to donate to a recommended campaign, learned from past donation data, given that the user decided to donate. Using real data from GoFundMe, the study shows that the proposed approach substantially outperforms state-of-the-art approaches in the literature, achieving both higher conversion rates and higher expected donation amounts.
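The two-model decomposition can be sketched as follows. This is a hypothetical illustration on synthetic data, not the authors’ implementation, and it omits the paper’s selection-bias correction and the bandit exploration logic: a logistic model for the click decision is fit on all impressions, a separate regression for the donation amount is fit only on the clicked subset, and campaigns are ranked by predicted click probability times predicted donation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)

# Hypothetical synthetic data: one feature drives clicks ("interest"),
# another drives donation amounts ("income") -- mirroring the article's
# point that the two decisions can have different drivers.
n = 20_000
interest = rng.normal(size=n)
income = rng.normal(size=n)
X = np.column_stack([interest, income])

click_prob = 1 / (1 + np.exp(-(interest - 1)))
clicked = rng.random(n) < click_prob
amount = np.where(clicked, 20 + 8 * income + rng.normal(0, 2, n), 0.0)

# Model (i): propensity to click, fit on ALL impressions.
click_model = LogisticRegression().fit(X, clicked)

# Model (ii): donation amount, fit ONLY on the clicked subset.
amount_model = LinearRegression().fit(X[clicked], amount[clicked])

def expected_donation(x):
    """Rank score: P(click) * E[amount | click]."""
    p = click_model.predict_proba(x)[:, 1]
    a = amount_model.predict(x)
    return p * a

# Two visitors with equal interest but different income.
x_new = np.array([[1.5, 2.0], [1.5, -2.0]])
print(expected_donation(x_new))  # higher-income visitor scores higher
```

In a full bandit setting, both models would be updated as new click and donation observations arrive, with exploration balanced against exploitation; the SSB algorithm additionally corrects for the fact that the donation data is a selected sample.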

The proposed approach is also of value to recommender systems on movie or e-commerce platforms that build user preference models from ratings of movies or products. A user rating is analogous to a donation amount: only users who clicked on and watched a movie provide a rating, just as the donation amount is observed only for donors who clicked on a campaign.

Overall, the study provides a novel and effective approach based on machine learning methodology to increase charitable contributions on online platforms.

"Increasing Charity Donations: A Bandit Learning Approach"

by Divya Singhvi and Somya Singhvi