Observational learning and crowdfunding campaigns
What is backer herding behaviour? Is it always a good thing? Read on to find out.
For the past few weeks, I’ve been digging into the literature to learn how to optimize a crowdfunding campaign. Many of the papers I’ve discussed have identified elements associated with success, and while this has been useful, the analysis has been limited.
By that, I mean the regression modelling pays no attention to the quality of the project, which is one of the key decision points for any consumer looking to purchase something.
But, luckily for me, I found a recent paper titled “How does Observational Learning Impact Crowdfunding Outcomes for Backers, Project Creators and Platforms?” which starts to unpack this very complicated question.
Warning: this is a very technical paper, but worth reading in full if you have the time.
Some key background for this work is shown in the text below:
To increase the effectiveness of crowdfunding and to help backers to make more informed decisions, some measures can be taken to mitigate the uncertainty around the quality of proposed products. One action could be to display more detailed information during the funding campaign. For example, Kickstarter.com shares information on accumulated funds and number of backers. Moreover, their website allows backers to communicate through comments. On Crowdfunder.co.uk, visitors can see the entire timeline of the funding raised on every project, which shows the profile of backers and how much they contributed to projects.
The information available to backers about a campaign enables Observational Learning (OL): backers infer the quality of a product from the behaviour of previous backers.
For most reward-based crowdfunding platforms (e.g., Kickstarter) these OL signals are provided by quantitative data such as backer count, funding total, number of comments/updates etc. However, there is also network information identifying which people in your network (backers you follow or who follow you) have backed a given project.
This paper is focused on how OL affects decision-making in a (relatively) simple crowdfunding model, which has the following main features:
There are two projects.
They launch at the same time with the same funding goal and same pledge options.
The projects are of variable quality (defined as being high or low quality).
There are two backers.
Each backer arrives sequentially at the platform (one is the “early” backer and one is the “late” backer). The late backer is the one who can learn from the behaviour of the early backer.
Each backer has a level of expertise (high or low) that determines how well they can recognize the quality of a project.
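To make the setup concrete, here’s a minimal Python sketch of the model’s ingredients. All names, and the specific accuracy numbers, are my own illustration, not the paper’s:

```python
from dataclasses import dataclass
import random

@dataclass
class Project:
    name: str
    high_quality: bool  # quality is strictly binary in the model

@dataclass
class Backer:
    expertise: str  # "high" or "low"

    def signal(self, project: Project) -> bool:
        """A noisy private read on quality: a high-expertise backer's
        signal matches the true quality more often than a low-expertise
        backer's (0.9 vs 0.6 here are illustrative numbers, not the paper's)."""
        accuracy = 0.9 if self.expertise == "high" else 0.6
        correct = random.random() < accuracy
        return project.high_quality if correct else not project.high_quality

# Backers arrive sequentially: the late backer can also observe
# what the early backer did (the OL signal).
early, late = Backer("high"), Backer("low")
```

The point of the expertise parameter is simply that a high-expertise backer’s private signal is more likely to match the true quality.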
At this point, you’re probably asking: “how do they define quality?” This model is purely binary, so a project is either high or low quality with no further refinement. In the real world, quality is not easily determined and has many contextual factors, which makes it very hard to model. The binary assumption isn’t perfect, but it’s a sensible simplification when trying to understand how OL shapes backer decisions.
However, as there are two projects, there are four quality states:
Both high quality
Project one is high quality and project two is low quality
Project one is low quality and project two is high quality
Both projects are low quality
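The four states are just the Cartesian product of the two binary qualities; a trivial sketch in Python:

```python
from itertools import product

# Each project is independently high (True) or low (False) quality,
# giving four possible quality states for the pair of projects.
quality_states = list(product([True, False], repeat=2))
# -> [(True, True), (True, False), (False, True), (False, False)]
```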
To enable modelling, the following conditions must also be enforced:
If a high-quality project is successful, each backer receives a payoff of one; if a low-quality project is successful, or a project fails to meet its funding target, the backers receive a payoff of zero. Thus, backers obtain positive utility only when they pledge to a high-quality project.
This is simply saying that if backers support a low-quality project, they are disappointed (i.e., they are not content). This is a typical, and rational, response for anyone who purchases goods or services.
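The payoff rule above collapses to a single condition. Here’s a sketch (the function name and signature are mine, not the paper’s):

```python
def backer_payoff(pledged: bool, funded: bool, high_quality: bool) -> int:
    """A backer earns 1 only when the project they pledged to both
    reaches its funding goal AND turns out to be high quality; every
    other outcome (low quality, or goal missed) pays 0."""
    return 1 if (pledged and funded and high_quality) else 0
```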
The authors then build a fairly involved Bayesian model, with a few constraining but fairly typical assumptions. From it, they determine the impact of OL on a set of evaluation measures, defined for three groups:
Backers (contentedness)
Creators (probability of funding)
Platform (platform profit, and platform effectiveness)
Got all that? Good. Here’s what matters.
Impact on backers
OL is important for ensuring backer contentedness, with the largest improvement occurring when the early and late backers have high and low expertise levels, respectively, as this gives rise to herding behaviour. This suggests it’s important for expert backers to back a campaign in its earliest stage. However, backers always benefit from observational learning, particularly if a project is of low quality.
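The herding mechanism can be caricatured as a decision rule for the late backer. This is my own simplification, not the paper’s actual Bayesian update: a low-expertise late backer rationally defers to the action they observed, while a high-expertise backer trusts their own private signal:

```python
def late_backer_choice(own_signal: bool, observed_pledge: bool,
                       own_expertise: str) -> bool:
    """Illustrative herding rule: a low-expertise late backer copies the
    early backer's observed action (the OL signal is more informative than
    their own noisy read); a high-expertise one relies on their own signal."""
    if own_expertise == "low":
        return observed_pledge  # herd on the OL signal
    return own_signal           # rely on private information
```

This is why the early-expert / late-novice pairing performs best in the model: the novice’s herding then copies a reliable signal rather than a noisy one.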
Impact on creators
The authors cleverly added an extra twist in this scenario by defining two competition states: tight competition (scarce funding), where projects need support from both backers to be funded, and relaxed competition (plentiful funding), where projects need support from only one backer.
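In code, the two regimes differ only in the funding threshold (my encoding, not the paper’s notation):

```python
def project_funded(pledges: int, tight: bool) -> bool:
    """Tight competition: funding is scarce, so a project needs both
    backers to reach its goal. Relaxed competition: a single pledge
    is enough."""
    needed = 2 if tight else 1
    return pledges >= needed
```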
In the model where funding is scarce, OL improves the success probabilities for all projects (even low-quality ones) due to herding behaviour, where one of the projects receives all the funding. However, in the relaxed model, OL can actually impede success (even for high-quality projects) because of herding behaviour: in this setting, the preferable outcome occurs only when funding is evenly distributed between the projects.
Impact on the platform
Outcomes are influenced by both the availability of funding and the quality difference between proposed projects. OL is beneficial for the platform when projects are of different quality. For example, platform effectiveness is improved by OL in a tight competition model (except for when both projects are low quality), and also in a relaxed competition model (except when both projects are high quality).
Looking at this example: when funding is scarce, the platform is deemed effective in all cases except when both projects are low quality. This intuitively makes sense, because a bunch of junk projects will deter backers. However, when there is plenty of funding, platforms actually suffer when both projects are high quality. This is again due to herding behaviour, as one of the projects will end up over-funded.
This suggests that in such a situation, it is better for the platform to turn off all OL signals!
The implications of such a finding are clear. If crowdfunding continues to grow as a market, moving it closer to the relaxed competition model, it may actually be more beneficial for the platform to turn off all the backer data/information, and force backers into decision-making without the benefit of OL. So, for example, this could mean a future version of Kickstarter which doesn’t show the number of backers, or the units sold, i.e., making it more like a standard sales platform.
This is important because the team also added an extra condition to their modelling, which focused on the creator’s choice to make a high- or low-quality product. We want to assume that creators would want to make something of high quality, but there is a high cost associated with doing so, which can affect that decision.
The paper found that it’s optimal for creators to make high-quality products in both the tight and relaxed competition models when there is OL. Without OL, creators are actually disadvantaged by making high-quality products.
So… in a relaxed competition model, where it is more beneficial for platforms to turn off the visible metrics, making a high-quality product actually harms creators. The logical conclusion is that they will then only make low-quality products.
This is not what we want for the future state of crowdfunding.
But John, this is a simple model where there are only two projects and two backers. This isn’t true in real life.
I agree.
While the authors recognize this limitation, they also ran their model with fifty and one hundred backers and found their conclusions were robust. This suggests the behaviour holds even in the large-N limit (at least for backers). More work, as always, is needed to understand what happens when there are more than two projects.
What does this mean for creators?
This has been a complicated discussion, based on a low-dimensional mathematical model. However, the insights may hold more generally, so it’s worth distilling them down.
Given that platforms such as Kickstarter have multiple OL measures built into their design, it behooves creators to take advantage of that (where possible) to improve their chances of success.
For example, the findings suggest that backers and creators both win when expert backers contribute early, so creators should try to find ways of attracting superbackers to their campaign immediately after launch, as this kicks off helpful herding effects.
While the modelling shows that herding behaviour could negatively impact creators in a relaxed competition scenario, the reality is that this is beyond our control. Given the large volume of projects being launched on a daily basis, it is more likely that platforms have a tight competition scenario which is slowly evolving into a relaxed competition scenario, and therefore herding is helpful.
It also suggests that creators need to keep a careful eye on the platforms as they evolve over time, since it may be beneficial for platforms to move away from sharing OL signals, which could negatively impact creators who are trying to produce high quality products.
Ultimately, this paper helps us understand a basic model of reality - but not necessarily reality itself. However, the main takeaway for me is that herding behaviour of backers is important to understand and manage in order to have the best chance of creative success in crowdfunding.
What do you think? What resonated for you?
I don't think I follow the reasons / the trigger that allows for the relaxed competition environment to create a negative impact for high quality projects due to OL. In what way does this happen? Why does herding mentality make it happen?
Yeah, it’s a good question, as it’s one of the areas they skip over a little in the paper. It’s a result of the probability distribution arising from backer expertise levels, which is maximized when backers tend to do their own thing (i.e., no herding behaviour). This makes it different to the tight competition curves, which are maximized when there is herding.
It’s a probability, so they’re right to point out that even a high-quality project COULD be negatively impacted by herding in this model, but I suspect in reality this may not be the case.