TY - JOUR
T1 - Relative Feedback Increases Disparities in Effort and Performance in Crowdsourcing Contests
T2 - Evidence from a Quasi-Experiment on Topcoder
AU - Tsvetkova, Milena
AU - Müller, Sebastian
AU - Vuculescu, Oana
AU - Ham, Haylee
AU - Sergeev, Rinat A.
N1 - Publisher Copyright:
© 2022 ACM.
PY - 2022/11/11
Y1 - 2022/11/11
AB - Rankings and leaderboards are often used in crowdsourcing contests and online communities to motivate individual contributions, but feedback based on social comparison can also have negative effects. Here, we study the unequal effects of such feedback on individual effort and performance for individuals of different ability. We hypothesize that the effects of social comparison differ for top and bottom performers such that the inequality between the two increases. We use a quasi-experimental design to test our predictions with data from Topcoder, a large online crowdsourcing platform that publishes computer programming contests. We find that in contests where the submitted code is evaluated against others' submissions, rather than against an absolute scale, top performers increase their effort while bottom performers decrease it. As a result, relative scoring leads to better outcomes for those at the top but lower engagement for bottom performers. Our findings expose an important but overlooked drawback of using gamified competitions, rankings, and relative evaluations, with potential implications for crowdsourcing markets, online learning environments, online communities, and organizations in general.
KW - crowdsourcing contests
KW - engagement
KW - feedback giving
KW - task effort
KW - task performance
UR - http://www.scopus.com/inward/record.url?scp=85146419545&partnerID=8YFLogxK
DO - 10.1145/3555649
M3 - Journal article
AN - SCOPUS:85146419545
SN - 2573-0142
VL - 6
JO - Proceedings of the ACM on Human-Computer Interaction
JF - Proceedings of the ACM on Human-Computer Interaction
IS - CSCW2
M1 - 536
ER -