TY - JOUR
T1 - Experiment aversion does not appear to generalize
AU - Mazar, Nina
AU - Elbæk, Christian T.
AU - Mitkidis, Panagiotis
PY - 2023/4
Y1 - 2023/4
N2 - Over the past decade, governments and organizations around the world have established behavioral insights teams advocating for randomized experiments. However, recent findings by Meyer et al. (1, 2) and Heck et al. (3), published in 2019 and 2020, suggest that people often rate randomized experiments as less appropriate than the policies they contain, even when approving the implementation of either policy untested and when none of the individual policies is clearly superior. The authors warn that this could cause policymakers to avoid running large-scale field experiments, or to avoid being transparent about running them, and might contribute to an adverse heterogeneity bias in terms of who participates in experiments. One direct and six conceptual pre-registered replications (total N = 5,200) of the previously published larger-effect studies, with variations in scenario wordings, recruitment platforms, and countries, and with the addition of further measures to assess people's views, test the generalizability and robustness of these findings. Together, we find the original results do not appear to generalize. That is, our triangulation reveals insufficient evidence to conclude that people exhibit a common pattern of behavior that would be consistent with relative experiment aversion. Thus, policymakers may not need to be concerned about employing evidence-based practices any more than about universally implementing policies.
KW - behavioral science practice
KW - nudging
KW - policy
KW - randomized controlled trial
KW - replication
KW - Humans
KW - Behavioral Sciences
KW - Research Design
KW - Randomized Controlled Trials as Topic
UR - http://www.scopus.com/inward/record.url?scp=85152101845&partnerID=8YFLogxK
U2 - 10.1073/pnas.2217551120
DO - 10.1073/pnas.2217551120
M3 - Journal article
C2 - 37036965
SN - 0027-8424
VL - 120
JO - Proceedings of the National Academy of Sciences (PNAS)
JF - Proceedings of the National Academy of Sciences (PNAS)
IS - 16
M1 - e2217551120
ER -