
What predicts policymakers’ decisions to adopt evidence-based strategies?

Project Summary

Against a backdrop of growing interest in evidence-based policymaking, governments increasingly use randomized controlled trials (RCTs) to evaluate “what works.” Yet relatively little is known about how often, or why, evidence from such experiments is adopted at scale. In collaboration with The Behavioral Insights Team, we followed up with 67 city departments across the United States that, combined, ran 73 RCTs over a four-year period. We found that, five years later, just 27% had ultimately adopted the innovations they tested. The strength of the evidence did not predict which innovations were adopted. Rather, the strongest predictor of adoption was whether the innovation was embedded within an existing communications infrastructure: tweaks to pre-existing processes were far more likely to be adopted than strategies that required a new communications process.

Why is this issue important?

New calls for evidence-based policies have led to greater use of rigorous evaluations in the public sector, in the hope that the resulting evidence will be adopted by policymakers. Yet it is unclear whether merely having access to evidence changes policymakers’ decision-making. Understanding which factors lead to evidence adoption is critical to supporting policymakers and researchers struggling to bridge this evidence-practice gap.

What are we doing?

We contacted every city department that worked with the Behavioural Insights Team – North America between 2015 and 2019 to co-design and test a new communication strategy, in order to learn whether the tested strategies were subsequently incorporated into practice. We then analyzed predictors of adoption, including the strength of the evidence, characteristics of the city department, and factors related to the design of the communication strategy.

What have we learned?

Although 78% of the experiments run by cities in this sample yielded positive effects, and 45% yielded statistically significant positive effects, only 27% of the tested strategies were adopted. The strength of the evidence was not a strong predictor of which experiments were adopted, and although staff retention appeared to increase adoption directionally, the impact of both factors was smaller than expected. By far the strongest predictor of adoption was whether the tested strategy adapted a pre-existing communication or created a new one: when the strategy built on pre-existing communications, the subsequent adoption rate was 67%, compared with just 12% for strategies that involved developing new communications.

What comes next?

Additional research is needed on the take-up of evidence and on the bottlenecks to its adoption. In the meantime, practitioners and researchers can design strategies to overcome these hurdles, helping to ensure that more agencies and residents benefit from evidence-based policies.
