
Why we might know less than we thought, and how to fix it

An episode of NPR’s Planet Money podcast raises an interesting question: Is existing research as infallible as we think it is? The concerns underpinning this question aren’t new—a 2005 paper by Stanford professor John Ioannidis was provocatively titled “Why Most Published Research Findings Are False”—but with policymakers increasingly focused on evidence-based decision-making, the issue is particularly relevant, and a new model, pay for success (PFS), offers one potential solution.

Determining whether the results of a previous study are replicable—that is, whether they can be reproduced by conducting the same experiment with identical settings and parameters—helps confirm or disprove the original findings. However, replication studies are often either not undertaken at all or go unpublished when they are, because research journals can suffer from a selection bias towards publishing positive results. That bias, known as the file drawer effect (null and negative findings stay tucked away, unpublished), is reinforced by the fact that funding and prestige usually flow towards groundbreaking research that “proves” something rather than towards replication studies or new research whose results are not statistically significant. The difficulty of replicating some results, combined with the disincentive to even try, can undermine confidence in many existing research conclusions.
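To make that mechanism concrete, here is a minimal, hypothetical simulation (in Python; the effect size, sample sizes, and study counts are invented for illustration, not drawn from any real literature) of how publishing only statistically significant results can exaggerate the apparent size of an effect:

```python
# Illustrative sketch of the "file drawer effect": run many small studies of the
# same true effect, "publish" only those reaching p < 0.05, and compare the
# published average to the truth.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect, n_per_arm, n_studies = 0.2, 50, 2000

all_estimates, published_estimates = [], []
for _ in range(n_studies):
    treatment = rng.normal(true_effect, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    estimate = treatment.mean() - control.mean()
    result = stats.ttest_ind(treatment, control)
    all_estimates.append(estimate)
    if result.pvalue < 0.05:          # only "significant" studies get published
        published_estimates.append(estimate)

print(f"True effect:                 {true_effect:.2f}")
print(f"Average across all studies:  {np.mean(all_estimates):.2f}")
print(f"Average of published only:   {np.mean(published_estimates):.2f}")
```

In this toy setup the published-only average comes out well above the true effect, simply because the studies that happened to overestimate the effect are the ones most likely to cross the significance threshold.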

In reality, the situation is less dire than it sounds. Many policies and programs are based on rigorous, replicated research, and it’s still reasonable to have more confidence in decisions based on one high-quality study than in decisions based on no studies at all. Furthermore, a statistical method called meta-analysis allows policymakers to synthesize results from different studies to produce a more precise estimate of an effect based on the available evidence. Meta-analyses are useful, but it’s important to remember that if the studies on which a meta-analysis is based are flawed or limited, the meta-analysis itself might be flawed or limited.
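For readers curious about the mechanics, a minimal sketch of the simplest form of meta-analysis (fixed-effect, inverse-variance weighting) looks like the following; the three study estimates and standard errors below are hypothetical:

```python
# Fixed-effect meta-analysis via inverse-variance weighting: pool estimates from
# several studies, giving more weight to the more precise ones.
import numpy as np

effects = np.array([0.30, 0.10, 0.25])      # hypothetical study effect estimates
std_errors = np.array([0.15, 0.10, 0.20])   # their standard errors

weights = 1.0 / std_errors**2               # more precise studies get more weight
pooled_effect = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled effect: {pooled_effect:.3f} (SE {pooled_se:.3f})")
```

The pooled standard error is smaller than that of any single study, which is why a meta-analysis can sharpen an estimate—but the pooled number is only as trustworthy as the studies fed into it.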

For PFS, an innovative financing tool for social programs, gaps in our knowledge of effective programs pose a particular challenge because PFS depends on evidence of what works to identify programs for investment. Yet the PFS model also presents an opportunity to grow the evidence base for a social program by requiring an evaluation at the end of each project. These evaluations, particularly when the evaluation design is rigorous, help fill the gaps created by “missing studies” that were either never undertaken or never published. In this way, PFS can cover and re-cover ground that might otherwise be ignored, enabling governments to learn from one another and make sounder decisions about directing money towards or away from particular programs.

It is important to remember that there are gaps in our knowledge of what works. Yet in pursuit of ever stronger evidence-based policymaking and the identification of truly effective programs, it is equally important to identify opportunities to bridge those gaps. By incentivizing evaluation and systematically building evidence, PFS offers one such opportunity.


As an organization, the Urban Institute does not take positions on issues. Scholars are independent and empowered to share their evidence-based views and recommendations shaped by research.