Rayanne Hawkins
Business Operations Manager, Urban Institute

Future of PFS: Local realities shape evaluations of PFS projects

August 11, 2017 - 3:33pm

In pay for success (PFS) projects, it’s important to use the most rigorous evaluation design possible to determine which programs work and to build the evidence base around what works. At the same time, a project’s evaluation design is shaped by the characteristics of the program being offered and the local realities of implementing that program, such as the availability of data and the number of potential participants.

Representatives from Chicago’s Child-Parent Center Pay for Success Initiative and Denver’s Housing to Health Initiative came together on a panel at the National Symposium on the Future of Pay for Success to discuss some of the similarities and differences between the two projects, with a focus on how the local environment shaped project evaluation design.

Launched in 2014, the Chicago Child-Parent Center (CPC) PFS Initiative aims to increase the number of low-income four-year-old children enrolled in high-quality preschool, which evidence suggests is important for educational development. The Denver Housing to Health Initiative began providing permanent supportive housing (PSH) services in 2016 after city officials recognized that the most frequent users of certain public services – including jail, detox, and emergency medical services – lacked fixed addresses. The Denver project attempts to reduce the strain on city services through PSH, which research indicates may reduce contact with the criminal justice system and improve the mental and physical health of service recipients.

Chicago Child-Parent Center Pay for Success Initiative vs. Denver Housing to Health Initiative

Population

  • Chicago: 2,618 four-year-old children living in Chicago Public School Title I attendance areas (neighborhoods with the highest poverty rates)
  • Denver: 250 chronically homeless individuals who are frequent users of emergency services

Intervention

  • Chicago: Expanded Child-Parent Center (CPC) model, a half- or full-day pre-K program that aligns education with services for the student and his or her family
  • Denver: Housing First, delivered with a modified Assertive Community Treatment (ACT) model

Outcomes

  • Chicago: Decreased special education enrollment; increased kindergarten readiness; increased number of students reading at grade level in third grade
  • Denver: Reduced jail bed days; housing stability

Evaluation

  • Chicago: Quasi-experimental design for special education outcomes; descriptive study with no comparison group for kindergarten readiness and third-grade literacy
  • Denver: Randomized controlled trial (RCT)

Two local realities significantly influenced the evaluation designs that stakeholders in each place chose: service availability and demand, and data quality and access.

Service Availability and Demand

Each project had different goals for the number of people served. In Denver, demand for PSH exceeded the supply of 250 available units by enough to form both a treatment group and a control group, so evaluators were able to use a randomized controlled trial (RCT). Because demand exceeded the supply of units, the ethical objections to denying service to the control group were lessened. And while only a randomly selected group of individuals received PSH, the control group continued to receive care as usual, which one panelist said addressed the provider’s concerns about serving everyone in need.
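
In this setting, random assignment amounts to a lottery over eligible referrals, capped by the number of available units. The sketch below is a minimal, hypothetical Python example of that mechanic; the names, numbers, and seed are illustrative, not Denver’s actual procedure.

```python
import random

def assign_lottery(referral_ids, units_available, seed=2016):
    """Randomly split eligible referrals into a treatment group (offered PSH)
    and a control group (services as usual), capped by available units."""
    rng = random.Random(seed)           # fixed seed keeps the assignment reproducible and auditable
    pool = list(referral_ids)
    rng.shuffle(pool)
    treatment = pool[:units_available]  # offered a permanent supportive housing unit
    control = pool[units_available:]    # continue to receive care as usual
    return treatment, control

# Illustrative only: 400 eligible referrals competing for 250 units
referrals = [f"participant_{i}" for i in range(400)]
treatment, control = assign_lottery(referrals, units_available=250)
print(len(treatment), len(control))     # 250 150
```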

In contrast, the Chicago project was designed to serve about 650 students each year and needed to actively market its intervention to make children and their families aware of the program and encourage them to participate. Recruiting enough additional students to form a control group for an RCT would have been difficult. The project evaluators still recognized the importance of a counterfactual, so they employed a quasi-experimental design that uses propensity scoring to match students in the CPC program with nonparticipating students who have similar individual- and community-level characteristics, allowing the two groups to be compared on rates of special education enrollment.
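
To make the matching step concrete, the sketch below shows a simplified, hypothetical version of propensity-score matching in Python. The covariate names and the greedy one-to-one matching are assumptions for illustration, not the Chicago evaluators’ actual model.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Illustrative covariate names; the real evaluation uses its own individual- and
# community-level characteristics.
COVARIATES = ["age_months", "household_income", "neighborhood_poverty_rate"]

def propensity_match(students: pd.DataFrame) -> pd.DataFrame:
    """Greedy 1:1 nearest-neighbor matching on estimated propensity scores."""
    # 1. Estimate each student's probability of CPC enrollment from observed characteristics.
    model = LogisticRegression(max_iter=1000)
    model.fit(students[COVARIATES], students["enrolled_cpc"])
    students = students.assign(pscore=model.predict_proba(students[COVARIATES])[:, 1])

    # 2. For each CPC student, pick the unmatched non-CPC student with the closest score.
    treated = students[students["enrolled_cpc"] == 1]
    pool = students[students["enrolled_cpc"] == 0].copy()
    pairs = []
    for idx, row in treated.iterrows():
        if pool.empty:
            break
        j = (pool["pscore"] - row["pscore"]).abs().idxmin()
        pairs.append({"cpc_student": idx, "matched_comparison": j})
        pool = pool.drop(index=j)

    # 3. Outcomes such as special education enrollment are then compared across the matched pairs.
    return pd.DataFrame(pairs)
```

In practice, evaluators also check covariate balance after matching and may use calipers or match with replacement; the point here is only the basic mechanics of building a comparison group when randomization is not feasible.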

Data Quality and Access

Without investments in data infrastructure, the quality of data and the ability to access it can limit the types of evaluation a PFS project can use. Denver conducted data analysis in advance of the project launch to identify the highest-intensity users of government services and to inform the project’s design. The evaluation plan also included data-sharing agreements and a process for tracking participants using data from multiple parties, such as hospitals and the police department. Furthermore, the evaluation design helped define housing stability and reduced interaction with jails in ways that can be measured with administrative data.
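
As a small illustration of measuring outcomes with administrative data, the sketch below computes jail bed days per participant from hypothetical booking records. The column names and data are made up; Denver’s actual outcome definitions and data pipeline are governed by its evaluation plan and data-sharing agreements.

```python
import pandas as pd

# Hypothetical administrative extract: one row per jail stay during the measurement window.
bookings = pd.DataFrame({
    "participant_id": ["A", "A", "B"],
    "booking_date": pd.to_datetime(["2016-02-01", "2016-06-10", "2016-03-05"]),
    "release_date": pd.to_datetime(["2016-02-08", "2016-06-12", "2016-03-06"]),
})

# Jail bed days: nights in custody per stay, summed for each participant.
bookings["bed_days"] = (bookings["release_date"] - bookings["booking_date"]).dt.days
jail_bed_days = bookings.groupby("participant_id")["bed_days"].sum()
print(jail_bed_days)  # A: 9, B: 1
```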

Chicago needed access to data on students, which can be difficult to obtain because of parental consent and confidentiality issues, and the project also faced challenges related to data quality. Even though the project was granted access to student data, there is no consensus among evaluators on how to measure school readiness. The project uses the Teaching Strategies (TS) GOLD assessment to approximate this outcome for children receiving CPC services, but because the assessment is administered within the program and before students reach kindergarten, there are no comparison data. There was also no data indicating which children attended Head Start or other pre-kindergarten programs, so students who did not receive CPC services may have received a pre-kindergarten education elsewhere. Given the lack of TS GOLD comparison data and of a concrete measure of school readiness, the evaluation uses a descriptive analysis without a comparison group for the two school readiness measures, in addition to the quasi-experimental design for special education avoidance.

Although local realities around data and demand for services led the two projects to different evaluation designs, neither project’s evaluation hindered the provision of services, a point one funder in the room stressed. Regardless of local contexts and their effects on evaluation, the vulnerable populations served by these programs remain the most important consideration in evaluation design.

This is the seventh blog in our Future of PFS blog series.

On June 22nd and 23rd, 2017, the Pay for Success Initiative hosted a National Symposium on the Future of Pay for Success in Washington, D.C. The invite-only Symposium brought together leaders from government, nonprofits, and organizations active in pay for success to consider the big questions facing the field, as well as highlight lessons for engaging in PFS efforts. More information on the Symposium can be found here.

Over the next several months, the Initiative will be releasing a series of blogs highlighting important conversations, themes, and questions that arose during the Symposium. To join the conversation, visit pfs.urban.org, follow @UrbanPFSI and #FutureofPFS on Twitter, and subscribe to our monthly newsletter.

Have a Pay for Success question? Ask our experts here!

As an organization, the Urban Institute does not take positions on issues. Scholars are independent and empowered to share their evidence-based views and recommendations shaped by research.