How Low-cost Randomized Controlled Trials Can Drive Effective Social Spending

The Office of Science and Technology Policy and the Coalition for Evidence-Based Policy convened leaders from the White House, Federal agencies, Congress, philanthropic foundations, and academia this week to explore an important development in the effort to build credible evidence about “what works” in social spending: low-cost randomized controlled trials (RCTs). The goal of the conference was to help advance a broader Administration effort to promote evidence-based policy, described in the evaluation chapter of the 2014 Economic Report of the President, and the Performance and Management section of the President’s budget.

Large and rigorous RCTs are widely regarded as the most valid method of evaluating program effectiveness, but they are often perceived as too costly and burdensome for practical use in most contexts. The conference showcased a new paradigm: by measuring key outcomes using large administrative data sets already collected for other purposes – whether it be student test scores, hospitalization records, or employment and earnings data – sizeable RCTs can be conducted at low cost and low burden.

The conference showcased a number of RCTs that were conducted for between $50,000 and $350,000 (a fraction of the usual multimillion-dollar cost of such studies), yet produced valid evidence that informed important policy decisions.

For example, the state of Illinois ran an RCT that evaluated the impact of offering Recovery Coach services to substance-abusing parents whose children had been temporarily placed in state custody. The Recovery Coach engages parents, child welfare workers, and substance-abuse treatment centers to encourage parents to seek treatment. Sixty child welfare agencies serving 2,763 eligible parents were randomly assigned to receive either the Recovery Coach program (the treatment) or the standard services (the control). Importantly, the study was conducted at relatively low cost (approximately $100,000 over nine years) by measuring all outcomes using administrative data the state was already collecting for other purposes, such as foster care closure rates. Using this approach, the study found that the Recovery Coach program had a positive impact: over five years, the program produced a 14% increase in family reunification, a 15% increase in completed foster care cases, and state savings of $2,400 per parent.
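
To make the design concrete, here is a minimal sketch of cluster-level random assignment of the kind described above, in which each child welfare agency (rather than each individual parent) is assigned to treatment or control and every eligible parent served by an agency receives that agency's condition. The agency IDs, the random seed, and the helper function are hypothetical; the actual study's procedures are not described in this post.

```python
import random

# Hypothetical sketch: randomly assign 60 agencies to treatment or control.
agency_ids = [f"agency_{i:02d}" for i in range(1, 61)]  # illustrative agency IDs

rng = random.Random(42)          # fixed seed keeps the assignment reproducible
shuffled = agency_ids[:]
rng.shuffle(shuffled)

half = len(shuffled) // 2
assignment = {a: ("treatment" if i < half else "control")
              for i, a in enumerate(shuffled)}

def condition_for_parent(parent_agency: str) -> str:
    """Look up a parent's study condition from their agency's assignment."""
    return assignment[parent_agency]

print(condition_for_parent("agency_07"))
```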

Another example is the recent RCT of a $75 million Teacher Incentive Program in New York City. The program provided low-performing public school teachers with an annual bonus if their school successfully increased student achievement and other key outcomes. Because the city did not have sufficient funding to provide the program to every low-performing school in New York, the city and its research partners conducted a random assignment lottery to determine which of 396 eligible schools would receive the program (the treatment group) and which would not (the control group). The researchers then measured student outcomes in both groups using administrative data such as state test scores that the school district already collected for other purposes.
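
A rough sketch of the two steps this paragraph describes is below: a random assignment lottery over the eligible schools, followed by a simple impact estimate computed from outcome records the district already collects. The school IDs, seed, and test scores are invented for illustration; the actual study's data and analysis were more involved than a difference in means.

```python
import random
import statistics

# Hypothetical sketch: (1) lottery over 396 eligible schools,
# (2) impact estimate from existing administrative test-score records.
school_ids = [f"school_{i:03d}" for i in range(1, 397)]

rng = random.Random(2008)
lottery = school_ids[:]
rng.shuffle(lottery)
treatment = set(lottery[:len(lottery) // 2])   # lottery winners receive the program

# Stand-in for administrative records: average state test score per school.
admin_scores = {s: rng.gauss(650.0, 20.0) for s in school_ids}

treat_mean = statistics.mean(admin_scores[s] for s in school_ids if s in treatment)
control_mean = statistics.mean(admin_scores[s] for s in school_ids if s not in treatment)
print(f"Estimated impact (difference in means): {treat_mean - control_mean:+.1f} points")
```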

The study cost $50,000 because random assignment was baked into the design of the program and all outcomes were measured using administrative data. And it produced a definitive result: over a three-year period, the study found that the Teacher Incentive Program had no effect on student achievement, attendance, graduation rates, behavior, GPA, or other outcomes compared to the control schools. Based in part on these results, New York City decided to end the program, freeing up resources to invest in programs with greater promise to improve student outcomes.

At the event, conference participants explored how to more effectively embed low-cost RCTs of this kind across a wide range of government social spending activities. Ideas included: (1) greater research access to government administrative data, with appropriate privacy protections; (2) increased government funding opportunities that specifically focus on low-cost RCTs, such as one recently released by the National Institutes of Health; and (3) more high-profile competitions and challenges for low-cost RCTs, such as those recently launched by the Coalition for Evidence-Based Policy and the National Institute of Justice.

The ultimate goal of the conference on low-cost RCTs and other evidence-building efforts – including initiatives such as tiered-evidence grant programs, Pay for Success projects, and the White House Social and Behavioral Sciences Team – is to focus public funds on program activities that have been rigorously shown to be effective in addressing important problems, such as low-performing schools, youth unemployment, and rising healthcare costs.

Send us your ideas for low-cost RCTs and other evidence-based policy innovations at evidence@ostp.gov.

Maya Shankar is Senior Advisor to the Deputy Director at the White House Office of Science and Technology Policy.