ROI in Education: A Reasonable Question With a Complicated Answer
ROI (Return on Investment) has become one of the most familiar phrases in education conversations. We hear it in budget meetings, grant applications, board presentations, and strategic planning sessions. And for good reason: resources are finite and the people responsible for educational investments deserve to know whether those investments are paying off.
What is worth slowing down on is what we mean when we ask the question, and whether we have the tools to answer it honestly. ROI borrows the language of finance, and that borrowing carries an implied promise: that somewhere there is a calculation with a defensible answer. In finance the promise is easy to keep, because costs and returns arrive in the same currency. In education the promise can still be kept, but the path to keeping it is layered in complexity.
A Field-Wide Conversation Still Catching Up to Itself
ROI language gained significant momentum in K-12 education through the federal accountability movement, which normalized the term long before the infrastructure to use it rigorously was in place.
Beginning with A Nation at Risk in 1983 and accelerating through No Child Left Behind (2001), Race to the Top (2009), and the Every Student Succeeds Act (2015), federal policy steadily pushed the idea that educational spending should produce measurable outcomes (McGuinn, 2006; Dee & Jacob, 2011). The logic of investment and return was embedded in the policy architecture, even when the term ROI was not explicitly used.
Some of the most influential early uses of ROI language in education came from economists studying early childhood interventions. Longitudinal studies of programs like Perry Preschool estimated societal returns of $7 to $12 for every dollar invested (Schweinhart et al., 2005; Karoly et al., 2005). Economist James Heckman’s widely cited research on the returns to early investment gave this framing significant academic credibility and helped establish ROI as a legitimate frame for talking about educational value at the program level (Heckman & Masterov, 2007).
What this movement supplied was reasonable grounds for the language, but not the methodological apparatus to translate ROI from a useful metaphor into a defensible calculation. That gap between the currency of the term and the rigor required to back it up is a field-wide condition: the language of ROI traveled quickly through education policy, while the infrastructure to support it has followed more slowly, and less visibly.
Where the Complexity Lives
When we try to calculate ROI in education carefully, we run into at least three friction points that do not have easy resolutions.
Defining what counts as a return. Is the return a test score gain? A reduction in chronic absenteeism? Improved teacher retention? A decrease in disciplinary incidents? Each of these represents a legitimate outcome. Each also produces a different answer. The choice of what to measure is rarely neutral, and it shapes everything that comes after.
Putting a number on outcomes that resist numbers. Finance has it easier because profit is profit, but education produces things that do not come with price tags. Think, for instance, of a student's growing confidence, a teacher's expanding instructional repertoire, a school culture that becomes safer and more inclusive. Shadow pricing, the practice of assigning estimated economic value to non-market outcomes, is one tool for navigating this, but it requires assumptions that deserve to be made explicit.
The time horizon problem. Educational returns are often long, diffuse, and resistant to clean attribution. An intervention that pays off meaningfully over five years can look like a cost center at year one. Decisions about how far out to measure and how to account for the fact that benefits accrue over time are themselves methodological choices with real consequences for what the final number looks like, and for whether a program is funded for year two.
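The time horizon problem can be made concrete with a small sketch. The figures below are entirely hypothetical, and the discount rate is exactly the kind of methodological choice described above: the analyst must pick it and defend it.

```python
# Sketch: why the time horizon changes the answer.
# All figures are hypothetical illustrations, not data from any study.

def net_present_value(cost, annual_benefits, discount_rate):
    """Discount a stream of future benefits back to today's dollars
    and subtract the up-front cost."""
    npv = -cost
    for year, benefit in enumerate(annual_benefits, start=1):
        npv += benefit / (1 + discount_rate) ** year
    return npv

cost = 100_000                # hypothetical up-front program cost
benefits = [25_000] * 5       # hypothetical benefits, years 1 through 5

year_one_view = net_present_value(cost, benefits[:1], 0.03)
five_year_view = net_present_value(cost, benefits, 0.03)
print(f"Year-one view:  {year_one_view:,.0f}")   # looks like a cost center
print(f"Five-year view: {five_year_view:,.0f}")  # turns positive
```

The same program looks like a loss at year one and a net gain at year five; nothing about the program changed, only the window through which it was measured.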
These friction points have solutions, but they require meticulous analysis: identifying each relevant outcome, assigning it a monetary value, and defending those decisions with enough transparency that others can assess the assumptions behind them. This is entirely possible, and when done well it produces genuinely useful information. It is also demanding work that requires expertise, appropriate data, and sufficient time. In practice, not every evaluation question calls for that level of monetary conversion and not every program lends itself to it. That is worth sitting with, because it opens the door to a different question: what if the goal is rigorous cost analysis, but the outcomes we care most about do not need to be expressed in dollars to be meaningful?
Beyond ROI: Other Ways to Think About Cost
ROI is one economic evaluation method, but it is not the only one and is not always the right fit. No single approach applies equally well to every program, activity or intervention. There are real benefits to conducting an ROI study when the question fits the model, but the decision to use it should follow from the evaluation question, not precede it. Other approaches can be equally rigorous and, in many education contexts, more appropriate.
Henry Levin, an economist at Columbia University's Teachers College, spent decades developing a rigorous framework specifically designed for economic evaluation in education. His work established cost-effectiveness analysis and cost-benefit analysis as practical tools for education program evaluation, built around the realities of how schools and districts operate (Levin et al., 2018).
Levin's ingredients method addressed one of the most persistent problems in educational cost analysis: the tendency to undercount what programs actually cost. Rather than relying on budget line items alone, the ingredients method requires evaluators to identify and value every resource a program consumes, including personnel time, facilities, materials, and even volunteer contributions that never appear in a budget. The result is a complete and more honest picture of cost, which is a necessary precondition for any credible ROI-adjacent analysis.
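The core move of the ingredients method can be sketched in a few lines. The ingredient list and every figure below are hypothetical, invented only to show how unbudgeted resources change the total:

```python
# Sketch of the ingredients method's core move: value every resource the
# program consumes, not just the budget line items. All figures hypothetical.

ingredients = {
    "teacher time (0.2 FTE x 4 teachers, unbudgeted)": 56_000,
    "coach salary (budgeted)": 30_000,
    "materials (budgeted)": 8_000,
    "classroom space (annualized value, unbudgeted)": 6_000,
    "volunteer tutors (valued at market rate, unbudgeted)": 9_000,
}

budgeted_only = 30_000 + 8_000          # what the line items alone show
total_cost = sum(ingredients.values())  # the ingredients-method total

print(f"Budget view:      ${budgeted_only:,}")
print(f"Ingredients view: ${total_cost:,}")
```

The gap between the two totals is precisely the undercounting Levin's method was designed to expose.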
Levin's framework has been updated and extended over the years, and resources are now more accessible than ever. The Center for Cost-Benefit Studies in Education at Teachers College has produced practical guides for practitioners, and researchers like Clive Belfield have continued to build out the methodology for applied use in district and program contexts (Levin, Belfield, Muennig, & Rouse, 2007).
Cost-benefit analysis, which supports ROI calculations, requires converting all outcomes into a common monetary unit. That is a meaningful standard, and when it can be met rigorously, it produces genuinely useful information. But it sets a high bar, one that is often difficult to clear honestly in education, where many of the outcomes we care about most resist dollar conversion.
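To make the stakes of dollar conversion concrete, here is a minimal sketch of how the assumed shadow price drives the resulting benefit-cost ratio. Both shadow prices and all other figures are hypothetical:

```python
# Sketch: a benefit-cost ratio is only as solid as the dollar values
# assigned to the outcomes. All figures are hypothetical.

def benefit_cost_ratio(total_benefits, total_cost):
    """Ratio above 1.0 reads as a positive return."""
    return total_benefits / total_cost

cost = 50_000
outcome_units = 40  # e.g., students moved above a benchmark (hypothetical)

for shadow_price in (1_000, 2_500):  # assumed dollar value per outcome unit
    ratio = benefit_cost_ratio(outcome_units * shadow_price, cost)
    print(f"At ${shadow_price:,} per unit: ratio = {ratio:.2f}")
```

With one defensible-sounding shadow price the program looks like a loss, and with another it looks like a two-to-one return. That sensitivity is why the assumptions behind the conversion deserve to be made explicit.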
Cost-effectiveness analysis (CEA) offers a different approach. Rather than asking "how much return did we get in dollars," CEA asks "how much did it cost to produce a given unit of outcome?" The outcome stays in its natural unit: a reading level gain, a percentage point reduction in chronic absenteeism, a point increase on a well-being measure. No dollar conversion required.
Consider a district comparing two literacy interventions. Both show meaningful gains in reading scores, but one costs significantly less per student. A cost-effectiveness analysis surfaces that difference clearly and actionably, without requiring anyone to assign a dollar value to a third grader’s reading growth. For many of the decisions districts actually face, that is the more honest and more useful frame.
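The comparison above can be sketched directly. The per-student costs and reading gains below are hypothetical, chosen only to show the shape of the calculation:

```python
# Sketch of a cost-effectiveness comparison between two literacy
# interventions. Costs and outcome gains are hypothetical.

def cost_per_unit(cost_per_student, outcome_gain):
    """Cost to produce one unit of outcome (e.g., one month of reading
    growth). The outcome itself is never converted to dollars."""
    return cost_per_student / outcome_gain

# Hypothetical: A costs more per student but produces a larger gain.
intervention_a = cost_per_unit(cost_per_student=600, outcome_gain=3.0)
intervention_b = cost_per_unit(cost_per_student=250, outcome_gain=2.5)

print(f"A: ${intervention_a:.0f} per month of reading growth")
print(f"B: ${intervention_b:.0f} per month of reading growth")
```

The decision-relevant number, cost per unit of growth, falls out directly; no one has to defend a dollar value for a third grader's reading gains.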
This is not to say CEA is simple or assumption-free; it comes with its own methodological demands. But it is a reminder that "ROI" and "cost analysis" are not synonyms. The question of whether an investment was worthwhile can be approached in more than one way, and choosing the right method for the right question is part of doing this work well.
None of this is an argument against ROI or the economic evaluation frameworks that support it. When the question fits the method, these tools are genuinely powerful and worth the investment of rigor they require. The point is simply that the question should always come first. Different evaluation questions call for different methods, and part of doing this work well is knowing which tool belongs in which situation and being honest when the one everyone is asking for may not be the right fit for what we need to know.
The Path Forward
The good news is that the methodology to do this work carefully does exist. Cost-effectiveness analysis, cost-benefit analysis, and return on investment each offer frameworks for moving beyond intuition toward something more rigorous and defensible. Some districts and researchers are using these tools well, and the field is developing in meaningful ways.
The next time ROI comes up in a meeting, it is worth a brief pause before the number gets produced. The question itself is a good one. What it deserves is an honest answer, and getting there starts with knowing what we are actually trying to calculate and whether we have the data and the tools to do it well. If economic evaluation is on your organization's horizon, I am always glad to think through the questions with colleagues who are wrestling with the same challenges.
References
Dee, T. S., & Jacob, B. (2011). The impact of No Child Left Behind on student achievement. Journal of Policy Analysis and Management, 30(3), 418–446. https://www.nber.org/papers/w15531
Heckman, J. J., & Masterov, D. V. (2007). The productivity argument for investing in young children. Review of Agricultural Economics, 29(3), 446–493. https://www.nber.org/papers/w13016
Karoly, L. A., Kilburn, M. R., & Cannon, J. S. (2005). Early childhood interventions: Proven results, future promise. RAND Corporation. https://www.rand.org/pubs/monographs/MG341.html
Levin, H. M., Belfield, C., Muennig, P., & Rouse, C. (2007). The costs and benefits of an excellent education for all of America's children. Teachers College, Columbia University. https://repository.upenn.edu/cbcse/20/
Levin, H. M., McEwan, P. J., Belfield, C. R., Bowden, A. B., & Shand, R. D. (2018). Economic evaluation in education: Cost-effectiveness and benefit-cost analysis (3rd ed.). SAGE Publications.
McGuinn, P. J. (2006). No Child Left Behind and the transformation of federal education policy, 1965–2005. University Press of Kansas.
Schweinhart, L. J., Montie, J., Xiang, Z., Barnett, W. S., Belfield, C. R., & Nores, M. (2005). Lifetime effects: The High/Scope Perry Preschool study through age 40. High/Scope Press. https://www.rand.org/pubs/research_briefs/RB9145.html