Jumping to Conclusions
Academic studies that have been critical of state performance-based funding policies lack the data to back up their conclusions, writes Martha Snyder, and fail to account for the design and implementation of these policies.
A recent Inside Higher Ed article about the analysis of state performance funding formulas by Seton Hall University researchers Robert Kelchen and Luke Stedrak might unfairly lead readers to believe that such formulas are driving public colleges and universities to intentionally enroll more students from high-income families, displacing much less well-off students. It would be cause for concern if institutions were intentionally responding to performance-based funding policies by shifting their admissions policies in ways that make it harder for students who are eligible to receive Pell Grants to go to college.
Kelchen and Stedrak’s study raises this possibility, but even they acknowledge the data fall woefully short of supporting such a conclusion. These actions would, in fact, be contrary to the policy intent of more recent and thoughtfully designed outcomes-based funding models pursued in states such as Ohio and Tennessee. These formulas were adopted to signal to colleges and universities that increases in attainment that lead to a better-educated society necessarily come from doing a much better job of serving and graduating all students, especially students of color and students from low-income families.
Unfortunately, Kelchen and Stedrak’s study has significant limitations, as has been the case with previous studies of performance-based funding. Most notably, as the authors themselves acknowledge, these studies lump together a wide variety of approaches to performance-based funding, some adopted decades ago, that were designed to address challenges well beyond the country’s dire need to increase educational attainment. Such a one-size-fits-all approach fails to give adequate attention to the fact that how funding policies are designed and implemented actually matters.
For example, the researchers’ assertion that institutions could be changing admissions policies to enroll better-prepared, higher-income students does not account for differential effects between states that provide additional financial incentives in their formulas to ensure that the needs of low-income and minority students are addressed and those states that do nothing in this area. All states are simply lumped together for purposes of the analysis.
In addition, the claim that a decrease in Pell dollars per full-time-equivalent student could possibly be caused by performance-based funding fails to account for changes over time in federal policy related to Pell Grants, different state (and institutional) tuition policies, other state policies adopted or enacted over time, changes in the economy and national and state economic well-being, and changes in student behavior and preferences. For example, Indiana public research and comprehensive universities have become more selective over time because of a policy change requiring four-year institutions to stop offering remedial and developmental education and associate degrees, instead sending these students to community colleges.
If any of these factors have affected states with newer, well-designed outcomes-based funding systems and other states with rudimentary performance-based funding or no such systems at all, as I believe they have, then there is strong potential for a research bias introduced by failing to account for key variables. For example, in states that are offering incentives for students to enroll in community colleges, such as Tennessee, the average value of Pell Grants at public bachelor’s-granting institutions would drop if more low-income, Pell-eligible students were to choose to go to lower-cost, or free, community colleges.
I agree with Kelchen and Stedrak that more evaluation and discussion are needed on all forms of higher education finance formulas to better understand their effects on institutional behavior and student outcomes. Clearly, there are states that had, and in some cases continue to have, funding models designed in a way that could create perverse incentives for institutions to raise admissions standards or to respond in other ways that run contrary to raising attainment for all students, and for students of color in particular. As the Seton Hall researchers point out, priority should be given to understanding the differential effects of various elements that go into the design and implementation of state funding models.
The HCM Strategists report referenced in the study was our attempt to inform state funding model design and implementation efforts. There needs to be a better understanding of which design elements matter for which students in which contexts, as well as the implications of these evidence-based findings for policy design and which finance policy approaches produce the best institutional responses for students. There is clear evidence that performance funding can and does prompt institutions to improve student supports and incentives in ways that benefit students.
Analysis under way by Research for Action, an independent, Philadelphia-based research organization, will attempt to account for several of the existing methodological limitations correctly noted by Kelchen and Stedrak. This quantitative and qualitative analysis focuses on the three most robust and longest-tenured outcomes-based funding systems, in Indiana, Ohio and Tennessee.
Factors examined by Research for Action will include the type of outcomes-based funding being implemented, specifics of each state’s formula as applied to both the two- and four-year sectors, the timing of full implementation, changes in state policies over time, differences in the percentages of funding allocated based on outcomes such as program and degree completion, and differences in overall state allocations to public higher education. And, for the first time, Research for Action will move beyond the limitations of analyses based primarily on federal IPEDS data by incorporating state longitudinal data, which give a more complete picture.
As states continue to implement various approaches to funding higher education, it is essential to understand the effects on institutional behavior and student outcomes. Doing so will require more careful analyses than those seen to date and a more detailed understanding of the policy design and implementation factors that are likely to shape institutional responses. Broad-brush analyses such as Kelchen and Stedrak’s can help inform the questions that need to be asked, but they should not be used to draw meaningful conclusions about the most effective ways to ensure that colleges and universities develop and maintain a laser focus on graduating more students with meaningful credentials that offer real hope for the future.
Martha Snyder is a director at HCM Strategists, a public policy advocacy and consulting firm.