State policy

The new and returning governors and their campaign pledges on higher education

What the winning candidates promised for colleges and students.

An evaluation of whether performance funding in higher education works (essay)

More than 30 states now provide performance funding for higher education, with several more states seriously considering it. Under performance funding (PF), state support for higher education is not based on enrollments and prior-year funding levels. Rather, it is tied directly to institutional performance on such metrics as student retention, credit accrual, degree completion and job placement. The amount of state funding tied to performance indicators ranges from less than 1 percent in Illinois to as much as 80 to 90 percent in Ohio and Tennessee.

Performance funding has received strong endorsements from federal and state elected officials and influential public policy groups and educational foundations. The U.S. Department of Education has urged states to “embrace performance-based funding of higher education based on progress toward completion and other quality goals.” And a report by the National Governors Association declared, “Currently, the prevailing approach for funding public colleges and universities … gives colleges and universities little incentive to focus on retaining and graduating students or meeting state needs …. Performance funding instead provides financial incentives for graduating students and meeting state needs.”

But with all this state activity and national support, does performance funding actually work? As we report in a book being published this week, Performance Funding for Higher Education (Johns Hopkins University Press), the answer is both yes and no.

Based on extensive research we conducted in three states with much-discussed performance funding programs -- Indiana, Ohio, and Tennessee -- we find evidence for the claims of both those who champion performance funding and those who reject it. In keeping with the arguments of PF champions, we find that performance funding has resulted in institutions making changes to their policies and programs to improve student outcomes -- whether by revamping developmental education or altering advising and counseling services.

Underpinning those changes have been increased institutional efforts to gather data on their performance and to change their institutional practices in response.

But we often cannot clearly determine to what degree performance funding is driving those changes. Many of the colleges we studied stated they were already committed to improving student outcomes before the advent of performance funding. Moreover, in addition to PF, the states often are simultaneously pursuing other policies -- such as initiatives to improve developmental education or establish better student pathways into and through higher education -- that push institutions in the same direction as their PF programs. As a result, it is nearly impossible to determine the distinct contribution of PF to many of those institutional changes.

Meanwhile, supporting the arguments of the PF detractors, we have not found conclusive evidence that performance funding results in significant improvements in student outcomes -- and, in fact, we’ve discovered that it produces substantial negative side effects. In reviewing the research literature on PF impacts, we find that careful multivariate studies -- which compare states with and without performance funding and control for a host of factors besides PF that influence student outcomes -- largely fail to find a significant positive impact of performance funding on student retention and degree attainment. Those studies do find some evidence of effects on four-year college graduation and community college certificates and associate degrees in some states and some years. However, those results are too scattered to allow anyone to conclude that performance funding is having a substantial impact on student outcomes.

Various organizational obstacles may help explain that lack of effect. Many institutions enroll numerous students who are not well prepared for college. In addition, state performance metrics often do not align well with the missions of broad-access institutions such as community colleges, and states do not adequately support institutional efforts to better understand where they are failing and how best to respond.

Even if performance funding ultimately proves to significantly improve student outcomes, the fact remains that it has serious unintended impacts that need to be addressed. Faced with both state financial pressures to improve student outcomes and substantial obstacles to doing so easily, institutions are tempted to game the system. By reducing academic demands and restricting the enrollment of less-prepared students, broad-access colleges can retain and graduate more students, but only at the expense of an essential part of their social mission: helping disadvantaged students attain high-quality college degrees. Policy makers should address such negative side effects, or they could well vitiate any apparent success that performance funding achieves in improving student outcomes.

In the end, performance funding, like so many policies, is complicated and even contradictory. To the question of whether it works, our answer has to be both yes and no. It does prod institutions to better attend to student outcomes and to substantially change their academic and student-service policies and programs. However, performance funding has not yet conclusively produced the student outcomes desired, and it has engendered serious negative side effects. The question is whether, with further research and careful policy making, it is possible for performance funding to emerge as a policy that significantly improves student retention, graduation and job placement without paying a stiff price in reduced academic quality and restricted admission of disadvantaged students. Time will tell.

Kevin Dougherty is a senior research associate at the Community College Research Center, Teachers College, Columbia University and an associate professor at Teachers College. Sosanya M. Jones is an assistant professor at Southern Illinois University. Hana Lahr is a research associate, Rebecca S. Natow is a senior research associate, Lara Pheatt is a former research associate and Vikash Reddy is a postdoctoral research associate, all with CCRC.

Completion rates are key to Georgia State U's merger with Georgia Perimeter College

Will merger with Georgia State U, a completion rate success story, boost the rock-bottom graduation rates of two-year Georgia Perimeter College?

Iowa regulator agreed with Ashford University's complaint about meddling by federal and California agencies

Ashford University cries foul on veterans' agency and California for meddling in Iowa's decision to yank the for-profit's GI Bill eligibility, and newly released emails show an Iowa official shared that view.

HBCUs cut from North Carolina $500 tuition bill

North Carolina legislation would have cut tuition dramatically, and many at the institutions feared they would lose revenue. Two universities are still covered by the bill.

Kansas cuts criticized for hurting large research universities

State has changed its formula to impose deepest reductions on universities that receive more outside support.

College has become less affordable in most states, threatening to worsen economic stratification

College affordability has declined in 45 states since 2008, with low- and middle-income students in particular feeling the pinch, new study finds.

Public colleges relied less on tuition in 2015

For the second year in a row, public colleges relied more on state funding and less on tuition revenue, reversing a recent trend.

Essay challenging academic studies on states' performance funding formulas

A recent Inside Higher Ed article about the analysis of state performance funding formulas by Seton Hall University researchers Robert Kelchen and Luke Stedrak might unfairly lead readers to believe that such formulas are driving public colleges and universities to intentionally enroll more students from high-income families, displacing less affluent students. It would be cause for concern if institutions were intentionally responding to performance-based funding policies by shifting their admissions policies in ways that make it harder for students who are eligible to receive Pell Grants to go to college.

Kelchen and Stedrak’s study raises this possibility, but even they acknowledge the data fall woefully short of supporting such a conclusion. These actions would, in fact, be contrary to the policy intent of more recent and thoughtfully designed outcomes-based funding models pursued in states such as Ohio and Tennessee. These formulas were adopted to signal to colleges and universities that increases in attainment that lead to a better-educated society necessarily come from doing a much better job of serving and graduating all students, especially students of color and students from low-income families.

Unfortunately, Kelchen and Stedrak’s study has significant limitations, as has been the case with previous studies of performance-based funding. Most notably, as the authors acknowledge, these studies lump together a wide variety of approaches to performance-based funding, some adopted decades ago, which address a number of challenges not limited to the country’s dire need to increase educational attainment. Such a one-size-fits-all approach fails to give adequate attention to the fact that how funding policies are designed and implemented actually matters.

For example, the researchers’ assertion that institutions could possibly be changing admissions policies to enroll better-prepared, higher-income students does not account for differential effects among states that provide additional financial incentives in their formulas to ensure low-income and minority students’ needs are addressed vs. those states that do nothing in this area. All states are simply lumped together for purposes of the analysis.

In addition, the claim that a decrease in Pell dollars per full-time-equivalent student could possibly be caused by performance-based funding fails to account for changes over time in federal policy related to Pell Grants, different state (and institutional) tuition policies, other state policies adopted or enacted over time, changes in the economy and national and state economic well-being, and changes in student behavior and preferences. For example, Indiana public research and comprehensive universities have become more selective over time because of a policy change requiring four-year institutions to stop offering remedial and developmental education and associate degrees, instead sending these students to community colleges.

If any of these factors have affected states with newer, well-designed outcomes-based funding systems differently from other states with rudimentary performance-based funding or no such systems at all, as I believe they have, then there is strong potential for a research bias introduced by failing to account for key variables. For example, in states that are offering incentives for students to enroll in community colleges, such as Tennessee, the average value of Pell Grants at public bachelor’s-granting institutions would drop if more low-income, Pell-eligible students were to choose to go to lower-cost, or free, community colleges.

I agree with Kelchen and Stedrak that more evaluation and discussion are needed on all forms of higher education finance formulas to better understand their effects on institutional behavior and student outcomes. Clearly, there are states that had, and in some cases continue to have, funding models designed in a way that could create perverse incentives for institutions to raise admissions standards or to respond in other ways that run contrary to raising attainment for all students, and for students of color in particular. As the Seton Hall researchers point out, priority should be given to understanding the differential effects of various elements that go into the design and implementation of state funding models.

The HCM Strategists’ report referenced in the study was our attempt to inform state funding model design and implementation efforts. There needs to be a better understanding of which design elements matter for which students in which contexts -- as well as the implications of these evidence-based findings for policy design and what finance policy approaches result in the best institutional responses for students. There is clear evidence that performance funding can and does prompt institutions to improve student supports and incentives in ways that benefit students.

Analysis under way by Research for Action, an independent, Philadelphia-based research shop, will attempt to account for several of the existing methodological limitations correctly noted by Kelchen and Stedrak. This quantitative and qualitative analysis focuses on the three most robust and longest-tenured outcomes-based funding systems, in Indiana, Ohio and Tennessee.

Factors examined by Research for Action will include the type of outcomes-based funding being implemented, specifics of each state’s formula as applied to both the two- and four-year sectors, the timing of full implementation, changes in state policies over time, differences in the percentages of funding allocated based on outcomes such as program and degree completion, and differences in overall state allocations to public higher education. And, for the first time, Research for Action will move beyond the limitations of analyses based primarily on federal IPEDS data by incorporating state longitudinal data, which give a more complete picture.

As states continue to implement various approaches to funding higher education, it is essential to understand the effects on institutional behavior and student outcomes. Doing so will require more careful analyses than those seen to date and a more detailed understanding of policy design and implementation factors that are likely to affect institutional responses. Broad-brush analyses such as Kelchen and Stedrak’s can help to inform the questions that need to be asked but should not be used to draw any meaningful conclusions about the most effective ways to ensure colleges and universities develop and maintain a laser focus on graduating more students with meaningful credentials that offer real hope for the future.

Martha Snyder is a director at HCM Strategists, a public policy advocacy and consulting firm.

Federal spending protects most vulnerable students from state disinvestment, study finds

Low-income and nondependent students have been protected from state disinvestment in higher education during the last two decades because of increasing federal aid spending, a new study finds.
