Policy making is difficult and complex, and evaluating the effects of policy can be just as challenging. Nevertheless, it is important that researchers and policy analysts undertake the hard work of asking difficult questions and doing their best to answer them.

This is what we attempted to do when we undertook a yearlong effort to evaluate the effects of performance funding on degree completions. This effort has culminated in two peer-reviewed papers and one policy brief that summarizes the results of those papers. Our policy brief was widely distributed, and the results were discussed in a recent Inside Higher Ed article.

Recently, Nancy Shulock (of California State University at Sacramento) and Martha Snyder (of HCM Strategists, a consulting firm) responded to our policy brief with some sharp criticism in these pages. As academics, we are no strangers to criticism; in fact, we welcome it. While they rightly noted the need for stronger evidence to guide the performance funding debate, they also argued that we produced “a flawed piece of research,” that our work was “simplistic,” and that it merely “compares outcomes of states where the policy was in force to those where it was not.”

This is not only an inaccurate representation of our study; it also reflects an unfortunate misunderstanding of the latest innovations in social science research. We see this as an opportunity to share some insights into the analytical technique Shulock and Snyder are skeptical of.

The most reliable method of determining whether a policy intervention had an impact on an outcome is an experimental design. In this instance, that would require randomly assigning some states to adopt performance funding while others retained the traditional financing model. Because this is impossible, “quasi-experimental” research designs can be used to simulate experiments. The U.S. Department of Education considers experimental and quasi-experimental research “the most rigorous methods to address the question of project effectiveness,” and the American Educational Research Association actively encourages scholars to use these techniques when experiments are not feasible.

We chose the quasi-experimental design called “differences-in-differences,” in which we compared performance funding states with non-performance funding states (one difference) in the years before and after the policy intervention (the other difference). The difference in these differences told us much more about the policy’s impact than could traditional regression analysis or descriptive statistics. Unfortunately, most of the quantitative research on performance funding is just that, traditional regression or descriptive analysis, and neither strategy can provide rigorous or convincing evidence of the policy’s impacts. For an introduction to the method, see here and here.
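To make the logic concrete, the sketch below shows a stripped-down version of the calculation on a hypothetical state-year panel. The column names, and the use of Python and pandas, are purely illustrative assumptions and are not taken from our papers.

```python
# A minimal differences-in-differences sketch, assuming a hypothetical
# state-year panel with columns: state, year, completions,
# treated (1 if the state adopted performance funding), and
# post (1 for years after adoption). Names are illustrative only.
import pandas as pd

def did_estimate(df: pd.DataFrame) -> float:
    # Mean degree completions in each treated/post cell of the 2x2 design.
    cell_means = df.groupby(["treated", "post"])["completions"].mean()

    # Change over time within the performance funding states...
    change_treated = cell_means[(1, 1)] - cell_means[(1, 0)]
    # ...and within the comparison states.
    change_control = cell_means[(0, 1)] - cell_means[(0, 0)]

    # The estimated policy effect is the difference in these differences.
    return change_treated - change_control
```

In practice, differences-in-differences analyses are typically run as regressions so that other state characteristics can be held constant, but the underlying intuition is the same as in this simple comparison of means.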

Every study has its limitations, and ours is no different. On page 3 of the brief (and in more detail in our full papers), we explain some of these issues and the steps we took to test the robustness of our findings. These include controlling for multiple factors (e.g., state population, economic conditions, tuition, and enrollment patterns) that might have affected degree completions in both the performance funding states and the non-performance funding states. Further, Shulock and Snyder claim that we “failed to differentiate among states in terms of when performance funding was implemented,” when in fact we do control for this, as explained in our full papers.
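For readers curious how such controls and state-specific adoption dates can enter the analysis, the sketch below shows a generic two-way fixed effects specification in the same illustrative setup. The variable names and the exact specification are assumptions for illustration, not the precise model estimated in our papers.

```python
# A sketch of a regression version of the design, assuming the same
# hypothetical panel plus illustrative controls (population, unemployment,
# tuition, enrollment). This is a generic two-way fixed effects model,
# not the exact specification from our full papers.
import statsmodels.formula.api as smf

def did_regression(df):
    # treat_post equals 1 only in state-years after that particular state
    # adopted performance funding, so adoption timing can differ by state.
    model = smf.ols(
        "completions ~ treat_post + population + unemployment"
        " + tuition + enrollment + C(state) + C(year)",
        data=df,
    )
    # Cluster standard errors by state to account for correlated outcomes
    # within a state over time.
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["state"]})
```

Clustering standard errors by state is a common safeguard in panel designs like this one, since a state’s outcomes are correlated across years.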

We do not believe that introducing empirical evidence into the debates about performance funding is dangerous. Rather, we believe such evidence is sorely missing. We also understand that performance funding is a political issue and one that is hotly debated. Because of this, it can be dangerous to promote expensive policies without strong empirical evidence of positive impacts. We wish this debate occurred with greater transparency about these politics, as well as with a better understanding of the latest developments in social science research design.

The authors take issue with a second point that requires a response: their argument that we selected the wrong performance funding states. We disagree. Identifying these states required painstaking attention to detail and member-checks from experts in the field, especially across a 20-year period (1990-2010). In our full studies, we provide additional information beyond what is included in our brief (see endnote 8) about how we selected our states.

The authors suggested that we misclassified Texas and Washington. For Texas, our documents show that in 2009, SB 1 approved “Performance Incentive” funding for the biennium. Perhaps something changed after that year that we missed, and this would be a valid critique, but we have no evidence of it. The authors rightly noticed that our map incorrectly coded Washington State as having performance funding for both four-year and two-year colleges when in fact it applies only to two-year colleges. We classified Washington correctly in our analysis, and it is displayed correctly in the brief (see Table 2).

All of these details are important, and we welcome critiques from our colleagues. After all, no single study can fully explain a phenomenon; only the accumulation of knowledge from multiple sources allows us to see the full picture. Policy briefs are smaller fragments of this picture than full studies, so we encourage readers to look at both the brief and the full studies to form their opinions about this research.

We agree with the authors that there is much our brief does not tell us and that there are any number of other outcomes by which one could evaluate performance funding. Clearly, performance funding policies deserve more attention, and we intend to conduct more studies in the years to come. So far, all we can say with much confidence is that, on average and in the vast majority of cases, performance funding either had no effect on degree completions or had a negative effect.

We feel that this is an important finding and that it does “serve as a cautionary tale.” Policy makers would be wise to weigh our findings alongside other information when deciding whether to implement performance funding in their states and, if so, what form it might take.

Designing and implementing performance funding is a costly endeavor: costly in terms of the political capital expended by state lawmakers; the time devoted by lawmakers, state agency staff, and institutional leaders; and the money committed to these programs. Therefore, bringing rigorous empirical analysis into the discussion and debate is important and worthwhile.

But just as the authors say performance funding “should not be dismissed in one fell swoop,” it should not be embraced in one fell swoop either. This is especially true given the mounting evidence (for example here, here, here, and here) that these efforts may not actually work in the same way the authors believe they should.

Claiming that there is “indisputable evidence that incentives matter in higher education” is a bold proposition to make in light of these studies and others. Only time will tell as more studies come out. Until then, we readily agree with some of the authors’ points and critiques, and we would not have decided to draft this reply had they provided an accurate representation of our study’s methods.
