Policy making is difficult and complex, and evaluating the effects of policy can be just as challenging. Nevertheless, it is important that researchers and policy analysts undertake the hard work of asking difficult questions and doing their best to answer them.
This is what we attempted to do when we undertook a yearlong effort to evaluate the effects of performance funding on degree completions. This effort has culminated in two peer-reviewed papers and one policy brief which summarizes the results of those papers. Our policy brief was widely distributed and the results were discussed in a recent Inside Higher Ed article.
Recently, Nancy Shulock (of California State University at Sacramento) and Martha Snyder (of HCM Strategists, a consulting firm) responded to our policy brief with some sharp criticism in these pages. As academics, we are no strangers to criticism; in fact, we welcome it. While they rightly noted the need for stronger evidence to guide the performance funding debate, they also argued that we produced “a flawed piece of research,” that our work was “simplistic,” and that it merely “compares outcomes of states where the policy was in force to those where it was not.”
This is not only an inaccurate representation of our study, but it shows an unfortunate misunderstanding of the latest innovations in social science research. We see this as an opportunity to share some insights into the analytical technique Shulock and Snyder are skeptical of.
The most reliable method of determining whether a policy intervention had an impact on an outcome is an experimental design. In this instance, it would require that we randomly assign some states to adopt performance funding while others retain the traditional financing model. But because this is impossible, “quasi-experimental” research designs can be used to simulate experiments. The U.S. Department of Education sees experimental and quasi-experimental research as “the most rigorous methods to address the question of project effectiveness,” and the American Educational Research Association actively encourages scholars to use these techniques when experiments are not possible to undertake.
We chose the quasi-experimental design called “difference-in-differences,” where we compared performance-funding states with non-performance-funding states (one difference) in the years before and after the policy intervention (the other difference). The difference in these differences told us much more about the policy’s impact than traditional regression analysis or descriptive statistics could. Unfortunately, most of the quantitative research on performance funding is just that – traditional regression or descriptive analysis – and neither strategy can provide rigorous or convincing evidence of the policy’s impacts. For an introduction to the method, see here and here.
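The core comparison can be sketched in a few lines. This is purely an illustrative sketch with hypothetical numbers (not the authors' actual model, data, or control variables), showing how the two differences combine into a single policy estimate:

```python
# Hypothetical mean degree completions per state, before and after
# the policy's adoption year, for the two groups of states.
treated_pre, treated_post = 10_500, 11_000   # performance-funding states
control_pre, control_post = 10_200, 10_900   # non-adopting states

# First difference: the change within each group over time.
treated_change = treated_post - treated_pre   # 500
control_change = control_post - control_pre   # 700

# Second difference: the estimated policy effect is whatever the
# treated states gained beyond the trend they share with controls.
did_estimate = treated_change - control_change

print(did_estimate)  # -200: completions grew less in adopting states
```

In practice this comparison is run as a regression with an interaction term (treated group × post-period), which is what allows the state-level controls described below to be added; the arithmetic above is the intuition behind that coefficient.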
Every study has its limitations and ours is no different. On page 3 of the brief (and in more detail in our full papers) we explain some of these issues and the steps we took to test the robustness of our findings. This includes controlling for multiple factors (e.g., state population, economic conditions, tuition, enrollment patterns, etc.) that might have affected degree completions in both the performance funding states and the non-performance funding states. Further, Shulock and Snyder claim that we “failed to differentiate among states in terms of when performance funding was implemented,” when in fact we do control for this as explained in our full papers.
We do not believe that introducing empirical evidence into the debates about performance funding is dangerous. Rather, we believe it is sorely missing. We also understand that performance funding is a political issue and one that is hotly debated. Because of this, it can be dangerous to promote expensive policies without strong empirical evidence of positive impacts. We wish this debate occurred with more transparency to these politics, as well as with a better understanding of the latest developments in social science research design.
The authors take issue with a second point that requires a response – their argument that we selected the wrong performance funding states. We disagree. The process of identifying these states required painstaking attention to detail and member-checks from experts in the field, especially when examining a 20-year period (1990-2010). In our full studies, we provide additional information beyond what is included in our brief (see endnote 8) about how we selected our states.
The authors suggested that we misclassified Texas and Washington. With Texas, our documents show that in 2009, SB 1 approved “Performance Incentive” funding for the biennium. Perhaps something changed after that year that we missed, and this would be a valid critique, but we have no evidence of that. The authors rightly noticed that our map incorrectly coded Washington state as having performance funding for four-year and two-year colleges when in fact it is only for two-year colleges. We correctly identified Washington in our analysis and this is displayed correctly in the brief (see Table 2).
All of these details are important, and we welcome critiques from our colleagues. After all, no single study can fully explain a phenomenon; it is only through the accumulation of knowledge from multiple sources that we can see the full picture. Policy briefs are smaller fragments of this picture than are full studies, so we encourage readers to look at both the brief and the full studies to form their opinions about this research.
We agree with the authors that there is much that our brief does not tell us and that there are any number of other outcomes one could choose to evaluate performance funding. Clearly, performance funding policies deserve more attention and we intend to conduct more studies in the years to come. So far, all we can say with much confidence is that, on average and in the vast majority of cases, performance funding either had no effect on degree completions or it had a negative effect.
We feel that this is an important finding and that it does “serve as a cautionary tale.” Policy makers would be wise to acknowledge our findings in the context of other information and considerations when they consider whether to implement performance funding in their states, and if so, what form it might take.
Designing and implementing performance funding is a costly endeavor. It is costly in terms of the political capital expended by state lawmakers; the time devoted by lawmakers, state agency staff, and institutional leaders; and the amount of money devoted to these programs. Therefore, inserting rigorous empirical analysis into the discussion and debate is important and worthwhile.
But just as the authors say performance funding “should not be dismissed in one fell swoop,” it should not be embraced in one fell swoop either. This is especially true given the mounting evidence (for example here, here, here, and here) that these efforts may not actually work in the same way the authors believe they should.
Claiming that there is “indisputable evidence that incentives matter in higher education” is a bold proposition to make in light of these studies and others. Only time will tell as more studies come out. Until then, we readily agree with some of the authors’ points and critiques and would not have decided to draft this reply had they provided an accurate representation of our study’s methods.
David Tandberg is assistant professor of higher education at Florida State University. Nicholas Hillman is an assistant professor of educational leadership & policy analysis at the University of Wisconsin at Madison.
A recent research paper published by the Wisconsin Center for the Advancement of Postsecondary Education and reported on by Inside Higher Ed criticized states' efforts to fund higher education based in part on outcomes, in addition to enrollment. The authors, David Tandberg and Nicholas Hillman, hoped to provide a "cautionary tale" for those looking to performance funding as a "quick fix."
While we agree that performance-based funding is not the only mechanism for driving change, what we certainly do not need are impulsive conclusions that ignore positive results and financial context. With serious problems plaguing American higher education, accompanied by equally serious efforts across the country to address them, it is disheartening to see a flawed piece of research mischaracterize the work on finance reform and potentially set back one important effort, among many, to improve student success in postsecondary education.
As two individuals who have studied performance funding in depth, we know that performance funding is a piece of the puzzle that can provide an intuitive, effective incentive for adopting best practices for student success and encourage others to do so. Our perspective is based on the logical belief that tying some funding dollars to results will provide an incentive to pursue those results. This approach should not be dismissed in one fell swoop.
We are dismayed that the authors were willing to assert an authoritative conclusion from such simplistic research. The study compares outcomes of states "where the policy was in force" to those where it was not -- as if "performance funding" is a monolithic policy everywhere it has been adopted.
The authors failed to differentiate among states in terms of when performance funding was implemented, how much money is at stake, whether performance funds are "add ins" or part of base funding formulas, the metrics used to define and measure "performance," and the extent to which "stop loss" provisions have limited actual change in allocations. These are critical design issues that vary widely and that have evolved dramatically over the 20-year period the authors used to decide if "the policy was in force" or not.
Treating this diverse array of unique approaches as one policy ignores the thoughtful work that educators and policy makers are currently engaged in to learn from past mistakes and to improve the design of performance funding systems. Even a well-designed study would probably fail to reveal positive impacts yet, as states are only now trying out new and better approaches -- certainly not the "rush" to adopting a "quick fix" that the authors assert. It could just as easily be argued that more traditional funding models actually harm institutions trying to make difficult and necessary changes in the best interest of students and their success (see here and here).
The simplistic approach is exacerbated by two other design problems. First, we find errors in the map indicating the status of performance funding. Texas, for example, has only recently implemented (passed in spring 2013) a performance funding model for its community colleges; it has yet to affect any budget allocations. The recommended four-year model was not passed. Washington has a small performance funding program for its two-year colleges but none for its universities. Yet the map shows both states with performance funding operational for both two-year and four-year sectors.
Second, the only outcome examined by the authors was degree completions, as it "is the only measure that is common among all states currently using performance funding." While that may be convenient for running a regression analysis, it ignores current thinking about appropriate metrics that honor different institutional missions and provide useful information to drive institutional improvement. The authors make passing reference to different measures at the end of the article but make no effort to incorporate any realism or complexity into their statistical model.
On an apparent mission to discredit performance funding, the authors showed a surprising lack of curiosity about their own findings. They found eight states where performance funding had a positive, significant effect on degree production but rather than examine why that might be, they found apparent comfort in the finding that there were "far more examples" of performance funding failing the significance tests.
"While it may be worthwhile to examine the program features of those states where performance funding had a positive impact on degree completions," they write, "the overall story of our state results serves as a cautionary tale." Mission accomplished.
In their conclusion they assert that performance funding lacks "a compelling theory of action" to explain how and why it might change institutional behaviors.
We strongly disagree. The theory of action behind performance funding is simple: financial incentives shape behaviors. Anyone doubting the conceptual soundness of performance funding is, in effect, doubting that people respond to fiscal incentives. The indisputable evidence that incentives matter in higher education is the overwhelming priority and attention that postsecondary faculty and staff have placed, over the years, on increasing enrollments and meeting enrollment targets, with enrollment-driven budgets.
The logic of performance funding is simply that adding incentives for specified outcomes would encourage individuals to redirect a portion of that priority and attention to achieving those outcomes. Accepting this logic is to affirm the potential of performance funding to change institutional behaviors and student outcomes. It is not to defend any and all versions of performance funding that have been implemented, many of which have been poorly done. And it is not to criticize the daily efforts of faculty and staff, who are committed to student success but cannot be faulted for doing what matters to maintain budgets.
Surely there are other means -- and more powerful means -- to achieve state and national goals of improving student success, as the authors assert. But just as surely it makes sense to align state investments with the student success outcomes that we all seek.
Nancy Shulock is executive director of the Institute for Higher Education Leadership & Policy at California State University at Sacramento, and Martha Snyder is senior associate at HCM Strategists.
If you were a casual reader of American newspapers, you would think that the fate of the humanities was in doubt. Polishing off a 30-year-old critique, most famously offered by Allan Bloom in 1987’s The Closing of the American Mind, an acerbic corps of doubters – David Brooks of The New York Times is in the vanguard -- wonders if scholars of literature have lost their way, substituting politically chosen texts for classics, stripping away the basic function of the humanities, defined gloriously as: to help us make sense of our world. Enrollments are down, they note, which means that students are shifting their efforts into the sciences, or business, or technology. The doubters want us to believe that the wonderful dreamers who once taught at Chicago or Penn or Yale are, sorrowfully, gone.
This skeptical cohort is often partnered with another, angrier, and more politically active group, which questions whether a college degree is even worth the money these days. Hack the degree, they say. Take a MOOC. If you have to go, enroll at Stanford, or choose your major based on your starting salary after graduation. This platoon of nail-biters and shouters asks us – the big "us," that is, our fractious national family – to distrust the words of tenured radicals, to seek an end to administrative bloat, to treat higher education, basically, as a commodity.
As many have noted – Michael Bérubé and Scott Saul foremost among them – this is all generally hogwash. The humanities remain popular with students, and the great bulk of student credit hours in the humanities are still generated by courses that discuss Important Events or Great Books or Big Thinkers. Much of the decline in enrollments can be attributed to long-term trends – for instance, changes in the gender distribution of majors as universities open doors into STEM fields for students, or the rise of new interdisciplines that eat away at our notion of what counts as the core of the humanities. Professors still love their subjects, even if they don’t wear tweed and even if some of them are women or people of color, even if they sometimes look different, dress different, talk with accents, come with different histories, and sometimes even use foreign languages in the classroom. Great lectures are still given, by "star" faculty and wandering adjuncts alike. Students are still inspired, even if they read William Faulkner alongside Toni Morrison.
I’m in lockstep with Bérubé and Saul, but I also think we need to continually reframe this conversation, to focus on the single greatest threat to higher education: the defunding of public colleges and universities and the consequent overemphasis on revenue through student credit hours. The threat to the humanities – really, to higher education comprehensively – isn’t caused by a loss of passion or direction or focus, as Brooks and his chorus of doubters want us to believe. Nor is it caused by bloat in the administrative middle.
It comes from the transformation of the day-to-day interactions between students and faculty, a transformation that is ensured by an emphasis on vast classes, big draws, and throngs of students. And that emphasis flows – in a straight and narrow line – directly from the declining state contributions to public universities and, more abstractly, from our recent consensus that profit alone is the surest measure of importance. It is great that Harvard University wants to pour more money into the humanities, but such an investment is meaningless, really, if every place that isn’t Harvard, or Yale, or Princeton has to trim and cut in one corner to build and grow in another (let alone to cover the skyrocketing health care costs of employees).
Who am I to contribute to this conversation? I should not be here today. I should be silent, or muted, or fixed in the background, a security guard or a mechanic or a grocery clerk – noble professions, I know, but not generally featured in conversations like this one. There was nothing inevitable about my present social position. Indeed, if you were a gambler, you’d have wagered against me. I am no David Brooks, you see. But I am just as much a creature of the humanities.
I was a screw-up, a wastrel, washed-out and adrift for a long time. And headed to nowhere-in-particular very slowly. A generally lackluster youth from a small, forgettable town, I was a C- student at the end of high school, trending down and not up. I enrolled -- at my mother’s loving insistence -- at a big public university, signed up to major in political science, and bombed out fast and hard, earning a 0.5 GPA in my first semester.
With my failure thus well proven, I moved out to a trailer park at the dusty, quiet, southern tip of New Jersey’s Long Beach Island, and went to work in a used bookstore. I rode my bicycle, drove an old station wagon, grew my hair long, drank Miller Lite in tall, dark bottles, smoked Camel cigarettes, and genuinely enjoyed my early hermitage.
The institution that saved me from this enthralling vagabondage wasn’t a church, or a gang, or prison, or the family. It wasn’t football or baseball or basketball. It wasn’t "America." I didn’t read Kerouac. I didn’t hear an inspirational speech on television. It was a small place, Richard Stockton College, tucked away in the Pine Barrens, perhaps the simplest and most basic expression of our belief in an educated adult citizenry. I signed up – not knowing what I meant to do, really – and then showed up, ready for absolutely nothing.
My saviors weren’t clerics or wardens or coaches. They were teachers. They wore mismatched socks, drank coffee by the gallon, and loved ideas, evidence, and debate. They weren’t generalists but specialists, with hard-earned knowledge about medical science in Scotland, or library readership in the early Republic. I couldn’t tell you anything about their politics, but I could paint you a richly detailed portrait of their presence at the head of the classroom. From what I could see, they lived cheaply, responsibly, and haphazardly, drawing sustenance from the material of their research, which they shared, twice or three times a week, with a group of 35 or so history majors, mouth-breathers all. These strange masters of the blackboard drove cars just like mine, except that theirs were filled with random slips of paper and wildly strewn books and file folders. They gave extraordinary, dazzling lectures, even though much of the time, I could not understand anything they were saying. They were a live cliché.
I wish I could say that their job was easy, that I turned myself around, figured it out, and bootstrapped my way back to the right track. The truth is, I was hard work, just like everyone else. In red ink, they implored me to rewrite and rethink. In a cascade of office meetings and hallway conversations they pored over my paragraph formation, transition sentences, basic grammar and syntax.
They didn’t see anything special in me, of course, because there just wasn’t anything special to see. They merely believed that this was what they should do for everyone who walked into their classroom. They had seen thousands of people before I arrived, and they would see thousands after I was gone. They weren’t naïve or wide-eyed, and they didn’t imagine themselves as heroic or romantic. They were professional. And, when I look back on the last 20 years of my life, it wasn’t their lecture material that made the difference. It was the time they spent with me outside of class.
Of course, I was lucky. I was born in 1970, at a moment when most states believed in adequately funding higher education. I grew up in a place that had an enhanced system of public universities and colleges, all staffed with well-trained, research-focused faculty, people with published expertise in a specific field, with a dedication to craft. And I went to school and college at a time when professors – and schoolteachers more generally – were respected for their role in civil society, and trusted to patiently instruct and constructively challenge slack-jawed young men and women like me.
Raised in the idyllic world of yesteryear, I honestly never once thought to measure my education – or my intelligence, or my civic worth – by my starting salary after graduation. I had been making $78 a week at the bookstore, borrowing money for college, and charging meals and gas and cigarettes on a credit card. I just assumed that this pattern would continue forever. Even now, I am surprised that I didn’t just keep working at the bookstore, didn’t just keep shivering my way through the cold, lonely winters and hot, busy summers of what is colloquially known as “LBI,” didn’t just keep grifting my way to a full stomach.
When it comes to higher education, I’m not nostalgic for the way things used to be. I’m indebted to those who came before, to those who made this current "me" possible. I’m unhappy that we can’t do the same here and now for others. And I think the problem is quite clearly not about escalating salaries or administrative expansion.
Long after my redemption, I spent nine years teaching at a public university. For most of that time, I was running an interdisciplinary program at the very heart of the humanities. We were charged to grow an "honors-style" major, with small classes, lots of writing, and intense faculty and student interactions. In short, to create the experience of a small liberal arts college -- an experience I know well -- within a 35,000-student university. Our capacity to grow was the result of a clever administrator, who -- in the face of a statewide budget freeze -- added on an additional fee for incoming students, and used that vast pot of money to shift growth toward the emerging interdisciplines. But this "honors-style" dream was chipped away slowly by the annual news reports of state budget cuts. We were pressed to create bigger courses, to put "fannies in the seats." We ended our enhanced foreign language requirement because it kept our major count down. We were encouraged to open up our enrollments, to create a big survey course at the front end of the major, a course that became so large that we had to trim off the writing requirement and give multiple-choice exams. We spent hours on assessment data, all required by the state higher education board, and, as a consequence, less and less time on students.
Not surprisingly, some of us left, hoping to find somewhere else something rather like what we’d experienced as young adults, some place where we could do for every student what had been done for us.
Wherever we are now, the stakes, for all of "us," in this higher education debate are high. Few students are ready, right at the start, to be inspired by a lecture on Plato. Most need help taking notes, or forming a thesis statement, or just thinking hard about anything. Still, every time a university has to add 500 students to the freshman class to make up for a budget cut without also hiring faculty, and every time an administrator – typically, a good person trying to save an institution – has to ask for a significantly larger lecture class without having the funds to beef up the support structure for students, we make stories like mine less likely.
When we describe the lecture as a delivery mode, as a site for Great Thinkers to Expound on Big Ideas, and not as the public expression of hundreds of miniature conversations in which one or two students work through material, and expression, and form with a single person, and we don’t emphasize the equal importance of those behind-the-doors sessions, we do damage to the representation of great teaching. We make it possible to believe that "big" is better. Without those conversations, it isn’t just the humanities that gets shortchanged – it is all of us.
Today’s jobs might not be yesterday’s, but they still require the ability to write and speak clearly, to analyze evidence and form opinions, to solve problems with research, to reach an informed opinion and to persuade others, through a presentation of logic or facts or material, that your opinion is worth their attention. This is what higher education is supposed to do. Fulfilling this mission requires an attention to scale, and a commitment to making it possible for faculty and students to work together closely. In the big and small publics -- the great post-WWII laboratories of social mobility, from which Brooks and his cohort are so greatly distanced -- we simply can no longer teach these skills or create this scale of interaction. And if these centers of gravity fail, everything else will, too.
This should make ordinary Americans angry. It used to be that my story could be your sons' and daughters' story, but not any longer. Don’t blame the teachers in the classroom, though. They still work as hard as they can – they still drink too much coffee, still drive beat-up cars, still occasionally mismatch their socks – to deliver sparkling lectures, to rouse students to believe in the passionate study of humanity, to expand their intellectual horizons. And they try very hard to work closely with students in need, students with talent, and students who seem to want more. Don’t blame the administrators either. Most of them are simply trying to stave off the very worst consequences of this transformation. Blame the folks with the budget ax. And blame those who vote them in.
Matthew Pratt Guterl is professor of Africana studies and American studies at Brown University.