Despite the title of this article, I’m not opposed to the core idea: it obviously makes more sense to fund universities (the issues for two-year colleges are often quite different, so I’m leaving them out of this discussion) on the basis of course completions than on the traditional basis of enrollments. Adding a further focus on outcomes by counting graduates in some way could also make sense, though the inputs and assessments would have to be fairly weighted — not easy to do.
The problem with performance funding isn’t in the formulas; it’s in the unreasonably exaggerated expectations for results.
The promoters of performance funding — let’s call them the resultari to save space — are good and capable people with laudable goals whose belief that performance funding will effect major change in higher education appears to stem from two basic assumptions: 1) university leaders are inept managers; 2) both university leaders and faculty care too little about quality and success in undergraduate education.
Examining the Assumptions Behind the Push for Performance Funding
At their core, the assumptions of the resultari stem from a belief that universities suffer from internal contradictions, ones that inevitably flow from the self-interest of both faculty and administrators. Their view is that faculty are too focused on their research and that faculty and administrators are too often intent on enhancing institutional prestige through such things as unneeded programs (principally in the graduate and professional areas). These self-interested emphases, the resultari argue, distract from the real business of the university — graduating as many students as possible with the highest levels of learning and at the lowest possible cost.
Specific manifestations of this alleged administrator-faculty malfeasance include: a general unwillingness to make use of data on productivity and learning, innocence of where money is actually spent, lack of accountability for attrition, a resistance to using technology, and a failure to measure student learning. Performance funding, we are told, will change all this because it will force priorities back to where they should be. And then, when the death grip of administrator-faculty self-interest is wrested from the helm, the ship of higher education will steer a new and better course.
A History of Recent Times
If we accept the resultari’s assumptions, then they are certainly right about the profound impact that performance funding will bring. But are their beliefs about the state of the university correct? Let’s look at what’s happened in public higher education in the past few decades.
First, the resultari often cite as an illustration of inept university management the fact that tuition — even after declining state support — is increasing faster than inflation. There are many flaws in this argument, not least the point, recently cited in these pages, that it ignores basic economics: in a knowledge economy, the cost of all services that rely on highly educated individuals has been going up relative to that of the manufactured goods and other fruits of unskilled labor that comprise so much of the CPI market basket. And, compared to many businesses, higher education has less ability to improve productivity through outsourcing and automation.
Second, the problem of misplaced faculty and administrative priorities is largely a thing of the past, one belonging chiefly to the 1970s and, to some extent, the 1980s. Universities have undergone radical internal changes in the last 30 years. I saw fine scholars turned down for promotion because of ineffective teaching at the university where I worked (Ohio State) as early as the late 1970s, and I know from conferences and other contacts that we weren’t atypical. The professoriate’s sense of entitlement, largely a product of the faculty shortages of the 1960s, has long since worked its way out of the system. Today’s younger faculty are highly attuned to the importance of student learning, and even at major research-focused universities this aspect of the job garners a significant share of their energy and creativity.
I’ve worked with several dozen presidents in both Ohio and South Carolina, and I haven’t met one in the last 15 years who didn’t have student success as his or her highest priority — a statement that most specifically includes leaders at large public research universities. And it isn’t just talk: these presidents have followed up, and are following up, with incentives, programs, and rigorously balanced promotion and tenure criteria.
While the big and small universities changed early, it’s true that some mid-sized institutions continued to be a problem well into the 1980s, and in a few cases into the 1990s. Far too many of these institutions suffered the reign of what I call Louis XIV presidents (l’université, c’est moi), who focused on building prestige in the form of doctoral and professional programs that were typically of indifferent quality. But my observation is that this plague of academic locusts has passed and now shows up as just the occasional grasshopper.
There are several reasons why the Louis XIV presidents have met the fate of Louis XVI: state systems have woken up to issues of quality and duplication and cracked down — in many cases drawing public attention to the weak success of the doctoral and professional graduates in the job market; and trustees, often benefiting from the good advice of the Association of Governing Boards of Universities and Colleges, have learned to be vigilant. There was a time when the chairman of the English department could look around the table at a faculty meeting and say, “Damn! There are more than 20 of us now! Let’s offer the Ph.D.!” and see the dream quickly and easily realized. No more (though the unfortunate consequences of many of these earlier decisions linger).
Like generals whose philosophy is shaped by the last war, the resultari appear to be busy preparing to do battle with the universities of the 1970s and ‘80s. Unfortunately, once we’ve constructed the computer-derived version of the Maginot Line, with its walls of data and turrets of formulas, we’ll peek over the top and see there’s no one there.
Another dimension of the misplaced priorities category, according to many critics, is that universities show their lack of appreciation for undergraduate learning by failing to measure it, particularly in general education.
I certainly agree that colleges and universities have been terribly deficient in assessment. We’ve operated on the “infectious disease” principle — the faculty are critical thinkers, the students are in contact with them, so…
No matter how you view the issue, higher education’s “take our word for it” approach as an answer to questions about student learning is unconscionable. That being said, I strongly believe that standardized testing isn’t a solution but a new problem. It’s an approach that creates powerful contradictions and also flies in the face of experience — notably, we know from the history of quality management that the “inspect the product at the end of the line” approach is certain to fail. I won’t attempt to recap the extensive literature in this area. My bottom line is that rigorous, improvement-focused, campus-level, non-standardized assessment (and reporting) is critical — and sufficient.
Ever More Data?
A core part of the performance funding push, and one that I find especially alarming, is the relentless, almost ritualistic, advocacy for more and better data. It seems that American education bureaucrats generally suffer from OCD (Obsessive Computational Disorder). A part of the problem in the higher education world is that the “more data” argument is being pushed by folks — I call them the datarati — who have spent their lives with numbers and, like the man whose only tool is a hammer and who therefore thinks everything looks like a nail, they naturally see data as a primary solution. But getting human-based data down to the decimal point is not necessarily a good investment, and this leads us to another problem of expectations.
Education datarati purport to draw their inspiration from business. One source is the late Harold Geneen, CEO of ITT, famous for saying that the kind of business didn’t matter: if you knew the numbers inside out, you knew the company inside out. Geneen may have made other contributions to business, but this observation is, on its face, nonsense.
If you had applied Geneen’s thinking to IBM in the mid-‘80s (as its leaders did), you’d have seen an incredibly strong business numbers-wise but failed to notice that a major shift in technology was about to bring the company to the brink of bankruptcy. There are plenty of other business examples of this data-centric blindness. A modern government illustration is the recent federal Race to the Top competition, where data systems count for 9% of the total score, nearly as much as the 10% allotted to “turning around the lowest-achieving schools.”
Based on extensive personal experience, I find the allegation that universities don’t use data amazing. In the fat years of the ‘60s and to a certain extent the ‘70s, there were certainly gross inefficiencies resulting from lack of attention. But, after waves of budget cuts in recent decades, my observation is that presidents and senior staff are very knowledgeable about where the money goes and about where efficiencies can be found, and they do care deeply about attrition and do assign responsibility for it.
How about those longitudinal data systems, ones that will allow for the tracking of students from K-12 through college? I agree these are likely to have some value. But I strongly suspect the probable impact of these systems is seriously (albeit unintentionally) oversold. Why?
The longitudinal data approach anticipates a system that will be able to tell us, for example, that Mme. Maron’s 11th-grade advanced algebra class at Hogwarts High is turning out students who are weak at college algebra. So far so good, but I suspect that in most cases, when the Datarati SWAT Team arrives at the school, they’ll find that leaders were already well aware that Mme. Maron was a concern. And thus the second issue: it will be much easier to find the sources of problems than to resolve them. An automobile manufacturer detecting faulty parts can force suppliers to revise their practices and, if that doesn’t work, go to another firm. But, as we are reminded nearly every day in the papers, it’s not so simple with teachers (quite often because it’s not really their fault).
I also believe we are seriously underestimating the cost of these new data palaces. Projects to connect disparate information systems have an amazing ability to always cost more than initially projected — usually a lot more. And that’s just putting them in place. I’ve not seen evidence that people are thinking carefully about the long-term costs of maintenance and analysis. There’s no denying there can be value in the systems, but balancing that against true costs isn’t being done.
Like the resultari, the datarati are winning. My office is now making its first hires in two years, and they will all be for a federally funded longitudinal data system. After years in which we have bled vital staff positions, this project is nowhere near the top of our priority list. Worse, when the federal dollars are gone, the first of any new state monies will likely have to go to maintaining the effort. In short, the new data system will be the devourer of rational priorities; I’m thinking we’ll call it Grendel.
Failure to Use Technology
Another issue to consider is whether more use of technology will sharply lower the cost of instruction and in consequence, as a great many reformers argue, contain costs and make higher education much more affordable. The idea of transformational change through technology is certainly an appealing one, but is it based on solid fact or does it contain an important amount of wishful thinking? Unfortunately, I think it’s the latter. I’ll cite three reasons.
First, instructional technology buffs have a habit of using “new” as an explicit or implicit modifier. But the reality is that the technology that’s around today has been there for quite a while. Faculty were using computers to offload drill from the classroom 35 years ago and such use has been pervasive for at least a decade and a half (remember that the Web and HTML have been around for this long). Given the history, and the fact that today’s technology is better but not dramatically different, we don’t have evidence to support the idea that breakthroughs in transforming learning and productivity through technology are on the horizon.
Carol Twigg, head of the excellent Course Redesign effort at the National Center for Academic Transformation, argues that the potential is there with existing technology, but laments that faculty want to keep technology as an adjunct to instruction and will not take the major steps needed to allow it to lower costs. Twigg knows the topic better than anyone else, so I won’t dispute her conclusion about faculty reluctance. On the other hand, I really don’t buy the idea that their recalcitrance simply reflects a desire to preserve their own jobs. Instead, I think the faculty are generally right in seeing limits on replacing people with machines.
This leads to the second point about technology — the human side. Talk to people who deal with students and they’ll tell you there’s a psychological breaking point — most people like to work directly with other people and don’t want to do everything on the computer. Highly motivated adults are certainly an exception (and I’m helping to develop programs based on that belief). But it’s not the same for undergraduates. For example, a recent study showed that three-quarters of those surveyed believed that “online courses are not as appropriate for traditional-age college students, who they believe are better served by classroom courses.”
A third dimension of the potential of technology can be seen in the evidence about staffing. All of the college and university leaders I’ve talked to say that, with few exceptions, online courses are more expensive than the classroom equivalent because they require more instructor time, thereby forcing smaller sections. This also appears to be the case in the for-profit sector, where institutions typically charge more for online education than for the classroom equivalent — about 30% more, in the case of the University of Phoenix. Leaders at this university told me the same thing as their public peers: because of the large number of one-to-one vs. one-to-many communications that occur online, instructors can handle fewer students per section, and costs therefore go up.
If you add these three things — 1) the already existing proliferation of technology in instruction; 2) the natural limits to human-computer interaction (in this vein I encourage people to read E.M. Forster’s powerful short story “The Machine Stops”); and 3) the fact that experience so far does not suggest technology will lower costs (indeed, much of the evidence points in the opposite direction) — then it’s hard to be as sanguine as the resultari about more technology leading to sharply lowered costs.
Put another way: Will technology give us further improvements in learning while also helping to reduce costs? Sure. Will the change be transformational and make college notably more affordable for traditional undergraduates? No.
What Do Proprietary Colleges Tell Us?
Critics and supporters of higher education alike tend to forget that public and private colleges and universities aren’t the only game in town. There’s a growing for-profit sector. In considering the management/efficiency issue, what can we learn from them?
Let’s begin with an important question: Why don’t the for-profits compete on price?
The easy answer is that the public has an accepted price it is willing to pay for education, so the proprietaries can use savings from greater efficiencies to add to their profit rather than to lower their prices. It’s true that in luxury markets “low price” doesn’t compete well, or at least not entirely (BMW and Lexus do advertise discounts). So, if higher education is presumed to be this kind of market, a low-price alternative probably wouldn’t work — even if you could show comparable learning outcomes (“Learn-A-Lot U — Just as Good as Harvard at Half the Price”).
But this argument has several flaws. First, while the for-profit sector has long been doing well for investors (with the usual market hiccups in stock prices), I haven’t seen evidence that they are making the kind of profits that, if applied to cost reduction at public institutions, would equate to significantly lower tuition — even in the face of lower state support. Again, remember this kind of change is what some are saying we would get if only public universities would operate efficiently.
Second, the for-profits compete against each other to a significant extent, and it would be natural for someone to lower prices at least temporarily to grab market share (as the luxury vehicle makers do). At least that would be true if someone did in fact have much lower costs.
Third, you really ought to see price competition in the lower part of the market — technical programs — where luxury effects don’t apply. But it’s not there. Indeed, after several decades of working with the technically focused for-profit institutions, my observation is that a major part of their marketing sell is that they are “hands-on,” and that students will be able to work closely with accomplished practitioners. Emphasizing personal instruction by highly qualified individuals in a knowledge economy is not how you lower costs.
Here are some suggestions for the resultari, the datarati, et al.
Do go ahead and, over time and with appropriate care for balance, implement formulas that primarily reflect outcomes — especially course completions and, to the extent reasonable and practicable, graduation. This only makes sense; funding formulas should always have been constructed on outcomes. But…
Do not describe the change to outcomes as a major structural improvement in higher education that will bring great benefits to students and the public. If the formulas are carefully weighted to reflect the varying inputs — principally student preparation and background in educational culture — we’ll find that, with few exceptions, differences between institutions will be very small and not statistically significant. Those using crude data to support winner-loser formulas point out that right now some universities are doing better than others in, for example, graduation rates. Of course. In part that’s a function of different inputs. If we use just a few simple measures to compare disparate entities, we’re going to get simpleminded results that won’t convince anyone: garbage in, garbage out.
Also, even after you control for inputs, variation will exist. In any environment with a lot of players it’s always the case that at any given time some are doing better than others. That’s not a reason to conclude, as resultari seem to do, that the ones doing less well are lagging because they really don’t care and aren’t trying hard. The punitive spirit of formulas designed to create winners and losers serves no one well, not least in the gratuitous damage to students at the allegedly “underperforming” institutions.
Do invest to improve performance. Universities are more deprived of spare funds than at any time in recent history, with the result that start-up monies for new faculty-led initiatives, such as rethinking strategies in general education, are almost impossible to find. State-level incentive programs, ideally ones that are peer-reviewed, could be used to further encourage faculty creativity in both improved learning and improved productivity (Lumina’s Tuning USA project appears to be a great example of what should be done). Investments of this kind will have the added benefit of setting a positive tone.
Do not add more data unless state and campus leaders agree it passes two key tests: 1) Will the new information lead to better decisions in the real world? 2) What is the opportunity cost — are there other things we could do with the money that would have greater impact? It’s curious that the “culture of evidence” crowd seems to think there’s little requirement to provide evidence for the value of new data before it’s gathered. A better approach for the datarati would be to shift gears, focus only on what can be shown to matter, and help us avoid creating a voracious new consumer of educational resources without clear evidence of proportional benefit.
Finally, do change the tone. I agree that there are actions we can take to suppress the cost spiral — among the things that appeal to me are dropping uncompetitive doctoral programs, more shared and/or outsourced instructional and operating services, and in some cases even merging institutions — but I don’t believe the best way to advocate for change is by implying that only the self-interested will disagree with me. We need fewer leaders of numerical lynch mobs and more people who are willing to offer a truly thoughtful challenge along the lines of, “We recognize these are good and capable people leading our universities and our faculties, and that they are up against some really tough problems. But, as with any organization, there’s more we can do and we have some suggestions.”
I can’t quantify the difference this change in tone would make, but I think it would be huge.
Garrison Walters is executive director of the South Carolina Commission on Higher Education.
This month the University of Texas System released 821 pages of “productivity” data for all faculty members and graduate assistants employed at the nine academic campuses that make up the UT System. As an adjunct lecturer for UT-Arlington, I am listed, along with my dear friends and colleagues in the Department of Sociology and Anthropology, on pages 91 through 93. We are sorted alphabetically, our names stacked one atop the other much like our mailboxes in the departmental office, and beside each is information about teaching loads, external research funding, cumulative grade-point averages, and compensation received in the form of salaries and benefits.
In public conversations, those taking place in print and online media, it is the report itself, rather than its content, that is at the center of the controversy. Publication of detailed information about the professional activities of those employed in postsecondary education has reignited long-running debates about the often conflicting ideals of individual privacy and institutional transparency, the relative values of teaching and research, and the meaning of and purpose of academic freedom.
As these are presented in op-ed pieces and blogs, there is the sense that while academics’ opinions on the issues are complex and numerous, the positions from which they may be formulated are simple and number only two: tenured or tenure-track professor. Further, much of the discussion, at both the national and local levels, has centered on the flagship campus at Austin and how this type of public reporting will affect that institution’s ability to recruit and retain “superstar” professors; little attention has been given to those employed at the other eight campuses. Those of us who are outside of Austin and the tenure system find ourselves outside of the conversation, our concerns not represented. The result is that the discussions taking place publicly are incomplete.
On university campuses in North Texas there are two very different takes on what publication of the salary data means. Those who are tenured or tenure-track are worried about the reactions of the general public. And with good reason. Those outside academe often have little appreciation for the economic, social, and cultural contributions postsecondary educators make to the state, and little understanding of the research and publication processes. Though most professors earn only modest salaries, and only a few could be described as handsome, many still fear that the public’s perception will be that they are receiving more than they are worth. In a time of state budget shortfalls and widespread economic uncertainty, those whose work is often invisible, and therefore misunderstood, make easy targets.
Adjuncts and other contingent instructors are uneasy for another set of reasons. Not even the most creative political commentator could accuse us of greed. Our wages are shamefully low not only relative to those of our colleagues, but also in comparison to the workforce as a whole. And so for us, the publication of salary data has a very different meaning. It turns what is ordinarily a private embarrassment into a public one. It is insult added to injury. Still, this is not our biggest concern. Our real worry is how this information may further erode students’ perceptions of our worth.
Attitudes about the relative value of those within and those outside the tenure system are an often-unacknowledged aspect of university culture. These attitudes are communicated in subtle but powerful ways, and students pick up on them. Students see that the names of some instructors are listed in directories and on department websites and others are not. They take note of the fact that those who are not listed are the same ones who are crammed like cordwood into shared offices that often lack basic equipment. Through these and other small indicators, students come to understand that adjuncts are not valued, that we are expendable, that we are — as we are designated in the report — “other.”
When students internalize these messages — and it is inevitable that they will — they lose respect not only for the individual adjuncts, but also for what we do. The classes we teach, the information we deliver and the assignments we give are deemed less important and less valuable.
I know that I am not supposed to complain. Regularly, I am reminded that I am fortunate to have a job in academia. And twice a year, when classes are assigned and I find myself again having somehow managed to make the cut, I am thankful: I will have another semester of getting to do what I love in the place I have grown attached to. But then come the calculations: after withholdings, and the expenses I incur — gas, parking, dry cleaning, toner cartridges — how much is left? Each semester it is a bit less, even as I am asked to do a bit more — submit more progress reports, assign more written work, be more available to students, teach larger sections.
Like most Americans, I am finding ways to do more with less. What I cannot afford to do without is the respect and confidence of my students. I worry about the conclusions they may draw if they learn what I am paid: $2,500 per course. Put differently: that’s $12,500 for five courses a year, when a 3-2 courseload would be considered full-time at many institutions. It’s there, in black and white for anyone with the time and inclination to sift through the data and work the math. What the figure doesn’t show is the number of hours I spend preparing for those classes — reading, planning lectures, updating statistics, reviewing notes, tweaking and grading assignments. It doesn’t show the commitment I have to my discipline, those with whom I share it, and the university in whose name I do it.
My position is not secure. I have not yet signed my contract for next semester and I will admit to being a bit nervous as I write this. Still, I believe that the issues raised by the publication of the data are important and that if we are to address them, we must all be allowed and willing to participate in the conversation.
Harvest Moon is an adjunct in sociology at the University of Texas at Arlington.
Colleges need to accept that the "social compact" between higher education and government that led to a century of growth for American higher education is dead and will not return, Larry R. Faulkner said Sunday.
Faulkner, president of the University of Texas at Austin, delivered that message to hundreds of college presidents gathered in Washington for the annual meeting of the American Council on Education. Bemoaning the death of the compact is not in itself earth-shattering — academics have been complaining along those lines for some time.