These days colleges boast about their admissions rankings, their graduation rates, their faculties’ achievements and much more. Many say that the statistics are a tool to promote accountability and improvement.
Jerry Z. Muller disagrees. His new book, The Tyranny of Metrics (Princeton University Press), critiques not only higher education but many parts of society that rely on metrics.
"Gaming the metrics occurs in every realm: in policing, in primary, secondary and higher education; in medicine, in nonprofit organizations; and, of course, in business," Muller writes. "And gaming is only one class of problems that inevitably arise when using performance metrics as the basis of reward and sanction. There are things that can be measured. There are things that are worth measuring. But what can be measured is not always what is worth measuring; what gets measured may have no relationship to what we really want to know."
Muller, a professor of history at Catholic University of America, responded via email to questions about the book.
Q: Your book talks about the use of metrics in higher education and also in other parts of American society. In terms of your concerns about the overuse of metrics, do you see higher education as a leading offender or just average?
A: It’s much worse in K-12 education, where the linkage of performance metrics (based on standardized tests of English and math) to reward and punishment has led to an overemphasis on test preparation (as opposed to learning), a diversion of time away from other subjects (such as history), and from other valuable activities, such as creative play and the arts. That having been said, the misuse of metrics has created or exacerbated plenty of problems in higher education.
Q: Looking across different parts of society, how would you summarize your critique of metrics?
A: My critique is not of measurement as such, which, used properly, may be valuable. Nor am I against rewarding those who demonstrate achievement. I’m not opposed to making information public, either.
My critique is of what I call “metric fixation.” The key components of metric fixation are the belief that it is possible and desirable to replace judgment, acquired by personal experience and talent, with numerical indicators of comparative performance based upon standardized data (metrics); that the best way to motivate people within organizations is by attaching rewards and penalties to their measured performance, rewards that are either monetary or reputational (college rankings, etc.); and that making the metrics public makes for greater professional “accountability” -- as if only that which can be counted in some standardized way makes for professional probity. My book is about why this so often fails to have the desired effects and leads to unintended negative outcomes, which, after decades of experience, ought to be anticipated.
Q: Supporters of the use of metrics in higher education -- including Democratic and Republican politicians -- argue that metrics lead to improvement. Knowing that a college has a low graduation rate, in a frequently cited example, can spur improvements. How would you respond?
A: It may indeed spur improvement, if, for example, institutions can find the resources to assure that students are better advised, that the courses necessary for them to graduate are offered, etc. But one source of low graduation rates is the low level of preparedness of admitted students, so the most efficient way to raise graduation rates would be to stiffen criteria for admission. But then legislators (having been misled by organizations such as the Lumina Foundation into believing that more and more people should go to college and that state governments ought to engage in “outcomes-based funding”) complain about lack of “access.”
So the most frequent method to increase graduation rates is to lower the standards for graduation -- easier courses, more lax grading, etc. There’s tremendous pressure on instructors (more and more of whom are adjuncts) to do just that -- especially once one gets below the level of flagship institutions. By allowing more students to graduate, a college transparently demonstrates its accountability through its excellent metric of performance. The legislators are appeased. The fact that no one has learned much is beside the point.
Q: How have metrics, and in particular rankings, hurt the admissions process?
A: Where to begin? The fact that colleges are ranked by their acceptance rates creates an incentive for each institution to attract as many applications as possible -- so that it can reject more of them, lower its acceptance rate, and hence improve its metrics. Not only is that wasteful, but it means that admissions officers have less time to devote to each application. Then there are many varieties of gaming the metrics, such as law schools that admit students with lower LSAT scores on a “part-time” or “probationary” basis, so that their scores are not included in the metric of admitted student scores.
Q: Some colleges, government agencies and businesses promote tools to evaluate faculty productivity -- number of papers written, number of citations, etc. What do you make of this use of metrics?
A: Here too, metrics have a place, but only if they are used together with judgment. There are many snares. The quantity of papers tells you nothing about their quality or significance. In some disciplines, especially in the humanities, books are a more important form of scholarly communication, and they don’t get included in such metrics. Citation counts are often distorted, for example by including only journals within a particular discipline, thereby marginalizing works that have a transdisciplinary appeal. And then of course evaluating faculty productivity by numbers of publications creates incentives to publish more articles, on narrower topics, and of marginal significance. In science, it promotes short-termism at the expense of developing long-term research capacity.
Q: You close your section on higher education by saying that metrics send a message that higher education is about making money. Why do you believe this to be the case?
A: I’m referring here to rating and ranking systems such as the Department of Education’s College Scorecard, which in turn is used, with some further refinements, by the Brookings Institution’s rankings and by commercial rankings such as those of Money magazine to try to account for “value added” (the difference between a college’s actual outcomes and what would be expected given the background of its admitted students).
For all of these, the major criterion is return on investment (ROI), understood as the relationship between the costs of college and the impact of college attendance on future earnings. They take into account graduation rates, faculty-student ratios, etc., but always with a view to ROI. So the message could hardly be more explicit.