Measuring Quality and Performance: What Counts?
“Not everything that can be counted counts,
and not everything that counts can be counted.”
— attributed to Albert Einstein
There has been a rash of articles in the press about how we might measure the effectiveness of a university education, often judging the success of higher education by the employment results of graduates. Several have focused on the failure of higher education in China to guarantee jobs to graduates. The Voluntary Institutional Metrics Project likewise emphasizes post-graduate employment among the items it proposes to evaluate:
- repayment and default rates on student loans
- student progression and completion
- institutional cost per degree
- employment of graduates
- student learning
Four of the five items above are primarily quantitative measures. [Re-read the Einstein quote above.] I recognize the importance of looking at higher education critically to determine what impact the experience has on individuals and societies, but we seem to resort repeatedly to the same fallback strategy of “counting what can be counted,” for lack of an effective alternative methodology.
Quantitative data are relatively easy to collect and very simple to compare. This feeds the attraction of rankings: gather data, create mechanisms for quantifying the information, and compare. Rankings, like the information that will be collected by the Voluntary Institutional Metrics Project, will (supposedly) help us know which institution is better than which other institution.
But almost anyone who has collected data knows that neatly summarized results often hide a fair amount of messiness. Are all data from all institutions collected and reported in the same way? Are they truly comparable? Which variables were factored into the analysis, and which were overlooked? Or ignored? We have read over and over again that there are serious methodological problems with the way rankings are constructed. I suggest that there are good reasons to doubt most comparative data collected from the diverse array of institutions that make up higher education in any country today. We are rarely comparing apples to apples. In fact, we are generally comparing apples to kumquats, so if we resort to measuring common characteristics (number of seeds, for example) just to have something to compare, how useful is the result?
Quantitative measures are appealing at many levels, but they limit what we can measure as well as how accurately we can measure something as elusive as education. I once read (I think it was Martin Trow) that to truly measure the quality and impact of higher education, we would have to monitor someone throughout their post-graduate lifetime. This makes sense to me. Several decades after receiving my liberal arts degree, which prepared me for no job in particular, I am constantly reminded of the value of what I learned during those four years: it provided the foundational skills for all of the work I have done since. Now, how could I document that?
I recognize that in most modern societies most of us need to work to survive, but I do wonder whether the purpose of a college education is to guarantee employment. Perhaps it is time to separate the two. Perhaps we should be getting an education at college and job training somewhere else. What do employment statistics, loan repayment data, or institutional costs really tell us about the education someone has experienced?
Institutions of all kinds need to be accountable, but we should be careful about what conclusions we draw from the things that we can measure.