• Just Visiting

    John Warner is the author of Why They Can't Write: Killing the Five-Paragraph Essay and Other Necessities and The Writer's Practice: Building Confidence in Your Nonfiction Writing.

Making Sense of Metrics

Some preliminary thoughts after a conference on finding a better way to measure institutions and learning.

May 23, 2019

Last week I had the pleasure[1] of attending, as both a presenter and listener, the Disquantified: Higher Education in the Age of Metrics conference held at UC Santa Barbara.

As is the nature of these things, I spent far more time listening than presenting, and because of that I feel overwhelmed by the sheer volume of thoughts coming out of the experience. As a lifelong adjunct, being invited to share with those more rooted inside the academy is a relatively rare occurrence, so I’m grateful for the chance to have had this kind of deep exposure to other people who share some of my concerns about the trajectory of education and who have considerably more experience with the inner workings of the system.[2] I benefit from their perspectives in the same way I hope they benefit from mine as a career-long visitor to these spaces.

Our shared concerns cluster around the use of metrics as instruments of influence and control in education. 

We are in the “age of metrics,” and the desire for quantification holds (in my view) a disproportionate sway over what we could and should value. The tools of quantification we have available in higher ed are somewhere between imperfect and actively terrible. Take, for example, the U.S. News & World Report rankings, which are essentially meaningless as a measure of the learning experiences inside institutions and yet are a considerable focus of institutional effort.

Personally, I’d like to see institutions focus on metrics that are actually, you know, meaningful, which is why I’ll be continuing to collaborate with the group, putting in my oar and rowing the boat toward better metrics where possible, and redirecting attention to other values where not.

These are some of the things that I took away from the conference that I suspect will be showing up in this space over the coming weeks and months, particularly because I’ve asked some of the other contributors to write guest blogs on their specific areas of expertise. I wish I could do full justice to every bit of the conference, but I simply don’t have the space.

1. The problem of “metrical cynicism.” This idea was invoked by Elizabeth Chatterjee of Queen Mary University of London in regard to the UK’s use of the Teaching Excellence Framework, and once raised, it seemed to permeate many of the other topics. Essentially, it’s an attitude in which the players inside the system recognize that the metrics by which they’re held accountable are not meaningful (or sometimes even movable), and yet there are periods when one must at least pretend they are. Administrators wink that the metrics shouldn’t be taken seriously even as they caution that the consequences of failing them are deadly serious.

Assessment for external audiences in U.S. higher ed is an example, as are the U.S. News rankings. As Chatterjee puts it, this cynicism can result in a kind of powerlessness, where the metrics remain influential because everyone believes they’re essentially meaningless.[3] It’s a boogeyman no one is publicly scared of, but which still has the power to snuff out your existence. Truly bizarre when you start to think about it.

Everyone knows that the U.S. News rankings are terrible and yet many institutions design their strategic initiatives around those rankings. It’s perverse.

This suggests that when dealing with metrics, we shouldn’t give in to the cynicism, but instead be willing to challenge their use and abuse. We’re better off if we recognize that the boogeyman is real.

2. The problem of a bunch of this stuff we believe to be meaningful actually being arbitrary and not transparent. This was another recurring theme. Zachary Bleemer of UC Berkeley will be doing a guest post on how opaque the data underpinning earnings by major really is. Benjamin Schmidt, in discussing “STEM: The First 20 Years,” showed how “STEM” essentially started with what look to be almost random designations of careers as STEM by U.S. immigration officials (ICE).

3. The problem of using aggregated historical data to predict the future outcomes of individuals. Individuals are not averages, and the past is not a perfect guide to the future, and yet so much energy (College Scorecard, anyone? Career pathways?) goes into mining aggregate historical data that is then framed as predictive that we should be deeply worried. I’m not against these metrics per se, but we have to stop accepting them as predictive of individual student outcomes. Not only is it a bad use of data, it has the potential to skew student choices in a way that may lower the chance of a positive outcome for some individuals. It’s irresponsible.

4. The problem of education’s “black box.” This was an overriding theme of the conference specifically articulated by Corbin Campbell of Teachers College, Columbia University. Given that learning happens inside individuals and often manifests itself in different ways inside those individuals, it is very difficult to measure what’s happening. To some degree, some part of the process will always be “ineffable,” to use a word often thrown around the conference.

But as Campbell argues, to give in to that ineffability risks allowing others to define what’s happening inside that black box. Unfortunately, that void is being filled with extremely limited metrics (hello again, College Scorecard), or worse, conspiracy narratives, like those peddled by TPUSA and its ilk, of leftist indoctrination camps being established inside our classrooms.

Bill Gates recently announced his effort to define what college is “worth,” and his project seems to take for granted the notion that a college degree should be viewed exclusively through the lens of a private, rather than a public, good. This view is what has led to the steady disinvestment in our public education infrastructure that has created the status quo we all believe is unacceptable.

I wouldn’t believe we’re going to make these same mistakes all over again except that it’s utterly predictable. I’m sitting here wondering how we stop billionaires with terrible track records on education from their endless meddling.

Part of the answer must be to better define what is meaningful about a college degree in terms and even metrics that people will accept as both meaningful and trustworthy. It’s a big task. I hope we’re up to it.


[1] Another significant pleasure was meeting one of my “after-academia role models,” Audrey Watters, in person. Her book on teaching machines is going to be a corker. It’s possible I’m using the word “corker” because I also had a chance to spend time with Ben Williamson, who came all the way from the UK to share insights on how data is used in that country’s higher ed system. (Short version: In my view? Scary.)

[2] I’m also pleased to note that I’ve convinced some of the other participants to contribute future guest blogs on their topics. I’m confident others will be as interested in what they have to say as I was.

[3] The news that the University of Oklahoma has been systematically misreporting data on alumni giving to the U.S. News rankings is a perfect example of metrical cynicism in action.
