
At the start of the semester I dutifully include the prescribed language for general education learning outcomes, and at the end of the semester I turn in my randomly selected student artifacts meant to reflect those objectives, all while thinking about how little those artifacts reveal about what students have or have not learned.

Those artifacts are read by a committee that scores them against a rubric, then aggregates them, along with all the others taken from courses similar to mine, into numbers meant to reflect how well we’re doing at teaching.

I understand the necessity of these actions, the rationales for monitoring and accountability. Public institutions are a public trust, so the public has a right to know what we’re up to, and internally we should be seeking knowledge about the effects of our practices on students and learning.

But I can’t be the only person who sees these rituals as largely divorced from truly meaningful learning experiences. There has to be a better way to prove the work we do … works.

Or not. What if there is no metric for what we wish to know?

--

In “The Tyranny of Metrics,” a paywalled adaptation of his book of the same title published in The Chronicle of Higher Education, Jerry Z. Muller of Catholic University of America illuminates the shortcomings of metrics in governing the work of higher education. Demands for more data result in increased administrative burdens (and personnel), generating information of “no real use, and read by no one.”

Muller identifies “metric fixation” as a culture unto itself, where the generation and collection of data becomes self-justifying. The numbers matter because they are numbers. What can be counted, counts, even if that means, for example, juking the “citation score” stats in order to create an illusion of scholarly impact.

Education, a complex experience, is reduced to what can be scored in “purely economic terms.”

It doesn’t seem accidental that the age of austerity and precarity for higher education is linked to the rise of metrics.

What if those metrics are simply a way of justifying the cuts, the method to generate the endless demands to do more with less?

--

Something I’ve come to believe over the last ten or so years of exploring and altering my teaching is that the more freedom and agency I give students, the more they learn.

When I dropped minimum word counts on assignments, students wrote more.

When I allowed significant latitude on topics, students wrote more interestingly and exhibited greater effort.

When I let go of specific universal objectives (including grades), students made unique and surprising discoveries.

Sometimes these things would be captured in the general education assessment rubrics and sometimes they wouldn’t be. Given the arbitrary nature of the academic schedule, sometimes those unique and surprising discoveries remain a little muddled in the end-of-semester product, full clarity not necessarily arriving on a schedule.

Sometimes education is a time bomb, detonating at an unknown point in the future.

Still, I knew learning was happening because letting go of objectives and standards required a different approach for my own assessment. I started asking students if they were learning and what they’d learned.

They do this in reflective essays and in conversation with me and each other. I haven’t bothered to classify or quantify any of it, but it could be done I suppose. It honestly never occurred to me to try to figure out a way to measure it because the measurement didn’t seem meaningful, certainly not to the students anyway.

Students say they’re learning, and I believe them.

--

I sometimes joke with students that I didn’t learn anything in college, by which I mean, I retain very little specific knowledge of what I learned in my college classes, and yet, clearly college has been part of my educational journey, a trip that doesn’t seem likely to end anytime soon.

It would be impossible to quantify whatever I learned in college, and of course much of what I learned while I was in college was not in the classroom.

At the same time, no doubt I could’ve learned more. I was accountable only to myself, and I tended to take the path of least resistance. The educational mode I experienced most often (extremely large lectures) was all too common.

The difference between when I went to college and now is that no one was too worried about any of it. Today, apparently, it is a crisis which can best be solved through more and better metrics, even though we have little evidence that metrics lead to improvement in complicated systems such as education.

Thirty years of data-driven policies have not resulted in improvement in K-12 education. Instead, I believe those policies have resulted in students being more stressed, less curious, and less inclined to embrace the intrinsic pleasures of learning than ever. The accountability and assessment regimes have resulted in fewer people even wanting to enter education as a field to begin with.

The metrics movement in medicine, which has taken the form of “pay for performance,” doesn’t work. The data that can be collected is too crude, and the time spent collecting it is time not spent on more important work.

The metrics may even be leading to more people dying. How the number of people dying isn’t the most important metric is a question I’ll leave aside for another time.

--

I feel stuck. The more freedom I have as an instructor, the better able I am to meet the needs of my students when it comes to learning. The more freedom I give my students, the more they learn.

But the more freedom I receive, and the more I give, the harder all of this becomes to quantify, because not every student is going to be learning the same lessons. This didn’t use to be a problem, but it sure seems to be now.

(For some, anyway. I can’t help but notice accountability seems to be less and less present as one moves up the ladder of prestige and wealth.)

If we’re required to develop metrics for learning, perhaps we can turn to two questions that may appear on your end-of-semester course evaluations. They’re used at my most recent employer, College of Charleston:

  • I found this course intellectually challenging and stimulating.
  • I have developed my skills and knowledge.

These are good questions. I’m kind of thinking if our goal is learning, they may be the only questions that truly matter.

And yet, they’re not questions we seem to trust or value when it comes to the metrics by which the work of education is judged. We do not trust students as the arbiters of their own learning experiences.

But I believe we should. 

 

 
