[Photo: Students at Shimer College in Chicago participate in a class discussion. Credit: Shimer College]

My first encounter with assessment came in the form of a joke. The seminary where I did my Ph.D. was preparing for a visit from the Association of Theological Schools, and the dean remarked that he was looking forward to developing ways to quantify all the students’ spiritual growth. By the time I sat down for my first meeting on assessment as a full-time faculty member in the humanities at a small liberal arts college, I had stopped laughing. Even if we were not setting out to grade someone’s closeness to God on a scale from 1 to 10, the detailed list of “learning outcomes” made it seem like we were expected to do something close to it. Could education in the liberal arts — and particularly in the humanities — really be reduced to a series of measurable outputs?

Since that initial reaction of shock, I have come to hold a different view of assessment. I am suspicious of the broader education reform movement of which it forms a part, but at a certain point I asked myself what my response would be if I had never heard of No Child Left Behind or Arne Duncan. Would I really object if someone suggested that my institution might want to clarify its goals, gather information about how it’s doing in meeting those goals, and change its practices if they are not working? I doubt that I would: in a certain sense it’s what every institution should be doing. Doing so systematically does carry significant costs in time and energy — but then so does plugging away at something that’s not working. Investing a reasonable number of hours up front in data collection seems like a sensible hedge against wasting time on efforts or approaches that don’t contribute to our mission. By the same token, getting into the habit of explaining why we’re doing what we’re doing can help us avoid making decisions based on institutional inertia.

My deeper concerns come from the pressure to adopt numerical measurements. I share the skepticism of many of my colleagues that numbers can really capture what we do as educators in the humanities and at liberal arts colleges. I would note, however, that there is much less skepticism that numerical assessment can capture what our students are achieving — at least when that numerical assessment is translated into the alphabetical form of grades. In fact, some have argued that grades are already outcome assessment, rendering further measures redundant.

I believe the argument for viewing grades as a form of outcome assessment is flawed in two ways. First, I simply do not think it’s true that student grades factor significantly in professors’ self-assessment of how their courses are working. Professors who give systematically lower grades often believe that they are holding students to a higher standard, while professors who grade on a curve are simply ranking students relative to one another. In neither case do the grades themselves tell a professor whether the course is actually achieving its goals. Further, I imagine that no one would be comfortable with the assumption that the department that awarded the best grades was providing the best education — many of us would likely suspect just the opposite.

Second, it is widely acknowledged that faculty as a whole have wavered in their dedication to strict grading, due in large part to the increasingly disproportionate real-world consequences grades can have for their students’ lives. The “grade inflation” trend seems to have begun because professors were unwilling to condemn a student to die in Vietnam because his term paper was too short, and the financial consequences of grades in the era of ballooning student loan debt likely play a similar role today. Hence it makes sense to come up with a parallel internal system of measurement so that we can be more objective.

Another frequently raised concern about outcome assessment is that the pressure to use measures that can easily be compared across institutions could lead to homogenization. This suspicion is amplified by the fact that many (including myself) view the assessment movement as part of the broader neoliberal project of creating “markets” for public goods rather than directly providing them. A key example here is Obamacare: rather than directly providing health insurance to all citizens (as nearly all other developed nations do), it aimed to create a more competitive market in an area where market forces had not previously been effective in controlling costs.

There is much that is troubling about viewing higher education as a competitive market. I for one believe it should be regarded as a public good and funded directly by the state. The reality, however, is that higher education is already a competitive market. Even leaving aside the declining public support for state institutions, private colleges and universities have always played an important role in American higher education. Further, this competitive market is already based on a measure that can easily be compared across institutions: price.

Education is currently a perverse market where everyone is in a competition to charge more, because that is the only way to signal quality in the absence of any other reliable measure of quality. There are other, more detailed measures such as those collected by the widely derided U.S. News & World Report ranking system — but those standards have no direct connection to pedagogical effectiveness and are in any case extremely easy to game.

The attempt to create a competitive market based on pedagogical effectiveness may prove unsuccessful, but in principle, it seems preferable to the current tuition arms race. Further, while there are variations among accrediting bodies, most are encouraging their member institutions to create assessment programs that reflect their own unique goals and institutional ethos. In other words, for now the question is not whether we’re measuring up to some arbitrary standard, but whether institutions can make the case that they are delivering on what they promise.

Hence it seems possible to come up with an assessment system that would actually be helpful for figuring out how to be faithful to each school or department’s own goals. I have to admit that part of my sanguine attitude stems from the fact that Shimer’s pedagogy embodies what independent researchers have already demonstrated to be “best practices” in terms of discussion-centered, small classes — and so if we take the trouble to come up with a plausible way to measure what the program is doing for our students, I’m confident the results will be very strong. At the same time, I’m sure that some of what we’re doing isn’t working as well as it could, and we currently have no way of really knowing which. We all have limited energy and time, and so anything that helps us make sure we’re devoting our energy to things that are actually beneficial seems all to the good.

Further, it seems to me that strong faculty involvement in assessment can help to protect us from the whims of administrators who, in their passion for running schools “like a business,” make arbitrary decisions based on their own perception of what is most effective or useful. I have faith that the humanities programs that are normally targeted in such efforts can easily make the case for their pedagogical value, just as I am confident that small liberal arts schools like Shimer can make a persuasive argument for the value of their approach. For all our justified suspicions of the agenda behind the assessment movement, none of us in the humanities or at liberal arts colleges can afford to unilaterally disarm and insist that everyone recognize our self-evident worth. If we believe in what we’re doing, we should welcome the opportunity to present our case.
