I've been chewing on Shop Class as Soulcraft, Matthew Crawford's explanation of how the spiritual enervation that came from working for a think tank drove him (no pun intended) to repair motorcycles for a living, and how spiritually ennobling that move was.
It's a great book to argue with, since it's prickly and peculiar and weirdly un-self-aware. I'll admit a temperamental allergy to any argument that smacks of “those manly men with their earthy authenticity,” and the book sometimes shades into that. That said, I have to admit that I laughed out loud, half-guiltily, at his invocation of The Postcard.
(For older or younger readers: in the '90s, before online applications became commonplace, faculty job applicants mailed thick paper applications and waited for paper responses. More often than not, the only response would be The Postcard, which acknowledged receipt – fair enough – and then asked you to check boxes indicating your race and gender. To an unemployed white guy, The Postcard was offensive beyond belief. “Give us an excuse not to hire you.” No, screw you. Then you'd feel like a reactionary prick for being offended, and feel bad for that, but you still needed a job, dammit. So now you get to be unemployed and self-loathing. That's just ducky. Now, with online applications, the demographic questions usually get asked upfront, where they blend in with everything else. Substantively, there's no difference, but at least it feels less insulting. If we can't offer jobs, we can at least recognize applicants' basic human dignity.)
The valuable part of the book for me, though, is its discussion of craft and the sense of individual agency.
Crawford rightly takes issue with the easy equation of 'white collar' with 'intellectually challenging,' and of 'blue collar' with 'mindless.' Anyone who has actually worked in both settings (hi!) can attest that working with recalcitrant materials can require real ingenuity, and that many office jobs are just about as brainless as you can get without actually starting to decompose. (Dilbert and The Office draw their popularity from noticing exactly that.) From that correct observation, Crawford also notes that part of the joy of certain kinds of hands-on work comes from the relative autonomy it affords. When you're trying to diagnose a funny engine behavior, it's just you and the engine. You get the engine to work or you don't. (Of course, it isn't always that simple. But the case is recognizable.) When you're jockeying for position in an office, by contrast, direct measures of performance are scarce, so it often comes down to office politics, which can feel like junior high all over again. A sense of control over your own work can free you from the gnawing dissatisfaction that sets in when you can't quite explain to others just what you do all day.
It struck me that this sense of ownership of craft is part of what's behind resistance to evidence-based policy in higher ed.
Done correctly, evidence-based policy (or what we academics call 'outcomes assessment') shifts the basis for decision-making from 'expert opinion' to actual observed facts, preferably gathered across a large sample. In deciding whether a given practice makes sense, data counts. The idea is that some facts are counterintuitive, so simply relying on what longtime practitioners say is right and proper will lead to suboptimal results. Rather than deferring to credentials, authority, or seniority, we are supposed to defer to documented outcomes. Solutions that work are better than solutions that don't, regardless of where they come from or whose position they threaten.
What Crawford's book helped me to crystallize was why something as obviously good as data-based decision-making is so widely resisted on the ground. It effectively reduces the practitioner's sense of control over his own work. At some level, it threatens to reduce the craftsman to a mere worker.
Take away the sense of ownership of craft, even with the best of intentions (like improving outcomes for students), and the reaction is, and will remain, vicious, heated, and often incoherent. Since there's really no basis for arguing that student results are irrelevant – without students, it's not clear that we need teachers – the arguments will be indirect. The measure is bad; the statistics are misleading; this is an excuse to fire people; this is an excuse to destroy academic freedom; this is about administrative control; this is a fad; blah blah blah.
I draw hope, though, from Crawford's correct observation that the 'white collar mind/blue collar body' split isn't really true. The same insight applies here. Outcomes assessment done right is focused on where students end up. How you get them there is where the real craft comes in. How, exactly, do the most successful programs work? (For that matter, without assessing outcomes, how do we even know which programs are the most successful?)
On an individual level, professors do this all the time. We try different ways of explaining things, of posing problems, of structuring simulations, and then judge how well they worked. But student outcomes encompass far more than the sum of individual classes; without some sort of institutional effort, those extra factors go largely unaddressed (or, worse, addressed only according to custom or internal politics).
That could involve some displacement of traditional craft practice, but it hardly eliminates the role of craft. For a while I've been mentally toying with a scheme that looks like this: separate teaching from grading, then reward teaching that results in good grades. The instructor wouldn't grade his own class; he'd trade with someone else, ideally at another institution. (In that scheme, we could also do away with evaluative class observations and most uses of student course evaluations. Replace 'expert opinion' with observable facts. If you manage to succeed with your students using a method I don't personally get, the success is what matters. Likewise, if you consistently fail, the fact that some big muckety-muck somewhere endorses your method means exactly nothing.) That way, you're eliminating the obvious conflict of interest that tempts some scared faculty to resort to grade inflation. The grades won't be theirs to inflate.
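Just to make the mechanics concrete, here's a toy sketch of the grader swap in Python. Everything in it is invented for illustration – the section names, the professors, the rotation rule – and a real version would pair sections across institutions and blind graders to whose students they're reading.

```python
# Toy sketch of the grader-swap scheme: every section's work is graded by an
# instructor other than its own. A simple rotation guarantees that property
# whenever there are at least two sections. All names are hypothetical.

def assign_graders(sections):
    """Map each section to a grader who isn't its own instructor."""
    if len(sections) < 2:
        raise ValueError("Need at least two sections to trade grading.")
    instructors = [s["instructor"] for s in sections]
    # Rotate by one: section i is graded by the instructor of section i + 1.
    return {
        s["name"]: instructors[(i + 1) % len(instructors)]
        for i, s in enumerate(sections)
    }

sections = [
    {"name": "ENG 101-01", "instructor": "Prof. A"},
    {"name": "ENG 101-02", "instructor": "Prof. B"},
    {"name": "ENG 101-03", "instructor": "Prof. C"},
]
print(assign_graders(sections))
# {'ENG 101-01': 'Prof. B', 'ENG 101-02': 'Prof. C', 'ENG 101-03': 'Prof. A'}
```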
Admittedly, this method wouldn't work as cleanly at, say, the graduate level, but I see it working fairly well for most undergrad courses. Your job as the instructor is not to threaten/cajole/judge, but to coach students on how to produce high-quality work. The anonymous grader becomes the common enemy, putting you and your students on the same side. Students make use of your help or they don't, and the results speak for themselves. Faculty who get consistently better results get recognition, and those who get consistently poor results are given the chance to improve; those who still fail after a reasonable shot are shown the door.
Getting back to Crawford, though, I was disappointed that he largely reinscribes the white collar/blue collar dualism in his description of two different ways of knowing. In an extended rant against Japanese repair manuals – seriously, it's in there – he draws a distinction between inflexible rule-based knowledge and hard-won life wisdom, clearly favoring the latter. The implication seems to be that knowledge is either 'explicit' – that is, theoretical and absolute – or 'tacit,' meaning acquired through non-transferable practice. Think 'theoretical physics' versus 'practicing mechanic.'
Well, okay, but there's a much more interesting kind of knowledge that draws on both. It's the kind of knowledge that social scientists deal with every single day. It's the statistical tendency. The rule based on aggregated observations, rather than deductive logic. It's inductive, probabilistic, empirical, and useful as hell. Baseball fans call it sabermetrics. Economists call it stylized facts. (Score one for baseball fans.) It's based on real-world observation, but real-world observation across lots of people.
This is the kind of knowledge that helps us get past the well-documented (though unaddressed by Crawford) observation biases that real people have. Practitioners of sabermetrics, for example, found that some of the longstanding hunches of baseball scouts simply didn't stand up to scrutiny. Individual craft practice falls prey to individual biases, individual blind spots, and individual prejudices. Testing those assumptions against accumulated evidence isn't applying procrustean logic to messy reality. If anything, it's reality-based theorizing.
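For the statistically minded, here's a small sketch of the kind of aggregated test that separates a hunch from a tendency. The numbers are invented, and a two-proportion z-test is just one standard way to ask whether a difference in pass rates across many sections is more than noise.

```python
# Toy illustration of testing a craft hunch against aggregated evidence,
# using a two-proportion z-test on pooled pass rates. All numbers invented.
import math

def two_proportion_z(pass_a, n_a, pass_b, n_b):
    """z-statistic for the difference between two pass rates."""
    p_a, p_b = pass_a / n_a, pass_b / n_b
    pooled = (pass_a + pass_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hunch: lecture sections outperform workshop sections. Pooled across many
# sections (hypothetical data), the evidence points the other way.
z = two_proportion_z(pass_a=412, n_a=600,   # workshop sections: ~68.7% pass
                     pass_b=380, n_b=600)   # lecture sections:  ~63.3% pass
print(f"z = {z:.2f}")  # ~1.95, right at the conventional 5% significance line
```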
Done correctly, that's exactly what any outcomes-based or evidence-based system does. And rather than crushing individual craft, it actually gives the thoughtful practitioner useful fodder for improvement.
Though I have my misgivings about Crawford's book, I owe it a real debt. The key to getting outcomes assessment to mean something on the ground is to distinguish it from the false binary choice of craft or theory. It's in between, and yet distinct. It's empirical, but not individual. The fact that a thinker as subtle as Crawford could miss that category completely suggests that the task won't be easy, but it also suggests that doing the task right could make a real contribution.