I still remember the terror and thrill of having my own class to teach for the first time.
It didn’t pay much -- this was graduate school, after all -- but the autonomy was wonderful. When I closed the classroom door, it was just me and the students. The course had goals of its own, of course -- anyone teaching Intro to American Government has to go through “Congress is divided into the House and the Senate” whether they want to or not -- but how I got there, and how I framed much of the content, was really my call. I went through a painful, if inevitable, bit of trial and error, but eventually found my stride.
After far too many years of being on the other side of the desk, and then a few years of TA'ing for some people whose styles and choices, um, let's go with "were not my own," I relished having the chance to do things the way I thought they should be done. When they worked, it was gratifying beyond belief; when they didn't, at least I had the autonomy to make a change on the fly. That mattered.
(In administration, I sometimes miss that autonomy. The constraints are greater, and successes tend to be partial, collaborative, and often indirect. There's nothing quite like the rush of a class that really nailed it. The closest I get to that now is when a blog post really nails it.)
I thought again about that experience yesterday as I read about an appealing new project that proposes using carefully crafted analytics to improve student learning outcomes. There’s nothing inherently sinister or silly about using documented, aggregate results to drive improvements; in most fields, that would be considered common sense. In fact, there’s a perfectly intelligent argument to be made to the effect that evidence-driven reform is one of the most promising avenues we have for raising student achievement. I’ve made that argument myself before, and still believe it.
But to someone in the classroom for whom autonomy is a major benefit of the job, evidence-driven reform can look an awful lot like someone else telling you what to do. If you're sufficiently pessimistic, it can even look like de-skilling the faculty. Even if you don't see it as a stalking horse for a shift from artisanal to mass production of education, it can still feel intrusive. And to be fair, much of this work is still in its early stages, when the findings may not be as precise as one would like.
The trick, which I'm still struggling with, is to find ways for faculty to make those findings their own. If they can draw benefit from the information without surrendering their autonomy -- a key source of new experiments anyway -- then we'll be where we should be. In the best of all possible worlds, this kind of information would be a resource for improvement. But right now, it's often rejected out of hand in favor of personal observation and appeals to authority.
Wise and worldly readers, have you seen a use of analytics or evidence-driven reform on campus that didn’t raise hackles? If so, how did it work? (Alternately, have you seen a de-hackling process that worked?) Anything useful is welcome...