• Confessions of a Community College Dean

    In which a veteran of cultural studies seminars in the 1990s moves into academic administration and finds himself a married suburban father of two. Foucault, plus lawn care.

A “Worst Practices” Model

When things don't go according to plan.

November 23, 2014
 

At a meeting last week, a colleague mentioned that he had learned some lessons about how to do a particular project through making a series of mistakes, and that if we had more time, he’d be happy to describe some of them. And that’s when it hit me. For all of the talk of “best practices,” wouldn’t some open discussions of “worst practices” be more useful?

The beauty of “worst practices” is that they’re concrete. “Here’s something I did that blew up in my face. In retrospect, here’s why.” In many cases, the moves people made in the course of worst practices actually made some sense at the time. The logic can be familiar, and that’s precisely the value: if you can recognize the directions that tend to lead off the rails, you can avoid them.

To be fair, I’m really talking about something closer to “worst plausible practices.” Some practices are just so awful that there’s not much to be gained by dwelling on them -- showing up to work drunk, say. We all know, or should know, that’s bad. I’m thinking instead of the things that seem like good ideas at the time, but later reveal themselves as disastrous.

Institutionally, almost every incentive aligns against candid discussions of lessons learned from failure.  Drawing lessons from failure involves first acknowledging and owning it. In many organizational cultures, that can be a career-limiting move.  If you work in a place with a strong culture of finger-pointing and blame-shifting, owning up to mistakes -- even small ones -- amounts to a kind of unilateral disarmament.  And even if you’re lucky enough to work in a setting in which people take a relatively enlightened view, you can’t assume the same will hold true externally. I’ve been to my share of AACC and League for Innovation conferences over the last several years, and I can report that the ratio of presentations on “here’s something we did well” to “here’s something we messed up” is approximately 100:0.

And that’s too bad, because the latter can teach lessons, too.

I’m told that something similar holds in the literature around academic science, oddly enough. Although we’re all taught that the scientific method is all about replication and testing, papers based on replicating results are relatively scarce, and the few researchers who pursue them are widely considered suspect. They’re looked upon the same way that police look at Internal Affairs departments. But they serve a crucial function in the scientific ecosystem; to the extent that up-and-coming scientists are steered away from replication work, we lose a valuable method of quality control.

As more states and systems move to various forms of “performance funding,” the paradox of increased need and decreased room to learn from mistakes grows.  Performance funding schemes work on an annual basis, which means that there’s little margin for error; you can’t absorb the costs of the early stages of a learning curve, because the punishment you’d take in the next year’s allocation would prevent you from realizing gains in the later part of the curve.  (That’s part of the appeal of multi-year grants: by design, you are given the time to do the unglamorous groundwork first.)  Of course, the idea behind performance funding is to create an incentive to do things better, which usually involves doing new things. You just don’t have the room to make mistakes.  In that climate, it’s unsurprising that prepackaged solutions from various think tanks and foundations catch on; they offer the prospect of improvement without having to go through the messy process of learning first.  But those don’t always sit well in environments in which shared governance is prized.

I’m not sure how to create a space for candid and useful discussions of worst plausible practices, other than in off-the-record, informal interactions among peers who trust each other. In other words, in the interstices. Interstitial candor is well and good, but it’s nowhere near the scale we need. I’m just not sure how, as an industry, to get there from here.

Wise and worldly readers, have you seen sustainable ways to discuss worst plausible practices? Or is this just one of those facts of life endemic to any industry?
