Wise and worldly readers, how much time -- and how many attempts -- do you allow a “pilot” course or project before deciding whether to keep it?
As finances get tighter and political pressures stronger, I’m seeing less patience for waiting for pilot results to come in. The meaning of the word is changing.
In my original understanding, a “pilot” was an experiment. It was designed to see if, say, a course had staying power. Did students respond to it? Did it achieve what it was supposed to? It might take a couple of attempts to really get a good read, especially if the goals are ambiguous or in tension with each other. A new math class might require a few semesters of follow-up to see how students fared later in the sequence, for example.
But I’m seeing “pilots” now used as something closer to “dress rehearsals.” In this version, the idea is to debug before scaling up, but the goal of scaling up is pretty much a given. In other words, the meaning has shifted from idea-testing to implementation-testing. We’ve already decided that the show must go on; we’re just making sure we get the blocking and lighting right.
There’s value in both versions of pilots, of course. But mistaking one for the other can lead to unhelpful conflicts.
In some cases, the shift in meaning is entirely unconscious. It happens largely as a result of scheduling. Let’s say we run a new math class in the Fall of 2014. Faculty schedules for the Fall of 2015 are done by mid-Spring of 2015. At that point, we have exactly zero data on how students in the pilot class did in the following course, but we have to make a decision anyway.
Alternatively, a pilot held in suspension for an extended period -- if your bylaws allow -- accrues a sort of tenure. People start to count on it.
So I’m trying to find the right balance. Has anyone out there cracked the code?