In which a veteran of cultural studies seminars in the 1990s moves into academic administration and finds himself a married suburban father of two. Foucault, plus lawn care.
Although the very thought of it makes some academics blanch, I'm beginning to think that "evidence-based management" could be really useful in solving some nagging academic problems.
As I understand it -- and I'm no expert -- "evidence-based management" takes as a given that it's appropriate to look at statistical patterns that have emerged over time, and to use those as reality checks for future decisions. It's particularly helpful in testing long-held assumptions for which we somehow keep noticing exceptions.
Most colleges of any size have someone in the back corners of the administration whose title is something like "Institutional Researcher." (The really big places have entire offices devoted to IR.) It's a sort of locally applied social science. I think I've found something to ask my local IR guru to check out.
It's an article of faith among some faculty that certain time slots for class meetings generate wildly higher attrition than others. ("Death Valley" is the local term.) The late-afternoon weekday sections of required Gen Ed courses -- English Composition, say -- are held to be the last to fill, and the first to shrink as students just vanish. The "Prime Time" sections -- basically late morning to early afternoon, Monday through Thursday -- fill first, and usually finish the semester with almost as many students as they started.
So my questions for the IR guru, and for faithful readers who may know of research in this area:
1. Do Death Valley classes actually have higher drop rates? If so, does the effect disappear for upper-level courses? (If it does, I could see a pragmatic argument for running Gen Eds in prime time, and upper-level courses in Death Valley. That's pretty close to what Proprietary U used to do.)
2. If the drop rates are actually higher, is that really a function of the well-documented "last in, first out" rule of registration? (That is, the students who decide to sign up for classes the day before the semester starts have much higher drop/fail rates than the students who sign up months in advance. This is true, presumably, for many reasons.) Or do the students who sign up for Death Valley early also have higher drop rates?
3. Can we predict, with some specificity, how much higher the drop rates will be? Do those students usually come back, or are they usually lost for good? (Do they disappear at higher rates than other students who drop?)
4. If the answer to 3 is an actual number -- or at least a relatively modest numerical range -- could we do a little cost-benefit action to see if adding 'spillover' sections in Death Valley actually makes fiscal sense over time? (If that strikes you as cold, do the analysis in terms of future academic success.) Would we be better off, in the aggregate, simply abandoning Death Valley and going to waitlists and/or more online and/or hybrid offerings, or is Death Valley the least bad option?
5. Other than registering later, do Death Valley students differ in meaningful ways from Prime Time students? Are any higher dropout numbers actually attributable to those factors, rather than to the time slot per se? (If so, then moving them around wouldn't really solve anything.) Or are these really the same students, just with funny schedules?
6. Do the instructors who typically teach the Death Valley sections -- and it's usually the same cohort from year to year -- have higher drop rates in Prime Time? Is this an instructor effect written onto a timeslot?
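For the IR-minded: question 1 is the easiest to get off the ground, since it only needs section-level enrollment and drop counts. A minimal sketch of what that first pass might look like is below -- all of the counts are hypothetical, and the simple two-proportion z-test stands in for whatever the IR office would actually run (a real analysis would control for registration timing, student characteristics, and instructor, per questions 2, 5, and 6).

```python
# Hypothetical first pass at question 1: do "Death Valley" sections have
# higher drop rates than "Prime Time" sections? Counts are invented for
# illustration; a real IR run would pull these from the student system.
from math import sqrt, erf

# (students enrolled, students who dropped) -- hypothetical numbers
prime_time = (1200, 120)    # 10% drop rate
death_valley = (300, 54)    # 18% drop rate

def drop_rate(enrolled, dropped):
    return dropped / enrolled

def two_proportion_z(a, b):
    """z statistic for the difference between two drop rates (pooled SE)."""
    (n1, x1), (n2, x2) = a, b
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled drop rate
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))     # pooled standard error
    return (p2 - p1) / se

def one_sided_p(z):
    """P(Z > z) for a standard normal, via the error function."""
    return 0.5 * (1 - erf(z / sqrt(2)))

z = two_proportion_z(prime_time, death_valley)
print(f"Prime Time drop rate:   {drop_rate(*prime_time):.1%}")
print(f"Death Valley drop rate: {drop_rate(*death_valley):.1%}")
print(f"z = {z:.2f}, one-sided p = {one_sided_p(z):.4f}")
```

Even if the raw gap turns out to be real, it doesn't settle questions 2 or 5: the same test run on students matched by registration date would show whether the time slot itself, or the "last in, first out" effect, is doing the work.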
Wise and worldly readers -- I'm asking for your help in a couple of ways.
1. Do you know of any good research on this? It isn't the area of my scholarly training, and I'm just a wee bit busy.
2. Can you think of other questions I should add to the list?
Thanks for your help! My IR guru will likely have me killed, but evidence-based answers to these questions might actually be useful, which is sort of the point.