The second day of the NCPR conference struck a funny note. Many of us had noticed that the persistent theme on Thursday was “here’s a study that shows that (pick your intervention) doesn’t work.” Honestly, it was a little dispiriting.  To start Friday’s discussion, Tom Brock of the CCRC opened by acknowledging the relative bummer of the first day’s findings, but then suggesting that, tone aside, some consensus had emerged about measures that actually do work. For example, multi-factor placement actually does work better -- in the sense of more accurate placement -- than single tests do, and all else being equal, shorter developmental sequences work better than longer ones.  The devil was in the details.

Mike Weiss of the MDRC profiled the ASAP program at CUNY, which basically uses a boatload of grant money to treat part-time working students as if they were full-time, middle-class students.  (It requires them to be full-time, and it provides free tuition, books, and subway passes.)  The program is still young, but the results are promising.  Most of us, though, shrugged at what seemed like yet another demonstration of “given infinite funding, you could do a lot.”  Well, yes, we could.  That would be lovely.

The rest of the opening panel focused more clearly on constraints.  Janet Quint, of the MDRC, coined my new favorite title -- “Scaling Up is Hard to Do” -- and shared the challenges of taking a program that succeeds on a small scale and growing it to a larger one.  Most of us could probably rattle them off from experience: the staff isn’t cherry-picked, manual workarounds for back-office systems break down when the numbers get too large, resources are thin, and buy-in is rarely universal.  Nikki Edgecombe, of CCRC, and Susan Wood of the Virginia Community College System Office discussed the implementation of a statewide developmental math redesign in Virginia.  Starting in January of 2012, DE math was divided into 9 discrete credits, and the portion of those 9 that a given student needed varied by path of study.  Every campus in the state had to make the switch to the new system, although each had some room to maneuver in terms of implementation.

To their credit, Edgecombe and Wood noted that some of the resistance to the new approach stemmed from a sense among local faculty that they were being told what to do and basically deprofessionalized.  It’s hard to maintain an artisanal craft structure -- such as a guild -- and still achieve meaningful change on a large scale.  As David Longanecker would later point out, political leaders are growing impatient, and they don’t want to wait ten years for everyone on campus to get used to a change; they want to see it now.  While it’s clearly madness to try to fulfill that literally, it’s probably just as quixotic to keep indulging fantasies of smooth, conflict-free progress measured in generations.

In a breakout session focused on the developmental ed initiative at Sinclair Community College in Dayton, Ohio, Kathleen Cleary walked us through the implementation details of what happens when you actually try to change how sequences are taught.  As with several other initiatives at the conference, Sinclair started by targeting the “near-miss” students -- those whose placement scores barely put them in developmental -- and put them through academic boot camps, rather than full-semester developmental courses.  (They started with math, and then moved to English.)  Apparently, the boot camp model achieves higher success rates most of the time.  Sinclair got around the “resistance” issue by continuing to run the old model alongside the new one, giving the new model a chance to either win people over or fail.

They also moved to a self-paced math model, but quickly discovered that if they didn’t build in intermediate deadlines, students would procrastinate.  That finding shocked approximately no one.  

In English, they’ve adapted the Baltimore County model, which I have to admit seems to have some legs.  There they’ve found that although the co-req model (as opposed to the pre-req model) of developmental English doesn’t necessarily result in higher pass rates for the given course, it does result in much higher completion rates for the sequence.  Lose an exit point, and fewer people exit.  (Sinclair is also apparently moving from a quarter system to a semester system, which should muck up the data pretty good for the next year or two.)

When the conference reconvened, much of the discussion turned from “here’s what works and doesn’t work” to “how do we move from study to implementation?”  Karen Stout of Montgomery Community College -- keep your eye on that place, it seems to keep popping up -- expressed a concern that many “flip the switch” models only work for the near-miss students, and leave the most vulnerable behind.  David Longanecker responded that most interventions do no harm and some seem to work, and “flip the switch” models at least answer the political pressure we’re under.  

Intriguingly, offline private conversations throughout the conference kept coming back to the same theme: what good is all this research if we’re too constrained to use it?  The constraints are both internal and external, both economic and political, some intentional and some just random, but they’re powerful.  Any intervention that relies on an unsustainable influx of funding, for example, is not a serious answer.  

That said, though, it was hard not to detect a high note as we left.  Yes, scaling up is hard to do, and yes, some of the studies didn’t inspire confidence.  But the fact that there are enough well-designed intentional experiments going on at community colleges to make a conference like that worthwhile is, in itself, a new development.  And the emerging areas of consensus, while modest, are both real and useful.  The obstacles we’re facing are many, varied, and intimidating, but at least we’re actually facing them.  It’s a start.

More from Confessions of a Community College Dean