I am a faculty member, and so I began my career with an almost inborn distaste for assessment, which seemed like the arcane jargon of administrators with a quixotic envy of corporate processes. The only model for assessment I could think of was the legislatively or decanally mandated kind, and therefore it smacked of make-work. Over the past two years, though, I've come around quite a bit, and now see assessment as both politically inevitable and pedagogically useful -- if done correctly. That it is politically inevitable doesn't mean it's wrong -- higher education should become more transparent to interested parties. Would you rather have a legislator, donor, or prospective student base decisions on incomplete data, hearsay, and idiosyncratic assumptions? Of course not.

This essay is about a number, the kind of number that made me take an interest in assessment's possibilities. While John Lombardi is rightly skeptical of the National Survey of Student Engagement, which measures student engagement and satisfaction, there is a wealth of data in such surveys that, when appropriately framed, can help us think creatively about our work with students.

Like many regional comprehensive universities, the institution where I teach worries about its six-year graduation rates. Our mission of providing access to first-generation and other precarious aspirants to higher education is imperiled if we cannot help these students graduate. Our numbers haven't always been great, but a series of initiatives over the past few years may have started nudging the percentages in the right direction.

Many faculty members respond -- I have responded -- to this attention to graduation rates in two ways: first, by blaming others (the students!), and second, by assuming that we will be asked to make the curriculum less rigorous. It sounds like an attack: How can you be doing your job if so few students finish?

But at a recent meeting about assessment, I learned the following tantalizing datum: Sixty-three percent of our full-time students who complete their first semester with a 3.0 or better grade-point average graduate within six years. When full-time students finish the first semester with a GPA below 2.0, only 9 percent graduate within six years.
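
For readers curious about the arithmetic behind such a figure, here is a minimal sketch of cohort tracking in Python. Everything in it -- the table, the column names, the GPA cut points -- is my own illustrative assumption, not a description of our assessment office's actual pipeline.

```python
import pandas as pd

# Illustrative records: one row per first-time, full-time student,
# with first-semester GPA and a six-year graduation flag.
records = pd.DataFrame({
    "first_sem_gpa":   [3.6, 1.4, 3.1, 2.5, 0.9, 3.8, 1.9, 2.8],
    "grad_within_6yr": [True, False, True, True, False, True, False, False],
})

# Bucket students by first-semester GPA; right=False keeps 2.0 and 3.0
# at the bottom of their bands, and the 4.1 edge keeps a 4.0 in range.
bands = pd.cut(
    records["first_sem_gpa"],
    bins=[0.0, 2.0, 3.0, 4.1],
    labels=["below 2.0", "2.0-2.99", "3.0 or better"],
    right=False,
)

# Share of each band graduating within six years; at real institutional
# scale, bands like these yielded the 9 and 63 percent figures above.
print(records.groupby(bands, observed=True)["grad_within_6yr"].mean())
```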

This sort of tracking, conceived and performed by experts in assessment and statistical analysis, ought to spur professors to think about their mission, about their individual courses, and about their institutions' political status in a state or system. What are we teaching our students? How can we convey to first-year students the seriousness of creditable habits? How can we discuss seriously with outside stakeholders the challenges posed by teaching adults?

Mission

For some time now, the great fetish of assessment gurus has been so-called "value-added" assessment: You can't just test what students know at the end of a semester or a program of study, because such a test can't discriminate between knowledge gained during the course and knowledge gained outside it. Many professors and institutions use a combination of pre- and post-assessment as a kludge: "Here's what the students know at the start of the semester" and "Here's what they know at the end." This is a start, but it's still indirect: improvement on such metrics doesn't show that the course caused the gain.

The 63/9 percent statistic might call into question the value of pre- and post-assessments that aren't specifically about bodies of knowledge, since it suggests that differences in student performance arise from factors external to the particular class or course of study. The student with a 3.5 in her first semester doesn't need to be taught critical thinking; she is already an adept critical thinker, and will simply be refining that skill and adding to her base of knowledge. The student who struggles to achieve a 1.4, by contrast, could very well improve -- we all know students who have done so, and perhaps some of us have even been that student. It's also possible that such a student might have performed better on a measure other than grades. But it might also be the case that the student needs to pull away from college for a while. Perhaps she needs to try again in a semester when her childcare is more stable, or after she's saved up money, or after her father has weathered his major surgery. Or maybe he needs to come back after some time away, having reflected on what makes college success possible. (Again, some of us might have been this student.) Perhaps she needs to rethink whether college is, at present, as necessary to her career path as she believes. Is it right to aspire to keep all such students on campus at all costs? Could a low graduation or retention rate mean that the institution helps students make good long-term decisions, even if sometimes the decision is to put off higher education?

To put all of this slightly more directly: The consistency of outcomes from the first semester to six-year graduation suggests that we need to take a deep breath and think about what we're doing. Blaming K-12 educators for sending us underprepared students isn't very credible when, to a surprising extent, we simply validate their outcomes.

Pedagogy

Surveys of student engagement repeatedly indicate that first-year students put in nothing like the mythical two to three hours of out-of-class preparation for each hour in class. Indeed, many students spend fewer hours studying outside of class than they spend in class during the week. The 63/9 split is relevant here: Do you pitch your course to those students who will do the work outside of class? ("Teaching to the six," as Michael Bérubé once called it.) Or do you try to make the course manageable by more students?

The split suggests that the latter strategy is a good example of the fallacy of good intentions. You can craft an intro course so that more students pass it, but such strategies smack of social promotion: students who aren't adept at managing college work in the first semester are going to continue to struggle. What's necessary instead is a pedagogy that bootstraps students into the desired study habits. Technology can help: required posts to a class discussion board or blog, social bookmarking tools that build a community of inquiry, course management software that grades simple quizzes for you -- all of these can help students learn how to prepare without necessarily consuming vast quantities of time.

We can decry a generation brought up believing in the myth of multitasking (and that myth has done our students real harm), but unless we systematically design courses to inculcate sustained attention -- and then reward that attention by making class time intellectually meaningful -- we're not really contributing much beyond gripes and moans.

Politics

Assessment in college differs from assessment in elementary and secondary education, because college isn't mandatory. We control much less about our students than did the parents and teachers who taught them (or didn't) over the previous 18 or more years. The choices of young adults drive their success far more than anything we offer.

It's true that legislators, tuition-payers, and future employers of our graduates have the right to demand effective teaching. But we can't teach students who are forced to work 35 hours a week while they're in college. We can't teach students who don't have access to affordable, reliable daycare. We can't teach students who have significant health concerns. The rhetoric of assessment is all too frequently pitched at whipping those tenured layabouts -- or, worse, tenured radicals -- into compliance. But turning any college into a legislators' paradise -- 5/5 teaching loads taught by contingent faculty -- won't demonstrably improve student success. Effective assessment of colleges and universities needs to be thought of as promoting learning, not as disciplining unruly faculty.

Many faculty members are suspicious of assessment, whether for ideological reasons or because they perceive it as an unfunded administrative mandate. And faculty hear numbers, especially subpar numbers, as an indictment of their expertise or their empathy for students. I have reacted this way myself. Now, however, I try to remember that numbers are an opening salvo, not the final word: We've got a measurement -- how do we improve it? That number looks bad -- but what are its causes? Is the instrument measuring the right thing? Are we administering it in the best way? Are we making sure there's a tight fit between assessment measures and intended learning outcomes? Until we begin to think clearly -- both within departments and across schools, and even across peer institutions -- about what our students are up to, our own cultural position will continue to seem in crisis.
