When you’re at a higher education meeting these days and the topic is assessment, it’s a safe bet that the Secretary of Education's Commission on the Future of Higher Education factors prominently in the discussion.
But at a session Thursday of the American Anthropological Association, there was nary a mention of the federal panel that framed the debate on learning outcomes and value added during its run last year. Instead, there was plenty of griping about the university power structure, much skepticism about the assessment process and a consensus that faculty must take ownership when evaluation takes place.
Panelists noted that many college faculty members -- themselves often included -- view assessment as a threat. The threat comes not from federal agencies, they said, but from accrediting groups and administrators. College leaders pressure professors to measure the quality of their courses using quantifiable methods. Curricular committees form, a report is produced and everyone goes on their merry way. It's a top-down process with little faculty buy-in and no meaningful outcome, the time-tested complaint goes.
Even as they voiced those concerns, the anthropology professors gathered for the session said it's time for a change. Peter N. Peregrine, a professor of anthropology at Lawrence University, said assessment works best when faculty members are involved and it's not a top-down mandate. They need to be the ones asking questions of themselves, each other and their own students. The questions could concern the utility of a single assignment, he noted, or the direction of an entire program. Either way, the assessment questions professors come up with are almost always different from the ones administrators ask.
"They tend to be more specific, personal and much less generalizable than administrative ones," Peregrine said. And the fact that administrators are the ones who most often end up setting the agenda explains why assessment tends to follow a "rather halting pattern," he added.
Peregrine cited a recent example that he said demonstrates why top-down mandates are ineffective. Three years ago, Lawrence established the Office of Research on Academic Cultures and Learning Environments (ORACLE) as a way to get faculty more involved in the assessment process and to provide undergraduates with research opportunities. The university's accreditation review was coming, Peregrine said, so why not be prepared?
As coordinator of the office, Peregrine invited two students to research whatever topic they wanted related to assessment. They looked at how individualized instruction offered at Lawrence played into students' admissions decisions, and why current students pursued independent study. Both undergraduates produced a senior thesis from their research, and Peregrine said he was pleased with their work. For the second year, he asked two new students to respond to a question: What impact does individualized instruction have on students' academic performance? Peregrine said that while the research revealed noteworthy trends, neither student researcher pursued the topic as a senior thesis, "nor was their work done with the same eagerness and professionalism."
Why? Peregrine said it's simple: Because he decided the topic, there was little student buy-in.
“I’m a skeptic of mandated programs for assessment,” said Frank Salamone, a professor of sociology at Iona College. “Once you let administrators determine what specifics a class should teach, you’ve lost control.”
Salamone, one of the panelists, said he's concerned that assessment often means more institutional bureaucracy, and that administrators often favor the easiest methods of evaluation -- multiple choice or point scales that don't account for nuance. “I know when I do a good job and I know when I do a bad job,” he said. “Why do we have to quantify everything?”
And even when faculty have some control over what questions are asked during assessment exercises, there's no assurance of student buy-in. Instructors at the University of Minnesota's Duluth campus helped implement an online system in which students assess themselves in categories such as "knowing yourself" and "communication to a general audience," as a way to determine what they are learning from courses. Students never took to the system, said Jennifer E. Jones, an assistant professor in the sociology/anthropology department at Duluth, and the process suffered.
Amid the skeptical voices, Susan Sutton, associate dean of international programs at Indiana University-Purdue University Indianapolis, said she learned a great deal about her department through its assessment exercises. Another panelist, Darlene Smucny, academic director of social sciences in the School of Undergraduate Studies at the University of Maryland University College, said she's found it valuable to give out common exams in large courses that are often taught by adjunct instructors. It's a way to measure whether faculty members are looking at similar learning outcomes, she said.
Peregrine and others at the session said they would like to see the anthropology association publish suggested learning outcomes, and to see professors list their objectives in course materials. That happens at Central Arizona College, where instructors use the same template and publish on their course Web pages what students are expected to learn. It's a helpful exercise when it comes time for accreditation review, said Maren Wilson, a professor of anthropology there.
Smucny said she and others at Maryland have yet to find a common test to give for anthropology 101 courses. Panelists said that's common in a field that prides itself on curricular diversity.
“I’m not sure we’ll ever have an across-the-board system that all anthropologists buy into,” Salamone said.