On many campuses, institutional research directors have long been the point people for assessing both how successfully students are learning and, by extension, where their colleges can do better still. They believe deeply in the importance of producing and sharing data that can help professors teach, and students learn, more effectively, and they take pleasure in seeing how small changes in curriculums or practices -- incorporating extra public speaking in a capstone course, for instance -- can significantly improve students' performance.
Yet as pressure has grown in recent years for colleges both to gauge their student learning outcomes with more objective measures and to report their results broadly -- assessment for external accountability, as opposed to internal improvement -- institutional researchers have increasingly found themselves caught in the middle.
On one side are politicians and other policy makers accusing college administrators, and especially professors, of dodging accountability because they are wary of measurement tools that would allow easy comparison among institutions. On the other are the faculty members themselves, the more moderate of whom warn that oversimplified assessment will provide no answers, while others, fitting the easy stereotype, resent any intrusion into classrooms where, they argue, they know in their hearts what works.
Dozens of institutional researchers gathered at the Educational Testing Service's conference center this week for a meeting, co-sponsored by the Association for Institutional Research, focused on that tension and how IR officers can navigate it.
"It's hard enough to help our institutions develop institutional assessment capacity, and it's hard enough to answer calls for accountability from policy makers," Victor Borden, associate vice president for university planning, institutional research and accountability at Indiana University at Bloomington, said in a lunch speech entitled "Measuring Success: Living Between a Rock and a Hard Place." "Trying to do both is very conflicting, and they can get in the way of each other."
"It's a hard gap to build bridges between," he added.
Yet that's just what institutional researchers are charged with doing, and at ETS on Monday they shared with each other, with a minimum of griping, strategies and advice for doing so -- including by using the external calls for accountability to their advantage internally.
David G. Payne, associate vice president for college and graduate programs at ETS, got a lot of heads nodding when he opened the day's discussion by describing the views on assessment held by a variety of faculty "types," including the "historians" who've seen previous calls for accountability and think they can wait this one out, and those who think assessment is somebody else's job -- the institutional researcher's, for instance.
Despite the easy stereotypes, which resonated with the IR folks for a reason, some of the assessment experts said it was unfair to portray professors as opponents of measuring their performance and that of their students.
"Many of our faculty have been measuring for a long, long time," said Dawn G. Terkla, associate provost for institutional research and evaluation at Tufts University. And with many other professors, she said, "it's not that they don't want to do it; sometimes they just need to be shown how."
Institutional researchers offered different strategies for involving faculty successfully. "Faculty ownership" is essential, said Beth Jones of West Virginia University, who noted that when the institution first sought to set up a committee for assessing learning outcomes in general education, administrators selected the faculty participants and, surprise surprise, "people didn't come to the meetings." The second time around, Jones said, departments solicited volunteers, and the result was far more engagement. "You need to find people who are interested, and directly involved in deciding what is the best approach to take."
Trudy Bers, executive director for research, curriculum and planning at Illinois's Oakton Community College, said that institutions seeking to involve faculty members in assessment work should "break the work into manageable tasks" unless they're prepared to provide rewards -- reduced teaching loads or other benefits -- "for faculty who take on really significant roles." About a third of institutions said they provided such incentives.
Even at institutions where professors are sold on the value of measuring their own performance, the goals are changing in response to accreditors, who in some cases are responding to pressure they are feeling from the federal government.
"We've been doing a lot, but the accreditors are forcing us to be more purposeful," said Terkla of Tufts. Added Karen Froslid Jones, of American University: "We're really aware of external pressures on us. We know that if we don't do a good job with the assessment, it will come back to us one way or the other."
Regis University has found itself facing "major issues around measuring student learning" with its accreditor, the North Central Association of Colleges and Schools, said Kimberly Thompson, director of assessment and college research there. The problem was not that faculty weren't measuring their performance; "we had pockets of excellence, certain departments that were doing an excellent job of assessing student learning, using multiple, direct measures of learning," Thompson said.
The pockets were just that, though -- islands unto themselves, she said. "They didn't have an opportunity to share what they knew and what they were doing with other departments. Our big challenge is taking the information and know-how we currently possess and sharing that with all our departments, and doing so in such a way that our faculty are driving assessment."
Bers said the challenge at Oakton -- and other institutions, too -- is to move the level of assessment "beyond the individual faculty member to the course or program level, or even the institutional level."
But as colleges and universities seek to respond to the pressure to find measures of student learning that will reflect outward, allowing for comparisons to other institutions, they are likely to run up against (often legitimate) criticism from professors and others that assessment measures that allow ready comparisons among differing institutions are likely to be simplistic to the point of meaninglessness, suggested Borden, of Indiana.
Borden urged the institutional researchers not to let external demands for accountability overwhelm their primary obligation to expand their own colleges' capacities to assess the quality of student learning -- but he also encouraged them to use the outside pressure to their advantage, one of several "points of leverage" they might use with recalcitrant faculty members.
"Use the external mandates to push the question," Borden said. "Ask, 'If these are not appropriate measures, what are? What data should we use?' "