DURHAM, N.C. -- Over the last two years, as political pressure has intensified on colleges and universities to better measure and document their success in educating students, leaders in higher education have urged patience.

In response to assertions that colleges don't do enough of this, they point to scores, if not hundreds, of examples in which individual departments, programs or colleges have used existing assessment tools or developed their own to gauge their effectiveness in imparting learning -- letting "a thousand flowers bloom."

And while many higher education leaders and faculty members reject the suggestion that colleges should agree on a handful of common measurement tools so that consumers can compare one institution against another, even those academics who favor such common measures insist that convergence will happen over time, as some of those "thousand flowers" emerge as "best practices" that become widely embraced and used.

A small group of prominent researchers, foundation officials and association leaders has been gently imploring colleagues across higher education to take a more aggressive, "systemic" approach, arguing both that measuring student learning more systematically is the right thing to do, and that failure to do so will lead impatient politicians to impose their own -- inevitably flawed -- methods on colleges.

Over the weekend, at a meeting sponsored by the Teagle and Spencer Foundations, advocates for that view -- true believers in the value of assessing the quality of student learning in liberal education -- gathered here with two key purposes in mind: to figure out how they themselves can better do what they're already doing and to develop ideas for spreading the gospel to others.

The main business of the two-day meeting focused on the former, as officials from two dozen liberal arts colleges brainstormed, traded advice on what works and what doesn't in the classroom, and encouraged and exhorted each other.

But more quietly, a much smaller group of association presidents, foundation leaders and others reached an agreement in principle to create some kind of new national organization focused on helping higher education, defined broadly, develop a collective and sustained approach to measuring how successfully students learn, and to increasing that learning. Details on the exact structure and mission of the new entity remain to be worked out, but the agreement represents an advance in the effort to coordinate a broad range of higher education leaders, especially because those around the table included Molly Corbett Broad, president of the American Council on Education, which has hung back from some previous efforts rather than joining in.

The ultimate goal, said David Paris, a professor of government at Hamilton College and a Teagle consultant leading the organizational effort on behalf of the foundation’s president, W. Robert Connor, is “being able within the next 3-5 years to say confidently to the public and public officials we have engaged in systematic and even systemic improvement,” not only among the “true believers” at the Teagle meeting “but across the country.” That could be accomplished in large part, Paris suggested, by “harvesting ... some of the 1,000 flowers” now being nurtured on individual campuses.

On the divisive question of whether colleges are doing enough to ensure that their students are developing the skills they’ll need to enter the work force and be productive citizens, most of the people gathered in Durham this weekend have been among those most willing to accept the idea that higher education must do more.

Groups such as the Association of American Colleges and Universities and the Council for Higher Education Accreditation have argued (most notably in a “statement of principles” in January) that while higher education itself, not the government, must take primary responsibility for ensuring and proving that colleges provide an excellent education, institutions must set clear goals for student learning, gather more data about their successes and failures, and use that evidence not just to improve themselves internally but to prove themselves to the public.

It is that last point -- that colleges should assess the quality of their teaching and learning not only for themselves and their students but also for their various public constituencies -- that has troubled many academics, and that point of view was represented even at a conference dominated by believers, as this weekend's gathering was.

"My concern is that our focus on improving student learning needs to be driven by institutional mission rather than an effort to appease external audiences who may not understand our missions," said Peter H. Quimby, deputy dean of the college at Princeton University, who expressed reservations about student learning assessment broached in the context of accountability. "When we buy into the market-oriented rhetoric of accountability, value added, and cross-institutional comparisons in order to placate others, we run the risk of making it harder to engage faculty members in conversations that are both meaningful to them and helpful to our students."

Such cautions are common among many higher education audiences, but they were a minority opinion at the North Carolina meeting, where many of the sessions and most of the speakers examined how, not whether, to step up measurement of student learning.

The opening night's keynote speaker, Derek C. Bok, the former president of Harvard University and author of Our Underachieving Colleges (Princeton University Press, 2005), exhorted faculty members to overcome what he called a "conflict" between the values they say they hold dear and their actual behavior. Professors "believe in the scientific method," he said, but are disinclined to apply its rigors to assessing what works and what doesn't in their own teaching. And many faculty members who say they care about the quality of their students' writing and about whether their students learn to think critically are still sometimes reluctant to measure whether those goals are being met.

Campus and faculty leaders should "establish a cult of continuous improvement," Bok said, which "starts with identifying what the weaknesses are through evaluation" and "working with the faculty through experimentation and enlightened trial and error to improve." Academic leaders should identify "respected faculty members to help develop the measures" that they and their colleagues will use, "make sure that the results are brought up and discussed," and provide "modest funding for individual faculty members who want to experiment," he added.

When faculty leaders are confronted with evidence showing that students are falling short in key areas, and "realize they can't explain it away," Bok said, "they have to do something, they can't just walk away," because they care very much about their jobs.

If enough colleges get serious about assessing student learning on their campuses -- joining the "1,000 flowers" that higher education leaders say are blooming -- a fruitful "competition of ideas" will emerge, argued Robert J. Thompson Jr., a professor of psychology at Duke University who is leading a joint Teagle/Spencer initiative designed to help get major research universities on the assessment train, which they have been slow to board.

Rather than abandoning the approach of letting many individual colleges and groups of institutions work on their own mechanisms for measuring student learning in favor of imposing a few models from above, Thompson said, the question should be, "How do we increase the rate of 'harvest'" of the many good ideas being developed, so that a menu of best practices emerges?
