
At another time, in another place, the conversation that unfolded in a conference room at the Washington office of the Educational Testing Service on Monday, about national efforts to measure student learning, might have focused on the sort of arcane concepts that usually dominate discussions about testing, such as "design," "validity" and "reliability." But coming as it did just days before the federal higher education commission was set to gather less than a mile away to (in all likelihood) approve a report that calls for a national accountability system, the ETS discussion was, for better or worse, about the politics of the possible -- and the impossible.

Nominally, ETS brought together a small group of accreditors and higher education association officials to discuss the testing service's recent report, "A Culture of Evidence: Postsecondary Assessment and Learning Outcomes," which recommends that higher education leaders work together to create a "comprehensive national system for determining the nature and extent of college learning."

Mari Pearlman, senior vice president for higher education at ETS, acknowledged that the testing service had been motivated to explore the concept by the work of the Secretary of Education's Commission on the Future of Higher Education, which itself has been making the case for some sort of national system for measuring how successfully colleges educate their students.

"It is totally framed and contextualized by the work of the commission," Pearlman said of the ETS report. But its goal, she said, was not to endorse the federal panel's recommendations but to "help frame the conversation" for the commission's work within higher education, and to figure out "how we could get neutral ground on which to stand" in debating it. The testing service's report calls for the creation of a national system, overseen by the six regional accrediting groups, in which colleges would measure four aspects of student learning: workplace readiness and general education skills; content and discipline specific knowledge and skills; "soft" skills, such as teamwork, communication and creativity; and students' engagement in learning.

Although the ETS paper might be seen as aligning with the overall thrust of the commission's recommendations on measuring student learning, it became clear as Monday's conversation unfolded that the testing service's officials, and the accreditors and other college officials involved in the discussion, see peril in elements of the federal panel's ideas.

Not the least of those is the view, apparent particularly in the public and private statements of the panel's chairman, Charles Miller, that technology and test development have advanced so much in recent years that various aspects of student learning can now be measured. Miller and some others affiliated with the commission have argued, for instance, that the Collegiate Learning Assessment, a relatively recent entrant on the testing scene, has largely proven itself as a successful measure of critical thinking and other general education skills.

But "there is not a measurement sitting on the shelf that is ready to address all" of the goals of a student learning measurement system, said Carol A. Dwyer, a distinguished scholar at ETS and one of three authors of its paper. She and the other authors laid out the various ways in which tools and techniques do and do not exist to measure the various types of student outcomes that the paper argues higher education ought to measure.

While higher education is relatively close to being able to measure workforce readiness and general education (with the CLA leading the way) and student engagement (thanks to the National Survey of Student Engagement and the Community College Survey of Student Engagement), adequate ways to measure domain-specific knowledge and the so-called soft skills are a long way off, said David G. Payne, senior executive director for higher education and another of the ETS authors. It will take time to develop testing models that can successfully measure those skills, Payne said.

But "the danger" of the commission's aggressive push for outcomes testing, said Pearlman, is that "nobody's going to have the patience to wait for a model at all."

Perhaps the biggest disconnect between the "assessment" regime proposed by the Educational Testing Service and the "accountability" system endorsed by the Spellings commission lies in their aims. ETS conceives of its system primarily as a way for colleges themselves, and for the state and federal policy makers who oversee them, to gauge their performance and figure out how to do better. Only secondarily would the system serve as a "consumer tool" to help parents and students decide on the best college for them, a role that Miller and other commission leaders cast as by far the primary goal of their vision for the testing regime.

"It's a much more consumer defined accountability metric," Jane Wellman, a higher education consultant who has advised the Spellings commission, said at Monday's meeting. The more consumer oriented such a system is, Wellman and others agreed, the greater the push will be for having all institutions use comparable measures and make as much data as possible public.

Both of those thrusts could undermine the viability of such an accountability system, participants in the discussion said. The push for publishing more data will diminish support for such an accountability proposal among the college faculty members and administrators most responsible for carrying it out, said Steven D. Crow, executive director of the Higher Learning Commission of the North Central Association of Colleges and Schools.

And comparing one institution against another only makes sense if the measurement tools can truly tease out the skills that students have learned while in college, rather than those they entered with. Capturing that difference -- commonly referred to as measuring "value added" -- is an appealing but perhaps distant goal, though tests like the Collegiate Learning Assessment assert that they can gauge it.

ETS's Dwyer said that tests that claim to measure "value added" may be engaging in "overpromising." "A lot of these tests aren't really nuanced enough to say you're going to put everybody on the same scale," she said.

Crow and Belle S. Wheelan, president of the Commission on Colleges of the Southern Association of Colleges and Schools, both questioned whether some elements of higher education would tolerate a system that sought to make oversimplified comparisons about programs' quality and performance. Wheelan said that she "couldn't do diddly" about ETS's call for such an approach, or the federal commission's, unless her member colleges buy into the idea. Unless, that is, it is mandated from on high. "If [the Department of Education] puts it in, that's another story," she said.

But Travis Reindl, state policy director and assistant to the president of the American Association of State Colleges and Universities, warned that state and federal policy makers were getting tired of colleges' arguments that their operations were too varied and complicated to be captured by common measures. 

"The gas mileage we're getting out of the complexity argument is about to run out," Reindl said.
