The nerve-wracking parlor game of choice for many people in higher education these days is trying to predict where the Secretary of Education's Commission on the Future of Higher Education is heading. But one thing has become clear: The panel, or at least its chairman, Charles Miller, believes that colleges must better measure the skills and knowledge they impart to students, and openly share that information with the public.
In its simplest form, Miller is advocating "testing" of what students learn while in college. Details -- on what measures to use, how to present the information and, perhaps most importantly, whether the testing would be encouraged or mandated -- are few at this point, though Miller pointed the way in a memo he sent last month to commission members and in some of his public comments.
The bottom line: He believes that effective tools for measuring student learning now exist, and that instituting an accountability system that measures and reports student learning is essential, for higher education and for society. "We need to assure that the American public understand through access to sufficient information, particularly in the area of student learning, what they are getting for their investment in a college education," Miller wrote in his memo.
What is less clear -- and this, ultimately, is the $64,000 question -- is whether such testing and reporting would happen at the national (or federal government) level. In other words, might the commission propose that all colleges use the same test, or set of tests, to measure their students’ performance, in a way that would let consumers and policy makers make direct comparisons among institutions? On that question the commission, and Miller, have been relatively silent, although when pressed at a meeting this month, the chairman said that he doesn’t “see any way to regulate” or “mandate” the collection and reporting of such information.
For most higher education officials, reaction to the commission’s possible proposal on testing rises or falls on the answer to that question. Many of them agree that emerging tools have made it possible to measure student learning in some critical areas and to begin to assess the “added value” that individual colleges pass on to their students.
And a growing number of college leaders (though not all) also agree that higher education institutions, individually and collectively, must do a better job proving to the public that they are successfully educating students -- partly because the current political and economic climate demands it, and partly because it’s the right thing to do.
"Higher education cannot drag its feet,” says Lee Shulman, president of the Carnegie Foundation for the Advancement of Teaching. “It is time for us to do comprehensive, multifaceted assessments [of what students learn on campuses], and make that data public.”
Support for the idea tends to fall apart, however, at the notion of creating a national -- or certainly a federal -- standard that would apply similarly to all colleges, and that, in the worst case, might eventually be used as a basis for rating or even rewarding or punishing colleges. Here, the specter of the Bush administration’s No Child Left Behind program in elementary and secondary education looms large: College officials fear an overly simplified, one-size-fits-all approach that can’t possibly capture the differences in the missions and student bodies of major research universities and community colleges, liberal arts colleges filled with 18- to 22-year-olds and adult-focused for-profit institutions.
“Trying to create an über-instrument where we simply draw the line and say, ‘This is the measurement,’ will be a grave disservice to the individuals, the institutions and the country,” says David L. Warren, president of the National Association of Independent Colleges and Universities, which represents 1,000 private colleges (where opposition to the testing idea is strongest, as private institutions are less accustomed than public ones to such scrutiny). “We will get a meaningless outcome at a great cost.”
Avoiding that and finding some kind of middle ground -- identifying a meaningful way of reporting student achievement while avoiding the trap of an oversimplified national standard -- may be the single biggest puzzle facing the commission.
Accountability Pressure Grows
The push for higher education accountability is hardly new. State legislators and members of Congress, accrediting groups, and others have for years been pushing and prodding colleges to justify the mammoth influx of public funds by proving, in measurable ways, that they are successfully fulfilling their many, varied missions. Although many individual state college systems, accreditors and institutions have crafted sets of data aimed at gauging various aspects of institutional success, academics have largely rebuffed calls to measure learning in a systematic way.
"Higher education has deflected the idea for the past quarter century by arguing that the kinds of things we want undergraduate education to teach are not really measurable,” says Patrick Callan, president of the National Center for Public Policy and Higher Education. “There’s been this idea that we’ll just pull some standardized test off the shelf, resulting in a dumbing down of what higher education means.”
The situation is changing in two ways. First, the pressure on higher education to prove itself is mounting, driven most significantly by perceptions that America’s economic competitiveness is slipping as other countries invest more heavily in higher education. Adding to the scrutiny was last fall’s release of a federal study that found only a quarter of American college graduates to be “proficient” on a set of literacy measures. The results were seen as evidence by observers -- including Miller, chairman of the federal higher education commission -- that colleges may not be serving their students well, and that the only way of knowing for sure would be to measure student learning more directly.
The other significant change in the climate is that years of research into assessment have, by most accounts, greatly improved the tools available to measure what students learn. From the National Survey of Student Engagement to a slew of institutionally developed exams to the Collegiate Learning Assessment -- which is emerging as a favored test in several state and national efforts to measure student learning -- “the assessment business has become hugely more sophisticated,” says Callan.
“It has now been demonstrated that it is possible to measure what students learn, and we can no longer rest our case on the argument that it’s impossible,” he adds.
Miller seems especially enamored of the Collegiate Learning Assessment (CLA), which was developed by the Rand Corporation and is now administered by the Council for Aid to Education. Until recently, the exam had largely flown beneath the radar, as its makers kept a relatively low profile while seeking to build what they consider an airtight case for its effectiveness. But the CLA's supporters have been promoting it aggressively in recent months, and states such as Texas and groups of private institutions have incorporated it into their initiatives to assess student outcomes.
The test aims to measure students’ critical thinking, analytic reasoning, and written communication skills through a series of “performance tasks” and “writing prompts.” In one sample question, students are presented with newspaper articles about the crash of a private plane, federal reports about the accident, charts and other information, and are asked to write a memorandum for a company contemplating buying the kind of plane involved in the accident. Test takers are also assessed on how well they can support or critique a stated point of view.
While its sponsors eventually would like campuses to use the CLA longitudinally -- measuring the same group of students as they enter as freshmen and when they leave as seniors -- institutions now typically give the test to a sample of 100 freshmen and 100 seniors.
Richard H. Hersh, the former president of Trinity College and of Hobart and William Smith Colleges, who co-directs the CLA project, says he and others responsible for the test believe they are producing “valid and reliable data now” that show it is possible to measure “real learning gain as a function not only of the fact that you’re in college, but where you attend.”
Fans of the test are already sold. “We believe that the CLA provides a robust and flexible tool that allows higher education institutions of a very wide range of characteristics to assess specific kinds of cognitive outcomes among students -- the kinds of macrolevel changes in students that you hope will happen over the course of a four-year education,” says Geri H. Malandra, associate vice chancellor for institutional planning and accountability at the University of Texas System, which has incorporated the CLA into its system for assessing the performance of its nine institutions on a slew of measures. (Miller, the chairman of the federal higher education commission, helped to get the system in place when he headed the Texas Board of Regents.)
But Malandra acknowledges that “there isn’t any approach to learning assessment that would be sufficient on its own,” which is why the Texas system incorporates students’ scores on the National Survey of Student Engagement and on state certification exams, among other factors, into its assessment of what students learn at its institutions. “The literature on assessing student outcomes is very clear on this -- multiple frames [of measurement] is what you need.”
Or, in the words of Carol Geary Schneider of the Association of American Colleges and Universities, who sits on the Council for Aid to Education's board: "I like the CLA, I think it's a breakthrough, but it is by no means the solution."
A 'Marriage of Insufficiencies'?
That last point is crucial, even to the growing core of people in and around higher education who agree with Miller that it is possible, and politically necessary, for colleges and universities to better measure their success in educating students.
Robert J. Sternberg, the renowned psychologist and new dean of arts and sciences at Tufts University, largely agrees with Miller that parents, students and lawmakers are “paying a lot of money [for higher education] and they ought to know what they’re getting.” But an accountability system done badly could be worse than no system at all, Sternberg says. The CLA is “good as far as it goes,” he says, but it is far too narrow a measure of what students should learn in college. It ignores crucial skills such as creative thinking and the ability to collaborate with others -- skills that Sternberg hopes to capture in a project he is beginning at Tufts that defines “‘value added’ broadly.”
As president of the Teagle Foundation, which is sponsoring a series of grants in “value added assessment,” W. Robert Connor, too, calls it “an impossible position” for faculty members and college administrators to “say that we don’t want more knowledge about our students and what they’re learning.” But Connor also agrees with Sternberg that despite “really good progress” on the CLA and other assessment tools, “there’s a lot that’s still to be done” in developing an effective system for measuring student learning.
Connor recognizes, he says, that higher education leaders “can’t sit there and say, ‘We’ll have better instruments in 10 years, so let’s wait until then’” to put in place an accountability system. But it would similarly be a mistake for the federal commission, or anyone else, to impose a top-down, flawed solution on academe, which is why, he argues, “the way to deal with this is for higher education to get out in front and do it right.”
College leaders, urged on by the commission from its bully pulpit, “can do a much better job of assessing students’ progress in the development of important cognitive skills, using those measures that are available now while at the same time pushing ahead on finding other, better measures,” Connor says. Shulman of the Carnegie Foundation agrees, saying that “there ought to be an expectation that every institution takes on responsibility for demonstrating the ‘value added’ of their educational programs for their students, but recognizing that institutions vary so enormously in who their students are, and what their missions are.”
Advocates of this approach point to efforts like the one adopted by the State Council of Higher Education for Virginia, which in 1999 began requiring public institutions there to gauge and report their own performance in a range of areas, including student learning, but left it to the individual institutions to decide which measuring sticks to use. Or, Shulman says, groups of institutions might collaborate to identify what he calls a “marriage of insufficiencies” -- a “carefully designed suite of assessments,” each of which might be “deeply flawed,” but which “collectively is a robust and most sensitive set of measures.”
Foot dragging by colleges will not do, Shulman says. “The challenge is for higher education institutions to make some proposals that would guarantee that within a year, you begin to get policy-useful data generated by some of these approaches.”
If Shulman, Connor and others represent higher education’s attempt to find what Schneider of the Association of American Colleges and Universities calls a “middle ground” on the testing issue, Stanley N. Katz reflects the reality that many people in higher education would prefer a more combative approach.
“If you think that No Child Left Behind is good for the schools, you’re likely to think this is good for the colleges,” says Katz, a professor at Princeton University and former president of the American Council of Learned Societies. Any attempt to define across the great variety of higher education institutions a common set of standards or measurements of what students should learn, Katz says, will be doomed: "Either there won't be agreement, and it will be overly controversial, or it will be reduced to an elastic, lowest common denominator, as in No Child Left Behind, in which case it will become trivial."
Katz insists that he is not saying colleges do not need to be accountable -- he just thinks they already are, held accountable by the students and families who continue to pour into the institutions. "There will always be legislators and legislatures that would like more bang for the buck, bigger results for less money," he says. "But I think the public is quite satisfied with what higher education is doing on the whole. This is a market system, and the customers are buying. We have by a considerable measure the finest system of higher education in the world. And if that’s the case, this is an ‘ain’t broke, don’t fix it’ situation."
He adds: "And I think some well-placed university presidents ought to pull up their socks and say that."
If college leaders agree with Katz, few if any have been willing to say so publicly thus far, partly to avoid prejudging the work of the federal commission and partly out of fear of looking like knee-jerk obstructionists. But over the next few months, as the panel crafts the recommendations that are due to Education Secretary Margaret Spellings by August 1, one set of higher education officials may find themselves in an especially well-placed (but perhaps unenviable) position: those current and former college presidents who are members of the federal commission.
To date, panel members like David Ward, president of the American Council on Education, and Charles M. Vest and James Duderstadt, former presidents of the Massachusetts Institute of Technology and the University of Michigan, respectively, have largely refrained from any overt criticism of the testing concept, preferring gentle suggestions that the panel favor exhortation and agenda setting over top-down mandates. And Miller, the commission's genial chairman, has so far said the things they want to hear in response.
But should the panel begin to shift its focus toward a more "consumer friendly" approach that would allow easy comparisons across institutions -- which would only be possible through a commonly applied set of standards and measurements -- the college leaders on the panel will be in the hot seat.