A Worldwide Test for Higher Education?
For much of the last year or two, debate has raged among American higher education officials and state and federal policy makers about the wisdom and practicality of creating a system that would allow for public comparison of how successfully individual colleges and/or programs are educating their students. Many college leaders have rejected the push, which has emanated primarily from the Secretary of Education's Commission on the Future of Higher Education and the U.S. Education Department, on the grounds that the nation's colleges and universities -- two-year and four-year, public and private, exclusive and open enrollment -- and their students are far too varied to be responsibly and intelligently assessed by any single, standardized measure (or even a suite of them).
But the thirst among politicians and others seeking to hold colleges and universities more accountable for their performance is powerful, and it is not merely an American phenomenon. Proof of that can be found in the fact that the Organization for Economic Cooperation and Development has convened a small group of testing experts and higher education policy makers who have met quietly in recent months to discuss the possibility of creating a common international system to measure the learning outcomes of individual colleges and university systems, along the lines of the well-regarded test that OECD countries now administer to 15-year-olds, the Program for International Student Assessment.
In fact, a position paper that OECD prepared last fall raising the prospect of creating a worldwide higher education assessment was called "PISA for Higher Education"; it said that creating such a system could serve as an antidote to the ever-growing number of national and international rankings, focused excessively on "inputs or research performance," that may "distort the allocation of resources in higher education to the detriment of teaching and learning." A direct way of measuring the learning outcomes of institutions across the globe, the OECD paper said, "could provide member governments with a powerful instrument to judge the effectiveness and international competitiveness of their higher education institutions, systems and policies in the light of other countries' performance, in ways that better reflect the multiple aims and contributions of tertiary education to society."
OECD officials say their work has been purely exploratory so far, although Barbara Ischinger, director of the organization's education directorate, said at a Tuesday briefing in Washington about the group's new statistical compendium that the panel of experts hoped to present a "feasibility study" for an assessment system to a meeting of OECD officials next January in Tokyo.
To some American higher education officials, many of whom are a bit gun-shy from their battles with the Bush administration over its push for a U.S.-based accountability system in recent months, the idea of trying to create an international one seems both wrong-headed and ill-fated.
"The conversations in the last year have underscored for many folks both the importance of addressing issues of student learning outcomes and the difficulty of finding a common instrument for measuring them in the United States," said Terry W. Hartle, senior vice president for government and public affairs at the American Council on Education. "If we haven't been able to figure out how to do this in the United States, it's impossible for me to imagine a method or standard that would work equally well for Holyoke Community College, MIT and the Sorbonne."
It is clear from OECD's own documents about the possible assessment system that even those engaged in the discussion recognize the potential hurdles that Hartle describes. The initial paper laying out the idea last October was filled with questions -- What to assess? Whom to assess? What is going to be compared? -- to be answered in an "exploratory phase," which has been undertaken by a panel of experts who have met twice and plan a third meeting in Korea next month.
And summaries of the first two meetings of the group suggest that the panelists spoke repeatedly about the potential difficulty of creating such a system. "The experts identified considerable challenges for the development of internationally comparative measures of higher education learning outcomes and acknowledged that there was no clear roadmap for overcoming these -- some compared the situation with when Columbus set sail," said the summary of the first meeting, which took place in April in Washington.
But the panel of experts -- whose U.S. representatives, at least, tilt heavily toward advocates for standardized testing, with three representatives from the Educational Testing Service, one researcher (Roger Benjamin) who is closely aligned with the Collegiate Learning Assessment, and a former U.S. Education Department official-turned-foundation executive, Marshall Smith of the William and Flora Hewlett Foundation -- also seem intent on making it work.
"None of the experts considered the goal unreachable and all recognized that reliable information on learning outcomes would only rise in importance," the summary of the first meeting said, "as higher education would continue to diversify [and] internationalize." American policy makers have increasingly sought ways of comparing U.S. institutions to their peers elsewhere as concerns have arisen about declining American competitiveness, for instance.
Outside the OECD's selected participants, other experts on testing and higher education assessment were generally skeptical about the prospects for a coherent and useful international learning outcomes system. Jane Wellman, executive director of the Delta Project on Postsecondary Costs, Productivity and Accountability, was the most favorably inclined, saying she was "sure you can do it," and that "it would be helpful if it developed."
Reviews were tougher elsewhere. Education International, which represents unions of teachers and educators in 169 countries, prepared an analysis of the idea that cited a wide range of practical and philosophical problems with the proposed approach.
Trudy W. Banta, a professor of higher education and senior adviser to the chancellor for academic planning and evaluation at Indiana University-Purdue University Indianapolis, said she believed that it would be extremely difficult to design one measure that could apply "across the many cultures, languages, institutions" that are part of the OECD universe. Beyond those practical concerns, though, Banta, who has written in Inside Higher Ed and other publications about the limitations of standardized measures of learning, expressed a more philosophical concern. "I'm afraid that everybody is looking for a silver bullet, a magic potion, that will tell them about quality" in higher education. "The latest tool in that arsenal is a standardized test," which inevitably results, she said, in oversimplified measurements of institutions', or in this case potentially countries', success or failure.
Clifford Adelman, a longtime Education Department researcher who has turned his attention of late to international higher education as a senior associate at the Institute for Higher Education Policy, said it would be extremely difficult to formulate any kind of measure that would deal with the incredible variation in institution types, student bodies, and other factors in the several dozen OECD countries. "There's too much going on that's different in terms of age of entry, social dimensions, who is seen as disadvantaged and why, the kinds of high schools they come out of, who are you testing, what institutions are they coming out of," said Adelman.
He and others also raised questions about the makeup of the panel OECD had drafted, noting the predominance of testing organization officials. "It's troubling that most of the people representing the United States in this have a stake invested in specific outcomes," said Hartle of the American Council on Education. "It does seem as if they've put together a group to give them the answer they want," he added, saying that college officials from the United States had been excluded from the conversation so far.
That is about to change, said Andreas Schleicher, who heads the indicators and analysis division of the OECD's Directorate of Education. Now that the panel of experts has done its exploratory work, he said in an e-mail message, "the next stage will be to involve [higher education] institutions to discuss some of these questions."