
Three months ago, an association of state universities stepped out in front of other higher education groups by announcing that it was exploring the possibility of creating a new, voluntary system to define, measure and make public how well its members educate undergraduates.

Today, the group, the National Association of State Universities and Land-Grant Colleges, is providing new details about the plan, which suggests that its members collect and make publicly available a common and clearly defined "bundle of information" including consumer information about costs and graduation rates, data about the "learning climate" on campuses, and information on how students perform in college.

The paper, which the leaders of NASULGC produced in collaboration with officials at the American Association of State Colleges and Universities, is still very much a work in progress. The two associations' member institutions have not yet had a chance to weigh in fully on the proposal; both associations' provosts' groups meet this month and are expected to review the plan carefully.

And the document leaves many questions unanswered, including such crucial ones as exactly which tests might be used to measure student learning and whether the information prepared by colleges would be collected and published by a central source. The Secretary of Education's Commission on the Future of Higher Education, which has been pushing a national accountability system for higher education, has recommended the creation of a federal database to which institutions could report such information.

Peter McPherson, who has made accountability a major focus since he became president of NASULGC last year, made clear in an interview Monday that the association's proposal complemented, but was not driven by, the federal commission's work. "We'd be here talking about this issue whether or not there was a commission," he said, noting that both his group and AASCU, whose members overlap with NASULGC's but lean more toward regional state institutions than flagship universities, have published papers stressing the need for public colleges to better measure and reveal their successes and failures in educating students.

Added Travis Reindl, director of state policy analysis and assistant to the president at the state college group: "This signals very clearly to the broader environment, both policy makers and the higher education community, that the public sector is going to be proactive about this. The public sector is not going to wait for the secretary's commission to say, 'This is the path.' " The colleges are setting out their own path, he said.

The paper being released today, which McPherson co-wrote with David Shulenburger, former provost at the University of Kansas and NASULGC's new vice president for academic affairs, lays out that course in significantly more detail than did the earlier paper, which was released in April. A new accountability system is necessary, the new paper argues, to satisfy public colleges' three major constituents: students and their parents, faculty and staff members, and policy makers and politicians. Although colleges already generate a wealth of information about their performance, the NASULGC paper argues, it is insufficient for two reasons: (1) the information is often incomparable from institution to institution, because institutions collect it using differing definitions, and (2) much of the data is not released to the public, in large part because institutions believe it will be misunderstood or misused.

The NASULGC paper acknowledges those concerns, arguing that "only comparison among comparable universities is appropriate," and saying that the groups "vigorously oppose creating any overall ranking scheme based on the bundle of accountability measures we recommend here."

But while it shuns the idea of creating a single index, the "bundle of measures" that it endorses would go a long way toward providing the kind of overall sense of institutional effectiveness that the chairman of the U.S. commission, Charles Miller, has urged. The measures fall into three categories: consumer information, campus learning climate, and educational outcomes.

The consumer information might include "various cost figures, degree offerings, living arrangements, graduate placement statistics, graduation rates, transfer rates [and] employment rates" (though the authors note in an appendix that information on employment and earnings measures is "particularly problematic," because colleges and states have great difficulty tracking how college graduates move through the workplace). The paper also notes the flaws in the current federal methodology for calculating graduation rates, and says that the state college groups favor discussions about creating a national "unit records" database that would allow policy makers to better follow students on their increasingly circuitous routes through higher education.

Most public colleges already collect and even publish most of this consumer information, McPherson said, but there is meaningful work to be done in standardizing how they define and collect it, to make it more valuable.

Many institutions also already have good information on the "learning climate," either through the National Survey of Student Engagement or the Cooperative Institutional Research Program. Such information "helps faculty and staff to know whether their students are being expected to read, write and participate in class discussions as much at their school as at comparable universities," and "whether their university's environment creates students reasonably engaged with their institutions," the report says. But while many institutions collect this information and use it for internal purposes, to improve their own performance, relatively few make it public.

The thorniest set of measures, the report acknowledges, is in the area of "educational outcomes" -- particularly trying to gauge the quality of the "general education" institutions offer. Part of the problem is figuring out what to measure: critical thinking, analytical reasoning, written communication? The answer to that question influences which tests might be used: the Collegiate Learning Assessment, the Measure of Academic Proficiency and Progress, and the Collegiate Assessment of Academic Proficiency, to cite three the NASULGC report mentions. The paper says it is too early to decide which test or tests should be used, but suggests that whichever ones are chosen, college leaders must ensure that what is measured is the "educational value added" by the institutions, by taking into account the academic quality of students as they enter.

"Over the next few months, this topic of what test we use, and how we use it, is going to take the largest single chunk of our time," McPherson predicted.

But the groups "can't wait until we have the perfect one to do something," Shulenburger added. "We'll have to choose one, after careful study, and over time, we'll perfect it and move toward an instrument that really works."

Another major unanswered question, the NASULGC and AASCU officials acknowledged, is whether and how the greatly improved information that the institutions collect and produce would make its way to the public. "Once you've agreed on what data to collect, and on the format for collecting the data so that they are consistently collected and reported," McPherson said, there will be no shortage of entities willing to make it publicly available.

But the paper suggests one possibility that has the potential to transform the public colleges' proposal from one that applies only to those institutions into an accountability system with broader reach in higher education. Citing the "quality assurance role of major importance" that the six regional accrediting associations play, the NASULGC paper argues that those groups "should consider substituting the resulting set of accountability measures for the measures they now require."

Such an approach, if adopted, would mean that the bodies that oversee quality for virtually all institutions of higher education -- public and private, two-year and four-year -- would use the same set of indicators. That would be a significant first step toward the sort of national accountability system some leaders of the federal commission favor -- and that some college officials, particularly those at private institutions, greatly fear.
