
It has been eight years since Measuring Up 2000 awarded every state a grade of “Incomplete” in the Learning category to highlight the fact that the nation lacks consistent measures of student learning in higher education. Since then, the National Center for Public Policy and Higher Education has consistently reported progress on developing measures of student learning, culminating in Measuring Up 2004, when we reported state-level learning results for five states that participated in a national demonstration project. To signify progress, we awarded a “Plus” (+) grade to these five states and added six more “Plus” grades in Measuring Up 2006 to recognize states that participated in the National Assessment of Adult Literacy (NAAL) on a statewide basis.

We know that interest in assessment at the college level has grown in this period. The Collegiate Learning Assessment (CLA) and the National Survey of Student Engagement (NSSE) had not been launched when we began our effort. And stimulated, in part, by the report of the Commission on the Future of Higher Education convened by Secretary of Education Margaret Spellings, accreditors and institutions are far more serious about the need to look at educational results than they were back then.

But despite this apparent progress, an important dimension of collegiate learning has gotten lost in recent debates about assessing learning. That is the need for states and the nation to develop indicators of progress in building “educational capital” -- the levels of collective knowledge and skills possessed by their citizenry as a whole.

In K-12 education, the states have exit examinations that ensure that high school graduates meet minimum standards and that individual schools can be held accountable. But state leaders also rely on measures like the National Assessment of Educational Progress (NAEP) in order to benchmark progress, identify strengths and weaknesses, and compare themselves to other states. The Spellings Commission discussed these matters and recommended that more states follow the approach to measuring educational capital pioneered by the National Center’s five-state demonstration project. The commission also recommended increasing state participation in the NAAL, as well as administering it more frequently. We need these kinds of collective measures to keep our higher education policies pointed in the right direction and to tell us where we are strong and weak.

Unfortunately, as a nation, we appear to be going backwards on these types of measures. Only six states -- down from 12 for the 1992 assessment -- signed up to be “oversampled” for the 2003 literacy test, meaning they asked the test’s sponsors to collect enough data in their states to obtain reliable state-level estimates of literacy. In addition, a repeat administration of this important literacy assessment is nowhere in sight. Moreover, despite promises to do so, the National Center for Education Statistics (NCES) has yet to produce 50-state estimates of citizen performance on NAAL prose literacy, almost five years after the assessment was administered.

Meanwhile, the Organization for Economic Cooperation and Development (OECD) is moving forward with an international feasibility study on collegiate learning without a U.S. commitment to participate.

Individual state attention to this matter is equally uneven. A few states continue to assess students using established examinations for which national benchmarks are available. Among them are South Dakota, which requires all students attending public universities to achieve a certain standard on the ACT CAAP examination as a condition of graduation, and Kentucky, which will replicate a variant of the Learning Model developed by the National Center for its five-state demonstration project.

These states are joined by West Virginia, whose public institutions will administer the CLA on a statewide basis next year, and Oregon, which is experimenting as a state with portfolio measures in collaboration with the Association of American Colleges and Universities (AAC&U). But Arkansas abandoned its longstanding program of statewide testing centered on the ACT CAAP last year, and a recent SHEEO survey on this topic found state agency engagement in assessment to be at an all-time low.

At least as important, states are doing assessment, where they are doing it at all, to demonstrate institutional accountability. They are not measuring learning to determine gaps in what their college-educated citizens as a group know and can do, consistent with a public agenda for higher education.

This is equally true for the growing number of institutions that are holding themselves accountable through such initiatives as the Voluntary System of Accountability developed by the National Association of State Universities and Land-Grant Colleges and the American Association of State Colleges and Universities. However admirable these efforts may be from the standpoint of responsible institutional accountability, they provide little real information for policy making. And they are being undertaken largely for political reasons -- to blunt the recent attempts by the Department of Education to impose new reporting requirements about student learning through accreditation -- rather than as part of a broader effort to systematically improve instruction.

In short, events in the wake of the Spellings Commission have served to politicize public debate about information on student learning and attainment at precisely the point at which such information should be collectively owned and generated. Nowhere has this condition been more apparent than in the realm of developing longitudinal databases of students. At a time when more than two-thirds of students earning baccalaureate degrees have attended several institutions, we still lack the capacity to track student progress on a national basis because of political opposition masquerading as a concern about privacy. As 42 states have already demonstrated, higher education agencies using today’s information technology are perfectly capable of creating powerful student unit record databases that do not compromise security.

With America’s competitive edge in producing college graduates eroding steadily among our younger citizens, we need benchmarked information about student attainment and learning more than ever. In the past decade, we have developed the technical capacity to generate such information and the policy wisdom to use it effectively. But, as a nation, we are no further along in producing it in 2008 than we were in 2000, when Measuring Up first awarded every state an “Incomplete” in Learning.
