WASHINGTON -- Last week’s annual gathering of the Association of American Colleges and Universities here may have left you with very different impressions of the state of student learning assessment in higher education, depending on which sessions you sampled.
If you sat in on the many presentations by campus officials talking about their efforts to engage students, improve retention and measure their results, you’d have been left with the unmistakable impression that there are lots of individual faculty members, departments and colleges very much dedicated to measuring how successfully their students are learning and using that information to improve the quality of the education they provide. The presentations gave the lie to the arguments of critics that college administrators and professors are casually indifferent to whether their students are learning, loath to analyze their own performance, and unwilling to change.
But if you sat in on sessions at AACU (and at this week's annual forum of the Council for Higher Education Accreditation) in which policy makers and outside critics talked about the national environment around student learning outcomes and assessment, it was equally clear that major questions remain about just how serious higher education as an industry has gotten about these issues.
Do the multitude of individual campus efforts amount to a comprehensive effort to change practices within higher education? And is the progress -- without something that ties it together nationally -- likely to satisfy external pressure from politicians and others on colleges to prove that they are giving students the skills that they (and their eventual employers) want and need?
When the public looks at higher education, it sees "little evidence ... of the urgency of the need to change," Michele Cahill, vice president for national programs at the Carnegie Corporation of New York, said at an AACU session on policy makers' skepticism toward liberal education. "Higher education has been fragmented and idiosyncratic in its ability to change."
"We've got to end casual, undisciplined approaches to learning and assessment," added Paul Lingenfelter, president of the State Higher Education Executive Officers.
Echoes of 2007
The disconnect between those views and the beehive of activity on campuses might sound familiar to those who followed the intense debate within higher education, during the last years of the Bush administration, over the accountability push by Education Secretary Margaret Spellings' Commission on the Future of Higher Education.
Many higher education officials bristled at the commission's suggestion that colleges and universities were paying little attention to the academic success of their students, and viewed its calls for commonly accepted measures of student learning that would allow students, parents and policy makers to compare colleges to one another as leading to dangerous oversimplification.
The Bush administration's most aggressive effort to carry out the commission's accountability agenda, through attempts to change federal rules governing accreditors, was stopped dead in its tracks in 2007. But Spellings and her commission had the undeniable effect of accelerating the work by accreditors, higher education associations and others to prod colleges to step up their assessment activity, with several groups of colleges going so far as to adopt voluntary systems in which they agreed to use common measures of student learning to allow for the sort of comparability Spellings advocated.
Three years later, the situation has changed little, as evidenced by the discussions at last week's AACU meeting and debate at this week's accreditation forum. Yes, there continues to be a wide array of initiatives and activities -- on lots of individual campuses -- aimed at measuring how students are learning and using that information to change curriculums and teaching methods.
But despite continued pushing and prodding by groups like AACU and others, and efforts like the Lumina Foundation on Education's "tuning" experiment to develop statewide accords on the relevance and rigor of specific degrees, relatively little progress has been made to date toward forming a broad, cross-institutional consensus about what a liberally educated college graduate should know and be able to do, and toward more regularized reporting of how successfully colleges are producing graduates with those skills.
One possible explanation for the relative lack of movement may be that the political pressure on higher education to account for student learning outcomes has eased since Spellings and the team responsible for No Child Left Behind left office. The Obama administration largely focused elsewhere (on college access and completion) during its first year, perhaps signaling to college leaders (or at least the campus rank and file) that the accountability movement had faded.
But that is almost certainly a flawed assumption. While the new administration has indeed put its energies most visibly into other endeavors, it has quietly endorsed and expanded its predecessor's push to get states to build student databases that are designed, first and foremost, as accountability tools to produce better data on how students move (or don't) through the educational pipeline.
During the continuing negotiations over possible changes in federal rules governing the integrity of the financial aid programs, the Education Department is making various proposals that some college officials see as opening the way for states and the federal government to get much more involved in overseeing their institutions.
And at Tuesday's Council for Higher Education Accreditation meeting, Under Secretary of Education Martha J. Kanter echoed many of the criticisms that her predecessors in the Bush administration made of higher education's process of self-governance, saying that "accreditation is not transparent enough" and urging higher education to "join us in working toward a modern 'culture of accountability.' " Kanter said she believed the self-studies that colleges produce in accreditation should be made public, and urged accrediting agencies to open up the meetings at which they decide institutions' fates, as well.
So while many observers believe that this administration has more respect for higher education and is likely to be less heavy-handed in whatever pressure it puts on colleges than the last one was, they expect federal pressure to eventually intensify once again.
Accountability With a Smile
For that reason, among others, many higher education leaders argue that colleges and universities cannot afford to stop their own quest to develop meaningful evidence of student learning. State policy makers, parents and others -- troubled by continually rising prices and low completion rates, and worried about whether students are being well prepared for work and life -- are growing less and less willing to accept colleges' traditional "trust us" assertions that students are learning.
Despite the uptick in activity, "I still feel like there's no there there" when it comes to colleges' efforts to measure student learning, Kevin Carey, policy director at Education Sector, said in a speech at the Council for Higher Education Accreditation meeting Tuesday.
Views like Carey's, which are widely held by policy experts who look at higher education from the outside, tend to aggravate faculty members and other professionals in the industry to no end (the reception at the CHEA meeting was cool, to be generous), given how much assessment activity is unfolding on the campuses.
That's where the disconnect comes in. Most of the assessment activity on campuses can be found in nooks and crannies of the institutions -- by individual professors, or in one department -- and it is often not tied to goals set broadly at the institutional level. Some of it has been undertaken directly in response to the outside calls for accountability, and seems workmanlike -- testing or measurement done for measurement's sake.
In some ways that's not surprising, given that higher education is largely responding to the Spellings Commission's (flawed) approach of pushing a top-down assessment mechanism, Peter McWalters argued Friday at an AACU session. McWalters, the former commissioner of elementary and secondary education in Rhode Island, who now works for the Council of Chief State School Officers, said that the Bush administration's accountability strategy was ill-advised because it emphasized assessment over standards -- focusing on getting colleges to use common measurements of learning outcomes and envisioning a federal role in defining what students should know.
To be ultimately successful, any meaningful assessment effort must be embraced widely by instructors, said McWalters -- and to do that, "you've got to start this conversation as an instructional conversation that includes assessment," he said. It must begin with agreement (in a department, a college, and ultimately across a discipline or institution) about the learning goals that students should derive from the curriculum -- and then intensive work to infuse the skills needed to reach those goals into the curriculum, course by course, McWalters said.
Only by incorporating the learning goals into the curriculum, and using them to change and improve instruction, can assessment be useful -- and accepted -- on campuses.
But that sort of assessment alone doesn't meet what McWalters called the "other part of the test" -- the comparability goal on which policy makers insist to hold institutions accountable. "A legitimate process for evaluating learning outcomes," Carey told the CHEA meeting Tuesday, "has to ... be consistent, it needs to be understandable to someone other than the institution itself, and ... it needs to be judged relative to some kind of standard."
One way to achieve that, Lingenfelter of the State Higher Education Executive Officers argued at AACU, is by getting broad consensus (across swaths of institutions, or within academic disciplines) on a "coherent, concrete vision of what a liberal education is," so that the goals that individual colleges are infusing into their curriculums are common (or close to it) from institution to institution. AACU has undertaken work along those lines as part of its Liberal Education and America's Promise initiative, and disciplines such as engineering have moved in that direction, but the idea of a commonly embraced set of learning outcomes is far from reality (and strongly opposed in some quarters).
While the Bush administration often signaled that it favored standardized testing as the best way to persuade the public (and politicians) that meaningful learning is taking place, there is another way to validate what's happening in classrooms, McWalters said -- by making transparent the professional judgments that instructors make about their students' work.
Given the technology that is available today, he said, it is not difficult to imagine panels of experts reviewing the grades and scores that professors at different institutions have given to their students, with the goal of "anchoring" in the norms of the field the professors' judgments about how successfully the students have achieved a set of common learning goals. Countries such as Singapore and Ireland, he said, have adopted such approaches, "getting away from having no standards to having standards that are tracked either by testing or by professional judgment that is transparent."
"There are places in the world where the assessment instrumentation of choice is exhibition-oriented professional judgment [rather than testing], but assessment keeps anchoring the judgments" so that confidence develops that they have meaning beyond an individual institution, McWalters said. "You anchor the judgment by being public with others who share the responsibility for teaching and learning -- not the federal government, and not the testing companies."
That may not be the only way to build greater national confidence in what colleges are doing to measure student learning (the New Leadership Alliance for Student Learning and Accountability, for instance, is promoting the idea of a voluntary, "LEED-like" peer-review process through which colleges would seek certification of their assessment programs).
Colleges (and their instructors) are unlikely to be able to hide when outside accountability pressures next build on them, McWalters and others argued -- so wouldn't it make sense for them to build an assessment structure that they own?