First they told us that assessment is necessary for improvement. And so some of us assessed. Colleges in certain states counted anything that moved; they then aggregated, analyzed and stored the data. State higher education officers report warehousing billions of data elements. Some schools improved, some didn't. Just like the other four or five thousand colleges and universities that didn't buy into the first call for assessment.
Then they told us the public wants to know. Some of us wondered why the public wasn't calling to find out. With 17 million people in postsecondary education, almost 60 million Americans are intensely focused on college (students, parents, spouses, high school students…). Sixty million people needing answers translates to a lot of telephone calls. Certainly more than the relative handful that accreditors have been receiving.
So the focus moved to "comparability" as the driving force for assessment and measurement, ultimately to be reduced by algorithm to a performance metric. A blizzard of "one size fits all" objections later, comparability likewise beat a hasty retreat.
The message regarding measurement is much more comforting now. "Do it because I say so," we hear in the tone of mothers immemorial. Comply with authority, we are being told, and comply we will!
Which raises the question of what to measure. There are at least 30 characteristics that describe the transformation one seeks in the college student ("critical thinking skills," desire for truth, healthy skepticism…). Many of these are further nuanced by the skills, content and features needed for success in each discipline.
It was different in the 1600s, when educated gentlemen emerged with the same content outcomes and the same scholarship skills. Nowadays there may be well over a hundred different popular programs in a single institution. The physician needs to learn to listen, while the lawyer learns to talk; the English major glories in understanding how Chaucer influenced the language, while the art major learns to communicate with a paintbrush.
Colleges are not fact factories, and proper measurement must be comprehensive, in context, and at length. The most successful assessment is done on a student-by-student basis. Every other measurement must be carried out by experts in the field, capable of capturing the unstated and comparing it to a norm developed over years of experience. Reporting this kind of measurement as a single number requires so many asterisks and context boxes as to sink the number in a sea of exceptions and explanations.
If we persist in emerging with a number, we risk misleading the public, which will arrive at conclusions based on a narrow slice of the total outcome. We risk distorting the teaching/learning process, because institutions will seek to excel in those areas that can be measured. And we face the danger of subduing the unique, the sparkling, the passionate, the unconventional, and the inspiring teaching that fits no mold but that often makes the greatest impact on students' lives.
Not that people aren't trying. By my count there are probably two dozen different kinds of measurements, tests, data-gathering initiatives and assessment schemes in effect, a variety that speaks to the ferment of a healthy field. Structured experiments are taking place, and answers with scientific validity will ultimately emerge.
But we aren't there yet, and this is why some of us are so opposed to the measurement of student learning that is being imposed on us. The same Department of Education that requires publishers of tests measuring students' "ability to benefit" from higher education to spend hundreds of thousands of dollars establishing the reliability and validity of those tests is encouraging the use of tests for student assessment purposes that are largely proxies for performance metrics, with no hard evidence of reliability, validity or relevance.
And that's why this whole scene seems to be unfolding backwards. Secretary of Education Margaret Spellings has had a more powerful impact on higher education than any of her predecessors. As a result of her initiatives, we have examined what we are all about, and if we disagree with some of her conclusions, it is a knowledgeable and respectful disagreement.
But it's time to pause. We cannot measure and we cannot produce performance metrics with instruments that are limited and limiting. We cannot expect researchers working alone to do the large scale experimentation needed to design measuring strategies that are comprehensive and that will stand up to scientific scrutiny. Nor can we expect publishers to undertake the vast expenses associated with establishing that their assessment products are reliable, valid, relevant and comprehensive.
Before we go any further, the Secretary must direct the necessary resources to this work. If she does, the resulting outcomes may turn out to be useful to postsecondary institutions, and therefore remain a permanent part of American higher education. If the department does not lend a hand, we will be left with a bag full of could-have-beens.