Many people think they know what we should produce with the process we call a college education. Unfortunately, they don’t agree with each other, so the topic of measuring college success provides an endless opportunity for self-assured clarity about what is not at all clear. The current occasion for the revival of this topic, which has had various other high and low points on the national accountability agenda, comes from the Spellings Commission’s discussion and draft reports that call for colleges and universities to tell their customers what the college will produce for students.
This seemingly reasonable request is like most high-level educational principles: dramatic and simple in general, and remarkably complicated and difficult in the specifics. Let’s look at some of the complications.
The product of a college degree is, of course, the student. Many want to assure parents and other customers that their students will emerge from the process of higher education with a specific level of skills and abilities. Recognizing the difficulty and expense of enforcing exit testing on all students, some propose to test a sample of students and infer from the results an achievement score for the institution that customers can then compare with the scores from other institutions. Leaving aside for the moment the touchy question of exactly what we want the students to know, testing that produces a raw institutional score is not likely to work very well by itself.
Everyone knows that smart, well-prepared freshmen usually end up as smart, well-prepared graduating seniors. If students test well entering the institution, they are very likely to test well exiting it. Our egalitarian spirit worries that institutions whose students are less smart and less well prepared will necessarily score low on these exit tests in comparison to elite institutions with very well prepared students. The hope is that every institution that works hard to improve its students’ abilities should get a good score, because the idea of improvement inspires everyone. A method to ensure that every institution, whatever the initial quality of its students’ preparation, can score well on a national scale goes by the term “value added.”
Value-added methods attempt to measure the ability and preparation of students when they enter the institution, measure their ability and achievement as they leave, and then calculate an improvement score. The method ascribes this improvement to the wisdom and dedication of the institution (even if the achievement is actually the students’).
A value-added score, calculated using the same methodology for all American higher education institutions, would enable an institution with limited resources, one that admits students with very poor high school records and very low SAT scores but graduates students with pretty good GRE scores (to take one possible exit exam), to get a score of 100 percent because the improvement, the value added, is large. A college with superb facilities and resources that admits students with very high SAT scores and very fine high school preparation, and graduates students with very good GRE scores, could get a score of 50 percent because the improvement measured by the tests would be modest (from terrific coming in to terrific going out). Then, in the national rankings, the first institution could claim to be a much better institution for improvement than the second.
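To make the arithmetic concrete, here is a minimal sketch in Python. The institutions, the 0–100 test scale, and all the numbers are invented for illustration, and the formula used, raw improvement between entry and exit, is only the simplest of many possible value-added calculations. The point it demonstrates is the one just described: ranking the same two schools by value added and then by exit score reverses their order.

```python
from dataclasses import dataclass

@dataclass
class Institution:
    name: str
    mean_entry: float  # mean score on a hypothetical entry test, 0-100 scale
    mean_exit: float   # mean score on a hypothetical exit test, same scale

    @property
    def value_added(self) -> float:
        # Simplest possible "value added": raw mean improvement
        # between entry and exit testing.
        return self.mean_exit - self.mean_entry

# Invented numbers echoing the article's two institutions.
schools = [
    Institution("Open Access College", mean_entry=40, mean_exit=70),
    Institution("Elite University", mean_entry=85, mean_exit=92),
]

print("Ranked by value added:")
for s in sorted(schools, key=lambda s: s.value_added, reverse=True):
    print(f"  {s.name}: +{s.value_added:.0f} points")

print("Ranked by exit score (what an employer sees):")
for s in sorted(schools, key=lambda s: s.mean_exit, reverse=True):
    print(f"  {s.name}: {s.mean_exit:.0f}")
```

Real value-added proposals use more elaborate statistical adjustments than a raw difference, but the reversal is the same: improvement rewards the first school, while the exit score, the number an employer actually cares about, rewards the second.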
This arithmetic fools no one. It would actually tell consumers that the institution they want their students to enroll in is the one with high scores going in and high scores going out, not the one with low scores going in and medium scores going out. What matters, as everyone knows, is the score leaving the institution.
This approach also has the perverse effect of devaluing actual accomplishment and ability in favor of improvement. It implies that a student is doing just as well at an institution that graduates at the middle level of accomplishment (but with lots of improvement) as the student would do at an institution that graduates at the top level of accomplishment (but with less improvement).
It does the employer and the student no good to know that the student attended an institution that produces middle-level performance from very poor preparation. The employer wants a graduate with high performance, high skills, and high levels of knowledge and ability. The employer is less interested in knowing that the student had to work hard to become a middle-level performer and more interested in hiring someone who performs at a high level.
If we measure value added (by whatever means), we have to create a test for the end point: what the graduating student knows about the specific subjects studied and the specific major completed. When we test, on some national scale, what the student knows about the substance of the various fields of study, we will have a marker for achievement. Once we have this marker, no one will care much about the marker at the entry level. Everyone will want their student to be in an institution whose scores demonstrate high levels of graduating achievement. It may give struggling institutions a sense of accomplishment to move students from awful preparation to modest achievement, but it will not change the competitive nature of the marketplace, nor will it reduce the incentive to recruit the very best students, who will score high on exit exams even if they don’t improve at all.
In this discussion, as in all efforts to measure institutional quality and performance, nothing is simple, and no single number or measure will provide a national reference point for total college achievement. College, as so many of us repeat over and over, is a complicated experience. There is no standardized college experience.
What we have is a relatively standardized curriculum and time frame. We have a four- to five-year actual or virtual educational process for students pursuing a traditional four-year baccalaureate degree, a general education requirement and a major requirement, and a host of extra or enhanced experiences, optional or required, for students. Within these large categories, the experience, the learning, and the engagement of students vary dramatically from discipline to discipline within institutions as well as between institutions.
Much of the emphasis on accountability measurement has as its premise the highly destructive goal of homogenizing the content and process of American higher education so that all students have the same experience and the same process. This centralizing drive comforts regulators, but it does not reflect the reality of the marketplace. As we have emphasized before, the American commitment to universal access to higher education requires a high level of variability in institutions, in the educational process, and in the outcomes. We do need good data from our institutions about what they do and what success their graduates have, but we do not need elaborate, centralized homogeneity enforced by an ever more intrusive regulatory apparatus.