
Something is certainly afoot. The public disclosure systems put forward by the National Association of Independent Colleges and Universities and the Voluntary System of Accountability from the country’s two big public university groups are major national initiatives encompassing some of America’s most impressive institutions. Miami Dade College’s effort to embed 10 desired learning outcomes into the curriculum, and a report of outcomes measurement by discipline, are two other accountability approaches that bear watching.

Even homey old IPEDS has put on a fresh coat of paint, as the Education Department’s College Opportunities Online database (COOL, née PEER) has become Navigator, with an even more attractive set of tools, in the hopes that this time a few more somebodies will use the hundreds of millions of data elements sitting patiently by.

All of this activity is more a function of the skill of Secretary of Education Margaret Spellings in moving her agenda than it is a recognition that there is merit to the numerical assessment of student outcomes.

One would have expected a brief hiatus, a quiet spell to see where this activity will lead, and whether or not it will produce outcomes useful to teaching, to learning, to higher education.

One would also have expected a great deal more caution in pressing the assessment agenda onto colleges given the experience of the last 20 years, with no outcomes to show for all the time, money and effort invested in assessment. It’s not to be. In fact, the rhetoric from Washington hasn’t let up – and is now abetted by voices urging an international assessment effort of the kind being examined by the Organization for Economic Cooperation and Development.

Assessment has virtually engulfed American higher education. Thus, many of the national conferences that should be convening America’s foremost educators to address issues such as access, the achievement gap and diversity are instead devoting numerous sessions to assessment. Most of these gatherings are devoted to “how,” rather than to “why” or “whether.” Nor is there any effort to discuss outcomes, policies, or improvements that have emerged from all of this costly and all-consuming assessment activity. All this activity, without a shred of evidence that the data we have collected – or will collect – will ever address “national needs” or “improve institutional performance.”

The colleges, too, have been diverted. Superimposed on each institutional mission is the need to produce outcomes to someone else’s satisfaction. Successful teaching, learning, and research aren’t enough anymore. Colleges must provide evidence of ongoing outcomes evaluation and of continuous improvement, or some such. One thing is clear: the student as a human being is no longer the sole product of the institution.

One example: a school creates a special curriculum with lots of individualized tutorial and counseling help. These are inputs, and therefore not considered in the measurement of student learning outcomes. If the school cannot demonstrate numerically that students benefit from the extra help, the whole effort is ignored by the assessment scheme, and successful (but not numerically measurable) outcomes for students are discounted.

In essence we are in danger of distorting the very nature of our colleges and universities.

Everyone teaches to the test, and therefore the test influences the curriculum, shrinking it by encouraging an emphasis on the items to be examined numerically. This applies to colleges and, of course, to faculty members, particularly the growing number of non-tenured adjuncts who are under pressure to show evidence of successful teaching outcomes. Whether explicit or not, the need to assess focuses our emphasis on what can be counted, rather than on what counts.

Assessment, as sometimes implemented, can monopolize the time and attention of a faculty. Does anyone have any idea how many meetings, conferences, and calls are needed for this purpose? Timetables and protocols, models and alternatives, memos, e-mails and faxes, reports, agendas and minutes, all within a specified format, shepherded by experts and consultants, and overseen by levels of administrators – draining away time and productive energy in an activity that has heretofore proven useless. This is all intended to be a permanent feature of college life, with continuous iteration of instruments and strategies following closely upon all the continuous improvements that are pouring in. [How many faculty members does it take to create a survey? Answer: We don’t know. We’re waiting for the answer to our survey.]

Putting aside the human and career costs to faculty, there is a real loss to students. Students need professors who talk to them between classes, in the hallways and in offices. They need relaxed, unhurried conversations that counsel, encourage, gently challenge and explain. Some need an extra few minutes at the blackboard after class to discuss something that wasn’t perfectly clear the first time. This human element is particularly important to students who are less confident, less secure about their place in a college, and usually first-generation. Paradoxically, this new assessment pressure on teachers is coming at a time when a new demographic is beginning to appear in our college classrooms.

The pain to students is likely to be much more direct, and much more widespread. The six-year graduation rate, for example, flies in the face of the need for 18- to 21-year-olds to be able to grow, to change their minds about a major, to discover new interests, new opportunities, and new paths to a career. Some need to take a year off, and to mature. Some of us succeeded precisely because we had that time, and nobody was standing by with a six-year stopwatch in hand.

The need to show strong graduation rates will ultimately determine who will be admitted to a program or school, and who will be counseled away from more challenging sequences (STEM, among others).

Retention rate considerations will also have a corrosive effect. Schools and faculty members like to view their role as one of serving, as part of a mission. Will this image, this attitude, survive in an atmosphere where retention rates determine a school’s success and, inevitably, its rank? With a retention mandate in place, how many faculty members will advise a student to transfer to another, more challenging school, write the letter of recommendation, and call colleagues at that school to encourage them to accept the student?

Student engagement is often important for students and exceedingly so for schools that use this characteristic as an outcome measure. Will all students be encouraged to “engage,” or will schools recognize that many of their young people need to hold jobs to pay for their schooling? I have had students who came to class virtually asleep on their feet, and others who needed every available moment to keep up with the work. Are these people going to be advised to become engaged?

Undergraduate research and civic service are other measurable outcomes that may create a conflict between the interests of the school and the needs of students. Clearly, such conflicts will be inadvertent and even unconscious, but they will increasingly appear. Finally, change will not be precipitous, although faculty members who have been tapped to contribute to the assessment effort will disagree vehemently.

For the most part we have embarked on a great social science experiment with today’s students paying a price for outcomes which may, at best, emerge in a decade or so. In a global sense, we may yet find some benefit from all this effort. For the student currently within the pipeline, the assessment movement will be all loss.
