In the movie "Ghostbusters," Dan Aykroyd commiserates with Bill Murray after the two lose their jobs as university researchers. “Personally, I like the university. They gave us money and facilities, and we didn’t have to produce anything. You’ve never been out of college. You don’t know what it’s like out there. I’ve worked in the private sector. They expect results.” I can find some amusement in this observation, in a self-deprecating sort of way, recognizing that this perception of higher education is shared by many beyond the characters in this 1980s movie.

Members of Secretary Spellings’ Commission on the Future of Higher Education were very clear about their expectations for higher education when they wrote, “Students increasingly care little about the distinctions that sometimes preoccupy the academic establishment, from whether a college has for-profit or nonprofit status to whether its classes are offered online or in brick-and-mortar buildings. Instead, they care -- as we do -- about results.”

This expectation for assessment as accountability has forced many faculty members and administrators to seek ways to balance assessment for “us,” or assessment for “improvement,” with assessment for “them,” or assessment for “accountability.” We do assessment for “us” in our classrooms to provide feedback to students on their progress, in our programs to provide direction for improvement efforts, for each other when we review articles, and for ourselves when we evaluate our own performance.

Conversely, assessment for “them” is done in response to an external demand to prove “how much students learn in colleges and whether they learn more at one college than another," as the Spellings Commission put it in its final report.

When we perform assessment for "us" we are not afraid to discover bad news. In fact, when we assess for "us," it is more stimulating to discover bad news about our students' performance because it provides clear direction for our improvement efforts. In contrast, when we perform assessment for "them," we try our best to hide bad news and often put a positive face on the bad news that we can’t hide.

When we perform assessment for "us" we do our best to create valid and reliable assessments but don’t let the technical details, particularly when they are not up to exacting research standards, derail our efforts. When we perform assessment for "them," if there is any deviation from strict standards for validity, reliability, norming group selection, sampling approach, testing procedures or scoring techniques, we are quick to dismiss the results, particularly when they are unfavorable.

We know the "us" -- faculty members, students, department chairs, deans -- and we know how to talk about what goes on at our institution with each other. Even amid the great diversity of institutions we often find a common core of experience and discover that we speak each other’s language.

But the "them" is largely a mystery. We may have some guesses about the groups that make up "them" -- parents, boards of regents, taxpayers, legislatures -- but we cannot be sure because accountability is usually described generically, not specifying any particular group, and because our interaction with any of these groups is limited or nonexistent.

When we perform assessment for "us," we operate under a known set of possible consequences. Some of these consequences could be severe, such as a budget reduction or a reprimand from our superior, but in general the possible consequences are a known and acceptable risk.

When we perform assessment for "them," the consequences are much more terrifying because we do not control who uses these data or the purposes of their use. One of the uses of assessment for "them" is for accreditation, which can bring particularly negative consequences. We wake up in the middle of the night with visions of newspaper headlines publicly disclosing our poor performance.

At best, this would bring years of embarrassment and shame that would hang over our heads like the cloud of dust that follows Charles Schulz’s Pig-Pen. At worst, we face losing accreditation and having our school labeled a “diploma mill,” making our students ineligible for federal student aid and prompting a mass exodus from our institution. Assessment for "them" brings high levels of risk and low levels of reward.

Finding the balance between assessment for "us" and assessment for "them" is a significant challenge, one made more uncertain as the Department of Education pursues negotiated rulemaking and as the Higher Education Act comes up for renewal in Congress. It can feel a bit like the Eliminator challenge on the television game show "American Gladiators," which had contestants navigating a balance beam while Gladiators attempted to knock them off with swinging medicine balls. There have, however, been a number of efforts by university systems and by individual institutions to find ways to balance assessment for "us" with assessment for "them."

The State University of New York (SUNY) Assessment Initiative seeks to strike a balance between assessment for "us," or assessment for “improvement,” and assessment for "them," or assessment for “accountability.” The SUNY Assessment Initiative can be divided into two parts: assessment of general education and assessment within academic majors.

For assessment of general education, SUNY first developed a set of learning outcomes for general education programs at undergraduate degree-granting institutions. All SUNY institutions are required to use “externally referenced measures” to determine whether their students are achieving in the areas of Critical Thinking, Basic Communication and Mathematics. To keep this approach in balance, however, the Assessment Initiative does not require all institutions to use the same measure. Rather, institutions can select from nationally normed exams or rubrics developed by a panel, choosing the measures that best represent their mission within the state. This approach holds institutions accountable for demonstrating student achievement in foundational areas, but the results will not be used to “punish, publicly compare, or embarrass students, faculty, courses, programs, departments or institutions either individually or collectively,” according to a description of the program.

Institutions are also required to perform local assessment of their general education programs. Institutions are held accountable for attending to the process of assessment -- examining student learning on specific objectives and making decisions about ways to improve based on those data -- by an external group called the General Education Assessment Review (GEAR) group. GEAR, composed primarily of faculty members from SUNY institutions, reviews and approves campus assessment plans but not the actual assessment outcomes. In this way, SUNY documents say, “emphasis is placed on assessment best practice without introducing an element of possible defensiveness campuses might feel if their assessment program does not yield evidence to support optimal student learning.”

At the institutional level, Colorado State University and the University of Nebraska-Lincoln partnered to implement within their institutions the Plan for Researching Improvement and Supporting Mission (PRISM) and Program Excellence through Assessment, Research and Learning (PEARL), respectively. PRISM and PEARL engage faculty members in assessment of the academic major -- assessment for "us." Faculty members select learning outcomes that are important for students in that major, assess student learning on those outcomes and then make improvements to their program based on those data. A panel of faculty members at each institution holds the academic majors accountable by reviewing assessment plans and encouraging the use of higher-quality assessment practices.

To balance assessment for "us" with assessment for "them," PRISM and PEARL use an online software system that classifies academic major assessment activity so it can be aggregated at higher levels. In this way the institutions can describe the kind of learning that is going on within the institution, the assessment instruments being used to examine that learning and the improvement activities undertaken in response to the assessment data.

The SUNY Assessment Initiative and the PRISM and PEARL approaches balance assessment for "us" and assessment for "them" by demonstrating a commitment to student learning, not by achieving benchmark scores on a specific assessment or by earning a particular ranking. In both of these examples participants are held accountable for engaging in the process of assessing student learning, a process that is reviewed for best practices by an external panel.

Dan Aykroyd and the members of Secretary Spellings’ Commission on the Future of Higher Education are correct in expecting “results.” If discussions of how to demonstrate these “results” continue to emphasize narrow and prescriptive assessment for "them," institutions will face large amounts of work, risk and agony for little benefit. If, however, assessment for "them" can be about demonstrating a commitment to student learning and being accountable for a process, then institutions will be able to place their time and energy where it belongs: with the students.
