Today, leaders of colleges and universities of every size and focus struggle to demonstrate the true value of their institutions to students, educators and the greater community, because they cannot actually prove that students are learning.
Most use some type of evaluation or assessment mechanism to keep “the powers that be” happy: earnest narratives about goals and findings, interspersed with high-level data tables and colorful bar charts. This is not, however, scientific, campuswide assessment of student learning outcomes aimed at the valid measurement of competency.
The “Grim March” and the Meaning of Assessment
Campuswide assessment efforts rarely involve the rigorous, scientific inquiry about actual student learning that is aligned from program to program and across general education. Instead, year after year, the accreditation march has trudged grimly on, its participants working hard to produce a plausible picture of high “satisfaction” for the whole, very expensive endeavor.
For the past 20-plus years, the primary source of evidence for a positive impact of instruction has come from tools like course evaluation surveys. Institutional research personnel have diligently combined, crunched and correlated this data with other mostly indirect measures such as retention, enrollment and grade point averages.
Attempts are made to produce triangulation with samplings of alumni and employer opinions about the success of first-time hires. All of this is called “institutional assessment,” but it does not produce statistical evidence from direct measurement that the university is responsible for students’ skill sets because of the instruction it delivered. Research measurement methods such as chi-square tests or inter-rater reliability analysis, combined with a willingness to assess across the institution, can demonstrate that a change in student learning over time is statistically significant and is the result of soundly delivered curriculum. This is the kind of “assessment” the world at large wants to know about.
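To make the statistical claim concrete, here is a minimal sketch of the kind of direct-measurement analysis described above: a chi-square test of independence comparing rubric score distributions for two student cohorts. All cohort names, rubric levels and counts are hypothetical, invented purely for illustration; a real study would use the institution's own rubric data.

```python
# Illustrative chi-square test of independence (all data hypothetical).
# Rows: two cohorts assessed before and after a curriculum change.
# Columns: counts of students at each rubric level
# (Beginning, Developing, Proficient).

def chi_square_statistic(table):
    """Return the chi-square statistic for a 2-D contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

observed = [
    [30, 45, 25],  # earlier cohort: counts at each rubric level
    [15, 40, 45],  # later cohort, after a curriculum change
]

stat = chi_square_statistic(observed)
df = (len(observed) - 1) * (len(observed[0]) - 1)  # df = (rows-1)*(cols-1)
critical_value = 5.991  # chi-square critical value for df=2, alpha=0.05

print(f"chi-square = {stat:.2f}, df = {df}")
if stat > critical_value:
    print("The shift in rubric scores is statistically significant at 0.05.")
else:
    print("No statistically significant shift detected.")
```

A result above the critical value says only that the two distributions differ beyond chance; attributing that difference to instruction still requires the kind of sound study design the article calls for.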
The public is not satisfied with inferentially derived evidence. Given the cost, they yearn to know if their sons and daughters are getting better at things that matter to their long-term success. Employers routinely stoke this fire by expressing doubt about the out-of-the-box skills of graduates.
Who Owns Change Management?
Whose responsibility is it to redirect the march to provide irrefutable reports that higher education is meeting the needs of all its stakeholders? Accreditors now wring their hands and pronounce that reliance on indirect measures will no longer suffice. They punish schools with orders to fix the shortfalls in the assessment of outcomes and dole out paltry five-year passes until the next audit. They will not, however, provide sound, directive steps for the marchers about how to systematically address learning outcomes.
How about the government? The specter of more third-party testing is this group’s usual response. They did it to K-12 and it has not worked there either. Few would be happy with that center of responsibility.
Back to the campus. To be fair, institutional research (IR) and institutional effectiveness offices have been reluctant to get involved with direct measures of student performance, for good reasons. Culture dictates that such measures belong to program leaders and faculty; the traditions and rules of “academic freedom” somehow demand this. The problem is that faculty and program leaders, while genuine content experts, are no more versed in effective assessment of student outcomes than anyone else on campus.
This leaves us with campus leaders who have long suspected something is very wrong or at least misdirected. To paraphrase one highly placed academic officer, “We survey our students and a lot of other people and I’m told that our students are ‘happy.’ I just can’t find anyone who can tell me for sure if they’re ‘happy-smarter’ or not!” Their immersion in the compliance march does not give them much clue about what to do about the dissonance they are feeling.
The Assessment Renaissance
Still, the smart money is on higher ed presidents first and foremost, supported by their provosts and other chief academic officers. If there is to be deep change in the current culture, they are the only ones with the proximal power to make it happen. Most of them have already declared that “disruption” in higher education is now essential.
Leaders looking to end the walking-dead assessment march in a systematic way need to:
- Disrupt. This requires a college or university leader to see beyond the horizon and understand the long-term objective. It doesn’t mean they need to have all the ideas or proper procedures, but they must have the vision to be a leader and a disrupter, and they must demand change on a realistic but short timetable.
- Get Expertise. Outcomes/competency-based assessment has been a busy field of study over the past half-decade. Staff development and helping hands from outside the campus are needed.
- Rally the Movers and Shakers. In almost every organization, there are leaders without ascribed power whose drive is undeniable. They are the innovators and the early adopters. Enlist them as co-disruptors. On every campus there are faculty and staff who will take risks for the greater good of assessment and challenge the very fabric of institutional assessment. Gather them together and give them the resources, the authority and the latitude to get the job done. Defend them. Cheerlead at every opportunity.
- Change the Equation. Change the conversation from GPAs and satisfaction surveys to one essential, unified goal: are students really learning, and how can a permanent change in behavior be measurably demonstrated?
- Rethink Your Accreditation Assessment Software. Most accreditation software systems rely on processes that are narrative rather than systematic, data-driven inquiry. Universities are full of people who research for a living. Give them tools (yes, like Chalk & Wire, which my company provides) to investigate learning and thereby rebuild a systematic approach to improving competency.
- Find the Carrots. Assume a faculty member in engineering is going to publish. Would a research-based study about teaching and learning in their field count toward rank and tenure? If disruption is the goal, then the correct answer is yes.
Assessment is complex, but it’s not complicated. Stop the grim march. Stand still for a time. Think about learning and what assessment really means, and then pick a new, proactive direction to travel with colleagues.