It was exactly 10 years ago that I ended my year as director of the Assessment Forum at the old American Association for Higher Education. Over these 10 years I’ve done countless workshops and presentations on assessing student learning, and I've seen a real change in their focus. Ten years ago most of my workshops were what I call "Assessment 101": getting started with assessment. Today, most people seem to understand the basics, and more people are doing assessment, not just talking about it or creating a plan to do it. The arguments against doing assessment — and the hope of some that this is a fad that will go away soon — are fading. People increasingly recognize that accreditation standards for assessment are reasonable and appropriate, especially when compared with some alternatives such as those proposed by the Spellings Commission a few years ago.
And more and more people and organizations are getting into the assessment game, providing us with much-needed scholarship and support. A decade ago books on student learning assessment were relatively scarce, but today there’s a wealth of excellent resources:
- We now have a number of intriguing published instruments, although for many of them evidence of their quality and value remains a work in progress.
- Assessment database systems — whether locally developed or commercial — can now make it easier to organize and make sense of the information we’re collecting.
- A decade ago, many philanthropies stopped funding research to improve higher education, because they saw little commitment to reform within the American higher education community. Today a number of important foundations are back in the game, and many of their grants focus on either assessment or ways to use assessment.
- The work of the Association of American Colleges and Universities has advanced us light-years in our capacity to understand and assess our general education and liberal education curriculums. The "Greater Expectations" report, LEAP goals, and VALUE rubrics have been particularly noteworthy achievements.
- Bob Mundhenk has initiated the Association for the Assessment of Learning in Higher Education, our first national organization for assessment practitioners.
- The New Leadership Alliance for Student Learning and Accountability, helmed by David Paris, is developing standards for excellence in assessment practice.
- The National Institute for Learning Outcomes Assessment, led by Stan Ikenberry, Peter Ewell, and George Kuh, has delivered a number of significant research papers on assessment practices.
- And, thanks to the research of Trudy Banta, Karen Black, Beth Jones, and others, we’re starting to see evidence that, yes, assessment can lead to improved teaching and learning.
So today many of us are now sitting on quite a pile of assessment data and information. Most of my workshops now focus not on getting started with assessment but on understanding and using the information that’s been collected.
Amid all this progress, however, we seem to have lost our way. Too many of us have focused on the route we’re traveling: whether assessment should be value-added; the improvement versus accountability debate; entering assessment data into a database; pulling together a report for an accreditor. We’ve been so focused on the details of our route that we’ve lost sight of our destination. As a result, we’re spending too much time and effort going off on side roads, dealing with roadblocks, and sometimes even going in circles.
Our destination, which is what we should be focusing on, is the purpose of assessment. Over recent decades, we've consistently talked about two purposes of assessment: improvement and accountability. The thinking has been that improvement means using assessment to identify problems — things that need improvement — while accountability means using assessment to show that we're already doing a great job and need no improvement. A great deal has been written about the need to reconcile these two seemingly disparate purposes.
Framing assessment's purpose as this dichotomy has always troubled me. It divides us, and it confuses a lot of our colleagues. We need to start viewing assessment as having common purposes that everyone — faculty, administrators, accreditors, government policymakers, employers, and others — can agree on.
The most important purpose of assessment should be not improvement or accountability but their common aim: everyone wants students to get the best possible education. Everyone wants them to learn what’s most important. A college’s mission statement and goals are essentially promises that the college is making to its students, their families, employers, and society. Today’s world needs people with the attributes we promise. We need skilled writers, thinkers, problem-solvers and leaders. We need people who are prepared to act ethically, to help those in need, and to participate meaningfully in an increasingly diverse and global society. Imagine what the world would be like if every one of our graduates achieved the goals we promise them! We need people with those traits, and we need them now. Assessment is simply a vital tool to help us make sure we fulfill the crucial promises we make to our students and society.
Too many people don’t seem to understand that simple truth. As a result, today we seem to be devoting more time, money, thought, and effort to assessment than to helping faculty help students learn as effectively as possible. When our colleagues have disappointing assessment results and don’t know what to do to improve them:
- I wonder how many have been made aware that, in some respects, we are living in a golden age of higher education, coming off a quarter-century of solid research on practices that promote deep, lasting learning.
- I wonder how many are pointed to the many excellent resources we now have on good teaching practices, including books, journals, conferences and, increasingly, teaching-learning centers right on campus.
- I wonder how many of the graduate programs they attended included the study and practice of contemporary research on effective higher education pedagogies.
No wonder so many of us are struggling to make sense of our assessment results! Too many of us are separating work on assessment from work on improving teaching and learning, when they should be two sides of the same coin. We need to bring our work on teaching, learning, and assessment together. We need organizations, conferences, publications, and grant funding on the triumvirate of teaching, learning, and assessment, not just teaching and learning or just assessment.
But even if we help faculty learn about research-informed pedagogies, do they have meaningful incentives to use them? Providing students with the best possible education often means changing what we do, and that means time and work. Much of the higher education community has no real incentive to change how we help students learn. And if there's little incentive to change or be innovative, there’s little reason to assess how well we're keeping our promises.
Our second common purpose of assessment should be making sure not only that students learn what’s important, but that their learning is of appropriate scope, depth, and rigor. Doug Eder frames this by suggesting three questions that we should answer through assessment:
- What have our students learned?
- Are we satisfied with what they’ve learned?
- If not, what are we doing about it?
What I’m talking about here is Doug’s second question: Are we satisfied with what our students have learned? In short, what’s good enough?
This is an incredibly difficult question to answer, and thus one that many of us have been avoiding. It’s a big reason why we’re seeing assessment results pile up and not get used. We may know that students average 3.4 on a 5-point rubric or score at the 68th percentile on a national exam, but too often we have no idea whether or not these results are good enough.
In order to decide whether our results are indeed good enough, we need to think about assessment results in new ways. First, we need to understand that assessment results — or indeed any numbers — have meaning only when we compare them against some kind of appropriate target or benchmark. So far I've seen too little discussion on how to set such targets, other than sweeping oversimplifications such as "assessments must always yield comparable results" or "assessments must always be value-added." In truth, there are many ways to set targets — at least 10, by my count. Each approach has pros and cons, and none is a panacea, appropriate for every situation.
Second, we need to move beyond navel-gazing. Yes, we are each proud of how much we expect of our students, and it’s easy to feel offended when our professional judgment is challenged. But a reality today, whether we like it or not, is that we are faced with a lack of trust. Big chunks of society no longer trust government, financial institutions, or charities. So it shouldn’t be surprising that some government policymakers and employers don’t trust us to provide an appropriately rigorous education. And we don’t always trust one another, such as when students transfer between colleges.
So the days of saying student work is good or bad based solely on our own private judgment are over. Today we need externally informed targets or standards that we can justify as appropriately rigorous. We need to consult more with others — employers, graduate programs, disciplinary associations, perhaps colleagues at peer institutions — about the knowledge and skills they expect from our graduates and the degree of scope, depth, and rigor they expect. Meaningful change will not come without broad conversations about what a degree means, along with recognition that a tenet of American higher education is that one size does not fit all.
Third, we need to accept how good we already are, so we can recognize success when we see it. We in the higher education community are so bright, so driven, so analytical, and so self-critical that we think anything less than perfection is failure. On the other hand, if we get anything close to perfection, we think that something must be wrong — the assessment is flawed or our standards are too low. This way lies madness.
Because we don't recognize our successes ourselves, we keep their light under the proverbial bushel. We don't yet share with employers and government policymakers systematic, clear, and convincing evidence of how effective we are and what we’re doing to be even more effective.
And we haven’t figured out a way to tell the story of our effectiveness in 25 words or less, which is what busy people want and need. Yes, some of us are starting to post some numbers publicly, but numbers need to be put into context and translated into information in order to have meaning. Yes, we brag about our award-winning student math team or star alumni. But today’s parents, employers, and government policymakers are savvier consumers, so those anecdotes don’t work anymore. Today people want and need to know not about our star math students but how successful we are with our run-of-the-mill students who struggle with math.
Because we're not telling the stories of our successful outcomes in simple, understandable terms, the public continues to define quality using the outdated concept of inputs like faculty credentials, student aptitude, and institutional wealth — things that by themselves don’t say a whole lot about student learning.
And people like to invest in success. Because the public doesn't know how good we are at helping students learn, it doesn't yet give us all the support we need in our quest to give our students the best possible education.
Our third common purpose of assessment is something we don't want to talk about, but it’s a reality that isn’t going away: how we spend our money. Actually, it's not our money. Every college and university is simply a steward of other people's money: tuition from our students and their families, funds from taxpayers, gifts from donors, grants from foundations. As stewards, we have an obligation to use our resources prudently, in ways that we are reasonably sure will be both successful and cost-effective. Here again, assessment is simply a vital tool to help us do this.
But while virtually every college and university has had to make draconian budget cuts in the last couple of years, with more to come, I wonder how many are using solid, systematic evidence — including assessment evidence — to inform those decisions.
For example, when class sizes are increased, are those increases based on evidence on how class size affects learning? When classes are moved online, do those transitions flow from evidence of online teaching practices that promote learning? When student support programs are cut back, are those decisions informed by evidence of the impact of the programs on student success? When academic programs are trimmed, do those decisions flow from evidence of student learning as well as costs?
We need to refocus our assessment work not only on making sure students get the best possible education but also on improving our cost-effectiveness in doing so. We can’t afford to spend a dime on anything unless we have evidence that the dime will be effectively spent. We can't afford to cut a dime without evidence of the impact of the cut on student learning and success. We need, more than ever, a culture of evidence-informed planning and decision-making.
And that includes looking at the cost-effectiveness of assessment itself. As we invest more and more time and money into assessment work, assessment instruments, assessment data systems, and so on, we need to ask whether these expenditures are giving us enough value to be worth the investment of our scarce resources.
So before we start another assessment cycle, we need to sit back and reflect, starting with my favorite assessment question, “Why?” Why are we assessing this particular goal and not others? Why do we think this particular goal is so important? Why did we choose this particular assessment strategy? How has it been helpful? And has its value been in proportion to the time and money we’ve spent on it?
Yes, we have accomplished a tremendous amount in the last decade, and we have so much to be proud of. But we are not yet at our destination.
Now is the time to bring these three common purposes of assessment to the forefront. In order to tackle them, we need to work as a community, with greater and broader dialogue and collaboration than we see now. Now is the time to move our focus from the road we are traveling to our destination: a point at which we all are prudent, informed stewards of our resources… a point at which we each have clear, appropriate, justifiable, and externally informed standards for student learning. Most importantly, now is the time to move our focus from assessment to learning, and to keeping our promises. Only then can we make higher education as great as it needs to be.