Assessment

CIC/Collegiate Learning Assessment Consortium Meeting
Date: Mon, 08/01/2011 to Tue, 08/02/2011
Location: Pittsburgh, Pennsylvania, United States

Assessment Training and Research Institute
Date: Wed, 05/11/2011 to Fri, 05/13/2011
Location: Tallahassee, Florida, United States

Conference on Innovative Assessment Practices for Colleges and Universities
Date: Tue, 05/17/2011 to Wed, 05/18/2011
Location: Dallas, Pennsylvania, United States

Toward A Science of Learning

In travels around the country, I’ve been seeing signs of a trend in higher education that could have profound implications: a growing interest in learning about learning. At colleges and universities that are solidly grounded in a commitment to teaching, groups of creative faculty are mobilizing around learning as a collective, and intriguing, intellectual inquiry.

This trend embraces the advances being made in the cognitive sciences and the study of consciousness. It resides in the fast-moving world of changing information technology and social media. It recognizes and builds upon new pedagogies and evolving theories of multiple ways of knowing and learning. It encompasses but transcends the evolution of new and better measures of student learning outcomes.

As more and more institutions sign on to administer the National Survey of Student Engagement and the Collegiate Learning Assessment, some see the resulting data as sufficient to close the books on the question of student learning, while others see them as no more than a rudimentary beginning. The advent of new instruments reflects in part the desire to unseat the commercial rating systems that wield enormous influence despite their well-known shortcomings and distortions. The new measurement regimes are responding, as well, to demands from accrediting and regulatory agencies for convincing data on "value-added educational outcomes." But educators know that assessing what students have learned is far less valuable than finding out how they learn.

Uri Treisman’s landmark study at Berkeley a quarter century ago validated this proposition. He compared how students of African and Chinese descent learned calculus, used the findings to export successful strategies from one group to the other, and evaluated the results. Richard Light’s studies at Harvard carry on the Treisman tradition.

Efforts to identify fruitful points of intervention in the classroom and in co-curricular offerings are picking up steam, importing into the councils of higher education -- and strengthening -- a line of educational research that had been largely overlooked by faculty and administrators whose disciplinary allegiances were with the liberal arts and sciences, not the study of pedagogical practice. A number of foundations, notably Teagle, Spencer, and Mellon, are funding empirical studies that are uniting these worlds. The Carnegie Foundation for the Advancement of Teaching has been a leading voice in this conversation for many years as, more recently, has the Association of American Colleges and Universities.

Faculty at Indiana University have since 1998 been fostering interdisciplinary communities for innovative course-focused research to improve undergraduate learning, and exporting the work through conferences of a growing International Society for the Scholarship of Teaching and Learning. Georgetown’s Center for New Design in Learning and Scholarship is hosting cutting-edge events to feed faculty interest in the scholarship of teaching and learning. John Seely Brown, former chief scientist at Xerox, has been exploring the edges of this new field, drawing, for example, on Polanyi’s distinction between "learning about" and "learning to be," activities that take place in iterative cycles ("I get stuck; I need to know more"). "Learning about" involves explicit knowledge; "learning to be" is more tacit: sensing an interesting question, feeling the rightness of an elegant solution. In this new digital age, he observes, we can now easily enable the "socially-constructed understanding" that fuels those cycles of being stuck and learning more through "interactions with others and the world."

"Something is in the air," adds Michael Wesch in a YouTube video that has been watched by over three million viewers. He’s standing in an old-fashioned auditorium at Kansas State University and the "something" that all teachers have no choice now but to reckon with is all of human knowledge instantly available to all students through their wi-fi connections. The pioneers on this new frontier are pursuing novel learning technologies that can be harnessed in the service of greater intellectual connection between students and faculty, enhanced student learning, less drudgery, more creativity, more freedom and more joy for students and faculty alike. Clay Christensen warns that if we fail at this task, "disruptive technologies" will do it for us, and eat our lunch.

Where might this lead? If groups of faculty were to think deeply and systematically over a number of years about student learning and student success, they could create for their own institutions and the wider field a more robust evidence-based culture of learning, a “science of improvement,” as groups of medical leaders are advancing for their profession.

An effort like this at one institution would require the gradual creation of highly intentional learning (not teaching) cultures, with explicit cycles of improvement in place throughout the college or university, starting with academic departments and working up from there. The results would be widely discussed by everyone: faculty, students, staff, trustees. Over time, and without much fanfare, they would influence hiring decisions and criteria for promotion and other rewards. Resources would be re-allocated to activities that were demonstrably advancing student learning in the context (not in lieu) of serious disciplinary scholarship.

This work would necessarily be multidisciplinary, iterative, and methodologically inventive and yet tight. It would come over time to define an inquisitive and ambitious learning community. The findings would not be available for use as a punitive club to force accountability to the state or federal government or to other external groups. Pressure for accountability must not be allowed to confound and corrupt the assessment and continuous improvement of learning outcomes.

I know that this essay is loaded with fighting words. But I believe we need, and are now beginning to see, ways to reframe the problem of learning outcomes, ways that might galvanize positive energy and support within a faculty. Imagine “the administration” saying to faculty, in effect: We want you to be learning all you can about who your students are now, how they learn, and what they need to know in order to be successful in a world that is changing faster than we can imagine, much less anticipate. And we want you to have the resources and collegial connections you will need to make the pursuit of that question an exciting and fruitful complement to your scholarship. Learning science is producing stunning advances that need translation before they can be brought successfully into classrooms, findings and possibilities that at least some faculty might find inherently fascinating if they were approached in the right way and offered a supportive culture with meaningful incentives, rewards, and scholarly payoffs.

More than a decade ago, at Wellesley, I watched a group of faculty from several liberal arts colleges, with Trinity in the lead, take up the issue of how to close the academic achievement gap, an issue brought to attention by Bill Bowen and Derek Bok in The Shape of the River and one about which faculty cared deeply, an institutional failure they felt keenly as their responsibility. They found allies in their own and other institutions and created an organization (Consortium on High Achievement and Success), a collaborative learning group that invented an emergent process, adjusting as they went. They assembled data; consulted experts they could respect; found local champions in their own institutions and raised up their work; sought out promising strategies in other institutions; listened to their students’ accounts of challenges they were facing and developed student partnerships to address those issues. They pooled knowledge, shared data, assembled resources, designed honest conversations and entered them with inquiring minds. The element that was missing then was systematic research: testing pilot initiatives and developing intervention studies. Without solid research it’s impossible to know what really works. The learning initiative I have in mind would need to build this in from the start. But, first and foremost, it would have to be rooted, as was CHAS, in the belief among a group of faculty that their students could be better served.

I’m convinced that some faculty could become absorbed in a sophisticated intellectual collaboration to learn about learning. Throughout higher education, we fret about unsound expenditures we know are driven by crude rating systems and the fierce competitive dynamic they fuel. We are not going to eliminate competition between institutions of higher learning, even if we wanted to, which we probably don’t. But could we conceivably change the terms of the competition, put learning rather than amenities at the center of the arms race, spend less on making students more and more comfortable at college and more on making them more and more curious?

Now there’s a question worth asking.

Diana Chapman Walsh served as president of Wellesley College from 1993 to 2007.

General Education and Assessment 3.0: Next-Level Practices Now
Date: Thu, 03/03/2011 to Sat, 03/05/2011
Location: Chicago, Illinois, United States

11th Annual Texas A&M Assessment Conference
Date: Sun, 02/20/2011
Location: College Station, Texas, United States

RosEvaluation Conference 2011
Date: Sun, 04/17/2011 to Tue, 04/19/2011
Location: Terre Haute, Indiana, United States

Why Are We Assessing?

It was exactly 10 years ago that I ended my year as director of the Assessment Forum at the old American Association for Higher Education. Over these 10 years I’ve done countless workshops and presentations on assessing student learning, and I've seen a real change in their focus. Ten years ago most of my workshops were what I call "Assessment 101": getting started with assessment. Today, most people seem to understand the basics, and more people are doing assessment, not just talking about it or creating a plan to do it. The arguments against doing assessment — and the hope of some that this is a fad that will go away soon — are fading. People increasingly recognize that accreditation standards for assessment are reasonable and appropriate, especially when compared with some alternatives such as those proposed by the Spellings Commission a few years ago.

And more and more people and organizations are getting into the assessment game, providing us with much-needed scholarship and support. A decade ago books on student learning assessment were relatively scarce, but today there’s a wealth of excellent resources:

  • We now have a number of intriguing published instruments, although for many of them evidence of quality and value remains a work in progress.
  • Assessment database systems — whether locally developed or commercial — can now make it easier to collect and make sense of assessment information.
  • A decade ago, many philanthropies stopped funding research to improve higher education, because they saw little commitment to reform within the American higher education community. Today a number of important foundations are back in the game, and many of their grants focus on either assessment or ways to use it.
  • The work of the Association of American Colleges and Universities has advanced us light-years in our capacity to understand and assess our general education and liberal education curriculums. The "Greater Expectations" report, LEAP goals, and VALUE rubrics have been particularly noteworthy achievements.
  • Bob Mundhenk has initiated the Association for the Assessment of Learning in Higher Education, our first national organization for assessment practitioners.
  • The New Leadership Alliance for Student Learning and Accountability, helmed by David Paris, is developing standards for excellence in assessment practice.
  • The National Institute for Learning Outcomes Assessment, led by Stan Ikenberry, Peter Ewell, and George Kuh, has delivered a number of significant research papers on assessment practices.
  • And, thanks to the research of Trudy Banta, Karen Black, Beth Jones, and others, we’re starting to see evidence that, yes, assessment can lead to improved teaching and learning.

So today many of us are now sitting on quite a pile of assessment data and information. Most of my workshops now focus not on getting started with assessment but on understanding and using the information that’s been collected.

Amid all this progress, however, we seem to have lost our way. Too many of us have focused on the route we’re traveling: whether assessment should be value-added; the improvement versus accountability debate; entering assessment data into a database; pulling together a report for an accreditor. We’ve been so focused on the details of our route that we’ve lost sight of our destination. As a result, we’re spending too much time and effort going off on side roads, dealing with roadblocks, and sometimes even going in circles.

Our destination, which is what we should be focusing on, is the purpose of assessment. Over the last decades, we've consistently talked about two purposes of assessment: improvement and accountability. The thinking has been that improvement means using assessment to identify problems — things that need improvement — while accountability means using assessment to show that we're already doing a great job and need no improvement. A great deal has been written about the need to reconcile these two seemingly disparate purposes.

Framing assessment's purpose as this dichotomy has always troubled me. It divides us, and it confuses a lot of our colleagues. We need to start viewing assessment as having common purposes that everyone — faculty, administrators, accreditors, government policymakers, employers, and others — can agree on.

The most important purpose of assessment should be not improvement or accountability but their common aim: everyone wants students to get the best possible education. Everyone wants them to learn what’s most important. A college’s mission statement and goals are essentially promises that the college is making to its students, their families, employers, and society. Today’s world needs people with the attributes we promise. We need skilled writers, thinkers, problem-solvers and leaders. We need people who are prepared to act ethically, to help those in need, and to participate meaningfully in an increasingly diverse and global society. Imagine what the world would be like if every one of our graduates achieved the goals we promise them! We need people with those traits, and we need them now. Assessment is simply a vital tool to help us make sure we fulfill the crucial promises we make to our students and society.

Too many people don’t seem to understand that simple truth. As a result, today we seem to be devoting more time, money, thought, and effort to assessment than to helping faculty help students learn as effectively as possible. When our colleagues have disappointing assessment results and don’t know what to do to improve them:

  • I wonder how many have been made aware that, in some respects, we are living in a golden age of higher education, coming off a quarter-century of solid research on practices that promote deep, lasting learning.
  • I wonder how many are pointed to the many excellent resources we now have on good teaching practices, including books, journals, conferences and, increasingly, teaching-learning centers right on campus.
  • I wonder how many of the graduate programs they attended include the study and practice of contemporary research on effective higher education pedagogies.

No wonder so many of us are struggling to make sense of our assessment results! Too many of us are separating work on assessment from work on improving teaching and learning, when they should be two sides of the same coin. We need to bring our work on teaching, learning, and assessment together. We need organizations, conferences, publications, and grant funding on the triumvirate of teaching, learning, and assessment, not just teaching and learning or just assessment.

But even if we help faculty learn about research-informed pedagogies, do they have meaningful incentives to use them? Providing students with the best possible education often means changing what we do, and that means time and work. Much of the higher education community has no real incentive to change how we help students learn. And if there's little incentive to change or be innovative, there’s little reason to assess how well we're keeping our promises.

Our second common purpose of assessment should be making sure not only that students learn what’s important, but that their learning is of appropriate scope, depth, and rigor. Doug Eder frames this by suggesting three questions that we should answer through assessment:

  1. What have our students learned?
  2. Are we satisfied with what they’ve learned?
  3. If not, what are we doing about it?

What I’m talking about here is Doug’s second question: Are we satisfied with what our students have learned? In short, what’s good enough?

This is an incredibly difficult question to answer, and thus one that many of us have been avoiding. It’s a big reason why we’re seeing assessment results pile up and not get used. We may know that students average 3.4 on a 5-point rubric or score at the 68th percentile on a national exam, but too often we have no idea whether or not these results are good enough.

In order to decide whether our results are indeed good enough, we need to think about assessment results in new ways. First, we need to understand that assessment results — or indeed any numbers — have meaning only when we compare them against some kind of appropriate target or benchmark. So far I've seen too little discussion on how to set such targets, other than sweeping oversimplifications such as "assessments must always yield comparable results" or "assessments must always be value-added." In truth, there are many ways to set targets — at least 10, by my count. Each approach has pros and cons, and none is a panacea, appropriate for every situation.

Second, we need to move beyond navel-gazing. Yes, we are each proud of how much we expect of our students, and it’s easy to feel offended when our professional judgment is challenged. But a reality today, whether we like it or not, is that we are faced with a lack of trust. Big chunks of society no longer trust government, financial institutions, or charities. So it shouldn’t be surprising that some government policymakers and employers don’t trust us to provide an appropriately rigorous education. And we don’t always trust one another, such as when students are transferring between colleges.

So the days of saying student work is good or bad based solely on our own private judgment are over. Today we need externally informed targets or standards that we can justify as appropriately rigorous. We need to consult more with others — employers, graduate programs, disciplinary associations, perhaps colleagues at peer institutions — about the knowledge and skills they expect from our graduates and the degree of scope, depth, and rigor they expect. Meaningful change will not come without broad conversations about what a degree means, along with recognition that a tenet of American higher education is that one size does not fit all.

Third, we need to accept how good we already are, so we can recognize success when we see it. We in the higher education community are so bright, so driven, so analytical, and so self-critical that we think anything less than perfection is failure. On the other hand, if we get anything close to perfection, we think that something must be wrong — the assessment is flawed or our standards are too low. This way lies madness.

Because we don't recognize our successes ourselves, we keep their light under the proverbial bushel. We don't yet share with employers and government policymakers systematic, clear, and convincing evidence of how effective we are and what we’re doing to be even more effective.

And we haven’t figured out a way to tell the story of our effectiveness in 25 words or less, which is what busy people want and need. Yes, some of us are starting to post some numbers publicly, but numbers need to be put into context and translated into information in order to have meaning. Yes, we brag about our award-winning student math team or star alumni. But today’s parents, employers, and government policymakers are savvier consumers, so those anecdotes don’t work anymore. Today people want and need to know not about our star math students but how successful we are with our run-of-the-mill students who struggle with math.

Because we're not telling the stories of our successful outcomes in simple, understandable terms, the public continues to define quality using the outdated concept of inputs like faculty credentials, student aptitude, and institutional wealth — things that by themselves don’t say a whole lot about student learning.

And people like to invest in success. Because the public doesn't know how good we are at helping students learn, it doesn't yet give us all the support we need in our quest to give our students the best possible education.

Our third common purpose of assessment is something we don't want to talk about, but it’s a reality that isn’t going away: it's how we spend our money. Actually, it's not our money. Every college and university is simply a steward of other people's money: tuition from our students and their families, funds from taxpayers, gifts from donors, grants from foundations. As stewards, we have an obligation to use our resources prudently, in ways that we are reasonably sure will be both successful and reasonably cost-effective. Here again, assessment is simply a vital tool to help us do this.

But while virtually every college and university has had to make draconian budget cuts in the last couple of years, with more to come, I wonder how many are using solid, systematic evidence — including assessment evidence — to inform those decisions.

For example, when class sizes are increased, are those increases based on evidence on how class size affects learning? When classes are moved online, do those transitions flow from evidence of online teaching practices that promote learning? When student support programs are cut back, are those decisions informed by evidence of the impact of the programs on student success? When academic programs are trimmed, do those decisions flow from evidence of student learning as well as costs?

We need to refocus our assessment work not only on making sure students get the best possible education but also on improving our cost-effectiveness in doing so. We can’t afford to spend a dime on anything unless we have evidence that the dime will be effectively spent. We can't afford to cut a dime without evidence of the impact of the cut on student learning and success. We need, more than ever, a culture of evidence-informed planning and decision-making.

And that includes looking at the cost-effectiveness of assessment itself. As we invest more and more time and money into assessment work, assessment instruments, assessment data systems, and so on, we need to ask whether these expenditures are giving us enough value to be worth the investment of our scarce resources.

So before we start another assessment cycle, we need to sit back and reflect, starting with my favorite assessment question, “Why?” Why are we assessing this particular goal and not others? Why do we think this particular goal is so important? Why did we choose this particular assessment strategy? How has it been helpful? And has its value been in proportion to the time and money we’ve spent on it?

Yes, we have accomplished a tremendous amount in the last decade, and we have so much to be proud of. But we are not yet at our destination.

Now is the time to bring these three common purposes of assessment to the forefront. In order to tackle them, we need to work as a community, with greater and broader dialogue and collaboration than we see now. Now is the time to move our focus from the road we are traveling to our destination: a point at which we all are prudent, informed stewards of our resources… a point at which we each have clear, appropriate, justifiable, and externally informed standards for student learning. Most importantly, now is the time to move our focus from assessment to learning, and to keeping our promises. Only then can we make higher education as great as it needs to be.

Linda Suskie is vice president of the Middle States Commission on Higher Education. This essay is adapted from her talk at the 2010 Assessment Institute.
