Although the Spellings Commission report has generated a lot of controversy in higher education circles, its ideas are hardly new. In fact, it might be viewed as a kind of summary of decades of criticism by a variety of stakeholders -- employers, government officials, accrediting bodies, even parents -- that higher education is not delivering the goods in terms of students’ learning and professional performance.
One dimension of the report that has received much attention is the notion that standardized testing can produce data on learning that would allow comparison across institutions. I share with other educators the concerns about the reliability, validity and relevance of these tests. But what’s most striking to me is that the rationale for using these measures is seldom discussed in terms of improving results for those most directly affected -- those whose voices are almost entirely absent from this discussion: the students themselves. How could we assess learning in a way that benefits individual students directly, that contributes to the improvement of their knowledge and skills, rather than merely testing across an institution, using measures that may or may not be valid, and hoping that in time improvements in learning will trickle down to the students?
Since 1989, I have been teaching philosophy at Alverno College, a women's college with an outcomes-based, developmental curriculum -- a curriculum where assessment happens from the ground up, where faculty see assessment as integral to teaching. Every day my colleagues and I give our students feedback on their performance in relation to very specific, faculty-designed outcomes for our courses, our programs, and the institution as a whole. Each student must demonstrate competence in eight core abilities in order to graduate, and she and her teachers carefully track her progress toward achieving these goals. The list of abilities adopted by the Alverno faculty several decades ago -- communication, analysis, problem solving, social interaction, valuing in decision-making, effective citizenship, developing a global perspective, and aesthetic engagement -- is very similar to the lists of core abilities adopted in institutions around the world, in response to the call for all students to be able to make effective use of what they have learned.
At Alverno, expectations for mastery of the abilities are integrated by faculty into course and program outcomes, so that, for example, when I teach philosophy and humanities I am also consciously teaching analytic skill and the ability to make ethical decisions based on an understanding of one’s own and others’ values. In practice, this means that when I teach Kant’s ethics, it is to give students theoretical tools to make their own ethical decisions, and for this purpose, I am more likely to have them explain Kant’s texts to one another than to lecture about Kant. The goal is to have them actively involved in coming to understanding, and to take responsibility for sharing their understanding with others. When I am assessing their learning, I ask them to apply Kant’s thinking to the resolution of an ethical issue, rather than merely checking what they have memorized with a multiple choice test.
As an Alverno faculty member, I can no longer imagine teaching without assessing, because for us to teach is to assess, continuously, what our students are learning, and what they can do with what they know. We assess in order to improve the learning process, to give each student, and groups of students, guidance for their learning. At this point in the life of our curriculum and our academic culture, if our accrediting body were to say, “You no longer have to go to the trouble of assessing student learning,” we would do it anyway.
In the Alverno curriculum, the continuous assessment of student performance produces data at all levels that can be -- and are -- used to make changes in course sequences, programs, and across the entire curriculum. When, for example, several years ago, the instructors of our intermediate communication seminar shared with one another their concerns that students were struggling to meet writing expectations, we examined the development of students’ writing in the three seminar courses. As a result, all the faculty involved in teaching the seminars -- from departments across the college -- decided to redesign the whole series. As someone who has accepted (as all my colleagues do) the responsibility for teaching communication in all my courses, what keeps me committed to “going to the trouble” of assessing student learning is that Alverno has a college-wide understanding of what constitutes effective communication -- and of all the other abilities -- and this shared understanding supports me in being a more effective teacher.
I want to emphasize this point: I benefit, as a teacher, from a college-wide system of assessment. When I give feedback to a student on her communication skills in an ethics course, I am reminding her that there are standards for effective communication, that she has come to understand what these are through her work in our curriculum, and that there are ways in which she can improve her performance in relation to the standards. Through revising her work in response to feedback, her ability to articulate what she understands about ethical theory and its application will improve -- she will learn ethics more effectively. The feedback I give to individual students in relation to course and program outcomes encourages their growth, and the observations I make of the patterns of their performance give me the evidence I need to improve my teaching. The mid-semester assessment, in which they make a reasoned judgment about an ethical issue I assign, gives them practice for the final assessment, in which they publicly share their reasoning and judgment about an ethical issue of their own choosing. At the same time, the mid-semester assessment gives me data about how well students have grasped the ethical theories we are exploring together, so I can make teaching adjustments to help students improve.
Now, there is a sense in which this is how all good teachers improve their teaching -- seeing whether and how students are learning and fine-tuning their teaching in response. The advantage of our curriculum is that the learning expectations are made explicit in every course at every level, so the process of fine-tuning is intentional, shared, and systematic, for students and faculty alike. The students experience the curriculum as coherent, developmental, and designed to support their learning. The faculty experience a shared sense of mission and mutual support for their efforts as educators, and act as faculty developers for one another, sharing effective pedagogy and assessment practices.
Our commitment to the assessment-as-learning curriculum is thus reinforced by the benefits we receive from working together as faculty to maintain it. This working together requires a different way of communicating than is typical in most colleges and universities. We meet several times a year as a whole faculty, and we meet frequently in cross-disciplinary groups to discuss the meaning of the core abilities and how best to teach and assess them. The work that we do to maintain and develop the curriculum is a significant factor in our tenure and promotion, which also strengthens our commitment and makes our efforts visible to one another.
The use of technology in assessment has also proved a benefit for both our faculty and students. Our students’ continuous learning progress is captured in an online Diagnostic Digital Portfolio. Each student has her own portfolio, to which she can upload work samples and self-assessments of her performance in relation to learning outcomes, while her faculty members upload feedback. The DDP provides a longitudinal view of each student’s progress, giving her the opportunity to look back to see how far she has come, and to look forward to set new goals. Over time her self-assessments become more sophisticated, and through them we see her take increasing responsibility for her learning. It is important to note that this technology only works for us because it is embedded in the teaching and assessment practices of the faculty; otherwise, the digital portfolio would be just a repository for documents.
Even with the technology, isn’t such a curriculum -- based on faculty-designed learning outcomes, assessments of student learning, and frequent, targeted feedback -- more work for faculty? Yes, clearly, in some ways it is, since the design and implementation of effective learning and assessment strategies take time. But my colleagues and I would say that the work is also more efficient: for the collaborative effort we put in, we receive much greater evidence of genuine and durable learning on the part of students. Rather than assessment being a process of gathering data for administrators who gather data for accrediting bodies, assessment is first and foremost for our students.
Is this approach to assessment compatible with providing data to our stakeholders about the effectiveness of the education we provide? We believe that it provides the best possible evidence: We explicitly state our learning goals, and we have the data to show our students are meeting them. We have made the philosophy and results of our work over the last several decades available to our higher education colleagues in Learning That Lasts: Integrating Learning, Development, and Performance in College and Beyond (Jossey-Bass, 2000). We have also shared our approach to student assessment-as-learning with universities, community and technical colleges, professional schools and K-12 schools, both nationally and internationally. These consortial and consultative conversations have demonstrated that student assessment-as-learning can be taken up by institutions of diverse missions and classifications, as long as faculty are willing to engage in the effort of making their learning expectations explicit, and are committed to making sure that students meet these expectations.
Is our approach to assessment consistent with using standardized measures of student learning? Yes, if the focus of these measures continues to be on the improvement of learning for our students. For a number of years, we have administered the National Survey of Student Engagement to our students. We are proud of the high marks our students have given their Alverno education for the diverse, challenging and supportive learning environment the college provides. The NSSE instrument measures what is very important to us -- students’ experience of their learning and their engagement in it -- and we have used the results to guide improvements in both advising processes and co-curricular life.
In an article in the Association of American Colleges and Universities’ Peer Review, “Can Assessment for Accountability Complement Assessment for Improvement?” Trudy Banta observed that across the country “some faculty in virtually every institution” are experimenting with the assessment of learning outcomes for its potential to improve student learning. She recommends that we look very carefully at the validity and reliability of standardized tests before adopting them wholesale. If we must compare student performance across institutions, then in those cases where institutions share learning goals, comparing student performance in relation to common rubrics would give much richer and more relevant evidence of what students are learning than standardized tests would. Accountability for results is not inconsistent with assessing to promote student learning, but promoting student learning should always come first. Banta hopes, as I do, that calls for assessment for accountability -- what I have called “trickle-down assessment” -- will not stifle this movement for assessing from the ground up.