Robert Shireman is right. The former official at the U.S. Department of Education recently wrote that there is little evidence that using accreditation to compel institutions to publicly state their desired student learning outcomes (SLOs), coupled with the rigid and frequently ritualistic ways in which many accreditation teams now apply these requirements, has done much to improve the quality of teaching and learning in this country.
But the answer, surely, is not to abolish such statements. It is to use them as they were intended -- as a way to articulate collective faculty intent about the desired impact of curricula and instruction. For example, more than 600 colleges and universities have used the Degree Qualifications Profile (DQP). As one of the four authors of the DQP, I have worked firsthand with dozens of those institutions, and their faculties do not find the DQP proficiency statements to be “brief blurbs” that give them “an excuse to dismiss the process,” as Shireman wrote. Instead, they are using these statements to guide a systematic review of their program offerings, to determine where additional attention is needed to make sure students are achieving the intended skills and dispositions, and to make changes that will help students do so.
As another example, the Accreditation Board for Engineering and Technology (ABET) established a set of expectations for engineering programs that have guided the development of both curricula and accreditation criteria since 2000. Granted, SLOs are easier to establish and use in professional fields than they are in the liberal arts. Nevertheless, a 10-year retrospective study, published about two years ago, provided persuasive empirical evidence that engineering graduates were achieving the intended outcomes and that faculties in engineering worldwide have supported and used those outcomes.
Shireman also is on point about the most effective way to examine undergraduate quality: looking at actual student work. But what planet has he been living on not to recognize that this method is already in widespread use? Results of multiple studies by the National Institute for Learning Outcomes Assessment (NILOA) and the Association of American Colleges and Universities (AAC&U) indicate that this is how most institutions look at academic quality -- far exceeding the numbers that use standardized tests, surveys or other methods. Indeed, faculty by and large already agree that the best way to judge the quality of student work is to use a common scoring guide or rubric to determine how well students have attained the intended proficiency. Essential to this task is setting forth unambiguous learning outcomes statements. There is simply no other way to do it.
As an example of the efficacy of starting with actual student work, 69 institutions in nine states last year looked at written communications, quantitative fluency and critical thinking based on almost 9,000 pieces of student work scored by faculty using AAC&U’s VALUE rubrics. This was done as part of an ongoing project called the Multi-State Collaborative (MSC) undertaken by AAC&U and the State Higher Education Executive Officers (SHEEO). The project is scaling up this year to 12 states and more than 100 institutions. It’s a good example of how careful multi-institutional efforts to assess learning using student work as evidence can pay considerable dividends. And this is just one of hundreds of individual campus efforts that use student work as the basis for determining academic quality, as documented by NILOA.
One place where the SLO movement did go off the rails, though, was in allowing SLOs to be so closely identified with assessment. When the assessment bandwagon really caught on with accreditors in the mid-1990s, they required institutions and programs to establish SLOs solely for the purpose of constructing assessments. These statements otherwise weren’t connected to anything. So it was no wonder that they were ignored by faculty who saw no link with their everyday tasks in the classroom. The hundreds of DQP projects catalogued by NILOA are quite different in this respect, because all of them are rooted closely in curriculum or course design, in implementing new approaches to teaching, or in creating settings for developing particular proficiencies entirely outside the classroom. This is why real faculty members in actual institutions remain excited about them.
At the same time, accreditors can vastly improve how they communicate and work with institutions about SLOs and assessment processes. To begin with, it would help a lot if they adopted more common language. As it stands, they use different terms to refer to the same things and tend to resist reference to external frameworks like the DQP or AAC&U’s Essential Learning Outcomes. As Shireman maintains, and as I have argued for decades, they also could focus their efforts much more deliberately on auditing actual teaching and learning processes -- a common practice in the quality assurance approaches of other nations. Indeed, starting with examples of what is considered acceptable-quality student work can lead directly to an audit approach.
Most important, accreditors need to carefully monitor what they say to institutions about these matters and the consistency with which visiting teams “walk the talk” about the centrality of teaching and learning. Based on volunteer labor and seriously undercapitalized, U.S. accreditation faces real challenges in this arena. The result is that institutions hear different things from different people and constantly try to second-guess “what the accreditors really want.” This compliance mentality is extremely counterproductive, and accreditors themselves are only partially responsible for it. Instead, as my NILOA colleagues and I argue in our recent book, Using Evidence of Student Learning to Improve Higher Education, faculty members and institutional leaders need to engage in assessment primarily for purposes of improving their own teaching and learning practices. If they get that right, success with actors like regional accreditors will follow automatically.
So let’s take a step back and ponder whether we can realistically improve the quality of student learning without first clearly articulating what students should know and be able to do as a result of their postsecondary experience. Such learning outcomes statements are essential to evaluating student attainment and are equally important in aligning curricula and pedagogy.
Can we improve how we talk about and use SLOs? Absolutely. But abandoning them would be a serious mistake.