In a recent Century Foundation essay, I raised a concern that accreditors of traditional colleges are allowing low-quality education to go unaddressed while insisting, in a misguided attempt to prove they care about learning, that colleges engage in inane counting exercises involving meaningless phantom creatures they call student learning outcomes, or SLOs.
The approach to quality assurance I recommend, instead, is to focus not on artificially created measures but on the actual outputs from students -- the papers, tests and presentations professors have deemed adequate for students to deserve a degree.
I got a lot of positive feedback on the essay, especially, as it happens, from people involved in some of the processes I was criticizing. Peter Ewell, for example, acknowledged in an email that “the linear and somewhat mindless implementation of SLOs on the part of many accreditors is not doing anybody any good.”
This story began in the 1990s, when reformers thought they could improve teaching and learning in college if they insisted that colleges declare their specific “learning goals,” with instructors defining “the knowledge, intellectual skills, competencies and attitudes that each student is expected to gain.” The reformers’ theory was that these faculty-enumerated learning objectives would serve as the hooks that would then be used by administrators to initiate reviews of actual student work, the key to improving teaching.
That was the idea. But it hasn’t worked out that way. Not even close. Here is one example of how the mindless implementation of this idea distracts from rather than contributes to the goal of improved student learning. When a team from the western accreditor, the WASC Senior College and University Commission, visited San Diego State University in 2005, it raised concerns that the school had shut down its review process for college majors, which was supposed to involve outside experts and the review of student work. Now, 10 years have passed, and the most recent review by WASC (the team visit is scheduled for this month) finds there are still major gaps, with “much work to be done to ensure that all programs are fully participating in the assessment process.”
What has San Diego State been doing instead of repairing its program review process? It has been writing the meaningless student learning outcome blurbs that accreditors began requiring largely in response to the Spellings Commission in 2006. San Diego State reported its progress in that regard in a self-review it delivered to WASC last year:
Course Learning Outcomes (CLOs) are required for all syllabi; curricular maps relating Degree Learning Outcomes (DLOs) to major required courses are now a required component for Academic Program Review; programs are being actively encouraged to share their DLOs with students and align DLOs with CLOs to provide a broader programmatic context for students and to identify/facilitate course-embedded program assessment.
All this SLO-CLO-DLO gibberish and the insane curriculum map database (really crazy, take a look) is counterproductive, giving faculty members ample ammunition for dismissing the idiocy of the whole process. The insulting reduction of learning to brief blurbs, using a bizarre system of verb-choice rules, prevents rather than leads to the type of quality assurance that has student work at the center.
The benefit of starting instead with student work as the unit of analysis is that it respects the unlimited variety of ways that colleges, instructors and students alike, arriving with different skill levels, engage with the curriculum.
Validating colleges’ own quality-assurance systems should become the core of what accreditors do if they want to serve as a gateway to federal funds. Think of it as an outside audit of the university’s academic accounting system.
With this approach, colleges are responsible for establishing their own systems for the occasional review of their majors and courses by outside experts they identify. Accreditors, meanwhile, have the responsibility of auditing those campus review processes, to make sure that they are comprehensive and valid, involving truly independent outsiders and the examination of student work.
SLO madness has to stop. If accreditors instead focus on the traditional program-review processes, assuring that both program reviews and audits include elements of random selection, no corner of the university can presume to be immune from scrutiny.
Robert Shireman is a senior fellow at the Century Foundation and a former official at the U.S. Department of Education.
Robert Shireman is right. The former official at the U.S. Department of Education recently wrote, correctly, that there is little evidence that using accreditation to compel institutions to publicly state their desired student learning outcomes (SLOs) -- especially given the rigid and frequently ritualistic ways in which many accreditation teams now apply these requirements -- has done much to improve the quality of teaching and learning in this country.
But the answer, surely, is not to abolish such statements. It is to use them as they were intended -- as a way to articulate collective faculty intent about the desired impact of curricula and instruction. For example, more than 600 colleges and universities have used the Degree Qualifications Profile (DQP). As one of the four authors of the DQP, I have firsthand experience with dozens of those institutions, and their faculties do not find the DQP proficiency statements to be “brief blurbs” that give them “an excuse to dismiss the process,” as Shireman wrote. Instead, they are using these statements to guide a systematic review of their program offerings, to determine where additional attention is needed to make sure students are achieving the intended skills and dispositions, and to make changes that will help students do so.
As another example, the Accreditation Board for Engineering and Technology (ABET) established a set of expectations for engineering programs that has guided the development of both curricula and accreditation criteria since 2000. Granted, SLOs are easier to establish and use in professional fields than in the liberal arts. Nevertheless, a 10-year retrospective study, published about two years ago, provided persuasive empirical evidence that engineering graduates were achieving the intended outcomes and that these outcomes have been supported and used by engineering faculties worldwide.
Shireman also is on point about the most effective way to examine undergraduate quality: looking at actual student work. But what planet has he been living on not to recognize that this method is already in widespread use? Results of multiple studies by the National Institute for Learning Outcomes Assessment (NILOA) and the Association of American Colleges and Universities (AAC&U) indicate that this is how most institutions look at academic quality -- far exceeding the numbers that use standardized tests, surveys or other methods. Indeed, faculty by and large already agree that the best way to judge the quality of student work is to use a common scoring guide or rubric to determine how well students have attained the intended proficiency. Essential to this task is setting forth unambiguous learning outcomes statements. There is simply no other way to do it.
As an example of the efficacy of starting with actual student work, 69 institutions in nine states last year looked at written communications, quantitative fluency and critical thinking based on almost 9,000 pieces of student work scored by faculty using AAC&U’s VALUE rubrics. This was done as part of an ongoing project called the Multi-State Collaborative (MSC) undertaken by AAC&U and the State Higher Education Executive Officers (SHEEO). The project is scaling up this year to 12 states and more than 100 institutions. It’s a good example of how careful multi-institutional efforts to assess learning using student work as evidence can pay considerable dividends. And this is just one of hundreds of individual campus efforts that use student work as the basis for determining academic quality, as documented by NILOA.
One place where the SLO movement did go off the rails, though, was allowing SLOs to be so closely identified with assessment. When the assessment bandwagon really caught on with accreditors in the mid-1990s, they required institutions and programs to establish SLOs solely for the purpose of constructing assessments. These statements otherwise weren’t connected to anything. So it was no wonder that they were ignored by faculty who saw no link with their everyday tasks in the classroom. The hundreds of DQP projects catalogued by NILOA are quite different in this respect, because all of them are rooted closely in curriculum or course design, implementing new approaches to teaching or creating settings for developing particular proficiencies entirely outside the classroom. This is why real faculty members in actual institutions remain excited about them.
At the same time, accreditors can vastly improve how they communicate and work with institutions about SLOs and assessment processes. To begin with, it would help a lot if they adopted more common language. As it stands, they use different terms to refer to the same things and tend to resist reference to external frameworks like the DQP or AAC&U’s Essential Learning Outcomes. As Shireman maintains, and as I have argued for decades, they also could focus their efforts much more deliberately on auditing actual teaching and learning processes -- a common practice in the quality assurance approaches of other nations. Indeed, starting with examples of what is considered acceptable-quality student work can lead directly to an audit approach.
Most important, accreditors need to carefully monitor what they say to institutions about these matters and the consistency with which visiting teams “walk the talk” about the centrality of teaching and learning. Based on volunteer labor and seriously undercapitalized, U.S. accreditation faces real challenges in this arena. The result is that institutions hear different things from different people and constantly try to second-guess “what the accreditors really want.” This compliance mentality is extremely counterproductive and accreditors themselves are only partially responsible for it. Instead, as my NILOA colleagues and I argue in our recent book, Using Evidence of Student Learning to Improve Higher Education, faculty members and institutional leaders need to engage in assessment primarily for purposes of improving their own teaching and learning practices. If they get that right, success with actors like regional accreditors will automatically follow.
So let’s take a step back and ponder whether we can realistically improve the quality of student learning without first clearly articulating what students should know and be able to do as a result of their postsecondary experience. Such learning outcomes statements are essential to evaluating student attainment and are equally important in aligning curricula and pedagogy.
Can we do better about how we talk about and use SLOs? Absolutely. But abandoning them would be a serious mistake.
Peter Ewell is president of the National Center for Higher Education Management Systems (NCHEMS), a research and development center.
A coalition of consumer groups, legal aid organizations and unions objects to the state of New York joining an agreement that would change how colleges offering distance education courses in the state are regulated. As coalition members asserted in an Inside Higher Ed article, the state would be ceding its authority to other states. Students would be left with no protection from predatory colleges, and the agreement would make it easier for “bad actors to take advantage of students and harder for states to crack down on them.”
That all sounds ominous. It would be, if it were true.
Even in the digital era, the regulation of educational institutions is left to each state. The resulting array of requirements confuses both students and institutional faculty and staff. The State Authorization Reciprocity Agreement (SARA) was created to apply consistent review standards across the states. An institution approved in its home state is eligible to enroll students (within limits) in any other SARA member state. As of this writing, 36 states have joined in a little over two years. That number may approach 45 by the end of 2016.
SARA means there is now a consistently applied set of regulations governing distance education when students from one state take courses from an institution in another SARA state. Chief critic Robert Shireman, a senior fellow at the Century Foundation and former official at the U.S. Department of Education, cites Iowa as proof that “some states have discovered they can’t add more qualifications,” as if that were a surprise. Reciprocity agreements depend upon consistency. If Iowa wishes to change a policy, there is a process for regulators in the state to suggest a change. States enter into the agreement knowing full well that consistency is a requirement.
Currently, many states -- notably including New York -- have no regulations in place to protect their in-state students who enroll in courses from many out-of-state colleges. SARA’s critics depict New York as “a national leader in protecting its citizens from unfair business practices.” But if a college has no physical presence in New York other than enrolling students in an online course, it is not regulated and those students are not protected. The state has not allocated any funds to regulate the estimated hundreds of colleges from throughout the country currently serving online students in the state. Asking each state to regulate the institutions headquartered within its borders, regardless of where they serve students, is a much more reasonable solution. Put another way, SARA increases the amount of regulatory oversight of distance education, but does it in a manner more relevant to today’s economy.
To be fair, New York has been aggressive in pursuing bad actors in the for-profit education sector, as evidenced by its $10.25 million settlement with Career Education Corporation. It is worth noting, however, that the lawsuit was largely based on brick-and-mortar schools that have nothing to do with SARA. In addition, this action was brought by the New York attorney general’s office and was not the result of education-based regulation. There is a relevant section in the SARA policy stating that nothing precludes “a state from using its laws of general application to pursue action against an institution that violates those laws” and another stating that “nothing precludes the state in which the complaining person is located from also working to resolve the complaint.”
The reality of SARA hardly qualifies as “ceding the ability to guard its citizens against abusive practices,” as a Century Foundation letter objecting to New York signing the SARA agreement claims.
What would be lost if New York were not to sign the SARA agreement? There is certainly a downside for institutions offering distance education courses and programs for out-of-state students. It might surprise readers of the letter, but fully 70 percent of students who take all of their courses at a distance do so from public and nonprofit institutions -- institutions like Empire State College, a longtime leader in distance education that is part of the SUNY system. Furthermore, the large for-profit institutions referenced in the article already have the budget and history of obtaining state-by-state approval. It is the smaller-profile nonprofits that have the most difficulty obtaining authorization to serve students in different states.
A reciprocity agreement between Massachusetts and Connecticut is cited as an alternative. As best we can tell, it allows each state to continue using its own current regulations. This is not reciprocity and does not improve the consumer protection landscape for students or institutions.
Were New York to avoid signing the agreement, students who live in the state would end up with fewer choices, primarily because fewer nonprofit institutions could operate there. Under SARA, New York students actually would have more consumer protection than currently exists, as well as regulatory support for any complaint process, including from in-state agencies. Additionally, states systematically working in concert through SARA will more quickly find and deal with institutions that treat students poorly. This is far better than hypothetical, unfunded regulatory oversight by New York trying to operate independently from every other state.
New York has the opportunity to sign an agreement that would expand the regulatory oversight of distance education programs, would leave the state with the same ability to go after bad actors that it has exercised in the past and would increase choices for resident students -- particularly working adults -- seeking to earn a valuable degree that is only possible through distance education. It would be a mistake to let a complaint based on hypotheticals and misrepresentations of reality derail this progress.
Phil Hill is co-publisher of the e-Literate blog, co-producer of e-Literate TV and partner at MindWires Consulting. Russ Poulin is director of policy and analysis at WCET (WICHE Cooperative for Educational Technologies), which is a division of the Western Interstate Commission for Higher Education.