Almost from the beginning of its work, the Secretary of Education’s Commission on the Future of Higher Education made it clear that it considered the American system of higher education accreditation to be falling short of its mission to be the chief guardian of quality among the nation’s colleges. And yet the panel’s leaders, and the Education Department officials charged with carrying out the commission’s recommendations, have also clearly viewed the accreditation system -- because it touches virtually all colleges and universities -- as a potential lever for bringing about the broader changes they envision for higher education.
The commission’s foremost recommendation, arguably, is that colleges and universities must do more to ensure that students are actually learning what the institutions are promising to teach them or train them to do. So at the core of the Education Department’s full-court press on accreditors is a desire to have the agencies ratchet up the pressure they in turn place on colleges to measure (and prove) that their students are learning and, importantly, to try to find ways to compare institutions’ success with one another’s.
That issue stirred controversy in December when a department advisory committee was accused of trying to unfairly change the criteria it uses to judge accrediting agencies. And it came front and center Thursday on the second day of the Education Department’s first negotiated rule making session on accreditation. (A full recap of Day 1, which might be helpful context for the uninitiated, appears here. )
On Wednesday night, a subgroup of the members of the federal accrediting panel altered an “issue paper” that department officials had proposed on the topic to strip language that said the department was considering requiring accrediting agencies to define a common “core set of student achievement measures, both quantitative and qualitative,” and to define an “acceptable level of performance” that all colleges they oversaw would have to meet.
The working group also dropped language that said that an institution’s performance could only be measured based on “what the performance is being compared to.” In its place appeared mushier language that said: “Given the diversity of institutional missions and the diversity of accrediting agencies, there needs to be further attention on the criteria that each agency applies to determine the adequacy of student academic achievement at the institutions it accredits.”
Despite that softening, though, the ultimate question at the core of the department’s (and the Spellings Commission’s) campaign remained: Noting that accreditors have primarily focused their judgment of institutions’ quality on whether an individual college is showing progress, the statement said: “This institutional improvement model has its strengths, but it does not lead to answers to questions such as whether the performance of the institution is good enough” (emphasis added).
And that question -- How does an accreditor measure whether a college or university is doing a “good enough” job educating its students? -- got a full if somewhat unsatisfying hearing Thursday, set up by another question posed by Vickie L. Schray, the Education Department’s lead negotiator in the accreditation rule making process. “The law requires accrediting agencies to have a standard for student achievement,” Schray said. “We were curious to hear your various interpretations or definitions of what a ‘standard’ is.”
The accreditors’ answers were enlightening. Thelma Thompson, president of the University of Maryland Eastern Shore, described a standard as a “level below which you shouldn’t fall” (an answer that would seem to support the department’s push to get accreditors to set minimum levels of performance for institutions to meet). And Craig Swenson, provost of Western Governors University, said he believed it was reasonable that accreditors “ought to have a benchmark or a basis of comparison that you establish to say that this is sufficient.”
But several accreditors seemed distinctly uncomfortable with that approach. Ralph Wolff, executive director of the senior college commission of the Western Association of Schools and Colleges, which was one of the agencies that felt the ground shift under it at the December meeting mentioned above, said accreditors have traditionally put the onus on “an institution to define its learning outcomes, and to assess the achievement of those outcomes and through that assessment to determine whether improvement is needed.” He added: “We believe we should keep that locus of responsibility at the institutional level.”
Elise Scanlon, executive director of the Accrediting Commission of Career Schools and Colleges of Technology, said it was “very reasonable for an accrediting commission to set expectations for the institutions they accredit and to hold them to that expectation.” But, Scanlon asked Wolff, “what exactly is the standard you’re using to determine whether that institution is a performer or a non-performer?”
Wolff’s answer – that the agency’s peer reviewers and officials would “rely upon qualitative judgments” to “make sure institutions are using good processes and to improve the processes that institutions are using” – drew an exasperated follow-up from Scanlon: “Would it ever be possible to say that an institution is not meeting the standard?” Yes, Wolff replied, citing a variety of reasons – “lack of rigor, inadequate assessment activity, lack of good information” – why an institution might be deemed to fall short on student learning outcomes. “What we don’t have are quantitative ‘bright line’ indicators that suggest that if you fall below” them, your entire institution is in trouble, Wolff said.
But isn’t there a level of performance beneath which institutions shouldn’t fall, asked James H. McCormick, chancellor of the Minnesota State Colleges and Universities. “If a college has complete autonomy, they might have low standards and be shown to have met them. Don’t we have to push colleges and universities to aspire higher and to meet certain standards? It’s hard to do, really hard to do. But don’t we have to push people to aspire harder, and aren’t you in a terrific position to push institutions to do that?”
Steven D. Crow, executive director of the North Central Association of Colleges and Schools’ Higher Learning Commission, asked Schray whether department officials envisioned finding one or a handful of institution-wide measures that would somehow “summarize performance,” or finding “a number for every program’s success, and we tally those up at the end for an institution’s success.” He added: “I’m just not sure what you’re really looking for. How narrow is this proposal?”
Schray said -- as department officials have done repeatedly, in answer to complaints that the government is seeking an oversimplified “one size fits all” solution to the perceived performance problem (like a new standardized test) -- that the department is “not looking for one assessment to be used by all institutions.” She added: “We’ve made every effort not to be prescriptive, and to try instead to rely on the expertise of this peer review system” to come up with appropriate performance measures.
But, she pressed on, “we are asking you all to help us figure out the best way to draft regulations that will encourage and support and promote not only the identification of those measures, but also some explicit statement about how you know when there is quality at an institution.”
Mark L. Pelesh, a top official of Corinthian Colleges, Inc., who is representing the Coalition for an American Competitive Workforce on the accrediting panel, suggested that a potential middle ground – or at least a starting point – might be for much greater transparency about the standards that colleges and accreditors are using to judge whether students are learning and advancing.
“If an accrediting agency, in the area of student achievement, makes a decision not to set objective standards,” Pelesh said, “it seems to me what we might do is require accredited institutions to set objectives and goals for the programs they’re offering, require them to show that they’ve made them transparent to their students and customers, collect data on how well they’re meeting those objectives and goals, and then either have the accreditor make a judgment about whether the institution is doing a good enough job, or at least make that information available to the students.”
As the discussion neared its end, Crow insisted that the accreditors were already pushing hard in the general direction the department wants, prodding institutions to “be clear about their goals, find a way to measure their success, and then continue to improve.” He acknowledged that “the product of this needs to be public,” and that the “possibility of benchmarking [one institution’s performance against others] is an important tool that needs to be brought into this.”
But he warned that accreditors had been “trying to encourage this culture shift in the past decade,” oftentimes facing stiff resistance from college leaders and rank and file faculty members, and that the “very first thing that could kill” the accreditors’ “success in changing this culture” is a federally imposed mandate that is seen as oversimplified and destructive.
“What I’m asking,” Crow said, “is that the regulatory environment not take this plant we’ve been nurturing for so long and try to hothouse it.” That anti-regulation plea was echoed by several other participants in the accreditation rule making session.
That session continues Friday, after which Education Department officials will "go away," as Schray put it (drawing laughs from the college officials in the crowd, some of whom might not be sorry to see the department vanish right now), to figure out how to turn some or all of the vague ideas and debating points discussed this week into possible new federal rules.