Accreditation's Accidental Transformation

In the past year, an accrediting organization grants accreditation to a university despite concerns regarding the institution’s assignment of credit hours for certain courses. What happens next? The U.S. Department of Education’s inspector general recommends a review that could lead to suspension or termination of the accreditor’s recognition, and the U.S. House of Representatives holds a hearing on how accrediting organizations review institutions’ credit hour policies. At the same time, a legal definition of a credit hour is included in USDE’s recently proposed regulations.

An accrediting organization denies a request by a nonprofit college to continue its accreditation, as part of a planned purchase by a for-profit corporation, only weeks after a U.S. Senate hearing on for-profit education drew widespread media coverage. Subsequently, the 125-year-old college announces that it will close. What happens next? The accreditation decision is questioned not only by the college and the for-profit corporation, but also by lawmakers and by the media.

Why the national attention? Why the second-guessing of the accreditation decisions? It is part of the accidental transformation of accreditation.

Academic quality assurance and collegiality -- the defining features of traditional accreditation -- are, at least for now, taking a backseat to consumer protection and compliance with law and regulation. Government and the public expect accreditation to essentially provide a guarantee that students are getting what they pay for in terms of the education they seek.

Blame the enormous amount of taxpayer money involved (some $150 billion every year at the federal level alone), which puts more and more pressure on accreditors to give more and more attention to assuring that taxpayers’ money is well-spent. “Well-spent” is not about abstract notions of quality.

Blame the powerful demand that, above all, colleges and universities provide credentials that lead directly to employment or advancement in employment. Driven by public concern about a difficult job market and the persistent rise in the price of tuition, government and the public now expect accrediting organizations to assure that the colleges, universities and programs they accredit will produce these pragmatic results.

The worth of higher education is determined less and less through the professional judgments made by the academic community. The deference at one time accorded accrediting organizations to decide the worth of colleges and universities is diminished and perhaps disappearing.

Accreditation decisions about individual institutions are now scrutinized by additional actors -- whether the U.S. Department of Education or Congress or the press -- who make their own judgments here. Simply put, this is “co-accreditation.” For these additional actors, “quality” is about compliance with federal law and regulation and about the practical gains of students -- judgments that government and the public can readily make.

Why does this matter?

  • Because of the transformation of what counts as quality. The worth of higher education, once judged by the quality of faculty, curriculum, research and academic standards, is more and more judged in solely pragmatic terms -- earning a credential or getting a job or promotion. What happens to the essential role of colleges and universities in assuring intellectual development and vitality in our society?
  • Because of the transformation of who decides quality. For more than 100 years, the accreditation process has been a key factor in creating an outstanding national higher education enterprise. Will we still enjoy outstanding colleges and universities as government, the press and the public become more prominent deciders here?
  • Because of the transformation of the role of money in judging quality. Over and over again, government and the public point to the ever-growing taxpayer investment in higher education and demand more and more accountability from accreditation. While money is a vital factor in all aspects of society, do we want it to be the centerpiece of quality judgments?

Do we know the consequences of this accidental transformation? Are we prepared to accept them? These changes may be unintended, but they are dramatic and far-reaching. Is this how we want to proceed?

The White Noise of Accountability

“Accountability” -- a term that has been with us, late and soon. Its six syllables trip by as the background white noise in the liturgy of higher education -- in steady strophe and antistrophe, and repeated so often that one assumes it must be a magic incantation.

You know what happens with liturgies: after so many repetitions, there is no recompense. We don’t really know what we are saying. In this case, the six-syllable perfect scan, “accountability,” simply floats by as what we assume to be a self-evident reality. Even definitions wind up in circles, e.g., “In education, accountability usually means holding colleges accountable for the learning outcomes produced.” One hopes Burck Smith, whose paper containing this sentence was delivered at an American Enterprise Institute conference last November, had his tongue firmly in cheek with the core phrase.

The language is hardly brand-new, nor confined to the commentariat. The 2005 report of the National Commission on Accountability in Higher Education puts “accountability” in a pinball machine where “goals” become “objectives” become “priorities” become “goals” again. One wins points along the way, but has no idea of what they represent.

Another trope in this presentation involves uttering the two words “accountability” and “transparency” together, as if one defines by proximity. In its 2008 monograph, "A Culture of Evidence," the Educational Testing Service works the phrase “transparency and accountability” so often that it unquestionably takes on a liturgical character. The Texas Higher Education Coordinating Board starts right off in its 2007 report "Accountability in Higher Education" with a typical variation of the genre: “Making accountability more transparent ... will require...” and no further discussion of the first clause. If I am going to make something called “accountability” “more transparent,” isn’t it incumbent upon me to tell the reader what that something is and how, at the present moment, it is cloudy, opaque, etc.? THECB never does. Its use of “accountability” is just another piece of white noise. It’s a word you utter because it lends gravitas.

So what kind of creature is this species called “accountability”? Readers who recall Joseph Burke’s introductory chapter to his Achieving Accountability in Higher Education (Wiley, 2004) will agree that I am hardly the first nearsighted crazy person to ask the question. This essay will come at the word in a different way and from a different tradition than Burke’s political theory.

I am inviting readers to join in thinking about accountability together, with the guidance of some questions that are both metaphysical and practical. Our adventure through these questions is designed as a prodding to all who use the term to tell us what they are talking about before they otherwise simply echo the white noise.

Basic Questions About Relationships

We are now surrounded by a veritable industry producing enormous quantities of data and information on various performances of institutions of higher education in the name of something called “accountability,” and it is fair to ask where this production sits in terms of the potential meaning of its banner. It is also necessary to note that, in the rhetoric of higher education, “institution” is usually the subject of sentences including “accountability,” as if a single entity were responsible for a raft of consequences. But, as noted below, when our students attend three or four schools, the subject of these sentences is considerably weakened in terms of what happens to those students. The relationship is attenuated.

For now we start with a postulate: however we define accountability, we are describing a relationship in which obligations and responsibilities dwell. Our questions sound simple: What kind of relationship? What kind of obligations? What kind of responsibilities? What actions within the relationship justify its type? The exploration is conducted not to convince you that one configuration is “better” than another, but rather to make sure that we all think better about the dynamics of each one.

What types of relationships might be at issue?

  • Contractual, both classic and unilateral
  • Regulatory
  • Warranty
  • Ethical
  • Market
  • Environmental

That is not a complete list, to be sure, and I trust readers will add to it. But it is one where we can ask, at each station, whether there are clear and unambiguous parties on both sides of the relationship. And, for each of these frameworks, in their applications in higher education, it is also fair to ask:

  • Who or what is one accountable to?
  • For what?
  • Why that particular “what” -- and not another “what”?
  • To what extent is the relationship reciprocal?
  • Are there rewards and/or sanctions inherent in the relationship?
  • How continuous is the relationship?

Accountability as Implicit Contract

We mix our blood or spit on the same ground to seal our agreements. There is an offer, an acceptance, and a named party standing behind each side. Every law student learns the ritual in the first week of the first term of classes. The arrangement includes the provision of goods, services, or spirit; the exchange is specified; the agreement is binding, and remedies are specified if either party breaks the terms of the exchange. There are, of course, a lot of legal weeds here, and more variations than galaxies, but that’s the general idea.

Where do we see contracts in higher education between an institution and parties outside an institution? As a general principle, wherever the money flows. Indeed, one of the key factors that propels consideration of accountability as either a contract or regulatory construct lies in cost. There is a dollar sign on every college door in the U.S., strongly implying that those who pass through the doors are purchasing something that the offeror is bound to deliver.

From a contractual standpoint, when the parents of students or students themselves pay tuition and fees, they are accepting an offer from the institution to provide services -- both major and minor, explicit and implicit. As practice stands, they are not contracting for results; rather, for services that may produce consequences, some of which can be reasonably anticipated and some of which cannot. And when the institution takes public funds (federal or state), it has entered into a contractual relationship in which it has agreed to provide generalized or specific services (and, sometimes, products). These examples apply equally to public, not-for-profit, and for-profit institutions.

If accountability in higher education is a contractual relationship, we’ve got problems. The “goods” or “services” to be rendered by the offeror are usually indeterminate; there is no formal statement of obligations. The institution does not pledge to students that its efforts will produce specified learning, persistence and graduation, productive labor market entry, or a good life. We don’t put low persistence or graduation rates in a folder subject to educational malpractice suits. Nor does the institution pledge to public funding authorities that it will produce X number of graduates, Y dollars of economic benefits, or Z volume of specified community services, or be subject to litigation if it fails to reach these benchmarks.

The Business-Higher Education Forum’s 2004 Public Accountability for Student Learning in Higher Education: Issues and Options notes that a number of non-student-referenced “measures of institutional performance ... shape [italics mine] public accountability in higher education,” including “resource use, research and service, and contributions to economic development.” Even before one gets to student learning, one has to ask where something called “public accountability” lies in these activities and outputs. Are private institutions under implicit contract to the public for their efficiencies in “resource use”? Where does that obligation come from? What is “economic development,” and was it agreed to in a state charter for an institution? If students and staff simply spend money in the districts surrounding an institution, does that constitute purposeful economic development by the institution?

Look more closely at how the institution guides us, and one usually finds a mission statement with very generalized goals and assurances of care for: the student, the surrounding community, the search for knowledge, the provision of opportunity, the value of a “diverse” human environment (even if “diverse” is never translated), and maybe more. These pledged general “services” are chosen by the provider, who thus executes what the law would call a “unilateral contract.” The unilateral contract mode also allows established and ad hoc organizations to delineate what individual institutions and state systems must/should do (the “what” of a relationship) to validate their responsibilities.

The unilateral contract starts out as an intriguing vehicle for “accountability,” but swiftly heads into a dead end because the “with whom or what” that stands on the other side of the contract is more a matter of conjecture and interpretation than fact. There is no obvious party to accept the offer, no obvious reward for provision, and no obvious sanction if the provision falls short of promise. If the unilateral declaration claims consensus status, one would want to know the parties to the consensus. Were faculty partners (one doesn’t hear much about the instructional workforce in all the white noise of accountability)? Students? Students, last we looked, haven’t stopped buying the product no matter what the institution issuing a unilateral declaration of mission and care actually does, so, from a student perspective, the unilateral contract is moot. From any other perspective, it is a fog bank.

Accountability as Regulatory Relationship

The concentric circle questions on contractual relationships lead to an intermediary step on the way to formal regulation: performance funding, better labeled (as Burke and Minassians did in their 2003 Performance Reporting: “Real” Accountability or Accountability “Lite”) as performance budgeting. This is a case that affects only public institutions, with state authorities acting as de facto contractual offerors, promising to reward accepting parties (the schools) for meeting specified thresholds or increases of production, inclusion, public promulgation of internal performance metrics, etc. Historically, performance funding is not a mandate, and there are no sanctions for nonperformance. Institutions that fall short are held harmless. Those that exceed may or may not receive extra funds, depending on a state’s fiscal condition. One can budget, after all, but not necessarily fund.

The true regulatory relationship tightens the actions and obligations one observes dimly in performance funding. We begin to see divergent paths of financial representation and non-financial information, both required, in different ways, by state authorities. Both public and private institutions are subject to requirements for basic financial disclosure as a byproduct of their status as state-chartered institutions doing public business. After that point, annual financial reports are required of public institutions by state authorities, e.g., Texas asks for operating expenses per FTE, with different calculations for each level of degree program, and administrative costs as a proportion of operating expenses (seen as a measure of institutional efficiency). Private institutions may report similar information to their boards of trustees, but are under no obligation to reveal internal financial information to anyone else.

What happens if a public institution, under legislative mandate, presents incomplete or dubious financial information, or finance data that clearly reveal inefficiencies? Are there sanctions? Does the state ask for the CFO’s head? To be “accountable” in this regulatory framework is to provide information, not to suffer for it. One can fulfill one’s obligations, but not necessarily one’s responsibilities. Is that what we mean by “accountability”?

As for non-financial information, the closest we come to state regulation with consequences is in recent legislative proposals to fund public institutions of higher education on the basis of course or degree completions and not enrollments. But this type of regulation holds the institution responsible for the behavior of students, thus clouding the locus of both obligation and responsibility. Is this what we mean? If so, then legislators and other policy makers ought to be more explicit about it.

It should be noted that recent performance funding “rewards” for increased degree completion are, to put it gently, rather creative. The Louisiana Board of Regents, for example, will provide extra funding for institutions that increase not the percentage, but the numbers, of graduates by … allowing them to raise tuition. The irony is delicious: you will pay more to attend an institution that graduates not a greater percentage of, but more, students. In Indiana, where all public institutions are scheduled for budget cuts in 2010, those that produce more degrees will not be cut as much as others. In other words, in both cases, no public money really changes hands. Clever!

Accountability as Warranty

The warranty interpretation of accountability is a variation on a unilateral contract. The manufacturer attests that the product you buy is free of defects, and, within a specified period (1 year, 3 years), unless you abuse it in ways that transcend the capacity of its structure, components, and ordinary siting, the manufacturer will repair or replace the product. Translated into the principal function of institutions of higher education, the distribution of knowledge and skills, the warranty implies that students to whom degrees are awarded are analogous to products (human beings filled with knowledge and skills), behind which the institution of higher education stands. The recipient of the warranty is generalized -- the “public,” or “employers,” or “policy makers” -- not a very precise locus for the accountability relationship.

The warranty gloss sounds intriguing, and one is drawn to see where it leads. Does the warranty form mean that all those to whom an institution grants degrees have demonstrated X, M, and Q, and that these competencies will function at qualifying or higher levels for at least Z years? If so, then at least there are substantive reference points in such a warranty statement.

A warranty is a public act; the institution is the responsible party, hence also responsible for bearing witness -- publicly -- to what the credential represents. We’re back on the border of contracts: the institution offers programs of study and criteria for awarding degrees; the student implicitly accepts by registering for courses. The student then fulfills the terms of the offer, demonstrating X, M, and Q, whereupon the institution awards the degree. One arm of the contract is fulfilled, with both sides meeting their obligations.

With that fulfillment in hand, the institution, as a publicly chartered entity whose primary obligation is the distribution of knowledge and skills, can turn to the chartering authority, the state (and its implicit ground, “the public”), and testify that it has fulfilled its primary function, justified its charter. In this case, the testimony becomes a de facto warranty, with the second arm of the implicit contract fulfilled. Sounds like all the conditions of “accountability” are met.

But there are problems here, too. The warranty is wholly a representation of the provider. It does not require evidence from the users of alumni work, civic involvement, or cultural life. The terms of maintenance and advancement of knowledge and skills beyond students’ periods of study are wholly subjunctive. Higher education leaders and followers are justly wary of staking their work on the performance of alumni. “We are not manufacturing a product with fixed attributes,” they would cry -- and they are so right about that. “Too many intervening variables that are beyond our obligations!” “We aren’t responsible for the labor market!” The threat to any warranty is exogenous.

Relationship. Obligation. Responsibility. Is this vocabulary sufficient for understanding accountability in the context of higher education? Maybe, but we need a different lens to see how.

Accountability According to Socrates

The non-financial information that institutions of higher education are providing in increasingly significant volumes raises a Socratic formulation for “accountability.” In the Socratic moral universe, one is simultaneously witness and judge. The Greek syneidesis (“conscience” and “consciousness”) means to know something with, so to know oneself with oneself becomes an obligation of institutions and systems -- to themselves. “Obligation,” in its Socratic formulation, is an ethical touchstone, a universal governing principle of human relations. Outsiders (“the public,” “employers,” “policy makers”) may observe the information we produce as witnesses to our own behavior, processes, and outcomes, but if the Socratic mantra is adhered to, they are bystanders. Obligation becomes self-reflexive.

There are no external authorities here. We offer, we accept, we provide evidence, we judge. There is nothing wrong with this: it is indispensable, reflective self-knowledge. And provided we judge without excuses, we hold to this Socratic moral framework. As Peter Ewell has noted, the information produced under this rubric, particularly in the matter of student learning, is “part of our accountability to ourselves.”

But is this “accountability” as the rhetoric of higher education uses the white noise -- or something else?

I contend that, in response to shrill calls for “accountability,” U.S. higher education has placed all its eggs in the Socratic basket, but in a way that leaves the basket half-empty. It functions as the witness, providing enormous amounts of information, but does not judge that information. It is here that the dominant definitions of accountability in U.S. higher education can be found:

“Accountability is the public communication about different dimensions of performance, geared to general audiences, and framed in the context of goals and standards.” (Business-Higher Education Forum, 2004)
“Accountability is the public presentation and communication of evidence about performance in relation to goals.” (Texas Higher Education Coordinating Board, 2007)
“VSA [Voluntary System of Accountability] is a program to provide greater accountability by public institutions through accessible, transparent, and comparable information...” (AASCU and NASULGC, 2007)

There are a couple of wrinkles in these direct and implied definitions (“standards” and “comparable”), but we’ll set them aside. The Socratic position yields accountability by metrics. And we certainly get them. For example, the University of California System’s 2009 Accountability Report provided no fewer than 131 indicators that turn over most of the stones of system operation. Some 41 percent of these indicators are basically “census” data, e.g., enrollments, full-time “ladder rank” faculty, R&D expenditures. These are generally available in other places and fairly inconsequential in terms of the obligations of institutions, but it is very nice to have them all in one place. It’s certainly public, it’s certainly transparent, and it is certainly overwhelming. Whoever wants to select an item of interest has a wide array of choice.

By one interpretation, this report may be an unconscious satire on the entire enterprise of the witness producing numbers, for the only relationship a document such as this implies is to brush off the nags. “You wanted data about everything we do? Here it is! Now go away!”

We frequently observe a plea for excuses on data production in this context, e.g. “Measurement isn’t sufficient for accountability, but it is necessary” (for instance, in Chad Aldeman and Kevin Carey's "Ready to Assemble: Grading State Higher Education Accountability Systems," 2009). Well, if the indicator menu “isn’t sufficient,” what else do these advocates suggest complete the offerings?

Every single “best practice” cited by Aldeman and Carey is subject to measurement: labor market histories of graduates, ratios of resource commitment to various student outcomes, proportion of students in learning communities or taking capstone courses, publicly-posted NSSE results, undergraduate research participation, space utilization rates, licensing income, faculty patents, volume of non-institutional visitors to art exhibits, etc. etc. There’s nothing wrong with any of these, but they all wind up as measurements, each at a different concentric circle of putatively engaged acceptees of a unilateral contract to provide evidence. By the time one plows through Aldeman and Carey’s banquet, one is measuring everything that moves -- and even some things that don’t.

Market-Based Accountability

From a different corner of the analytic universe has come the notion that the reasons one presents all that information, data, and indicators of institutional performance are (a) to position one’s institution in a market of student choice, and (b), as a necessary condition of that positioning, to compare one’s performance indicators with those of like institutions. There are two markets here that provide the judgment of institutional success: one where bodies are counted as applicants and transfers, and one of media exposure and attention.

Burck Smith, cited earlier, sees this “market accountability” in more complex terms. His “market” is an invisible field of information on which players presumably compete. Their only obligations are to the invisible force. They are, in effect, selling services, and the market judges by a quality-to-price ratio. By this interpretation, the market is a mediating ground on which providers and consumers meet, with the latter judging the former with markers of commerce (everything from tuition to research grants to general or program specific support). Under these formulas, there will be “market winners” and “market losers.”

Is accountability a game of winners and losers? Are there judges who issue decisions about best-in-show? If prospective students and their parents are the judges, then best-in-show gets a volume of applications that would swamp the campus for the next three generations. This bizarre market assumes unlimited numerus clausus at every institution of higher education.

Sorry, but basic capacity facts mean that consumers cannot vote with their feet in higher education. We’re not selling toothpaste or shampoo, as Kelly and Aldeman’s 2010 "False Fronts?: Behind Higher Education’s Voluntary Accountability Systems" assumes. And if state legislatures and/or state higher education authorities are the judges, do they really sanction Old Siwash as the pit of performance and close down a campus that cost them $100 million to start with and on which at least a small part of a local economy depends?

More to the point of questioning the market interpretation, we can ask whether the provision-of-information designed to compare institutions -- in a particular region, of a particular category, etc. -- is “accountability”? If an institution is buying tests such as the CLA, and claims that its 100 paid test-taking volunteers improved at a 0.14 Effect Size rate greater than matched students at the peer school in another state, who are the receiving parties of the advertisement? Which of these parties even begins to understand the advertisement? And by what authority are institutions obligated to provide this elusive understanding?

If we glossed the Socratic notion on provision-of-information, the purpose is self-improvement, not comparison. The market approach to accountability implicitly seeks to beat Socrates by holding that I cannot serve as both witness and judge of my own actions unless the behavior of others is also on the table. The self shrinks: others define the reference points. “Accountability” is about comparison and competition, and an institution’s obligations are only to collect and make public those metrics that allow comparison and competition. As for who judges the competition, we have a range of amorphous publics and imagined authorities.

In fact, “accountability” fades into an indeterminate background landscape under this “market” formulation precisely because there are no explicit and credible second parties. It fades even more under what the Business-Higher Education Forum wisely termed “deinstitutionalization.”

That is, given both accelerating student mobility (multi-institutional attendance, staggered attendance patterns, geo-demography that turns enrollment management on its head) and e-Learning, the “institution” as the subject of accountability sentences has lost considerable status as the primary claimant for results involving student attainment and learning. Hmmmm!

Accountability as Environment

When Peter Ewell (in Assessment, Accountability, and Improvement, 2009) observes that “the central leitmotifs of this new accountability environment are transparency and learning outcomes,” he stumbles across (though he doesn’t play it out) yet another intriguing notion of accountability. It is not a form of action within a specific type of relationship; it is an “environment.” What kind of environment? One in which those with visibility and access to mass media have pushed higher education to provide understandable (“transparent”) data and information on what they do, and indicators (which may not be so clear, but which come with short-hand white noise phrases such as “critical thinking” and “teamwork”) of what happens to students’ knowledge and skills as a result of spending some time (though how much is rarely addressed) and effort (something that is never addressed) in higher education (no matter how many institutions the student attends). These are all “messages,” and their aggregation constitutes public propaganda.

There are no formal agreements here: this is not a contract, it is not a warranty, it is not a regulatory relationship. It isn’t even an issue of becoming a Socratic self-witness and judge. It is, instead, a case in which one set of parties, concentrated in places of power, asks another set of parties, diffuse and diverse, “to disclose more and more about academic results,” with the second set of parties responding in their own terms and formulations. The environment itself determines behavior.

Ewell is right about the rules of the information game in this environment: when the provider is the institution, it will shape information “to look as good as possible, regardless of the underlying performance.” The most prominent media messenger, U.S. News & World Report’s rankings, and the most media/policy-maker-connected of the glossy Center reports, "Measuring Up" (which grades states with formulas resembling Parker Brothers board games) simply parse information in different ways, and spill it into the “accountability environment.” The messengers become self-appointed arbiters of performance, establishing themselves as the second party to which institutions and aggregates of institutions become “accountable.” Can we honestly say that the implicit obligation of feeding these arbiters constitutes “accountability”?

Decidedly not, even though higher education willingly engages in such feeding. But if the issue is student learning, there is nothing wrong with -- and a good deal to be said for -- posting public examples of comprehensive examinations, summative projects, capstone course papers, etc. within the information environment, and doing so irrespective of anyone requesting such evidence of the distribution of knowledge and skills. Yes, institutions will pick what makes them look good, but if the public products resemble AAC&U’s “Our Students’ Best Work” project, they set off peer pressure for self-improvement and very concrete disclosure. The other prominent media messengers simply don’t engage in constructive communication of this type.

Conclusions and Reflections

At the end of this exploratory flight, I am not sure where to land, other than to acknowledge an obvious distinction between the practice and the nature of “accountability”: the former is accessible; the latter is still a challenge. Empirically, U.S. higher education has chosen a quasi-Socratic framework, providing an ever-expanding river of data to indeterminate (or not very persuasive) audiences, but with no explicit quality assurance commitment. Surrounding this behavior is an environment of requests and counter-requests, claims and counter-claims, with no constant locus of authority.

Ironically, a “market” in the loudest voices, the flashiest media productions, and the weightiest panels of glitterati has emerged to declare judgment on institutional performance in an age when student behavior has diluted the very notion of an “institution” of higher education. The best we can say is that this environment casts nothing but fog over the specific relationships, responsibilities, and obligations that should be inherent in something we call “accountability.”

Perhaps it is about time that we defined these components and their interactions with persuasive clarity. I hope that this essay will invite readers to do so.

Clifford Adelman is senior associate at the Institute for Higher Education Policy. The analysis and opinions expressed in this essay are those of the author, and do not necessarily represent the positions or opinions of the institute, nor should any such representation be inferred.

The Future of Accreditation?

With Congress poised to renew the Higher Education Act, the push for accountability has opened the door to proposed federal changes to accreditation of higher education. If not properly countered, federal accountability demands will set us firmly on a path where self-regulation of academic quality through accreditation is significantly diminished by government regulation and the presence of accreditation shrinks. If confronted with our situation, Alice in Wonderland might have said: "Self-regulation is, after all, just government regulation that I like."

It was the year 2014 and the shrinking of accreditation was complete. Self-regulation through voluntary accreditation had almost disappeared from the higher education landscape. It had been replaced with federal control of thousands of U.S. colleges and universities.

Just as the 2008 amendments to the Higher Education Act (HEA) enlarged the footprint for federal control over higher education, the 2014 amendments enabled the government to erase accreditation as an arbiter of quality from federal statute. Congress removed the standards for recognition of accreditation from the law and shut down the federal advisory committee that reviewed the accreditors, halting the 60-year federal reliance on the enterprise as a gatekeeper of federal funds. The voluminous regulations that accompanied the law and certified the reliability of accrediting organizations were rescinded as well.

How It Happened

How did this take place? Voluntary accreditation was undermined by a public that now vested greater authority in government judgment about the performance of colleges and universities than in accreditation, a nongovernmental, somewhat obscure and “private” source of quality judgment that had come to be viewed as inadequate. The press and elected officials, increasingly reflecting public sentiment, routinely described accreditation as insular and, at times, even arrogant in its lack of full transparency and responsiveness to the public.

Accreditation, which had claimed the mantle of primary authority on higher education quality for many years, was, above all, diminished by the public accountability movement that had roots in the 1980s. This demand for accountability reached a crescendo toward the end of the George W. Bush presidency (2001-2009), with the federal Secretary of Education’s Commission on the Future of Higher Education in 2005-2006 and subsequent activities in 2007. The twin themes of the commission’s report and attendant activities -- the inadequacy of accreditation and the consequent need for additional government control of quality -- coincided with the culmination of efforts to reauthorize the HEA that had been under way since 2003 and were completed in 2008. The reauthorization incorporated much of the thinking of the commission, setting the stage for the diminution of accreditation and the assertion of federal control of higher education.

The federal government, by establishing an alternative system of quality judgment that had immediate credibility with the public, eclipsed the need for accreditation. The key element was the replacement of accreditation standards with government standards for quality, comparable in a number of ways to the No Child Left Behind Act of 2002, which established government expectations for success in elementary and secondary schools. Accreditation could no longer compete.

Institutions, to their credit, did try to sustain their considerable loyalty to accreditation.

But, in the end, they could not continue to invest in the process. College and university presidents had conducted a cost-benefit analysis that made it painfully clear that future perceptions of their institutions rested more and more with judgments emerging from government scrutiny, not accreditation. The accreditation process that institutions had undergone for years -- self-studies, site visits, peer review and a collegial system of careful judgments about quality -- no longer provided a significant return on investment.

Higher Education and Accreditation React

Looking back, it was clear that colleges, universities and accreditors underestimated the persistence and intensity of calls for greater public accountability. Despite a series of valuable and important initiatives in this area, higher education’s otherwise compelling and forceful responses did not match the urgency of the accountability demands.

And, in a number of instances, higher education institutions and accreditors had remained emphatically resistant to public accountability. They often disagreed with government about the appropriate tools needed to address this vital subject. From the perspective of many educators, current approaches to accountability often rested on erroneous assumptions, inadequate evidence or poor methodologies. This was simply unacceptable when addressing such complex and nuanced issues as institutional performance and student achievement.

Mandated accountability also did not sit well with higher education and accreditation leaders, who firmly believed that it had to be addressed voluntarily. The result was a good deal of higher education discussion and activity to address accountability dating to the 1980s, but not enough robust action to fully engage public demands.

Compounding the problem, some institutional and accreditation leaders were no longer fighting for the privilege of self-regulation. Perhaps self-regulation was simply taken for granted. Perhaps it had been a fundamental feature of higher education for so long that it had become invisible. Whatever the reasons, leaders tended, less and less, to make the case for self-regulation as the responsible exercise of a coveted independence and self-determination for the academy, especially in academic matters.

In retrospect, it would have been helpful if more academic leaders had publicly re-affirmed the importance of self-regulation. It would have been valuable to emphasize that the resultant academic independence was at its best when serving the public interest. In the face of the accountability challenge, failing to provide powerful advocacy for self-regulation that went beyond “self” resulted in higher education surrendering one of its most precious assets: the public trust vested in its institutions for leadership in academic quality.

Moreover, the long-held distinction between self-regulation and government regulation was beginning to blur. As early as the beginning of reauthorization of HEA in 2003, some institutions and accreditors appeared more and more comfortable defining “self-regulation” as “government regulation that we like.” They were ignoring the vital importance of locating responsibility for academic quality and direction with the leadership of colleges and universities. Institutions and accreditors demonstrated, over and over again, that they were willing to allow government to step in, trumping institutional leadership when it came to prescribing academic quality. This was clear during the 2007 negotiated rule making on accreditation, where some members of the panel from higher education and accreditation supported government efforts to add regulatory language that strengthened the federal role in setting expectations of student achievement, a responsibility that historically rests with institutions.

The impact, however unintentional, was a transition from government holding higher education and accreditation responsible for producing quality institutions and programs to government prescribing what counts as quality and thus regulating higher education. It was one thing, for example, when the government required that institutions report graduation rates of students and quite another when the government actually stipulated acceptable graduation rates for all colleges and universities.

In short, the demand for greater accountability pressed higher education and accreditation to assure the public that self-regulation was rigorous, transparent and accountable. However, in the years following the 2008 reauthorization, it was clear that the public was not assured.

How Did Accreditation Shrink?

The Federal Government Took Action

To replace accreditation standards, the federal government went on to develop four tools to judge academic quality: (1) a data collection tool to expand information on institutional characteristics and results, (2) a set of government benchmarks for academic quality, (3) a U.S. Qualifications Framework and (4) a national ranking system for all colleges and universities.

Using the authority that the government gained in the 2008 reauthorization, the Department of Education created its data collection tool by requiring that institutions submit significantly enriched data on institutional performance -- not only graduation rates, but also, e.g., transfer, job placement and entry to graduate schools. By 2009-2010, the government was using these expanded data to develop cut-off points or “bright line” indicators to make judgments about institutional quality. There were now government-required levels for many areas of institutional performance. These data were also to be used to populate the government qualifications framework and rankings.

By 2011, a U.S. Qualifications Framework was complete. It created a lockstep approach to student achievement, with expectations of specific competencies aligned with each degree level (associate, baccalaureate, master's, doctorate) offered by an institution. The federal qualifications framework took its place beside those already established by a number of other countries and regions, e.g., China, India, the European Union and some Canadian provinces. The framework, more than any of the other tools, essentially standardized national expectations about higher education quality. Finally, by 2012, the federal government had also completed a ranking system for all institutions, modeled on U.S. News & World Report.

The federal government also assured that the public could easily access the framework and rankings, as well as customize the data for its own use. Building on the models of the “College Navigator” mounted by the Department in 2007 and the “Mapping America’s Education Progress” for elementary and secondary education in 2008, the Department’s Website, as of 2012, included a search engine for all higher education institutions that anyone could use to find and rank institutions by academic quality indicators as well as by, e.g., type of institution, size, budget, and financial aid to students. Prospective students, the press and the public could readily access these sites and make their own independent judgments of what counted as a quality school.

By 2014, these tools established government judgment about quality as the primary driver of federal funds, now totaling some $150 billion annually, to thousands of colleges and universities through student grants and loans as well as research and program funds. From now on, these federal dollars were conditioned on the respective performance levels of institutions as determined by the government, including adherence to the U.S. Qualifications Framework and positioning in the federal rankings scheme. Only institutions with, e.g., government-defined “acceptable” graduation and transfer rates were eligible. These institutions also had to document government-acceptable rates of entry to graduate school and job placement.

State Government Moves as Well

State government, responding to the tools developed following the 2008 reauthorization and reacting to the 2014 amendments, also replaced accreditation, either using the federally generated quality judgments or rankings or establishing state indicators, qualifications frameworks and rankings. States discarded accreditation when deciding whether an institution could operate in their respective domains. For many states, the movement to indicators, frameworks and rankings was quite straightforward, building on their many years of state performance budgeting and performance reporting for public universities that went back to the 1980s. The scope of state authority was greatly enlarged, however, with thousands of nonprofit and for-profit private institutions also included in these requirements as a condition of being licensed to operate.

State licensure of individuals in the professions also ceased to rest on whether the programs from which students graduated were accredited. Rather, states now based licensure in law, medicine and other licensed professions not on a program’s accredited status, but on program performance as measured by federal or state indicators, including whether graduates met the expectations of competencies captured by qualifications frameworks or how programs were positioned in national or state rankings. States also conditioned their mutual reciprocity with regard to licensure, agreeing to acknowledge each other’s licensure of professionals only if the states from which the professionals came used either the federal or state quality standards.

Employers and Foundations Change Course

Following the lead of the public sector, many private employers and foundations abandoned the requirement that institutions sustain accredited status, demanding instead that institutions meet specific federal or state performance benchmarks as a condition of providing, e.g., tuition assistance or awarding grants. This shift affected millions of employees in computer and electronic fields, the automotive industry and many service industries. The private foundations that had long favored higher education with research and program funding based on accredited status now required instead evidence of how well institutions fared with regard to federal or state indicators and rankings as a condition of receiving foundation funds.

Accrediting Organizations as Enablers of Government Control

Some of the 81 recognized accrediting organizations that were active in 2008 closed their doors within several years of the establishment of federal quality standards and, ultimately, the federal government’s abandonment of the gatekeeping role. Others continued to operate, but fundamentally retooled.

Accrediting bodies transformed themselves from arbiters of higher education quality to providers of audit and consulting services to colleges and universities. They became enablers of government control of quality. Institutional accreditors assisted colleges and universities in the data collection required by the federal government. They provided advice to institutions about how to analyze and use these data to showcase college and university efforts. They provided consulting assistance to establish profiles of excellence based on government indicators.

In a similar vein, programmatic accreditors, instead of supplying the standards that drove, e.g., law, medicine, business and many other professions, now worked with programs to meet state or federal standards that were aligned with state licensure requirements. They provided technical assistance to programs needing data collection and analysis and, as with institutional accreditors, worked with programs to establish profiles of excellence that would be affirmed by government review.

Shrinking Accreditation Did Not Improve Higher Education Quality

While government was successful in establishing this new system of quality judgments, this did not, contrary to public expectations, translate into additional success for higher education. Standardization of quality expectations and emphasis on transparency under government control did not, as many had anticipated, launch a new era of blossoming higher education quality. To the contrary, the new government-directed quality standards, with increasing bureaucratic emphasis on a single set of performance levels, coincided with an era of declining success in higher education. The government-based accountability brought U.S. practice closer to the ministerial approaches of many other countries. At the same time, however, U.S. institutions did not fare as well as in the past when compared with their international counterparts.

As voluntary accreditation withered and government control flourished, U.S. higher education increasingly lagged behind many other nations in academic standing, participation, success with degree attainment, innovation in teaching and success in research. Gone were the days of international leadership and successful competitiveness of a U.S. higher education enterprise that once was routinely described as “the best in the world.” Gone was the conspicuous and often overwhelming presence of U.S. institutions in the major international rankings of higher education, such as the Shanghai rankings and the Times Higher Education rankings.


Did the shrinking of accreditation serve the public interest? No. It was clear, by 2014, that public accountability, however valuable and desirable as an end in itself, was not a driver of academic quality. It was clear that replacing self-regulation through accreditation with government regulation did not enhance academic quality.

Above all, it was clear that, absent additional energetic action about accountability on the part of higher education and accreditation, the shrinking of accreditation could actually occur.

Judith S. Eaton is president of the Council for Higher Education Accreditation.
