“Accountability” is a term that has been with us, late and soon. Its six syllables trip by as the background white noise in the liturgy of higher education -- in steady strophe and antistrophe, repeated so often that one assumes it must be a magic incantation.
You know what happens with liturgies: after so many repetitions, there is no recompense. We don’t really know what we are saying. In this case, the six-syllable perfect scan, “accountability,” simply floats by as what we assume to be a self-evident reality. Even definitions wind up in circles, e.g., “In education, accountability usually means holding colleges accountable for the learning outcomes produced.” One hopes Burck Smith, whose paper containing this sentence was delivered at an American Enterprise Institute conference last November, had his tongue firmly in cheek when he wrote the core phrase.
The language is hardly brand-new, nor confined to the commentariat. The 2005 report of the National Commission on Accountability in Higher Education puts “accountability” in a pinball machine where “goals” become “objectives” become “priorities” become “goals” again. One wins points along the way, but has no idea of what they represent.
Another trope in this presentation involves uttering the two words “accountability” and “transparency” together, as if each defined the other by proximity. In its 2008 monograph, "A Culture of Evidence," the Educational Testing Service works the phrase “transparency and accountability” so often that it unquestionably takes on a liturgical character. The Texas Higher Education Coordinating Board starts right off in its 2007 report "Accountability in Higher Education" with a typical variation of the genre: “Making accountability more transparent ... will require...” and no further discussion of the first clause. If I am going to make something called “accountability” “more transparent,” isn’t it incumbent upon me to tell the reader what that something is and how, at the present moment, it is cloudy, opaque, etc.? THECB never does. Its use of “accountability” is just another piece of white noise. It’s a word you utter because it lends gravitas.
So what kind of creature is this species called “accountability”? Readers who recall Joseph Burke’s introductory chapter to his Achieving Accountability in Higher Education (Wiley, 2004) will agree that I am hardly the first nearsighted crazy person to ask the question. This essay will come at the word in a different way and from a different tradition than Burke’s political theory.
I am inviting readers to join in thinking about accountability together, with the guidance of some questions that are both metaphysical and practical. Our adventure through these questions is designed as a prodding to all who use the term to tell us what they are talking about before they otherwise simply echo the white noise.
Basic Questions About Relationships
We are now surrounded by a veritable industry producing enormous quantities of data and information on various performances of institutions of higher education in the name of something called “accountability,” and it is fair to ask where this production sits in terms of the potential meaning of its banner. It is also necessary to note that, in the rhetoric of higher education, “institution” is usually the subject of sentences including “accountability,” as if a single entity were responsible for a raft of consequences. But, as noted below, when our students attend three or four schools, the subject of these sentences is considerably weakened in terms of what happens to those students. The relationship is attenuated.
For now we start with a postulate: however we define accountability, we are describing a relationship in which obligations and responsibilities dwell. Our questions sound simple: What kind of relationship? What kind of obligations? What kind of responsibilities? What actions within the relationship justify its type? The exploration is conducted not to convince you that one configuration is “better” than another, but rather to make sure that we all think better about the dynamics of each one.
What types of relationships might be at issue?
- Contractual, both classic and unilateral
- Regulatory
- Warranty
- Ethical
- Market
- Environmental
That is not a complete list, to be sure, and I trust readers will add to it. But it is one where we can ask, at each station, whether there are clear and unambiguous parties on both sides of the relationship. And, for each of these frameworks, in their applications in higher education, it is also fair to ask:
- Who or what is one accountable to?
- For what?
- Why that particular “what” -- and not another “what”?
- To what extent is the relationship reciprocal?
- Are there rewards and/or sanctions inherent in the relationship?
- How continuous is the relationship?
Accountability as Implicit Contract
We mix our blood or spit on the same ground to seal our agreements. There is an offer, an acceptance, and a named party standing behind each side. Every law student learns the ritual in the first week of the first term of classes. The arrangement includes the provision of goods, services, or spirit; the exchange is specified; the agreement is binding, and remedies are specified if either party breaks the terms of the exchange. There are, of course, a lot of legal weeds here, and more variations than galaxies, but that’s the general idea.
Where do we see contracts in higher education between an institution and parties outside an institution? As a general principle, wherever the money flows. Indeed, one of the key factors that propels consideration of accountability as either a contract or regulatory construct lies in cost. There is a dollar sign on every college door in the U.S., strongly implying that those who pass through the doors are purchasing something that the offeror is bound to deliver.
From a contractual standpoint, when the parents of students or students themselves pay tuition and fees, they are accepting an offer from the institution to provide services -- both major and minor, explicit and implicit. As practice stands, they are not contracting for results but rather for services that may produce consequences, some of which can be reasonably anticipated and some of which cannot. And when the institution takes public funds (federal or state), it has entered into a contractual relationship in which it has agreed to provide generalized or specific services (and, sometimes, products). These examples apply equally to public, not-for-profit, and for-profit institutions.
If accountability in higher education is a contractual relationship, we’ve got problems. The “goods” or “services” to be rendered by the offeror are usually indeterminate; there is no formal statement of obligations. The institution does not pledge to students that its efforts will produce specified learning, persistence and graduation, productive labor market entry, or a good life. We don’t put low persistence or graduation rates in a folder subject to educational malpractice suits. Nor does the institution pledge to public funding authorities that it will produce X number of graduates, Y dollars of economic benefits, or Z volume of specified community services, or be subject to litigation if it fails to reach these benchmarks.
The Business-Higher Education Forum’s 2004 Public Accountability for Student Learning in Higher Education: Issues and Options notes that a number of non-student-referenced “measures of institutional performance ... shape [italics mine] public accountability in higher education,” including “resource use, research and service, and contributions to economic development.” Even before one gets to student learning, one has to ask where something called “public accountability” lies in these activities and outputs. Are private institutions under implicit contract to the public for their efficiencies in “resource use”? Where does that obligation come from? What is “economic development,” and was it agreed to in a state charter for an institution? If students and staff simply spend money in the districts surrounding an institution, does that constitute purposeful economic development by the institution?
Look more closely at how the institution guides us, and one usually finds a mission statement with very generalized goals and assurances of care for the student, the surrounding community, the search for knowledge, the provision of opportunity, the value of a “diverse” human environment (even if “diverse” is never translated), and maybe more. These pledged general “services” are chosen by the provider, who thus executes what the law would call a “unilateral contract.” The unilateral contract mode also allows established and ad hoc organizations to delineate what individual institutions and state systems must/should do (the “what” of a relationship) to validate their responsibilities.
The unilateral contract starts out as an intriguing vehicle for “accountability,” but swiftly heads into a dead end because the “with whom or what” that stands on the other side of the contract is more a matter of conjecture and interpretation than fact. There is no obvious party to accept the offer, no obvious reward for provision, and no obvious sanction if the provision falls short of promise. If the unilateral declaration claims consensus status, one would want to know the parties to the consensus. Were faculty partners (one doesn’t hear much about the instructional workforce in all the white noise of accountability)? Students? Students, last we looked, haven’t stopped buying the product no matter what the institution issuing a unilateral declaration of mission and care actually does, so, from a student perspective, the unilateral contract is moot. From any other perspective, it is a fog bank.
Accountability as Regulatory Relationship
The concentric circle questions on contractual relationships lead to an intermediary step on the way to formal regulation: performance funding, better labeled (as Burke and Minassians did in their 2003 Performance Reporting: “Real” Accountability or Accountability “Lite”) as performance budgeting. This is a case that affects only public institutions, with state authorities acting as de facto contractual offerors, promising to reward accepting parties (the schools) for meeting specified thresholds or increases of production, inclusion, public promulgation of internal performance metrics, etc. Historically, performance funding is not a mandate, and there are no sanctions for nonperformance. Institutions that fall short are held harmless. Those that exceed may or may not receive extra funds, depending on a state’s fiscal condition. One can budget, after all, but not necessarily fund.
The true regulatory relationship tightens the actions and obligations one observes dimly in performance funding. We begin to see divergent paths of financial representation and non-financial information, both required, in different ways, by state authorities. Both public and private institutions are subject to requirements for basic financial disclosure as a byproduct of their status as state-chartered institutions doing public business. After that point, annual financial reports are required of public institutions by state authorities, e.g., Texas asks for operating expenses per FTE, with different calculations for each level of degree program, and administrative costs as a proportion of operating expenses (seen as a measure of institutional efficiency). Private institutions may report similar information to their boards of trustees, but are under no obligation to reveal internal financial information to anyone else.
What happens if a public institution, under legislative mandate, presents incomplete or dubious financial information, or finance data that clearly reveal inefficiencies? Are there sanctions? Does the state ask for the CFO’s head? To be “accountable” in this regulatory framework is to provide information, not to suffer for it. One can fulfill one’s obligations, but not necessarily one’s responsibilities. Is that what we mean by “accountability”?
As for non-financial information, the closest we come to state regulations with consequences are recent legislative proposals to fund public institutions of higher education on the basis of course or degree completions and not enrollments. But this type of regulation holds the institution responsible for the behavior of students, thus clouding the locus of both obligation and responsibility. Is this what we mean? If so, then legislators and other policy makers ought to be more explicit about it.
It should be noted that recent performance funding “rewards” for increased degree completion are, to put it gently, rather creative. The Louisiana Board of Regents, for example, will provide extra funding for institutions that increase not the percentage, but the numbers, of graduates by … allowing them to raise tuition. The irony is delicious: you will pay more to attend an institution that graduates not a greater percentage of, but more, students. In Indiana, where all public institutions are scheduled for budget cuts in 2010, those that produce more degrees will not be cut as much as others. In other words, in both cases, no public money really changes hands. Clever!
Accountability as Warranty
The warranty interpretation of accountability is a variation on a unilateral contract. The manufacturer attests that the product you buy is free of defects, and, within a specified period (1 year, 3 years), unless you abuse it in ways that transcend the capacity of its structure, components, and ordinary siting, the manufacturer will repair or replace the product. Translated into the principal function of institutions of higher education, the distribution of knowledge and skills, the warranty implies that students to whom degrees are awarded are analogous to products (human beings filled with knowledge and skills), behind which the institution of higher education stands. The recipient of the warranty is generalized -- the “public,” or “employers,” or “policy makers” -- not a very precise locus for the accountability relationship.
The warranty gloss sounds intriguing, and one is drawn to see where it leads. Does the warranty form mean that all those to whom an institution grants degrees have demonstrated X, M, and Q, and that these competencies will function at qualifying or higher levels for at least Z years? If so, then at least there are substantive reference points in such a warranty statement.
A warranty is a public act; the institution is the responsible party, hence also responsible for bearing witness -- publicly -- to what the credential represents. We’re back on the border of contracts: the institution offers programs of study and criteria for awarding degrees; the student implicitly accepts by registering for courses. The student then fulfills the terms of the offer, demonstrating X, M, and Q, whereupon the institution awards the degree. One arm of the contract is fulfilled, with both sides meeting their obligations.
With that fulfillment in hand, the institution, as a publicly chartered entity whose primary obligation is the distribution of knowledge and skills, can turn to the chartering authority, the state (and its implicit ground, “the public”), and testify that it has fulfilled its primary function, justified its charter. In this case, the testimony becomes a de facto warranty, with the second arm of the implicit contract fulfilled. Sounds like all the conditions of “accountability” are met.
But there are problems here, too. The warranty is wholly a representation of the provider. It does not require evidence from the users of alumni work, civic involvement, or cultural life. The terms of maintenance and advancement of knowledge and skills beyond students’ periods of study are wholly subjunctive. Higher education leaders and followers are justly wary of staking their work on the performance of alumni. “We are not manufacturing a product with fixed attributes,” they would cry -- and they are so right about that. “Too many intervening variables that are beyond our obligations!” “We aren’t responsible for the labor market!” The threat to any warranty is endogenous.
Relationship. Obligation. Responsibility. Is this vocabulary sufficient for understanding accountability in the context of higher education? Maybe, but we need a different lens to see how.
Accountability According to Socrates
The non-financial information that institutions of higher education are providing in increasingly significant volumes raises a Socratic formulation for “accountability.” In the Socratic moral universe, one is simultaneously witness and judge. The Greek syneidesis (“conscience” and “consciousness”) means to know something with, so to know oneself with oneself becomes an obligation of institutions and systems -- to themselves. “Obligation,” in its Socratic formulation, is an ethical touchstone, a universal governing principle of human relations. Outsiders (“the public,” “employers,” “policy makers”) may observe the information we produce as witnesses to our own behavior, processes, and outcomes, but if the Socratic mantra is adhered to, they are bystanders. Obligation becomes self-reflexive.
There are no external authorities here. We offer, we accept, we provide evidence, we judge. There is nothing wrong with this: it is indispensable, reflective self-knowledge. And provided we judge without excuses, we hold to this Socratic moral framework. As Peter Ewell has noted, the information produced under this rubric, particularly in the matter of student learning, is “part of our accountability to ourselves.”
But is this “accountability” as the rhetoric of higher education uses the white noise -- or something else?
I contend that, in response to shrill calls for “accountability,” U.S. higher education has placed all its eggs in the Socratic basket, but in a way that leaves the basket half-empty. It functions as the witness, providing enormous amounts of information, but does not judge that information. It is here that the dominant definitions of accountability in U.S. higher education can be found:
“Accountability is the public communication about different dimensions of performance, geared to general audiences, and framed in the context of goals and standards.” (Business-Higher Education Forum, 2004)
“Accountability is the public presentation and communication of evidence about performance in relation to goals.” (Texas Higher Education Coordinating Board, 2007)
“VSA [Voluntary System of Accountability] is a program to provide greater accountability by public institutions through accessible, transparent, and comparable information. . .” (AASCU and NASULGC, 2007)
There are a couple of wrinkles in these direct and implied definitions (“standards” and “comparable”), but we’ll set them aside. The Socratic position yields accountability by metrics. And we certainly get them. For example, the University of California System’s 2009 Accountability Report provided no fewer than 131 indicators that turn over most of the stones of system operation. Some 41 percent of these indicators are basically “census” data, e.g., enrollments, full-time “ladder rank” faculty, R&D expenditures. These are generally available in other places and fairly inconsequential in terms of the obligations of institutions, but it is very nice to have them all in one place. It’s certainly public, it’s certainly transparent, and it is certainly overwhelming. Whoever wants to select an item of interest has a wide array of choice.
By one interpretation, this report may be an unconscious satire on the entire enterprise of the witness producing numbers, for the only relationship a document such as this implies is to brush off the nags. “You wanted data about everything we do? Here it is! Now go away!”
We frequently observe an apologetic plea for data production in this context, e.g., “Measurement isn’t sufficient for accountability, but it is necessary” (for instance, in Chad Aldeman and Kevin Carey's "Ready to Assemble: Grading State Higher Education Accountability Systems," 2009). Well, if the indicator menu “isn’t sufficient,” what else do these advocates suggest would complete the offerings?
Every single “best practice” cited by Aldeman and Carey is subject to measurement: labor market histories of graduates, ratios of resource commitment to various student outcomes, proportion of students in learning communities or taking capstone courses, publicly-posted NSSE results, undergraduate research participation, space utilization rates, licensing income, faculty patents, volume of non-institutional visitors to art exhibits, etc. etc. There’s nothing wrong with any of these, but they all wind up as measurements, each at a different concentric circle of putatively engaged acceptees of a unilateral contract to provide evidence. By the time one plows through Aldeman and Carey’s banquet, one is measuring everything that moves -- and even some things that don’t.
Market-Based Accountability
From a different corner of the analytic universe has come the notion that the reasons one presents all that information, data, and indicators of institutional performance are (a) to position one’s institution in a market of student choice, and (b), as a necessary condition of that positioning, to compare one’s performance indicators with those of like institutions. There are two markets here that provide the judgment of institutional success: one where bodies are counted as applicants and transfers, and one of media exposure and attention.
Burck Smith, cited earlier, sees this “market accountability” in more complex terms. His “market” is an invisible field of information on which players presumably compete. Their only obligations are to the invisible force. They are, in effect, selling services, and the market judges by a quality-to-price ratio. By this interpretation, the market is a mediating ground on which providers and consumers meet, with the latter judging the former with markers of commerce (everything from tuition to research grants to general or program specific support). Under these formulas, there will be “market winners” and “market losers.”
Is accountability a game of winners and losers? Are there judges who issue decisions about best-in-show? If prospective students and their parents are the judges, then best-in-show gets a volume of applications that would swamp the campus for the next three generations. This bizarre market assumes unlimited capacity -- no numerus clausus -- at every institution of higher education.
Sorry, but basic capacity facts mean that consumers cannot vote with their feet in higher education. We’re not selling toothpaste or shampoo, as Kelly and Aldeman’s 2010 "False Fronts?: Behind Higher Education’s Voluntary Accountability Systems" assumes. And if state legislatures and/or state higher education authorities are the judges, do they really sanction Old Siwash as the pit of performance and close down a campus that cost them $100 million to start with and on which at least a small part of a local economy depends?
More to the point of questioning the market interpretation, we can ask whether the provision-of-information designed to compare institutions -- in a particular region, of a particular category, etc. -- is “accountability.” If an institution is buying tests such as the CLA, and claims that its 100 paid test-taking volunteers improved by an effect size 0.14 greater than that of matched students at the peer school in another state, who are the receiving parties of the advertisement? Which of these parties even begins to understand the advertisement? And by what authority are institutions obligated to provide this elusive understanding?
If we gloss the Socratic notion of provision-of-information, the purpose is self-improvement, not comparison. The market approach to accountability implicitly seeks to beat Socrates by holding that I cannot serve as both witness and judge of my own actions unless the behavior of others is also on the table. The self shrinks: others define the reference points. “Accountability” is about comparison and competition, and an institution’s obligations are only to collect and make public those metrics that allow comparison and competition. As for who judges the competition, we have a range of amorphous publics and imagined authorities.
In fact, “accountability” fades into an indeterminate background landscape under this “market” formulation precisely because there are no explicit and credible second parties. It fades even more under what the Business-Higher Education Forum wisely termed “deinstitutionalization.”
That is, given both accelerating student mobility (multi-institutional attendance, staggered attendance patterns, geo-demography that turns enrollment management on its head) and e-Learning, the “institution” as the subject of accountability sentences has lost considerable status as the primary claimant for results involving student attainment and learning. Hmmmm!
Accountability as Environment
When Peter Ewell (in Assessment, Accountability, and Improvement, 2009) observes that “the central leitmotifs of this new accountability environment are transparency and learning outcomes,” he stumbles across (though doesn’t play it out) yet another intriguing notion of accountability. It is not a form of action within a specific type of relationship; it is an “environment.” What kind of environment? One in which those with visibility and access to mass media have pushed higher education to provide understandable (“transparent”) data and information on what they do, and indicators (which may not be so clear, but which come with short-hand white noise phrases such as “critical thinking” and “teamwork”) of what happens to students’ knowledge and skills as a result of spending some time (though how much is rarely addressed) and effort (something that is never addressed) in higher education (no matter how many institutions the student attends). These are all “messages,” and their aggregation constitutes public propaganda.
There are no formal agreements here: this is not a contract, it is not a warranty, it is not a regulatory relationship. It isn’t even an issue of becoming a Socratic self-witness and judge. It is, instead, a case in which one set of parties, concentrated in places of power, asks another set of parties, diffuse and diverse, “to disclose more and more about academic results,” with the second set of parties responding in their own terms and formulations. The environment itself determines behavior.
Ewell is right about the rules of the information game in this environment: when the provider is the institution, it will shape information “to look as good as possible, regardless of the underlying performance.” The most prominent media messenger, U.S. News & World Report’s rankings, and the most media/policy-maker-connected of the glossy Center reports, "Measuring Up" (which grades states with formulas resembling Parker Brothers board games) simply parse information in different ways, and spill it into the “accountability environment.” The messengers become self-appointed arbiters of performance, establishing themselves as the second party to which institutions and aggregates of institutions become “accountable.” Can we honestly say that the implicit obligation of feeding these arbiters constitutes “accountability”?
Decidedly not, even though higher education willingly engages in such feeding. But if the issue is student learning, there is nothing wrong with -- and a good deal to be said for -- posting public examples of comprehensive examinations, summative projects, capstone course papers, etc. within the information environment, and doing so irrespective of anyone requesting such evidence of the distribution of knowledge and skills. Yes, institutions will pick what makes them look good, but if the public products resemble AAC&U’s “Our Students’ Best Work” project, they set off peer pressure for self-improvement and very concrete disclosure. The other prominent media messengers simply don’t engage in constructive communication of this type.
Conclusions and Reflections
At the end of this exploratory flight, I am not sure where to land, other than to acknowledge an obvious distinction between the practice and the nature of “accountability”: the former is accessible; the latter is still a challenge. Empirically, U.S. higher education has chosen a quasi-Socratic framework, providing an ever-expanding river of data to indeterminate (or not very persuasive) audiences, but with no explicit quality assurance commitment. Surrounding this behavior is an environment of requests and counter-requests, claims and counter-claims, with no constant locus of authority.
Ironically, a “market” in the loudest voices, the flashiest media productions, and the weightiest panels of glitterati has emerged to declare judgment on institutional performance in an age when student behavior has diluted the very notion of an “institution” of higher education. The best we can say is that this environment casts nothing but fog over the specific relationships, responsibilities, and obligations that should be inherent in something we call “accountability.”
Perhaps it is about time that we defined these components and their interactions with persuasive clarity. I hope that this essay will invite readers to do so.