Assessment

Applied Assessment Seminar

Date: Thu, 11/18/2010 to Fri, 11/19/2010
Location: Tampa, Florida, United States

Accreditation's Accidental Transformation

In the past year, an accrediting organization grants accreditation to a university despite concerns regarding the institution’s assignment of credit hours for certain courses. What happens next? The U.S. Department of Education’s inspector general recommends a review that could lead to suspension or termination of the accreditor’s recognition, and the U.S. House of Representatives holds a hearing on how accrediting organizations review institutions’ credit hour policies. At the same time, a legal definition of a credit hour is included in USDE’s recently proposed regulations.

An accrediting organization denies a request by a nonprofit college to continue its accreditation, as part of a planned purchase by a for-profit corporation, only weeks after a U.S. Senate hearing on for-profit education drew widespread media coverage. Subsequently, the 125-year-old college announces that it will close. What happens next? The accreditation decision is questioned not only by the college and the for-profit corporation, but also by lawmakers and by the media.

Why the national attention? Why the second-guessing of the accreditation decisions? It is part of the accidental transformation of accreditation.

Academic quality assurance and collegiality -- the defining features of traditional accreditation -- are, at least for now, taking a backseat to consumer protection and compliance with law and regulation. Government and the public expect accreditation to essentially provide a guarantee that students are getting what they pay for in terms of the education they seek.

Blame the enormous amount of taxpayer money involved (some $150 billion every year at the federal level alone), which puts more and more pressure on accreditors to give more and more attention to assuring that taxpayers’ money is well-spent. “Well-spent” is not about abstract notions of quality.

Blame the powerful demand that, above all, colleges and universities provide credentials that lead directly to employment or advancement in employment. Driven by public concerns about the difficult job market and the persistent rise in the price of tuition, accrediting organizations are now expected to assure that the colleges, universities and programs they accredit will produce these pragmatic results.

The worth of higher education is determined less and less through the professional judgments made by the academic community. The deference at one time accorded accrediting organizations to decide the worth of colleges and universities is diminished and perhaps disappearing.

Accreditation decisions about individual institutions are now scrutinized by additional actors -- whether the U.S. Department of Education, Congress or the press -- who make their own judgments here. Simply put, this is “co-accreditation.” For these additional actors, “quality” is about compliance with federal law and regulation and about the practical gains of students -- judgments that government and the public can readily make.

Why does this matter?

  • Because of the transformation of what counts as quality. The worth of higher education, once judged by the quality of faculty, curriculum, research and academic standards, is more and more judged in solely pragmatic terms – earning a credential or getting a job or promotion. What happens to the essential role of colleges and universities in assuring intellectual development and vitality in our society?
  • Because of the transformation of who decides quality. For more than 100 years, the accreditation process has been a key factor in creating an outstanding national higher education enterprise. Will we still enjoy outstanding colleges and universities as government, the press and the public become more prominent deciders here?
  • Because of the transformation of the role of money in judging quality. Over and over again, government and the public point to the ever-growing taxpayer investment in higher education and demand more and more accountability from accreditation. While money is a vital factor in all aspects of society, do we want it to be the centerpiece of quality judgments?

Do we know the consequences of this accidental transformation? Are we prepared to accept them? These changes may be unintended, but they are dramatic and far-reaching. Is this how we want to proceed?

2010 ABET Annual Conference

Date: Thu, 10/28/2010 to Fri, 10/29/2010
Location: Baltimore, Maryland, United States

The White Noise of Accountability

“Accountability” -- a term that has been with us, late and soon. Its six syllables trip by as the background white noise in the liturgy of higher education -- in steady strophe and antistrophe, repeated so often that one assumes it must be a magic incantation.

You know what happens with liturgies: after so many repetitions, there is no recompense -- we don’t really know what we are saying. In this case, the six-syllable perfect scan, “accountability,” simply floats by as what we assume to be a self-evident reality. Even definitions wind up in circles, e.g., “In education, accountability usually means holding colleges accountable for the learning outcomes produced.” One hopes Burck Smith, whose paper containing this sentence was delivered at an American Enterprise Institute conference last November, had his tongue firmly in cheek with the core phrase.

The language is hardly brand-new, nor confined to the commentariat. The 2005 report of the National Commission on Accountability in Higher Education puts “accountability” in a pinball machine where “goals” become “objectives” become “priorities” become “goals” again. One wins points along the way, but has no idea of what they represent.

Another trope in this genre involves uttering the two words “accountability” and “transparency” together, as if one defines the other by proximity. In its 2008 monograph, "A Culture of Evidence," the Educational Testing Service works the phrase “transparency and accountability” so often that it unquestionably takes on a liturgical character. The Texas Higher Education Coordinating Board starts right off in its 2007 report "Accountability in Higher Education" with a typical variation of the genre: “Making accountability more transparent ... will require...” and offers no further discussion of the first clause. If I am going to make something called “accountability” “more transparent,” isn’t it incumbent upon me to tell the reader what that something is and how, at the present moment, it is cloudy, opaque, etc.? THECB never does. Its use of “accountability” is just another piece of white noise. It’s a word you utter because it lends gravitas.

So what kind of creature is this species called “accountability”? Readers who recall Joseph Burke’s introductory chapter to his Achieving Accountability in Higher Education (Wiley, 2004) will agree that I am hardly the first nearsighted crazy person to ask the question. This essay will come at the word in a different way and from a different tradition than Burke’s political theory.

I am inviting readers to join in thinking about accountability together, with the guidance of some questions that are both metaphysical and practical. Our adventure through these questions is designed as a prodding to all who use the term to tell us what they are talking about before they otherwise simply echo the white noise.

Basic Questions About Relationships

We are now surrounded by a veritable industry producing enormous quantities of data and information on various performances of institutions of higher education in the name of something called “accountability,” and it is fair to ask where this production sits in terms of the potential meaning of its banner. It is also necessary to note that, in the rhetoric of higher education, “institution” is usually the subject of sentences including “accountability,” as if a single entity were responsible for a raft of consequences. But, as noted below, when our students attend three or four schools, the subject of these sentences is considerably weakened in terms of what happens to those students. The relationship is attenuated.

For now we start with a postulate: however we define accountability, we are describing a relationship in which obligations and responsibilities dwell. Our questions sound simple: What kind of relationship? What kind of obligations? What kind of responsibilities? What actions within the relationship justify its type? The exploration is conducted not to convince you that one configuration is “better” than another, but rather to make sure that we all think better about the dynamics of each one.

What types of relationships might be at issue?

  • Contractual, both classic and unilateral
  • Regulatory
  • Warranty
  • Ethical
  • Market
  • Environmental

That is not a complete list, to be sure, and I trust readers will add to it. But it is one where we can ask, at each station, whether there are clear and unambiguous parties on both sides of the relationship. And, for each of these frameworks, in their applications in higher education, it is also fair to ask:

  • Who or what is one accountable to?
  • For what?
  • Why that particular “what” -- and not another “what”?
  • To what extent is the relationship reciprocal?
  • Are there rewards and/or sanctions inherent in the relationship?
  • How continuous is the relationship?

Accountability as Implicit Contract

We mix our blood or spit on the same ground to seal our agreements. There is an offer, an acceptance, and a named party standing behind each side. Every law student learns the ritual in the first week of the first term of classes. The arrangement includes the provision of goods, services, or spirit; the exchange is specified; the agreement is binding, and remedies are specified if either party breaks the terms of the exchange. There are, of course, a lot of legal weeds here, and more variations than galaxies, but that’s the general idea.

Where do we see contracts in higher education between an institution and parties outside an institution? As a general principle, wherever the money flows. Indeed, one of the key factors that propels consideration of accountability as either a contract or regulatory construct lies in cost. There is a dollar sign on every college door in the U.S., strongly implying that those who pass through the doors are purchasing something that the offeror is bound to deliver.

From a contractual standpoint, when the parents of students or students themselves pay tuition and fees, they are accepting an offer from the institution to provide services -- both major and minor, explicit and implicit. As practice stands, they are not contracting for results, but rather for services that may produce consequences, some of which can be reasonably anticipated and some of which cannot. And when the institution takes public funds (federal or state), it has entered into a contractual relationship in which it has agreed to provide generalized or specific services (and, sometimes, products). These examples apply equally to public, not-for-profit, and for-profit institutions.

If accountability in higher education is a contractual relationship, we’ve got problems. The “goods” or “services” to be rendered by the offeror are usually indeterminate; there is no formal statement of obligations. The institution does not pledge to students that its efforts will produce specified learning, persistence and graduation, productive labor market entry, or a good life. We don’t put low persistence or graduation rates in a folder subject to educational malpractice suits. Nor does the institution pledge to public funding authorities that it will produce X number of graduates, Y dollars of economic benefits, or Z volume of specified community services, or be subject to litigation if it fails to reach these benchmarks.

The Business-Higher Education Forum’s 2004 Public Accountability for Student Learning in Higher Education: Issues and Options notes that a number of non-student-referenced “measures of institutional performance ... shape [italics mine] public accountability in higher education,” including “resource use, research and service, and contributions to economic development.” Even before one gets to student learning, one has to ask where something called “public accountability” lies in these activities and outputs. Are private institutions under implicit contract to the public for their efficiencies in “resource use”? Where does that obligation come from? What is “economic development,” and was it agreed to in a state charter for an institution? If students and staff simply spend money in the districts surrounding an institution, does that constitute purposeful economic development by the institution?

Look more closely at how the institution guides us, and one usually finds a mission statement with very generalized goals and assurances of care for: the student, the surrounding community, the search for knowledge, the provision of opportunity, the value of a “diverse” human environment (even if “diverse” is never translated), and maybe more. These pledged general “services” are chosen by the provider, who thus executes what the law would call a “unilateral contract.” The unilateral contract mode also allows established and ad hoc organizations to delineate what individual institutions and state systems must/should do (the “what” of a relationship) to validate their responsibilities.

The unilateral contract starts out as an intriguing vehicle for “accountability,” but swiftly heads into a dead end because the “with whom or what” that stands on the other side of the contract is more a matter of conjecture and interpretation than fact. There is no obvious party to accept the offer, no obvious reward for provision, and no obvious sanction if the provision falls short of promise. If the unilateral declaration claims consensus status, one would want to know the parties to the consensus. Were faculty partners (one doesn’t hear much about the instructional workforce in all the white noise of accountability)? Students? Students, last we looked, haven’t stopped buying the product no matter what the institution issuing a unilateral declaration of mission and care actually does, so, from a student perspective, the unilateral contract is moot. From any other perspective, it is a fog bank.

Accountability as Regulatory Relationship

The concentric circle questions on contractual relationships lead to an intermediary step on the way to formal regulation: performance funding, better labeled (as Burke and Minassians did in their 2003 Performance Reporting: “Real” Accountability or Accountability “Lite”) as performance budgeting. This is a case that affects only public institutions, with state authorities acting as de facto contractual offerors, promising to reward accepting parties (the schools) for meeting specified thresholds or increases of production, inclusion, public promulgation of internal performance metrics, etc. Historically, performance funding is not a mandate, and there are no sanctions for nonperformance. Institutions that fall short are held harmless. Those that exceed may or may not receive extra funds, depending on the state’s fiscal condition. One can budget, after all, but not necessarily fund.

The true regulatory relationship tightens the actions and obligations one observes dimly in performance funding. We begin to see divergent paths of financial representation and non-financial information, both required, in different ways, by state authorities. Both public and private institutions are subject to requirements for basic financial disclosure as a byproduct of their status as state-chartered institutions doing public business. After that point, annual financial reports are required of public institutions by state authorities, e.g., Texas asks for operating expenses per FTE, with different calculations for each level of degree program, and administrative costs as a proportion of operating expenses (seen as a measure of institutional efficiency). Private institutions may report similar information to their boards of trustees, but are under no obligation to reveal internal financial information to anyone else.
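
To make the Texas-style indicators concrete, here is a minimal sketch of the two ratios named above. The figures and variable names are invented for illustration; they are not drawn from any actual THECB report:

    # Hypothetical figures, for illustration only.
    operating_expenses = 480_000_000   # annual operating expenses, in dollars
    administrative_costs = 62_000_000  # annual administrative costs, in dollars
    fte_students = 24_000              # full-time-equivalent enrollment

    expenses_per_fte = operating_expenses / fte_students     # 20000.0
    admin_share = administrative_costs / operating_expenses  # ~0.129

    print(f"Operating expenses per FTE: ${expenses_per_fte:,.0f}")
    print(f"Administrative share of operating expenses: {admin_share:.1%}")

In the Texas scheme, the per-FTE figure would be computed separately for each degree-program level, and a lower administrative share would be read as greater institutional efficiency.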

What happens if a public institution, under legislative mandate, presents incomplete or dubious financial information, or finance data that clearly reveal inefficiencies? Are there sanctions? Does the state ask for the CFO’s head? To be “accountable” in this regulatory framework is to provide information, not to suffer for it. One can fulfill one’s obligations, but not necessarily one’s responsibilities. Is that what we mean by “accountability”?

As for non-financial information, the closest we come to state regulations with consequences are recent legislative proposals to fund public institutions of higher education on the basis of course or degree completions rather than enrollments. But this type of regulation holds the institution responsible for the behavior of students, thus clouding the locus of both obligation and responsibility. Is this what we mean? If so, then legislators and other policy makers ought to be more explicit about it.

It should be noted that recent performance funding “rewards” for increased degree completion are, to put it gently, rather creative. The Louisiana Board of Regents, for example, will provide extra funding for institutions that increase not the percentage, but the numbers, of graduates by … allowing them to raise tuition. The irony is delicious: you will pay more to attend an institution that graduates not a greater percentage of, but more, students. In Indiana, where all public institutions are scheduled for budget cuts in 2010, those that produce more degrees will not be cut as much as others. In other words, in both cases, no public money really changes hands. Clever!

Accountability as Warranty

The warranty interpretation of accountability is a variation on a unilateral contract. The manufacturer attests that the product you buy is free of defects, and, within a specified period (1 year, 3 years), unless you abuse it in ways that transcend the capacity of its structure, components, and ordinary siting, the manufacturer will repair or replace the product. Translated into the principal function of institutions of higher education, the distribution of knowledge and skills, the warranty implies that students to whom degrees are awarded are analogous to products (human beings filled with knowledge and skills), behind which the institution of higher education stands. The recipient of the warranty is generalized -- the “public,” or “employers,” or “policy makers” -- not a very precise locus for the accountability relationship.

The warranty gloss sounds intriguing, and one is drawn to see where it leads. Does the warranty form mean that all those to whom an institution grants degrees have demonstrated X, M, and Q, and that these competencies will function at qualifying or higher levels for at least Z years? If so, then at least there are substantive reference points in such a warranty statement.

A warranty is a public act; the institution is the responsible party, hence also responsible for bearing witness -- publicly -- to what the credential represents. We’re back on the border of contracts: the institution offers programs of study and criteria for awarding degrees; the student implicitly accepts by registering for courses. The student then fulfills the terms of the offer, demonstrating X, M, and Q, whereupon the institution awards the degree. One arm of the contract is fulfilled, with both sides meeting their obligations.

With that fulfillment in hand, the institution, as a publicly chartered entity whose primary obligation is the distribution of knowledge and skills, can turn to the chartering authority, the state (and its implicit ground, “the public”), and testify that it has fulfilled its primary function, justified its charter. In this case, the testimony becomes a de facto warranty, with the second arm of the implicit contract fulfilled. Sounds like all the conditions of “accountability” are met.

But there are problems here, too. The warranty is wholly a representation of the provider. It does not require evidence from the users of alumni work, civic involvement, or cultural life. The terms of maintenance and advancement of knowledge and skills beyond students’ periods of study are wholly subjunctive. Higher education leaders and followers are justly wary of staking their work on the performance of alumni. “We are not manufacturing a product with fixed attributes,” they would cry -- and they are so right about that. “Too many intervening variables that are beyond our obligations!” “We aren’t responsible for the labor market!” The threat to any warranty is endogenous.

Relationship. Obligation. Responsibility. Is this vocabulary sufficient for understanding accountability in the context of higher education? Maybe, but we need a different lens to see how.

Accountability According to Socrates

The non-financial information that institutions of higher education are providing in increasingly significant volumes raises a Socratic formulation for “accountability.” In the Socratic moral universe, one is simultaneously witness and judge. The Greek syneidesis (“conscience” and “consciousness”) means to know something with, so to know oneself with oneself becomes an obligation of institutions and systems -- to themselves. “Obligation,” in its Socratic formulation, is an ethical touchstone, a universal governing principle of human relations. Outsiders (“the public,” “employers,” “policy makers”) may observe the information we produce as witnesses to our own behavior, processes, and outcomes, but if the Socratic mantra is adhered to, they are bystanders. Obligation becomes self-reflexive.

There are no external authorities here. We offer, we accept, we provide evidence, we judge. There is nothing wrong with this: it is indispensable, reflective self-knowledge. And provided we judge without excuses, we hold to this Socratic moral framework. As Peter Ewell has noted, the information produced under this rubric, particularly in the matter of student learning, is “part of our accountability to ourselves.”

But is this “accountability” as the rhetoric of higher education uses the white noise -- or something else?

I contend that, in response to shrill calls for “accountability,” U.S. higher education has placed all its eggs in the Socratic basket, but in a way that leaves the basket half-empty. It functions as the witness, providing enormous amounts of information, but does not judge that information. It is here that the dominant definitions of accountability in U.S. higher education can be found:

“Accountability is the public communication about different dimensions of performance, geared to general audiences, and framed in the context of goals and standards.” (Business-Higher Education Forum, 2004)
“Accountability is the public presentation and communication of evidence about performance in relation to goals.” (Texas Higher Education Coordinating Board, 2007)
“VSA [Voluntary System of Accountability] is a program to provide greater accountability by public institutions through accessible, transparent, and comparable information. . .” (AASCU and NASULGC, 2007)

There are a couple of wrinkles in these direct and implied definitions (“standards” and “comparable”), but we’ll set them aside. The Socratic position yields accountability by metrics. And we certainly get them. For example, the University of California System’s 2009 Accountability Report provided no fewer than 131 indicators that turn over most of the stones of system operation. Some 41 percent of these indicators are basically “census” data, e.g., enrollments, full-time “ladder rank” faculty, R&D expenditures. These are generally available in other places and fairly inconsequential in terms of the obligations of institutions, but it is very nice to have them all in one place. It’s certainly public, it’s certainly transparent, and it is certainly overwhelming. Whoever wants to select an item of interest has a wide array of choice.

By one interpretation, this report may be an unconscious satire on the entire enterprise of the witness producing numbers, for the only relationship a document such as this implies is to brush off the nags. “You wanted data about everything we do? Here it is! Now go away!”

We frequently observe a preemptive excuse for data production in this context, e.g. “Measurement isn’t sufficient for accountability, but it is necessary” (for instance, in Chad Aldeman and Kevin Carey's "Ready to Assemble: Grading State Higher Education Accountability Systems," 2009). Well, if the indicator menu “isn’t sufficient,” what else do these advocates suggest to complete the offerings?

Every single “best practice” cited by Aldeman and Carey is subject to measurement: labor market histories of graduates, ratios of resource commitment to various student outcomes, proportion of students in learning communities or taking capstone courses, publicly posted NSSE results, undergraduate research participation, space utilization rates, licensing income, faculty patents, volume of non-institutional visitors to art exhibits, etc. etc. There’s nothing wrong with any of these, but they all wind up as measurements, each at a different concentric circle of putatively engaged acceptees of a unilateral contract to provide evidence. By the time one plows through Aldeman and Carey’s banquet, one is measuring everything that moves -- and even some things that don’t.

Market-Based Accountability

From a different corner of the analytic universe has come the notion that the reasons one presents all that information, data, and indicators of institutional performance are (a) to position one’s institution in a market of student choice, and (b), as a necessary condition of that positioning, to compare one’s performance indicators with those of like institutions. There are two markets here that provide the judgment of institutional success: one where bodies are counted as applicants and transfers, and one of media exposure and attention.

Burck Smith, cited earlier, sees this “market accountability” in more complex terms. His “market” is an invisible field of information on which players presumably compete. Their only obligations are to the invisible force. They are, in effect, selling services, and the market judges by a quality-to-price ratio. By this interpretation, the market is a mediating ground on which providers and consumers meet, with the latter judging the former with markers of commerce (everything from tuition to research grants to general or program specific support). Under these formulas, there will be “market winners” and “market losers.”

Is accountability a game of winners and losers? Are there judges who issue decisions about best-in-show? If prospective students and their parents are the judges, then best-in-show gets a volume of applications that would swamp the campus for the next three generations. This bizarre market assumes unlimited capacity -- no numerus clausus -- at every institution of higher education.

Sorry, but basic capacity facts mean that consumers cannot vote with their feet in higher education. We’re not selling toothpaste or shampoo, as Kelly and Aldeman’s 2010 "False Fronts?: Behind Higher Education’s Voluntary Accountability Systems" assumes. And if state legislatures and/or state higher education authorities are the judges, do they really sanction Old Siwash as the pit of performance and close down a campus that cost them $100 million to start with and on which at least a small part of a local economy depends?

More to the point of questioning the market interpretation, we can ask whether the provision of information designed to compare institutions -- in a particular region, of a particular category, etc. -- is “accountability.” If an institution is buying tests such as the CLA, and claims that its 100 paid test-taking volunteers improved at an effect size 0.14 greater than that of matched students at the peer school in another state, who are the receiving parties of the advertisement? Which of these parties even begins to understand the advertisement? And by what authority are institutions obligated to provide this elusive understanding?
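
For readers outside psychometrics, an “effect size” of this kind is a standardized mean difference: the gap between two group means expressed in units of their pooled standard deviation. Here is a minimal sketch, assuming a Cohen’s-d-style statistic; the scores below are invented for illustration, and the CLA’s actual value-added model is more elaborate:

    from math import sqrt
    from statistics import mean, variance

    def cohens_d(group_a, group_b):
        """Standardized mean difference: (mean_a - mean_b) / pooled SD."""
        n_a, n_b = len(group_a), len(group_b)
        pooled_var = ((n_a - 1) * variance(group_a) +
                      (n_b - 1) * variance(group_b)) / (n_a + n_b - 2)
        return (mean(group_a) - mean(group_b)) / sqrt(pooled_var)

    # Hypothetical senior test scores at two matched institutions.
    ours = [1150, 1200, 1250]
    theirs = [1143, 1193, 1243]
    print(round(cohens_d(ours, theirs), 2))  # -> 0.14

A d of 0.14 says the two groups differ by about one-seventh of a standard deviation -- a quantity meaningful to psychometricians and nearly opaque to the generalized receiving parties of such an advertisement, which is rather the point.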

If we gloss the Socratic notion of the provision of information, the purpose is self-improvement, not comparison. The market approach to accountability implicitly seeks to beat Socrates by holding that I cannot serve as both witness and judge of my own actions unless the behavior of others is also on the table. The self shrinks: others define the reference points. “Accountability” is about comparison and competition, and an institution’s obligations are only to collect and make public those metrics that allow comparison and competition. As for who judges the competition, we have a range of amorphous publics and imagined authorities.

In fact, “accountability” fades into an indeterminate background landscape under this “market” formulation precisely because there are no explicit and credible second parties. It fades even more under what the Business-Higher Education Forum wisely termed “deinstitutionalization.”

That is, given both accelerating student mobility (multi-institutional attendance, staggered attendance patterns, geo-demography that turns enrollment management on its head) and e-Learning, the “institution” as the subject of accountability sentences has lost considerable status as the primary claimant for results involving student attainment and learning. Hmmmm!

Accountability as Environment

When Peter Ewell (in Assessment, Accountability, and Improvement, 2009) observes that “the central leitmotifs of this new accountability environment are transparency and learning outcomes,” he stumbles across (though doesn’t play it out) yet another intriguing notion of accountability. It is not a form of action within a specific type of relationship; it is an “environment.” What kind of environment? One in which those with visibility and access to mass media have pushed higher education to provide understandable (“transparent”) data and information on what they do, and indicators (which may not be so clear, but which come with short-hand white noise phrases such as “critical thinking” and “teamwork”) of what happens to students’ knowledge and skills as a result of spending some time (though how much is rarely addressed) and effort (something that is never addressed) in higher education (no matter how many institutions the student attends). These are all “messages,” and their aggregation constitutes public propaganda.

There are no formal agreements here: this is not a contract, it is not a warranty, it is not a regulatory relationship. It isn’t even an issue of becoming a Socratic self-witness and judge. It is, instead, a case in which one set of parties, concentrated in places of power, asks another set of parties, diffuse and diverse, “to disclose more and more about academic results,” with the second set of parties responding in their own terms and formulations. The environment itself determines behavior.

Ewell is right about the rules of the information game in this environment: when the provider is the institution, it will shape information “to look as good as possible, regardless of the underlying performance.” The most prominent media messenger, U.S. News & World Report’s rankings, and the most media/policy-maker-connected of the glossy Center reports, "Measuring Up" (which grades states with formulas resembling Parker Brothers board games) simply parse information in different ways, and spill it into the “accountability environment.” The messengers become self-appointed arbiters of performance, establishing themselves as the second party to which institutions and aggregates of institutions become “accountable.” Can we honestly say that the implicit obligation of feeding these arbiters constitutes “accountability”?

Decidedly not, even though higher education willingly engages in such feeding. But if the issue is student learning, there is nothing wrong with -- and a good deal to be said for -- posting public examples of comprehensive examinations, summative projects, capstone course papers, etc. within the information environment, and doing so irrespective of anyone requesting such evidence of the distribution of knowledge and skills. Yes, institutions will pick what makes them look good, but if the public products resemble AAC&U’s “Our Students’ Best Work” project, they set off peer pressure for self-improvement and very concrete disclosure. The other prominent media messengers simply don’t engage in constructive communication of this type.

Conclusions and Reflections

At the end of this exploratory flight, I am not sure where to land, other than to acknowledge an obvious distinction between the practice and the nature of “accountability”: the former is accessible; the latter is still a challenge. Empirically, U.S. higher education has chosen a quasi-Socratic framework, providing an ever-expanding river of data to indeterminate (or not very persuasive) audiences, but with no explicit quality assurance commitment. Surrounding this behavior is an environment of requests and counter-requests, claims and counter-claims, with no constant locus of authority.

Ironically, a “market” in the loudest voices, the flashiest media productions, and the weightiest panels of glitterati has emerged to declare judgment on institutional performance in an age when student behavior has diluted the very notion of an “institution” of higher education. The best we can say is that this environment casts nothing but fog over the specific relationships, responsibilities, and obligations that should be inherent in something we call “accountability.”

Perhaps it is about time that we defined these components and their interactions with persuasive clarity. I hope that this essay will invite readers to do so.

Clifford Adelman is senior associate at the Institute for Higher Education Policy. The analysis and opinions expressed in this essay are those of the author, and do not necessarily represent the positions or opinions of the institute, nor should any such representation be inferred.
 

The Accountability/Improvement Paradox

In the academic literature and public debate about assessment of student learning outcomes, it has been widely argued that tension exists between the two predominant pressures for higher education assessment: the academy's internally driven efforts, as a community of professional practitioners, to improve its programs and practices, and calls for accountability by various policy bodies representing the “consuming public.”

My recent review of the instruments, resources and services available to faculty members and administrators for assessing and improving academic programs and institutions has persuaded me that much more than merely a mismatch exists between the two perspectives; there is an inherent paradox in the relationship between assessment for accountability and for improvement. More importantly, there is an imbalance in emphasis that is contributing to a widening gap between policy makers and members of the academy with regard to their interests in and reasons for engaging in assessment. Specifically, not enough attention is being paid to the quality of measurement (and thought) in the accountability domain, which undermines the quality of assessment activity on college campuses.

The root of the paradoxical tension between forces that shape external accountability and those that promote quality improvement is the discrepancy between extrinsic and intrinsic motivations for engaging with assessment. When the question “why do assessment?” arises, often the answer is “because we have to.” Beyond this reaction to the external pressure is a more fundamental reason: professional responsibility.

Given the specialized knowledge and expertise required of academic staff (i.e., the faculty and other professionals involved in delivering higher education programs and services), members of the academy have the rights and responsibilities of professionals, as noted by Donald Schön in 1983, to “put their clients' needs ahead of their own, and hold themselves to standards of competence and morality” (p. 11). The strong and often confrontational calls for assessment from external constituents result from mistrust and perceptions that members of professions are “serving themselves at the expense of their clients, ignoring their obligations to public service, and failing to police themselves effectively,” Schön writes. The extent of distrust correlates closely with the level of influence the profession has over the quality of life for its clients.

That is, as an undergraduate degree comes to replace the high school diploma as a gateway to even basic levels of sustainable employment, distrust in the professional authority of the professoriate increases. With increasing influence and declining trust, the focal point of professional accountability shifts from members of the profession to the clients and their representatives.

The most recent decade, and especially the last five years, has been marked by a series of critical reports, regional and national commissions (e.g., the Spellings Commission), state and federal laws (e.g., the 2008 Higher Education Opportunity Act) and nongovernmental organization initiatives to rein in higher education. In response to these pressures, academic associations and organizations have become further energized to both protect the academy and to advocate for reform from within. They seek to recapture professional control and re-establish the trust necessary to work autonomously as self-regulated practitioners. Advocates for reform within the academy reason that conducting systematic evaluation of academic programs and student outcomes, and using the results of that activity for program improvement, are the best ways to support external accountability.

Unfortunately, as Peter Ewell points out, conducting assessment for internal improvement purposes entails a very different approach than does conducting assessment for external accountability purposes. Assessment for improvement entails a granular (bottom-up), faculty-driven, formative approach with multiple, triangulated measures (both quantitative and qualitative) of program-specific activities and outcomes that are geared towards very context-specific actions. Conversely, assessment for accountability requires summative, policy-driven (top-down), standardized and comparable (typically quantitative) measures that are used for public communication across broad contexts.

Information gleaned from assessment for improvement does not aggregate well for public communication, and information gleaned from assessment for accountability does not disaggregate well to inform program-level evaluation.

But there is more than just a mismatch in perspective. Nancy Shulock describes an “accountability culture gap” between policy makers, who desire relatively simple, comparable, unambiguous information that provides clear evidence as to whether basic goals are achieved, and members of the academy, who find such bottom-line approaches threatening, inappropriate, and demeaning of deeply held values. Senior academic administrators and professional staff who work to develop a culture of assessment within the institution can leverage core academic values to promote assessment for improvement. But their efforts are often undermined by external emphasis on overly simplistic, one-size-fits-all measures like graduation rates, and their credibility can be challenged if they rely on those measures to stimulate action or make budget decisions.

In the book Paradoxical Life (Yale University Press, 2009), Andreas Wagner describes paradoxical tension as a fundamental condition found throughout the biological and non-biological world. Paradoxical tension exists in a relationship when there are both conflicting and converging interests. Within the realm of higher education, converging and conflicting interests are abundant. They exist between student and faculty; faculty and program chair; chair and dean; dean and provost; provost and president; president and trustee; trustee and public commissioner; commissioner and legislator; and so on. These layers help to shield the processes at the lower levels from those in the policy world, but at the same time make transparency extremely difficult, as each layer adds a degree of opacity.

According to Wagner, paradoxical tensions have several inherent dualisms, two of which provide particular insight into the accountability/improvement paradox. The self/other dualism highlights the “outside-in” vs. “inside-out” perspectives on each side of the relationship, which can be likened to what social psychologists describe as the actor-observer difference in attributions of causality, captured colloquially in the sentiment, “I tripped but you fell.” The actor is likely to focus on external causes of a stumble, such as a crack in the sidewalk, whereas the observer focuses on the actor's misstep as the cause.

From within the academy, problems are often seen as related to the materials with which and the environments within which the work occurs; that is, the attitude and behavior of students and the availability of resources. The view from outside focuses on the behavior of faculty and the quality of programs and processes they enact.

The “matter/meaning” dualism is closely related to the seemingly irreconcilable positivist and constructivist epistemologies. The accountability perspective in higher education (and elsewhere) generally favors the mechanical, “matter” point of view, presuming that there are basic “facts” (graduation rates, levels of critical thinking, research productivity) that can be observed and compared across a broad array of contexts. Conversely, the improvement perspective generally takes a “meaning” focus. Student progress takes on differing meaning depending on the structure of programs and the concurrent obligations of the student population.

Dealing effectively with the paradoxical tensions between the accountability and improvement realms requires that we understand clearly the differing viewpoints, accommodate the converging and conflicting interests and recognize the differing activities required to achieve core objectives. Although there is not likely to be an easy reconciliation, we can work together more productively by acknowledging that each side has flaws and limits but both are worthwhile pursuits.

The key to a more productive engagement is to bolster the integrity of work in both realms through guidelines and standards for effective, professional practice. Much has been written and said about the need for colleges and universities to take seriously their responsibilities for assessing and improving student learning. Several national associations and advocacy groups have taken this as a fundamental purpose. What is less often documented, heard and acted on is the role of accountability standards in shaping effective and desired forms of assessment.

Principles for Effective Accountability

Just as members of the academy should take professional responsibility for assessment as a vehicle for improvement and accountability, so too should members of the policy domain take professional responsibility for the shape that public accountability takes and the impact it has on institutional and program performance. Reporting on a forum sponsored by the American Enterprise Institute, Inside Higher Ed concluded, “if a major theme emerged from the assembled speakers, most of whom fall clearly into the pro-accountability camp, it was that as policy makers turn up the pressure on colleges to perform, they should do so in ways that reinforce the behaviors they want to see -- and avoid the kinds of perverse incentives that are so evident in many policies today.”

Principle 1: Quality of What? Accountability assessments and measures should be derived from a broad set of clearly articulated and differentiated core objectives of higher education (e.g., access and affordability, learning, research and scholarship, community engagement, technology transfer, cultural enhancement, etc.).

The seminal reports that catalyzed the current focus on higher education accountability, and many of the reform efforts from within the academy since that time, place student learning at the center of attention. The traditional “reputation and resource” view has been criticized as inappropriate, but it has not abated. While this debate continues, advocates of other aspects of institutional quality, such as equity in participation and performance, student character development, and the civic engagement of institutions in their communities, seek recognition for their causes. Student learning within undergraduate-level programs is a nearly universal and undeniably important enterprise across the higher education landscape that deserves acute attention. Because of their pervasiveness and complexity, it is important to recognize that student learning outcomes cannot be reduced into a few quantifiable measures, lest we reduce incentive for faculty to engage authentically in assessment processes. It is essential that we accommodate both the diverse range of student learning objectives evident across the U.S. higher education landscape and other mission-critical purposes that differentiate and distinguish postsecondary institutions.

Principle 2: Quality for Whom? Accountability assessments and measures should recognize differences according to the population spectrum that is served by institutions and programs, and should do so in a way that does not suggest that there is greater value in serving one segment of the population than in serving another.

Using common measures and standards to compare institutions that serve markedly different student populations (e.g., a highly selective, residential liberal arts college compared to an open-access community college with predominantly part-time students, or a comprehensive public university serving a heterogeneous mix of students) results in lowered expectations for some types of institutions and unreasonable demands for others. If similar measures are used but “acceptable standards” are allowed to vary, an inherent message is conveyed that one type of mission is inherently superior to the other. The diversity of the U.S. higher education landscape is often cited as one of its key strengths. Homogenous approaches to quality assessment and accountability work against that strength and create perverse incentives that undermine important societal goals.

For example, there is a growing body of evidence that the focus on graduation rates and attendant concerns with student selectivity (the most expeditious way to increase graduation rates) has incentivized higher education institutions as well as state systems to direct more discretionary financial aid dollars to recruiting better students rather than meeting financial need. This, in turn, has reduced the proportions of students from under-served and low-income families who attend four-year institutions and who complete college degrees.

Programs and institutions should be held accountable for their particular purposes and on the basis of whom they serve. Those who view accountability from a system-level perspective should recognize explicitly how institutional goals differentially contribute to broader societal goals by virtue of the different individuals and objectives the institutions serve. Promulgating common measures or metrics, or at least comparing performance on common measures, does not generally serve this purpose.

Principle 3: Connecting Performance with Outcomes. Assessment methods and accountability measures should facilitate making connections between performance (programs, processes, and structures), transformations (student learning and development, research/scholarship and professional practice outcomes), and impacts (how those outcomes affect the quality of life of individuals, communities, and society at large).

Once the basis for quality (what and for whom) is better understood and accommodated, we can assess, for both improvement and accountability purposes, how various programs, structures, organizations and systems contribute to the production of quality education, research and service. To do so, it is helpful to distinguish among three interrelated aspects for our measures and inquiries:

  • Performance: the programs, processes, and structures through which institutions do their work.
  • Transformations: the student learning and development, research/scholarship, and professional practice outcomes that the work produces.
  • Impacts: the effects of those outcomes on the quality of life of individuals, communities, and society at large.

Efforts to improve higher education require that, within the academy, we understand better how our structures, programs and processes perform to produce desired transformations that result in positive impacts. Accountability, as an external catalyst for improvement, will work best if we reduce the perverse incentives that arise from measures that do not connect appropriately among the aspects of performance, transformation and impact sought by the diverse array of postsecondary organizations and systems that encompass our national higher education landscape.

Principle 4: Validity for purpose. Accountability measures should be assessed for validity related specifically to their intended use, that is, as indicators of program or institutional effectiveness.

In the realm of measurement, the terms “reliability” and “validity” are the quintessential criteria. Reliability refers to the mechanical aspects of measurement, that is, the consistency of a measure or assessment within itself and across differing conditions. Validity, on the other hand, refers to the relationship between the measure and meaning. John Young and I discuss the current poor state of validity assessment in the realm of higher education accountability measures and describe a set of standards for validating accountability measures. The standards include describing the kinds of inferences and claims that are intended to be made with the measure, the conceptual basis for these claims, and the basis of evidence that is sufficient for backing the claims.
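
To ground the vocabulary: reliability, in its internal-consistency sense, is directly computable from an instrument's own data. Here is a minimal sketch of one standard statistic, Cronbach's alpha, with invented scores -- an illustration of the general concept, not a method proposed in this essay or in the standards described above:

    from statistics import pvariance

    def cronbach_alpha(item_scores):
        """item_scores[i][j] is respondent j's score on item i."""
        k = len(item_scores)
        sum_item_var = sum(pvariance(item) for item in item_scores)
        totals = [sum(resp) for resp in zip(*item_scores)]  # per-respondent total
        return (k / (k - 1)) * (1 - sum_item_var / pvariance(totals))

    # Three items administered to four respondents, scored 1-5 (invented).
    items = [
        [4, 3, 5, 2],
        [4, 2, 5, 3],
        [3, 3, 4, 2],
    ]
    print(round(cronbach_alpha(items), 2))  # -> 0.9

Validity, by contrast, cannot be computed from the instrument's own data; it must be argued from evidence about the inferences the measure is used to support -- which is precisely the burden these standards place on those who promulgate accountability measures.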

Currently, there is little if any attempt to ensure that accountability measures support the claims that are intended by their use. This is not surprising, given the processes that are used to develop accountability measures. At best, significant thought, negotiation and technical review go into designing measures. However, there is generally little done to empirically assess the validity of the measures in relation to the inferences and claims that are made using them.

Those who promulgate accountability need to take professional responsibility (and be held accountable by members of the academy) for establishing the validity of required measures and methods. The state of validity assessment within the higher education realm (and education more generally) contrasts starkly with the more stringent requirements for validity imposed within the scientific research and health domains. Although we do not propose that the requirements be identical, there would be considerable merit to imposing appropriate professional standards and requirements for any and all measures that are required by state or federal law.

Although we may not be able to reconcile the complex paradoxical tensions between the improvement and accountability realms, it is possible to advance efforts in both spheres if we recognize the inherent paradoxical tensions and accord the individuals pursuing these efforts the rights and responsibilities for doing so.

Members of the academy should accept the imposition of accountability standards, recognizing the increasing importance of higher education to a broader range of vested interests.

At the same time, the academic community and others should hold those invoking accountability (government agencies, NGOs and the news media) to professional standards so as to promote positive (and not perverse) incentives for pursuing core objectives. Those seeking more accountability, in turn, should recognize that a “one size fits all” approach to accountability does not accommodate well the diverse landscape of U.S. higher education and the diversity of the populations served.

With the increasing pressure from outside the academy for higher education accountability measures and for demonstrated quality assurance, it becomes more necessary than ever that we manage the tensions between assessment for accountability and improvement carefully. Given that accountability pressures both motivate and shape institutional and program assessment behaviors, the only way to improve institutional improvement is to make accountability more accountable through the development and enforcement of appropriate professional standards.

Victor M.H. Borden is associate vice president for university planning and institutional research and accountability at Indiana University at Bloomington and professor of psychology at Indiana University-Purdue University at Indianapolis.

RosEvaluation Conference 2010: Assessment for Program and Institutional Accreditation

Date: Fri, 04/09/2010 to Sat, 04/10/2010
Location: Terre Haute, Indiana, United States

You Say You Want a Revolution?

It seems everybody is talking revolution in higher ed these days.

How many times have I read in the higher ed news of the coming revolution in classroom instruction, in the major, in the tenure system, in governance?

Google "higher education revolution" and you find radical reform rising in every direction. Many are sparked by the billions state systems are losing as our economy lurches out of the tank, others by the increasing commodification of the college degree. Some promise to "transform" the American university as they have transformed -- egad! -- the American newspaper. New models of for-profit education promise a revolution in the higher education business model that is already threatening the viability of traditional colleges across the country.

But I can't help wondering if we've traded all our revolutionary rhetoric for another day at the office.

We tend to talk ourselves right past revolutions in higher education. Our burning impulse to revitalize learning often concludes with a return to the status quo: we end up arguing, say, over our respective roles in shared governance, or over the turf we'd have to give up for genuine improvement in learning.

We can do better.

At a recent conference, I had a glimpse into how the real transformation might unfold. The Teagle Foundation brought together professors, administrators and researchers from across the country to discuss with its board members key questions the foundation has been addressing in recent years:

  • How might we make systematic improvements in student learning?
  • What evidence is there that we’re using what we know about student learning to reform academe?

These, of course, were the very same questions asked by the ill-fated Spellings Commission. Teagle has found success by engaging the strengths of the academy -- and especially the talents and creativity of faculty -- by supporting liberal arts colleges in piloting solutions to the challenges before academe. In doing so, it has started transformative efforts that will deepen student learning while also balancing resources.

With the public university system in crisis -- Clark Kerr's master plan for California has been set adrift along with the strategies for renewal in state after state -- a focus on liberal arts colleges could seem to some like a boutique project. The Teagle Foundation's great insight has been that the nation's liberal arts colleges remain a bellwether for the health of the academy and that small colleges have a great opportunity to model what 21st-century higher education might become.

Over the past six years, Teagle has funded dozens of collaborative, faculty-driven, ground-up projects to assess student learning outcomes at liberal arts colleges and universities across the country.

The work that colleges are doing in these Teagle pilots tests the basic assumptions of a college education. Some have examined the meaning and value of general education, exploring radical revisions of the ways general education might help students think about how they will live their lives. One project brought four colleges together to assess how effectively undergraduate students acquire and refine the spiritual values that lie at the heart of their institutional missions. Another explores effective models of community-based learning efforts at three prominent colleges.

Such work aims to deepen student learning and growth at colleges across the country. Just as important, it will help small colleges think about ways to distinguish themselves in a landscape that increasingly sees no difference between a liberal arts college degree and a degree from, say, the University of Phoenix. Liberal arts colleges must, to use Robert Zemsky's phrase, be "market-smart and mission-centered," and the pilots that Teagle has funded in recent years point us toward solutions to drifting missions and to struggling finances alike.

At Augustana College, we are taking seriously the Teagle Foundation's charge to find ways to use what we know about student learning for reform. Working in a Teagle-funded collaborative of seven colleges across the Midwest -- Alma, Augustana, Illinois Wesleyan, Luther, Gustavus Adolphus, Washington and Jefferson, and Wittenberg -- over the past five years, we have begun to question the 100-year-old credit system that is at the heart of the American baccalaureate. Our consortium has begun to ask whether we can still justify a system that was brought into being mostly to serve the needs of our business offices.

Will federal pressure for transferability of credit only entrench a system that is now straining under the weight of new understandings of learning and the new pedagogies that follow? In an era when we ask faculty to be deeply engaged with students through interdisciplinary education, undergraduate research, international study, and other high-impact practices, can we continue to justify a credit system that has remained unchanged for a century? We are questioning whether the course unit as now constituted -- that three- or four-hour sliver of a college degree, or the correlating seat time -- is the best means of measuring student learning.

My colleagues at Augustana and I have begun additional pilots to explore the hard questions before our college, and all colleges: How will we make better use of vital resources while demonstrating the value of a liberal education to parents, employers, and graduate schools?

We have developed a series of experiments that may answer the question. Our faculty have created a senior capstone program -- Senior Inquiry -- by using a backward design model to re-envision nearly every major on campus, ensuring that all Augustana students will have the sort of hands-on, experiential learning opportunity that will demonstrate their skills to employers and graduate schools alike (even as it provides us with a great chance to evaluate all they have done in four years here). We have redefined scholarship in the Boyer model, embracing the scholarship of teaching and learning. We are piloting new partnerships with universities, community colleges and high schools; we are asking how technology might deepen the advantages of traditional classroom learning models. And we have built our newest program -- Augie Choice -- around the idea that experiential learning -- through research, international study and internships -- ought to be the heart of a liberal arts education.

We don't yet know where all of these experiments will lead us. But, in our 150th year at Augustana, we have learned from the Teagle Foundation that pilots may help us to ensure that we will thrive for the next 150 years.

That, I'm certain, is revolution enough.

Jeff Abernathy is vice president and dean of the college at Augustana College, in Illinois. This summer, he will become president of Alma College, in Michigan.

'Design Thinking' and Higher Education

As an advocate for the position that higher education benefits from studying the lessons of business and selectively implementing those ideas that help corporate and non-profit entities to prosper, I was pleased to come across Inside Higher Ed’s report on the publication of the multi-part work The Business of Higher Education (Praeger), edited by John C. Knapp and David J. Siegel. The author observed, correctly, that “many college and faculty leaders bristle at the suggestion that the institutions -- and their students -- would be better off if only institutions operated more like their counterparts in the private sector.”

That’s why I propose a model that may meet with the approval of those who think higher education is just fine ignoring business models: design thinking. For starters, it’s an idea with origins as remote from business as design itself. While their work is hardly nonprofit, designers are rarely found destroying the competition, maximizing profit margins and exploiting their employees. Few of the designers I know personally would fit the negative perception of corporate America held by many academicians. Design thinking is about helping people and organizations to solve their problems for long-term satisfaction, not achieving efficiency for short-run gains.

It is true that more businesses are adopting design thinking as a model for achieving better results, enhanced innovation and improved service to customers, as evidenced by several new books about innovation design and design thinking targeted at the business market. But the ideas behind design thinking emerged from methods common to nearly all design fields, be they industrial, graphic, instructional or any other design profession. These basic operating principles constitute a process that might be expressed most simply as the way designers approach problems and achieve solutions. Designers think of themselves as problem finders more than problem solvers because their solutions start with a deep understanding of the problem requiring a solution.

What can design thinking offer to higher education? In a word, change. Not just change for the sake of creating change or trying the latest fad, but thoughtful change for the higher education institution that wants to position itself to better withstand the challenges presented by both old and new competitors. Change not just for technology’s sake, but change based on better understanding students and putting in place a mechanism for institution-wide innovation. (I’ll provide some examples later.)

The seminal work on design thinking, The Art of Innovation (Currency/Doubleday, 2001), came from a business outsider, Tom Kelley, then general manager of IDEO, one of the world’s leading design firms. Those interested in learning more about design thinking are well advised to start with Kelley’s book, as it introduces the “IDEO Method,” a five-step approach to understanding how designers think. In a nutshell, the process requires its practitioners to internalize the following:

  • Understand: be an empathic thinker and put yourself in the shoes of your student or whomever you serve.
  • Observe: watch people in real-life situations to better understand how they actually use a service or product and what both pleases and frustrates them.
  • Visualize: brainstorm with colleagues to identify new ideas and concepts that will give those you serve or teach a better (learning) experience.
  • Prototype: take time to explore multiple iterations of an idea before exposing those you serve or teach to a potential solution or enhancement.
  • Implement and evaluate: be thoughtful about when and how to implement a new idea, invest time to evaluate its impact, and then re-design as needed.

For those who need a faster introduction to design thinking, take 22 minutes to watch "The Deep Dive," an episode of “Nightline” that profiled how the staff at IDEO tackle a new problem and develop a solution. As one learns from this video, the designers at IDEO are experts in using the design thinking process to identify and approach problems and then develop elegant solutions to them. That’s how IDEO has designed everything from the mouse you use nearly every day to NASA equipment to toothpaste dispensers and microwave ovens.

As the design thinking method gained popularity, IDEO added organizational consulting to its product design business, and now works with health care and K-12 education systems on restructuring and re-engineering workflows to eliminate dysfunctional practices and improve user experiences. One recent book about design thinking, Change by Design, by Tim Brown, CEO of IDEO, reads more like Zen philosophy than it does a how-to for businesspeople out to rule the world.

But even those who count themselves among higher education’s anti-business faction may benefit from another new book on design thinking authored by -- shudder -- a business school dean. The Design of Business, by Roger Martin, dean of the Rotman School of Management at the University of Toronto, is a good example of a business book that even the most business-phobic humanist could enjoy reading. To be certain, there are a number of case studies profiling businesses that achieved success with design thinking, but there is still much food for thought for those who think their school or department could do better.

For example, Martin elegantly explains how businesses emerge and evolve through a multi-stage process he describes as the “knowledge funnel.” It begins with a mystery, in which the fledgling innovator seeks to build a better mousetrap -- say, a way to organize all the world’s information. The business then creates a heuristic, an intuitive sense of how to solve the mystery, that allows it to offer an initial product or service. As it moves out of the exploration stage, it develops an algorithm to operate the business so that the core solutions are efficiently exploited.

To illustrate this end stage of the knowledge funnel he points to companies like McDonald’s. What started as mostly a guess that Americans would eat fast food became a highly mechanized process that is easily replicated with great efficiency. McDonald’s has no interest in stimulating employee creativity or innovation; just keep the burgers and fries coming. But blindly adhering to algorithms can cost dearly when a competitor, like Subway, brings new thinking and imagination to the same mystery.

The problem, according to Martin, is that some organizations are operated primarily by intuition while others are rigidly controlled by algorithms. His core message is that organizations guided by design thinking achieve a balance between the two, so that intuition and algorithms merge to keep the organization searching for and solving new mysteries while avoiding the extreme exploitation that leads to obsolescence. When businesses “satisfice,” settling for the exploitation of old ideas, they are ultimately confronted by upstart competitors exploring new mysteries. Thus emerge disruptive innovations offering products or services that better meet people’s needs.

It’s a cycle that endlessly repeats itself, and higher education is equally susceptible. Consider higher education’s long reign: hundreds of years using the same delivery and organizational structures. It now is pressured by new competitors, many of them for-profit businesses offering low-cost, convenient options that leverage advanced educational technologies. The new mystery is how to deliver higher education in ways that are both affordable and sustainable, and that meet the needs of a new generation of both traditional and nontraditional learners.

Are there ways in which design thinking could help America retain its place as the crown jewel of the world’s higher education system? Admittedly, higher education is a unique industry owing to the vast independence of its primary employees, the faculty. Each faculty member is in his or her own way an independent agent trusted with the responsibility to deliver learning to the students and pursue a highly individualized research agenda. Learning is not a McDonald’s hamburger that can be manufactured on demand guided by a scientific algorithm designed to assure a predetermined outcome. What faculty do in classrooms is largely guided by intuition; there is no algorithm for great teaching.

Despite the differences that distinguish colleges and universities from the corporations that Martin profiles in his book, design thinking is a potential solution by which higher education institutions could create the balance between intuitive and algorithmic methods. But to make that happen both faculty and administrators need to take a closer look at what design thinking can do for organizations.

Martin’s book offers a case study that provides a good example: the turnaround at Procter & Gamble. In 2000, when A.G. Lafley was appointed CEO, P&G had lost market leadership to newer competitors across a wide range of its consumer products. Like higher education in 2010, P&G faced soaring expenses while Walmart and others introduced cheaper, lower-quality private-label products that drew consumers away from P&G’s more expensive branded goods. Lafley needed to boost innovation at P&G while simultaneously becoming more efficient -- a blending of the intuitive and algorithmic sides of the organization.

In 2001 he appointed Claudia Kotchka to turn P&G into a design thinking organization. He invited outside designers to assist with the development of new products, a strategy not previously employed at P&G, and by 2006 about 35 percent of P&G’s new products had origins outside the company. Perhaps the most critical change was obtaining a deeper understanding of the company’s consumers. Hair care product team members began to visit salons and homes to see how the products were actually used, and listened to the suggestions and complaints of consumers. Within three years of Lafley’s arrival P&G was achieving growth and recapturing market share in nearly every brand category.

The parallels between P&G at its weakest and the plight of many contemporary colleges and universities seem strong enough to suggest that design thinking is an idea worth considering. What might it look like to do so?

To follow the Procter & Gamble example: in higher education, students endlessly evaluate courses, but what’s lacking is a committed effort to observe students as they learn and then listen to their concerns. In a design thinking culture, faculty and administrators would empathically put themselves in the place of the students at their own institution and elsewhere to fully understand how to improve what happens in and beyond the classroom.

Consider the perplexing conundrum presented by scholarly publishing. Faculty members produce research that, in order to achieve tenure, they give away to journal publishers. Publishers, particularly in science, technology and medicine, edit and package faculty’s intellectual property and sell it back to higher education institutions at prices so high that they require constant increases to library budgets. Despite years of discussion about the scholarly communications crisis, we still have only partial and little-used potential solutions.

The scholarly communications crisis presents what Martin would describe as a “wicked problem,” one that requires more than analytical or intuitive thinking. In his book The Opposable Mind, Martin states that when neither option A nor option B works, design thinkers must create an option C that blends A and B, offering a new and completely untested solution. Proposals to solve the scholarly communications crisis tend to fall into two camps: different pricing models and open access. The former attempts an analytical solution by transferring the existing system to a new price model so that money changes hands differently. The latter attempts an intuitive solution by encouraging scholars to distribute their manuscripts through free (to the reader) distribution systems. Could design thinkers develop a C solution?

The design thinker would start by unraveling the real problem that fuels the crisis, which might well be the nature of the research and tenure system itself. Identifying the problem is paramount. The design thinker (or design team) would talk to all the parties involved and learn as much as possible about the scholarly communication process from the experts, both authors and publishers.

Next, the design team would bring back all the pieces of information for analysis and process them in a brainstorming (“deep dive”) session. Out of the brainstorming would emerge prototypes for a new or modified system of scholarly communications. The design team would test the prototypes deemed to have the most promise, and the one that most closely approaches a “C” solution would emerge for implementation. That C solution might be some combination of a change in the tenure process and what counts as scholarship, a de-emphasis on publication in high-impact journals, an editing and publication process in which some publishers could participate, and options for self-publishing and archiving that are simpler, with clear benefits to faculty. In other words, some combination of existing practices and untested ideas that offers a completely new solution.

Design thinking is no panacea for all that ails higher education. Resolving challenges such as low retention and graduation rates, escalating textbook costs, an overdependence on adjunct faculty, lean budgets, for-profit competitors and myriad other problems will take more than a business-as-usual approach. However, higher education is not a business, and faculty and students will always respond caustically if they believe corporate solutions are being foisted on them by an unsympathetic administration.

This is where design thinking can make a difference. It’s more than a short-term strategy for boosting profits. It’s a roadmap for future-proofing one of society’s most valued resources. And since it involves no acquisition of or investment in sophisticated new technology, only a desire to try a new way of identifying and tackling institutional challenges, it’s right for the times. Those who want to engage with these ideas can begin with the Deep Dive video mentioned above or choose from a host of blogs written by experts in design thinking and user experience.

In 1972 Cohen, March and Olsen introduced the Garbage Can Theory of decision making as an effort to create a predictive model for how decisions are made in higher education organizations. The model describes colleges and universities as “organized anarchies” that make their decisions by heaping multiple solutions into garbage cans. The detached solutions in the can have no utility until a problem presents itself to which one of them can be attached. For too long the organized anarchy label has proven all too accurate in describing what derails progress in higher education.

Design thinking, based on the premise of correctly identifying the problem before developing solutions, is as far removed from the garbage can theory as a decision-making model can be. What has changed little since Cohen, March and Olsen devised their model is that higher education still confronts what Martin calls the “wicked problem” -- a challenge that is not merely complex but is characterized by ambiguity, shifting qualities and no clear solution. Design thinking may be just what higher education needs to clean up its garbage can.

Steven Bell is associate university librarian at Temple University and co-author of the book Academic Librarianship by Design. He blogs at Designing Better Libraries and From the Bell Tower.

Accreditation 2.0

After years of dialogue, debate and deliberation, we are at the beginning of the next generation of accreditation. An “Accreditation 2.0” is emerging, one that reflects attention to calls for change while sustaining and even enhancing some of the central features of current accreditation operation.

The emerging consensus stems from three major national conversations, all focused on accreditation and accountability, all with roots in much older discussions, and all intensified by the heightened national emphasis on access to and attainment of quality higher education. Taken together, these conversations, despite their differences, provide the foundation for the future and a next iteration: Accreditation 2.0.

Three Conversations

The first major conversation is led by the academic and accreditation communities themselves. It focuses on how accreditation is addressing accountability, with particular emphasis on the relationship (some would say tension, or even conflict) between accountability and institutional improvement. The discussion frequently includes consideration of common expectations of general education across all institutions as well as the need to more fully address transparency. This conversation takes place at meetings of higher education associations and accrediting organizations and has been underway since the 1980s, when the assessment movement began.

The second conversation is led by critics of accreditation who question its effectiveness in addressing accountability, some of whom even want to jettison the public policy role of accreditation as a gatekeeper or provider of access to federal funds. These critics often argue that conflicts of interest are inherent in accreditation as a result of peer review and the current funding and governance of the enterprise. The most recent version of this conversation was triggered by the 2005-6 Spellings Commission and continues today in various associations and think tanks.

The third conversation is led by federal officials who also focus on the gatekeeping role of accreditation. In contrast to the call in the second conversation to eliminate this function, attention here is on expanding the gatekeeping role of accreditation -- using it to enforce growing accountability expectations at the federal level.

Convergence

As different as the three conversations are, they reflect some shared assumptions or beliefs about quality in higher education and the role of accreditation. All acknowledge that accreditation provides value in assuring and improving quality, though views differ about how much value and in what way. All are based on a belief that accreditation needs to change, though in what direction and at what pace is seen differently. All accept that accountability must be addressed in a more comprehensive and robust way -- though they disagree about how to go about this.

The elements common to these conversations provide a foundation, an opportunity, for thinking about a next generation of accreditation or an “Accreditation 2.0.” They provide a basis to fashion the future of accreditation by strengthening accountability and enhancing service to the public while maintaining the benefits of quality improvement and peer review.

Some Thoughts About an Accreditation 2.0

The emerging Accreditation 2.0 is likely to be characterized by six key elements. Some are familiar features of accreditation; some are modifications of existing practice; some are new:

  • Community-driven, shared general education outcomes.
  • Common practices to address transparency.
  • Robust peer review.
  • Enhanced efficiency of quality improvement efforts.
  • Diversification of the ownership of accreditation.
  • Alternative financing models for accreditation.

Community-driven, shared general education outcomes are emerging from the work of institutions and faculty, whether through informal consortiums, higher education associations or other means of joining forces. The Essential Learning Outcomes of the Association of American Colleges and Universities, the Collegiate Learning Assessment and the Voluntary System of Accountability of the Association of Public and Land-grant Universities all provide for agreement across institutions about expected outcomes. This work is vital as we continue to address the crucial question of “What is a college education?” Accreditors, working in partnership with institutions, assure that these community-driven outcomes are in place and that evidence of student achievement is publicly available as well as used for improvement.

Common practices to address transparency in Accreditation 2.0 require that accredited institutions and programs routinely provide readily understandable information to the public about performance. This includes, for example, information about completion of educational goals, such as graduation, success with transfer, and entry to graduate school. In addition, accrediting organizations would provide information to the public about the reasons for the accredited status they award in the same readily understandable style, perhaps using an audit-like instrument such as a management letter. A number of institutions and accreditors already offer this transparency. Accreditation 2.0 would make it standard practice.

Robust peer review -- colleagues reviewing colleagues -- is a major strength of current accreditation, not a weakness as some critics maintain. It is the difference between genuine quality review and bureaucratic scrutiny for compliance. Peer review serves as our most reliable source of independent and informed judgment about the intellectual development experience we call higher education. In the current environment, peer review can be further enhanced by, for example, encouraging greater diversity of teams, including more faculty, and expanding public participation. As such, peer review has a prominent place in Accreditation 2.0, just as it plays a major role in governmental and nongovernmental organizations in research, medicine and the sciences, among other fields.

Enhanced efficiency of quality improvement efforts builds on the enormous value of the “improvement” function in current accreditation. Improvement is about what an institution learns from its own internal review and from the peer review team -- learning that prompts it to make changes to build on strengths or address perceived weaknesses. This is the dimension of accreditation to which institutions and programs most often point when speaking to the value of the enterprise.

However, for the limited number of institutions that are experiencing severe difficulties in meeting accreditation standards but remain “accredited” for a considerable number of years, there can be a downside for students and the public. Students enroll, but may have trouble graduating or meeting other educational goals because of weaknesses of the institution that were identified in the accreditation review, even as the institution is trying to improve and remedy these difficulties. Accreditation 2.0 can include means to assure more immediate institutional action to address the weaknesses and prevent their being sustained over long periods of time.

Diversification of the ownership of accreditation can provide for additional approaches to the process and even additional constructive competition, as well as provide a response to allegations of conflict of interest. At present, most accrediting organizations are either owned and operated by the institutions or programs they accredit or function as extensions of professional bodies. However, there is nothing to stop other parties interested in quality review of higher education from establishing accrediting organizations and obtaining the legal authority to operate. Accreditation 2.0 can encourage exploration of this diversification that can be a source of fresh thinking about sustaining and enhancing quality in higher education. Private foundations or nonprofit citizen groups, for example, can make excellent owners of accrediting organizations.

Alternative financing models for accreditation call for separating the reviews of individual institutions and programs from the financing of an accrediting organization. In Accreditation 1.0, most accreditors are funded through the fees they charge individual institutions and programs for their periodic accreditation reviews and for the annual operating costs of the accrediting organization -- with the latter a condition of keeping accredited status. This mode of financing is viewed by some as an inappropriate enticement to expand the organization’s number of accredited institutions and programs, and by others as a conflict of interest or a disincentive to impose harsh penalties that might diminish membership numbers. It can create problems for some accreditors, especially smaller operations.

In Accreditation 2.0, an “accreditation bank” might be established by a third party, neither the accrediting organization nor the party seeking accreditation. Institutions and programs interested in investing in the accreditation enterprise would pay into the bank annually, independent of individual reviews. Alternative sources of financing include third parties such as private foundations and endowments.

*****

Accreditation 2.0 builds on the emerging consensus across the major national conversations about accreditation and accountability. It is one means to strengthen accreditation, but not at the price of some of Accreditation 1.0’s most valuable features. It keeps key academic decisions in the hands of institutions and faculty. It strengthens accountability, but through community-based decisions about common outcomes and transparency. It maintains the benefits of peer review, yet opens the door to alternative thinking about the organization, management and governance of accreditation.

Judith Eaton is president of the Council for Higher Education Accreditation, which is a national advocate for self-regulation of academic quality through accreditation. CHEA has 3,000 degree-granting colleges and universities as members and recognizes 59 institutional and programmatic accrediting organizations.
