
The Accountability/Improvement Paradox

April 30, 2010

In the academic literature and public debate about assessment of student learning outcomes, it has been widely argued that tension exists between the two predominant presses for higher education assessment: the academy's internally driven efforts as a community of professional practitioners to improve their programs and practices, and calls for accountability by various policy bodies representing the “consuming public.”

My recent review of the instruments, resources and services available to faculty members and administrators for assessing and improving academic programs and institutions has persuaded me that much more than merely a mismatch exists between the two perspectives; there is an inherent paradox in the relationship between assessment for accountability and for improvement. More importantly, there is an imbalance in emphasis that is contributing to a widening gap between policy makers and members of the academy with regard to their interests in and reasons for engaging in assessment. Specifically, not enough attention is being paid to the quality of measurement (and thought) in the accountability domain, which undermines the quality of assessment activity on college campuses.

The root of the paradoxical tension between forces that shape external accountability and those that promote quality improvement is the discrepancy between extrinsic and intrinsic motivations for engaging with assessment. When the question “why do assessment?” arises, often the answer is “because we have to.” Beyond this reaction to the external pressure is a more fundamental reason: professional responsibility.

Given the specialized knowledge and expertise required of academic staff (i.e., the faculty and other professionals involved in delivering higher education programs and services), members of the academy have the rights and responsibilities of professionals, as noted by Donald Schön in 1983, to “put their clients' needs ahead of their own, and hold themselves to standards of competence and morality” (p. 11). The strong and often confrontational calls for assessment from external constituents result from mistrust and perceptions that members of professions are “serving themselves at the expense of their clients, ignoring their obligations to public service, and failing to police themselves effectively,” Schön writes. The extent of distrust correlates closely with the level of influence the profession has over the quality of life for its clients.

That is, as an undergraduate degree comes to replace the high school diploma as a gateway to even basic levels of sustainable employment, distrust increases in the professional authority of the professoriate. With increasing influence and declining trust, the focal point of professional accountability shifts from members of the profession to the clients and their representatives.

The most recent decade, and especially the last five years, has been marked by a series of critical reports, regional and national commissions (e.g., the Spellings Commission), state and federal laws (e.g., the 2008 Higher Education Opportunity Act) and nongovernmental organization initiatives to rein in higher education. In response to these pressures, academic associations and organizations have become further energized both to protect the academy and to advocate for reform from within. They seek to recapture professional control and re-establish the trust necessary to work autonomously as self-regulated practitioners. Advocates for reform within the academy reason that conducting systematic evaluation of academic programs and student outcomes, and using the results of that activity for program improvement, are the best ways to support external accountability.

Unfortunately, as Peter Ewell points out, conducting assessment for internal improvement purposes entails a very different approach than does conducting assessment for external accountability purposes. Assessment for improvement entails a granular (bottom-up), faculty-driven, formative approach with multiple, triangulated measures (both quantitative and qualitative) of program-specific activities and outcomes that are geared towards very context-specific actions. Conversely, assessment for accountability requires summative, policy-driven (top-down), standardized and comparable (typically quantitative) measures that are used for public communication across broad contexts.

Information gleaned from assessment for improvement does not aggregate well for public communication, and information gleaned from assessment for accountability does not disaggregate well to inform program-level evaluation.

But there is more than just a mismatch in perspective. Nancy Shulock describes an “accountability culture gap” between policy makers, who desire relatively simple, comparable, unambiguous information that provides clear evidence as to whether basic goals are achieved, and members of the academy, who find such bottom-line approaches threatening, inappropriate, and demeaning of deeply held values. Senior academic administrators and professional staff who work to develop a culture of assessment within the institution can leverage core academic values to promote assessment for improvement. But their efforts are often undermined by external emphasis on overly simplistic, one-size-fits-all measures like graduation rates, and their credibility can be challenged if they rely on those measures to stimulate action or make budget decisions.

In the book Paradoxical Life (Yale University Press, 2009), Andreas Wagner describes paradoxical tension as a fundamental condition found throughout the biological and non-biological world. Paradoxical tension exists in a relationship when there are both conflicting and converging interests. Within the realm of higher education, converging and conflicting interests are abundant. They exist between student and faculty; faculty and program chair; chair and dean; dean and provost; provost and president; president and trustee; trustee and public commissioner; commissioner and legislator; and so on. These layers help to shield the processes at the lower levels from those in the policy world, but at the same time make transparency extremely difficult, as each layer adds a degree of opacity.

According to Wagner, paradoxical tensions have several inherent dualisms, two of which provide particular insight into the accountability/improvement paradox. The self/other dualism highlights the “outside-in” vs. “inside-out” perspectives on each side of the relationship, which can be likened to what social psychologists describe as the actor-observer difference in attributions of causality, captured colloquially in the sentiment, “I tripped but you fell.” The actor is likely to focus on external causes of a stumble, such as a crack in the sidewalk, whereas the observer focuses on the actor's misstep as the cause.

From within the academy, problems are often seen as related to the materials with which and the environments within which the work occurs; that is, the attitude and behavior of students and the availability of resources. The view from outside focuses on the behavior of faculty and the quality of programs and processes they enact.

The “matter/meaning” dualism is closely related to the seemingly irreconcilable positivist and constructivist epistemologies. The accountability perspective in higher education (and elsewhere) generally favors the mechanical, “matter” point of view, presuming that there are basic “facts” (graduation rates, levels of critical thinking, research productivity) that can be observed and compared across a broad array of contexts. Conversely, the improvement perspective generally takes a “meaning” focus. Student progress takes on differing meaning depending on the structure of programs and the concurrent obligations of the student population.

Dealing effectively with the paradoxical tensions between the accountability and improvement realms requires that we understand clearly the differing viewpoints, accommodate the converging and conflicting interests and recognize the differing activities required to achieve core objectives. Although there is not likely to be an easy reconciliation, we can work together more productively by acknowledging that each side has flaws and limits but both are worthwhile pursuits.

The key to a more productive engagement is to bolster the integrity of work in both realms through guidelines and standards for effective, professional practice. Much has been written and said about the need for colleges and universities to take seriously their responsibilities for assessing and improving student learning. Several national associations and advocacy groups have taken this as a fundamental purpose. What is less often documented, heard and acted on is the role of accountability standards in shaping effective and desired forms of assessment.

Principles for Effective Accountability

Just as members of the academy should take professional responsibility for assessment as a vehicle for improvement and accountability, so too should members of the policy domain take professional responsibility for the shape that public accountability takes and the impact it has on institutional and program performance. Reporting on a forum sponsored by the American Enterprise Institute, Inside Higher Ed concluded, “if a major theme emerged from the assembled speakers, most of whom fall clearly into the pro-accountability camp, it was that as policy makers turn up the pressure on colleges to perform, they should do so in ways that reinforce the behaviors they want to see -- and avoid the kinds of perverse incentives that are so evident in many policies today.”

Principle 1: Quality of What? Accountability assessments and measures should be derived from a broad set of clearly articulated and differentiated core objectives of higher education (e.g., access and affordability, learning, research and scholarship, community engagement, technology transfer, cultural enhancement, etc.).

The seminal reports that catalyzed the current focus on higher education accountability, and many of the reform efforts from within the academy since that time, place student learning at the center of attention. The traditional “reputation and resource” view has been criticized as inappropriate, but it has not abated. While this debate continues, advocates of other aspects of institutional quality, such as equity in participation and performance, student character development, and the civic engagement of institutions in their communities, seek recognition for their causes. Student learning within undergraduate-level programs is a nearly universal and undeniably important enterprise across the higher education landscape that deserves acute attention. Because of their pervasiveness and complexity, it is important to recognize that student learning outcomes cannot be reduced to a few quantifiable measures, lest we reduce the incentive for faculty to engage authentically in assessment processes. It is essential that we accommodate both the diverse range of student learning objectives evident across the U.S. higher education landscape and other mission-critical purposes that differentiate and distinguish postsecondary institutions.

Principle 2: Quality for Whom? Accountability assessments and measures should recognize differences according to the population spectrum that is served by institutions and programs, and should do so in a way that does not suggest that there is greater value in serving one segment of the population than in serving another.

Using common measures and standards to compare institutions that serve markedly different student populations (e.g., a highly selective, residential liberal arts college compared to an open-access community college with predominantly part-time students, or a comprehensive public university serving a heterogeneous mix of students) results in lowered expectations for some types of institutions and unreasonable demands for others. If similar measures are used but “acceptable standards” are allowed to vary, an inherent message is conveyed that one type of mission is inherently superior to the other. The diversity of the U.S. higher education landscape is often cited as one of its key strengths. Homogeneous approaches to quality assessment and accountability work against that strength and create perverse incentives that undermine important societal goals.
For example, there is a growing body of evidence that the focus on graduation rates and attendant concerns with student selectivity (the most expeditious way to increase graduation rates) has incentivized higher education institutions as well as state systems to direct more discretionary financial aid dollars to recruiting better students rather than to meeting financial need. This, in turn, has reduced the proportions of students from under-served and low-income families who attend four-year institutions and who complete college degrees.

Programs and institutions should be held accountable for their particular purposes and on the basis of whom they serve. Those who view accountability from a system-level perspective should recognize explicitly how institutional goals differentially contribute to broader societal goals by virtue of the different individuals and objectives the institutions serve. Promulgating common measures or metrics, or at least comparing performance on common measures, does not generally serve this purpose.

Principle 3: Connecting Performance with Outcomes. Assessment methods and accountability measures should facilitate making connections between performance (programs, processes, and structures), transformations (student learning and development, research/scholarship and professional practice outcomes), and impacts (how those outcomes affect the quality of life of individuals, communities, and society at large).

Once the basis for quality (what and for whom) is better understood and accommodated, we can assess, for both improvement and accountability purposes, how various programs, structures, organizations and systems contribute to the production of quality education, research and service. To do so, it is helpful to distinguish among three interrelated aspects for our measures and inquiries: performance (programs, processes, and structures), transformations (student learning and development, research/scholarship and professional practice outcomes), and impacts (how those outcomes affect the quality of life of individuals, communities, and society at large).

Efforts to improve higher education require that, within the academy, we understand better how our structures, programs and processes perform to produce desired transformations that result in positive impacts. Accountability, as an external catalyst for improvement, will work best if we reduce the perverse incentives that arise from measures that do not connect appropriately among the aspects of performance, transformation and impact sought by the diverse array of postsecondary organizations and systems that encompass our national higher education landscape.

Principle 4: Validity for purpose. Accountability measures should be assessed for validity related specifically to their intended use, that is, as indicators of program or institutional effectiveness.

In the realm of measurement, the terms “reliability” and “validity” are the quintessential criteria. Reliability refers to the mechanical aspects of measurement, that is, the consistency of a measure or assessment within itself and across differing conditions. Validity, on the other hand, refers to the relationship between the measure and meaning. John Young and I discuss the current poor state of validity assessment in the realm of higher education accountability measures and describe a set of standards for validating accountability measures. The standards include describing the kinds of inferences and claims that are intended to be made with the measure, the conceptual basis for these claims and the evidence that would be sufficient for backing the claims.

Currently, there is little if any attempt to ensure that accountability measures support the claims that are intended by their use. This is not surprising, given the processes used to develop accountability measures. At best, significant thought, negotiation and technical review go into designing them; even then, little is generally done to empirically assess the validity of the measures in relation to the inferences and claims that are made using them.

Those who promulgate accountability need to take professional responsibility (and be held accountable by members of the academy) for establishing the validity of required measures and methods. The state of validity assessment within the higher education realm (and education more generally) contrasts starkly with the more stringent requirements for validity imposed within the scientific research and health domains. Although we do not propose that the requirements be identical, there would be considerable merit in imposing appropriate professional standards and requirements for all measures that are required by state or federal law.

Although we may not be able to reconcile the complex paradoxical tensions between the improvement and accountability realms, it is possible to advance efforts in both spheres if we recognize the inherent paradoxical tensions and accord the individuals pursuing these efforts the rights and responsibilities for doing so.

Members of the academy should accept the imposition of accountability standards, recognizing the increasing importance of higher education to a broader range of vested interests.

At the same time, the academic community and others should hold those invoking accountability (government agencies, NGOs and the news media) to professional standards so as to promote positive (and not perverse) incentives for pursuing core objectives. Those seeking more accountability, in turn, should recognize that a “one size fits all” approach to accountability does not accommodate well the diverse landscape of U.S. higher education and the diversity of the populations served.

With the increasing pressure from outside the academy for higher education accountability measures and for demonstrated quality assurance, it becomes more necessary than ever that we manage the tensions between assessment for accountability and improvement carefully. Given that accountability pressures both motivate and shape institutional and program assessment behaviors, the only way to foster genuine institutional improvement is to make accountability itself more accountable, through the development and enforcement of appropriate professional standards.

Bio

Victor M.H. Borden is associate vice president for university planning and institutional research and accountability at Indiana University at Bloomington and professor of psychology at Indiana University-Purdue University at Indianapolis.

