Assessment

The Accountability/Improvement Paradox

In the academic literature and public debate about assessment of student learning outcomes, it has been widely argued that tension exists between the two predominant presses for higher education assessment: the academy's internally driven efforts as a community of professional practitioners to improve their programs and practices, and calls for accountability by various policy bodies representing the “consuming public.”

My recent review of the instruments, resources and services available to faculty members and administrators for assessing and improving academic programs and institutions has persuaded me that much more than merely a mismatch exists between the two perspectives; there is an inherent paradox in the relationship between assessment for accountability and for improvement. More importantly, there is an imbalance in emphasis that is contributing to a widening gap between policy makers and members of the academy with regard to their interests in and reasons for engaging in assessment. Specifically, not enough attention is being paid to the quality of measurement (and thought) in the accountability domain, which undermines the quality of assessment activity on college campuses.

The root of the paradoxical tension between forces that shape external accountability and those that promote quality improvement is the discrepancy between extrinsic and intrinsic motivations for engaging with assessment. When the question “why do assessment?” arises, often the answer is “because we have to.” Beyond this reaction to the external pressure is a more fundamental reason: professional responsibility.

Given the specialized knowledge and expertise required of academic staff (i.e., the faculty and other professionals involved in delivering higher education programs and services), members of the academy have the rights and responsibilities of professionals, as noted by Donald Schön in 1983, to “put their clients' needs ahead of their own, and hold themselves to standards of competence and morality” (p. 11). The strong and often confrontational calls for assessment from external constituents result from mistrust and perceptions that members of professions are “serving themselves at the expense of their clients, ignoring their obligations to public service, and failing to police themselves effectively,” Schön writes. The extent of distrust correlates closely with the level of influence the profession has over the quality of life for its clients.

That is, as an undergraduate degree comes to replace the high school diploma as a gateway to even basic levels of sustainable employment, distrust in the professional authority of the professoriate increases. With increasing influence and declining trust, the focal point of professional accountability shifts from members of the profession to the clients and their representatives.

The most recent decade, and especially the last five years, has been marked by a series of critical reports, regional and national commissions (e.g., the Spellings Commission), state and federal laws (e.g., the 2008 Higher Education Opportunity Act) and nongovernmental organization initiatives to rein in higher education. In response to these pressures, academic associations and organizations have become further energized both to protect the academy and to advocate for reform from within. They seek to recapture professional control and re-establish the trust necessary to work autonomously as self-regulated practitioners. Advocates for reform within the academy reason that conducting systematic evaluation of academic programs and student outcomes, and using the results of that activity for program improvement, are the best ways to support external accountability.

Unfortunately, as Peter Ewell points out, conducting assessment for internal improvement purposes entails a very different approach than does conducting assessment for external accountability purposes. Assessment for improvement entails a granular (bottom-up), faculty-driven, formative approach with multiple, triangulated measures (both quantitative and qualitative) of program-specific activities and outcomes that are geared towards very context-specific actions. Conversely, assessment for accountability requires summative, policy-driven (top-down), standardized and comparable (typically quantitative) measures that are used for public communication across broad contexts.

Information gleaned from assessment for improvement does not aggregate well for public communication, and information gleaned from assessment for accountability does not disaggregate well to inform program-level evaluation.

But there is more than just a mismatch in perspective. Nancy Shulock describes an “accountability culture gap” between policy makers, who desire relatively simple, comparable, unambiguous information that provides clear evidence as to whether basic goals are achieved, and members of the academy, who find such bottom-line approaches threatening, inappropriate, and demeaning of deeply held values. Senior academic administrators and professional staff who work to develop a culture of assessment within the institution can leverage core academic values to promote assessment for improvement. But their efforts are often undermined by external emphasis on overly simplistic, one-size-fits-all measures like graduation rates, and their credibility can be challenged if they rely on those measures to stimulate action or make budget decisions.

In the book Paradoxical Life (Yale University Press, 2009), Andreas Wagner describes paradoxical tension as a fundamental condition found throughout the biological and non-biological world. Paradoxical tension exists in a relationship when there are both conflicting and converging interests. Within the realm of higher education, converging and conflicting interests are abundant. They exist between student and faculty; faculty and program chair; chair and dean; dean and provost; provost and president; president and trustee; trustee and public commissioner; commissioner and legislator; and so on. These layers help to shield the processes at the lower levels from those in the policy world, but at the same time make transparency extremely difficult, as each layer adds a degree of opacity.

According to Wagner, paradoxical tensions have several inherent dualisms, two of which provide particular insight into the accountability/improvement paradox. The self/other dualism highlights the “outside-in” vs. “inside-out” perspectives on each side of the relationship, which can be likened to what social psychologists describe as the actor-observer difference in attributions of causality, captured colloquially in the sentiment, “I tripped but you fell.” The actor is likely to focus on external causes of a stumble, such as a crack in the sidewalk, whereas the observer focuses on the actor's misstep as the cause.

From within the academy, problems are often seen as related to the materials with which and the environments within which the work occurs; that is, the attitude and behavior of students and the availability of resources. The view from outside focuses on the behavior of faculty and the quality of programs and processes they enact.

The “matter/meaning” dualism is closely related to the seemingly irreconcilable positivist and constructivist epistemologies. The accountability perspective in higher education (and elsewhere) generally favors the mechanical, “matter” point of view, presuming that there are basic “facts” (graduation rates, levels of critical thinking, research productivity) that can be observed and compared across a broad array of contexts. Conversely, the improvement perspective generally takes a “meaning” focus. Student progress takes on differing meaning depending on the structure of programs and the concurrent obligations of the student population.

Dealing effectively with the paradoxical tensions between the accountability and improvement realms requires that we understand clearly the differing viewpoints, accommodate the converging and conflicting interests and recognize the differing activities required to achieve core objectives. Although there is not likely to be an easy reconciliation, we can work together more productively by acknowledging that each side has flaws and limits but both are worthwhile pursuits.

The key to a more productive engagement is to bolster the integrity of work in both realms through guidelines and standards for effective, professional practice. Much has been written and said about the need for colleges and universities to take seriously their responsibilities for assessing and improving student learning. Several national associations and advocacy groups have taken this as a fundamental purpose. What is less often documented, heard and acted on is the role of accountability standards in shaping effective and desired forms of assessment.

Principles for Effective Accountability

Just as members of the academy should take professional responsibility for assessment as a vehicle for improvement and accountability, so too should members of the policy domain take professional responsibility for the shape that public accountability takes and the impact it has on institutional and program performance. Reporting on a forum sponsored by the American Enterprise Institute, Inside Higher Ed concluded, “if a major theme emerged from the assembled speakers, most of whom fall clearly into the pro-accountability camp, it was that as policy makers turn up the pressure on colleges to perform, they should do so in ways that reinforce the behaviors they want to see -- and avoid the kinds of perverse incentives that are so evident in many policies today.”

Principle 1: Quality of What? Accountability assessments and measures should be derived from a broad set of clearly articulated and differentiated core objectives of higher education (e.g., access and affordability, learning, research and scholarship, community engagement, technology transfer, cultural enhancement, etc.).

The seminal reports that catalyzed the current focus on higher education accountability, and many of the reform efforts from within the academy since that time, place student learning at the center of attention. The traditional “reputation and resource” view has been criticized as inappropriate, but it has not abated. While this debate continues, advocates of other aspects of institutional quality, such as equity in participation and performance, student character development, and the civic engagement of institutions in their communities, seek recognition for their causes. Student learning within undergraduate-level programs is a nearly universal and undeniably important enterprise across the higher education landscape that deserves acute attention. Because of their pervasiveness and complexity, it is important to recognize that student learning outcomes cannot be reduced to a few quantifiable measures, lest we reduce the incentive for faculty to engage authentically in assessment processes. It is essential that we accommodate both the diverse range of student learning objectives evident across the U.S. higher education landscape and other mission-critical purposes that differentiate and distinguish postsecondary institutions.

Principle 2: Quality for Whom? Accountability assessments and measures should recognize differences according to the population spectrum that is served by institutions and programs, and should do so in a way that does not suggest that there is greater value in serving one segment of the population than in serving another.

Using common measures and standards to compare institutions that serve markedly different student populations (e.g., a highly selective, residential liberal arts college compared to an open-access community college with predominantly part-time students, or a comprehensive public university serving a heterogeneous mix of students) results in lowered expectations for some types of institutions and unreasonable demands for others. If similar measures are used but “acceptable standards” are allowed to vary, an inherent message is conveyed that one type of mission is inherently superior to the other. The diversity of the U.S. higher education landscape is often cited as one of its key strengths. Homogeneous approaches to quality assessment and accountability work against that strength and create perverse incentives that undermine important societal goals.
For example, there is a growing body of evidence that the focus on graduation rates and attendant concerns with student selectivity (the most expeditious way to increase graduation rates) has incentivized higher education institutions as well as state systems to direct more discretionary financial aid dollars to recruiting better students rather than meeting financial need. This, in turn, has reduced the proportions of students from under-served and low-income families who attend four-year institutions and who complete college degrees.

Programs and institutions should be held accountable for their particular purposes and on the basis of whom they serve. Those who view accountability from a system-level perspective should recognize explicitly how institutional goals differentially contribute to broader societal goals by virtue of the different individuals and objectives the institutions serve. Promulgating common measures or metrics, or at least comparing performance on common measures, does not generally serve this purpose.

Principle 3: Connecting Performance with Outcomes. Assessment methods and accountability measures should facilitate making connections between performance (programs, processes, and structures), transformations (student learning and development, research/scholarship and professional practice outcomes), and impacts (how those outcomes affect the quality of life of individuals, communities, and society at large).

Once the basis for quality (what and for whom) is better understood and accommodated, we can assess, for both improvement and accountability purposes, how various programs, structures, organizations and systems contribute to the production of quality education, research and service. To do so, it is helpful to distinguish among three interrelated aspects for our measures and inquiries:

  • Performance: the programs, processes and structures that institutions and their staff enact.
  • Transformation: the student learning and development, research/scholarship and professional practice outcomes those efforts produce.
  • Impact: how those outcomes affect the quality of life of individuals, communities and society at large.

Efforts to improve higher education require that, within the academy, we understand better how our structures, programs and processes perform to produce desired transformations that result in positive impacts. Accountability, as an external catalyst for improvement, will work best if we reduce the perverse incentives that arise from measures that do not connect appropriately among the aspects of performance, transformation and impact sought by the diverse array of postsecondary organizations and systems that encompass our national higher education landscape.

Principle 4: Validity for Purpose. Accountability measures should be assessed for validity related specifically to their intended use, that is, as indicators of program or institutional effectiveness.

In the realm of measurement, the terms “reliability” and “validity” are the quintessential criteria. Reliability refers to the mechanical aspects of measurement, that is, the consistency of a measure or assessment within itself and across differing conditions. Validity, on the other hand, refers to the relationship between the measure and meaning. John Young and I discuss the current poor state of validity assessment in the realm of higher education accountability measures and describe a set of standards for validating accountability measures. The standards include describing the kinds of inferences and claims that are intended to be made with the measure, the conceptual basis for these claims and the basis of evidence that is sufficient for backing the claims.
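
A minimal sketch with invented numbers can make the distinction concrete. It illustrates only the textbook contrast between consistency and meaning, not the validation standards described above; the scores are made up, and criterion-related correlation is just one narrow empirical slice of a full validity argument.

    # Illustrative sketch only, with made-up numbers: the reliability/validity
    # distinction expressed as two simple correlations.
    from statistics import correlation  # requires Python 3.10+

    # Two administrations of the same assessment to the same eight students.
    admin_1 = [72, 85, 90, 65, 78, 88, 70, 95]
    admin_2 = [74, 83, 91, 63, 80, 86, 72, 94]

    # An external outcome the measure is used to make claims about
    # (for example, a later performance indicator).
    later_outcome = [2.9, 3.4, 3.7, 2.5, 3.1, 3.6, 2.8, 3.9]

    # Reliability: consistency of the measure with itself across conditions.
    test_retest_reliability = correlation(admin_1, admin_2)

    # Criterion-related validity: relationship between the measure and the
    # inference it is meant to support.
    criterion_validity = correlation(admin_1, later_outcome)

    print(f"test-retest reliability: {test_retest_reliability:.2f}")
    print(f"criterion-related validity: {criterion_validity:.2f}")

A measure can score well on the first correlation and poorly on the second, which is precisely the gap between consistency and meaning that validation standards are meant to close.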

Currently, there is little if any attempt to ensure that accountability measures support the claims that are intended by their use. This is not surprising, given the processes that are used to develop accountability measures. At best, significant thought, negotiation and technical review go into designing the measures, but there is generally little done to empirically assess their validity in relation to the inferences and claims that are made using them.

Those who promulgate accountability need to take professional responsibility (and be held accountable by members of the academy) for establishing the validity of required measures and methods. The state of validity assessment within the higher education realm (and education more generally) contrasts starkly with the more stringent requirements for validity imposed within the scientific research and health domains. Although we do not propose that the requirements be identical, there would be considerable merit in imposing appropriate professional standards and requirements for any and all measures that are required by state or federal law.

Although we may not be able to reconcile the complex paradoxical tensions between the improvement and accountability realms, it is possible to advance efforts in both spheres if we recognize the inherent paradoxical tensions and accord the individuals pursuing these efforts the rights and responsibilities for doing so.

Members of the academy should accept the imposition of accountability standards, recognizing the increasing importance of higher education to a broader range of vested interests.

At the same time, the academic community and others should hold those invoking accountability (government agencies, NGOs and the news media) to professional standards so as to promote positive (and not perverse) incentives for pursuing core objectives. Those seeking more accountability, in turn, should recognize that a “one size fits all” approach to accountability does not accommodate well the diverse landscape of U.S. higher education and the diversity of the populations served.

With the increasing pressure from outside the academy for higher education accountability measures and for demonstrated quality assurance, it becomes more necessary than ever that we manage the tensions between assessment for accountability and improvement carefully. Given that accountability pressures both motivate and shape institutional and program assessment behaviors, the only way to improve institutional improvement is to make accountability more accountable through the development and enforcement of appropriate professional standards.

Victor M.H. Borden is associate vice president for university planning and institutional research and accountability at Indiana University at Bloomington and professor of psychology at Indiana University-Purdue University at Indianapolis.

RosEvaluation Conference 2010: Assessment for Program and Institutional Accreditation

Date: Fri, 04/09/2010 to Sat, 04/10/2010
Location: Terre Haute, Indiana, United States

You Say You Want a Revolution?

It seems everybody is talking revolution in higher ed these days.

How many times have I read in the higher ed news of the coming revolution in classroom instruction, in the major, in the tenure system, in governance?

Google "higher education revolution" and you find radical reform rising in every direction. Many are sparked by the billions state systems are losing as our economy lurches out of the tank, others by the increasing commodification of the college degree. Some promise to "transform" the American university as they have transformed -- egad! -- the American newspaper. New models of for-profit education promise a revolution in the higher education business model that is already threatening the viability of traditional colleges across the country.

But I can't help wondering if we've spirited all our revolutionary rhetoric away for another day at the office.

We tend to talk ourselves right past revolutions in higher education. Our burning impulse to revitalize learning often concludes with a return to the status quo: we end up arguing, say, over our respective roles in shared governance, or over the turf we'd have to give up for genuine improvement in learning.

We can do better.

At a recent conference, I had a glimpse into how the real transformation might unfold. The Teagle Foundation brought together professors, administrators and researchers from across the country to discuss with its board members key questions the foundation has been addressing in recent years:

  • How might we make systematic improvements in student learning?
  • What evidence is there that we’re using what we know about student learning to reform academe?

These, of course, were the very same questions asked by the ill-fated Spellings Commission. Teagle has found success by engaging the strengths of the academy -- and especially the talents and creativity of faculty -- by supporting liberal arts colleges in piloting solutions to the challenges before academe. In doing so, the foundation has started transformative efforts that will deepen student learning while also balancing resources.

With the public university system in crisis -- Clark Kerr's master plan for California has been set adrift along with the strategies for renewal in state after state -- a focus on liberal arts colleges could seem to some like a boutique project. The Teagle Foundation's great insight has been that the nation's liberal arts colleges remain a bellwether for the health of the academy and that small colleges have a great opportunity to model what the 21st century higher education might become.

Over the past six years, Teagle has funded dozens of collaborative efforts supporting faculty-driven, ground-up assessment of student learning outcomes at liberal arts colleges and universities across the country.

The work that colleges are doing in these Teagle pilots tests the basic assumptions of a college education. Some have examined the meaning and value of general education, exploring radical revisions of the ways in which it might help students think about how they will live their lives. One project brought four colleges together to assess how effectively undergraduate students acquire and refine the spiritual values that lie at the heart of their institutional missions. Another explores effective models of community-based learning efforts at three prominent colleges.

Such work aims to deepen student learning and growth at colleges across the country. As importantly, it will help small colleges to think about ways to distinguish themselves in a landscape that increasingly sees no difference between a liberal arts college degree and a degree from, say, the University of Phoenix. Liberal arts colleges must, to use Robert Zemsky's phrase, be "market-smart and mission-centered," and the pilots that Teagle has funded in recent years point us toward solutions to drifting missions and to struggling finances alike.

At Augustana College, we are taking seriously the Teagle Foundation's charge to find ways to use what we know about student learning for reform. Working in a Teagle-funded collaborative of seven colleges across the Midwest -- Alma, Augustana, Illinois Wesleyan, Luther, Gustavus Adolphus, Washington and Jefferson, and Wittenberg -- over the past five years, we have begun to question the 100-year-old credit system that is at the heart of the American baccalaureate. Our consortium of colleges has begun to ask whether we can still justify the existence of a system that was brought into being mostly to serve the needs of our business offices.

Will federal pressure for transferability of credit only make more secure a system that is now straining under the weight of new understanding of learning and the new pedagogies that follow? In an era when we ask faculty to be deeply engaged with students through interdisciplinary education, undergraduate research, international study, and other high impact practices, can we continue to justify a credit system that has remained unchanged for a century? We are questioning whether the course unit as now constituted -- that three- or four-hour sliver of a college degree or the correlating seat time -- is the best means of measuring student learning.

My colleagues at Augustana and I have begun other pilots that will explore the other hard questions before our college, and all colleges: how will we make better use of vital resources while demonstrating the value of a liberal education to parents, employers, and graduate schools?

We have developed a series of experiments that may answer the question. Our faculty have created a senior capstone program -- Senior Inquiry -- by using a backward design model to re-envision nearly every major on campus, ensuring that all Augustana students will have the sort of hands-on, experiential learning opportunity that will demonstrate their skills to employers and graduate schools alike (even as it provides us with a great chance to evaluate all they have done in four years here). We have redefined scholarship in the Boyer model, embracing the scholarship of teaching and learning. We are piloting new partnerships with universities, community colleges and high schools; we are asking how technology might deepen the advantages of traditional classroom learning models. And we have built our newest program -- Augie Choice -- around the idea that experiential learning -- through research, international study and internships -- ought to be the heart of a liberal arts education.

We don't yet know where all of these experiments will lead us. But, in our 150th year at Augustana, we have learned from the Teagle Foundation that pilots may help us to ensure that we will thrive for the next 150 years.

That, I'm certain, is revolution enough.

Jeff Abernathy is vice president and dean of the college at Augustana College, in Illinois. This summer, he will become president of Alma College, in Michigan.

'Design Thinking' and Higher Education

As an advocate for the position that higher education benefits from studying the lessons of business and selectively implementing those ideas that help corporate and non-profit entities to prosper, I was pleased to come across Inside Higher Ed’s report on the publication of the multi-part work The Business of Higher Education (Praeger), edited by John C. Knapp and David J. Siegel. The author observed, correctly, that “many college and faculty leaders bristle at the suggestion that the institutions -- and their students -- would be better off if only institutions operated more like their counterparts in the private sector.”

That’s why I propose a model that may meet with the approval of those who think higher education is just fine ignoring business models: design thinking. For starters, it’s an idea with origins as remote from business as design itself. While their work is hardly nonprofit, designers are rarely found destroying the competition, maximizing profit margins and exploiting their employees. Few of the designers I know personally would fit the negative perception of corporate America held by many academicians. Design thinking is about helping people and organizations to solve their problems for long-term satisfaction, not achieving efficiency for short-run gains.

While it is true that more businesses are adopting design thinking as a model for achieving better results, enhanced innovation and improved service to customers -- as evidenced by several new books about innovation design and design thinking targeted at the business market -- the ideas behind design thinking emerged from the methods that are common to nearly all design fields, be it industrial, graphic, instructional or any other design profession. These basic operating principles constitute a process that might be expressed most simply as the way that designers approach problems and achieve solutions. Designers think of themselves as problem finders more than problem solvers because their solutions start with a deep understanding of the problem requiring a solution.

What can design thinking offer to higher education? In a word, change. Not just change for the sake of creating change or trying the latest fad, but thoughtful change for the higher education institution that wants to position itself to better withstand the challenges presented by both old and new competitors. Change not just for technology’s sake, but change based on better understanding students and putting into place a mechanism for institution-wide innovation. (I’ll provide some examples later.)

The seminal work on design thinking, The Art of Innovation (Currency/Doubleday, 2001), came from a business outsider, Tom Kelley, then general manager of IDEO, one of the world’s leading design firms. Those interested in learning more about design thinking are well advised to start with Kelley’s book, as it introduces the “IDEO Method,” a five-step approach to understanding how designers think. In a nutshell, the process requires its practitioners to internalize the following:

  • Understand: be an empathic thinker and put yourself in the shoes of your student or whomever it is that you provide a service to.
  • Observe: watch people in real-life situations to better understand how they really use a service or product and those things that both please and frustrate them.
  • Visualize: brainstorm with colleagues to identify new ideas and concepts that will give those you serve or teach a better (learning) experience.
  • Prototype: take time to explore multiple iterations of an idea before exposing those you serve or teach to a potential solution or enhancement.
  • Implement and Evaluate: be thoughtful about when and how to implement a new idea, invest time to evaluate its impact, and then re-design as needed.

For those who need a faster introduction to design thinking, take 22 minutes to watch "The Deep Dive," an episode of “Nightline” that profiled how the staff at IDEO tackle a new problem and develop a solution. As one learns from this video, the designers at IDEO are experts in using the design thinking process to identify and approach problems and then develop elegant solutions to them. That’s how IDEO has designed everything from the mouse you use nearly every day to NASA equipment to toothpaste dispensers and microwave ovens.

As the design thinking method gained popularity, IDEO added organizational consulting to its product design business, and now works with health care and K-12 education systems on restructuring and re-engineering workflows to eliminate dysfunctional practices and improve user experiences. One recent book about design thinking, Change by Design, by Tim Brown, CEO of IDEO, reads more like Zen philosophy than it does a how-to for businesspeople out to rule the world.

But even those who count themselves among higher education’s anti-business faction may benefit from another new book on design thinking authored by -- shudder -- a business school dean. The Design of Business, by Roger Martin, dean of the Rotman School of Management at the University of Toronto, is a good example of a business book that even the most business-phobic humanist could enjoy reading. To be sure, there are a number of case studies profiling businesses that achieved success with design thinking, but there is still much food for thought for those who think their school or department could do better.

For example, Martin elegantly explains how businesses emerge and evolve, through a multi-stage process he describes as the “knowledge funnel.” It begins with a mystery in which the fledgling innovator seeks to build a better mousetrap, such as how to organize all the world’s information. The business creates a heuristic or an intuitive sense of how to solve the mystery that allows it to offer an initial product or service. As it moves out of the exploration stage, it develops an algorithm to operate the business so that the core solutions are efficiently exploited.

To illustrate this end stage of the knowledge funnel he points to companies like McDonald’s. What started as mostly a guess that Americans would eat fast food became a highly mechanized process that is easily replicated with great efficiency. McDonald’s has no interest in stimulating employee creativity or innovation; just keep the burgers and fries coming. But blindly adhering to algorithms can cost dearly when a competitor, like Subway, brings new thinking and imagination to the same mystery.

The problem, according to Martin, is that some organizations are operated primarily by intuition while others are rigidly controlled by algorithms. His core message is that organizations guided by design thinking achieve a balance between the two so that both intuition and algorithms merge to keep the organization searching for and solving new mysteries while avoiding the extreme exploitation that leads to obsolescence. When businesses “satisfice,” content to keep exploiting old ideas, they are ultimately confronted by upstart competitors exploring new mysteries. Thus emerge disruptive innovations offering products or services that better meet people’s needs.

It’s a cycle that endlessly repeats itself, and higher education is equally susceptible. Consider higher education’s long reign with the same delivery and organizational structures over hundreds of years. It now is pressured by new competitors, many of them for-profit businesses offering low-cost, convenient options that leverage advanced educational technologies. The new mystery is how to deliver higher education in ways that are both affordable and sustainable, and that meet the needs of a new generation of both traditional and nontraditional learners.

Are there ways in which design thinking could help America retain its place as the crown jewel of the world’s higher education system? Admittedly, higher education is a unique industry owing to the vast independence of its primary employees, the faculty. Each faculty member is in his or her own way an independent agent trusted with the responsibility to deliver learning to the students and pursue a highly individualized research agenda. Learning is not a McDonald’s hamburger that can be manufactured on demand guided by a scientific algorithm designed to assure a predetermined outcome. What faculty do in classrooms is largely guided by intuition; there is no algorithm for great teaching.

Despite the differences that distinguish colleges and universities from the corporations that Martin profiles in his book, design thinking is a potential solution by which higher education institutions could create the balance between intuitive and algorithmic methods. But to make that happen both faculty and administrators need to take a closer look at what design thinking can do for organizations.

Martin’s book offers a case study that provides a good example: the turnaround at Procter & Gamble. In 2000, when A.G. Lafley was appointed CEO, P&G had lost market leadership to newer competitors across a wide range of its consumer products. As with higher education in 2010, P&G’s expenses were soaring while Walmart and others introduced cheaper, lower quality private-label products that attracted consumers away from P&G’s more expensive branded goods. Lafley needed to boost innovation at P&G while simultaneously becoming more efficient -- a blending of the intuitive and algorithmic sides of the organization.

In 2001 he appointed Claudia Kotchka to turn P&G into a design thinking organization. He invited outside designers to assist with the development of new products, a strategy not previously invoked at P&G, and by 2006 about 35 percent of P&G’s new products had origins outside the company. Perhaps the most critical change was obtaining a deeper understanding of the company’s consumers. Hair care product team members began to visit salons and homes to see how the products were actually used, and listened to the suggestions and complaints of consumers. Within three years of Lafley’s arrival P&G was achieving growth and recapturing market share in nearly every brand category.

The parallels between P&G at its weakest and the plight of many contemporary colleges and universities seem strong enough to suggest that design thinking is an idea worth considering. What might it look like to do so?

To follow the Procter & Gamble example: in higher education, students endlessly evaluate courses, but what’s lacking is a committed effort to observe students as they learn and then to listen to their concerns. In a design thinking culture, faculty and administrators would empathically put themselves in the place of the students at their own institution and elsewhere to fully understand how to improve what happens in and beyond the classroom.

Consider the perplexing conundrum presented by scholarly publishing. Faculty members produce research that, in order to achieve tenure, they give away to journal publishers. Publishers, particularly in science, technology and medicine, edit and package faculty’s intellectual property and sell it back to higher education institutions at high prices that require constant increases to library budgets. Despite years of discussion about the scholarly communications crisis, we still have only partial and little used potential solutions.

The scholarly communications crisis presents what Martin would describe as a “wicked problem,” one that requires more than analytical or intuitive thinking. In his book The Opposable Mind, Martin states that when neither option A nor B works, design thinkers must create option C that blends A and B, offering a new and completely untested solution. Proposals to solve the scholarly communications crisis tend to fall into two camps: different pricing models and open access. The former attempts an analytical solution by transferring the existing system to a new price model so money changes hands differently. The latter attempts an intuitive solution by encouraging scholars to distribute their manuscripts through free (to the reader) distribution systems. Could design thinkers develop a C solution?

The design thinker would start by unraveling the real problem that fuels the crisis, which might well be the nature of the research and tenure system itself. Identifying the problem is paramount. The design thinker (or design team) would talk to all the parties involved and learn as much as possible about the scholarly communication process from the experts, both authors and publishers.

Next, the design team would bring back all the pieces of information for analysis and process them in a brainstorming (“deep dive”) session. Out of the brainstorming would emerge prototypes for a new or modified system of scholarly communications. The team would test the prototypes deemed most promising, and the one that most closely approaches a “C” solution would emerge for implementation. That C solution might be some combination of a change in the tenure process and what counts as scholarship, a de-emphasis on publication in high-impact journals, an editing and publication process in which some publishers could participate, and options for self-publishing and archiving that are simpler, with clear benefits to faculty. In other words, some combination of existing practices and untested ideas that offers a completely new solution.

Design thinking is no panacea for all that ails higher education. Resolving challenges such as low retention and graduation rates, escalating textbook costs, an overdependence on adjunct faculty, lean budgets, for-profit competitors and myriad other problems will take more than a business-as-usual approach. However, higher education is not a business, and faculty and students will always respond caustically if they believe corporate solutions are being foisted on them by an unsympathetic administration.

This is where design thinking can make a difference. It’s more than a short-term strategy for boosting profits. It’s a roadmap for future-proofing one of society’s most valued resources. And since it involves no acquisition of or investment in sophisticated new technology, only a desire to try a new way of identifying and tackling institutional challenges, it’s right for the times. Those who want to engage with these ideas can begin with the Deep Dive video mentioned above or choose from a host of blogs written by experts in design thinking and user experience.

In 1972 Cohen, March and Olsen introduced the Garbage Can Theory of decision making as an effort to create a predictive model for how decisions are made in higher education organizations. The model describes colleges and universities as “organized anarchies” that make their decisions by heaping multiple solutions into garbage cans. The detached solutions in the garbage can have no utility until a problem presents itself to which one of them can be attached. For too long the organized anarchy label has proven itself relatively accurate in describing what derails progress in higher education.

Design thinking, based on the premise of correctly identifying the problem before developing solutions, is as far removed from the garbage can theory as a decision making model can be. What is relatively the same since Cohen, March and Olsen devised their model is that higher education still confronts what Martin calls the “wicked problem,” a challenge that is not merely complex but is characterized by ambiguity, shifting qualities and no clear solution. Design thinking may be just what higher education needs to clean up its garbage can.

Steven Bell is associate university librarian at Temple University and co-author of the book Academic Librarianship by Design. He blogs at Designing Better Libraries and From the Bell Tower.

Course Evaluations, Years Later

Just recently I got a set of teaching evaluations for a course that I taught in the fall of 2008 -- and another set for a course I taught in 2006.

This lag wasn't the fault of campus mail (it can be slow, but not that slow). Instead, the evaluations were part of a small experiment with long-delayed course assessments, surveys that ask students to reflect on the classes that they have taken a year or two or three earlier.

I've been considering such evaluations ever since I went through the tenure process a second time: the first was at a liberal arts college, the second two years later when I moved to a research university. Both institutions valued teaching but took markedly different approaches to student course evaluations. The research university relied almost exclusively on the summary scores of bubble-sheet course evaluations, while the liberal arts college didn't even allow candidates to include end-of-semester forms in tenure files. Instead they contacted former students, including alumni, and asked them to write letters.

In my post-tenure debriefing at the liberal arts college, the provost shared excerpts from the letters. Some sounded similar to comments I would typically see in my end-of-semester course evaluations; others, especially those by alumni, resonated more deeply. They let me know what in my assignments and teaching had staying power.

But how to get that kind of longitudinal feedback at a big, public university?

My first try has been a brief online survey sent to a selection of my former students. Using SurveyMonkey, I cooked up a six-item questionnaire. I'm only mildly tech-savvy and this was my first time creating an online survey, but the software escorted me through the process quickly and easily. I finished in half an hour.

Using my university's online student administration system, I downloaded two course rosters -- one from a year ago, one from three years ago. I copied the e-mail address columns and pasted them into the survey. Eight clicks of the mouse later I was ready to send.
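
For anyone who would rather script that roster step than copy and paste, a minimal sketch follows. The file names and the "Email" column header are assumptions for illustration, not what the student administration system actually exports; the copy-and-paste route above works just as well.

    # Illustrative sketch only: read two exported roster CSVs and build a
    # de-duplicated invitation list. File and column names are assumed.
    import csv

    roster_files = ["roster_fall2006.csv", "roster_fall2008.csv"]  # hypothetical names
    emails = []

    for path in roster_files:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                address = row.get("Email", "").strip()  # assumed column header
                if address:
                    emails.append(address)

    # Drop duplicates while preserving order, then write one address per line
    # for pasting into the survey tool's invitation field.
    unique_emails = list(dict.fromkeys(emails))
    with open("survey_invitations.txt", "w") as out:
        out.write("\n".join(unique_emails))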

I sent the invitation to two sections of a small freshman honors English seminar I teach every other year. This course meets the first-year composition requirement and I teach it with a focus on the ways that writing can work as social action, both inside and outside the academy. During the first half of the semester students engage with a range of reading -- studies of literacy, theories of social change, articles from scholarly journals in composition studies, short stories and poems keyed to questions of social justice, essays from Harper's and The New York Times Magazine, papers written by my former students -- and they write four essays, all revised across drafts. During the latter part of the semester students work in teams on service-learning projects, first researching their local community partner organizations and then doing writing projects that I have worked out in advance of the semester with those organizations.

I taught the course pretty much the same in fall 2008 as I did in fall 2006, except that in 2008 I introduced a portfolio approach to assessment that deferred much of the final paper grading until the end of the course.

Through my online survey I wanted to know what stuck -- which readings (if any) continued to rattle around in their heads, whether all the drafting and revising we did proved relevant (or not) to their writing in other courses, and how the service experience shaped (or didn't) any future community engagement.

My small sample size -- only 28 (originally 30, but 2 students from the original rosters had left or graduated) -- certainly would not pass muster with the psychometricians. But the yield of 18 completed surveys, a response rate of over 60 percent, was encouraging.

I kept the survey short -- just six questions -- and promised students that it would take five to ten minutes of their winter break and that their responses would be anonymous.

The first item asked them to signal when they had taken the course, in 2006 or 2008. The next two were open-ended: "Have any particular readings, concepts, experiences, etc. from Honors English 1 stayed with you? If so, which ones? Are there any ways that the course shaped how you think and/or write? If so, how?" and "Given your classwork and experiences since taking Honors English 1, what do you wish would have been covered in that course but wasn't?" These were followed by two multiple-choice questions: one about their involvement in community outreach (I wanted to get a rough sense of whether the service-learning component of the course had or hadn't influenced future community engagement); and another that queried whether they would recommend the course to an incoming student. I concluded with an open invitation to comment.

As might be expected from a small, interactive honors seminar, most who responded had favorable memories of the course. But more interesting to me were the specifics: they singled out particular books, stories, and assignments. Several of those I was planning to keep in the course anyway; a few I had been considering replacing (each semester I fiddle with my reading list), and the student comments rescued them.

I also attended to what was not said. The readings and assignments that none of the 18 mentioned will be my prime candidates for cutting from the syllabus.

Without prompting, a few students from the 2008 section singled out the portfolio system as encouraging them to take risks in their writing, which affirms that approach. Students from both sections mentioned the value of the collaborative writing assignments (I'm always struggling with the proportion of individual versus collaborative assignments). Several surprised me by wishing that we had spent more time on prose style.

I also learned that while more than half of the respondents continued to be involved in some kind of community outreach (not a big surprise because they had self-selected a service-learning course), only one continued to work with the same community partner from the course. That suggested that I need to be more deliberate about encouraging such continuity.

In all, the responses didn't trigger a seismic shift in how I'll next teach the course, but they did help me revise with greater confidence and tinker with greater precision.

I am not suggesting that delayed online surveys should replace the traditional captive-audience, end-of-semester evaluations. Delayed surveys likely undercount students who are unmotivated or who had a bad experience in the course and miss entirely those who dropped or transferred out of the institution (and we need feedback from such students). Yet my small experiment suggests that time-tempered evaluations are worth the hour it takes to create and administer the survey.

Next January, another round, and this time with larger, non-honors courses.

Tom Deans is associate professor of English at the University of Connecticut.

Accreditation 2.0

After years of dialogue, debate and deliberation, we are at the beginning of the next generation of accreditation. An “Accreditation 2.0” is emerging, one that reflects attention to calls for change while sustaining and even enhancing some of the central features of current accreditation operation.

The emerging consensus stems from three major national conversations, all focused on accreditation and accountability, all with roots in much older discussions and all intensified by the heightened national emphasis on access to and attainment of quality higher education. Taken together, these conversations, despite their differences, provide the foundation for the future and a next iteration: Accreditation 2.0.

Three Conversations

The first major conversation is led by the academic and accreditation communities themselves. It focuses on how accreditation is addressing accountability, with particular emphasis on the relationship (some would say tension, or even conflict) between accountability and institutional improvement. The discussion frequently includes consideration of common expectations of general education across all institutions as well as the need to more fully address transparency. This conversation takes place at meetings of higher education associations and accrediting organizations and has been underway since the 1980s, when the assessment movement began.

The second conversation is led by critics of accreditation who question its effectiveness in addressing accountability, some of whom even want to jettison the public policy role of accreditation as a gatekeeper or provider of access to federal funds. These critics often argue that conflicts of interest are inherent in accreditation as a result of peer review and the current funding and governance of the enterprise. The most recent version of this conversation was triggered by the 2005-6 Spellings Commission and continues today in various associations and think tanks.

The third conversation is led by federal officials who also focus on the gatekeeping role of accreditation. In contrast to the call in the second conversation to eliminate this function, attention here is on expanding use of the gatekeeping role of accreditation – to enforce expanding accountability expectations at the federal level.

Convergence

As different as the three conversations are, they reflect some shared assumptions or beliefs about quality in higher education and the role of accreditation. All acknowledge that accreditation provides value in assuring and improving quality, though views differ about how much value and in what way. All are based on a belief that accreditation needs to change, though in what way and at what pace is seen differently. All accept that accountability must be addressed in a more comprehensive and robust way – though they disagree about how to go about this.

The elements common to these conversations provide a foundation, an opportunity, for thinking about a next generation of accreditation or an “Accreditation 2.0.” They provide a basis to fashion the future of accreditation by strengthening accountability and enhancing service to the public while maintaining the benefits of quality improvement and peer review.

Some Thoughts About an Accreditation 2.0

The emerging Accreditation 2.0 is likely to be characterized by six key elements. Some are familiar features of accreditation; some are modifications of existing practice; some are new:

  • Community-driven, shared general education outcomes.
  • Common practices to address transparency.
  • Robust peer review.
  • Enhanced efficiency of quality improvement efforts.
  • Diversification of the ownership of accreditation.
  • Alternative financing models for accreditation.

Community-driven, shared general education outcomes are emerging from the work of institutions and faculty, whether through informal consortiums, higher education associations or other means of joining forces. The Essential Learning Outcomes of the Association of American Colleges and Universities, the Collegiate Learning Assessment and the Voluntary System of Accountability of the Association of Public and Land-grant Universities all provide for agreement across institutions about expected outcomes. This work is vital as we continue to address the crucial question of “What is a college education?” Accreditors, working in partnership with institutions, assure that these community-driven outcomes are in place and that evidence of student achievement is publicly available as well as used for improvement.

Common practices to address transparency in Accreditation 2.0 require that accredited institutions and programs routinely provide readily understandable information to the public about performance. This includes, for example, completion of educational goals, including graduation, success with transfer, and entry to graduate school. Second, accrediting organizations would provide information to the public about the reasons for the accredited status they award in the same readily understandable style, perhaps using an audit-like instrument such as a management letter. A number of institutions and accreditors already offer this transparency. Accreditation 2.0 would mean that it becomes standard practice.

Robust peer review -- colleagues reviewing colleagues -- is a major strength of current accreditation, not a weakness as some critics maintain. It is the difference between genuine quality review and bureaucratic scrutiny for compliance. Peer review serves as our most reliable source of independent and informed judgment about the intellectual development experience we call higher education. In the current environment, peer review can be further enhanced through, for example, encouraging greater diversity of teams, including more faculty and expanding public participation. As such, peer review has a prominent place in Accreditation 2.0, just as it plays a major role in government and other nongovernmental organizations in research, medicine and the sciences, among other fields.

Enhanced efficiency of quality improvement efforts builds on the enormous value of the “improvement” function in current accreditation. Improvement is about what an institution learns from its own internal review and the peer review team that prompts it to make changes to build on strengths or address perceived weaknesses. This is the dimension of accreditation to which institutions and programs most often point when speaking to the value of the enterprise.

However, for the limited number of institutions that are experiencing severe difficulties in meeting accreditation standards but remain “accredited” for a considerable number of years, there can be a downside for students and the public. Students enroll, but may have trouble graduating or meeting other educational goals because of weaknesses of the institution that were identified in the accreditation review, even as the institution is trying to improve and remedy these difficulties. Accreditation 2.0 can include means to assure more immediate institutional action to address the weaknesses and prevent their being sustained over long periods of time.

Diversification of the ownership of accreditation can provide for additional approaches to the process and even additional constructive competition, as well as provide a response to allegations of conflict of interest. At present, most accrediting organizations are either owned and operated by the institutions or programs they accredit or function as extensions of professional bodies. However, there is nothing to stop other parties interested in quality review of higher education from establishing accrediting organizations and obtaining the legal authority to operate. Accreditation 2.0 can encourage exploration of this diversification that can be a source of fresh thinking about sustaining and enhancing quality in higher education. Private foundations or nonprofit citizen groups, for example, can make excellent owners of accrediting organizations.

Alternative financing models for accreditation call for separating the reviews of individual institutions and programs from the financing of an accrediting organization. In Accreditation 1.0, most accreditors are funded through the fees they charge individual institutions and programs for their periodic accreditation review and for the annual operating costs of the accrediting organization – with the latter a condition of keeping accredited status. This mode of financing is viewed by some as an inappropriate enticement to expand the organization’s numbers of accredited institutions and programs and by others as a conflict of interest or disincentive to impose harsh penalties on institutions that might diminish membership numbers. It can create problems for some accreditors, especially smaller operations.

In Accreditation 2.0, an “accreditation bank” might be established by a third party, neither the accrediting organization nor the party seeking accreditation. Institutions and programs interested in investing in the accreditation enterprise would pay into the bank annually, independent of individual reviews. Alternative sources of financing include third parties such as private foundations and endowments.

*****

Accreditation 2.0 builds on the emerging consensus across the major national conversations about accreditation and accountability. It is one means to strengthen accreditation, but not at the price of some of Accreditation 1.0’s most valuable features. It keeps key academic decisions in the hands of institutions and faculty. It strengthens accountability, but through community-based decisions about common outcomes and transparency. It maintains the benefits of peer review, yet opens the door to alternative thinking about the organization, management and governance of accreditation.

Judith Eaton is president of the Council for Higher Education Accreditation, which is a national advocate for self-regulation of academic quality through accreditation. CHEA has 3,000 degree-granting colleges and universities as members and recognizes 59 institutional and programmatic accrediting organizations.

Institute for the Development of Excellence in Assessment Leadership (IDEAL)

Date: Mon, 08/02/2010 to Fri, 08/06/2010
Location: Baltimore, Maryland, United States
