Competency-based education puts efficiency before learning (essay)

When concerns about the quality of education swept the nation in the 1990s, test results were said to promise a reliable measure of instructional effectiveness. They offered a way to make comparisons across teachers, schools and students, all while assuring good value for Americans’ tax or tuition dollars. Faith in data, long built into U.S. educational practices, now came to support the ideal of schooling as a fair, honest, and well-managed service. The costs to Americans of public or private education would now need to be justified by those doing the educating.

Unfortunately, that justification, like any economic calculation, started from presumptions about what is worth paying for, and increased public spending on poorer communities was not on the table. The weaker performances of under-resourced urban or rural schools called forth not more public funding but less under No Child Left Behind. However precise its format and consistent its application, measurement in this instance served entirely subjective ideas about public good, and old race, class and geographic differentials were reproduced.

That standards-based heart of No Child Left Behind beats on in current advocacy for outcomes as the main drivers of educational design and evaluation. New metrics such as President Obama’s “College Scorecard” have helped make the idea of a measurable educational “return on investment” meaningful to schools and to students and their families. And this strong emphasis on the free market as a means of quality assurance in teaching and learning continues to spread.

For example, in “competency-based learning,” the organization of higher education shifts from the familiar credit hour system to one based on assessments of student mastery of skills and content. This means that familiar units such as courses, or classroom and contact hours, may disappear altogether in some programs. It also means that students pay for credentials not on the basis of certain numbers or types of instructional activities undertaken in a degree program, but on the basis of their own educational achievements.

A kind of industrial model of efficiency and market competition emerges in competency-based education. Advocates for this shift point to lowered tuition costs as classroom time, faculty wages and other institutional expenditures are reduced (the same savings often used to justify the use of MOOCs). And Lumina Foundation’s Jamie Merisotis predicts gains in quality control because colleges and students will undertake measurement of “what is learned” rather than “what is taught.” Federal officials also firmly endorsed competency-based college programs earlier this year by declaring them eligible for Title IV financial aid.

But learning is poorly served by such supposed efficiencies. There is a fundamental inequity in the character of competency-based education as a kind of scrimping: money is “saved,” supposedly in the interest of affordability and inclusion, in a way that actually achieves only social demarcation. Those students with the least money to spend on college will not be walking away with the same product as their more affluent fellow enrollees, uplifting rhetoric notwithstanding. Budget versions of education, like budget surgery or car repairs, are no bargain. In such outcomes-focused college curriculums, stripped of “unnecessary” instruction, open-ended liberal learning is easily deemed wasteful. And so much for the profoundly energizing (and developmentally crucial) experience of encountering messy, uncertain arguments -- of experiencing cognition without identifiable outcomes. The distance will grow between the student who can afford traditional university instruction and the one who needs to save money.

We should be careful not to presume that those who teach in competency-based programs are necessarily weaker or less committed instructors. Yet, if a pre-set body of skills, identifiable upon graduation, is what demarcates one program from another in this kind of higher education, bringing revenue and market share to a school, in whose interest is an inventive classroom experience, or one that leads to diverse intellectual experiences for different students? What faculty member will take pedagogical risks or welcome the challenging student?

There’s an important echo here, I think, with recently renewed interest in K-12 classroom tracking. Proponents of that practice interviewed by The New York Times argue that such tracking matches the level, speed and style of teaching more closely to divergent student needs than any single, unified classroom can. It sounds like an inclusive reform. But both trends threaten a kind of separate but equal educational system, reasserting group identities even as they claim to customize education. They do so through projections of how best to distribute resources in our society, and also through more subtle projections of student abilities and the assertion that such abilities may be predicted.

Both propose tiered education on the presumption that underachievement and differentials in life opportunities are not something we can try to prevent. Tracking and competency-based education both assert that solutions to missing or poorly executed education involve reshaping student experiences, not expanding resources. That’s a very different ideology from the one that fueled compensatory programs of the 1970s. Those initiatives managed to accommodate diverse learning styles and paces while also bolstering educational provisions for disadvantaged communities.

Competency-based education, for its part, engages in some extraordinarily selective definitions of efficiency and inclusion. The results-based model of higher education supposedly weds quality control to flexibility; some competency-based programs give equal credit for students’ classroom, online, life-experience and video-, book- or game-based learning. Those students who are shown through assessment to have pertinent skills are credentialed, however those skills were obtained; they need not pay for “unneeded credits.” For federal supporters of this scheme and approving think-tank voices, standards in each subject will reliably determine what is worth knowing and what learning counts. They also assure that the “consumer” will be well-served throughout.

Let’s think about this. A conflict of interest certainly resides in a system whereby educational providers measure learning outcomes in their own institutions. But to be fair, that conflict can afflict any instructional effort, whether good performance promises a school more revenue, more public funding or simply greater prestige. Competency-based education, however, seems systematically to deny criticality about its own operations. It uses only its own terms to judge its success. That’s troubling. If educational standards are conflated with the instruments of industry, we should not be surprised to encounter the self-serving methods of industrial quality control. Here, as in a profitable factory, the system claims a basis in economies and managerial oversight, the supposedly no-lose technics of mass-production. But industry standards invariably best serve their creators.

The multi-tiered and modular have certainly long been the American educational way. The new instructional models simply extend older beliefs in natural distributions of talent and diligence, in inborn differentials of cognition and character. Calling such schooling “diverse,” “flexible,” or “customer focused” will not make it democratic.

In outcomes-focused education, I see strong support for the idea that each individual who enters the classroom, aged 5, 15 or 25, is one with predetermined potential, with an identifiable niche on the ladder of aptitude that will match with a certain amount and kind of instruction. High or low, that ascription of talent is more than merely a subjective judgment; it is an iniquitous one: the customized learning experiences currently being praised proceed from the idea that an individual can be known by such categories and then placed in an appropriate position in a classroom or curriculum. Ultimately, that placement will extend to the employment ladder as well. These so-called innovations promise not enriched learning and expanded opportunity, but outward-rippling discrimination.

Amy Slaton is a professor of history in the department of history and politics at Drexel University.

Voluntary performance measures from Gates-backed group

Diverse group of 18 institutions, with Gates's backing, releases new set of metrics to measure colleges' performance and return on investment.

Higher ed discovers competency, again (essay)

Every spring, it seems, higher education finds something attractive in the flower pollen. This year, it is the discovery of competence as superior to course credits, and an embrace of that notion in ways suited to the age and its digital environments. This may be all well and good for the enterprise, as long as we acknowledge its history and key relationships over many springs.

Alverno offered authentic competency-based degrees in the 1970s (as did a few others at the periphery of our then institutional universe), and, for those who noticed, started teaching us what assessing competence means. Competence vaulted over credits in the 1984 higher education follow-up to "A Nation at Risk," blandly titled "Involvement in Learning." In fact, 9 of the 27 recommendations in that federal document addressed competence and assessment (though the parameters of the assessments recommended were fuzzy). Nonetheless, "Involvement" gave birth to the “assessment movement” in higher education, and, for a few years, some were hopeful that faculty and administrators would take advantage of the connections between their regular assignments and underlying student behaviors in such a way as to improve those connections in one direction, improve their effects on instruction in another, and provide evidence of impact to overseers public and private. There were buds on the trees.

But the buds did not fully blossom. Throughout the 1990s, “assessment” became mired in scores of restricted response examinations, mostly produced by external parties, and, with those examinations, “value added” effect size metrics that had little to do with competence and even less impact on the academic lives of students. The hands of faculty -- and their connecting stitching of instruction, learning objectives, and evidence -- largely disappeared. The educati took over; and when another spring wind brought in business models of TQM and CQI and Deming Awards, assessment got hijacked, for a time, by corporate approaches to organizational improvement which, for better or worse, nudged more than a few higher education institutions to behave in corporate ways.

Then cometh technology, and in four forms:

First, as a byproduct of the dot-com era, the rise of industry and vendor IT certifications. We witnessed the births of at least 400 of these, ranging from the high-volume Microsoft Certified Systems Engineer to documentation awards by the International Web Masters Association and the industrywide CompTIA. It was not only a parallel postsecondary universe, but one without borders, and based in organizations that didn’t pretend to be institutions of higher education. Over 2 million certifications (read carefully: I did not call them “certificates”) by such organizations had been issued worldwide by 2001, and, no doubt, some multiple of that number since. No one ever kept records as to how many individuals this number represented, where they were located, or anything about their previous levels of education. Credits were a foreign commodity in this universe: demonstrated competence was everything. Examinations delivered by third parties (I flunked 3 of them in the course of writing an analysis of this phenomenon) documented experience, and an application process run by the vendor determined who was anointed.

No one knows whether institutions of higher education recognized these achievements, because no one ever asked.  The only question we knew how to ask was whether credit was granted for different IT competencies, and, if so, how much. Neither governments nor foundations were interested. The IT certification universe was primarily a corporate phenomenon, marked in minor ways, and forgotten.

Second, the overlapping expansion of online course and partial-course delivery by traditional institutions of higher education. This was once known as “distance education,” delivered by a combination of television and written mail-in assignments, administered typically by divisions on the periphery of most IHEs. Only when computer network systems moved into large or multicampus institutions could portions of courses be broadly accessed, but principally by resident or on-site students. Broadband and wireless access in the mid-1990s broke the fence of residency, though in some disciplines more than others. Some chemistry labs, case study analyses, cost accounting problems, and computer programming simulations could be delivered online. These were partial deliveries in that they constituted those slices of courses that could be technologically encapsulated and accessed at the student’s discretion. “Distance education” was no longer the exclusive purview of continuing education or extension divisions: it was everywhere.

Were the criteria for documenting acceptable student performance expressed as “competencies,” with threshold performance levels? Some were; most were not. They were pieces of course completion, and with completion, the standard award of credits and grades. They came to constitute the basis for more elaborated “hybrid” courses, and what is now called “blended” delivery.

Third, the rise of the for-profit, online providers of full degree programs. If we could do pieces of courses online, why not whole courses? Why not whole degree programs -- and sell them? Take a syllabus and digitize its contents, mix in some digital quizzes and final exams (maintain a rotating library of both). Acquire enough syllabuses, and you have a degree. But not in every field, of course. You aren’t going to get a B.S. in physics online -- or biology, agricultural science, chemistry, engineering of any kind, art, or music (pieces, yes; whole degrees, no).

But business, education, IT, accounting, finance, marketing, health care administration, and psychology? No problem! Add online advisers, e-mail exchanges both with the instructor and with small groups of students labeled a “section,” and the enterprise begins to resemble a full operation. The growing market of space-and-time-mobile adults makes it easy to avoid questions about high school preparation and SAT scores. A lot of self-pacing and flexibility for those space-time-mobile students. Adding a few optional hybrid courses means leasing some brick-and-mortar space, but that is not a burden. Make sure a majority of faculty who write the content that gets translated into courseware hold Ph.D.s or other appropriate terminal degrees, obtain provisional accreditation, market and enroll, start awarding paper, become fully accredited and, with it, gain Title IV eligibility for enrollees, and ... voila! But degree criteria were still expressed in terms of courses/credits.

Fourth, the MOOCs, a natural extension of combinations of the above. “Distance education” for whoever wants it and whenever they want it; lecture sets, except this time principally by the “greats,” delivered almost exclusively from elite universities, big audiences, no borders (like IT certifications), and standard quizzes and tests -- if you wish to document your own learning, regardless of whether credit would ever be granted by anybody. You get what you came for -- a classic lecture series. Think about what’s missing here: papers, labs, fieldwork, exhibits, performances. In other words, the assignments through which students demonstrate competency are absent because they cannot be implemented or managed for crowds of 30,000, let alone 100,000 -- unless, of course, the framework organization (not a university) limits attendees (and some have) to a relatively elite circle.

Everyone will learn something, no doubt, whether or not they finish the course. The courses offered are of a limited range, dependent on the interests (in teaching as well as themes of research) of the “greats” or on the rumblings of state legislators demanding a constricted set of “gateways” so as to relieve enrollment pressures. These are signature portraits, and as the model expands to other countries and into other languages, we’ll see more signatures. But signatures cannot be used as proxies for competencies, any more than other courses can be used that way. There is nothing wrong with them otherwise. They serve the equivalent of all those kids who used to sit on the floor of the former Borders on Saturdays, reading for the Java2 platform exam.

This time, though, we sit on the floor for the insights of a great mind or for basic understanding of derivatives and integrals. If this is what learners and legislators want, fine! But let’s be clear: there are no competencies here. And since degrees are not at issue, there are no summative comprehensive judgments of competence, either.

The Discontents

Obviously missing across all of these technologies, culminating in the current fad for MOOCs, is the mass of faculty, including all our adjuncts -- and hence the potential within-course assignments linked to student-centered learning behaviors that demand and can document competencies of different ranges. Missing, too: within-institutional collaboration, connections, and control. However a MOOC twists and turns, those advocating formal credit relationships with the host organizations of such entities are handing over both instruction and its assessment to third parties -- and sometimes fourth parties. There is no organic set of interactions we can describe as teaching-and-learning-and-judgment-and-learning-again-and-teaching-again-and-judging-again. At the bottom line, there are, at best, very few people on the teaching and judging side. Ah, technology! It leaves us no choice but to talk about credits.

And then there is that word on every 2013 lip of higher education, “competence.” Just about everyone in our garden uses the word as a default, but nobody can tell you what it is. In both academic and non-academic discourse, “competence” seems to mean everything and hence nothing. We have cognitive, social, performance, specialized, procedural, motivational, and emotional competencies. We have one piled on top of another in the social science literature, and variation upon variation in the psychological literature.

OECD ran a four-year project to sort through the thickets of economic, social, civil, emotional, and functional competencies. The related literature is not very rewarding, but OECD was not wrong in its effort: what we mean and want by way of competence is not an idle topic. Life, of course, is not higher education, and one’s negotiation of life in its infinite variety of feeling and manifestation does not constitute the set of criteria on which degrees are awarded. Our timeline is more constrained, and our variables closer at hand.  So what are all the enthusiasts claiming for the “competence base” of online degrees or pieces, such as MOOCs, that may become part of competence-based degrees (whatever that may mean)?  And is there any place that one can find a true example?

We are not talking about simple invocations of tools such as language (just about everyone uses language) and “technology” (the billion people buried in iPhones or tweeting certainly are doing that, and have little trouble figuring out the mechanics and reach of the next app).        

Neither are the competencies required for the award of credentials those of becoming an adult. We don’t teach “growing up.” At best, higher education institutions may facilitate it, but that doesn’t happen online, where authentic personal interactions (hence major contributors to growing up) are limited to e-mails, occasional videos, and some social media. Control in online environments is exercised by whoever designed the interaction software, and one doesn’t grow up with third-party control.

At the core of the conundrum is the level of abstraction with which we define a competence. For students, current and prospective, that level either locks or unlocks understanding of what they are expected to do to earn a credential.  For faculty, that level either locks or unlocks the connection between what they teach or facilitate and their assignments.  Both connections get lost at high levels of abstraction, e.g., “critical thinking” or “teamwork,” that we read in putative statements of higher education outcomes that wind up as vacuous wishlists.  Tell us, instead, what students do when they “think critically,” what they do in “teamwork,” and perhaps we can unlock the gate using verbs and verb phrases such as “differentiate,” “reformulate,” “prioritize,” and “evaluate” for the former, and “negotiate,” “exchange,” and “contribute” for the latter.  Students understand such verbs; they don’t understand blah.

How “Competence” in Higher Education Should be Read

How will we know it if we see it?  One clue will be statements describing documented execution of either related cognitive tasks or related cognitive-psychomotor tasks. To the extent to which these related statements are not discipline-specific (though they may be illustrated in the context of disciplines and fields) they are generic competencies.  To the extent to which these related statements are discipline- or field-specific, they are contextual competencies.  In educational contexts, the former are benchmarks for the award of credentials, the latter are benchmarks for the award of credentials in a particular field.  All such statements should be grounded in such active verbs as assemble, retrieve, differentiate, aggregate, create, design, adapt, calibrate, and evaluate. These language markers allow current and prospective students to understand what they will actually do. These action verbs lead directly and logically to assignments that would elicit student behaviors that allow faculty to judge whether competencies have been achieved.  Such verbs address both cognitive and psychomotor activities, hence offer a universe that addresses both generic performance benchmarks for degrees and subject-specific benchmarks in both occupationally-oriented and traditional arts and sciences fields.
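Consider a small illustration. The sketch below (Python, purely hypothetical: the statements, verbs and assignment names are invented for this example, not drawn from any actual program) shows how a mush noun like “critical thinking” might be decomposed into verb-grounded competency statements, each pointing to an assignment that could elicit and document the behavior:

    # A hypothetical decomposition of an abstract outcome into
    # operational, verb-grounded competency statements. Each statement
    # names an action a student performs and an assignment that can
    # elicit and document it.

    abstract_outcome = "critical thinking"  # the "blah" students cannot act on

    competencies = [
        ("differentiate",
         "Differentiate empirical claims from normative claims in a policy text",
         "annotated analysis of an op-ed"),
        ("reformulate",
         "Reformulate an argument after identifying its weakest premise",
         "revision memo on a first-draft essay"),
        ("prioritize",
         "Prioritize competing explanations against the available evidence",
         "discussion section of a lab report"),
        ("evaluate",
         "Evaluate a proposed solution against explicit criteria",
         "juried design review"),
    ]

    for verb, statement, assignment in competencies:
        print(f"{verb:>13}: {statement} -> documented via {assignment}")

Students understand what each of those lines asks them to do; none of them could act on the bare label at the top.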

Competencies are not wishlists: they are learned, enhanced, expanded; they mark empirical performance, and a competency statement either directly — or at a slant — posits a documented execution.  Competencies are not “abilities,” either.  In American educational discourse, “ability” should be a red-flag word (it invokes both unseemly sides of genetics and contentious Bell curves), and, at best, indicates only abstract potential, not actualization.  One doesn’t know a student has the “ability” or “capacity” to do something until the student actually does it, and the “it” of the action is the core of competence.

What pieces of the various definitions of competence fit in a higher education setting where summative judgments are levied on individuals’ qualifications for degrees?

  • the unit of analysis is the individual student;
  • the time frame for the award of degrees is sometimes long and often uneven;
  • the actions and proof of a specific competence can be multiple and take place in a variety of contexts over that long and uneven time frame;
  • cognitive and/or psychomotor prerequisites of action and application are seen and defined in actions and applications, and not in theories, speculations, or goals;
  • the key to improving any configuration of competencies lies in feedback, clarification questions, and guidance, i.e., multiple information exchange;
  • there is a background hum of intentionality in a student’s motivation and disposition to prove competence; faculty do not teach motivation, intentionality, and disposition — these qualities emerge in the environment of a formal enterprise dedicated to the generation and distribution of knowledge and skills; they are in the air you breathe in institutions of higher education;
  • competencies can be described in clusters, then described again in more discrete learning outcome statements;
  • the competencies we ascribe to students in higher education are exercised and documented only in the context of discipline-based knowledge and skills, hence in courses or learning experiences conducted or authorized by academic units;
  • that is, the Kantian maxim applies: forms without intuitions are empty; we can describe the form, the generic competence, without reference to field-specific knowledge, but the competence is only observed and documented in field-specific contexts;
  • the Kantian maxim works in the other direction, too: intuitions without forms are blind, i.e., if we think about it carefully, we don’t walk into a laboratory and simply learn the sequence of proper titration processes, nor are the lab specifications simply assigned.  Rather, there is an underlying set of cognitive forms for that sequence — planning, selection, timing, observation, recording, abstracting — that, together, constitute the prerequisite competencies that allow the student to enact the Kantian sentence.                                    

When Technology and Competence Intersect

How does all this interact with current technological environments?  First, acknowledge that institutions, independent sponsors, vendors, and students will use the going technologies in the normal course of their work in higher education.  That’s a given, and, in a society, economy, and culture that surrounds our daily life with such technologies, students know how to use them long before they enter higher education.  They are like musical instruments, yes, in that it takes practice to use them sufficiently well, but unless you are writing code or designing Web navigation systems, there’s a cap on what “sufficiently well” means, and abetted by peer interactions, most students hit that cap fairly easily.

Second, there are a limited number of contexts in which competencies can be demonstrated online. For example, laboratory science simulations can’t get to stages at which smell or texture comes into play (try benzene, characterized as an aromatic compound for a good reason); studio art is limited in terms of texture and materials; plants do not grow for you in simulations to measure for firmness in agricultural science. Culinary arts? When was the last time you tasted a Beef Wellington online? Forget it!

Third, if improvement of competency involves a process of multiple information-exchange, with the student contributing clarification questions, there are few forms of technological communication that allow for this flexibility, with all its customary pauses and tones. Students cannot be assisted in the course of assignments that take place beyond the broadband classroom, e.g., ethnographic field work. Those students who have attained a high degree of autonomy might be at home in a digital environment and can fill in the ellipses; most students are not in that position, and require conversation and consultation in the flesh. And since when did an online restricted-response exam provide more than a feedback system that explains why your incorrect answer was incorrect? You may not understand two of the four explanations -- and there is no further loop to help you out other than sending you back to a basal level that lies far outside the exam.

All of that is part of the limited universe of assessment and assignments in digital environments, and hence part of the disconnect between what is assumed to be taught, what is learned, and whether underlying competencies are elicited, judged, and linked.  People do all these jobs; circuits don’t.

So much for what we should see. But what do we see? Not much. Not from the MOOC business; not from the online providers of full degree programs; not from most traditional institutions of higher education. Pretend you are a prospective student, go online to your sample of these sources, and see if you can find any competency statements -- let alone those that tell you precisely what you are going to do in order to earn a degree. You are more likely to see course lists, offerings, credit blocks, and sequences as proxies for competence. You are more likely to read dead-end mush nouns such as “awareness,” “appreciation,” and the champion mush of them all -- “critical thinking.” None of these are operational cognitive or psychomotor tasks. None of these indicate the nature of the execution that will document your attainment. The recitations, if and when you find them, fall like snow, obliterating all meaningful actions and distinctions.

So Where Do We Turn in Higher Education?

There’s only one document I know of that can get us halfway there, and it is more an iterative process than a document -- a process that will take a decade to reach a modicum of satisfaction. Departing from both customary practice and language is the Degree Qualifications Profile (DQP), set in iterative motion by the Lumina Foundation in early 2011, and for which, in the interests of full disclosure, I was one of four authors. What does it do? What did we have in mind? And how does it address the frailties of both technology and the language of competence?

Its purposes are to provide an alternative to metric-driven “accountability” statements of IHEs, and to clarify what degrees mean using statements of specific generic competencies. Its roots are in what other countries call “qualification frameworks,” as well as in a discipline-specific cousin called tuning (in operation in 60 countries, including five state systems in the U.S.). The first edition DQP includes 19 competencies at the associate level, 24 for the bachelor’s, and 15 for the master’s -- all irrespective of field. The competencies are organized in five archipelagos of knowledge, intellectual skills, and applications, and all set up in a ratcheting of challenge level from one degree to the next.  They are summative learning statements, describing the documented execution of cognitive tasks -- not credits and GPAs -- as conditions for the award of degrees. The documented execution can take place at any time in a student’s degree-level career, but principally through assignments embedded in course-based instruction (though that does not exclude challenge examinations or other non-course based assessments). However course-based the documentation might be, the DQP is a degree-level statement and courses cannot be used as proxies for what it specifies. Competencies as expressed here, after all, can be demonstrated in multiple courses.
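A rough structural sketch may help (Python; the area names below are paraphrases of the five “archipelagos,” not the document’s exact headings, and the award logic is my own shorthand for “documented execution, not credits”):

    # Sketch of the first-edition DQP as described above: 19/24/15
    # competencies at the associate/bachelor's/master's levels, grouped
    # into five areas, with the award of a degree resting on documented
    # execution of every required competency rather than on credits or GPA.

    COMPETENCY_COUNTS = {"associate": 19, "bachelor's": 24, "master's": 15}

    AREAS = [  # paraphrased names for the five "archipelagos"
        "specialized knowledge",
        "broad, integrative knowledge",
        "intellectual skills",
        "applied learning",
        "civic learning",
    ]

    def degree_awarded(required: set, documented: set) -> bool:
        # Documentation may come from any course or authorized learning
        # experience, at any point in the student's degree-level career.
        return required.issubset(documented)

The point the sketch makes is that courses appear nowhere in the award condition: they are merely one venue in which the documentation accumulates.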

The DQP is neither set in stone nor sung in one key. Don’t like the phrasing of a competency task? Change it! Think another archipelago of criteria should be included? Add it! Does the DQP miss competencies organic to the mission of your institution and others like it? Tell the writers, and you will see those issues addressed in the next edition, due out by the end of 2013.

For example, the writers know that the document needs a stronger account of the relation between discipline-based and generic degree requirements, so you will see more of tuning (Lumina's effort to work with faculty to define discipline-based knowledge and skills) in the second edition. They also know that the DQP needs a more muscular account of the relation between forms of documentation (assignments), competencies, and learning outcomes, accounting for current and future technologies in the process, as well as for potential systems of record-keeping (if credits figure here at all, they are only in the back office as engines of finance for the folks with the green eyeshades).

All of this -- and more -- comes from the feedback of 200 institutions currently exploring the DQP, and testifies to what “iteration” can accomplish. This is not a short-term task, nor is it one that is passed to corporate consultants or test developers outside the academy. I would not be surprised if, after a decade of work, we saw 50 or 60 analogous but distinct applications of the DQP living in the public environment, and, as appropriate to the U.S., outside of any government umbrella. That sure is better than what we have now and what has been scrambled even more by MOOCs -- something of a zero.

It has been a long road from the competence-based visions of the 1970s, but unraveling discontents will help us see its end. We know that technologies and delivery systems will change again. That, in itself, argues for the stability of a competence-referenced set of criteria for the award of at least three levels of degrees. Some of the surface features of the DQP will change, too, but its underlying assumptions, postulates, and language will not. Its grounding in continuing forms of human learning behavior guarantees that reference point. All the more reason to stand firm with it.

Cliff Adelman is a senior associate at the Institute for Higher Education Policy.

New ETS test on non-academic skills

ETS releases a new test to measure students' non-academic skills. Colleges want to use test for advising and finding remedial students with "grit."

Essay on how professors can deal with assessment

My first encounter with assessment came in the form of a joke. The seminary where I did my Ph.D. was preparing for a visit from the Association of Theological Schools, and the dean remarked that he was looking forward to developing ways to quantify all the students' spiritual growth. By the time I sat down for my first meeting on assessment as a full-time faculty member in the humanities at a small liberal arts college, I had stopped laughing. Even if we were not setting out to grade someone’s closeness to God on a scale from 1 to 10, the detailed list of "learning outcomes" made it seem like we were expected to do something close. Could education in the liberal arts — and particularly in the humanities — really be reduced to a series of measurable outputs?

Since that initial reaction of shock, I have come to hold a different view of assessment. I am suspicious of the broader education reform movement of which it forms a part, but at a certain point I asked myself what my response would be if I had never heard of No Child Left Behind or Arne Duncan. Would I really object if someone suggested that my institution might want to clarify its goals, gather information about how it’s doing in meeting those goals, and change its practices if they are not working? I doubt that I would: in a certain sense it’s what every institution should be doing. Doing so systematically does bear significant costs in terms of time and energy — but then so does plugging away at something that’s not working. Investing a number of hours up front in data collection seems like a reasonable hedge against wasting time on efforts or approaches that don’t contribute to our mission. By the same token, getting into the habit of explaining why we’re doing what we’re doing can help us avoid making decisions based on institutional inertia.

My deeper concerns come from the pressure to adopt numerical measurements. I share the skepticism of many of my colleagues that numbers can really capture what we do as educators in the humanities and at liberal arts colleges. I would note, however, that there is much less skepticism that numerical assessment can capture what our students are achieving — at least when that numerical assessment is translated into the alphabetical form of grades. In fact, some have argued that grades are already outcome assessment, rendering further measures redundant.

I believe the argument for viewing grades as a form of outcome assessment is flawed in two ways. First, I simply do not think it’s true that student grades factor significantly in professors’ self-assessment of how their courses are working. Professors who give systematically lower grades often believe that they are holding students to a higher standard, while professors who grade on a curve are simply ranking students relative to one another. Further, I imagine that no one would be comfortable with the assumption that the department that awarded the best grades was providing the best education — many of us would likely suspect just the opposite.

Second, it is widely acknowledged that faculty as a whole have wavered in their dedication to strict grading, due in large part to the increasingly disproportionate real-world consequences grades can have on their students’ lives. The "grade inflation" trend seems to have begun because professors were unwilling to condemn a student to die in Vietnam because his term paper was too short, and the financial consequences of grades in the era of ballooning student loan debt likely play a similar role today. Hence it makes sense to come up with a parallel internal system of measurement so that we can be more objective.

Another frequently raised concern about outcome assessment is that the pressure to use measures that can easily be compared across institutions could lead to homogenization. This suspicion is amplified by the fact that many (including myself) view the assessment movement as part of the broader neoliberal project of creating “markets” for public goods rather than directly providing them. A key example here is Obamacare: instead of directly providing health insurance to all citizens (as nearly all other developed nations do), the goal was to create a more competitive market in an area where market forces have not previously been effective in controlling costs.

There is much that is troubling about viewing higher education as a competitive market. I for one believe it should be regarded as a public good and funded directly by the state. The reality, however, is that higher education is already a competitive market. Even leaving aside the declining public support for state institutions, private colleges and universities have always played an important role in American higher education. Further, this competitive market is already based on a measure that can easily be compared across institutions: price.

Education is currently a perverse market where everyone is in a competition to charge more, because that is the only way to signal quality in the absence of any other reliable measure of quality. There are other, more detailed measures such as those collected by the widely derided U.S. News & World Report ranking system — but those standards have no direct connection to pedagogical effectiveness and are in any case extremely easy to game.

The attempt to create a competitive market based on pedagogical effectiveness may prove unsuccessful, but in principle, it seems preferable to the current tuition arms race. Further, while there are variations among accrediting bodies, most are encouraging their member institutions to create assessment programs that reflect their own unique goals and institutional ethos. In other words, for now the question is not whether we’re measuring up to some arbitrary standard, but whether institutions can make the case that they are delivering on what they promise.

Hence it seems possible to come up with an assessment system that would actually be helpful for figuring out how to be faithful to each school or department’s own goals. I have to admit that part of my sanguine attitude stems from the fact that Shimer’s pedagogy embodies what independent researchers have already demonstrated to be “best practices” in terms of discussion-centered, small classes — and so if we take the trouble to come up with a plausible way to measure what the program is doing for our students, I’m confident the results will be very strong. Despite that overall optimism, however, I’m also sure that there are some things that we’re doing that aren’t working as well as they could, but we have no way of really knowing that currently. We all have limited energy and time, and so anything that can help us make sure we’re devoting our energy to things that are actually beneficial seems all to the good.

Further, it seems to me that strong faculty involvement in assessment can help to protect us from the whims of administrators who, in their passion for running schools "like a business," make arbitrary decisions based on their own perception of what is most effective or useful. I have faith that the humanities programs that are normally targeted in such efforts can easily make the case for their pedagogical value, just as I am confident that small liberal arts schools like Shimer can make a persuasive argument for the value of their approach. For all our justified suspicions of the agenda behind the assessment movement, none of us in the humanities or at liberal arts colleges can afford to unilaterally disarm and insist that everyone recognize our self-evident worth. If we believe in what we’re doing, we should welcome the opportunity to present our case.

Adam Kotsko is assistant professor of humanities at Shimer College.

Gainful employment's future uncertain after court ruling

Gainful employment takes another hit in court, jeopardizing a possible appeal and raising questions about federal collection of data on higher education.

Technical college puts job readiness and attendance scores on transcripts

A two-year college in Missouri issues "job readiness work ethic" scores on students' transcripts, as well as a rating for attendance.

Free online course providers pair up with credit-bearing exams

New batch of free, online courses geared to credit-bearing exams could be the fastest, most affordable way to earn college credit.

Competency-based education and regional accreditation

Historians of this period, possessing the clear-sightedness that only time provides, will likely point to online learning as the disruptive technology platform that radically changed higher education, which had remained largely unchanged since the cathedral schools of medieval Europe -- football, beer pong and food courts notwithstanding.

Online learning is already well-understood, well-established and well-respected by those who genuinely know it. But what we now see in higher education is a new wave of innovation that uses online learning, or at least aspects of it, as a starting point. The meteoric growth of the for-profit sector, the emergence of MOOCs, new self-paced competency-based programs, adaptive learning environments, peer-to-peer learning platforms, third-party service providers, the end of geographic limitations on program delivery and more all spring from the maturation of online learning and the technology that supports it. Online learning has provided a platform for rethinking delivery models, and much of accreditation is not designed to account for these new approaches.

Until now, regional accreditation has been based on a review of an integrated organization and its activities: the college or university. These were largely cohesive and relatively easy to understand organizational structures where almost everything was integrated to produce the learning experience and degree. Accreditation is now faced with assessing learning in an increasingly disaggregated world with organizations that are increasingly complex, or at least differently complex, including shifting roles, new stakeholders and participants, various contractual obligations and relationships, and new delivery models. There is likely to be increasing pressure for accreditation to move from looking only at the overall whole, the institution, to include smaller parts within the whole or alternatives to the whole: perhaps programs, providers and offerings other than degrees and maybe provided by entities other than traditional institutions. In other words, in an increasingly disaggregated world does accreditation need to become more disaggregated as well?

Take the emergence of competency-based education, which is more profound – if less discussed – than massive open online courses (MOOCs). Our own competency-based program, College for America (CfA), is the first of its kind to move so wholly away from any anchoring in the three-credit-hour Carnegie Unit that pervades higher education (shaping workload, units of learning, resource allocation, space utilization, salary structures, financial aid regulations, transfer policies, degree definitions and more). The irony of the three-credit hour is that it fixes time while it leaves the actual learning variable. In other words, we are really good at telling the world how long students have sat at their desks and really quite poor at saying how much they have learned, or even what they learned. Competency-based education flips the relationship: let time be variable, but make learning well-defined, fixed and non-negotiable.

In our CfA program, there are no courses. There are 120 competencies – “can do” statements, if you will – precisely defined by well-developed rubrics. Students demonstrate mastery of those competencies through completion of “tasks” that are then assessed by faculty reviewers using the rubrics. Students can’t “slide by” with a C or a B; they have either mastered the competencies or they are still working on them. When they are successful, the assessments are maintained in a web-based portfolio as evidence of learning. Students can begin with any competency at any level (there are three levels, moving from smaller, simpler competencies to higher-level, more complicated ones) and go as fast or as slow as they need to be successful. We offer the degree for $2,500 per year, so an associate degree costs $5,000 if a student takes two years and as little as $1,250 if they complete it in just six months (an admittedly formidable task for most). CfA is the first program of its kind to be approved by a regional accreditor, NEASC in our case, and the first to seek approval for Title IV funding through the “direct assessment of learning” provisions. At the time of this writing, CfA has successfully passed the first-stage review by the Department of Education and is still moving through the approval process.
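In shorthand, the model might be sketched like this (Python; the names and structure are my own illustration of the description above, not CfA’s actual system):

    # Sketch of the CfA model as described: 120 rubric-defined
    # competencies across three levels, binary mastery judgments (no C's
    # or B's), and subscription pricing at $2,500 per year, so cost
    # depends on time-to-completion while the learning stays fixed.

    PRICE_PER_YEAR = 2500
    TOTAL_COMPETENCIES = 120
    LEVELS = (1, 2, 3)  # smaller, simpler -> higher-level, more complicated

    def mastered(task_submission, rubric_criteria) -> bool:
        # Faculty reviewers assess a task against its rubric; the outcome
        # is mastered-or-still-working, never a letter grade.
        return all(criterion(task_submission) for criterion in rubric_criteria)

    def degree_cost(years_to_complete: float) -> float:
        # Price varies with time, because learning -- not seat time -- is fixed.
        return PRICE_PER_YEAR * years_to_complete

    print(degree_cost(2.0))  # 5000.0 -- the associate degree over two years
    print(degree_cost(0.5))  # 1250.0 -- the same degree in six months

Note how the credit hour’s equation is inverted: in the Carnegie model the time term is the constant and the learning term floats; here the learning term is the constant and time (and therefore price) floats.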

The radical possibility offered in the competency-based movement is that traditional higher education may lose its monopoly on delivery models. Accreditors have for some time put more emphasis on learning outcomes and assessment, but the competency-based education movement privileges them above all else. When we excel at both defining and assessing learning, we open up enormous possibilities for new delivery models, creativity and innovation. It’s not a notion that most incumbent providers welcome, but in terms of finding new answers to the cost, access, quality, productivity and relevance problems that are reaching crisis proportions in higher education, competency-based education may be the most dramatic development in higher education in hundreds of years. For example, the path to legitimacy for MOOCs probably lies in competency-based approaches, and while they can readily tackle the outcomes or competency side of the equation, they still face formidable challenges of reliable, trustworthy and rigorous assessment at scale (at least while trying to remain free). Well-developed competency-based approaches can also help undergird the badges movement, demanding that such efforts be transparent about the claims associated with a badge and the assessments used to validate learning or mastery. 

Competency-based education may also provide accreditors with a framework for more fundamentally rethinking assessment. It would shift accreditation to looking much harder at learning outcomes and competencies, the claims an entity is making for the education it provides, and the mechanisms it uses for knowing and demonstrating that the learning has occurred. The good news here is that such a dual focus would free accreditors from so much attention on inputs, like organization, stakeholder roles and governance, and instead allow for the emergence of all sorts of new delivery models. The bad news is that we are still working on how to craft well-designed learning outcomes and conduct effective assessment. It’s harder than many think. A greater focus on outcomes and assessment also raises other important questions for accreditors:

  • How will they rethink standards to account for far more complex and disaggregated business models which might have a mix of “suppliers,” some for-profit and some nonprofit, and which look very different from traditional institutions?
  • Will they only accredit institutions or does accreditation have to be disaggregated too? Might there be multiple forms of accreditation: for institutions, for programs, for courses, for MOOCs, for badges and so on? At what level of granularity?
  • Competency-based education (CBE) programs are coming. College for America is one example, but other institutions have announced efforts in this area. Major foundations are lining up behind the effort (most notably the Lumina and Bill and Melinda Gates Foundations), and the Department of Education appears to be relying on accreditors to attest to the quality and rigor of those programs. While the Department of Education is moving cautiously on this question, accreditors might want to think through what a world untethered to the credit hour might look like. Might there be two paths to accreditation: the traditional “institutional path” and the “competency-based education path,” with the former looking largely unchanged and the latter using rigorous outcomes and assessment review to support more innovation than current standards now do? Innovation theory would predict that a new, innovative CBE accreditation pathway would come to improve incumbent accreditation processes and standards.

This last point is important: accreditors need to think about their relationship to innovation. If the standards are largely built to assess incumbent models and enforced by incumbents, they must be by their very nature conservative and in service of the status quo. Yet the nation is in many ways frustrated with the status quo and unwilling to support it in the old ways. Frankly, they believe we are failing, and the ways they think we are failing depend on whom you ask. But never has the popular press (and thus the public and policy makers) been so consumed with the problems of traditional higher education and intrigued by the alternatives.  In some ways, accreditors are being asked to shift or at least expand their role to accommodate these new models.

If regional accreditors are unable to rise to that challenge, they might see new alternative accreditors emerge and be left tethered to incumbent models that are increasingly less relevant or central to how higher education takes place 10 years from now. There is time. As has been said, we frequently overestimate the amount of change in the next two years and dramatically underestimate the amount of change in the next 10. The time is now for regional accreditors to re-engineer the paths to accreditation. In doing so they can not only be ready for that future, they can help usher it into reality.

Paul J. LeBlanc is president of Southern New Hampshire University. This essay is adapted from writing produced for the Western Association of Schools and Colleges as part of a convening to look at the future of accreditation. WASC has given permission for it to be shared more widely and without restriction.

ACE to assess Udacity courses for credit

ACE considers credit recommendations for a batch of Udacity courses.

