Imagine yourself emerging from the Way Back Machine in London, England. It’s 1526. Henry VIII is on the throne. You furtively duck into a shop, and quickly head to the back room. You’ve come to buy an English translation of the New Testament. The mere possession of this book is punishable by death.
In the 1520s, having open access to books (knowledge) was a dangerous game. It threatened the establishment. It meant that ordinary people could see for themselves what the elite had guarded so closely.
Enter Thomas Cranmer, Archbishop of Canterbury, who commissioned the publication of the “Great Bible,” in English, and made it available in every church, where it was chained to the pulpit to ensure it stayed accessible and didn’t “disappear.” While the publisher paid for this access with his life, within three short years public readers were provided so that everyone, even the illiterate, could hear the Word of God proclaimed in their native English.
Fast-forward 472 years. You’re a college student. You’ve taken advantage of some amazing opportunities in the online world. You’ve listened to Nobel laureates discuss the Eurozone crisis and explain how current difficulties relate (or not) to classical theories of economics. You’ve worked through the underlying physics and chemistry for nearly every episode of "MythBusters." You regularly watch the TED lectures. And you’ve even taken courses from the Open Learning Initiative and from OpenCourseWare at MIT. Now you want the academic credit for those forms of learning.
Although you won’t actually be burned at the stake as Cranmer was, you have a very good chance of experiencing the modern version of this torture because it is equally threatening to the elite. It goes something like this.
First, you’ll be asked to produce the sacred document, otherwise known as a transcript, indicating that you officially took the course. No transcript, you say? Sorry — your learning is then considered “illegitimate,” and you’re cast out into the night, where there is weeping and gnashing of teeth as you stumble back to the very beginning of college to start over.
While this is an exaggeration, today -- through such outlets as TED, various open-source course initiatives, and primary sources from digital content providers -- we all have access to the knowledge that previously was the province of academia. In the same way that the New Testament in English gave otherwise uneducated people access to the very heart of Christianity, this access is “dangerous.” It threatens the central notion of what a college or university exists to do and so, by extension, threatens the very raison d’être of faculty and staff.
Threats to a well-entrenched status quo are not well-received. But the funny thing about many of them — whether books or ideas — is that they often quickly become the mainstream.
Higher education is facing the very situation that confronted our colleagues in the P-12 world when home schooling threatened the world order. Initially considered a fringe activity of substandard quality, the sector figured out that if appropriate standards (i.e., learning outcomes) were agreed upon and stated clearly, it didn’t really matter what path students took to get to the knowledge destination.
Higher education needs to take a lesson from that experience and work much harder on specifying our analog of the Common Core State Standards. The tools are there, and have been there for a very, very long time. It just has not been in our self-interest to develop and agree on them. But we’d better, and we’d better do it now. Otherwise, it will be done to us.
What Thomas Cranmer figured out was that it was impossible to execute people fast enough to stem the desire to access the new sources of knowledge. So he wholeheartedly adopted the reform, and made it his own. What we need to learn from that is to accept the reality that anyone can access the same information we academics used to carefully mete out, so the best approach is to adapt and make that reality our own. We need to create a higher educational system that embraces competency-based achievement, realign the milestones by which we gauge increasing levels of knowledge/competence, and redefine degrees on this basis.
We have an instructive example. Standard 14 of the Middle States’ Characteristics of Excellence pertains to the assessment of student learning. The standard requires that students be told what “…knowledge, skills, and competencies [they] are expected to exhibit upon successful completion of a course, academic program, co-curricular program, general education requirement, or other specific set of experiences… .”
As stated in the Standard, the objective is “…to answer the question, ‘Are our students learning what we want them to learn?’ ” Such assessment is “an essential component of the assessment of institutional effectiveness” (Characteristics of Excellence, p. 63). The description then goes on to discuss how learning outcome assessment should be designed and how its results should be used. Nowhere is there a discussion of credits.
Given that we already have an accreditation system based on the assessment of student learning (i.e., knowledge/competence acquisition), then it is a rather straightforward matter of taking the existing approach to the next step to complete the conversion process from one grounded on credit accumulation (irrespective of learning) to one based on demonstrated learning outcomes.
More specifically, we need to adopt the approach already taken in many professions of clearly articulating what students are supposed to know and be able to demonstrate at various levels of educational attainment, and create accreditation standards and metrics that reflect it. This would put real teeth in the assessment of student learning outcomes by putting consequences on not doing it well, as well as put the focus on where the content comes from and its quality assurance that underlies the knowledge/competence we expect students to acquire.
When that happens, the recognition of prior learning becomes very straightforward, and its source becomes irrelevant as long as the appropriate competencies are shown. In other words, we already have all the basic elements necessary to take the Cranmeresque step of moving from banning demonstrated knowledge/competence to its immediate and unquestioned acceptance, and to create the postsecondary equivalent of the Book of Common Prayer.
The Brave New World
Adopting an accreditation system predicated on the authentic assessment of student learning outcomes liberates faculty to serve a much more important role — that of academic mentor and guide for the student’s learning and knowledge/competence acquisition process. In one way, this will return us to the past: through the judicious use of technology, faculty will be able to provide far more individualized instruction to many more students than the current system could ever possibly allow or support. In another way, it means that the kind of individualized attention we give to doctoral students can be extended to all. This would be a major improvement for students and faculty alike.
To continue to have legitimacy, accreditation must focus on the core issue — student learning. Accreditation must begin certifying that students actually learn, and that what they learn matches the stated objectives of a course, an academic program, or a specific set of objectives (such as in general education). In short, accreditation must move from certifying that an institution claims that it is doing what it is supposed to do to certifying that students are learning and progressing in their acquisition of knowledge/competence.
Because people can simply wander around the Web and pick up content that is neither amalgamated by a content provider nor verified for accuracy, it will become necessary for some entity to engage in quality assurance in terms of learning outcomes. The job of verifying bona fide knowledge/competencies and establishing where along the continuum of knowledge/competence acquisition a student falls can become the province of organizations that resemble LearningCounts.org, or even broader entities.
In both cases (i.e., using content offered by a certified provider or doing it on your own with no official guidance), a credential or type of certification would be provided each time a new level of knowledge/competence is reached. The student would then deposit those credentials or certifications into a credential bank for future reference. The student -- not the registrar’s office -- owns the credential.
Degrees Deconstructed and Decoupled
We get to this alternate accreditation world in two ways: by clearly defining what each degree means, and by aligning accreditation with content providers (not with institutions that confer degrees).
This requires that we come to quick agreement on what different types of degrees mean. In the United States the TuningUSA effort is just beginning the work of more clearly articulating what knowledge/competencies a student is supposed to demonstrate before being awarded a postsecondary degree.
This is in contrast with the current practice of awarding degrees based on a student's spending a specified minimum amount of clock-defined time amassing an arbitrary number of credits and obtaining a minimum set of grades. Nothing in the current definition says anything about what knowledge or competencies the student actually demonstrates. To test this, look up the degree requirements for English literature degrees across a variety of institutions and compare them. This loose approach is in contrast to efforts in other parts of the world, such as Europe, where degree qualifications discussions have been ongoing for over a decade.
Once the accreditation focus is placed on student learning outcomes for real, accreditation becomes tied to learning and is decoupled from institutions granting degrees. Accreditation then becomes aligned with entities that provide content and the parcels or “courses” in which they are delivered. The seal of accreditation would then be placed on the separate pieces of content offered by content providers who demonstrate that the content offered comes with embedded authentic assessment of learning. To be sure, most of these providers will still be postsecondary institutions, but the accreditation umbrella is extended more broadly to reflect the current reality that content comes from many sources.
In such a system, regional accreditation no longer gives thumbs-up or thumbs-down only on the traditional degree-granting institution. Rather, it focuses on what is provided by any entity that wants to claim it’s in the business of offering content. If and only if that content meets certain standards would it be “accredited.”
Shifting the focus from the institutional level to the content level would strengthen the link between accreditation and federal financial aid eligibility. If and only if a student was using content from an accredited source would the student be able to apply for and receive federal financial aid. Likewise, if the student has amassed knowledge/competencies from self-instruction or from noncertified sources and wants to convert that into “certified learning,” then federal financial aid could be spent only at accredited entities in that business.
Charting a Future Course
The possible future I have described here is both scary and exciting. We can choose to sit down in the captain’s chair and help chart our own course by fully embracing new opportunities while getting really serious about quality, defined as authentic assessment of the acquisition of knowledge/competence. Or we can put up the shields, claim that the way we provide access to knowledge now must remain immutable for all time and that change will condemn us to eternal damnation, and wait for a modern equivalent of Thomas Cranmer to bring it all crashing down.
It’s up to us. Shields will not work. We have only one real option if we want to build on the true legacy and meaning of education: to boldly go where accreditation has never gone before.
John C. Cavanaugh is Chancellor of the Pennsylvania State System of Higher Education. This essay is adapted from a speech he gave Tuesday at the annual meeting of the Middle States Commission on Higher Education.
Innovation in higher education, I sometimes think, is a bit like the weather. Everybody talks about it, but nobody does anything about it.
Every six months or so, as some new conference or other on the future of higher education heaves into view, I’ll get a call asking if I can list any and all recent innovations in higher education. The people on the other end of the line seem to feel that these innovations must surely be out there, so they make phone calls looking for them. But they always seem disappointed when I resort to listing the usual suspects: online universities, open educational resources, commercial ventures looking to partner with institutions. That’s not innovation, the people on the other end of the line seem to be saying. And in many respects, I agree with them. We haven’t yet seen anything truly game-changing, have we?
In recent months, the focus on innovation in higher education has turned to “disruptive innovation,” that concept originally formulated over a decade ago by Harvard Business School professor Clayton Christensen to describe change and innovation across numerous industries, but which he has more recently begun applying to education. Now everybody wants to know where the disruptive innovations in higher education are hiding.
For his part, Christensen points to online learning. But even by the standards established by Christensen’s own theory (where disruptive innovations are easier to use, cheaper, and serve new audiences), the case for online learning as a disruptive innovation is equivocal.
Is it a simpler, easier-to-use product? In some respects, but not all.
Is it less expensive to deliver? Outside of a few grant-program case studies, not particularly; the potential may be there, but it has yet to be fully realized.
Is it reaching a new audience? Probably, yes, but the evidence is mostly indirect and approximate.
Of course, there’s a reason why we don’t actually see much in the way of real innovation in higher education, and Christensen understands this. Incumbent leaders in mature industries engage in what Christensen calls “sustaining innovation” – the development of new features and benefits that make a product or service more useful, but not dramatically so. Think, for example, of the addition of a camera on the iPad 2. With an increase in benefits, prices typically rise as well. What keeps pricing in balance in most industries, however, are those disruptive innovations – think of the personal computers that supplanted the mainframes decades ago. These cheaper, easier-to-use tools attract new audiences to the category and refashion the economics of the industry’s business model.
While colleges and universities may well engage in some sustaining innovations (the high-rise dormitories, the state-of-the-art fitness centers, the not-entirely-mythical rock climbing walls, not to mention the world-class science labs and other high-tech investments), the fact is that they face little in the way of disruptive innovation because they have a lock on the market – it’s called accreditation – and thus there’s little opportunity for new entrants to come in and offer something less expensive or simpler to use.
To my mind, if you’re looking for an innovation opportunity, technology is just a part of the story. The real innovation – in price, in ease of use, in access – will occur when our colleges and universities face some real competition, and that will only come when we allow some new, entrepreneurial providers into the market.
If you want innovation, I say, remove the barriers.
To that end, I’d like to propose that the U.S. Department of Education establish a new “demonstration program,” not unlike the Distance Learning Demonstration Program of the past. That former program allowed institutions that delivered a majority of their programs online to distribute Title IV funds. Twenty-four institutions – a mix of nonprofits and for-profits – participated in the program. Along the way, we learned something important about the potential for scale within online learning; and today, one in four college students has taken at least one course online.
Now we need something a little different, but based on the same model – call it the “Innovation Demonstration Program.” In this case, the program would charter new organizations to offer degrees and distribute Title IV funds – even if they lack accreditation. That has the potential to open up real innovation within multiple segments of the marketplace.
Commercial organizations that offer tutoring services, curriculum, or learning technologies could get into the degree granting business and even make federal financial aid available to their students.
At the same time, established institutions might see this as a terrific opportunity to build new degree-granting organizations adjacent to their own traditional campuses – unencumbered by the regulatory and governance hurdles that currently stymie their attempts to reach new markets, deliver new programs, or otherwise rethink how they do business.
It will, of course, be necessary to guard against the potential for new diploma mills entering the market and targeting federal dollars, but that’s where the regulatory apparatus becomes useful. It can both foreclose fraud and stimulate innovation at the same time. Under the kind of close supervision that a federal demonstration program would require, a few dozen experiments of this sort could teach us a great deal about what’s really possible when it comes to innovation in higher education.
If you think this sounds absurd, consider the case of the Relay Graduate School of Education, granted a charter by the New York State Board of Regents earlier this year to offer master’s degrees to teachers in New York. Founded by three charter school organizations – KIPP, Achievement First, and Uncommon Schools – the Relay Graduate School of Education was purpose-built to meet the education and professional development needs of those schools’ own teachers.
Along the way, Relay did something innovative. It tore up the semester model. In its place, Relay delivers 60 discrete learning modules. Students learn in the context of the schools in which they teach, and online curriculum is augmented by cohort discussions within the schools, all under the supervision of on-site mentors. This is a very different way of thinking about delivering education – and it’s innovative.
What makes it innovative isn’t that there’s technology involved – it’s really a very people-centered learning model – it’s that the organization is free to rethink the “why” and “how” of teacher professional development. Equally important, the oversight of the Board of Regents puts Relay on a level playing field with all of the other traditional providers of master’s degrees in education within the state of New York. Now ask yourself why the same thing shouldn’t be happening in disciplines such as business, engineering, computer science, health care, and numerous other fields, and on a national scale.
There is, after all, another key element in Christensen’s theory of disruptive innovation. It happens at the margins, and it happens within organizations that are free from the obligations of established incumbents. One of the great misunderstandings regarding Christensen’s theory, in my view, is that we can all disruptively innovate ourselves. But Christensen himself points out that the only companies that have successfully accomplished that feat have done so by establishing separate, discrete R&D units free of the pressures of the parent organization’s business model, customer demands, profit targets and more. The reality is, more often than not, that disruptive innovations put established incumbents out of business. That, after all, is what makes them disruptive.
If traditional higher education wants to innovate – if it realizes that it must – then that innovation will have to take place in the margins, free from the demands of traditional culture, regulation, and financial models. An Innovation Demonstration Program would allow us a chance to see just how much invention is in us, and how far we can go in lowering prices, increasing access, and educating the nation.
Peter Stokes is executive vice president of Eduventures, a higher education consulting firm.
Do majors matter? Since students typically spend more time in their area of concentration than anywhere else in the curriculum, majors ought to live up to their name and produce really major benefits. But do they?
Anthony P. Carnevale, the director of Georgetown’s Center on Education and the Workforce, has recently provided a clear answer. Majors matter a lot -- a lot of dollars and cents. In a report entitled “What’s It Worth,” he shows how greatly salaries vary by major, from $120,000 on average for petroleum engineers down to $29,000 for counseling psychologists.
But what if one asked whether majors make differing contributions to students’ cognitive development? The answer is once again yes, but the picture looks very different from the one in the Georgetown study.
A few years ago, Paul Sotherland, a biologist at Kalamazoo College in Michigan, asked an unnecessary question and got not an answer but a tantalizing set of new questions. It was unnecessary because most experts in higher education already knew the answer, or thought they did: as far as higher-order cognitive skills are concerned, it doesn’t matter what you teach; it’s how you teach it.
What Sotherland found challenged that conventional wisdom and raised new questions about the role of majors in liberal education. Here’s what he did. Kalamazoo had been using the Collegiate Learning Assessment (CLA) to track its students’ progress in critical thinking and analytical reasoning. After a few years it became clear that Kalamazoo students were making impressive gains from their first to their senior years. Sotherland wondered if those gains were across the board or varied from field to field.
Since gains in CLA scores tend to follow entering ACT or SAT scores, Sotherland and his colleagues “corrected” the raw data to see what gains might be attributed to instruction. They found significant differences among the divisions, with the largest gains (over 200 points) in foreign languages, about half that much in the social sciences, still less in the fine arts and in the humanities, and least of all in the natural sciences.
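The essays do not spell out how the “correction” was done. One common approach, sketched below with entirely made-up numbers, is to regress raw gains on entering scores and treat each student’s residual as the “corrected” gain, i.e., the gain beyond what the entering score alone would predict. This is an illustrative assumption, not a description of the CLA’s actual methodology.

```python
import numpy as np

def corrected_gains(entering_scores, raw_gains):
    """Residualize raw score gains against entering scores via simple OLS.

    Returns each student's gain relative to what a straight-line fit on
    entering scores would predict, so groups can be compared on a more
    even footing.
    """
    x = np.asarray(entering_scores, dtype=float)
    y = np.asarray(raw_gains, dtype=float)
    # Fit y = a + b*x by least squares (degree-1 polynomial).
    b, a = np.polyfit(x, y, 1)
    predicted = a + b * x
    return y - predicted  # residual ("corrected") gains

# Made-up example: raw gains shrink as entering SAT rises, so the
# residuals separate who over- or under-performed the trend line.
sat = [1000, 1100, 1200, 1300, 1400]
gain = [180, 160, 150, 130, 120]
resid = corrected_gains(sat, gain)
print(np.round(resid, 1))  # residuals sum to zero by construction
```

Averaging these residuals within each major or division would then yield the kind of field-by-field comparison the essay describes.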
How was this to be explained? Could reading Proust somehow hone critical thinking more than working in the lab? (Maybe so.)
But the sample size was small and came from one exceptional institution, one where students in all divisions did better than their SAT scores would lead one to expect, and where the average corrected gain on CLA is 1.5 standard deviations, well above the national average. (Perhaps Inside Higher Ed should sponsor the “Kalamazoo Challenge,” to see if other institutions can show even better results in their CLA data.)
The obvious next step was to ask Roger Benjamin of the Collegiate Learning Assessment if his associates would crunch some numbers for me. They obliged, with figures showing changes over four years for both parts of the CLA -- the performance task and analytical writing. Once again, the figures were corrected on the basis of entering ACT or SAT scores.
The gains came in clusters. At the top was sociology, with an average gain of just over 0.6 standard deviations. Then came multi- and interdisciplinary studies, foreign languages, physical education, math, and business with gains of 0.50 SDs or more.
The large middle cluster included (in descending order) education, health-related fields, computer and information sciences, history, psychology, law enforcement, English, political science, biological sciences, and liberal and general studies.
Behind them, with gains between 0.30 and 0.49 SDs, came communications (speech, journalism, television, radio etc.), physical sciences, nursing, engineering, and economics. The smallest gain (less than 0.01 standard deviations) was in architecture.
The list seemed counterintuitive to me when I first studied it, just as the Kalamazoo data had. In each case, ostensibly rigorous disciplines, including most of the STEM fields (math was the exception), had disappointing results. Once again the foreign languages shone, while most other humanistic disciplines cohabited with unfamiliar bedfellows such as computer science and law enforcement. Social scientific fields scattered widely, from sociology at the very top to economics close to the bottom.
When one looks at these data, one thing is immediately clear. The fields that show the greatest gains in critical thinking are not the fields that produce the highest salaries for their graduates. On the contrary, engineers may show only small gains in critical thinking, but they often command salaries of over $100,000. Economists may lag as well, but not at salary time, when, according to “What’s It Worth,” their graduates enjoy median salaries of $70,000. At the other end, majors in sociology and in French, German, and other commonly taught foreign languages may show impressive gains, but they have to be content with median salaries of $45,000.
But what do these data tell us about educational practice? It seems unlikely that one subject matter taken by itself has a near-magical power to result in significant cognitive gains while another does nothing of the sort. If that were the case, why do business majors show so much more progress than economics majors? Is there something in the content of a physical education major (0.50 SDs) that makes it inherently more powerful than a major in one of the physical sciences (0.34 SDs)? I doubt it.
Since part of the CLA is based on essays students write during the exam, perhaps the natural science majors simply had not written enough to do really well on the test. (That’s the usual first reaction, I find, to unexpected assessment results -- "there must be something wrong with the test.") That was, however, at best a partial explanation, since it didn’t account for the differences among the other fields. English majors, for example, probably write a lot of papers, but their gains were no greater than those of students in computer sciences or health-related fields.
Another possibility is that certain fields attract students who are ready to hone their critical thinking skills. If so, it would be important to identify what it is in each of those fields that attracts such students. Are there, for example, “signature pedagogies” that have this effect? If so, what are they and how can their effects be maximized? Or is it that certain pedagogical practices, whether or not they attract highly motivated students, increase critical thinking capacities – and other capacities as well? For example, the Wabash national study has identified four clusters of practices that increase student engagement and learning in many areas (good teaching and high-quality interactions with faculty, academic challenge and high expectations, diversity experiences, and higher-order, integrative, and reflective learning).
Some fields, moreover, may encourage students to “broaden out” -- potentially important for the development of critical thinking capacities as one Kalamazoo study suggests. Other disciplines may discourage such intellectual range.
One other hypothesis, I believe, also deserves closer consideration. The CLA is a test of post-formal reasoning. That is, it does not seek to find out if students know the one right answer to the problems it sets; on the contrary, it rewards the ability to consider the merits of alternative approaches. That suggests that students who develop the habit of considering alternative viewpoints, values and outcomes and regularly articulate and weigh alternative possibilities may have an advantage when taking the CLA exam, and quite possibly in real-life settings as well.
Since the study of foreign languages constantly requires the consideration of such alternatives, their study may provide particularly promising venues for the development of such capacities. If so, foreign languages have a special claim on attention and resources even in a time of deep budgetary cuts. Their "signature pedagogies," moreover, may provide useful models for other disciplines.
These varying interpretations of the CLA data open up many possibilities for improving students’ critical thinking. But will these possibilities be fully utilized without new incentives? The current salary structure sends a bad signal when it puts the money where students make very small gains in critical thinking, and gives scant reward to fields that are high performers in this respect. (For example, according to the College & University Professional Association for Human Resources, full professors in engineering average over $114,000, while those in foreign languages average just over $85,000.)
Isn’t it time to shift some resources to encourage experimentation in all fields to develop the cognitive as well as the purely financial benefits of the major?
W. Robert Connor is senior advisor to the Teagle Foundation.