The call last week for a new system of quality assurance for alternative providers of higher education got us here at Inside Digital Learning thinking (again) about how to define "quality" in higher education generally.
Quality is an amorphous concept in higher education. It is thrown around with abandon by college leaders, politicians and others -- and yet it isn't clear what most people mean when they say it. The colleges and universities in the United States commonly thought to be of the highest quality are among the oldest and the most selective (think Harvard, Princeton and Williams). College and university rankings impose their own analysis, in some cases weighting institutions' research excellence over other outcomes, and in most cases heavily influenced by peer opinions about their "reputations."
Conversations about the "quality" of new forms of higher education (particularly those delivered with the help of technology) are particularly contentious, because they almost inevitably start with comparisons to more traditional forms of learning. But the emergence of new forms and new providers of postsecondary learning offers a logical occasion to renew the discussion about how to define (and ultimately measure?) quality and to see if it's possible to develop a common understanding.
Inside Digital Learning asked a broad array of experts on digital learning and higher education quality how they define quality in digital learning. (They were asked to respond to the italicized statements and questions below.) Please feel free to add your own definition in the comments section below.
The emergence in recent years of more alternative providers of postsecondary education and training has prompted significant discussion about whether these new forms of learning -- think coding bootcamps, competency-based learning, MOOCs and other digital courses untethered from formal institutional credit -- are of sufficient "quality." The rise of these alternatives has spurred a federal experiment designed to identify new entities to assure quality, talk of alternatives to accreditation -- and questions, too, about the adequacy of the current crop of accreditors in assuring quality.
It's not surprising, perhaps, that the discussions of quality have been driven by the emergence of newfangled alternatives to traditional learning (though those questions tend to take for granted that the traditional is good and the alternatives less so). But there's a more fundamental question to be asked: what do we mean by "quality" in higher education generally? Is there some minimum level of rigor that the education must surpass? Is it how much students learn (measured in what way)? Some sort of value equation that weighs learning against how much the education costs the student (and/or taxpayers)? Is it like the Potter Stewart definition of pornography, such that we know it when we see it, or should it be defined in a commonly understood way (and can it be)?
How do you define "quality" in online/digital learning or in other types of alternative postsecondary learning that are digitally based?
Deb Adair, executive director, Quality Matters
You have raised some excellent questions.
I don’t think I agree with the premise that the questions about quality, driven by the emergence of alternatives to traditional learning, take for granted that traditional is good and alternatives less so. In fact, it seems popular to be dismissive of the education provided by higher education and to point to education (didn’t we used to call that training?) driven by the workplace as the desirable standard.
Certainly the ruinous level of student debt has driven economic concerns about the ROI of higher education -- presenting a clear and fair argument and serious cause for concern. In any case, both higher education and career training have an important place in society, and quality -- which needs to be understood in terms of purpose and context -- can be identified in both. It’s when we mix up the two that things can get murky.
Quality Matters is one of eight Quality Assurance Entities (QAEs) in the Department of Education’s EQUIP experimental sites program examining the quality of programs provided through partnerships between accredited universities and alternative providers. In this work, we are asked to respond to a set of largely outcomes-based questions. Academic rigor, however, is not one of these questions. The presumption is that this remains the purview of the regional accreditors, whose role with the academic institution is maintained throughout the EQUIP program.
It’s questionable whether the EQUIP program -- given the small number of participating programs, with agendas varying from degree completion to workplace/professional skills certification -- will answer the question of how quality is defined in alternative postsecondary learning. EQUIP includes a few QAEs, like Quality Matters, that have been evaluating traditional and nontraditional education for years. It also includes some entities with no such track record but that, presumably, proposed innovative evaluation models that interested the department. The results of the different approaches to evaluation will be interesting in any case; however, the definition of quality has been largely circumscribed in EQUIP by the almost exclusive focus on outcomes.
If the desired student outcomes (like expedited degree or certification completion, job attainment, salary increases, debt repayment ability, etc.) are achieved, is that enough evidence that it was a quality education? Do the ends justify the means? Or is there something else, some level of academic rigor, required whenever academic credit is provided? For QM, some inputs (or perhaps throughputs) in the education process have always mattered.
For alternative education offered for college equivalency, what should that look like, exactly? Is it the definition pointed to by the Senate education committee’s accreditation proposal, drawing from Academically Adrift such metrics as the number of pages for reading and writing assignments? If we know it when we see it, how can we better define it so we can assure quality in alternatives to postsecondary education that are to be considered for college equivalency? These are some of the questions QM’s participation in EQUIP has raised for us and that we will be exploring in meetings and conversations in our community this year.
Ken Brooks, chief operating officer, Macmillan Learning
I would consider one measure of educational quality to be the actions one can take and the opportunities one can pursue as a result of the experience. The traditional view is that this takes a four-year degree program (or more), yet there are many programs demonstrating that success can be achieved more widely.
The Online Master of Science in Computer Science (OMSCS) program between Udacity and Georgia Tech is one example of new models that appear to be working well, as are the nanodegrees and certificates offered by Udacity, Coursera and EdX. It does take extraordinary discipline to learn without the benefit of an instructor, peer support and feedback. I speak from experience about OMSCS, as I am a student in this program.
Ultimately, I think that quality will come to be assessed by individuals and potential employers as the capability of the individuals who come out of the programs. Of course, institutions measure the success of programs, in large part, via job placements, performance and salaries in the workplace. Yet, placements and salaries are incomplete metrics of educational quality.
While salaries are a good indicator of ROI on the part of students, salaries fall well short of describing the intrinsic and intangible benefits of an engaging educational program. Whether it’s a traditional four-year program or a coding bootcamp, a key tenet of educational programming is to develop the entire person -- socially, emotionally, mentally and professionally. As alternative programs continue to proliferate, it will be important for institutions and students alike to keep this in mind as they evaluate various educational offerings.
Jessie Brown, analyst, Ithaka S+R
At a high level, quality in online, digital and alternative learning should be defined in the same way it is in any learning environment (though this is hardly an item of consensus). My own opinion is that a “quality” learning experience consists of research-backed inputs that produce outcomes that are beneficial for students and society. What those outcomes are -- improved critical thinking, an employer-aligned skillset, increased knowledge about a particular topic -- can vary across programs depending on goals and mission, but they should correlate with some meaningful change in how a student thinks or what she is able to do. While the question of value is an important and related one, I see it as distinct from quality, and efforts to correlate the two can distort assessments and create a stratified system.
Technological changes, the rising cost of college, changing demographics and other factors have put pressure on traditional methods of measuring and assuring quality, and have forced us to reassess what sort of educational inputs and outcomes might be worthy of public investment. For example, in a forthcoming paper on alternative credentials and pathways, my colleague, Martin Kurzweil, and I explore how providers like competency-based programs, coding bootcamps and digital nanodegrees aim to fill a market gap for employer-aligned outcomes that more “traditional” programs have struggled to meet.
In order to achieve these ends, newer providers are using different curricular, pedagogical and experiential inputs, many of which our current systems of quality assurance are poorly suited to accommodate or assess. The Collaborative for Quality in Alternative Learning proposal provides some promising prompts for thinking through how a new model of quality assurance might better incorporate these sorts of programs.
It’s easy to assume that, because they don’t use “traditional” curricular inputs, are lower-cost or focus so explicitly on outcomes instead of academic experiences, digital and alternative providers deliver a lower-quality educational experience than more mainstream ones. Though misguided, this thought process is based in legitimate concerns. A program that focuses solely on producing measurable outcomes -- such as improved completion rates or labor market outcomes -- could be easily incentivized to game the system to maximize performance on these indicators, without a corresponding change in a student’s experience.
A metric like labor market outcomes should be one of several indicators that allow us to assess whether students in an online nanodegree program, for example, had a transformative experience that prepared them to succeed in their career.
All of this points to a couple of considerations for continued efforts, like the CQAL proposal, to define, measure and assure quality. As the proposal states, while boundaries are needed, our thinking about which inputs and outcomes are acceptable for higher education (and associated categorizations like Title IV eligibility) should be dynamic, and focus more on quality performance than compliance with existing input and output models. Where we run into trouble with these expanded definitions is when we define quality with too narrow a focus on either inputs or outcomes, without assessing the connection between the two.
As Kurzweil and I argue in a recent white paper published by ACE, definitions of quality should take into account inputs -- like pedagogy, modality and curriculum -- as well as measurable outcomes, both short and long term. Assessments should evaluate how those inputs influence outcomes via students’ learning and experience (which, of course, pose tremendous measurement challenges). Alternative and digital learning providers give us the opportunity to rethink this connection, and how to measure and strengthen it; but we must be careful not to interpret easily counted indicators of quality outcomes as sufficient evidence of quality itself.
Gates Bryant, partner, Tyton Partners
Early in my career I had the opportunity to have dinner with a management “guru.” This guy was a renowned adviser to CEOs of large companies all over the world. We were talking about defining success and he said something simple but memorable: “I always tell my clients the truth, that way I don’t have to remember what I actually said.” Defining quality in education should follow a similar heuristic.
For starters, we’re not always precise about defining the “client.” Institutions that think about their students as clients start to consider what aspects of the educational experience get in the way of their success. Our recent work developing new approaches to quality assurance as part of the EQUIP program in support of a partnership between CSU Global and Guild Education has taught us that achieving quality is as much a formative process as it is a summative one. The formative approach to quality starts with a very precise definition of the student that is to be served.
Similarly, across all modalities of digital learning in higher education, we struggle to determine if the “client” is the student, the instructor or the institution. In fact, there are benefits (and perils) to be found for all three. Today, there is considerable dissatisfaction with digital learning and both suppliers and institutions need to initiate a quality improvement process that holistically considers all the inputs: feature sets, faculty training and incentives, student supports and more.
When it comes to quality for institutions, we’re finding that institutions that define the student, the needs of the instructor and the specific instructional “problem to be solved” work toward achieving quality that is contextually relevant. Perhaps in time, this relentless, formative orientation will get us closer to a picture of quality in education that won’t require us to remember how we defined it.
Gardner Campbell, associate professor of English and former vice provost for learning innovation and student success, Virginia Commonwealth University
The question of quality in online/digital learning depends on what we mean by learning. In “The Logical Categories of Learning and Communication,” anthropologist Gregory Bateson makes the following essential observation: “The word ‘learning’ undoubtedly denotes change of some kind. To say what kind of change is a delicate matter.” (In Steps to an Ecology of Mind, University of Chicago Press, 2000, 238.)
I agree with Bateson. I think much confusion about online/digital learning (I will use the question’s formulation though I’m not sure the terms are synonymous) emerges from the kind of change we mean. Do we mean learning at a stimulus-response level, something on the order of classical or operant conditioning, something along the lines of Pavlov or B. F. Skinner? It turns out that such “flash-card” paradigms of learning can be facilitated pretty readily in an online/digital environment.
Or do we mean learning that affords practice and challenges in higher-order thinking -- problem-finding, problem-solving, metacognition -- the kinds of things that allow learners to recognize similar patterns in different contexts, to generalize or conceptualize beyond a single knowledge domain? If so, we’re asking a different kind of question.
I recognize the usefulness of flash cards, and I am not opposed to memorization. I fully agree that one must know and be able to remember things -- facts, questions, relationships, hypotheses -- to be able to engage in higher-order thinking. But we lay a foundation in order to build on that foundation. The very phrase “higher education” implies as much.
So for me, the highest quality online/digital learning must include rich and varied opportunities for practice and challenges in higher-order thinking. We will certainly need computer-generated elements to do that: visualizations, simulations, games, even certain kinds of adaptive-learning strategies. That said, I believe the best, richest and most varied opportunities for practice and challenges in higher-order thinking will emerge from computer-mediated affordances, online digital spaces for inquiry, expression and social interaction with other learners.
If we’ve learned anything from the past two decades of increasingly networked civilization, digital objects and digital environments take you only so far. The magic is in the network, where computers mediate experience, thoughts and presence across many nodes of individuals, elaborating out of these connections something synergistic, something far vaster and more interesting than the sum of its parts. To the extent that online/digital learning affords students the opportunity to tap into that emergent shared consciousness, across time and space, such learning will have high quality.
This concept of emergent shared consciousness is not new. It’s one of the primary reasons for the technologies of print, of books, indeed of libraries. These elements hand on more than information. They hand on, across time and space, the experience of human beings making meaning out of shared lives.
Jerome Bruner gives a stirring definition of learning at its best and purest, one worth quoting in full:
Getting to know something is an adventure in how to account for a great many things that you encounter in as simple and elegant a way as possible. There are lots of different ways of getting to that point, and you don’t really ever get there unless you do it, as a learner, on your own terms. All one can do for a learner en route to her forming a view of her own is to aid and abet her on her own voyage. The means for aiding and abetting a learner is sometimes called a “curriculum,” and what we have learned is that there is no such thing as the curriculum. For in effect, a curriculum is like an animated conversation on a topic that can never be fully defined, although one can set limits upon it. (“Narratives of Science,” in The Culture Of Education, Harvard University Press, 1996, 115-116.)
To the extent that online/digital learning affords the learner an adventure in learning to account for a great many things in as simple and elegant a way as possible, on her own terms, within an environment rich with possibilities for communication, for narrating, curating and sharing the process and products of one’s learning, I would say that online/digital affordance or design is of high quality.
Judith Eaton, president, Council for Higher Education Accreditation
As CHEA has been saying for a long time, if the alternative provider sector is going to grow and engage more and more students, we will, of course, need to focus on quality. This is the case whether or not there is a means by which alternative providers become eligible for federal funds. This is why we developed our “Quality Platform.”
Alternative providers represent a major disruption vis-à-vis traditional higher education. They are not “add-ons,” as in the days of older continuing education, nor are they a novelty, as MOOCs started out. Instead, they are part of an emerging redefinition of higher education in which the sustained undergraduate experience culminating in a degree is more and more replaced by shorter-term educational experiences, very much driven by the need for access to low-cost education and the need for immediate educational gains, especially tied to work. And even traditional higher education is embracing these providers -- think StraighterLine, think MOOCs -- as now accepted for credit consideration at some colleges and universities.
I don’t know that the fundamental quest for quality -- a level of performance by an institution or provider such that there is reliable evidence that students gain skills and are successful -- will differ with alternative providers. What will vary are the goals that are set by providers (in contrast to traditional institutions) and what counts as evidence. The core questions are still there: Did students learn? How well? Did they achieve the educational goals they had when enrolling in a provider’s offering? Student learning outcomes are central.
Traditional accreditation can handle new providers if the community wishes to. Or new types of quality assurance providers can as well. Or both. However, the disruption in traditional higher education is also a disruption to traditional accreditation.
Bart Epstein, CEO, Jefferson Education Accelerator
No other industry benefits from societal pressure and subsidies in the way that higher education does. Restaurants have not convinced us to submit transcripts of our previous eating experiences and letters of recommendation from other chefs in the hope that we will be “accepted” to spend our money in their eateries. Auto repair shops do not send us letters to share the good news that, after a rigorous review process, our cars have been accepted for repair. Amusement parks do not require us to take the “Amusement Park Aptitude Test” before taking our money.
Yet nearly every nook and cranny of our higher education system employs these processes and then benefits from perhaps the strongest signal of all -- the willingness of our federal government to give large loans, without regard to creditworthiness, to students who seek to attend almost any institution of higher education.
In this context, it is easy to understand why we need better tools to compare and contrast programs to gauge their apparent quality. Potential students are inundated with signals of higher education’s importance and value. We push them to enroll and to incur debt. We taxpayers are then at risk if they fail to repay their loans. These students need strong tools to help them make appropriate decisions about their futures.
As we collectively create and then pressure-test new comparison tools, it is important that we avoid the temptation to treat a system that is incredibly diverse as a monolith. Community colleges, liberal arts colleges and vocational programs serve a multiplicity of functions for students, communities and the economy. We need different types of measures to quantify and compare their various outputs for various audiences.
I am encouraged by a growing body of work that attempts to compare institutional -- and student -- objectives, with outcomes they produce or attain. It’s exciting to see institutions borrowing from other fields, to test the applicability of accounting standards, or using other established practices to put rigor behind the collection of data. And I think it’s positive when traditional institutions team up with upstarts to explore areas for alignment or consensus. Over time, there should be more and more pressure for the sort of data that can enable apples-to-apples comparisons among divergent pathways.
Ultimately, what’s most important is creating transparency for students so that they can make informed decisions about where to spend their time and money. As a society, we currently invest tremendous resources into encouraging and supporting higher education attendance. We are currently underinvesting in the type of decision-support tools that prospective students need, and the time is now for us to support multiple efforts to define quality.
Peter T. Ewell, president emeritus, National Center for Higher Education Management Systems
The definition of “quality” with respect to any form of postsecondary education is always, to some extent, constituency-driven. In that sense, but only in that sense, it is like the famous definition of pornography that you reference, always to some extent “in the eye of the beholder.” But that does not mean that all definitions are relative or that there is no absolute bottom line.
Beginning with content, different stakeholders will value different things. Employers will look for absolute and observable attributes that are relevant to their needs. They do not care how much it cost to produce the outcome or whether those engaged enjoyed the process or not.
Students and potential students, as well as public officials, are interested in “value for money” -- essentially the quality of outcomes in relation to what it cost to produce them. These differences in perspective will always color -- and sometimes confuse -- public discussions of quality.
Whatever the metric chosen, moreover, there is a responsibility to establish minimum absolute standards. Accreditation in higher education has been tangled up in this discussion for decades. Admittedly, institutions and programs differ and these differences will inevitably affect comparative performance. And there are established mechanisms for taking such differences into account when making comparisons including peer groups and statistical adjustments. But regardless of such differences, should institutions with a 5 percent graduation rate or with substantial numbers of recent graduates unable to construct an understandable paragraph in standard written English really be certified as “quality?” I don’t think so.
We have known for many years how to make the requisite measurements and how to collect appropriate evidence. What the higher education community lacks is the collective will to apply them where they are needed and to make them stick.
Debra Humphreys, vice president of strategic engagement, Lumina Foundation
There is a reason why, in its recently released strategic plan, Lumina Foundation has prioritized issues of quality, transparency and equity. To create the nimble workforce we need in a competitive global economy and to create meaningful opportunity, millions more Americans need the increased knowledge and skills that can only be obtained through high-quality learning beyond high school. That is why we must, at once, increase the pathways to postsecondary credentials while also assuring that those credentials are of high quality.
For a credential to be of high quality, it doesn't matter whether it was obtained in a traditional, online or other alternative setting as long as it has clear and transparent outcomes aligned to today's workplace and it positions one to continue building skills over time. Given the imperative for continual up-skilling, today's credentials can only be considered high quality if they position people who have them to succeed both in work and in pursuing further education.
This recognition of the changing landscape of work has already generated a remarkable consensus across many groups -- educators, employers, civic and policy leaders, disciplinary associations -- about a core set of skills that all high-quality postsecondary credentials must develop in students. These include critical thinking and judgment, written and oral communication, problem-solving and teamwork especially in diverse groups, quantitative reasoning and technological capacity, and the ability to apply all of these skills in new and changing environments. If a credential isn’t developing these capacities at high levels and in ways appropriate to the field of study, it really cannot be considered a high-quality credential.
Steven Mintz, executive director, University of Texas System's Institute for Transformational Learning
Unlike college rankings, which measure quality in terms of admissions standards, graduation rates and a school’s financial resources and reputation, many institutions prefer to stress value for the money and “value-added,” including their contribution to social mobility.
Accreditors, in turn, often emphasize “input” variables, including faculty qualifications, student-faculty ratios and investment in instruction, while state and federal governments show greater interest in “outcome” measures, such as employment and earnings of graduates, institutional cost per degree and default rates on student loans.
Unfortunately, none of these measures say much about the quality of the academic experience. Except in pre-professional domains, where students must take externally administered exams such as the MCAT, LSAT or the NCLEX, we lack valid and reliable standardized assessments of student learning. More attention needs to be paid to the actual educational process -- the nature of assignments, activities and assessments and their alignment with learning outcomes, and the quantity and quality of instructor feedback.
A step forward would require greater transparency in learning objectives and assessment techniques. In the programs that the University of Texas System’s Institute for Transformational Learning is developing in partnership with campus faculty, we create a “skills ledger,” which identifies critical learning outcomes and evaluation metrics. These ledgers, which are extremely complex and often involve hundreds of nodes and levels, are developed in consultation with industry, subject matter specialists, accrediting agencies and other standards setters, and serve as the master blueprint that can underlie a variety of certificate and degree programs. We are also developing a Comprehensive Learner Record that identifies the specific competencies students have acquired and how they were assessed.
Transparency in learning objectives and assessment methods can help stakeholders make a much more informed assessment of educational quality.
Robert Shireman, senior fellow, The Century Foundation
"A high-quality school or course is one that successfully engages students in constructive learning activities."
Now let’s parse that phrase (which is mine).
Constructive learning activities are designed by experts to advance students by having them listen, read, talk, write, rewrite, analyze, look, design, experiment, re-read, deconstruct, re-read, question, hypothesize, experiment or present. Education is a process of doing. It is the mental equivalent of physical exercise, albeit far more complex. The experts decide whether the exercises were done well enough for the students to deserve formal credit. The credentials, units and grades that are then granted have no foundation in the natural world: like points in a video game, they are artifices that the experts can use to incentivize and reward student work.
The “credit hour” is frequently blamed for an accountability system that is based too much on seat time and not enough on learning. But that is yesterday’s credit hour, at least as far as the federal government is concerned. U.S. Department of Education regulations (which I played a part in drafting) now define a credit hour as “an amount of work” that is “verified by evidence of student achievement.” In other words, it is now a measure of learning activity. Institutions can offer credit for whatever they want, but for federal financial aid purposes credits are supposed to be backed by evidence of the work that students do, such as graded papers, quizzes, discussions and presentations. Some colleges do this type of direct-evidence analysis in their internal program review or assessment processes. Some do not. More should.
The change in the credit hour definition, if enforced, is potentially revolutionary. It means that brick-and-mortar schools can no longer get away with counting lecture hours as a method of proving the rigor of the curriculum. Instead, they would need to show that there is a system that reviews the work that students are doing to earn their credits and degrees. For nontraditional delivery methods, that means there will be something against which their students’ work can be compared. Are these new methods producing student-work artifacts comparable to those of the ground-based students?
Students. The assigned activities must take into consideration the background and skills of the students. The level of performance that can be expected at a college enrolling students who were top performers in high school is going to be very different from the analytical power of students with less stellar academic backgrounds. The purpose of college is to advance students from where they are, and part of excellent instruction is assigning work that is challenging enough to strain the brain but not so challenging that students give up or become lost.
Which brings me to the third element: successfully engaging. It is not enough to present material to students and hope they interact with it. That’s what a textbook does, or a library or a website. The role of a school, as an education provider, is to creatively corral the student into actually doing the work.
Mitchell Stevens, associate professor of education and director of the Center for Advanced Research through Online Learning, Stanford University
I applaud the Strada Education Network’s effort to coalesce conversation on building new mechanisms for responsible practice in the postsecondary education sector. Their report encourages me to make two provocations.
First, in my view the entire enterprise of accreditation as we inherit it from the 20th century is not a reasonable scaffold for ensuring responsible educational practice going forward. Accreditation is constitutionally organized around the inputs, not the products, of educational endeavors. As virtually everyone now recognizes, review systems organized around inputs systematically favor resource-rich providers and direct attention away from value produced for learners or society.
If we presume that review based on inputs is untenable, we face much harder political -- and ethical -- questions about what constitutes educational productivity. While compelling at first glance, measures of learning gains or income returns for individual learners in the short term cannot be adequate measures of value, since we know that returns to educational services accrue over long stretches of the life course and to entire communities, not just to individuals. That means thinking very ambitiously about imagining data systems that integrate information about learners, labor markets and civic vitality longitudinally.
It sounds fanciful, but such systems are now technically quite feasible. The hard part will be devising incentives that will encourage players in an always loosely regulated postsecondary sector to contribute. Creating shaming mechanisms for noncontribution would be high on my list of plausible strategies.
Second -- and here I take issue with a core tenet of the Strada report -- it’s essential to erase the distinction between “conventional” and “alternative” postsecondary delivery. Two- and four-year college degrees accrued via seat time in physical classes chosen cafeteria style from an anarchic pile of offerings may be “conventional,” but that by no means makes that delivery regime some sort of standard against which “alternative” delivery should be differently appraised. MOOCs delivered to millions on the web can and should be assessed according to the same criteria by which we evaluate lectures to hundreds of snoozing, Facebooking undergraduates in stuffy lecture halls.
The goal is to encourage competition in the pursuit of educational value and excellence for learners and their patrons, regardless of how value and excellence are produced -- and to shame wasteful, costly forms of provision that yield low returns.
In short, we need to evaluate quality on the basis of ends, not means. And what are those ends? That’s the ethical question I mentioned above. And, short of imagining college as a giant employment agency, national policy conversations almost never include it. Shame on us.