My first encounter with assessment came in the form of a joke. The seminary where I did my Ph.D. was preparing for a visit from the Association of Theological Schools, and the dean remarked that he was looking forward to developing ways to quantify all the students' spiritual growth. By the time I sat down for my first meeting on assessment as a full-time faculty member in the humanities at a small liberal arts college, I had stopped laughing. Even if we were not setting out to grade someone’s closeness to God on a scale from 1 to 10, the detailed list of "learning outcomes" made it seem like we were expected to do something close. Could education in the liberal arts — and particularly in the humanities — really be reduced to a series of measurable outputs?
Since that initial shock, I have come to hold a different view of assessment. I am suspicious of the broader education reform movement of which it forms a part, but at a certain point I asked myself what my response would be if I had never heard of No Child Left Behind or Arne Duncan. Would I really object if someone suggested that my institution might want to clarify its goals, gather information about how it’s doing in meeting those goals, and change its practices if they are not working? I doubt that I would: in a certain sense it’s what every institution should be doing. Doing so systematically does carry significant costs in time and energy — but then so does plugging away at something that’s not working. Investing a modest number of hours up front in data collection seems like a reasonable hedge against wasting time on efforts or approaches that don’t contribute to our mission. By the same token, getting into the habit of explaining why we’re doing what we’re doing can help us avoid making decisions based on institutional inertia.
My deeper concerns come from the pressure to adopt numerical measurements. I share the skepticism of many of my colleagues that numbers can really capture what we do as educators in the humanities and at liberal arts colleges. I would note, however, that there is much less skepticism that numerical assessment can capture what our students are achieving — at least when that numerical assessment is translated into the alphabetical form of grades. In fact, some have argued that grades are already outcome assessment, rendering further measures redundant.
I believe the argument for viewing grades as a form of outcome assessment is flawed in two ways. First, I simply do not think it’s true that student grades factor significantly in professors’ self-assessment of how their courses are working. Professors who give systematically lower grades often believe that they are holding students to a higher standard, while professors who grade on a curve are simply ranking students relative to one another. Further, I imagine that no one would be comfortable with the assumption that the department that awarded the best grades was providing the best education — many of us would likely suspect just the opposite.
Second, it is widely acknowledged that faculty as a whole have wavered in their dedication to strict grading, due in large part to the increasingly disproportionate real-world consequences grades can have on their students’ lives. The "grade inflation" trend seems to have begun because professors were unwilling to condemn a student to die in Vietnam because his term paper was too short, and the financial consequences of grades in the era of ballooning student loan debt likely play a similar role today. Hence it makes sense to come up with a parallel internal system of measurement so that we can be more objective.
Another frequently raised concern about outcome assessment is that the pressure to use measures that can easily be compared across institutions could lead to homogenization. This suspicion is amplified by the fact that many (including myself) view the assessment movement as part of the broader neoliberal project of creating “markets” for public goods rather than directly providing them. A key example here is Obamacare: instead of directly providing health insurance to all citizens (as nearly all other developed nations do), the goal was to create a more competitive market in an area where market forces have not previously been effective in controlling costs.
There is much that is troubling about viewing higher education as a competitive market. I for one believe it should be regarded as a public good and funded directly by the state. The reality, however, is that higher education is already a competitive market. Even leaving aside the declining public support for state institutions, private colleges and universities have always played an important role in American higher education. Further, this competitive market is already based on a measure that can easily be compared across institutions: price.
Education is currently a perverse market where everyone is in a competition to charge more, because that is the only way to signal quality in the absence of any other reliable measure of quality. There are other, more detailed measures such as those collected by the widely derided U.S. News & World Report ranking system — but those standards have no direct connection to pedagogical effectiveness and are in any case extremely easy to game.
The attempt to create a competitive market based on pedagogical effectiveness may prove unsuccessful, but in principle, it seems preferable to the current tuition arms race. Further, while there are variations among accrediting bodies, most are encouraging their member institutions to create assessment programs that reflect their own unique goals and institutional ethos. In other words, for now the question is not whether we’re measuring up to some arbitrary standard, but whether institutions can make the case that they are delivering on what they promise.
Hence it seems possible to come up with an assessment system that would actually help each school or department stay faithful to its own goals. I have to admit that part of my sanguine attitude stems from the fact that Shimer’s pedagogy embodies what independent researchers have already identified as “best practices” — small, discussion-centered classes — and so if we take the trouble to come up with a plausible way to measure what the program is doing for our students, I’m confident the results will be very strong. Despite that overall optimism, however, I’m also sure that some of what we’re doing isn’t working as well as it could, and currently we have no way of really knowing which parts. We all have limited time and energy, so anything that helps us make sure we’re devoting our energy to things that are actually beneficial seems all to the good.
Further, it seems to me that strong faculty involvement in assessment can help to protect us from the whims of administrators who, in their passion for running schools "like a business," make arbitrary decisions based on their own perception of what is most effective or useful. I have faith that the humanities programs that are normally targeted in such efforts can easily make the case for their pedagogical value, just as I am confident that small liberal arts schools like Shimer can make a persuasive argument for the value of their approach. For all our justified suspicions of the agenda behind the assessment movement, none of us in the humanities or at liberal arts colleges can afford to unilaterally disarm and insist that everyone recognize our self-evident worth. If we believe in what we’re doing, we should welcome the opportunity to present our case.
Adam Kotsko is assistant professor of humanities at Shimer College.
“It’s not the strongest of the species that survives,” Charles Darwin once observed, “but the one most responsive to change.”
If only it were true in higher education.
It’s interesting to observe, isn’t it, how much higher education is still driven by a “brute force” model of delivery? As much as we might wish it were otherwise, postsecondary courses and degree programs are still largely delivered in a one-size-fits-all manner, and those students who can’t keep up are simply left behind, sometimes irretrievably so – the higher education equivalent of natural selection, some might say.
(I once had lunch with a colleague, for example, who told me with no small amount of pride that he only taught to the 10 percent of the class who “got it.” The others, it seemed, were not worth his effort.)
But surely anyone – teacher, student, or otherwise – who has ever sat in a classroom has seen glaring evidence of the fact that not all students move at the same pace. Some are prepared to move more quickly than the majority while others require greater attention and more time to master the same material as their classmates. The limits of mainstreaming diversely skilled students are obvious to all and yet we largely persist in the vain hope that greater numbers of students will learn to move at “class pace” if only we underscore their responsibility to do so in syllabuses and first-class lectures.
Of course, when teachers face classes of 20 or 40 or 200 students, personalized instruction isn’t much of an option. It’s simply too expensive and impractical – until now, perhaps.
Witness the countervailing perspective emerging these days: that it is the curriculum, not the student, that needs to change pace. Indeed, after a number of years of quiet experimentation, we may now be on the cusp of an evolutionary moment – one that promises greater personalization, deeper engagement, and stronger outcomes for students of many types. It may even be affordable – cost-efficient, in fact, by virtue of allowing instructors to use their time more judiciously.
Welcome to the emerging realm of adaptive learning – an environment where technology and brain science collaborate with big data to carve out customized pathways through curriculums for individual learners and free up teachers to devote their energies in more productive and scalable ways.
What promises to make adaptive learning technologies an important evolutionary advance in our approaches to teaching and learning is the way these systems behave differently based on how the learner interacts with them, allowing for a variety of nonlinear paths to remediation that are largely foreclosed by the one-size-fits-all approach of traditional class-paced forms of instruction.
To put it simply, adaptive systems adapt to the learner. In turn, they allow the learner to adapt to the curriculum in more effective ways. (See this recent white paper from Education Growth Advisors for more background on what adaptive learning really looks like – full disclosure: I had a hand in writing it.)
If the early results hold, we may soon be able to argue quite compellingly that these forms of computer-aided instruction actually produce better outcomes – in certain settings at least – than traditional forms of teaching and assessment do. In the future, as Darwin might have said were he still here, it won’t be the students who can withstand the brute force approach to higher education who survive, but those who prove themselves to be the most adaptive.
A recent poll of college and university presidents conducted by Inside Higher Ed and Gallup showed that a greater number of the survey’s respondents saw potential in adaptive learning to make a “positive impact on higher education” (66 percent) than they saw in MOOCs (42 percent). This is somewhat surprising given the vastly differing quantities of ink spilled on these respective topics, but it’s encouraging that adaptive learning is on the radar of so many college and university leaders. In some respects, adaptive learning has been one of higher education’s best-kept secrets.
For over a decade, Carnegie Mellon University’s Open Learning Initiative has been conducting research on how to develop technology-assisted course materials that provide real-time remediation and encourage deeper engagement among students en route to achieving improved outcomes. So adaptive learning is not necessarily new, and its origins go back even further to computer-based tutoring systems of various stripes.
But the interest in adaptive learning within the higher education community has increased significantly in the last year or two – particularly as software companies like Knewton have attracted tens of millions of dollars in venture capital and worked with high-visibility institutions like Arizona State University. (See Inside Higher Ed’s extensive profile of Knewton’s collaboration with ASU, from January of this year, here.)
Some of our biggest education companies have been paying attention, too. Pearson and Knewton are now working together to convert Pearson learning materials into adaptive courses and modules. Other big publishers have developed their own adaptive learning solutions – like McGraw-Hill’s LearnSmart division.
But a variety of early-stage companies are emerging, too – not just in the U.S. but around the world. Take CogBooks, based in Scotland, whose algorithms let students follow a nonlinear path through a web of learning content according to their particular areas of strength and weakness as captured by the CogBooks system. Or consider Smart Sparrow, based in Australia, whose system supports simulations and virtual laboratories and is currently deployed at a variety of institutions both at home and here in the U.S., including ASU.
There is also Cerego, founded in Japan but now moving into the U.S., whose solution focuses on memory optimization: it delivers tailored content based not only on which material a student has already mastered but also on an understanding of how memory degrades – and of how learning can be optimized by delivering remediation at just the right point in the arc of memory decay.
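Cerego has not published its model, but the underlying idea can be sketched with the classic Ebbinghaus-style exponential forgetting curve: predicted recall decays over time, and a review is scheduled for the moment predicted retention dips below some threshold. Everything here – the function names, the 0.7 threshold, the stability values – is illustrative, not a description of any vendor’s actual system.

```python
import math

def retention(elapsed_hours: float, stability_hours: float) -> float:
    """Predicted probability of recall after `elapsed_hours`,
    using a simple exponential forgetting curve R = e^(-t/S)."""
    return math.exp(-elapsed_hours / stability_hours)

def next_review_delay(stability_hours: float, threshold: float = 0.7) -> float:
    """Hours to wait so that predicted retention falls to `threshold`:
    solve e^(-t/S) = threshold for t."""
    return -stability_hours * math.log(threshold)

# An item the learner barely knows (low stability) comes up for review
# much sooner than one that has been thoroughly mastered.
weak = next_review_delay(stability_hours=24)     # roughly 8.6 hours
strong = next_review_delay(stability_hours=240)  # roughly 86 hours
```

Real systems would also update each item’s stability after every review based on the student’s response, which is how the schedule becomes personalized rather than fixed.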
These adaptive learning companies, and many others working alongside them, share a common interest in bringing brain science and learning theory into play in designing learning experiences that achieve higher impact.
They differ in their points of emphasis – a consequence, in part, of their varying origin stories. Some companies emerged from the test prep field, while others began life as data analytics engines, and so on. But they are converging on a goal – drawing on big data to inform a more rigorous and scientific approach to curriculum development, delivery, and student assessment and remediation.
In the months ahead, you should expect to be seeing more and more coverage and other discussion of companies like these, as well as the institutions that are deploying their solutions in increasingly high-impact ways. Last month, the Bill & Melinda Gates Foundation issued an RFP inviting institutions to collaborate with companies such as these in seeking $100,000 grants to support new adaptive learning implementations. The grants are contingent, in part, on the winning proposals outlining how they’ll measure the impact of those implementations.
Before long, then, we may have much more we can say about just how far adaptive learning can take us in moving beyond a one-size-fits-all approach to teaching and learning – and in achieving better outcomes as a result. And for some students, their survival may depend upon it.
Peter Stokes is executive director of postsecondary innovation in the College of Professional Studies at Northeastern University, and author of the Peripheral Vision column.
With great interest, I read the recent news announcing that the American Council on Education (ACE) had evaluated five Coursera MOOCs and recommended them for credit. But I had hoped for something different.
Having traditional, prestigious institutions make their online content open to the world – without, of course, their prestigious credit attached – was an exciting development. A race to post courses ensued. On the surface, it’s an altruistic move to make learning available to anyone, anywhere, for free.
Dig deeper and we are left to ask, how many MOOC courses will really be worth college credit, where will the credits be accepted, and for how long will college credits even be the primary measurement of learning?
Now that ACE has evaluated a few courses, MOOC providers will see how the process goes as students actually start finding proctors and taking tests -- or pursuing other methods of assessment -- to prove they learned the material. But a few courses will not be enough to really help students earn degrees, and with MOOC courses and providers continuing to proliferate, this does not seem like a viable way to keep up with demand.
Regardless, it is more than likely that the universities that agreed to the ACE CREDIT review are never going to accept an ACE CREDIT transcript themselves. The students with ACE CREDIT transcripts will need to present those transcripts to “lesser known” schools that are not among the elite players – colleges with much lower tuition and a willingness to serve post-traditional students.
More troubling is the fact that the ACE process for credit review is still course-based. Will this really be flexible enough in the future? Will it measure competencies and individual learning outcomes? Even if it seems scalable, will it mean all MOOC evaluations have to run through ACE and only ACE? Will students have to wait until ACE has evaluated a MOOC course before they can get credit?
Moreover, this raises the question: Are course evaluations and testing really the best or only way to deal with this new era of learning? What about experiential learning? If someone has college-level learning from life experience, is it invalid unless they take a course?
As Inside Higher Ed points out in its article, this was a fast move in an industry that moves at a glacial pace. But when ice really begins to melt, it can quickly turn into a waterfall. Students have more options for learning, and can get more information, from a variety of sources. So the question for education becomes, how can we best accommodate that?
I would assert that a portfolio assessment of students’ learning is the best way. Just as an artist shows a portfolio to a prospective employer, students should be able to demonstrate learning from wherever they have learned -- work, MOOCs, informal training, military service, volunteer service, and more -- all in one place. And much of this learning will not involve a course at all.
If MOOCs are to be truly disruptive, they must link to competencies, credentials, degrees and/or ultimately jobs. Using a course-by-course, credit hour-by-credit hour approach to do this will not dramatically change the way people earn degrees. And dramatic change that allows for individual demonstrations of competencies is the only way to provide the education quality and agility necessary to truly recognize learning derived from free resources on the web. By focusing on competencies, we can align and accept learning experiences from everywhere.
Pamela Tate is president/CEO of the Council for Adult and Experiential Learning.