Submitted by Paul Fain on January 31, 2014 - 3:00am
The federal government this week announced the launch of a new online complaint system for college students who are veterans or active-duty members of the U.S. military. The Education Department and Department of Veterans Affairs, as well as the Consumer Financial Protection Bureau, are participating in the interagency effort to protect students and Post-9/11 GI Bill investments. The complaint system will be a way for students to report negative experiences with colleges and universities. Veterans groups called the announcement a "game changer," according to Stars and Stripes.
Matt Reed’s recent column on experimental sites and competency-based education (CBE) offers just the kind of thoughtful analysis we’ve come to expect of his columns. He raises important questions about the role of faculty, the efficacy of approaches that include less instructional interaction, the viability of pay-for-performance aid models, and more. The answers to those questions today? We don’t know. And that’s why we need to support the Department of Education’s experimental sites proposal, to create safe places in which to explore the kind of thoughtful and constructive questions that Matt poses.
Last year saw the dizzying ascendancy of the massive open online course, driven by some combination of their blue-chip provenance, their creators’ triumphant claims, and the smitten embrace of popular media outlets (especially The New York Times).
To the satisfaction and relief of some, MOOCs have come back to earth. Still in search of a purpose (the job they are “hired to do,” to use a Clayton Christensen phrase), a business model, and an ideal user scenario, MOOCs are entering a more useful and realistic phase of their development. A lot of smart, mission-driven people are working on MOOC 3.0 (everyone forgets about MOOC 1.0, which came before Coursera and edX put MOOCs on the map), and we’ll see if MOOCs are 21st-century content, a platform innovation, or a powerful new disruptive presence in the educational landscape.
Competency-based education is the hot new innovation, at least in its latest incarnation, largely untethered from the structure of courses and credits, the basic building blocks of curriculums and thus learning. In truth, CBE has been around for decades and was pioneered by accredited nonprofits like Excelsior, Charter Oak, and Western Governors University. They have been joined by a growing number of new providers including the University of Wisconsin System, Northern Arizona University, Brandman University, Capella University, Lipscomb University, the Kentucky Community and Technical College System, and my own Southern New Hampshire University. Another 30 or more institutions are working on their own CBE offerings.
The Department of Education is exercising its authority to create experimental sites and has invited proposals for administering federal financial aid funds in new ways that support CBE models, and the White House is calling for more innovation and putting its weight behind CBE. The leading higher education associations – including EDUCAUSE, CAEL, AAC&U, and ACE – are joining in and announcing new initiatives, webinars, and meetings.
Accreditors are releasing new guidelines for CBE programs and the administration continues to pressure them by raising the possibility of new validation systems better suited to support innovative new delivery models. Think tanks and foundations have added their intellectual and financial backing to the effort. The hope, one I share, is that CBE can deliver on the holy triad of quality, cost (access), and completion.
This is a very different set of circumstances than those that have characterized the MOOC movement. CBE has an actual track record of success in its earlier iterations, is being embraced by powerful stakeholders, is being developed by institutions with deep understanding of the students they seek to serve, and is being tied into the established financial system of funding.
More importantly, CBE offers a fundamental change at the core of our higher education “system”: making learning non-negotiable and the claims for learning clear while making time variable. This is a profound change and stands to reverse the long slow erosion of quality in higher education. It is so fundamental a change that we hardly yet know all its implications for our world. For example:
If the claims we make for student learning really are non-negotiable, we will likely see a drop in completion rates, at least for some length of time;
We will have a lot of work to do around assessment, still difficult terrain in higher education;
The Department of Education, entrusted to protect billions of taxpayer dollars, will need reassurance that we have in place measures that guard against fraud;
If competencies are a new “currency” replacing credit hours, we will need to work out the “exchange rates” if we are to have a system that does not replicate the waste and inefficiencies of the current credit hour and transfer system;
Faculty roles are likely to be redefined, at least in some models, and a profession long in transition, and some would say under siege, will be further affected;
Student information and learning management systems are not designed for these new models, yet they form the administrative backbone that supports everything from registration to transcripts to billing to financial aid to... well, almost everything we do;
Accreditation standards, even new ones, will be tested and will have to evolve to reflect the lessons we learn over time.
In other words, if CBE is finally a movement, it is like many new movements still in search of the basics. It lacks a taxonomy, an agreed-upon nomenclature, the aforementioned exchange rate, a widely accepted form of documentation (what is the right form of CBE transcript?), the supporting systems, and experience with a wide variety of students.
This is why the Department of Education’s proposed experimental sites are so important. The key word here is experiment. Institutions need safe spaces in which to try new things, new rules by which to operate, the ability to rethink fundamental assumptions about how we deliver learning and support students, the chance to try new models for costing and paying, and tolerance for mistakes. If we are not making mistakes, it isn’t really innovation that’s going on.
We need a range of approaches to see what works best for what students in what settings. In return, institutions engaged in the work have to do their part. That includes collecting and providing data with a level of transparency that our industry has historically resisted (higher education is a culture that innately resists accountability outside of student grades), putting aside underlying competitive impulses to share what we learn, and finding ways to support students and quickly address the mistakes we must inevitably make (remembering that we never “play” with student welfare).
Experimental sites are important for what they allow, but also for what they (should) fend off. We should beware a premature setting of standards or guidelines. We should beware a premature overturn of the credit hour, flawed as it is, before we have worked out its substitute (or more likely, complementary system). We should beware an opening of the gates like the one that attended online learning, when unscrupulous players entered the market and abused the system for enormous gains at enormous costs for students and the federal government.
In other words, we need just the kind of good questions that Matt Reed poses in his recent column. We need leading thinkers like CAEL and AAC&U to help us think through the big questions before us. We need EDUCAUSE to help us spec out new systems and technologies. And we need to try various models, collect data, and work through the significant questions still in front of us so we can better inform policy-making and the reauthorization discussion now getting under way.
Traditional higher education is not going away any time soon, but CBE has the potential both to provide new affordable, high-quality pathways to students and to challenge our incumbent delivery models to better identify the claims they make for learning and how they know. Those demands, whatever CBE turns out to be, are not going away either, and CBE can function like the industry’s R&D lab. The proposed experimental sites align with that very useful role and deserve our collective support.
Paul LeBlanc is president of Southern New Hampshire University.
The Education Department has again rescheduled its “technical symposium” on the Obama administration’s proposed college ratings system. The new date for the daylong, public meeting is February 6, according to an email sent Thursday to presenters.
Education Department officials, citing poor weather conditions in Washington, D.C., earlier this week postponed the event and set February 20 as the new date. But, according to emails to speakers, officials have since decided they want to hold the conference sooner.
The symposium will feature more than a dozen people with expertise in higher education data, who will make presentations on various aspects of the department’s proposal to develop a ratings system.
A group of institutions that favor a competency-based approach to student learning have offered examples of the sorts of approaches they would try in a program the U.S. Education Department is contemplating to encourage such experimentation. The department in December issued an invitation to institutions to propose ways in which a waiver of certain federal financial aid rules, as part of an "experimental sites" program, might allow them to improve student outcomes, speed time to degree, and lower costs for students.
In their submission, the institutions -- which include a mix of traditional public and private institutions, online-only institutions, and community college systems -- proposed "testing new or alternative federal definitions of attendance and satisfactory academic progress," "decoupling federal financial aid from time-based measures," and allowing federal aid to flow to a degree program that mixes competency- and credit-hour-based learning, among other approaches.
The institutions are: Alverno College, Antioch University, Brandman University, Broward Community College, Capella University, Cardinal Stritch University, Charter Oak State College, Council for Adult and Experiential Learning, Excelsior College, Kentucky Community and Technical College System, Lipscomb University, Northern Arizona University, Southern New Hampshire University, SUNY Empire State College, University of Maryland University College, the University of Wisconsin-Extension, and Westminster College.
A survey of senior academic affairs officers in higher education has found that 84 percent of their institutions have common learning goals for students, up from 74 percent four years ago. This suggests that measuring student learning is now "the norm," says a report on the results from the National Institute for Learning Outcome Assessment. The study also found that the "prime driver" for assessment efforts is unchanged from the last survey: pressure from regional and specialized accreditation agencies.
It’s that time of decade again, when randomly selected departments at U of All People are faced with assessment. The administration brings in a posse of NAAAAAA experts with credentials bought from the people who sell fake IDs, and has the faculty entertain them for three days while they poke their noses into everything, including Professor Winkle’s Dryden seminar, which no one has disturbed in years. Here’s how the process works, at least in the English department:
Three months before the assessors arrive, the department is galvanized into action by the chair, acting on directives from the dean, obeying the orders of the provost, who bows to the president. “The assessors are coming, the assessors are coming!” shouts the chair from the comparative safety of the rostrum at the semester’s first departmental faculty meeting while everyone else dives for cover. After this warning shot comes the collective indignation of the faculty -- How dare they judge us? We’re in the humanities! -- as the professors go through the Kübler-Ross stages of denial, anger, bargaining, depression, and acceptance.
When everyone has settled down (except for Professor Winkle, who’s settled in for a nap), the chair starts planning the arduous task of self-judgment. The task consists of recruiting three faculty members who blinked at the wrong time, including Professor Winkle, who opened his eyes after his nap. The disgruntled three are assigned to gauge how much the students aren’t learning from the department’s courses.
What are the standards, criteria, methods? The Renaissance contingent proposes noble goals, such as achieving wisdom and learning to appreciate a Shakespearean sonnet, but no one wants to set the bar too high, or the assessment will be that this department needs to pull up its socks.
The faculty debate setting the bar absurdly low: for instance, that students should learn to read, but there’s no guarantee of students passing that bar, either. After several more meetings and the formation of a committee to oversee the assessment committee, the proposal is that each student should be familiar with the terms literature and irony; must know how to put together an argumentative essay proving that Shakespeare was a great writer; and should have enough literary history to realize that 1800 came after 1564, and that both are before 1922. These arbitrary criteria, once insisted upon, achieve a solidity as satisfying as trompe l’oeil papier-mâché walls.
The methods for data collection are decided by the assessment committee, eager to pass on responsibility to other, unwilling faculty. The methods involve snatching away student essays for disappointed analysis: counting how many times the words in my personal opinion and irregardless appear in the essays, seeing whether the arguments hold water (Professor Winkle performs that job over the sink in the fourth floor men’s restroom), and checking for spelling and grammar, assuming that the faculty are up to it.
As an extra concession, the department tracks alumni/ae to see whether anyone actually used the English major to wangle a job; and contemplates giving an exit exam to department seniors, though the offer of free pizza to anyone who’ll sit for the exam gets only three takers. The sample questions include references to periods, movements, literary terms, authors and works, and seven questions on Dryden. The sample size of all the data varies from a dozen to one faked reply by Professor Winkle.
Other creative assessment methods involve tossing the student essays downstairs to see which go farthest, and throwing the I Ching. To tabulate the results: charts with percentages look good, as do bulleted lists, though the superimposition of one over the other is probably (too late) a poor decision.
Tension mounts till the assessors arrive, at least one in a rumpled brown business suit, all looking as if they haven’t slept since the start of the fall semester. The assessors ask a lot of questions, visit classes, and interview people whom no one ever thought to talk to previously, including Clarice, the custodial supervisor for the liberal arts building. Eventually, they write up a report that recommends a 15 percent reduction in adjunct labor, greater funding for core courses, less departmental internecine warfare, and more attention paid to Dryden.
The report is circulated down the ranks until, months later, it reaches the English department faculty. Since the administration has ignored the implications of the report, the department restricts discussion to only 17 hours, spread out among four faculty meetings.
What rides on all this? Not much till next decade’s visit, when the department scrambles to recall what it did the last time.
David Galef directs the creative writing program at Montclair State University. His latest book is the short story collection My Date With Neanderthal Woman (Dzanc Books).