"Frenzy" may be the best way to describe what’s currently happening in higher education.
On one hand, there’s MOOC (massive open online course) mania. Many commentators, faculty creators, administrators, and public officials think this is the silver bullet that will revolutionize higher education.
On the other hand, there is the call for fundamental rethinking of the higher education business model. This is grounded most often in the argument that the (net) cost structure of higher education is unaffordable to an increasing number of Americans. Commentators point out that every other major sector of the economy has gone through this rethinking/restructuring, so it is only to be expected that it is now higher education’s turn.
Furthermore, it is often claimed that colleges and universities need to disaggregate what they do and outsource (usually) or insource (if the expertise is really there) a re-envisioned approach to getting all the necessary work done.
In this essay I focus on the optimal blending of online content and the software platforms underneath.
Imagine how transformative it would be if we could combine self-paced, self-directed postsecondary learning (which has been around in one form or another for millennia) with online delivery of content that has embedded in it both the sophisticated assessment of learning and the ability to diagnose learning problems, sometimes even before the learner is aware of them, and provide just-in-time interventions that keep the learner on track.
Add to that the opportunity for the learner to connect to and participate in groups of other learners, and to link directly to the faculty member and receive individualized attention and mentoring. What you would have is the 21st-century version of do-it-yourself college, grounded in but reaching well beyond the experienced reality of the thousands of previous DIYers such as Abraham Lincoln, Frederick Douglass, and Thomas Edison.
A good goal to set for the future? No need to wait: the great news is that we already have all the components necessary to make this a reality in the near term. First, it is now possible to build “smart” content delivered through systems that are grounded in neuroscience and cognitive psychological research on the brain mechanisms and behaviors underlying how people actually learn. The Open Learning Initiative at Carnegie Mellon University, which creates courses and content that provide opportunities for research for the Pittsburgh Science of Learning Center (PSLC), is an example of how research can underlie content creation.
Such content and systems depend critically on faculty expertise, in deciding exactly what content is included, in what sequence, and how it is presented. Faculty are also critical in the student learning process, but perhaps not solely in ways we have traditionally thought. That is, it may not be that faculty are critical for the actual delivery of content, a fact we have known for millennia given that students obtain content through myriad sources (e.g., books) quite successfully.
Still, effective and efficient student learning has always depended critically on how well faculty master both these content steps and the other parts of the learning process, as evidenced by the experience with faculty who are experts at doing it and the ease with which learning seems to happen in those situations.
Second, these “smart” systems exist in a context of sophisticated analytics that do two things: (a) monitor what the learner is doing such that it can detect when the learner is about to go off-track and insert a remedial action or tutorial just in time, and (b) assess what the learner knows at any point. These features can be used to set mastery learning requirements at each step such that the learner cannot proceed without demonstrating learning at a specific level.
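The mastery-gating idea described above can be pictured with a minimal sketch. Everything here is hypothetical: the function names, the 0.85 mastery threshold, and the simple fraction-correct estimate are invented for illustration, not drawn from any actual platform, which would use far more sophisticated statistical models.

```python
# Hypothetical sketch of mastery-based gating: the learner may advance to the
# next unit only after demonstrating proficiency on the current one, and is
# routed to a just-in-time tutorial when performance drops.

MASTERY_THRESHOLD = 0.85  # assumed cutoff; real systems tune this empirically


def estimate_mastery(responses):
    """Crude mastery estimate: fraction of the most recent items correct.

    `responses` is a list of 1 (correct) / 0 (incorrect) answers.
    """
    if not responses:
        return 0.0
    recent = responses[-10:]  # weight recent performance over older attempts
    return sum(recent) / len(recent)


def next_action(responses):
    """Decide whether to advance, remediate, or keep practicing."""
    score = estimate_mastery(responses)
    if score >= MASTERY_THRESHOLD:
        return "advance"      # mastery demonstrated; unlock the next unit
    elif score < 0.5:
        return "remediate"    # insert a remedial tutorial just in time
    return "practice"         # assign more items on the current unit
```

For example, a learner with ten consecutive correct answers would be advanced, while one missing most recent items would be routed to remediation before being allowed to proceed.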
Ensuring mastery of content has long been a major concern for faculty, who used to have to spend hours embedding pop quizzes or other learning assessments into their courses, setting up review sessions, holding office hours that students may (or may not) attend, and imploring students to contact them if they encountered difficulties. The dilemma for faculty has usually been figuring out who needs the assistance, when, and how.
The sophisticated analytics underneath content delivery systems help take the guesswork out of it, thereby enabling faculty to engage with more students more effectively, and, most important, to design the engagement to address each student’s specific issue. Better student-faculty interactions will likely do more to improve student learning than almost any other intervention.
Third, the platforms on which these “smart” systems are built and delivered include ways to create virtual teams of learners (both synchronously and asynchronously) and to include faculty interaction from one-on-one to one-on-many. This tool will make the long tradition of having students form study groups easier for faculty to accomplish, and enable students whose physical location or schedules may have made it difficult previously to participate in such groups to gain their full benefit.
Fourth, the creation of these “smart” systems has resulted in much clearer articulations of the specific competencies that underlie various levels of mastery in a particular field. As evidenced by the various articulations and degree profile work done in the U.S. and internationally, and by the development of specific competencies for licensure by several professional associations, faculty play a central role.
Fifth, the specification of competencies makes it easier to develop the rubrics by which learning acquired prior to formal enrollment in a college/university, or in other ways not otherwise well-documented, can be assessed, and the learner placed on the overall continuum of subject mastery in a target field or discipline. Although faculty have always played a central role in such assessments, standardization of assessment has proven difficult. However, with the inclusion of faculty expertise, assessments such as Advanced Placement exams and learning portfolios can now be accomplished with extremely high reliability.
All of this could have enormous consequences for higher education. To be sure, we need more research and development of a broader array of content and delivery approaches than we currently have. In the meantime, though, three steps can be taken to meet students’ needs and to increase the efficiency with which colleges and universities provide the educated citizens we need:
Define as many postsecondary credentials as possible in terms of specific competencies developed by faculty and practicing professionals. This will provide the bases for developing as many “smart” systems as possible for improved content and learning assessment, and for assessing prior learning.
Meet students at the edge of their learning. Each student who arrives at a college/university is at a different spot along the learning continuum. Previously, we made at best very rough cuts at determining where students should start in a course sequence, for example. But more sophisticated prior learning assessment means we can be much more precise about matching what the student knows with where s/he should connect to a learning sequence. Not only would this approach minimize needless repetition of content already mastered, but it could also provide faster pathways to credentials.
Design personalized pathways to credentials. Better and clearer articulation of what students need to know for a specific credential, plus better assessments of prior and ongoing learning, plus more sophisticated content, plus the opportunity for faculty to engage individually and collectively with students in more focused ways means we can create individual learning plans for students to complete the credentials they need. In essence, a learning gap analysis can be done for each student, indicating at any point in time what s/he still needs to know to achieve a credential. Faculty mentorship can become more intrusive and effective, and a student’s understanding of what and why specific knowledge matters would be deeper.
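The "learning gap analysis" in the third step above amounts to a set difference: the competencies a credential requires, minus those the student has already demonstrated. The sketch below is purely illustrative; the competency names are invented, and a real system would order the remaining work by prerequisite structure rather than alphabetically.

```python
# Hypothetical sketch of a per-student learning gap analysis: compare the
# competencies a credential requires against those the student has already
# demonstrated (in prior courses or via prior-learning assessment).

CREDENTIAL_COMPETENCIES = {
    "statistics-basics",
    "data-visualization",
    "research-methods",
    "writing",
}


def learning_gap(demonstrated):
    """Return the competencies the student still needs for the credential."""
    return CREDENTIAL_COMPETENCIES - set(demonstrated)


def personalized_pathway(demonstrated):
    """Turn the remaining gap into an ordered study plan.

    Alphabetical order stands in for what a real advising system would do:
    sequence competencies by their prerequisite relationships.
    """
    return sorted(learning_gap(demonstrated))
```

A student who has already demonstrated "writing" and "statistics-basics" would be left with a two-item pathway covering data visualization and research methods, and the gap can be recomputed at any point as new mastery is demonstrated.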
Institutions that have greater flexibility to address these steps will be the most likely to succeed. I am heartened by the many professors and administrators who are creating the innovative approaches to make the changes real, and to embed them in the culture of their respective institutions. They provide students with superior advising and clearer pathways to achieving the academic credentials students seek. In the longer run, those institutions are likely to see cost structures decline due to more efficient progress through academic programs.
The technology-driven changes described here may well enhance student learning, and help us reach the goal of greater access to higher education for adults of all ages.
But they raise a crucial, and largely unaddressed, question that gets lost in debates about whether costs can be reduced using such technology or whether it will result in fewer faculty jobs.
We have not yet adequately confronted the definition of “faculty” in this emerging, technology-driven environment. Although a thorough discussion of that issue necessarily awaits a different article, suffice it to say that just as technology and costs have changed the job descriptions of people in most other professions, including health care, they have also created new opportunities for the people in those professions. For instance, even though the rise of nurse practitioners has changed key aspects of health care delivery, the demand for more physicians, whose job descriptions may have changed, remains.
In any case, the best part is that these new approaches do not replace the most important aspect of education — the student-teacher interaction. Rather, they provide more effective and efficient ways to achieve it.
John C. Cavanaugh is president & CEO of the Consortium of Universities of the Washington Metropolitan Area.
I know! I know! Everyone is sick to death of debating the pros and cons of MOOCs, the massive online courses that, depending on your viewpoint, will be the downfall or resurrection of higher education. But what's getting lost in all the noise is that MOOCs are far from the only game in town when it comes to online education.
Key in determining the effectiveness of a course, both online and on the ground, is how actively it is being taught and how effectively it is engaging students.
Educators are creating and tweaking a number of very different learning models to engage students in "active learning," both in the physical classroom and the virtual world – often in intriguing combinations.
Based on innumerable conversations with faculty, students, administrators, staff, and the general public, the following are the three most important things I know about the role distance education plays in higher education today and about how to create high-quality programs.
Distance education is not a singular thing.
Educators and administrators often use only the terms "synchronous" and "asynchronous" to differentiate among distance education models. But the most critical descriptor of distance education models has nothing to do with the extent of live instruction; rather, it is the extent to which a course is "actively taught."
On one side of the active-teaching spectrum is a "course-in-a-box" -- a course with pre-built media assets meant to stand alone, with minimal or no involvement or intervention by the faculty. MOOCs, for instance, often consist of pre-recorded high-production video and automated assessments. If the faculty member were to disappear or otherwise disengage from the course, the course would still exist. The thousands of students in the MOOC could simply press the play button on the screen, answer automatically graded test questions and otherwise enter input as appropriate. And, of course, the size of the MOOC is nearly limitless, subject only to technology capacity constraints.
On the other side of this spectrum is the very actively taught class. Independent of media assets available to students, faculty teach. They communicate with students, lead discussion, provide feedback, and otherwise engage. If a faculty member were to stop teaching, the class would cease to exist. Typically, such actively taught courses are smaller and require that faculty know and interact with students much more intimately, more like a seminar than a lecture hall.
Some MOOCs employ teaching assistants, striving for modest interaction with students. However, in most cases, the scale of MOOCs overwhelms even multiple instructors; plus, TAs are, by definition, not faculty. Thus, while MOOCs may be great for personal enrichment, most are not yet appropriate for college credit, given that they are largely unresponsive to the learning needs of any given student.
The questions being asked about effective distance education aren’t all that different from those concerning "traditional" teaching models.
Just as with traditional education, one of the greatest challenges of distance education is how to better engage students. Traditional educators often discuss the role of lecture, discussion, feedback, group projects and peer assessment. Today they also talk about "flipping the classroom" so that lectures and other didactic material are recorded and made available to students outside of class. Class time can then be reserved for discussion and application.
Understanding that student engagement is highly correlated to active teaching, distance educators are addressing the very same issues. The "course in a box" model is rarely engaging -- many MOOCs create very passive experiences for students, who are required to watch hours of video and answer machine-graded multiple choice questions.
That said, some "course in a box" exceptions come close to rivaling substantive live interactions. Simulations, games, and other online modules in which students must solve problems and make decisions within an automated environment can be very effective teaching tools that adapt to students’ varying levels of skill and mastery. Fully adaptive learning technologies may, in fact, be more engaging than traditional teaching, given that students’ learning experiences may be customized to individual needs.
Of course, not even all traditional education is "active." A professor’s recitation of pre-written 75-minute lectures twice a week for an entire term would hardly be more active than simply recording those lectures and posting them on a website. An actively taught traditional course, like a distance education course, would require the faculty member to engage much more intimately with students through discussion, feedback, and more.
While some asynchronous models have no active teaching element -- including many MOOCs -- others rely on highly active and present faculty to asynchronously engage with students. Asynchronous communications, including group discussion boards, blogs, and wikis, can lead to more substantive exploration of course material than live, in-person conversations. Some faculty report that asynchronous communications allow students to better digest and consider others’ opinions while constructing their own beliefs, and can lead to deeper and more robust discussions.
Putting aside the aforementioned adaptive and interactive learning technologies (which are still relatively rare), an active teacher can better understand the needs of each student and differentiate instruction, customizing discussion and explanations as appropriate. Non-active teaching -- whether through distance or traditional education -- tends to be inflexible and monolithic.
Faculty conversations about distance education are shifting markedly.
Faculty today are less interested in debating the quality of distance education and how much a student can learn. Perhaps the launch of edX by MIT and Harvard opened the gates -- suddenly high-profile, top-notch universities were committing to distance education with significant resources, searching for new ways of teaching and learning.
For whatever reason, today’s conversations by faculty focus less on quality and more on the qualities of distance education. Many express concern that a distance course may be deficient at enhancing cognition, emotion and interpersonal relationship-building, or at developing the "whole student." These are reasonable concerns. No serious distance educator would ever suggest that distance education fully supplants the benefits of a live in-person experience. Rather, we argue that the loss of face-to-face benefits in a classroom can be mitigated in a distance learning environment if students achieve the intended learning outcomes while benefiting from convenience and increased access to higher education.
Faculty are also keenly interested in the impact of distance education on higher education broadly and the faculty workforce specifically. Given that distance courses can be taught by faculty anywhere in the world to students anywhere in the world, they question whether distance education will result in a sort of standardization of curriculum, fewer faculty at their home institutions, and a lower standard of quality.
While not unreasonable, such questions must be considered within the context of how distance education is evolving. If today’s MOOCs become widely available for credit, concern would be merited. However, if most credit-bearing distance education is "actively taught," then the risks are lessened, if only because the costs of actively taught distance education can be just as great as the costs of traditional education.
Besides, without dramatic change, institutions of higher education, many of which are in financial distress, face a highly uncertain future. The question to ponder: how a future with distance education compares to all other possible futures for higher education.
Joel Shapiro is associate dean of academics at Northwestern University School of Continuing Studies and has taught in and led distance education programming at Northwestern for more than six years.
Submitted by Gary S. May on September 10, 2013 - 3:00am
When a new product is launched, particularly in technology, people often rush to be among its early adopters. The sudden explosion of users invariably reveals bugs and glitches that need to be addressed.
This is analogous to what we appear to be witnessing right now with massive open online courses. An unrelenting stream of attention-grabbing announcements is being followed by closer inspection – and the realization that, although MOOCs are a novel approach to education, they may not be a panacea.
The picture of MOOCs presented in the press is quite a paradox. The concept has been described as both a game-changer and a hyped retread. MOOCs deliver great content to faraway places, but some believe they place academic quality in peril. They are financial enigmas — offering the potential to bend the higher education cost curve, yet lacking an accepted plan for monetization. Some leaders in higher education are scrambling to get into the game; others are issuing a call to slow down.
The contradictions are rich, and the hyperbole in full bloom. Personally, I find all of the discourse to be a positive sign. The intensity of the MOOC dialogue indicates a chord has been struck. The promise of technology and access is igniting a larger discussion about the higher education paradigm. The initial rush has evolved, but what’s next? Where is this train ultimately headed?
First, let’s keep in mind that in the technology adoption life cycle, MOOCs are probably somewhere between innovation and early adoption; it’s too early to declare victory or to reject the concept before it has been further tested, evaluated and refined. Second, colleges and universities are ground zero in the exploration of ideas. If you can’t experiment here, then where?
In thinking about this issue as an engineer, I am reminded of the Wright Brothers and their pursuit of human flight. The brothers’ first test glider in 1900 failed to achieve the altitude that Wilbur and Orville had anticipated. So they revisited their equations and re-analyzed the aerodynamic data obtained from the aviator, Otto Lilienthal. They increased the size of the wings and refined the sloped surface of the airfoil, but additional adjustments brought the same disappointing results. It would be another two and a half years before the Wright brothers succeeded in launching and controlling a powered aircraft.
What if their early struggles with the gliders had gotten the better of the Wrights? How much longer might humankind have had to wait to fly?
The same might be asked today of MOOCs. The dawn of a new academic year seems an appropriate time to contemplate such questions and share a few observations on higher education’s latest grand experiment:
1. The prospect of MOOCs replacing the physical college campus for undergraduates is dubious at best. Other target audiences are likely better-suited for MOOCs. My university, the Georgia Institute of Technology, is preparing to offer an inexpensive M.S. degree in computer science via massive (but not open) online courses beginning January 2014, with two options. The on-campus version has a research emphasis, requiring one-on-one interaction, whereas the online degree caters to professionals by focusing on applying advanced knowledge in the workplace. If the program succeeds, thousands are expected to enroll in this $7,000 M.S. degree program.
2. In addition to the master’s level, MOOCs may also help level the playing field for precollege education. This is another area of the MOOC wilderness being explored. With a $150,000 grant from the Bill and Melinda Gates Foundation, for example, Georgia Tech is offering MOOCs in three introductory topic areas for people who have yet to pursue a college degree. One can also easily extrapolate and imagine MOOC-like advanced placement courses available to students at high schools without their own Advanced Placement offerings.
3. Despite challenges, delivering content online could be a real asset to enhance pedagogy for undergraduates as well. The inverted classroom – in which students and faculty convene solely for discussion, and all lectures take place online – appears to have significant promise. For example, a recent comparison between a standard fluid mechanics course at Georgia Tech and its "flipped" counterpart revealed that weaker students in the flipped classroom actually outperformed stronger students who experienced traditional delivery of the material.
American higher education finds itself at a pivotal point in its great MOOC experiment. We must continue working to optimize MOOCs so that their promise and potential can be realized. While operational and execution issues remain, MOOCs still represent a tremendous opportunity for people around the world to learn and for educators to study and optimize that learning process.
A realistic time frame for evaluating the successes, failures, and unanticipated results is still likely another three to five years away. But, as Wilbur Wright said about learning to fly: "If you are looking for perfect safety, you will do well to sit on a fence and watch the birds; but if you really wish to learn, you must mount a machine and become acquainted with its tricks by actual trial."
Gary S. May is dean of the College of Engineering at Georgia Institute of Technology.
Pearson will expand its partnership with the adaptive learning technology company Knewton to offer MyLab and Mastering products in six new subject areas this fall, the education company announced on Thursday. MyLab and Mastering, e-tutoring products that "continuously [assess] student performance and activity in real time," have been available since fall 2012 for students in math, economics, reading and writing. With the addition of topics including biology, anatomy and physiology, chemistry, physics, finance and accounting, Pearson estimates the products will reach about 400,000 students.
While some observers say academe is already moving to a post-MOOC era or one dominated by MOOC-like offerings that aren't really massive open online courses, the MOOC itself has a new symbol of recognition. Oxford Dictionaries, published by Oxford University Press, has now added MOOC as an official word.
Definition: "a course of study made available over the Internet without charge to a very large number of people."
Origin: "early 21st century: from massive open online course, probably influenced by MMOG and MMORPG."
There are no easy solutions to these problems. Nevertheless, we think that a more public-facing academy is a necessary, if insufficient, response. Public engagement helps to demonstrate the value of research. It also helps to generate a larger audience for scholarly research and therefore potentially more revenue for publishers. We are not suggesting that research intended for a broader audience can or should supplant research targeted at the scholarly community. But we think there is room for more scholars to demonstrate that their expertise is important outside their subfield.
We have a new book on the 2012 presidential election, The Gamble, that provides one model for public engagement. The book was designed to be an accessible academic account of the election, written in real time and published within a year of the election itself — standard timing for books focused on the general public, but an unusually short time frame for a scholarly book. Together with our publisher, Princeton University Press, we structured the project so that we could enter into the ongoing public discussion about the election alongside pundits and journalists — via continuous analysis and writing, serializing the process of peer review, and accelerating the final mechanics of publication.
Our experience writing this book suggests to us that there are underutilized opportunities for both scholars and their publishers to innovate on traditional modes of academic writing and thereby bring scholarly research to a much larger audience. We joked over the past two years that part of "the gamble" was simply writing the book itself. We believe that this gamble has paid off, and we offer our story in hopes that it might encourage others to roll the dice. We think this sort of project can benefit scholars, publishers, and the broader public alike.
Why We Wrote the Book
The book was motivated by two goals. The first was simply to tell the story of what promised to be a lively and competitive election. The second goal was to amplify the voice of political science in the conversation about the election—from events on the campaign trail to explanations and interpretations of the election after it was over.
Journalists typically write the history of American presidential elections, a history built on their access to decision-makers in the campaigns. We believed that the social scientific study of campaigns, with its emphasis on systematic data and statistical analyses, adds something important. Whereas journalistic accounts effectively capture why campaign principals made the decisions they did, a political science account can better determine whether those decisions mattered.
The problem, however, is that political scientists — like most academics — usually work too slowly to have much influence. Science takes time, and so the first academic articles might appear about 18 months after an election. Academic books may take two to three years or even longer. By this point it is too late. The conventional wisdom about the election has congealed — whether it is correct or not — and journalists, commentators, and voters are already thinking ahead to the next election. After the 2004 election, for example, the misinterpretation of a single question on the exit poll led some commentators to attribute President George W. Bush’s victory to his appeal among "values voters."
We wanted to be different. We wanted to write an academic book, but with a journalist’s faster metabolism.
How We Wrote the Book
In August 2011, we pitched the book to several different presses. In February 2012, we signed a contract with Princeton University Press. In August 2012, the first two e-chapters of the book were made available for free by Princeton Press. In January 2013, a third e-chapter was released. In April 2013, a fourth and final e-chapter debuted. In September 2013, the print edition of the book will be published — including revised versions of the e-chapters as well as four additional chapters. Looking back at our drafts of the initial and final chapters, we wrote the entire book in about a calendar year. How were we able to do this?
First, like Lennon and McCartney, we got by with a little help from our friends. Their help was most evident in the data we were able to obtain at no cost — weekly survey data from the firm YouGov, daily data on media coverage from the firm General Sentiment, data on candidate advertising courtesy of The Washington Post, and multiple other datasets from generous colleagues. These data were necessary to make our book stand apart from other accounts of the campaign. Most importantly, we received these data promptly and continuously, allowing us to do analysis while the campaign was under way.
Second, we wrote about our findings in public forums during the campaign itself. This writing had several benefits. It helped ensure that the book would be completed in time. It allowed us to elicit responses to our argument that, at times, led to revisions and corrections — a sort of crowdsourced peer review. And it put our perspective into the conversation happening in the moment. We found blogs to be the ideal venue for doing this because they allowed us to write and publish with minimal editorial delay and to get feedback in comments threads under each blog post. We contributed to The Monkey Cage, YouGov’s Model Politics, Campaign Stops and FiveThirtyEight at The New York Times, and Wonkblog at The Washington Post.
Finally, and perhaps most important to the successful completion of the book, was the innovative plan devised by Princeton University Press (PUP), which certainly took a gamble as well. Our editor at the project’s inception, Charles Myers, convinced us that the book would be more accessible to a non-academic audience if it had a chronological narrative at its core, rather than the thematic structure that academics often favor. Then, as we completed drafts of individual chapters, PUP sent them out for peer review, rather than waiting until we had finished the entire manuscript. PUP had secured reviewers in advance and requested a tight turnaround. PUP also produced the multiple e-chapters that allowed the book to be partially serialized. In their view, having these e-chapters — and giving them away for free — would help build interest in the book. Over 2,000 copies of these chapters were downloaded from Amazon, in addition to an untold number of PDF copies downloaded from the PUP website or The Monkey Cage. Several colleagues assigned these chapters to their students, circulating them further.
PUP also accelerated the process of producing a print volume — giving us stringent deadlines that we had to meet. We managed to do this with modest success, although we created delays by adding a new chapter at the 11th hour and by fine-tuning analyses for weeks on end. But ultimately, we finished the manuscript in time to produce a book that would be published alongside, or even before, the journalistic accounts. PUP deserves credit here as well, as it is taking them only three months to turn that final manuscript into a book available for purchase.
What impressed us throughout this process was the press’s flexibility and willingness to innovate. The press showed how to take the existing model of scholarly publishing — one centered on peer review — and modify that model to produce a book that was still rigorous but also timely and, we hope, lively.
Did “The Gamble” Pay Off?
We believe that it did. We sought to tell the story of this election, and we believe that our account provides a novel perspective that challenges much conventional wisdom. More than a few commentators argued that the underlying economic and political fundamentals were not in Obama’s favor. We show that this was untrue: the economy was growing fast enough for the incumbent to be favored. Many commentators also saw the Republican primary as a search for "anybody but Romney." We show that this was also untrue. The many anybodies — Rick Perry, Herman Cain, Newt Gingrich, Rick Santorum, etc. — surged largely because of temporary increases in media coverage of them, and not because Republican voters had any underlying hostility toward Romney himself.
After the election, commentators were quick to attribute Obama’s victory to his superior campaign. We show that the effects of things like campaign advertising and field organizing were likely not large enough to account for Obama’s victory. We also call into question many prevailing interpretations of the election — that it augured a Democratic realignment, that it suggested a profoundly "Liberal America," that it suggested the Republican Party needed a complete overhaul. On the whole, the 2012 election was very much what extant political science research led us to expect, and this suggested that a book building on and elaborating that research could make a useful contribution.
We also sought to be part of the conversation among journalists and commentators, and we felt included in that conversation. This was reflected in opportunities and invitations to contribute to media outlets — such as our collaboration with Ezra Klein to develop a forecasting model for Wonkblog. It was reflected in the willingness of high-profile journalists and commentators to endorse the book. It was reflected in the ways commentators chose to engage with political science in their own writing. Even when they disagreed with us, with other political scientists, or with their own conception of what "political scientists say," that engagement was better than being ignored.
Of course, we will have a better sense of whether our book has any particular impact after it is out. But regardless we believe — although it is difficult to measure — that political science ideas and findings are much more in the bloodstream of campaign journalism and punditry than they once were.
John Sides is an associate professor of political science at George Washington University. Lynn Vavreck is an associate professor of political science and communication studies at the University of California at Los Angeles.
Blackboard announced last week that Ray Henderson is leaving his position as president of academic platforms at the company and is joining its Board of Directors. On his blog, Henderson characterized the shift as one that could expand his influence. "It means I’ll no longer manage day-to-day operations for our Academic Platforms group. In handing off the day to day, I’ll take a new role that will provide me a perch with broader purview across the whole of Bb. I’ve been enlisted to think about the whole of Bb and its pieces, and how they might come together to produce the most coherent and effective global education company that we can design," he wrote.
Henderson's shift is likely to be closely watched (and it already is). He came to Blackboard from Angel, when Blackboard purchased that company in 2009. Henderson was seen by many as more communicative and more open to ideas than other Blackboard leaders at the time -- and his presence has reassured not only customers of Angel but many other Blackboard customers. The e-Literate blog noted that Henderson's move comes at a time of a number of prominent job changes in the learning management system industry.
Despite the praise heaped on California Senate Bill 520 by Phil Hill and Dean Florez in a recent panegyric published in Inside Higher Ed, the bill was not the right answer for California’s higher education access woes, and it is a poor model for other states to emulate.
A bill that would open the door to for-profit companies -- including unaccredited “fly-by-night” ones -- to offer courses in the name of a state’s colleges and universities is fraught with danger. A bill that would require a state’s colleges and universities to outsource their core educational function is truly misguided, however well-intentioned the idea may have been.
That’s the real reason for the huge uproar and the rare universal opposition to California’s SB 520 from those close to higher education -- both faculty groups and the universities themselves.
Let’s be clear about one thing that’s not acknowledged in Hill and Florez’s piece: colleges and universities around the country already allow transfer credit from other universities as long as those courses meet the quality control standards of the home institution.
That tradition has been in place for a long time precisely to balance the needs of students who often take courses at more than one institution with the needs of the public to ensure quality control and the integrity of degrees from its taxpayer-funded institutions. The people of California (including employers) need to know that a degree from the University of California, the California State University, or a state community college is just that -- and not something offered by an unknown entity.
By mandating that state public colleges and universities begin a process of outsourcing their courses, SB 520 would have seriously weakened transparency and accountability in the state's institutions of higher learning. That’s one reason why the provosts of major universities in the Midwest have argued against similar schemes at their institutions. Alumni and trustees at Thunderbird Business School have also expressed serious concerns about how such a proposed relationship would threaten the reputation of that school and the value of its degrees for all students.
There is good reason for such concern, for cautionary tales about relying on for-profit companies to offer a college’s courses are unfolding right now around the country. In a December 2012 court settlement, for instance, the New York Institute of Technology was found legally and financially liable for actions of its for-profit partner. More recently, Tiffin University has seen its accreditation threatened because of over-reliance on unaccredited for-profit companies to offer its courses.
If SB 520 had passed, it would not have expanded meaningful access to quality higher education in the state. But it would have thrown open the door to massive profits for edu-businesses, which are accountable not to the people of California, but to investors and stockholders. No wonder so many CEOs were there to praise SB 520.
Florez and Hill labor mightily to make SB 520 sound bold and innovative, an effort to “wake up [California’s] higher education community,” they say. What everyone, including the state’s elected leaders, really needs to wake up to are the fundamental facts about higher education funding in California.
According to a report published in February 2013 by Postsecondary Opportunity: The Pell Institute for the Study of Opportunity in Higher Education and titled “State Disinvestment in Higher Education FY1961 to FY2013,” California’s state fiscal support for higher education as a percentage of state personal income dropped by 58.2 percent (adjusted for inflation) between 1980 and 2013. The trajectory is clear: if the current long-term trend continues, California will reach zero in state funding for higher education in the year 2054.
Unfortunately, as Postsecondary Opportunity’s research demonstrates, many other states are also in a “Race to Zero.”
SB 520 was no “wake-up call” for anyone. It was, in fact, a dangerous diversion from the reality that there is simply no substitute for public investment in higher education, and there is no single cheaper teaching modality or low-cost “magic bullet” that will meet our need for qualified college graduates.
With all that is at stake for the futures of millions of students and for our country, we need to take a harder look at so-called “innovative” solutions that make the old promise of “something for nothing.”