The news that Purdue University likely overstated the impact of its early warning system, Course Signals, has cast doubt on the efficacy of a host of technology products intended to improve student retention and completion. In a commentary published in Inside Higher Ed, Mark Milliron responded by arguing that “next-generation” early warning systems use more robust analytics and will be likely to get better results.
We contend that even with extremely robust and appropriate analytics, programs like Course Signals may still fall short if their adoption ignores the most pressing piece of electronic advising systems — their use on the front end, by advisers, faculty and students. Until more attention is paid to the messy, human side of educational technology, Course Signals — and other programs like it — will continue to show anemic impacts on student retention and graduation.
Over the past year, we have worked with colleges in the process of implementing Integrated Planning and Advising Systems (which include early warning systems like Course Signals). The adoption of early warning systems requires advisers, faculty and students to approach college success differently and should, in theory, refocus attention on how they engage with advising and support services. In practice, however, we have found that colleges consistently underestimate the challenge of ensuring that such systems are adopted effectively by end-users.
The concept of an early alert is far from new. In interviews, instructors and advisers have consistently reminded us that for years, students have received “early alert” feedback in the form of grades and midterm reports. Early warning systems may streamline this process, and provide the reports in a new format (a red light instead of a warning note, for example), but the warning itself isn’t terribly different.
What is potentially different about products like Course Signals is their ability to connect these course-level warnings to the broader student support services offered by the college. If early warning signals are shared across college personnel, and if those warnings serve to trigger new behaviors on their part, then we are likely to see changed student behavior and success. In other words, sending up a red light isn’t likely to influence retention. But if that red light leads to advisers or tutors reaching out to students and providing targeted support, we might see bigger impacts on student outcomes.
Milliron says, for example, that with predictive analytics, “student[s] might be advised away from a combination of courses that could be toxic for him or her.” But such advising doesn’t happen spontaneously: it requires advisers to be more proactive in preparing for and conducting each advising session. They must examine a student’s early warning profile, program plan and case file prior to the session; they must reframe how they present course choices to students; and they have to rethink what the best course combinations are for students with varying educational and career goals, as well as learning styles and abilities. Finally, they may have to link students to additional resources on campus — such as tutoring — and colleges need to ensure these services exist and are of high quality.
For this process to occur, advisers need to be well-versed in how to use the analytics, and be encouraged to move past registering students for the most common set of courses to courses that make sense for the individual. But because most colleges remain uncertain about the process changes that should occur when they adopt early warning systems, they are unable to provide the training that would help faculty and advisers make potentially transformative adjustments in their practice.
Even if colleges do adequately prepare faculty and advisers for this transition, there is much we still don’t know about how students will perceive and use the data and messages they receive from early warning systems. These unknowns may influence the extent to which the systems impact student outcomes.
For example, if students perceive early warnings as a reprimand rather than an opportunity to get help, they may ignore the signals or avoid efforts of college personnel to contact them. To anticipate and mitigate these kinds of potentially negative responses, it is important to understand how all students, not just those who use and enjoy early alert systems, experience and react to such signals. As Milliron notes, we need to figure out how to send the right message to the right people in the right way.
Early warning systems are only tools, and colleges will have to pay closer attention to changing end-user culture in order to maximize their effectiveness. Currently, colleges are skipping this step. At the end of the day, even the best system and the best data depend on people to translate them into actions and behaviors that can influence student retention and completion.
Melinda Mechur Karp is a senior research associate at the Community College Research Center at Columbia University's Teachers College. Also contributing to the essay were Jeff Fletcher, a senior research assistant, Hoori Santikian Kalamkarian, a research associate, and Serena Klempin, a research associate.
Cengage Learning, the second-largest higher education publisher in the U.S., on Tuesday announced it has formed a partnership with Knewton to provide adaptive learning technology in a handful of its products. Cengage will use Knewton technology in the company's MindTap platform, an interactive textbook reader. The technology will first appear in the management and sociology disciplines, a Knewton spokesman said.
For decades, the Supreme Court has kept vigil over the campuses of state universities as, in the words of one memorable 1995 ruling, "peculiarly the marketplace for ideas." No opinion, the Supreme Court has emphasized, is so challenging or unsettling that it can be banned from the college classroom.
Forget the classroom – professors today are fortunate if they can be safe from punishment for an unkind word posted from a home computer on a personal, off-campus blog.
The Kansas Board of Regents triggered academic-freedom alarm bells across America last month with a hastily adopted revision to university personnel policies that makes “improper use of social media” grounds for discipline up to and including termination. (While the board this week ordered a review of the policy, it remains in place.)
While described as a restriction on “social” media, the policy is nothing of the sort. By its own terms, the policy is an assertion of college authority over “any facility for online publication and commentary.” (Kansans, think twice before pushing “send” in the comments section of this article.)
The breathtaking sweep of the regulation – it seemingly would confer jurisdiction over every online appearance, from an interview with Slate magazine to an academic article in a science journal – evidences an eagerness to control the off-the-clock lives of employees that is itself cause for suspicion.
The policy purports to create two categories of online speech. Speech made “pursuant to” or “in furtherance of” official duties is subject to essentially complete regulation, and penalties up to firing may be imposed for any speech deemed “contrary to the best interest” of the institution.
All other online speech is punishable if it adversely affects the workplace, but only after a “balancing analysis” that considers the institution’s interests in “efficiency” against the employee’s interest in addressing matters of public concern.
These categories roughly track the Supreme Court’s employee-speech jurisprudence. But the Kansas regulation dangerously oversimplifies the law of employee First Amendment rights in ways that invite abuse.
The Court’s 1968 ruling in Pickering v. Board of Education marks the headwaters of public employee First Amendment protection. There, in the case of an Illinois teacher fired for a letter to the editor about a school bond issue, the court coined its “Pickering balancing test” to determine whether employee speech may lawfully be punished.
The test requires weighing “the interests of the teacher, as a citizen, in commenting upon matters of public concern” against “the interest of the state, as an employer, in promoting the efficiency of the public services it performs through its employees.”
Pickering was curtailed in the 2006 ruling, Garcetti v. Ceballos, involving a California prosecutor fired over an internal memo critical of the way the police department handled evidence. The Garcetti case essentially recognized that, when a dispute involves speech contained in an official work assignment, that is the government’s speech and not the individual’s. Accordingly, the individual cannot claim a First Amendment violation if the speech displeases a supervisor, and no balancing of interests is even necessary.
Although some lower courts have expansively applied Garcetti in dubiously supportable ways, it’s essential to recognize just how narrow the Garcetti decision really is.
Properly understood, Garcetti applies only where the speech itself is a work assignment – not where the speech is about work responsibilities. Prosecutor Richard Ceballos lost his First Amendment case because his speech came in a memo he was assigned to write. The same message in an interview with The Los Angeles Times – or on Facebook – might well have been protected.
Indeed, the Supreme Court painstakingly made the distinction in Garcetti between speech that “concerned the subject matter” of an employee’s work (which remains highly protected) versus speech “pursuant to” official duties, which Garcetti left unprotected.
Importing the Garcetti standard into the employment policies of Kansas universities raises two principal legal concerns.
The first is why Garcetti language belongs in a policy about off-hours social media activity at all. Few positions at a university require creating social media as part of official job duties. For the few that do, the Kansas policy is unnecessary. If you are the employee in charge of managing the university’s Facebook page, doing that job badly has always been grounds for removal.
Enactment of a new regulation suggests something more – a desire to extend authority over social media activity that is not a part of the employee’s job. The portentous descriptive – that the college may freely regulate speech “in furtherance of” official duties – is especially ominous for employees (read, faculty) for whom speaking and publishing is an expected credential-builder.
A researcher at Hawaii Pacific University recently created the “Faculty Media Impact Project” (call it “Klout for Kollege”), which attempts to measure individual professors’ influence by online references to their work, including mentions on social media. (Evidencing the blurry line between professors’ online visibility and their institutions, Southern Methodist University recently issued a news release boasting of its #2 national ranking – far outdistancing #17 Harvard – in the inaugural “impact” scores.)
No university employee, particularly not a teaching employee, can be secure of the boundaries where speech “in furtherance of” official duties ends and personal speech begins. That’s a problem.
Restrictions on the content of speech must be so clear and so specific that a speaker can be certain he is protected. Otherwise, speakers will censor themselves for fear of crossing indistinct boundaries.
The second and more legally intriguing concern is whether Garcetti can legitimately be applied to teaching faculty without running afoul of academic freedom.
Two of the 12 federal geographic circuits have recently said no. In September, the Ninth Circuit U.S. Court of Appeals ruled in Demers v. Austin, involving disciplinary action against a Washington State University professor, that “Garcetti does not — indeed, consistent with the First Amendment, cannot — apply to teaching and academic writing.” The ruling echoes a decision by the U.S. Court of Appeals for the Fourth Circuit, Adams v. Trustees of the University of North Carolina at Wilmington.
Decisions from three other federal circuits – the Third, Sixth and Seventh – suggest to the contrary that professors receive no special forgiveness from Garcetti.
By embracing without qualification the Garcetti level of authority over all employee speech, the Kansas Board of Regents inevitably has teed up a future case in its own Tenth Circuit, which has yet to speak to the issue.
Dissenting in the Garcetti case, Justice David Souter prophetically warned that employers would simply broaden employees’ job descriptions so that virtually any speech about the agency came within their official duties. This is no idle fear in the university setting.
To give one concrete example, it is the responsibility of nearly every university employee with a supervisory position – a dean, a coach, a club sponsor – to notify campus authorities upon learning that a student was sexually assaulted. Arguably, complaining in a blog that the college fails to diligently pursue and punish rapists might be speech pursuant to official duties, and consequently, grounds for termination at a supervisor’s complete discretion.
The context in which the Board of Regents enacted this hurry-up policy cannot be overlooked. It came in response to the suspension of David W. Guth, a University of Kansas journalism professor, for an angry outburst on a personal Twitter account blaming the National Rifle Association for the fatal shooting of 12 employees at the Washington Navy Yard on Sept. 16.
Though harsh and tasteless, the posting addressed a disputed political issue – the type of speech to which courts have always afforded special First Amendment dignity, even outside the academic world – and no reasonable reader would have confused the post with an official statement of KU policy.
That the Board of Regents enacted a regulation unmistakably intended to ratify disciplinary action for speech like Guth’s is worrisome. It conveys the message that the proper official response to provocative speech that offends sensitive listeners is to punish the speaker – even on a college campus, where the Supreme Court has always said that extreme views must be given their chance to find an audience (or, as in Guth’s case, to be discredited).
At its heart, the Kansas policy exemplifies a larger problem afflicting all of government – the hair-trigger use of punitive authority whenever the agency’s public image is imperiled. At many, if not most, government agencies today, it is easier to get fired for making the agency look bad than for actually doing your job badly.
The media is filled with stories of police officers, firefighters and teachers who have lost their jobs for entirely legal activity on social media that their supervisors consider “unprofessional.”
The public would justifiably rebel against a “24/7 optimal conduct code” that made it a punishable offense for a teacher to wear a sexy Halloween costume to the shopping mall or enjoy a cocktail in a local restaurant. But let the teacher share a photo of that moment on Facebook, and the same harmless behavior that was publicly viewable to the community in the real world is pronounced to be “bad judgment” and grounds for termination.
Frank D. LoMonte is executive director of the Student Press Law Center, an advocate for the First Amendment rights of students and educators.
A little over 12 months ago, The New York Times famously dubbed 2012 “The Year of the MOOC.” What a difference 365 little days can make. Here at the back end of another calendar year, we wonder if 2013 might come to be thought of as “The Year of the Backlash” within the online higher education community.
Even Udacity's founder, Sebastian Thrun, one of the entrepreneurs whose businesses kicked off MOOC mania, seems to be getting into the backlash game.
According to Fast Company magazine, Thrun recently made the following observation regarding the evanescent hype surrounding MOOCs and his own company: "We were on the front pages of newspapers and magazines, and at the same time, I was realizing, we don't educate people as others wished, or as I wished. We have a lousy product."
Of course, the hype around this category hasn’t wholly abated. Coursera has just announced another $20 million infusion of venture capital. And MIT has just released a report embracing the disaggregation of the higher education value chain fomented by platforms such as edX.
But maybe Thrun is right. Maybe MOOCs are a lousy product – at least as initially conceived. And even if MOOCs are meaningfully reimagined, the mark they have made on the public consciousness to date could have lasting repercussions for the broader field of online learning.
It seems like only last year (in fact it was) that some were crediting elite institutions with “legitimizing” online learning through their experimentation with MOOCs. But what if instead of legitimizing online learning, MOOCs actually delegitimized it?
Perhaps this is why, currently, 56 percent of employers say they prefer an applicant with a traditional degree from an average college to one with an online degree from a top institution, according to a Public Agenda survey undertaken earlier this year.
We’ve been following online learning for a long time, and collectively share experiences in teaching online, earning credentials online, writing about online learning, analyzing the online learning market, and serving as administrators inside a research university with a significant stake in online and hybrid delivery models.
While some MOOC enthusiasts might like you to believe that online learning appeared out of nowhere, sui generis, in 2012, the reality is that we’ve been bringing courses and degree programs online for more than 20 years. Hardly born yesterday, online learning has evolved slowly and steadily, taking these two decades to reach the approximately one-third of all higher education students who have taken at least one online course, and serving as the preferred medium of delivery for roughly one-sixth of all students. The pace of adoption of online learning – among institutions, students, faculty, and employers – has been remarkably steady.
The advent of this so-called “lousy product” – the MOOC – may be triggering a change, however. Indeed, recent survey evidence suggests that the acceptance of online learning among certain constituencies may be plateauing. Is it possible that a backlash against MOOCs could even precipitate a decline in the broader acceptance of online learning?
The long-running Babson Survey Research Group/Sloan-C surveys show relatively little change in faculty acceptance of online instruction between 2002, when they first measured it, and the most recent survey data available, from 2011. The percentage of chief academic officers that indicated they agreed with the statement “faculty at my school accept the value and legitimacy of online education” only grew from 28 percent in 2002, to 31 percent in 2009, and 32 percent in 2011. According to a more recent Inside Higher Ed/Gallup survey, “only one in five [faculty agree] that online courses can achieve learning outcomes equivalent to those of in-person courses.”
We have to be careful making comparisons across surveys, audiences and time spans, of course. But there is a palpable sense here that something may have shifted for online learning in the last year or so, and that as a result of that shift, online learning may be in danger -- for the first time in some 20 years -- of losing momentum.
In recent months, we’ve witnessed faculty rebelling against online learning initiatives at institutions as diverse as Harvard, Duke, Rutgers, and San Jose State, to name a few. In the latter case, faculty rallied to resist the use of Udacity courses on campus, but other instances of resistance did not even pertain to MOOCs – such as Duke’s decision to withdraw from the 2U-sponsored Semester Online consortium, or the vote from Rutgers’ Graduate School faculty to block the university’s planned rollout of online degree programs through its partnership with Pearson.
Our hypothesis is that MOOCs are playing a role here – chiefly by confusing higher education stakeholders about what online learning really is. By and large, of course, online learning isn’t massive and it isn’t open. And by and large, it does actually involve real courses, genuine coursework and assessment, meaningful faculty interaction, and the awarding of credentials – namely, degrees.
In numerous focus groups and surveys we have conducted over the course of 2013, both prospective students and employers have raised concerns about online learning that we had not been hearing in years past – concerns that have been chiefly related to the level of faculty interaction with students, the relationship between quality and price, and the utility of courses that don’t lead to recognized credentials.
The net contribution of the MOOC phenomenon, for the moment at least, may be a backsliding in the general acceptance of online learning – not least among faculty, who may fear they have the most to lose from MOOC mania, especially in the wake of controversial legislative proposals in a variety of states mandating that MOOCs be deemed creditworthy, thereby threatening further public divestment in higher education.
For those of us that have nurtured the growth and strengthening of online learning over many years, this would be an unfortunate outcome of the MOOC moment.
If there is a backlash under way, and if that backlash is contributing to an erosion in the confidence in the quality of online learning generally, that is something that won’t be overcome in a single hype cycle – it will take time, just as the establishment of degree-bearing online learning programs took time to develop and bolster. Possibly even more than one year.
Peter Stokes is vice president of global strategy and business development at Northeastern University, and author of the Peripheral Vision column. Sean Gallagher is chief strategy officer at Northeastern University.
Signals has had a rough few months. Blog posts, articles, and pointed posts on social media have recently taken both the creators and promoters of the tool to task for inflated retention claims and for falling into common statistical traps in making those claims. Some of the wounds are self-inflicted — not responding is rarely received well. Others, however, are misunderstandings of the founding goals and the exciting next phases of related work.
Signals is a technology application originally created by a team at Purdue University that uses a basic rules-based set of predictions and triggers — based on years of educational research and insight from the university's faculty — and combines them with real-time activity of students in a given course. It then uses a “traffic light” interface with students that sends them a familiar kind of message:
Green light: you’re doing well and on the right track.
Yellow light: you’re a little off-track, you might want to think about x,y, or z or talk with someone.
Red light: you’re in trouble. You probably need to reach out to someone for help.
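The rules-based logic described above can be sketched in a few lines of code. This is only an illustration of the general design — the thresholds, field names, and scoring are hypothetical, not Purdue's actual rules:

```python
# Illustrative sketch of a rules-based "traffic light" trigger, in the spirit
# of the Signals design described above. Thresholds and field names are
# hypothetical, not Purdue's actual implementation.

def signal_for(student):
    """Map simple course-activity measures to a green/yellow/red signal."""
    score = 0
    if student["grade_pct"] < 70:          # struggling on graded work
        score += 2
    if student["logins_per_week"] < 2:     # low engagement with the course site
        score += 1
    if student["assignments_missed"] > 1:  # falling behind on deadlines
        score += 1

    if score == 0:
        return "green"    # on track
    elif score <= 2:
        return "yellow"   # slightly off track; nudge toward help
    else:
        return "red"      # in trouble; prompt outreach

# Example: a student with solid grades but few logins gets a yellow nudge.
print(signal_for({"grade_pct": 85, "logins_per_week": 1,
                  "assignments_missed": 0}))  # → yellow
```

The point of the sketch is the design choice it embodies: the triggers are fixed, one-size-fits-all rules, which is exactly the limitation the "next generation" discussion below takes up.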
These same data are also shared with faculty and administrators so they can reach out to those who need specific support in overcoming an academic challenge or just extra encouragement. Signals is now part of the services offered by Ellucian, but just about all the major players in education technology offer some version of "early warning" applications. In our insight and analytics work, a number of colleges and universities are piloting related apps; however, the promise and problems of Signals are an important predicate as we move forward with that work.
The Signals app project began with a clear and compelling goal: to allow students, faculty, and advisers access to data that might help them navigate the learning journey. For too long, the key data work in education has been focused on reporting, accreditation, or research that leads to long reports that few people see and are all too often used to make excuses, brag, blame, or shame. More problematic, most of these uses happen after courses are over or worse, after students have already dropped out.
The Signals team was trying to turn that on its head by gathering some useful feedback data we know from research may help students navigate a given course, and to give more information to faculty and administrators dedicated to helping them in the process. The course-level outcomes were strong. More students earned As, fewer got Fs, and the qualitative comments made it clear that many students appreciated the feedback. A welcome “wake-up call,” many called it.
John Campbell, then the associate vice president of information technology at Purdue, was committed to this vision. In numerous presentations he argued that “Signals was an attempt to take what decades of educational research was saying was important — tighter feedback loops — and create a clean, simple way for students, faculty, and advisers to get feedback that would be useful.”
Signals was a vital, high-profile first step in the process of turning the power of educational data work toward getting clean, clear, and useable information to the front lines. It was a pioneer in this work and should be recognized as such. The trouble is the conflation of this work with large-scale retention and student success efforts. Claiming a 21 percent long-term retention lift, as some at Purdue have, is a significant stretch at best. However, Signals has shown itself to be a useful tool to help students navigate specific courses, and for faculty and staff striving to support them. And while that will likely be useful in long-term retention, there is still much work to be done to both bring Signals to the next level of utility in courses and to test its impact on larger student success initiatives.
First, as Campbell, now CIO at West Virginia University, notes, Signals has to truly leverage analytics. In our recent conversation he posited, “The only way to bring apps like Signals to their full potential, to bring them to scale, to make them sustainable is through analytics.” Front-line tools like Signals have to be powered by analyses that bring better and more personalized insight into individual students based on large-scale, consistently updated, student-level predictive models of pathways through a given institution. Put simply, basing the triggers and tools of these front-line systems on blunt, best-practice rules is not personalized, but generalized. It’s probably useful, but not optimal for that individual student. There needs to be a “next generation” of Signals, as Campbell notes, one that is more sophisticated and personalized.
For example, with a better understanding of the entire student pathway derived from analytics anchored on individual-level course completion, retention, and graduation predictions, a student who was highly likely to struggle in a given course from day one — e.g., a student having consistent difficulty with writing-intensive courses who is trying to take three simultaneously — might be advised away from a combination of courses that could be toxic for him or her. By better balancing the course selection, the problem — which would not necessarily be the challenge of a given course — could be solved before it begins. In addition, an institution may find that for a cluster of students standard “triggers” for intervention are meaningless. We’ve seen institutions that are serving military officers who have stellar completion and grade patterns over multiple semesters; however, because of the challenges of their day jobs, regular attendance patterns are not the norm. A generalized predictive model that pings instructors, advisers, or automated systems to intervene with these students may be simply annoying a highly capable student and/or wasting the time of faculty and advisers who are pushed to intervene.
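The military-officer case above can be made concrete with a small sketch contrasting a blunt, generalized trigger with one that weighs the same signal against the student's own track record. All field names and thresholds here are invented for illustration:

```python
# Hypothetical contrast between a generalized trigger and a personalized one.
# In the military-officer case described above, irregular attendance alone
# should not ping an adviser when the student's completion history is strong.

def generalized_trigger(student):
    """Blunt best-practice rule: flag anyone with low attendance."""
    return student["attendance_rate"] < 0.6

def personalized_trigger(student):
    """Weigh attendance against the student's own track record."""
    if student["past_completion_rate"] >= 0.9 and student["gpa"] >= 3.0:
        # Strong multi-semester history: irregular attendance is not,
        # for this student, a meaningful risk signal.
        return False
    return student["attendance_rate"] < 0.6

officer = {"attendance_rate": 0.4, "past_completion_rate": 0.95, "gpa": 3.6}
print(generalized_trigger(officer), personalized_trigger(officer))  # → True False
```

A generalized model would keep flagging this capable student semester after semester; a model anchored on individual-level history suppresses the false alarm and saves adviser time for students who actually need it.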
Second, these tools have to be studied and tuned to better understand and maximize their positive impact on diverse student populations. With large-scale predictive flow models of student progression and propensity-score matching, for example, we can better understand how these tools contribute to long-term student success. Moreover, we can do tighter testing on the impact of user-interface design.
Indeed, we have a lot to learn about how we bring the right data to the right people – students, faculty, and advisers — in the right way. A red traffic light flashing in the face of a first-generation student that says, “You are likely to fail” might be a disaster. It might just reaffirm what he or she feared all along (e.g., “I don’t belong here”) and lead to dropping out. Is there a better way to display the data that would be motivating to that student?
The chief data scientist at our company, David Kil, comes from the world of health care, where they have learned the lessons of the impact of lifespan analysis and rapidly testing interventions. He points out the importance of knowing both when to intervene and how to intervene. Moreover, they learned that sometimes data is best brought right to the patient in an app or even an SMS message, other times the message is better sent through nurses or peer coaches, other times a conversation with a physician is the game changer. Regardless, testing the intervention for interface and impact on unique patient types, and its impact on long-term health, is a must.
The parallel in education is clear: Signals was an important first step to break the data wall and bring more focus to the front lines. However, as Campbell notes, if we want these kinds of tools to become more useful, we need to design them with triggers and tools grounded in truly predictive models and create a large-scale community of practice to test their impact and utility with students, faculty, and advisers – and their long-term contribution to retention and graduation. Moreover, as Mike Caulfield notes, technology-assisted interventions need to be put in the larger context of other interventions and strategies, many of which are deeply personal and/or driven by face-to-face work in instruction, advising, and coaching. Indeed, front-line apps at their best might make the human moments in education more frequent, informed, and meaningful. Because, let’s be clear about it, students don’t get choked up about apps that changed their lives at graduation.
Mark Milliron is Civitas Learning's chief learning officer and co-founder.