Seed of Doubt
Is online education as good as traditional, face-to-face education?
It is a loaded question. Online programs comprise the fastest-growing segment of higher education, with brick-and-mortar colleges — many ailing from budget cuts — seeing online as a way to make money and expand their footprints. Meanwhile, some politicians are eager for public institutions to embrace online education as a way to educate more people at a lower cost.
These movements have much invested in online education being equal or superior to the old-fashioned kind. And since a Department of Education meta-analysis last summer concluded that “on average, students in online learning conditions performed better than those receiving face-to-face instruction,” many advocates now consider the matter closed.
Not so fast, say researchers at the National Bureau of Economic Research.
The Education Department’s study was deeply flawed and its implications have been overblown, say the authors of a working paper released this month by the bureau.
“None of the studies cited in the widely-publicized meta-analysis released by the U.S. Department of Education included randomly-assigned students taking a full-term course, with live versus online delivery mechanisms, in settings that could be directly compared (i.e., similar instructional materials delivered by the same instructor),” they write. “The evidence base on the relative benefits of live versus online education is therefore tenuous at best.”
Mark Rush, an economics professor at the University of Florida and one of the study’s three co-authors, says he thinks the Education Department was under immense pressure to reassure online education’s many stakeholders, particularly cash-strapped state higher-education systems, that online education is just as good as, if not better than, the classroom kind. But the fact that the meta-analysis “did not compare apples to apples” and severely lacked experimental data means that treating it as a conclusive vote of confidence for online education would be scientifically irresponsible. “The conclusion that Internet-based and live classes are comparable might have been reached a little hastily,” Rush says.
Rush and his collaborators — Lu Yin, also of the University of Florida, and Northwestern University’s David N. Figlio, the lead author — sought to contribute to the online-education debate something they say it sorely lacks: reliable data collected via a controlled experiment.
In spring 2007, they randomly assigned 327 volunteers enrolled in an introductory microeconomics course to either attend the class lectures live or watch them online. Both groups would have access to the same ancillary materials, office hours, and graduate assistants; the only difference would be the mode of lecture delivery.
They found no statistically significant differences between the academic performances of the two groups generally. However, they did find that Hispanic students, male students, and low-achieving students in the online group fared significantly worse than their counterparts in the live-attendance group.
These findings do not exactly refute the conclusions of the Education Department’s meta-analysis. Nor is the new study without flaws of its own, which the authors enumerate in detail — though not the most obvious, which is that videotaped lectures are a relatively primitive form of online teaching, and, where they are used, are usually only part of the package.
But Rush says the main takeaway of the bureau’s experiment is not that he and his co-authors are right or that the Education Department’s study was wrong; just that there is much more work — much more precise work — to be done before any firm pronouncements can be made on the merits of online education relative to the face-to-face kind.
An Irrelevant Truth?
Barbara Means, director of the Center for Technology and Learning at SRI International and lead author of the Education Department’s meta-study, says the bureau's paper, in addition to being rife with erroneous claims, draws conclusions that are essentially irrelevant to the debate over online education.
By taking pains to isolate the online-versus-classroom variable while keeping other variables constant, Rush and his collaborators miss a crucial point, Means says: what distinguishes online education from classroom education has little to do with the fact that one comes on a computer screen and the other does not.
That narrow distinction “is something that most people in the field of technology feel is not particularly interesting,” Means says. Why? Because most online courses consist of more than just videotaped lectures. On the contrary, most modern online programs expressly try to present course content in a way that is unique to the online environment. If videotaped lectures are included, they are often a small part of a larger package. “The point of using the online technology,” Means says, “is to do things that you cannot do face-to-face.”
In other words: Assessing all of the points of departure at once in a controlled experiment is an implausible task, and pretending that the online delivery mode is the only point of departure is an irrelevant one.
Accordingly, that was not what the Education Department’s meta-analysis sought to do, Means says; rather, it sought to measure the relative “impact” of online programs, using a less scientific, but perhaps more practical, methodology.
As for the question of politics, Means says she never felt any pressure to affirm the merits of online education, and was indeed “surprised” by the results — which, she noted, were reviewed independently before they were published.
For the latest technology news from Inside Higher Ed, follow Steve Kolowich on Twitter.