The release this week of a Stanford University economist's study of the return on investment for students in online programs elicited an initial flurry of critiques from prominent analysts like e-Literate's Phil Hill and WCET's Russell Poulin. Much of the commentary focused on perceived major flaws in the study's data and design.
But given the prominence of the study's author, Caroline Hoxby, and her (accurate) assertion that the growing prominence of online education demands that it be subjected to serious scholarly examination, it is probably unwise to dismiss the paper outright.
Inside Digital Learning asked a diverse group of prominent academics, analysts, corporate leaders and other experts on online learning to offer their thoughts on the Hoxby paper, its relevance to public policy and campus discussions about digitally delivered education, and its impact on public perceptions. Their responses, sometimes lightly edited for length, follow.
***
Deb Adair, executive director, Quality Matters
There appear to be data issues here, and I would challenge the focus on students at primarily for-profit institutions as representative of all online education, as well as the claims here about the rarity of online education and its purposes. The discussion of cost and price seems particularly pejorative, with the stated assumption that expenses for online education are far lower, so price points should be lower too. The author is clearly making the point that online institutions are short-changing students by spending on things other than instruction. But the things required to support robust online instruction are simply very different from those for F2F instruction and might not, in fact, be accounted for in the “instruction” line item. While faculty salaries and benefits make up the bulk of the instruction expense in F2F programs, online instruction also requires a well-developed and maintained technology infrastructure, significant investments in course design and development, etc. Policy and pedagogy, which at many institutions may limit the size of online classes in ways that approximate the F2F class, also undermine the claim that online learning should cost less than F2F. I’ll also say that I think it’s a very good thing for student learning that online institutions invest in academic and student support rather than some of the amenities of value primarily to the on-campus student population.
Economic returns to education are certainly something with which society should be concerned. The question is how we are measuring those returns, compared with what, and whether there are values that can’t be measured. This study doesn’t seem to be looking broadly or comprehensively at economic returns -- it’s simply looking at the impact on forecasted wage trajectories for a subset of online learners at a subset of institutions that provide online education. In this case, I don’t believe it supports the conclusions about the value of online education to taxpayers. What about those learners who are seeking education or degrees simply to maintain their jobs as requirements and job pressures change over time? What about the social good of allowing people to maintain employment while pursuing education, as opposed to stopping out? How do these wage trajectories compare with those of similar learners? What do the wage trajectories look like for similar students at different types of institutions?
In short, the study has little relevance for understanding the current and future economic value of online education, as online learning has rapidly evolved from something provided primarily by for-profit institutions for primarily online students (who apparently, as implied in the article, need to matriculate at the least-rigorous institutions, perhaps for nefarious reasons) to much more widespread adoption and deployment. About a quarter of all postsecondary institutions participate with Quality Matters (including all types, levels and Carnegie classifications, with only a small fraction representing for-profit institutions), and the diversity in online programs, delivery and students is tremendous. Online learning is not something being perpetuated in some dark little corner of the academic world, with scheming profit-seeking institutions short-changing academic rigor to line their pockets in collusion with students of dubious integrity in ways that defraud the taxpayer. That’s not even an “alternative fact.” The truth is that the economic value of education is difficult to measure regardless of delivery modality, and the different approaches to teaching with technology today are fast making the mode of delivery irrelevant, if not impossible to differentiate.
***
Jill Buban, senior director, research and innovation, Online Learning Consortium
Beyond the flawed methodology that my colleagues Russ Poulin (WCET) and Jeff Seaman (Babson Survey Research Group) pointed to on the day of the study’s release, there are Hoxby’s generalizations about online learning -- such as the types of education online learning lends itself to and the modality’s formats -- as well as a lack of perspective on the characteristics of students enrolled in online learning programs, including student age and educational goals.
From the beginning of the report, Hoxby states that “online platforms lend themselves to certain types of education, such as programming and technical design, where interacting with a computer is naturally an important part of the learning process.” In fact, no matter the institution’s classification, online learning programs are offered across all disciplines and degree levels. Hoxby goes on to attempt to connect online learning’s promise to provide employers with workers who have the skills they require -- and its perhaps unfulfilled delivery on that promise. This is a challenge that all of higher education is trying to grasp and address.
In recent years, we’ve seen more opportunities for online education to attempt to address workforce skills, such as TAACCCT grant funding for community colleges and alternative credentials such as MOOCs, badges and boot camps. However, perhaps more importantly, 21st-century skills like working in teams, public speaking and critical thinking are also being thoughtfully integrated into programs. In a recent discussion with Marc Singer (Thomas Edison State University) for a report on alternative credentials that OLC was commissioned to conduct, he mentioned the desire of companies to work with the university to build programs that infuse such outcomes.
Throughout the report, there is no discussion of blended learning, or of programs that take advantage of both online and face-to-face modalities through traditional combinations of the two or through residency programs. Hoxby touches upon such programs when she misidentifies Walden University as an exclusively online institution when, in fact, Walden requires academic residencies for many of its doctoral and master’s-level programs.
We know that since the inception of distance learning -- beginning with correspondence courses and then transitioning to online and blended formats -- this modality has provided millions of learners, the majority classified as adult learners, with access to postsecondary education. Hoxby states that the average age of online students is “shockingly high,” yet we know that this modality typically serves adult learners who, according to the NCES (2015), typically have delayed enrollment in postsecondary education, among other characteristics, including attending part-time, being financially independent of parents, working full-time while enrolled, having dependents other than a spouse, being a single parent and/or lacking a standard diploma.
While the focus of the report is on the economic returns of postsecondary education, there is no mention of online students’ motivations for their education. In other words, the lens is narrowly focused on students’ possible monetary gains from seeking a degree. Lacking are other motivating factors, such as advancing their education, setting an example for their children or, perhaps, as a first-generation student, being the first in their family to complete a degree. Additional research on the history of online learning, the characteristics of online adult learners and student motivations would have provided Hoxby with a more thorough understanding of online learning.
The report emphasizes student composition by institution type, focusing heavily on for-profit institutions. As a field, I think we’re moving beyond these classifications and looking at what matters no matter the type of institution: quality, as evidenced by the OLC suite of quality scorecards.
Last week’s report published by WCET and Hoxby’s report are an important reminder that we should examine education -- not only online education -- through many lenses, and that there is a continuing need to provide resources and research to advance the field of online teaching and learning.
***
Anthony P. Carnevale, research professor and director, Georgetown University Center on Education and the Workforce
Earnings are one of the legitimate standards for education outcomes, especially at a time when education and earnings have become so closely aligned. In a democratic society, the intrinsic value of learning and human flourishing is one standard for education. In a capitalist economy, earnings returns are another appropriate measure of effectiveness.
It’s not surprising that online learning has weak effects on completion or other outcomes, but that is not necessarily a bad thing. Online learning occurs with very little peer-to-peer interaction. We’re all social animals, and in the end we work best when we’re engaged with others and make commitments. The level of engagement online is low. It is (too) easy to go online and (too) easy to get off. Its value to society is its low-cost option value, especially if it is subsidized by third-party payers like government. It makes learning accessible to everybody but most successful for well-prepared students with time and high social capital.
This does not mean that technology has no future in education. In general, technology has not penetrated education, including higher education, to the extent it has other industries, because the industry is very isolated from external influences. The education industry is monolithic and isolated compared with other industries. For example, the vast majority of the people who deliver education are educators, whereas only 5 percent of the people who deliver food are farmers. As a result, the food industry is much more mature -- made up of diverse industry and occupation networks that encourage multiple points of entry for technological change and other modernizing influences.
***
David Clinefelter, consultant, The Learning House
This is an impressive study. I applaud the attempt to analyze the economic impact of online education on the individual and society.
My first reaction is that it would be very helpful to do the same analysis comparing similar-age students who attend classes on campus, to understand the difference between online and on-ground programs. I’m pretty sure the ROI for on-ground programs and students would be lower because of the higher costs for noninstructional expenses like facilities. It would be even more interesting to do the same analysis for traditional-age college students. When one factors in the lost “opportunity” costs of 4-6 years with little or no income, the ROI would get quite a bit smaller for the individual.
State policy makers who keep pouring money into public university systems may be surprised to learn what the ROI is for these expenditures. I do think ROI is a valid consideration for state and federal policy makers. Taxpayers deserve to know. We just built a new stadium for the Vikings in Minneapolis, and a significant amount of local and state money was invested in it. There were a variety of pro and con arguments and several studies estimating the economic impact and ROI. It would be great to have a similar level of scrutiny of the money invested in higher education. However, there are a variety of factors beyond ROI to consider, such as job growth, business creation and quality of life.
These data are heavily influenced by the for-profit universities and students who attend them. Based on how the sample was drawn, 77 percent attended for-profit institutions, 21 percent attended private nonprofit institutions, and only about 2 percent attended public universities. The latest Eduventures estimates are that the three types of institutions enroll almost equal percentages of students. This over-representation of for-profit institutions skews the data because they have higher tuition and lower retention rates. An important finding from this report is that the longer students are enrolled, the higher the ROI is, especially if the enrollment period goes out four or five years. The public and private nonprofit institutions have higher retention rates and the public institutions have considerably lower tuition so the findings would be quite different if they were represented proportionately in the study.
The author calculates ROI 10 years after enrollment. This doesn’t make sense to me. The ROI will get better the longer out you go. Therefore, I think it would be more accurate to calculate it based on projected lifetime earnings.
Minor point, but I think the author shows bias against online learning. For example, the following quote applies to only a handful of for-profit institutions and not to the vast majority of colleges and universities that provide online programs:
“Moreover, in federal undercover investigations and audits, online postsecondary institutions have been disproportionally found engaging in deceptive marketing, fraud, academic dishonesty, low course grading standards, and other violations of education regulations.”
The following quote also demonstrates ignorance about online programs. Every institution that is heavily or exclusively online provides library services and many provide extracurricular activities like academic clubs.
“[Lower costs] are especially modest when one considers that exclusively online schools do not even attempt to replicate many dimensions of the in-person experience: libraries, laboratories, academic clubs, student music and drama, and so on.”
***
Ryan Craig, managing director, University Ventures
While I disagree with Hoxby’s methodology for establishing a pool of “100 percent online” students (424,000 vs. the generally accepted 3 million number), there are two reasons I’m not surprised she finds low return on investment from online education. Unfortunately for her argument, neither has much to do with online education.
First, Hoxby is measuring returns from enrollment in online programs, not from completion of degrees or other credentials of recognized value in the labor market. As completion rates from enrollments in bachelor’s degree programs are under 60 percent, and as completion rates in online programs are lower still, I would expect to see low ROI from enrollment as opposed to completion.
Second, despite the constant drip of reports on the “college premium,” these studies measure the income of graduates who earned their credentials at least a decade ago, i.e., prior to the Great Recession and an era of 50 percent-plus unemployment and underemployment for college grads. So Hoxby’s findings are consistent with recent findings like the Gallup-Purdue survey, which reported that only a minority (38 percent) of young alumni strongly agree that “college was worth the price.” Neither factor is specific to online education; both reflect higher education’s continuing crises of completion and employability.
In addition, in calculating return on investment, the cost Hoxby measures is the tuition charged by colleges and universities, not the actual cost of delivering online education. The high tuition charged for online programs is a product of higher education’s superstition that price is a signal of quality or legitimacy, and few colleges and universities are interested in sending any pricing signal that could be interpreted as lower quality or less legitimate than traditional on-ground programs. It’s also a result of the fact that online programs market almost entirely digitally, and most spend many thousands of dollars on keywords and leads, as well as on vast enrollment systems and teams, in order to acquire each student.
Multiply this by high attrition -- particularly in the first 90 days -- and many programs spend $10,000 or more in marketing and enrollment to acquire each student who produces a material level of revenue. Delivering online education is actually highly economical, particularly at scale. What’s broken on the cost side is student acquisition for online programs, as well as a failure of courage and imagination on pricing.
***
Richard Garrett, chief research officer, Eduventures
As has been widely noted, the study’s methodology is flawed in important respects. The author carelessly conflated “online students” with students at institutions that are wholly or majority online, when in fact the bulk of students taking all or a majority of their studies online are enrolled at conventional colleges and universities that also offer online programs. This means that the study does not permit any conclusions about the ROI of online learning as a whole, for students, taxpayers or institutions.
However, the study does -- assuming no other major methodological flaws -- highlight poor financial returns for many students enrolled in online programs offered by wholly or majority online institutions. I suspect this says more about nontraditional students and institutions, and a challenging economy in the second half of the study period, than about online education as such. National Student Clearinghouse data on completion rates for part-time, adult and other nontraditional students, and on completion rates at for-profit schools, show, on average, pretty weak results. We know from census data that the wage premiums for all levels of postsecondary education have stagnated or even fallen in recent years, even as the wage gap between postsecondary and high school graduates has widened.
Insofar as online learning typically targets, at the program level at least, nontraditional students with often little prior higher education experience and few preparatory advantages, it is not surprising that the delivery mode alone fails, at the national level, to significantly overcome such input challenges. If the study looked at online graduate programs, including those offered by mainstream universities, I suspect the returns would look a lot better. But, again, that would say more about student and institutional inputs, and less about online learning in isolation. It is far-fetched to imagine that one variable, such as delivery mode, across multiple institutions, would produce an impact strong enough to counter the plethora of institution- and program-level variables. “Online learning” is not a given but rather a methodological toolkit that can be used in numerous ways, for different purposes and with a variety of inputs.
The reality is that online learning is a net positive in higher education today, in terms of convenience, flexibility and access. It permits indirect cost savings for students -- who are better able to combine work and study -- even if most institutions lack the incentive to use online delivery to rationalize costs and price. Online learning encourages faculty and institutions to re-think standard pedagogical assumptions. No question that much remains to be done to improve the typical online student experience, but that can be said of higher education in general. Nonetheless, I believe that if online learning is to continue its expansion significantly above present levels, practitioners must more clearly add value beyond convenience of access.
***
Daniel Greenstein, director of postsecondary success, Bill & Melinda Gates Foundation
The methodology has serious flaws that have been described elsewhere so I won’t repeat here what you already know.
A number of other observations…
Institution matters with respect to student outcomes -- see the college report cards recently published by Chetty et al. at equality-of-opportunity.org. Hoxby’s sample is dominated by students from for-profit institutions (especially given the time span it covers), which are known via other means to have less impressive outcomes than other institutions. I’d love to see the data pulled out for students from the University of Central Florida or even Western Governors University.
Not all online education is the same. In fact, work we are supporting at Arizona State University (EdPlus) is beginning to unearth a number of use cases, which I’d describe as (a) full degree programs, (b) online courses offered to enrolled students to improve student outcomes in high-enrollment, high-drop and mixed-ability developmental and general education courses, and (c) online courses offered to enrolled students to improve access to courses that students need to graduate. We see significant gains in student outcomes (including, and in many cases especially, for low-income students and students of color) in use cases (b) and (c) -- especially where the online learning is offered as part of an enterprise-wide strategy with appropriate and usually centralized or coordinated supports for everything from infrastructure to -- crucially -- instructor training.
We are seeing poorer outcomes in use case (a), particularly for low-income students and students of color. Note that the work described here is looking at top-of-the-line, enterprise-scaled approaches -- many of them benefiting from next-gen or near-next-gen ed tech that is only becoming available now (in the last couple of years) and won’t show up much, if at all, in Hoxby’s cohort. Note, too, that this work does not take account of student outcomes after college; it is based on students enrolled in 2016-17, who haven’t graduated yet.
The cost data I have seen from the above study show considerably lower costs for online ed at each of the institutions where the study has been conducted. The study doesn’t trace cost impacts through to tuition price. Additionally, an analysis we made of IPEDS data (the same data Hoxby uses) shows an inverse association (steering shy, intentionally, of correlation) between education and research expenditure per completer on the one hand and the proportion of student credit hours delivered online on the other. The same data show a positive association between the proportion of student credit hours offered online and completions per 100 full-time-equivalent students. The trend shows up in both the two-year and the four-year publics.
I actually think the study, with all its many faults, could be quite useful in focusing more rigorous attention on return on investment, which I think is an incredibly promising line of work for higher ed generally -- whether focusing on online learning or other aspects. Those ROI measures should focus on students and institutions (where costs and benefits will be measured differently).
I think it will be valuable if it provokes much-needed discussion of, and ultimately coalescence around, some kind of taxonomy that enables us to understand online ed in all its many flavors rather than lumping them all together. For my money, if the taxonomy could be sensitive to things like use cases, technology approaches and pedagogical approaches, that’d be cool.
I think it is vital that we evaluate education innovation through a variety of lenses, including economic return to the student AND to the institution.
As to perceptions: I think you can say with confidence that online ed is here to stay and will maintain, and quite probably increase, its penetration, especially in larger institutions (trend lines suggest that, with some exceptions -- Simmons being a notable one, apparently -- the smaller liberal arts schools, of which there are many, are backing away, for obvious reasons).
The economics of higher ed and the need so many (but not all) students have for more flexible access more or less guarantee it (not to mention the fact that it seems quite preposterous to be watching the introduction of the driverless car and yet still questioning the role tech plays in ed). I think you can also safely assume that the technology will only get better (I’m never a big one for betting against it in this country), as will the understanding of how to use it effectively, at scale and with impact.
That doesn’t get at your point about “perception.” Here, I think that what you see in online depends on where you sit. Again referring to data from the last annual survey or so, we see that faculty who have actually taught online have a much higher opinion of it than faculty who have not (that is, a higher opinion of how it compares to on-ground ed). Administrators, too, continue to see online as critical to their institutions’ strategic plans (the figure is, I believe, holding steady at around two-thirds). So, where discussion of online ed continues to be inflamed, as if part of some sectarian controversy (remember Mac vs. PC from some years ago?), the study will be welcomed by those already prone to deep skepticism -- much as the meta-study published some years ago, showing online to be as good as or better than on-ground, was waved around furiously in those same places by the advocates of online.
***
Todd Hitchcock, senior vice president of learning services, Pearson
As the global economy and the student population change, it’s clear we need new ways for people to access higher education. Online programs bring that much needed access. The study shows that online students are older and likely juggling other responsibilities like work and families. In-person schooling simply isn’t practical for everyone. We know students struggle with the cost of an education and the education community needs to continue addressing that. While this study looks at the economics of an online degree, it doesn’t address other factors driving today’s student to pursue a more flexible and convenient education. The benefit of an online education is that we can reach people where they are and in a way that suits their lifestyle. Simply put, there is more to consider than cost.
***
Martin Kurzweil, director, educational transformation, Ithaka S+R
Caroline Hoxby is right that more high-quality evidence is needed to inform students’, providers’ and policymakers’ decisions regarding online learning. She has also rightly identified economic return on investment as a valuable analytical tool for assessing online learning. However, her new working paper, “The Returns to Online Postsecondary Education,” does not apply that tool in the way most likely to advance understanding of the benefits and drawbacks of online learning. Hoxby derives her insights by aggregating -- considering many highly dissimilar programs and students together and focusing on average results -- when what is most needed right now in online learning research is contextualized splitting. In short, the aggregate analysis tells us little about whether online learning can provide better or comparable outcomes at lower costs, because the answer to that question varies by context.
As my colleagues at Ithaka S+R noted in two recent literature reviews, there is a paucity of quality studies on the student outcomes associated with online and hybrid learning. The more recent report identified just twelve studies during 2013 and 2014, only three of which employed an experimental or quasi-experimental design allowing an inference of causation. (There have been several important contributions since the last review in 2015, but the main conclusion still stands.) For the millions of students each year who take courses online at Title IV institutions -- not to mention the tens of millions who take courses online without an institutional affiliation -- this is a rather thin evidence base.
One of the important gaps in the literature is rigorous evidence on the costs of online learning, and the related question of both social and private return on investment. There are logistical reasons for this: it is challenging to allocate provider costs at the course and sometimes the program level, comparison experiences are imprecise, and technology-enhanced courses and programs tend to have high upfront costs that amortize over time. With access to data from the Treasury Department, Hoxby has plugged one of the big holes limiting return-on-investment research: the lack of data on post-program earnings. While earnings are not the only educational “return” of interest, they are certainly important -- especially for older, working students who may particularly value the flexibility of the fully online programs that are the focus of Hoxby’s working paper.
At least in this paper, however, Hoxby misses an opportunity to use the earnings data and return-on-investment methodology to address an even more important gap in the literature: how do different students, in different contexts, experience different kinds of online learning? Others, who know the federal datasets better than I do, have critiqued the way Hoxby filtered and aggregated IPEDS data to conduct her analysis. My point is more basic: it would be far more interesting, and valuable, to disaggregate.
The average return-on-investment results do not look pretty. But the average return on investment is the answer to the wrong question. The contexts in which online learning is offered are highly varied. What really matters to policymakers, institutional decision makers and students is the return on investment of a particular type of program, in a particular context, for a particular type of student.
For example, Hoxby’s two focal groups -- students who enrolled at institutions that exclusively offer online programs and those enrolled at institutions whose programs are substantially online -- are dominated by older students at for-profit institutions. Hoxby could profitably extend the analysis and report any heterogeneity by category underlying that average result. Does the subset of public institutions that exclusively offer online programs (according to Hoxby’s definition) enroll substantially different types of students than the for-profits do? Is the return on investment different? It would be even more helpful to report the return on investment by institution, as Raj Chetty and colleagues have done with college-by-college social mobility statistics derived from the same IRS data Hoxby uses.
Why is this important? For one thing, there is a risk that this kind of aggregate negative finding leads policymakers or institutional decision-makers to throw the baby out with the bath water. More important, the educators who design, manage, and teach in online and blended programs, and the students who are choosing among postsecondary options, need better information about what features work well for particular types of students in particular contexts.
An important caveat in all of this: the findings Hoxby presents are, mostly, descriptive and not causal. Disaggregated results similarly would not reveal whether a particular program’s return on investment was in fact attributable to the institution or program itself and not simply the students who chose to enroll in it. But variation in the descriptive results between otherwise similar institutions would be a solid entry point for further investigation.
***
Javier Miyares, president, University of Maryland University College
Essentially, Hoxby took an economist’s perspective and used extant data sets that lumped together all types of institutions (e.g., for-profit, public, private) and online students (e.g., working adults, military students), which would not show any differences among the sectors or types of students. That said, the study does seem to focus on the for-profit sector, which would tend to distort the findings. The only not-for-profit Hoxby lists by name is Liberty University, which is hardly the typical online institution.
Hoxby’s data on the number of online students are not in agreement with most reliable sources of data, either. Organizations such as WCET have already taken the author to task on this. The methodology is quite flawed.
It’s true that the students who tend to take online classes are adults -- older, with busier lives and more episodic registrations -- but she claims there is no control group of adult students to compare with this group, which is absolutely not true.
Online programs attract more adult students than traditional-age students, so it is not surprising that these institutions are less selective, since many have a mission to help adults attain a degree.
Further, there is no comparison with the many colleges that are nonselective and focus on face-to-face instruction -- that category includes far more than just online institutions. There are plenty of face-to-face evening and Saturday programs where she could find adults studying. It would be very important to compare this population of online students with a similar population of students in face-to-face courses.
The study also makes unsupported assertions, including that institutions provide instruction inexpensively but spend disproportionately on curriculum development, administrative services and legal and fiscal operations; that online education may be a draw to students because of lax academic standards and greater opportunity to cheat; and that it costs less to produce an online course than a face-to-face course. It's also a big leap to think that online schools do not have libraries or advising. They do!
Finally, I believe what the entire section on the ROI of online education is trying to say is that online students tend not to finish degrees -- they enroll for short stints -- which could explain the low ROI. But that's not about online education; it's about a student population that is very busy and finds it hard to stay in school (face-to-face or online).
Hoxby seems to be biased by her opinion of what online education is, not what it factually is.
***
Peter Smith, Orkand Chair, Professor of Innovative Practices in Higher Education, University of Maryland University College
I am not an economist, but I have two concerns about the way this study was done, as I understand it. First, IPEDS data include only first-time, full-time students, who represent a very small percentage of all online students. When mixed with the other data sources, which have differing population sets, I am concerned that the findings are simply unreliable.
Second, none of the data is sorted by the life situations of the learners and the consequential barriers to success that those situations represent. And the author is apparently comfortable throwing all learners into the same pot -- single mothers with Ivy Leaguers. Typically, someone taking an online course at MIT and someone doing so at a community college would have different outcomes when researched at scale.
The problem with DoE data, once again, is that they do not identify students by the number of risk factors in their lives. The DoE does have a list of six risk factors -- such as being a single parent, working part-time or being unemployed -- which, singly and collectively, contribute to a lack of persistence relative to students who have fewer or none. If we compared performance by the average number of risk factors of learners admitted, by sector or at the institutional level, that would give a far more accurate picture of value and effectiveness. Again, this gives me concern about the overall reliability of the study.
***
Robert N. Ubell, vice dean emeritus for online learning, New York University's Tandon School of Engineering
Caroline Hoxby’s National Bureau of Economic Research study is a hatchet job, a misleading attempt to tar online learning with the corrupt brush of for-profit profiteering. A quick look at merely two deceptive data points sinks her whole story: her report shows that her sample was drawn principally from for-profit universities (76.8 percent), with merely 1.95 percent from public universities, yet, astonishingly, public institutions enroll the largest portion of virtual students in the country -- more than 70 percent of them. Hoxby’s report tells us nothing that we haven’t known for years.
That’s why President Obama’s Secretary of Education Arne Duncan put many for-profits out of business. Hoxby’s report stands as an indictment of for-profit education, confirming actions taken in the previous administration to take away their ability to fleece vulnerable students and U.S. taxpayers. Every university mentioned in her report is either a discredited for-profit or is now bankrupt or sold. Her data have absolutely no current relevance. The report is old and disgraceful news about the for-profit industry, with nothing new or useful to say about online learning. The new administration, unwisely considering reviving for-profits to pillage needy students again, should be ashamed.