
Even before the coronavirus pandemic, a growing share of students were already taking all of their classes online. In fall 2019, nearly 15 percent of undergraduate students and 33 percent of graduate students were exclusively enrolled in online courses. As students and colleges alike have become more familiar with online education, more students are likely to enroll in fully online programs after the pandemic finally ends than when it began.

Online programs provide access to higher education for students who cannot attend in-person courses due to work, family or geographic challenges. However, prior research has found that students who attend in-person classes tend to perform better in those courses than students enrolled in online classes. One study also concluded that students who attend colleges that are primarily or fully online see at most a minimal return on their investment.

And little is known about the outcomes of a growing group of students: those who enroll in online programs at public and private nonprofit institutions that have traditionally operated classes in person. Spurred on by student demand for online offerings, competition from the for-profit sector and the need to generate additional revenue, many traditional colleges have rapidly expanded their online offerings. Some of this has been accomplished with the help of third-party online providers (online program management companies, or OPMs) that provide a range of services to support new online programs, but some of it has also been done by colleges using internal resources.

The long-standing debate about the value of online education, combined with recent scrutiny of OPMs from a few prominent Senate Democrats, heightens interest in the student debt burdens and postcollege earnings of students attending online programs at traditionally in-person institutions. I have long been interested in program-level outcome data and have written extensively about how to use data from the federal College Scorecard and the Department of Education’s Integrated Postsecondary Education Data System (IPEDS) for research and accountability purposes. So when a group of leading third-party online providers commissioned me to see if I could say anything about the value of their partners’ programs, I was happy to jump into the data to see what was possible. I was provided a list of their programs at partner institutions and went on my way.

The key takeaway of my research is that while the U.S. Department of Education’s College Scorecard has institution-level data on debt and earnings by field of study, it is not possible to separate out the outcomes of students attending online versus in-person programs. This is a concern at traditional institutions because colleges frequently appear to start online programs when they already have an in-person option. And these in-person programs are often sizable: among early adopters of OPMs, the in-person programs likely graduate at least as many students as the online versions.

To answer the questions everyone wants answered about the outcomes of students attending online programs at traditional institutions, and to better compare the outcomes of online and on-ground students, the U.S. Department of Education needs to make several improvements to the data it collects from colleges. The three most important recommendations are the following:

  • Make it clear when a college only offers a certain program online instead of having both online and in-person options. As a part of IPEDS data reporting, colleges are currently asked whether no, some or all programs within each Classification of Instructional Programs code can be completed through distance education (which in 2022 usually means online). This metric has value, but it fails to distinguish programs that have both in-person and online options from programs that can only be completed online. A small tweak to data collection would allow for solely online programs to be identified.
  • Report IPEDS data on the number of graduates by program separately for fully online programs and all other programs. There is no way to tell the share of graduates coming from online programs versus in-person or hybrid programs. This makes it difficult to see the prevalence of online delivery models and how traditional institutions have adjusted their strategies. Colleges already have to report completions by CIP code, race and gender, so adding in a measure for whether the program is fully online or not should be straightforward.
  • Report College Scorecard debt and earnings data separately for fully online programs and all other programs. Similar to the above point, combining data for online and in-person programs makes it impossible to tell how either delivery model performs. The drawback of reporting outcomes by modality is a smaller sample size, which means that some programs’ outcomes would not be reported due to privacy restrictions. But this could be alleviated by combining additional cohorts of students to provide a picture of student outcomes.
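The cohort-pooling idea in the last recommendation can be illustrated with a minimal sketch in Python. Everything here is a hypothetical assumption, not the actual Scorecard file layout: the field names, the suppression threshold and the data are invented, and pooling true medians would require recomputing them from the underlying records rather than taking the size-weighted average used as a stand-in below.

```python
# Hypothetical sketch: pooling two graduating cohorts so a small program
# clears a privacy-suppression threshold. Field names, the threshold
# value and the data are illustrative assumptions, not Scorecard specs.

PRIVACY_THRESHOLD = 10  # assumed minimum cohort size before reporting

# One record per (program, cohort year): graduate count and median earnings
cohorts = [
    {"program": "MBA-online", "year": 2019, "n": 6, "median_earnings": 61000},
    {"program": "MBA-online", "year": 2020, "n": 7, "median_earnings": 64000},
]

def pooled_outcome(records):
    """Combine cohorts: sum graduate counts and report a size-weighted
    earnings figure (a stand-in for recomputing a true pooled median)."""
    total_n = sum(r["n"] for r in records)
    if total_n < PRIVACY_THRESHOLD:
        return None  # still suppressed even after pooling
    weighted = sum(r["n"] * r["median_earnings"] for r in records) / total_n
    return {"n": total_n, "earnings": round(weighted)}

# Each cohort alone (n=6, n=7) falls below the threshold and would be
# suppressed; pooled together (n=13) the program can be reported.
print(pooled_outcome(cohorts))
```

Running the sketch shows the mechanism: neither cohort is reportable on its own, but the combined cohort is, which is exactly the trade-off (timeliness for coverage) that pooling additional cohorts entails.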

Improving the quality of data on student outcomes should be an area of bipartisan agreement, as evidenced by the Obama administration’s introduction of the modern College Scorecard and the Trump administration’s addition of program-level outcomes data. Providing information on the debt and earnings of graduates based on whether they enrolled in online or in-person programs will give students better information about their options, and it will also give policy makers a sense of how to approach new delivery models and partnerships. But for now, it is impossible to answer some key questions about the performance of higher education, in spite of substantial public investment in both data infrastructure and financial aid provided to students.
