
To the Editor:

Measures of quality in online education have been developed through decades of research in the field of instructional design and technology (IDT), informing best practices in the design, development, implementation, and evaluation of contemporary online courses and programs. Your recent article, “Online Classes Surge at Virginia Tech. But What About Outcomes?” by Susan D'Agostino, calls into question the quality of the online courses featured in the piece.

Unfortunately, the article contains notable misconceptions about online education in general, as well as misunderstandings about strategies for measuring the effectiveness of these courses. I hope that highlighting these issues and sharing relevant, evidence-based resources from the IDT knowledge base will inform future publications that address the topic of quality in online education.

First, the author labels the courses featured in the article “massive online courses,” conflating them with Massive Open Online Courses (MOOCs). MOOCs are free or low-cost learning opportunities available to anyone, and they are quite different from the courses featured in the article: large-enrollment undergraduate courses for residential students. This distinction is an important one. The students are different, the designs of the learning experiences are different, and, thus, the measures of quality are necessarily different.

Another key misconception in the article relates to the recommendation that Virginia Tech determine the effectiveness of the featured online courses by comparing student learning outcomes in those classes with their on-campus counterparts. This suggested practice fails to consider decades of research explaining why such comparisons are neither valid nor meaningful. There are many reasons to avoid them. One fundamental challenge is that this approach presumes the face-to-face classroom to be the gold standard against which quality is measured. Such comparisons place credit (or blame) for student achievement on the physical learning environment without considering the many factors that actually impact student learning, particularly the selected teaching strategy and its alignment with the targeted learning outcomes.

In such comparisons, the instructional method and the delivery mode are confounded, rendering the outcomes meaningless. Media comparison studies like those recommended in the article are simply not useful sources of information about the effectiveness of any form of instruction.

Another key oversight in the article is the absence of information about the instructional design strategies employed in the featured courses. The faculty teaching these courses engaged in mindful planning and implementation informed by the science of learning and best practices in online education. For example, Professor Greg Tew created open online textbooks for his Design Appreciation and Life in the Built Environment courses, building in strategic opportunities to engage learners in authentic applications of design knowledge and skills.

Professor Stefan Duma organized in-person lab sessions to bring online students together as part of his Concussion Perspectives course. Each of these faculty members communicates regularly with students regarding course plans, timelines, and expectations, information made available through detailed course websites that serve as an organizational framework for the courses. Professor Duma also holds weekly in-person office hours to engage with and assist his online students, along with providing ongoing support via email. Professor Tew offers one-on-one Zoom calls as needed and personally responds to student emails, usually within minutes, seven days a week. These kinds of instructional design and learner support strategies are hallmarks of effective online and blended teaching and learning, underpinned by decades of research by scholars in instructional design and technology.

Finally, the article neglected to mention the important role of formative evaluation data in the implementation of the featured courses. The article's key question, “What about outcomes?” is answered through continuous evaluation of these courses, using various formative evaluation strategies, to ensure that they are facilitating the intended learning outcomes and meeting student needs. For example, the learning management systems that support online (and in-person) courses at Virginia Tech permit instructors to monitor student progress throughout a course, providing insights as to when students may be struggling.

Additionally, the regular engagement these faculty members have with their online students, beyond what is possible in a large lecture hall, provides additional formative feedback that can be used to revise course plans and to offer more detailed guidance or support for the whole class. Formative evaluation data generates critical information that can be used to address emergent student needs and facilitate a more effective learning experience overall.

I appreciate the opportunity to address these issues and challenges in the article “Online Classes Surge at Virginia Tech. But What About Outcomes?” Perhaps greater awareness of the principles of effective design, implementation, and evaluation of online teaching and learning, as well as the extensive research base on which best practices are built, can positively inform written explorations of these topics going forward.

--Barbara B. Lockee
Associate Vice Provost for Faculty Affairs
Professor, Instructional Design and Technology
