Theorizing Digital Learning

Connecting the dots.

July 24, 2019

This post is inspired by Dean Dad’s July 10 piece “Ideas in Search of a Theory: Day Two of the ‘Future of Higher Ed’ Conference.” In that post, he wrote that “the issues public higher ed is facing now are ‘undertheorized’” -- and that “some connecting of the dots could go a long way.”

Dean Dad was talking about public higher ed, but his lament could apply just as easily to our digital learning conversation.

How do the articles in each week’s “Inside Digital Learning” hang together? Is there a framework that we can apply to help us understand the latest news about new online programs or the most recent data on the shift from residential to online education? How can we make sense of the growth of university/corporate partnerships in the creation and running of new online degree programs? Is there a model that we can employ that will help us untangle the relationship between institutional resilience, demographic shifts and the evolution of digital and online learning? Are there theories of academic innovation that can help us imagine the future of higher education beyond hit-and-miss experimentation?

George E. P. Box famously quipped that "all models are wrong, but some are useful."

A model for digital learning is a set of integrated ideas, concepts and frameworks that help us build hypotheses and make sense of data.

Thinking about the future of universities through a digital learning lens requires that we make some predictions. These predictions, to be useful to anyone, must be falsifiable. If we can’t be wrong, then we aren’t saying anything important.

To date, most of the theories of the impact of digital have emerged -- and have been applied -- outside higher education. There is a large body of literature that examines the impact of digital technology on industries as diverse as news, retail and entertainment. When these theories and frameworks are applied to higher education, the results are most often less than optimal.

A prime example is Clayton Christensen’s theory of disruptive innovation, perhaps the dominant framework on the impact of digital technologies on the future of higher education. The theory of disruptive innovation was proposed as a way to understand the dynamics and failures of industries outside higher education. It was later applied to higher education by Christensen and others, and the results (in our opinion) have been problematic to say the least.

Higher education is more an ecosystem than an industry. Universities -- and the educators that make up our institutions -- adopt a much wider set of goals and objectives than even the most diversified of corporations. Unlike companies, universities both collaborate and compete -- and they do so with very long time horizons and under a bewildering array of constraints and objectives.

Rather than complain about the hegemony of disruption thinking when discussions of digital learning come up, it is up to us to develop a better set of ideas and frameworks.

We should recognize that the creators and consumers of digital learning will bring their own sets of assumptions and biases to these activities. Articulating a theory of how digital learning is likely to play out at our universities will go a long way to surfacing and addressing the preconceived notions that each of us brings to this conversation. Ultimately, no single theory can either explain all the ways that digital learning is changing higher education or predict what will happen to our schools and to our educators.

What theorizing can do is help us think systematically through the impact of digital technologies on our schools -- and, we hope, enhance our ability to make wise decisions.

What theories, or models or frameworks, of the impact of digital learning on the future of higher education are you familiar with?
