
BALTIMORE – If any of the academic administrators who attended a session entitled “When Does Blended Learning Move the Needle?” at a conference here last week hoped to emerge with a simple strategy for using classroom technology to improve student outcomes, they were surely disappointed.

Barbara Means and Rebecca Griffiths of SRI Education, the presenters during the Academic Affairs Summer Meeting of the American Association of State Colleges and Universities, had no magic potions or silver bullets to offer the associate provosts, deans and others eager for insights.

As Means and Griffiths laid out the state of research on which kinds of technology-enabled learning are likeliest to help students learn more, and under what conditions, they offered evidence that certain uses of technology – particularly those that blend digital content and tools with in-person instruction – are consistently found to produce more student learning (as measured by grades, at least) than entirely face-to-face settings. (The same is not true, on balance, for purely online formats.)

But their other key takeaway was that the use of technology itself appears not to be primarily responsible for the improved outcomes. Rather, the accumulated studies they shared found that the biggest effects came when the instructors changed what material they taught and how they taught it.

“If you just use a new digital learning technology without changing anything else, chances are you’re not going to have a significant impact” on learning, Griffiths said.

Going Meta

SRI may be best known as the producer of arguably the most significant piece of research on the efficacy of online learning: the Education Department’s 2009 meta-analysis showing that students who took their courses entirely online performed roughly similarly to those in face-to-face classes, but that students who took a blended curriculum (with more than 25 percent of the content, but not all, delivered digitally) performed about a third of a standard deviation better than those in comparable in-person courses. (A third of a standard deviation would put a student who had been in the 50th percentile in the 64th instead, Means said.)
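That percentile translation can be checked with a quick normal-curve calculation. The sketch below is not from the presentation; it assumes test scores are normally distributed and the student starts exactly at the median, and it lands within a point of the figure Means cited:

```python
# Hypothetical sketch: converting an effect size measured in standard
# deviations into a percentile shift, assuming normally distributed scores.
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def shifted_percentile(effect_size_sd):
    """Percentile reached by a median (50th percentile) student after
    gaining `effect_size_sd` standard deviations; the median corresponds
    to z = 0, so the new percentile is just the CDF at the effect size."""
    return 100.0 * normal_cdf(effect_size_sd)

print(round(shifted_percentile(1 / 3)))  # roughly the 63rd percentile
```

Under these assumptions a one-third-standard-deviation gain moves a median student to about the 63rd percentile, consistent with the roughly 64th-percentile figure Means described.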

The 2009 study was hardly the last word in what has been a continuing debate about the comparative efficacy of traditional and technology-enabled learning. Various studies have challenged the SRI analysis, but Means and Griffiths presented more recent meta-studies that reinforced their original conclusions, consistently finding average effect sizes of about 0.35 for blended or hybrid approaches and negligible effects (0.02 to 0.15) for online-only formats.

“We’re not saying that blended learning is a silver bullet or is going to totally change everything,” Means said. “But based on the research base, this is an approach worth investing in.”

But knowing those findings doesn’t help a college administrator very much, she acknowledged. The meta-studies’ conclusions are based on scores or hundreds of studies featuring different facts and circumstances and different technological tools. And right now there are relatively few studies that affirm the validity of most individual blended learning products – and enormous variability in the effects found in different research even on the same products.

So are there patterns within them that might help a college administrator figure out what might work in his or her circumstances or institution?

That’s another $64,000 question, Griffiths said – and yes, “we do see patterns in the literature about what types of technologies and approaches are most likely to have positive effects,” she said.

In general, they are tools that:

  • increase the flexibility of what happens in the classroom, rather than overly scripting it;
  • give students lots of opportunity to explore; and
  • engage students in active problem solving.

Some of those conclusions are proven through negative results, Griffiths said. For example, one notable study examined a teaching tool that gave students a significant amount of help – “scaffolding,” in digital learning parlance – to figure out the right answer to problems. The research found that students were more likely to answer those specific questions accurately, but were less likely to be able to solve problems later on, and in other settings, because too much of the work was done for them.

“Students who used this tool were worse in transferring the knowledge to other contexts,” Griffiths said. “Technology that overly scaffolds, that takes too much of the work away, can negatively impact students.”

The Next Generation Learning Challenges program funded by Educause offered additional evidence about what works and doesn't in blended learning approaches, the SRI researchers said. The study of 29 products that claimed prior evidence of effectiveness and were designed to "scale up" to reach more students found widely varying impacts across different types of products.

Learning analytics platforms produced little gain in student learning. Adding in supplemental digital resources was a mixed bag. Peer learning supports offered modest gains. So did course redesign efforts.

Only when institutions undertook “a fundamental transformation of the course” – essentially introducing all of the above – were there consistently significant positive results.

Which, of course, creates a major challenge for institutions and the technology developers trying to serve them, Griffiths said.

“Comprehensive redesigns take a lot of effort to implement, and the more change that is required, the harder it is to scale,” she said. It is in developers’ interest to make their products as easy as possible to adopt, but simple and narrow interventions tend not to be as effective.

"Those two things are in tension with each other," Griffiths said.

The researchers' advice to deal with this conundrum, and to improve the chances of adopting blended learning approaches that actually move the needle on student learning?

Start with the problem. Too often the decision to use a new technological tool or learning system emerges from an individual instructor's curiosity or a company's pitch. Instead, the process should start by identifying the problem the institution wants to solve.

"Maybe it's that you need students to be acquiring more skills, say a higher pass rate for Math 100," said Means. "What is it that we think contributes to the low pass rate? Maybe it's that students aren't working through the content at a steady pace," and some of them may have "anxiety around math concepts, too." "If we start from that framing, we can look for products that do those particular things."

Focus on specific sets of students. Talking about whether a technology or pedagogical approach "works" is too simplistic a way of thinking. Different types of students in different settings respond differently to approaches, and "if a digital learning intervention is not well-suited for the students it is being used for, it can have negative repercussions," said Griffiths. "We've seen lots of evidence of the negative impacts of taking a technology designed for one group and using it for another."

Engage with faculty members as partners, and build teams. "Too often when choices about digital learning products are being made, ... we see institutions taking the faculty member out of the equation," said Means. Surround instructors with instructional designers, IT support, colleagues and evaluators in building experiments, "because it's important to know whether what you're doing is actually helping students." And give professors sufficient training, too, she added.
