Persuasive sales reps, unscientific research studies and half-baked pilots may be driving tech adoption in higher ed more than many decision makers would care to admit.
A qualitative study investigating how institutions select new educational technology has found that many institutions are making decisions based on “less than rigorous” evidence. The study was conducted by researchers from Columbia University’s Teachers College with financial support from the Jefferson Education Accelerator and published in Educational Technology Research and Development in June.
Based on interviews with 45 decision makers at 42 U.S. public and private two- or four-year institutions between September 2016 and April 2017, the researchers found a wide range of approaches to selecting new technology, but few institutions drew on strong scientific evidence of whether a given technology actually improves outcomes.
Peer-reviewed external research was mentioned by just a fifth of interviewees. More often, decision makers conduct their own internal investigations into new tools. These efforts may yield information relevant to the specific institution, but they often lack rigorous research methodology, the study determined.
Incorporating externally produced research into decision-making processes in higher ed is “difficult,” the authors said. Good research on a particular tool or practice is not always available, and even when it is, there is often no guarantee that what works in one institution will work in another.
There is an “inherent tension” between external and internal research, the authors wrote.
“Externally produced, rigorous research, such as randomized controlled trials (RCTs), is often expensive, may take too long to inform pressing decisions, and is often difficult to generalize to a decision maker’s context,” they said. On the other hand, “locally relevant, internal research, such as faculty and student surveys or pilot studies, may be more feasible to implement and may provide more timely information” but may be “less reliable for providing solid answers to questions about effectiveness for improving academic outcomes.”
That universities and colleges are not doing more to make evidence-based tech decisions a priority is somewhat surprising, given that many higher education institutions include research in their mission statements, the authors said. “But universities have often been characterized as ‘organized anarchies’ in which faculty and students operate with a great deal of autonomy and administrators struggle to manage disparate interests.”
The approaches to decision making identified in the study ranged from “reasonably rational” to what is described as the “garbage can model.” The former starts with a specific need identified by faculty, students or administrators and ends with the implementation of a carefully vetted tool. In the latter model, new ed-tech tools are acquired “with a view to later finding a use for them.”
Most of the interviewees appeared to fall somewhere between these two approaches -- constantly scanning a variety of sources for information about new tools, while at the same time trying to gather information about their community’s needs. The authors described this process as “matchmaking.”
“I was expecting to find more rational decision-making processes,” said Fiona Hollands, associate director of the Center for Benefit-Cost Studies of Education at Teachers College, who co-wrote the study. “I thought more institutions would start with a need or a problem and then figure out the solution, rather than starting with solutions and finding problems to solve with them.”
“There are places that literally scan the market looking for new innovative technologies, bring it in, play with it in their technology units and then try to find a use for it on campus. I find that a bit absurd,” said Hollands.
While it’s good to be aware of what technology is out there, Hollands believes that some institutions have “more money than sense” when it comes to technology adoption.
It doesn’t make a lot of sense to adopt a technology without knowing what you want to do with it or investigating whether there is evidence it might work, said Hollands. “What is the problem you’re trying to address -- are you trying to improve access for students who can’t get to campus? Are you looking to improve your retention rate? Or perhaps prepare students to be more tech-savvy for the job market?”
Institutions waste a lot of money by failing to regularly evaluate whether the technology they have bought is working. “People are acquiring more and more technology and never getting rid of the stuff that doesn’t work,” said Hollands. “Too much technology is acquired and then not divested if it’s not doing what’s needed.”
Bryan Blakeley, executive director of the Center for Digital Innovation in Learning at Boston College, who was not involved in the study, agreed it is important to regularly review whether tools are actually achieving results. “The landscape of teaching technologies changes pretty quickly, and these are expensive decisions. We’re not talking about a couple of hundred dollars here or there -- it costs hundreds of thousands of dollars to implement new software at the institutional level,” he said.
Boston College is “pretty conservative” about adopting new technology, often waiting until other colleges or universities have demonstrated success, said Blakeley. The college is part of a group of Jesuit colleges that often collaborate and informally share tech recommendations with each other, he said.
Both Blakeley and Hollands said they would like to see colleges share more of the pilot studies they conduct internally. Hollands favors the creation of an ed-tech research repository with checks and balances for quality, to prevent ed-tech companies from submitting biased studies of their own products. The Jefferson Education Accelerator, which rebranded as the Jefferson Education Exchange in 2018, works with educators and ed-tech companies to share research on the use of educational tools.
Blakeley agrees that research is important when considering adopting a new tool. When you can’t find a specific study about the tool you’re considering, he suggests looking to see if there is related learning-science research underpinning how you want to use the tool.
For example, Blakeley is currently evaluating a collaborative annotation tool. There isn’t much existing research on the use of the tool in an academic setting, but there is research demonstrating that students reading and evaluating content together is beneficial.
Boston College often conducts pilots of new technology before adopting it at scale, asking questions such as: Were students more engaged in their courses? Did the quality of their assignments improve?
It helps to have faculty willing to try out new tech or come forward with ideas, said Blakeley. “We’re very lucky to have a group of 40 or 50 faculty that want to experiment,” he said.