

Measuring learning is difficult. Disentangling the effects of ed-tech tools from all the other contributing factors in an educational environment can be almost impossible. After a decade of conducting mixed-methods studies of educational interventions, and having the privilege of working with some of the leading researchers in this area, I’ve landed on a basic but critical truth: useful impact research begins with context. Without context, research results lack utility to the educators we are trying to serve, and sample sizes, correlation coefficients and significance tests are irrelevant.

As educators demand more insights into the efficacy of ed tech, I’m excited by the growing body of published research in this field. But as I travel from campus to campus talking with instructors about efficacy studies they may find useful, I inevitably hear, “But my class is different.”

And they’re usually right! For example, I was recently talking with a psychology instructor at a small community college about results from a rigorous, controlled study conducted at a large, private four-year university. She was somewhat underwhelmed by the results -- the difference in average student outcomes and effect size -- and asked me, “Do you think [the researchers] would get the same results with my students?” Given the study design and methods, I couldn’t say.

Over time, I’ve come to realize that many instructors are not reading the growing body of efficacy literature because they can’t relate to it -- and I can’t blame them. As researchers, we have to do better. Instructors need reliable and relevant evidence to make well-informed decisions about whether to use educational technology, which tools to adopt and how to implement them. A tightly controlled efficacy study may be rigorous, but it is of limited use to educators if the insights are not reliable and relevant to their particular situation. And because many studies take years to complete, the results may arrive too late to help educators when they are choosing ed tech and deciding how to implement it.

So, instead of relying on traditional methods of impact research that try to isolate only what works, I suggest four principles for conducting impact research to ensure insights are useful to educators and enable improvements to education:

  1. Instructors should be partners in a study, not participants in one. Researchers should conduct studies in partnership with instructors. An understanding of the goals, needs and challenges of instructors should form the foundation of a study. Researchers should then co-evolve the study goals, design and data-collection strategy to ensure the study surfaces insights that instructors need and can use. Equally important is understanding the culture of the instructor’s institution -- including expectations for how research involving students is conducted there -- to ensure that all requirements of the local institutional review board are met.
  2. Get to know how the tool is being implemented and the students using it. The first step in any research program evaluating the efficacy of ed tech should be a set of “implementation” studies conducted across a range of instructors and types of institutions. The educational environment in which the study is taking place should be meticulously observed and documented, including the physical organization of the environment, the makeup of the student body and -- most importantly -- how the instructor has chosen to implement the tool in their course and how students engage with it.
    This can be achieved by collecting rich qualitative data from instructors and students (interviews, focus groups and observations) and pairing it with quantitative data (survey metrics, platform analytics, academic performance and attendance) gathered from consenting students. Analyzed together, these data provide a robust foundation for understanding how using a tool in a specific way within a given context leads to particular instructor, course and student outcomes. Though the results of these analyses are descriptive and correlational (and therefore not appropriate for causal claims), they are critical for providing the context required to understand why a tool works the way it does.
  3. Explore effectiveness within one context, and then across multiple contexts. Analyses should first be conducted on data gathered from individual contexts, with inferences about effectiveness restricted to that educational environment. These can provide useful insights to the instructors who partnered on the study and to instructors who implement the tool in similar ways and contexts. Then, data should be re-examined in the aggregate to uncover trends that appear across contexts and implementation models, providing a rigorous analysis of the relationships between use of the tool or intervention and the outcomes achieved, and producing results that are more generalizable.
  4. Communicate research results in ways that are relevant and actionable for educators. Educators are often left to hunt for relevant insights in long, dense and technical studies. To make efficacy research more accessible and actionable, it must be communicated in ways that are relevant and easily consumable for the intended audience. For example, instructors often find value in reading about peers who have participated in studies -- their course goals, contexts, challenges, implementation choices and the results they achieved. A growing library of these narratives gives instructors the opportunity to identify sound research conducted in a context similar to their own class, giving them confidence that they can achieve similar results. Researchers should also publish supporting research briefs and technical reports for transparency about study methods, designs and analyses, so any educator can confirm for themselves the validity of any claims made.

As educators continue to demand more robust and relevant insights into what works with ed tech, researchers need to reach further to give them visibility into why it works, for whom it works, and in what use cases and contexts. After establishing a clear understanding of context and implementation, novel comparative studies that build on the foundation of early research and deepen partnerships with instructors will prove more reliable and useful.

If our goal is for ed tech to help improve student outcomes and success, we need to shift the culture and conversation around efficacy away from research conducted in isolation and toward research that is highly collaborative between researchers and educators, so that findings are relevant and actionable for instructors in their courses.
