In the last two years since MOOCs have been in the spotlight, both commentators and practitioners have made the case that a key to realizing the potential of technology in education is the collaboration of experts in teaching and learning, educational researchers, computer scientists, and disciplinary specialists.
We have such a partnership at MIT as the Teaching and Learning Laboratory (TLL) has teamed up with computer scientists from Anyscale Learning for All (ALFA), led by Dr. Una-May O’Reilly, and a physics faculty member, Professor John Belcher, who heads a team teaching electricity and magnetism (E&M), a required course for all MIT undergraduates. (Full disclosure: He is also my husband.)
We realize these kinds of collaborations are not new; they were pioneered by institutions like the Open University in the U.K. and Athabasca University in Canada, as well as organizations like the Open Learning Initiative. But we’ve added one more member to our team, Shreeharsh Kelkar, a doctoral student in anthropology, who observes our interactions.
Our goal is to unearth best practices in how communicating across disciplines can improve on-the-ground delivery of education. There is no doubt that in the months we’ve been collaborating, we’ve had challenges in communicating across our fields, but new ways to study and strengthen digital teaching and learning have also emerged. Here are several examples of what we have seen in our own work together.
On the simplest level, we need to translate jargon for one another. For example, I have no idea what k-means, hidden Markov models, or dynamic Bayesian networks are—all of which are techniques used by the ALFA researchers. On the other hand, they weren’t familiar with the concept of “expert/novice learners” and how students move from one state to another.
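For readers curious about one of those techniques: k-means groups data points by repeatedly assigning each point to its nearest cluster center and recomputing the centers. The toy sketch below (the data and function names are illustrative, not ALFA's actual code) shows the idea on a handful of one-dimensional points:

```python
# Minimal k-means sketch: partition 1-D points into k clusters by
# alternating an assignment step (nearest centroid) and an update
# step (recompute each centroid as its cluster's mean).
def kmeans(points, k, iters=20):
    centroids = points[:k]  # naive initialization: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assignment step
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]  # update step
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Two obvious groups, around 1 and around 10:
cents, groups = kmeans([1.0, 1.2, 0.8, 10.0, 9.8, 10.2], k=2)
```

With well-separated data like this, the two centroids settle on the means of the two groups after a couple of iterations.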
And while the computer scientists on the team might remember Faraday’s Law, a pivotal concept in freshman E&M, from their own undergraduate days, I, for one, haven’t a clue. The physics team, on the other hand, is intimately familiar with Faraday’s Law, what the mastery of this concept might involve, and the difficulties undergraduates experience while trying to understand it.
Introducing educational theory into the work of the ALFA data analytics team has also yielded interesting differences in perspectives. For example, ALFA has been looking for patterns in problem solving in the data collected for MIT’s first MOOC on circuits and electronics. They have been categorizing problems according to the context in which they appeared, whether it was homework assignments, lab problems, or exams. (See here for ALFA’s data model for the behavioral data collected by MOOC platforms and its open source framework for MOOC data visualization and analytics.)
TLL explained to ALFA researchers that the educational literature has grappled with the issue of classifying problems by the learning objective they serve, which gave them ideas for other patterns to explore. In turn, that led to further discussion about from whose perspective the problems should be classified—students? instructors? educational researchers?
We have seen friendly debates between the physics instructors and the teaching and learning experts over instructional practice. In one team meeting, we discussed whether pre-class questions the students submit should have specific learning objectives. As expected, the educationalists pushed for their use, but the physicists argued they would only cause confusion in the minds of the students.
The instructors, naturally, are most concerned with the practicalities of delivering a course and may imagine students in different ways from the TLL and ALFA researchers, who are one step removed. In fact, the educational researchers have chosen not to advocate for particular pieces of the research design in deference to the pressures on the instructors teaching an 800-student course.
Shreeharsh has also observed that each disciplinary specialist may see the role and practices of education differently because of the values embedded in their particular fields. For example, he watched a discussion about a workshop on data-driven education that three other team members had attended.
One of the ALFA researchers was complimenting another computer scientist who had developed a method to computer-generate assessment questions. However, the educationalist found that possibility “distressing” because “it makes teaching seem like something anyone can do with a script.” (Quotes from Shreeharsh’s field notes.)
The most interesting interaction for me was when we were trying to work out the methodology we would use to study the data being collected for the on-campus E&M course by the edX platform running on a local server. Discussions between researchers in each field identified intriguing differences between how educational researchers and computer scientists approach their work even when the goal is the same: to build a latent variable model—one in which not all variables are directly observed—to explain the data.
Educational researchers refer to theories in that discipline to posit the structure and processes represented by a latent variable model. Computer scientists look at the data and attempt to create a model that best represents a process. They work with latent variables that are undefined—that is, they know something is affecting the fit of the model to the data, but they don’t know exactly what that might be.
Educational researchers can help to explain the model using existing theory, or work with the computer scientists to develop and validate a new theory. Thus, we developed a research cycle that is iterative and interleaved—and, because it draws on both inductive and deductive approaches, complementary.
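To make the computer scientists' data-first approach concrete, here is a toy sketch of fitting a latent variable model: a two-component Gaussian mixture estimated by expectation-maximization, where the latent variable is each data point's unobserved component. This is an illustration of the general technique, not the team's actual model, and all names and data are invented for the example:

```python
import math

# Sketch: fit a two-component Gaussian mixture by EM. The latent variable
# is which component generated each point; EM alternates estimating it
# softly (E-step) and refitting the component parameters (M-step).
def em_two_gaussians(data, iters=50):
    mu = [min(data), max(data)]  # crude initialization
    sigma = [1.0, 1.0]
    pi = [0.5, 0.5]              # mixing weights
    for _ in range(iters):
        # E-step: each component's "responsibility" for each point
        resp = []
        for x in data:
            dens = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2 * sigma[k] ** 2))
                    / (sigma[k] * math.sqrt(2 * math.pi)) for k in range(2)]
            total = sum(dens)
            resp.append([d / total for d in dens])
        # M-step: re-estimate parameters from the soft assignments
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sigma[k] = max(math.sqrt(var), 1e-3)
            pi[k] = nk / len(data)
    return mu, sigma, pi

data = [1.1, 0.9, 1.0, 5.0, 5.2, 4.8]
mu, sigma, pi = em_two_gaussians(data)
```

The educational researcher's complementary move would be to ask what those recovered components *mean*—for instance, whether they correspond to two distinct groups of learners that existing theory would predict.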
This post is not meant to be an exhaustive list of the disciplinary differences we have seen, and we expect more to surface as we move forward. But we do think these examples are representative of common issues multi-disciplinary teams working in education may face as norms, beliefs, and practices in different disciplines—many of which are implicit—are encountered.
We intend to keep reporting from the “field” as we move forward in this collaboration.
Lori Breslow is the Director of MIT’s Teaching and Learning Laboratory and a Senior Lecturer in Managerial Communication at the MIT Sloan School of Management.