One of us, Richard Light, recently surveyed a group of faculty colleagues, asking simply, “What is the biggest change you have noticed in the university’s culture over the past 10 years?” The sample size was modest—just 25 professors, each of whom had been at their institution for at least a decade. Yet their responses are illuminating in part because, despite representing various disciplines, the faculty converged around just a few core ideas.

Seventeen respondents, more than two-thirds, described a far heavier emphasis on strengthening teaching—on working hard to find new and ever more effective ways to instruct students—as by far the biggest change in the university’s culture. Several also mentioned an uptick in faculty-led experimentation aimed at understanding which teaching methods actually work.

When pressed for specific examples, some of the professors seemed to enjoy listing and describing their own experiments in more effective classroom teaching. Nearly half had run such an experiment fairly recently, and almost all attached a simple caveat to their remarks: experiments to improve classroom teaching and enhance students’ learning must be reasonably simple to implement. They should not be too time-consuming. They should ideally be inexpensive. And the improvements in students’ learning—even if modest, as they usually are—should be measurable and clear.

Many instructors were happy to share their new ideas, to talk about what they had actually tried in their classes and what worked or didn’t work. Some examples they discussed included:

  • Personal communication from a professor to individual students, seeking to understand their goals and challenges;
  • Cold calling on students, rather than only inviting those with hands raised to speak; and
  • Assigning homework between classes that requires each student to post a public response online before class.

We have actually witnessed these examples in classrooms at strong universities around the country. We offer concrete details about them here not to share the most successful experiments but to highlight how vital it is that faculty test new approaches, whether successful or not.

Any great university should constantly encourage its faculty to experiment with their classroom teaching. Most important, professors should commit (and be supported) to gathering reasonably rigorous evidence and data to see if their new teaching strategies are contributing to some tangible change in student learning. Hunches are nice—we all have hunches. Concrete data are even better.

A No-Cost Effort to Reduce Anonymity in Large Classes

Joshua Goodman, now a professor at Brandeis University, tried a near-zero-cost experiment with a class of 60 students. He wanted to determine whether his style of communicating with students in his Regression and Causal Analysis course made any difference to their academic performance. Goodman divided the class into three equal groups and designated one as the control group. It received no special intervention. A second group received, one month into the semester, what he calls an “academic email,” though each message was personally addressed to the student by name. It read as follows:

Dear (Student’s First Name),

I’m enjoying teaching our class and would like to find out more about any specific econometric questions you might have than the large class format allows. If you’re willing, would you write me back a short email describing any questions that have arisen that would be helpful for me to clarify?

Sincerely, Josh

A third group of randomly chosen students received a somewhat more personal email. “My hope,” Goodman noted, “was that such a connection might improve their engagement with the course and might inform my own teaching (such as choosing different examples for class).” This email read,

Dear (Student’s First Name),

I’m enjoying teaching our class but would like to get to know you a bit better than the large class format allows. If you’re willing, would you write me a short email describing your personal current or budding professional interests? And your current feelings about how our course is relevant, if at all, to you personally?

Sincerely, Josh

After students responded to Goodman’s initial email, he would always write an additional brief email in return to confirm he’d read their response.

He describes the goal of this simple intervention: “I thought of the more ‘academic treatment’ as addressing specific, intellectual challenges but without explicitly addressing any issues of especially personal connection. In contrast, I thought of the more ‘personal treatment’ as one emphasizing a personal connection between me and the students.”

In general, roughly 90 percent of students responded to his emails. The only prominent difference between the academic and personal outreach was the length of students’ responses. Replies to Goodman’s personal notes were on average more than twice as long as replies to his purely academic notes.

But did any discernible difference exist in class performance among the three randomly selected groups? Almost everyone hopes for a yes, since this is such a quick and easy intervention for any professor. Yet—unfortunately—the answer is a clear no. Goodman presents data from students in all three groups in his written summary and writes in his conclusion, “If anything, the control group that received no emails at all appears to have slightly outperformed both treatment groups on problem sets and exams. There are no statistically significant differences, and the sample sizes are small. In short, this intervention had little positive effect on observable academic outcomes for students.”
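For faculty curious about running a similar check on their own class data, the sketch below shows one way the analysis could look. It is a minimal illustration, not Goodman’s actual code or data: the roster, scores and group sizes are all hypothetical, and it assumes Python with the scipy library installed.

```python
# A Goodman-style check: random three-way assignment, then a test for
# differences in exam scores between each treatment group and the control.
# Everything here is a hypothetical illustration, not Goodman's actual data.
import random
from scipy.stats import ttest_ind

random.seed(1)  # reproducible illustration
students = [f"student_{i}" for i in range(60)]  # hypothetical 60-person roster
random.shuffle(students)
control, academic, personal = students[:20], students[20:40], students[40:]

# Hypothetical end-of-semester exam scores for each student.
scores = {s: random.gauss(80, 8) for s in students}

# Welch's t-test (no equal-variance assumption) for each treatment group
# against the control. With only 20 students per group, statistical power
# is limited, echoing Goodman's caution about small samples.
for name, group in [("academic", academic), ("personal", personal)]:
    t, p = ttest_ind([scores[s] for s in group],
                     [scores[s] for s in control],
                     equal_var=False)
    print(f"{name} vs. control: t = {t:.2f}, p = {p:.3f}")
```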

Goodman’s final paragraph in his write-up is striking:

“The one constructive lesson I take from this experiment is something I had not previously fully appreciated. It is so many students’ strong desire to tell faculty about their own lives and how their trajectories connect to the curriculum. I was surprised that the responses to the personal email treatment were so lengthy, detailed and enthusiastic. That’s particularly true relative to the academic responses, which often struck me as underwhelming. This suggests to me that, going forward, I will find other ways to solicit students’ personal stories from them and make sure to incorporate connections to those stories into the curriculum itself.”

Cold Calls and Online Posts

Harvard University professor Dan Levy wanted to investigate the results of different teaching techniques in his two moderately large classes. One strategy was cold calling: choosing students somewhat at random to answer questions, rather than only those with their hands raised. The second was online postings: requiring some or all students to post their thoughts and responses on a course webpage. Levy wanted to explore whether either—or both—of the techniques showed compelling signs of enhancing students’ learning.

During one particular year, Levy taught two sections of a class called Quantitative Analysis and Empirical Methods, each with approximately 80 students. He divided each section in half, for a total of four roughly equal groups, and implemented a different teaching technique for each group.

In one section, half of the students were asked, as part of their homework assignments, to post responses online to some prompts. They were also told they were being put on a cold-call list for the semester. The other half of the students formed a control group. Levy encouraged all members of the control group to read before class (a pretty standard remark), but otherwise they received no intervention or change from traditional teaching.

In the other section, Levy randomly assigned half of the students to do online postings before class, with no cold calling, while the other half was assigned to the cold-call list without any required web postings.
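As a rough illustration of this design, here is a minimal sketch of how the four-group random assignment might be set up. The rosters and labels are hypothetical stand-ins, not Levy’s, and the sketch assumes plain Python.

```python
# A Levy-style setup: two sections of ~80 students, each randomly split in
# half, yielding four comparison groups. Rosters and labels are hypothetical.
import random

def split_in_half(roster):
    """Shuffle a roster and split it into two equal halves."""
    shuffled = roster[:]
    random.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

random.seed(1)  # reproducible illustration
section_1 = [f"s1_student_{i}" for i in range(80)]
section_2 = [f"s2_student_{i}" for i in range(80)]

# Section 1: both interventions (postings + cold-call list) vs. control.
both_treatments, control = split_in_half(section_1)

# Section 2: postings only vs. cold calling only, so the two techniques
# can be tested directly against each other.
postings_only, cold_call_only = split_in_half(section_2)

for label, group in [("both", both_treatments), ("control", control),
                     ("postings", postings_only), ("cold call", cold_call_only)]:
    print(label, len(group))
```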

Students were required to post answers on the course website, by 4 a.m. on the day of class, to three questions based on that day’s readings. The third question was always the same: “Please tell us what you found difficult or confusing in this reading assignment.” This question, recommended by physics professor Eric Mazur, was meant to encourage metacognitive thinking in students and to give the instructor a sense of common student difficulties. Levy used this information to adjust how much class time he spent on each topic. He also shared with students the themes that emerged from the posts.

For each class session, Levy randomly chose one student from his cold-call list and asked that student two to three related, carefully prepared questions. The questions tended to be factual, so any student who had done the reading carefully should have been able to provide a response. Levy rates this level of cold calling as moderate compared with the practice at many law schools and business schools across the country.

Throughout the semester, Levy met regularly with small groups of students to ask about their perceptions. At the end of the course, students filled out a brief anonymous survey indicating their predictions as to which treatment would work and why. This qualitative survey was instrumental in understanding the results of the experiments and in helping Levy draw lessons for his pedagogy.

Over all, the key findings from Levy’s experiments were:

  1. Both web postings and cold calling had a positive effect on the amount of time students spent reading before class, but not on academic performance itself (as measured by exam results).
  2. When tested against each other, neither of the two methods (web postings and cold calling) came out on top in terms of improving either class preparedness or academic performance.

Levy and his colleague Josh Bookin also solicited verbal comments from the students who participated. Their comments offer some insight into what students thought of the two techniques:

  • “Postings and reading did not enhance the in-class learning; rather, they took time away from problem sets. My time is not infinite.”
  • “If you do the readings hastily (because there is so much to do for this course), it does not make much of a difference.”
  • “While the cold calling did nudge me to be more motivated to do the readings, the intense workload of the course and mandatory biweekly postings completely burned me out and crushed my motivation to read by the end of the course.”
  • “I did not like the web postings because they distracted from my focus on studying the actual material.”

These students’ verbatim remarks are valuable because they are so uniformly blunt. Levy worked hard to enhance students’ learning, yet many students reported that his innovations demanded too much additional work, too much time, or both. We were surprised by Levy’s findings, and we are apparently not the only ones.

In fact, one part of the results was quite well predicted, while the second was woefully misjudged. This reminds us of the extraordinary value of gathering evidence, organizing a rigorous evaluation design and sharing the results carefully with colleagues, even skeptical ones.

In the first section, which paired a control group with students receiving both new techniques, the vast majority of students (92 percent) correctly predicted the interventions’ positive impact on reading time. But most incorrectly predicted an effect on their actual learning; only 26 percent across the groups correctly predicted there would be none.

Similarly, while the majority thought that students’ public web postings before class would increase reading time relative to cold calling, that was not supported by the evidence. In addition, only 18 percent of students correctly predicted that web postings and cold calling would be equivalent in terms of their impact on students’ demonstrable learning outcomes.

We find the work of Levy and Bookin to be particularly powerful. They chose to investigate a commonly held assumption: that web postings and cold calling would lead to increases in students’ preparedness and ultimate academic performance. And in this case and for their students, the common wisdom turned out to be incorrect.

This kind of systematic investigation and evaluation of new ideas for teaching is a critical piece of continuous improvement at colleges and universities. We may be pushing an open door here; we don’t think we are advocating for some sort of shocking overturn of what many good colleges and universities do now. We simply remind our readers about the power of concrete, carefully gathered evidence.

We chose these examples carefully to emphasize a few main points. We hope they provide some sense of inspiration to test assumptions in teaching and think creatively about how to enhance students’ learning. This process could be as rigorous as Dan Levy’s experiment that incorporated predictions and multiple interventions, or it could be as straightforward as Josh Goodman’s email test. In all cases, we recommend asking students for their feedback. We also urge faculty to actually incorporate student recommendations, a suggestion we will explore in greater detail in a follow-up essay.

Finally, we also hope that administrators will commit to encouraging and rewarding faculty for trying innovative ways to teach, even if they do not immediately achieve desired outcomes. It is the spirit of experimentation that matters here.
