Substack personality Matthew Yglesias started a brief Twitter kerfuffle this past weekend when he asserted that higher education is an understudied subject, particularly given that higher education institutions are loaded with social scientists who could do such studies.
It’s a dumb thing to say. Either Yglesias hadn’t bothered to do a 30-second search or he was trolling for the angry responses he inevitably received. Such is the discourse, I guess. A later tweet suggested that Yglesias meant not that the research doesn’t happen but that it isn’t useful, given how little we seem to know about important aspects of higher education.
Part of the response involved Kevin Carey of New America jumping in with a qualified endorsement of Yglesias’s assertion, tweeting, “The extent to which universities as institutions -- or individual programs, or professors -- succeed at teaching is mind-bogglingly under-studied, particularly given what could be done, and how important it is.”
Siva Vaidhyanathan, a professor at the University of Virginia, pointed out what I think is obvious to most of us who do teach: “That’s because it’s hard to study, not because it is not being studied. Revision, questioning, reforming, experimenting is a major part of our professional obligations and effort. It’s also not a single question. Differs by subject, level, purpose, etc.”
My Twitter-length response is aligned with Vaidhyanathan’s. Measuring effective teaching is incredibly complicated. The reason we don’t seem to have definitive answers about who succeeds at teaching is rooted in those complexities. But as I thought more about it this week, I realized it’s even more complicated than I grasped in the moment of reading the online exchange.
The notion that teaching is understudied is mind-boggling to me, given that I have run my classroom as an extended experiment in improving the pedagogy of teaching writing for the entirety of my career. My assumption is that while broad policy-focused types like Yglesias and Carey would be approving of this activity, it would not actually count as research on higher ed/teaching and learning.
But why not?
The major focus of my teaching is asking if what I am doing is working, and if not, what else I could do differently. It is a recurring process of data collection, analysis and findings, repeated semester after semester. I know I am not alone in this practice.
It’s worth taking a couple of moments to ask and answer why this doesn’t count.
First, and most obviously, my brand of pedagogical research is not acknowledged as meaningful within the structures of higher education institutions. While teaching is an element of earning tenure, it is a threshold to clear, rather than an activity on which faculty are expected to continually improve. In fact, faculty are cautioned not to spend too much time on their teaching lest they fall short of expectations for scholarship. Additionally, publishing scholarship about teaching is primarily confined to its own discipline, as opposed to something faculty across all disciplines are expected to do.
More importantly, most of the teaching that happens at our public institutions (where the vast majority of education is happening) is done by contingent faculty. The expectation is that we show up, meet classes and allow the institution to collect the tuition dollars. That so many of us dedicate much more to the job than this bare minimum is a testament to the desire to do right by students and institutions, but let’s not imagine that it matters according to the systems and structures of higher ed. As good and dedicated as I might be, I am treated as entirely fungible. That I built up considerable expertise that could’ve been utilized by the institutions for which I’ve worked matters not at all.
The chief barriers to improving instruction are not pedagogical. At the same time, much of the experimenting around pedagogy is discounted.
But why? It’s not that it remains locked away with each individual instructor. It often winds up in publicly available forums where the ideas can be disseminated, discussed and implemented. West Virginia University Press has an entire series on higher education teaching and learning that’s rooted in just this work.
My own continuous research and experimenting resulted in two books. One, Why They Can’t Write: Killing the Five-Paragraph Essay and Other Necessities, illustrates one of the other complications of research on what makes effective teaching -- we don’t even agree on what the by-product of successful teaching looks like, at least not when it comes to the teaching of writing.
In fact, in the book I argue that one of the biggest impediments to teaching students to write is the very assessments we use to determine student writing proficiency. Reducing student progress to producing a passable five-paragraph essay has distorted writing instruction, resulting in all kinds of bad knock-on effects, primarily the demoralization of students when it comes to writing.
I do not know if my evidence and arguments in Why They Can’t Write would pass Matthew Yglesias’s (or Kevin Carey’s) definition of “rigorous” research. The book reflects almost 20 years of on-the-ground teaching experience and is as comprehensive a review of available evidence as I could muster within the context of the project.
That said, I have not done a quantitative study with an isolated variable to test my method.
I have not done that because it would be silly in the context of a subject like teaching writing. I know that my approach works. How? By assessing the work product of my students and, more importantly, by asking them. I believe students are reliable sources of evidence regarding their own learning. Many disagree. How could we ever bridge that gap to deliver research that would satisfy Kevin Carey and Matthew Yglesias?
I don’t believe we can, which is why people who are invested in education but who do not teach can be so frustrating to those of us who do.
If we cannot agree on a shared measure of what success looks like, how can we come to any kind of consensus on what makes for effective teaching? Anyone who cites scores on tests that require a five-paragraph essay as a measurement of progress (or lack thereof) is in a different universe of values from me. That evidence is literally meaningless … to me.
On the other hand, someone like me, who instead measures students’ enthusiasm for writing and their self-articulated confidence in tackling novel writing situations, will seem to have little in common with someone oriented around scores on a standardized test.
I’m confident that my pedagogical approach as articulated in The Writer’s Practice is replicable by others. I know this because other people have been using the book and told me so. But their agreement is predicated on a set of values similar to mine. We share a point of view about what matters, so my approach is not so much mine as ours. I was just the person who put it down into a book.
When we say that measuring teaching is complicated, it’s not a comment on the difficulty of setting up a testable hypothesis. I do this every semester.
It’s complicated because we have not done the deeper work of asking and answering which measurements truly matter. Carey’s and Yglesias’s stances assume we’ve already agreed on that question.