A way to produce more information about instructors' effectiveness
Colleges and universities across the country are under pressure to successfully educate more students and rein in rising costs. President Obama has seized on simmering public discontent with higher education by calling for a range of policies aimed at college affordability, including better information for prospective students, tying federal financial aid dollars to student success, and promoting innovative uses of technology.
These are laudable goals, but they largely ignore the most central part of students’ college experiences: the instruction that goes on in college classrooms and the people responsible for delivering it.
It would be as if K-12 education policy were only concerned with funding formulas and school social workers and ignored the people standing at the front of the classroom. In reality, improving the quality of teaching is a central goal of policymakers at the K-12 level because of a mountain of evidence that teachers are the most important in-school contributor to student learning.
But once a high school senior becomes a college freshman, there is suddenly little hard evidence on how much students learn in their courses and the quality of instruction they receive.
The main reason we know so little about instructors in higher education -- whether they be professors, lecturers, or adjuncts -- is that there are few common metrics of how much students learn in their courses. It is often impossible to even compare student learning across different sections of the same course at the same institution because exams are written and graded by individual instructors. Consequently, a math department chair likely has little idea of how well students in a calculus course are learning calculus. And although we know that there is no difference, on average, between the effectiveness of an elementary teacher with a B.A. and her colleague with an M.A., we are basically clueless as to whether it is better for a college student to learn from an instructor with an M.A. or one with a Ph.D.
The solution to this problem is straightforward: all students in large, introductory courses should take the same final exam. Examples of this practice are sprinkled throughout American higher education, but not liberally enough. A noteworthy example is Glendale Community College in California, which has used common final exams in its developmental algebra courses for over a decade. The effort was initiated by faculty concerned about grading standards and student learning, and the exams are written by instructors in the department (but not those teaching a section covered by the exam that semester). Exams are graded consistently for administrative purposes, but instructors retain autonomy: they may re-grade their own students’ exams and assign final course grades.
Data produced by the Glendale common final system enabled me to study how student learning varies across the classrooms of different instructors. On average, full-time instructors outperformed their part-time colleagues, and students learned more in the classrooms of instructors with an M.A. than in those of instructors with a Ph.D. But the identity of the individual instructor was a much stronger predictor of student performance than any specific characteristic. These findings from a single college may not apply more broadly, but they show the kind of analysis that is only possible with common measures of student learning. This kind of hard evidence would help faculty and administrators make better decisions around the hiring, evaluation, training, and retention of instructors.
Efforts to administer common finals need not be limited to single campuses going it alone. Last fall, the City University of New York (CUNY) began using a common final exam in the beginning algebra course taught on all of its campuses that offer associate degrees. As at Glendale, the exam questions are written by instructors and thus reflect their judgments about what students ought to know after taking this course. The Glendale and CUNY examples show it is possible for campuses to measure student learning, a necessary step before anything meaningful can be done about the quality of instruction in their classrooms.
Fear of evaluation may prompt faculty resistance to attempts to measure student learning on college campuses. It is not hard to see how such efforts could devolve into the acrimony that has characterized the teacher evaluation debate on the K-12 side.
This is exactly why campus faculty and administrators should follow the lead of Glendale, CUNY, and other campuses by developing assessments that faculty trust as valid measures of student learning and by agreeing on sensible ways to put the results to use. Colleges that fail to act in this crucial area run the risk that efforts to measure student learning will be done to them rather than by them. The K-12 experience shows why that is a less desirable outcome for everyone involved.
Matthew M. Chingos is a fellow at the Brookings Institution's Brown Center on Education Policy.