Welcome to this week's edition of "Transforming Teaching and Learning," a column that explores how colleges and professors are reimagining how they teach and how students learn. Please share your ideas here for issues to examine, hard questions to ask and experiments -- successes and failures -- to highlight. If you'd like to receive the free "Transforming Teaching and Learning" newsletter, please sign up here. And please follow us on Twitter @ihelearning.
A decade ago, the history profession -- spurred on by Lumina Foundation and officials in several states -- embarked on an ambitious effort to unite faculty members around a common view of what degree recipients in a given field should know and be able to do.
Over several years, as part of this process known as tuning, historians from scores of academic departments agreed on a definition of the "skills, knowledge and habits of mind that students develop in history courses and degree programs." The goal was to better explain for students, parents, employers and others the goals of a history education and "reference points for measuring progress toward those goals."
The leaders of the profession's tuning exercise describe it as successful, part of a wider set of efforts that began changing the way many historians think about their work. They also acknowledge that it has largely stalled, "a good example of something that works but you can't get continued funding for it because it's no longer seen as an innovation," says James Grossman, executive director of the American Historical Association.
A somewhat harsher judgment comes from a Stanford University education professor, who describes the tuning process's creation of rubrics of what students should learn as "quite successful foreplay" that stopped short of the necessary act: developing a set of exercises that would provide evidence of whether students are achieving the goals in those rubrics.
In this new column dedicated to understanding the state of postsecondary teaching and learning, sometimes looking back can help us figure out how we might move ahead. Examining how efforts like tuning worked -- and where they fell short -- might help point the way forward.
Doing so can also fill in the gray areas of topics we tend to see, mistakenly, in black and white.
Tuning, for instance, puts the lie to the trope that most humanities faculty members don't care about the quality of their teaching or their students' learning. But it also shows how hard it can be to institutionalize change in higher education.
Let's return to 2009, when tuning first emerged on the landscape. The page had just turned on the administration of President George W. Bush, which was best known (infamous, in some quarters) in higher education for the work of the Spellings Commission on the Future of Higher Education and the administration's push on accreditors and colleges to measure student learning.
That effort did not accomplish what some believed then-Education Secretary Margaret Spellings and her allies wanted: having accreditors set a clearly defined, comparable floor for colleges to meet on student learning outcomes (Senator Lamar Alexander led a bipartisan group of legislators that stepped in to block such an effort).
But the pressure initiated by Spellings lit many fires on the postsecondary landscape, prompting accrediting agencies, groups of colleges, national associations and disciplinary societies -- fueled by foundations and federal grants -- to embrace the call for collecting and making public more and better information about how and what students learn (while purposefully saying that institutions and instructors, not governments, should drive that work).
Tuning was part of that onslaught. As announced by Lumina Foundation in April 2009, while the embers of the Bush-Spellings administration still smoked, tuning was inspired in part by Europe's Bologna Process, which sought to lay "qualifications frameworks" (sets of learning outcomes and competencies a student must demonstrate to receive a degree "at a specific level") alongside discipline-level definitions of what a degree in that field prepares graduates with different credentials to know and do. (Lumina set off down a related path with its Degree Qualifications Profile; you can read Inside Higher Ed's coverage of that effort here.)
While science and engineering disciplines have historically embraced student learning outcomes assessment more readily than their peers in the humanities and social sciences, the American Historical Association bought into tuning in 2012. It agreed as part of a three-year grant to convene faculty members from a range of colleges and universities across the United States to develop a disciplinary core (there were two versions, in 2013 and then 2016), and then to help departments at dozens of institutions use the resulting rubric to "tune" their programs to align with student, parent and employer expectations.
"This is a much better way for departments to grapple with assessment than have someone from the outside say, 'This is what your students should know and how you should be teaching,'" Grossman, the executive director of the history group (then and now), said at the time of the grant's announcement.
So what happened during the years that followed? A lot of on-the-ground work, and a cultural change, those closest to the tuning work say.
Over three years, nearly 160 instructors at more than 120 colleges participated in the AHA's tuning efforts, "with history faculty getting together to articulate what a history major learns and can do and talking about the purpose of a history major and how to articulate that in ways that employers and parents could understand," says Grossman.
But perhaps more important than that granular work, says Daniel J. McInerney, an emeritus professor of history at Utah State University who has studied the tuning initiative, was how the effort changed how the history association and its members look at instruction.
"Hardly any of us of an older generation were trained at all in pedagogy, and we just didn't have the vocabulary to answer a question like 'when a student finishes a course, what do you want them to know, understand and be able to do?' " McInerney says.
The tuning work created an environment in which members of the historians' group grew much more comfortable talking about themselves as teachers (as opposed to researchers) first, built interest in regional conferences on teaching and "opened the door" to initiatives like a $1.65 million project funded by the Andrew W. Mellon Foundation to make introductory courses serve as gateways for underrepresented students rather than trapdoors for them. (A 2017 study showed that students from low-income, first-generation and certain racial backgrounds were far likelier than their peers to fail or struggle in intro courses.)
"Disciplinary societies … can’t force anybody to do anything," Grossman says. What tuning helped the AHA do, he added, was move from being primarily a convener of research work to putting "teaching at the center of who we are as a profession."
That fight is far from won, Grossman and McInerney acknowledge.
"The incentives that historians face on their own campuses [favoring research over teaching] remain a struggle, and everyone knows that your annual review will in most cases focus on research first, teaching second," McInerney says. "We haven't made a lot of progress there."
Grossman also speaks, somewhat wistfully, about his disappointment that Lumina chose other priorities over continuing to support the AHA's work on tuning after the initial three-year grant expired in the middle of the last decade.
"We were just getting the cultural change into the soil, but to continue that metaphor, we still need resources and the agency of our members for the seeds and the farming," the AHA director says. Another 40 history programs raised their hands to participate in a next round of tuning work, Grossman added, but there is no money to fund it.
Grossman and McInerney may attribute the undone work of the tuning initiative to the uphill climb in a research-first culture and a plug that got pulled before the job was finished.
Sam Wineburg sees another problem -- a structural limitation in the original goal of the tuning work. Wineburg, the Margaret Jacks Professor of Education and, by courtesy, of History and American Studies at Stanford, describes tuning as a "genuine attempt to wrestle with echoes of external criticism that were also felt if not acknowledged out loud by many humanists and social scientists."
He "thinks the world" of Grossman's embrace of tuning as head of the historians' group. By getting on board with tuning, Grossman "led in a way that many heads of membership groups do not," Wineburg says.
The effort "put an accent mark on the need to ask ourselves deep questions about what it is that students learn, rather than just view it as a reflex of teaching," says Wineburg. "It made historians realize that their work wasn't done when they formulated a succinct and powerful lecture."
Physicists and biologists had embraced this work during the 1980s and 1990s, exemplified by the efforts of Carl Wieman and Eric Mazur, says Wineburg, a former middle and high school history teacher who returned to get a Ph.D. in applied psychology. But getting a group of humanists and social scientists to pay attention to the question of what students learn was a major advance, he says.
The problem is that's where the work stopped, Wineburg says, referring to it as "successful foreplay" that trailed off before consummation.
"It equated the act of assessment with the formulation of cogent and beautifully written rubrics" about what students should know and be able to do, he says, instead of trying to develop classroom exercises "that would embody those rubrics."
Wineburg and some colleagues at the Stanford History Education Group did some work on their own that offers evidence, he says, of the risk of that shortcoming. Their 2018 paper in the Journal of American History exposed juniors and seniors at a California public university to formative assessments originally designed to gauge critical thinking among high school history students, and found that few of them accurately understood the significance of a historical event based on readily available evidence presented to them.
It isn't surprising that most historians may not know how to construct formative exercises to help them gauge whether students are learning in the moment -- not only are they not trained psychometricians, but many of them have been trained, Wineburg says, in an academy that treats non-grade-based assessment as an "anti-intellectual, know-nothing approach to learning," such that they view it with distaste. Many of them are offering introductory courses that have "not been retooled since they were formulated."
Wineburg is hopeful, though, that historians may be ready to move tuning to the next level -- to "consummate" it, to use his metaphor.
He sees a potentially willing collaborator in the AHA and Grossman, who subscribes to the "never waste a crisis" mentality and sees one developing in the shrinking number of history majors and the disappearance of faculty lines on many campuses.
"The culture has been a problem, but I think change has begun to happen as departments lose [full-time-equivalent faculty positions]," Grossman says. "There are department chairs and influential faculty members who are now willing to listen to things they weren’t willing to listen to 10 years ago."
Historians recognize the need, he says, to become "more fluent in explaining things that seem so simple to them, but clearly aren't to the public: What is the value of history, not only in terms of intellectual development but in terms of career and civic life?"
Tuning alone didn't answer questions like that, or how to develop more substantive assessments, build pedagogically richer doctoral training, reframe the structure and culture of tenure and promotion, or better empower the adjunct instructors who provide so much of undergraduate instruction, says McInerney.
But "it has made historians intensely aware of the need to address and lead discussions that generate answers to those concerns," he says. And for that it's worth revisiting, and building on.