Who Leads on College Learning?

Significant experimentation has taught us much about what works and what doesn't in teaching and learning -- yet the knowledge remains diffused, not systemic. Can anyone marshal it?

January 29, 2020

Welcome to this week's edition of "Transforming Teaching and Learning," a column that explores how colleges and professors are reimagining how they teach and how students learn. Please share your ideas here for issues to examine, hard questions to ask and experiments -- successes and failures -- to highlight. And please follow us on Twitter @ihelearning.

***

When I started this column this month, I said it would provide lots of practical advice and highlight interesting experiments (successes and failures alike) to help campus practitioners rethink their work to improve learning.

I still vow to deliver that -- eventually. But right now, I've got some big, messy questions buzzing in my brain that I need to try to work out, with help from those of you who are interested.

The one I'm puzzling over the most right now has emerged from recent conversations with a group of people who've been working on teaching and learning issues for a long time:

How can we align into a coherent whole the enormous amount of exploration that many individuals and organizations have been doing in classrooms, on campuses and in disciplines to understand and improve learning?

Let's break that question down into its subparts.

First, it's important to acknowledge the unfairness of the oft-heard criticism that college teaching and student learning have hardly changed in decades, if not centuries. Just look around.

At most colleges over the last decade or two, individual professors or academic departments have explored and in many cases embraced significant innovations in teaching format, curriculum or pedagogical practice. You can find scores if not hundreds (or thousands) of examples in publications like Inside Higher Ed and The Chronicle, in presentations at disciplinary and accrediting conferences, and at a teaching and learning center near you.

Those campus teaching and learning centers have spurred faculty experimentation, and administrators have sought to seed those efforts with teaching awards and other incentives.

Associations representing a range of disciplines have promoted new methods tied to their fields.

Up a layer, organizations like the POD Network, the National Institute for Learning Outcomes Assessment, the Association of American Colleges & Universities, and, before it, the now-defunct American Association for Higher Education have tried to build national networks of professionals interested in improving their own work and the learning of their students.

Foundations like Spencer and Teagle have a particular focus on learning, and if you widen the lens to focus more broadly on "student success," many other foundations are also funding experiments and initiatives aimed at understanding or bolstering college-level learning. (The most visible philanthropic players in higher education, the Lumina Foundation and the Bill & Melinda Gates Foundation, do pay attention to the quality and amount of learning within their larger push for greater postsecondary attainment, but it often gets lost in the shuffle. Lumina's Degree Qualifications Profile was one particularly ambitious effort.)

And in the last decade-plus, the accrediting agencies (collectively, though with some variation) have significantly upped their efforts to prod institutions to set goals for what students learn and to show how and whether they are doing so. (More on that later.)

So, yes, lots of activity in lots of places involving lots of players.

***

Which brings us to a second assumption embedded in my question above -- that there is a need to "understand and improve learning."

(Point of clarification: Please remember that I'm very purposefully differentiating "learning" from the currently popular focus on "student success," which may include learning but tends to focus more on whether students complete their programs and earn credentials. As important as that is, it is insufficient as a way to judge the quality of higher education, as Derek Bok argued here. Earning a degree or certificate does not ensure that learning occurred; the macro-level question I'm asking in this column is, ultimately, "Completion of what?" What learning did a student gain in the process of earning that credential?)

Saying that we need to understand "learning" implies that it is an important element of what happens in higher education, which I doubt anyone would argue with. But suggesting that higher education needs to "improve" learning is another matter: that implies some inadequacy or insufficiency in how much learning is going on right now.

Is that fair? The short answer is we don't really know, at least in any systematic way.

But key constituents of higher education -- many employers, some parents and students, and parts of the general public -- increasingly seem to doubt whether sufficient learning is taking place on college campuses.

While affordability and campus politicization tend to top the public's list of concerns (depending on the political party), questions about education quality and whether students emerge well educated are also commonly cited in surveys of public attitudes.

In addition, consider this study showing that students think they're prepared for the workforce but employers disagree; this study showing mixed results on a wide range of learning objectives, including critical thinking and writing; and, perhaps most important, a report on this effort by a collection of institutions to capture performance metrics in important areas, which concluded that the institutions produced too little evidence to gauge the quality of learning at the program level.

It's possible to reject the idea that college students aren't learning enough (or the right things) and still think that colleges, programs and individual instructors can do more to ensure that their students learn, and learn more.

Several recent studies, like this one, have found that practices that have been shown to help students succeed are often available only to small numbers of students. And administrators and faculty members at many, many campuses -- as professionals who want to be better at what they do, and who care about how their students fare -- invest time and energy in faculty development programs aimed at improving teaching.

***

A third assumption I made above is that it is either desirable or possible to align the multitudinous efforts I've described into "a coherent whole."

What I'm envisioning is a unified effort, first, to better define the learning that we want to see in students; second, to develop better evidence about the extent of learning that is occurring, and to the extent that there's some good news there, as there almost certainly is, to make the case better; and third, to the extent shortcomings exist, as there also almost certainly are, to find agreement on how to fix what's not working and spread the use of what is to try to raise the bar.

At this point I can hear some of you echoing the words the president of a highly selective university said to me years ago after insisting that a new technology-based pedagogical approach would improve learning on that campus.

I asked "How will you know?" and the response came back, "Don't tell me you're buying in to that assessment crap?"

"Assessment" is a dirty word in some quarters of higher education, and for (some) good reasons.

Far too much effort and attention has been paid, critics (including many faculty members) argue, to what Natasha Jankowski of the National Institute for Learning Outcomes Assessment described at a conference last year as "assessment as bureaucratic machine." This approach often resulted, Jankowski said, in institutions slapping together ill-conceived efforts to try to measure something to prove to accrediting agencies or government regulators that they were doing so.

Rather than spending time on assessment "for them" (the politicians and accreditors that about half of faculty respondents to an Inside Higher Ed survey believe assessment is all about), professors and institutions should be focused on understanding learning for their own sake, and that of their students, John Etchemendy, former provost of Stanford University, said during the same conference session.

The goal should be more about checking "whether we’re teaching what we’re trying to achieve, and is the design still a good design, or maybe times have changed," Etchemendy said. "If we discover that our class is not working or that our students are not getting what we want them to get out of the class, then I would think we would all try to change it. Those are the good parts of assessment, and I think anybody can buy in to that."

The effort I'm talking about above does not involve the federal government. The administration of President George W. Bush tried that more than a decade ago; Congress blocked it (with the strong support of higher education) and the push stalled, though it definitely drew attention to the question of learning and lit a fire under the accreditors recognized by the Education Department, who in turn turned up the heat on the colleges they accredit.

Any new effort must come from within higher education and, ideally, be "bottom up," says Peter Ewell, president emeritus of the National Center for Higher Education Management Systems and a longtime expert in the domain of student learning.

At the most ambitious, he envisions a consortium of organizations and institutions that view student learning as a priority and work together to create a "community of judgment," as the Quality Assurance Agency in the United Kingdom did more than 20 years ago. They might reach a common understanding of what students should know and be able to do to earn certain credentials, or to be deemed to have certain levels of proficiency, and then create a framework for judging whether institutions are helping their students achieve those levels.

More within reach, he said, might be replicating and expanding the work of the Multi-State Collaborative to Advance Learning Outcomes Assessment, an effort by the Association of American Colleges & Universities and the State Higher Education Executive Officers. It aimed to get professors from around the country to (a) agree on a set of general education outcomes and (b) use that rubric to judge actual classroom work from representative groups of students at colleges around the country. No standardized tests, no bright-line outcomes.

Adrianna Kezar, Dean’s Professor of Leadership and director of the Pullias Center for Higher Education at the University of Southern California, shares the view that too much work aimed at improving teaching and learning unfolds in silos, within academic departments and among diffuse organizations.

"It happens in these pockets, with no synergy," Kezar said. "There's great work on learning being done around student engagement and collaborative learning, and within disciplinary associations, and by groups focused on diversity. Why don't these communities speak to and learn from each other? Could they come up with a set of common things they're exploring and work together on them?"

A historical strength of the American higher education ecosystem is that it isn't a system -- there's no government ministry that manages it, and little to no formal organizing structures at the national level. That relative independence and decentralization has in many ways allowed higher education to flourish, through competition and innovation, over time.

On the flip side, it also makes it difficult to take ideas to scale; it's hard to bring about systemic change in a nonsystem. The existing structures all have their frailties when it comes to leading any kind of nationwide effort: national associations can't get too far out in front of their members, for instance, and the institutions that higher education often looks to for leadership -- the most selective and wealthiest colleges and universities -- don't have any incentive to focus on better measuring learning because they are already assumed to be the best.

So where might leadership on this issue come from?

One of the more fascinating developments of recent years in higher education has been the emergence of new "networks" of institutions formed to attack specific problems or challenges. Achieving the Dream, which focuses on student success at community colleges, was one of the earliest; the University Innovation Alliance, a coalition of research universities interested in increasing college attainment, and the American Talent Initiative, a group of selective institutions aiming to diversify their campuses, are more recent examples.

Might there be a new organization in the offing that can pick up the ball from all the good work that others are doing now (and have done previously)? Or at least better align their work into a more coherent strategy?

Or am I missing something that's already out there that could be a better solution?

Feel free to weigh in in the comments section below if you have thoughts to share. And thanks for reading.
