As summer ends, professors across the country are gearing up for a new academic year: refurbishing old syllabuses, reviewing some alternate readings, perhaps adding service learning or a new assessment tool to their courses. I’m designing one entirely new seminar, plus working with colleagues to rethink our team-taught intro class. It all requires time and energy, and has to be done. But the best thing I do to improve students’ work in my courses is far simpler.
I will learn and use their names. It’s easy, and it works.
Using those names in class is uniquely powerful. As Dale Carnegie said, “Remember that a man’s [sic] name is to him the sweetest and most important sound in the English language.” (Of course we know today that this is true for a woman too.) A student who hears his name suddenly becomes completely alert; one who hears herself quoted (“As Hannah said, Machiavelli was just trying to be realistic”) will be replaying those words in her head, over and over, for at least a week.
I used to learn names by taking the class list and scribbling descriptions, and for a time I would videotape students actually speaking their names, then review the tape every morning over my Cheerios. My current technique, at least for larger classes, is flashcards. The first day I line up the students alphabetically (they’ll already be smiling at each other, with a nice excuse for meeting), then take their pictures one by one, bantering like a novice fashion photographer (“Excellent!” “You look sharp,” “Nice t-shirt,” “Great smile,” and so on).
After being photographed, the students write their preferred first and last name, with phonetic guides if needed, on a pressure-sensitive file label, a sheet of which lies on the desk. At the end of the day, I deliver the pictures to a one-hour development kiosk, and by morning have a full deck of photos, each with a name stuck on the back. Before each class meeting I spend a few minutes going through the deck again, memorizing the names. Whenever I pick up a new tidbit about a student I’ll write it on the back: “Plays lacrosse,” “Civil War buff,” “always wears these glasses,” “from Vermont.” The names take maybe four class meetings to learn; last fall, when I had 82 students in two courses, it required about two weeks in total.
And the technique, or at least its principle of individualized recognition, is scalable. With smaller classes (say, 29 students or fewer), you can make up nameplates – just a folded paper card will work, with names on the front. Within a few days not only will you know their names, the students will also know everyone else’s – a nice side benefit, and very helpful in seminars. With larger classes, learning the names certainly takes more work -- although a dean of students I once knew was famous for knowing and using the names of all 700 or so students at his college, from the day they matriculated. It’s impressive if you do learn so many; even if you can’t, your teaching assistants can learn students’ names in their sections. Or even without knowing any names, a lecturer who pays attention can spot a puzzled student and say, “Do you have a question?” It is possible to connect well even with a large class.
Why is knowing someone’s name or acknowledging them individually so important? Any person’s name is emotionally loaded to that person, and has the power to pull him or her into whatever is going on. By putting that person at the center of attention, naming takes only a moment from you – but for them, it is deeply affecting, and lasts.
But more than that, calling a student by name opens the door to a more personal connection, inviting the student to see the professor (and professors generally) as a human being, maybe a role model or even a kind of friend. In the 10-year longitudinal study that Chris Takacs and I did of a cohort of students moving through college (for our book How College Works), students who found congenial advisers, or even full-fledged mentors, were more likely to stay in school, to learn more, and to enjoy the entire experience.
Several years ago I saw Jon Stewart, the television show host, deliver a marvelous 74-minute stand-up comedy routine for an audience of 5,000 people, apparently with no notes whatsoever. Stewart worked the crowd, picking up on what we liked, playing off of a few local references, sensing groups in the audience who responded differently, asking questions, riding the laughs but knowing when to quiet our responses. He connected with us; he made us part of the show. It was exciting and memorable.
I’m no Jon Stewart, nor a match for that dean of students. But once about 20 years ago I had a social psychology class of 144 students. Armed with the freshman facebook (small “f,” remember that?) photos and some scribbled hints, I worked on their names for a couple of weeks. Then one day I came into class and started pointing at each student, slowly speaking his or her name. Some were easy, others took a moment; still others I skipped, to return to when I remembered or had eliminated possibilities. As I progressed around the room, students became increasingly focused on what I was doing, smiling and laughing at who was remembered, and who took a minute. Eventually I got to the last few, the people at the outer edge of my mnemonic ability. When I declared that last name – correctly -- the entire class hesitated, and then erupted in a long, sustained round of applause. Some cheers were thrown in.
And the course went well.
Daniel F. Chambliss is Eugene M. Tobin Distinguished Professor of Sociology at Hamilton College. He is the author, with Christopher G. Takacs, of How College Works (Harvard University Press).
Regular readers of the higher education press have had occasion to learn a great deal about digital developments and online initiatives in higher education. We have heard both about and from those for whom this world is still terra relatively incognita. And, increasingly, we are hearing both about and from those commonly considered to be “digital natives” -- the term “native” conveying the idea of their either having been born to the culture in question or being so adapted to it that they might as well have been.
When we think of digital natives, we tend to think of students. But lest we think that things are easy for them, let us bear in mind their problems. Notably, they share the general difficulty of reputation management, or what we might consider the adverse consequences of throwing privacy away with both hands when communicating on the internet. More to the point in the world of higher education, many suffer from the unequal distribution of the online skills most relevant to academic success -- yet another factor in the extreme socioeconomic inequality that afflicts our nation’s system of higher education.
But let us turn our attention to the faculty, and first to those relatively unschooled in new information technologies. At the extreme, there are those who view the whole business with fear and loathing. We must find ways to persuade them that such an attitude is unworthy of anyone who has chosen education as a vocation and that they would do well to investigate this new world with an explorer’s eye -- not uncritically, to be sure, given the hype surrounding it -- in order to reach informed positions about both the virtues and the limitations of new information technologies.
Others are more receptive, but also rather lost. They are fine with what Jose Bowen calls “teaching naked” (i.e., keeping technology out of the classroom itself), since they have been doing it all their working lives, but are unable to manage the other major part of the program (that is, selecting items to hang in a virtual closet for their students to try on and wear to good effect, so that they come to class well-prepared to make the most of the time together with one another and their instructor). What these faculty members need is the right kind of support: relevant, well-timed, and pedagogically effective -- something far less widely available than it should be.
Digitally adept faculty have challenges of their own, some of which are old problems in new forms. There is, for example, the question of how available to be to their students, which has taken on a new dimension in an age in which channels of communication proliferate and constant connectedness is expected.
And then there is the question of how much of themselves faculty members should reveal to students. How much of their non-academic activities or thoughts should they share by not blocking access online or perhaps even by adding students to some groups otherwise composed of friends?
Many of us have worked with students on civic or political projects -- though not, one hopes, simply imposing our own views upon them. Many of us have already extended our relationship into more personal areas when students have come to us with problems or crises of one sort or another and we have played the role of caring, older adviser. We have enjoyed relatively casual lunches, dinners, kaffeeklatsches with them that have included discussion of a variety of topics, from tastes in food to anecdotes about beloved pets. The question for digital natives goes beyond these kinds of interaction: To what extent should students be allowed in on the channels and kinds of communications that are regularly -- in some cases, relentlessly and obsessively -- shared with friends?
Not all of this, to be sure, is under a faculty member’s control. Possibilities for what sociologists call “role segregation” hinge on an ability to keep the audiences for different roles apart from one another -- hardly something to be counted on in these digital times. But leaving aside the question of how much online information can be kept from students, how much of it should be kept from them?
Will students be better-served, as some faculty members seem to believe, if they see ongoing evidence that their teachers are people with full lives aside from their faculty roles? Should students be recipients of the kinds of texts and tweets that faculty members may be in the habit of sending to friends about movies, shopping, etc.? Given how distracting and boring some of this may be even to friends, one might well wonder. Some students will perhaps get a thrill out of being in a professor’s “loop” on such matters, but do we need to further clutter their lives with trivia? This is an area in which they hardly need additional help.
To put this issue in a wider context: In her 1970 book Culture and Commitment, anthropologist Margaret Mead drew a distinction among three different types of culture: “postfigurative”, in which the young learn from those who have come before; “cofigurative”, in which both adults and children learn a significant amount from their peers; and “prefigurative”, in which adults are in the position of needing to learn much from their children. Not surprisingly, Mead saw us as heading in a clearly prefigurative direction -- and that was years before the era of parents and grandparents sitting helplessly in front of computer screens waiting for a little child to lead them.
Without adopting Mead’s specific views on these cultural types, we can find her categories an invitation to thinking about the teaching and learning relationship among the generations. For example, should we just happily leap into prefigurativeness?
Or, to put it in old colonialist terms, should we “go native”? Colonial types saw this as a danger, a giving up of the responsibilities of civilization -- not unlike the way the Internet-phobic see embracing the online world. The repentant colonizers who did decide to “go native”, motivated either by escapism or by a profound love and respect for those they lived and worked with, sometimes ended up with views as limited by their adopted culture (what is called “secondary ethnocentrism”) as they had been by their original one. And this is aside from the fact that attempts to go native are not always successful and may even seem ridiculous to the real folks.
Perhaps it is helpful to think of ourselves first as anthropologists. We certainly need to understand the world in which we ply our trade, not only so that we can do our work, but also because we are generally possessed of intellectual curiosity and have chosen our vocation because we like working in a community. We believe that we have much to learn from the people we study and, at the same time, know that we can see at least some things more clearly because we have the eyes of outsiders.
But we are also missionaries, since we feel we have something of value to share -- to share, to be sure, not simply to impose. What might that something be?
In the most basic sense, it is the ability to focus, to pay attention, and to take time to learn, looking back at least as often as looking forward. Most of our students live in a noisy world of ongoing virtual connectedness, relentless activity, nonstop polytasking (how tired are we of the word “multitasking”?). Like the rest of us, they suffer from the fact that too much information is the equivalent of too little. Like the rest of us, they live in a world in which innovation is not simply admired, but fetishized.
So, even as we avail ourselves of the educational benefits of new information technologies, we might think of complementing this with a Slow Teaching movement, not unlike the Slow Food movement founded by Carlo Petrini in 1986 with the goal of preserving all that was delicious and nutritious in traditional cuisine. We have such traditions to share with our students even as we become more knowledgeable about the world in which they move.
Our students and junior colleagues don’t need us to be them; they need us to be us. Or, as Oscar Wilde so engagingly put it: Be yourself; everyone else is already taken.
Judith Shapiro is president of the Teagle Foundation and former president of Barnard College.
“Would you like to see the brain collection?” my guide asked, as we finished our tour of the Yale School of Medicine. What scientist could resist?
I was expecting an impersonal chamber crammed with specimens and devices. Perhaps a brightly lit, crowded, antiseptic room, like the research bays we had just been exploring. Or an old-fashioned version, resembling an untidy apothecary’s shop packed with mysterious jars.
But when we entered the Cushing Center in the sub-basement of the Medical Library, it was a dim, hushed space that led through a narrow opening into an expansive area for exploration and quiet reflection. As my guide noted, it looked remarkably like a posh jewelry store, with lovely wooden counters, closed cabinets below and glass-enclosed displays above.
And such displays! Where I had envisioned an imposing, sterile wall of containers, with disembodied brains floating intact in preservative fluid, there was instead a long sinuous shelf of jars just above eye level, winding around the room. Each brain lay in thick slices at the bottom of its square glass container, the original owner’s name and dates on a handwritten label. Muted light glinting off the jars, and lending a slight glow to the sepia-toned fluid within, gave the impression of a vast collection of amber.
In frames leaning from countertop to wall or resting in a glass-topped enclosure set within the counter were collages of photos and drawings. Surprised, I stepped closer, glimpsed human faces, and found extraordinary science therein.
I had anticipated spectacle: materials displayed in a manner that entertains, yet distances the audience and makes what is viewed seem exotic and alien. Instead, I experienced science in its most human manifestation: specimens arranged to emphasize the reason they were of interest to their original owners, those who had studied them, and those now viewing them.
A typical collage showed photographs of an individual living human being alongside Cushing’s exquisite drawings of the person’s brain, as dissected during surgery or after death. The photographs were posed to show the whole person as a unique individual – and also, in many cases, revealed the presence of the brain tumor they were then living with, through the shape of the skull or as a lump beneath the skin. The drawings revealed the location and anatomical details of the tumor. The very brain that had animated the person and suffered the tumor reposed in its jar nearby.
One could not walk away unmoved.
On the personal level, I was reminded of various individuals I have known whose deaths were caused by brain tumors. The first, decades ago: an admired college mentor. The most recent pair, within the last year: the vivacious wife of one colleague, the young child of another. I remember them as people who enriched others’ lives with their grace and strength of character and I am grateful for the medical advances that gave them extra time to be part of their families and communities.
As a scientist, I was reminded viscerally that this is exactly what we mean when we say all science exists within a human context. Cushing’s work, memorialized so effectively in this small museum, began at a time when neurosurgery was crude and ineffectual, and hope for those with brain tumors was practically nonexistent. By his career’s end, he had introduced diagnostic and surgical techniques that lowered the surgical mortality rate for his patients to an unheard-of ten percent, a rate nearly four times better than others achieved.
The human patients on whom Cushing operated were everything to him, simultaneously providing motivation, subject, object, and methods for his research. In endeavoring to find cures for their conditions, he studied their lives and symptoms, operated on and sketched their tumors, and used what he learned from each case to improve his effectiveness. The purely scientific aspect of his work (advancing the surgical treatment of brain tumors) was inextricably linked with its humanistic aspects (understanding the histories and fates of the individual members of his clinical practice). Indeed, it was his methodical linking of the clinical and human sides of medicine that made his contributions of such lasting significance. Cushing himself stressed that “a physician is obligated to consider more than a diseased organ, more than even the whole man – he must view the man in his world.”
Seen in this light, the juxtaposition of images inside the museum’s frames carries dual meanings.
First, the combined images document the course of medical history, forming what the biographer Aaron Cohen-Gadol calls “the diary of neurological surgery in its infancy.” The very format of these still photographs, hand-drawn sketches, and carefully stained glass slides reminds us that Cushing worked in an era before radiological methods for brain imaging and, initially, an era when even still photography was rather cumbersome. Indeed, his own artistic talent and training were crucial for accurately recording the outcomes of his surgeries. The contents of the images capture the conditions of patients when they came to see Cushing, the treatment, and the aftermath. Collectively, they show how neurology and neurosurgery were practiced in Cushing’s day and how these fields evolved year by year throughout his career.
Second, the combined images directly influenced the course of medical history. Cushing deliberately correlated, through the information in the photographs, anatomical sketches, and medical records, the external indicators of otherwise hidden medical problems within the skull. This led to improvements not only in how neurosurgeons operated but also in how readily other doctors could recognize early external indications of brain tumors and send patients for prompt treatment. As Cushing’s biographers note, “Each patient is of historical significance now because our discipline of neurological surgery evolved through his or her care.” Moreover, because he trained a generation of neurosurgeons in these methods, Cushing helped ensure the continuing development of the field; a number of these junior colleagues, in turn, were instrumental in the creation of the museum that now makes the images publicly visible.
The juxtaposition of Cushing’s images therefore represents the very essence of how the humanities and sciences are intertwined: achieving his medical breakthroughs depended directly on his active depiction and analysis of human experience.
As an educator, I find that the displays in the Cushing Center encapsulate why young scientists need to study their fields in historical and social context. Isolated technical proficiency would not have enabled Cushing to become the originator of modern neurosurgery; his intense focus on the human condition was essential. Indeed, Cushing mused in a letter to a fellow physician that he “would like to see the day when somebody would be appointed surgeon somewhere who had no hands, for the operative part is the least part of the work.” Similarly, to fully prepare for careers in science, it is essential that students grasp how the impetus for scientific work arises from the world in which the scientist lives, often responds to problems the scientist has personally encountered, and ultimately impacts that society and those problems in its turn.
A very few scientists may be largely self-taught and spend their entire careers working on abstract problems in isolated research institutes without ever teaching a course, writing a grant or giving a public lecture. Even they, however, are influenced in their selection of research problems by the results that other individuals have previously obtained. And even they must communicate their results to other people in order to impact their field. Most of us interact far more directly with other people in our scientific endeavors: they inspire our choices of major or thesis topic, pay taxes that support grants for our facilities and students, run companies that underwrite our applied investigations, propose legislation that regulates how we share data and maintain lab safety.
Some might argue that these considerations apply mainly to the life sciences, where the human connections are most tangible. They might think, for instance, that my own work as a theoretical physicist is too abstract to be influenced by societal context. After all, the field-theoretic equations I manipulate have no more race or gender or politics than the subatomic particles they describe. Yet my choice of research questions has unquestionably been affected by the contingent historical details of my own professional life: the compelling lectures that enticed me to switch fields during graduate school, the inspiring discussions with my doctoral adviser that established symmetry as a guiding principle, the discovery of certain subatomic particles at the start of my career and the decades-delayed confirmations of others. My sense of how science operates on both philosophical and practical levels has also unmistakably been influenced by my long-ago experiences as a graduate teaching assistant for History of Science courses and my ongoing conversations with scholars in Science Studies.
This is why programs that deliberately train scientists in the humanities are so essential to educating scientists effectively. Every nascent scientist should read, think, and write about how science and society have impacted one another across cultural and temporal contexts. Not all undergraduates will immediately appreciate the value of this approach. The first-year students in my own college have been known to express confusion about why they must take that first course in the history, philosophy, and sociology of science. But decades later, our alumni cite the “HPS” curriculum as having had a profound impact on their careers in science or medicine. They remember the faculty members who taught those courses vividly and by name. They tell me the ethical concepts absorbed in those courses have helped them hew more closely to the scientific ideal of seeking the truth.
In the wake of C.P. Snow’s famous Rede Lecture on the Two Cultures of the sciences and humanities, academic programs were founded in the late 1960s and early 1970s (e.g., Michigan State University’s Lyman Briggs College and Stanford University’s Program in Science, Technology, and Society) with the express aim of immersing students in the deep connections between science and society. Decades later, those programs are thriving – and the impact of the ideas they espouse may be seen in changes that pre-professional programs in medicine and engineering have been embracing.
For example, the newest version of the Medical College Admission Test (MCAT2015) incorporates questions on the psychological, social, and biological determinants of behavior to ensure that admitted medical students are prepared to study the sociocultural and behavioral aspects of health. Similarly, in 2000, ABET modified its criteria to emphasize communication, teamwork, professional and ethical issues, and the societal and global context of engineering decisions. An evaluation in 2002 found a measurable positive impact on what students learned and their preparation to enter the workforce.
While pre-medical and engineering students are being required to learn about issues linking science and culture, most students in science fields are still not pushed to learn about the human context of their major disciplines. We faculty in the natural sciences have the power to change this. Many of us already incorporate “real world” applications of key topics in our class sessions or assignments; introductory textbooks often do likewise. But we can extend this principle beyond the classroom into the world of intellectual discourse and practice. As colloquium chairs and science club mentors, we can arrange regular departmental talks on topics that stress the interdependence of science and society: STEM education, alternative energy, medical technology, gender and science.
As academic advisers we can nudge science students towards humanities courses that analyze scientific practice or towards summer internships with companies and NGOs as well as traditional REU programs. As directors of undergraduate or graduate studies, we can highlight science studies topics, interdisciplinary organizations, and non-academic career paths on the department website. Making these connections part of the life of the department can better prepare our students for their futures as capable scientists responsible to and living within society.
In the end, Cushing’s brain collection vividly reminds us why it is crucial to immerse natural science students in interdisciplinary science studies that incorporate the social sciences and humanities. It is not merely because hot new fields are said to lie at the unexplored intersections of fields whose borders were arbitrarily codified decades or centuries ago (though that is true). It is not merely because the terms interdisciplinary, cross-disciplinary, and trans-disciplinary are presently in vogue (though that is also true). It is because such cross-training produces scientists who are both more capable of extraordinary breakthroughs and more mindful of their broader impacts. The humanities truly strengthen science.
Elizabeth H. Simmons is dean of Lyman Briggs College, acting dean of the College of Arts and Letters, and University Distinguished Professor of Physics at Michigan State University.
A rash of articles proclaiming the death of the humanities has been dominating the higher education press for the last couple of years. Whether it’s The New York Times, The New Republic or The Atlantic, the core narrative seems to be that liberal arts education will be disrupted by technology, that it’s just a question of time, and that resistance is futile. But I am convinced that not only is the “death of the humanities at the hands of technology” being wildly exaggerated, it’s directionally wrong.
This month on Inside Higher Ed, William Major wrote an essay, “Close the Business Schools/Save the Humanities”. I loved it for its provocative frame, and because I’m a strong proponent of the humanities. But it positioned business and humanities as an either/or proposition, and it doesn’t have to be so.
If John Adams were alive today, he might revise his famous quote:
I [will start with the] study politics and war... then mathematics and philosophy… [then] natural history and naval architecture, navigation, commerce and agriculture [in order to give myself a right] to study painting, poetry, music.
What would take generations in Adams’s day can be done in a single lifetime today because of technology.
Full disclosure: I was Clay Christensen’s research assistant at Harvard Business School, and am now CEO of a Silicon Valley-based technology company that sells a Learning Relationship Management product to schools and companies.
Perhaps the above might be considered three strikes against me in a debate on the humanities -- perhaps I’m already out in the minds of many readers, but I hope not. Please hear me out.
I think that technology will actually enhance liberal arts education, and eventually lead to a renaissance in the humanities, from literature to philosophy, music, history, and rhetoric. Not only will technology improve the learning experience, it will dramatically increase the number of students engaging in liberal education by broadening consumption of the humanities from school-age students alone to a global market of 7 billion people.
It might be overstating the case to say that this will happen, but it can happen if those of us who care about the humanities act to make it so. To do so, we need to accept one hard fact and make two important strategic moves.
The hard fact is that despite its importance, economic value is the wrong way to think about the liberal arts -- and the sooner we accept that reality, the sooner we can stop arguing for the humanities from a position of weakness and instead move on with a good strategy to save them.
Of course, it should be noted that there is certainly considerable economic value in attending elite and selective colleges, from Colgate to Whittier to Morehouse. The currency of that economic value is the network of alumni, the talent signal that admission to and graduation from such institutions confer, and the friendships formed over years of close association with bright and motivated people. But the economic value accrues regardless of what the people study, whether it is humanities or engineering or business.
Moreover, the effort to tie the humanities to economic outcomes cheapens the non-economic value of the humanities. Embracing their perceived lack of economic value allows us to be affirmative about the two things that technology can do to save them: (1) supplementing liberal arts with career-focused education and (2) defining the non-economic value of liberal arts so that we can extend its delivery to those who make more vocational choices for college.
Supplementing the liberal arts with career-focused education such as a fifth-year practical master’s degree, micro-credentials, minors and applied experience is critical to their survival. It doesn’t matter whether the supplements are home-grown or built in partnership with companies like Koru or approaches like Udacity’s Nanodegrees. What matters is that your students see a way both to study what they love and to build a competitive advantage to pursue a meaningful career.
The right technology can be a major part of conferring that advantage by helping students to figure out their long-term career ambitions, connect with mentors in industry, consume career-oriented content, earn credentials, and do economically valuable work to prove their abilities.
But the true promise of technology to save the liberal arts is precisely its ability to lower the cost of delivery -- and in so doing to allow everyone on earth to partake in a liberal education throughout their lifetime. Students shouldn’t have to choose between philosophy and engineering, music and business, rhetoric and marketing. And by lowering the costs, you enable increased consumption -- that is the very nature of disruptive innovations.
Given that my education in economics and business leaves me woefully inadequate to the task of defining the non-economic value of liberal arts, I’ll leave that task to John F. Kennedy instead, who said:
“[Economic value] does not allow for the health of our children...or the joy of their play. It does not include the beauty of our poetry or the strength of our marriages; the intelligence of our public debate or the integrity of our public officials. It measures neither our wit nor our courage; neither our wisdom nor our learning; neither our compassion nor our devotion to our country; it measures everything, in short, except that which makes life worthwhile.”
It is for those things that do make life worthwhile that the liberal arts must be saved.
Gunnar Counselman is the founder and CEO of Fidelis.