This fall, the United States Supreme Court will consider the case of Fisher v. University of Texas, which asks whether that university’s use of affirmative action in admissions passes constitutional muster. I served on the legal team that defended the University of Michigan Law School admissions policy in Grutter v. Bollinger, where the Supreme Court held that fostering a racially diverse student body is a compelling state interest that colleges and universities can pursue in a narrowly tailored way. I believed that the Court correctly decided Grutter when I was helping to litigate the case, but I believe it even more firmly in my newer role as a law school faculty member.
For a number of technical reasons, it seems unlikely that the Supreme Court will treat Fisher as an occasion to revisit Grutter. If the Court does revisit it, however, it should let Grutter stand, for a simple reason: the Court got it exactly right.
Grutter recognized that having a diverse student body serves a variety of important educational objectives. One of those objectives can be described syllogistically: personal characteristics help determine our experiences; our experiences inform our thoughts and perspectives; therefore, having students with a wide array of personal qualities helps enrich the educational environment by infusing it with a rich variety of ideas and points of view. Of course, a diverse student body serves other educational goals as well, for example, by challenging stereotypes and building cross-cultural understanding. But the heart of the Grutter decision rests on an understanding of the unique value of learning in an environment where we encounter people whose characteristics, experiences, and perspectives differ from our own.
For many years, I have seen the benefits of having a diverse student body in one of the courses I teach — Legal Ethics. This seems unremarkable, given that the discussion of ethical matters tends to draw out the philosophical, religious, cultural, and experiential differences among the participants. It is therefore unsurprising that I have witnessed many lively exchanges in that class between students who brought dramatically contrasting orientations to the questions at hand.
Perhaps more surprising are the benefits of having a diverse student body in another course I teach — Evidence. In this class, students learn the principles that determine whether a document, object, or witness’s testimony will be admitted at trial for consideration by the jury. Evidence is what law students sometimes call a "rulesy" course and it does not put variations in individual identity, experience, perspective, or conscience on display in the same obvious way as a course on ethics. And, yet, in my evidence class those variations push their way to the surface repeatedly and unexpectedly.
Sometimes this results from an aspect of a student’s background that does not seem particularly significant or self-defining. Consider, for example, an incident that occurred when my evidence class was studying a doctrine that allows non-expert layperson witnesses to testify to their opinions on certain matters. The rules limit such testimony to the sorts of educated guesses we make in everyday life: How far is it from here to there? How fast was the car going? Is that person drunk?
To explore with my students our capacity to make such inferences based on our experiences, I conduct a simple experiment. I produce two unlabeled cups containing soft drinks and ask for a volunteer to take a sip from each and tell us which contains Coke and which Pepsi. Every year, almost all of the students in the class say they can do this; every year, the student who volunteers to try succeeds.
One year, however, my student volunteer did something unexpected. She came to the front of the class, glanced at the cups, and said confidently: "I can tell by the smell." She picked up one cup, sniffed it, and correctly declared that it contained the Pepsi. Her fellow students burst into applause. She explained that she had worked in a restaurant that served both products and that she had acquired this skill so she could help out on those occasions when the waitperson who had poured the beverages lost track of which was which.
This immediately led to an interesting debate: Was this student a layperson offering an educated guess based on her personal experience or an expert offering an informed opinion based on her specialized knowledge? For a variety of reasons, this distinction matters under the rules of evidence. This student had shown — much more clearly and memorably than I could have done by lecturing about it — that under some circumstances the distinction is very fine indeed, and perhaps even vanishes.
In many other instances, a more self-defining characteristic that a student possesses has ended up shaping their contribution to the classroom discussion in a poignant and powerful way. I recall, for instance, one day when we were working through a problem that involves the hearsay doctrine. In very general terms, that doctrine prohibits witnesses from repeating things in court that were said outside of court. Students often find the doctrine maddeningly complicated.
Part of the doctrine’s complexity arises from the fact that it is subject to dozens of exceptions. These include exceptions for statements that were made under stress or excitement and for statements that describe an event and were made while, or immediately after, the event was occurring. These exceptions rest in part on the assumption that statements made under such circumstances are typically less calculated and therefore more reliable.
We were discussing a scenario — based on an actual case — that presented the question of whether the tape of a phone call to a 911 operator should be admissible. In the tape, a woman who lived in an apartment building reported that several large dogs, owned by one of her neighbors, were attacking another neighbor in the hallway. The caller described the dogs, the people who owned them and were trying unsuccessfully to restrain them, and the location and severity of the attack. During the entire call, the woman remained in her apartment with the door closed.
I had taught this scenario for many years and the discussion consistently played out along the same lines. The students would recognize that the tape presented a hearsay problem. They would identify the exceptions discussed above as potentially applicable. And then they would spot a difficulty in applying those exceptions: because the woman listened to the commotion through her door and never left her apartment, she arguably did not have personal knowledge about the matters she was describing. This is how the discussion always had gone; this is how it always had ended.
On this occasion, however, a student raised his hand just as we were about to move on. “I’m sorry,” he said, “but I disagree with the conclusion. You’ve all wrongly assumed that you need to see something to have personal knowledge about it. This woman knew what her neighbor’s dogs sounded like. She could hear that they were attacking someone. She could recognize her neighbors’ voices. She could tell where the sounds were coming from. Granted, she didn’t see anything. But she certainly had personal knowledge of what was happening.”
The class sat in stunned silence. Of course, this student was right. He also happened — not incidentally — to be blind.
When the Supreme Court decided Grutter in 2003, race mattered. It shaped experience in myriad and unique ways. It informed perspectives, ideas, and opinions. It still does.
As a practicing lawyer, I argued, based upon legal principles and social science, that institutions of higher education have a compelling interest in admitting a diverse student body. As a faculty member, I now make the same argument based upon my experience. Indeed, I have come to believe that Grutter is wise and right in ways that I did not even understand when I was busy working on it.
I have seen the evidence.
Len Niehoff is professor from practice at the University of Michigan Law School and is chair of the higher education practice at Honigman, Miller, Schwartz & Cohn. The ideas expressed here are his own.
For a week now, friends have been sending me links from a heated exchange over the status and value of black studies. It started among bloggers, then spilled over into Twitter, which always makes things better. I'm not going to rehash the debate, which, after all, is always the same. As with any other field, black studies (or African-American studies, or, in the most cosmopolitan variant, Africana studies) could only benefit from serious, tough-minded, and ruthlessly intelligent critique. I would be glad to live to see that happen.
But maybe the rancor will create some new readers for a book published five years ago, From Black Power to Black Studies: How a Radical Social Movement Became an Academic Discipline (Johns Hopkins University Press) by Fabio Rojas, an associate professor of sociology at Indiana University. Someone glancing at the cover in a bookstore might take the subtitle to mean it's another one of those denunciations of academia as a vast liberal-fascist indoctrination camp for recruits to the New World Order Gestapo. I don't know whether that was the sales department's idea; if so, it was worth a shot. Anyway, there the resemblance ends. Rojas wrote an intelligent, informed treatment of black studies, looking at it through the lens of sociological analysis of organizational development, and with luck the anti-black-studies diatribalists will read it by mistake and accidentally learn something about the field they are so keen to destroy. (Spell-check insists that “diatribalists” is not a word, but it ought to be.)
Black studies was undeniably a product of radical activism in the late 1960s and early ‘70s. Administrators established courses only as a concession to student protesters who had a strongly politicized notion of the field’s purpose. “From 1969 to 1974,” Rojas writes, “approximately 120 degree programs were created,” along with “dozens of other black studies units, such as research centers and nondegree programs,” plus professional organizations and journals devoted to the field.
But to regard black studies as a matter of academe becoming politicized (as though the earlier state of comprehensive neglect wasn’t politicized) misses the other side of the process: “The growth of black studies,” Rojas suggests, “can be fruitfully viewed as a bureaucratic response to a social movement.” By the late 1970s, the African-American sociologist St. Clair Drake (co-author of Black Metropolis, a classic study of Chicago to which Richard Wright contributed an introduction) was writing that black studies had become institutionalized “in the sense that it had moved from the conflict phase into adjustment to the existing educational system, with some of its values accepted by that system…. A trade-off was involved. Black studies became depoliticized and deradicalized.”
That, too, is something of an overstatement -- but it is far closer to the truth than denunciations of black-studies programs, which treat them as politically volatile, yet also as well-entrenched bastions of power and privilege. As of 2007, only about 9 percent of four-year colleges and universities had a black studies unit, few of them with a graduate program. Rojas estimates that “the average black studies program employs only seven professors, many of whom are courtesy or joint appointments with limited involvement in the program” -- while in some cases a program is run by “a single professor who organizes cross-listed courses taught by professors with appointments in other departments.”
The field “has extremely porous boundaries,” with scholars who have been trained in fields “from history to religious studies to food science.” Rojas found from a survey that 88 percent of black studies instructors had doctoral degrees. Those who didn’t “are often writers, artists, and musicians who have secured a position teaching their art within a department of black studies.”
As for faculty working primarily or exclusively in black studies, Rojas writes that “the entire population of tenured and tenure-track black studies professors -- 855 individuals -- is smaller than the full-time faculty of my own institution.” In short, black studies is both a small part of higher education in the United States and a field connected by countless threads to other forms of scholarship. The impetus for its creation came from African-American social and political movements. But its continued existence and development has meant adaptation to, and hybridization with, modes of inquiry from long-established disciplines.
Such interdisciplinary research and teaching is necessary and justified because (what I am about to say will be very bold and very controversial, and you may wish to sit down before reading further) it is impossible to understand American life, or modernity itself, without a deep engagement with African-American history, music, literature, institutions, folklore, political movements, etc.
In a nice bit of paradox, that is why C.L.R. James was so dubious about black studies when it began in the 1960s. As author of The Black Jacobins and The History of Negro Revolt, among other classic works, he was one of the figures students wanted to be made visiting professor when they demanded black studies courses. But when he accepted, it was only with ambivalence. "I do not believe that there is any such thing as black studies," he told an audience in 1969. "...I only know, the struggle of people against tyranny and oppression in a certain social setting, and, particularly, the last two hundred years. It's impossible for me to separate black studies and white studies in any theoretical point of view."
Clearly James's perspective has nothing in common with the usual denunciations of the field. The notion that black studies is just some kind of reverse-racist victimology, rigged up to provide employment for "kill whitey" demagogues, is the product of malice. But it also expresses a certain banality of mind -- not an inability to learn, but a refusal to do so. For some people, pride in knowing nothing about a subject will always suffice as proof that it must be worthless.