Submitted by John Thelin on September 13, 2012 - 3:00am
Higher education of the 1960s usually brings to mind student rebellion and campus unrest. Berkeley and Mario Savio are often invoked to symbolize the era of colleges and the counterculture. But this is distorted because it is incomplete. Why not have a collective student memory that includes Mitt as well as Mario?
This seems counterintuitive to the counterculture -- but only because we have overlooked the innovations taking place on American campuses in those tumultuous years. I want to make the case for adding seats on the historical stage of higher education -- especially with the November presidential election approaching.
To truly understand the long-term legacy of the 1960s, we need to include Harvard’s joint M.B.A./J.D. program as the institution -- and its famous alumnus, Mitt Romney, as the individual -- that also belong in the higher education lyrics when boomers are “Talkin’ ‘Bout My Generation.”
Scott McKenzie attracted a lot of listeners in 1967 when he sang, “If you come to San Francisco, be sure to wear a flower in your hair....” Mitt Romney, however, was not listening and went in a different direction – politically and geographically. In 1969 he left San Francisco (well, Stanford and Palo Alto) and -- after a detour to France -- headed east to graduate school at Harvard for its brand-new joint M.B.A./ J.D. program, which was founded that year. The rest is history -- and no less than the higher education of perhaps our next president.
Put aside such artifacts as Steven Kelman’s memoir about student protest at Harvard in 1969-70, Push Comes to Shove. Forget James Simon Kunen’s The Strawberry Statement and its provocative subtitle, “Notes of a College Revolutionary.” Above all, suspend from memory the images of Harvard first-year law students as depicted in Hollywood’s The Paper Chase. It’s time to reconstruct the early years of Harvard’s joint M.B.A./J.D. program and its students -- a powerful, albeit low-profile, counter to the counterculture.
Harvard’s joint program brings to mind the academic equivalent of epoxy cement -- two ingredients (law school and business school), each rigorous in its own right, that when mixed create an incredibly hard bond -- probably impervious to broad humane or societal considerations. Two articles over the past six months in the New York Times provide some insights into both the joint program and Romney as a graduate student: Jodi Kantor’s “At Harvard, A Master’s in Problem Solving,” and Peter Lattman and Richard Perez-Pena's “Romney, at Harvard, Merged Two Worlds.”
As Lattmann and Perez-Pena wrote: “One of the most exclusive clubs in academe is a Harvard University dual-degree program allowing graduate students to attend its law and business schools simultaneously, cramming five years of education into four. On average, about 12 people per year have completed the program — the overachievers of the overachievers — including a striking number of big names in finance, industry, law and government. ...In addition to Mr. Romney, founder of Bain Capital, the roughly 500 graduates include Bruce Wasserstein, who led the investment bank Lazard until he died in 2009; leaders of multibillion-dollar hedge fund and private equity firms like Canyon Capital Advisors, Silver Lake Partners and Crestview Partners; high-ranking executives at banks like Citigroup and Credit Suisse; C. James Koch, founder of the Boston Beer Company; and Theodore V. Wells Jr., one of the nation’s top trial lawyers.”
No doubt these graduate students were smart and worked hard. Beyond that, it’s important to note some characteristics that accompanied this program and its work ethic. First, the formal curriculum pulled inward rather than outward.
Second, Romney as a student in the joint program tended to screen out external events as distractions. According to Kantor, “And unlike Barack Obama, who attended Harvard Law School more than a decade later, Mr. Romney was not someone who fundamentally questioned how the world worked or talked much about social or policy topics. Though the campus pulsed with emotionally charged political issues, none more urgent than the Vietnam War, Mr. Romney somehow managed to avoid them.” Kantor reinforces this depiction by quoting one of Romney’s law school study partners, who recalled, “Mitt’s attitude was to work very hard in mastering the materials and not to be diverted by political or social issues that were not relevant to what we were doing.”
The program pushed toward intensive insularity using the case study pedagogy that relied on no books or contextual sources – all at a time when genuine interdisciplinary, broad perspectives were finding some breathing space in prestigious professional schools elsewhere. It’s too bad for the education of future business (and political) leaders that the joint program that started in 1969 did not consider the very different perspective offered by Earl Cheit, professor (and later, dean) of the business school at the University of California at Berkeley. In 1964, with support from the Ford Foundation, Cheit invited five scholars outside the field of business to join him in conducting a workshop that for the first time brought together business school professors with others to explore and preserve “the connection between the intellectual adventure and the business adventure.”
What a contrast to the Harvard Business School’s case study approach! Cheit’s Ford Foundation program at Berkeley featured, first as talks and later as readings, a cornucopia of ideas and issues, led off by the economist Robert L. Heilbroner’s “View From the Top: Reflections on a changing business ideology.” John William Ward, historian and president of Amherst College, spoke about “The Ideal of Individualism and the Reality of Organization.”
Henry Nash Smith of Berkeley’s English Department discussed businessmen in American fiction in the “Search for a Capitalist Hero.” Historian Richard Hofstadter asked, “What Happened to the Antitrust Movement?” The economists Paul Samuelson and Cheit himself analyzed the changing role of business -- how managers cultivate social responsibility, and how American society balances personal freedoms and economic freedoms in a mixed economy. Guest speakers from France and Belgium provided American businessmen with perspectives on business in Europe.
Cheit’s knowledgeable involvement in exploring the past and future of higher education did not stop with this Ford Foundation business program. In 1974-75 he sought (and received) permission to teach a graduate course in the School of Education -- one in which he explored how it was that professional schools of business, agriculture, forestry, and engineering came to have a place in the American university. The content and topic were so novel that the course led to the publication of a book by the Carnegie Commission, The Useful Arts and the Liberal Tradition.
Once again, it showed that an intellectual and administrative leader in the business school could look outward within the multiversity and reach outward to the larger society and the economy by complicating the questions rather than doggedly seeking to solve business problems. Cheit was also among the leading economists to sound the alarm about the deteriorating financial condition of the nation’s colleges and universities in his 1971 book on higher education’s “new depression.”
In contrast, what were the aims and goals of the Harvard joint program? One observation provided by NYT reporters is revealing: “But former students and professors say it makes sense that a group of overachievers would be drawn to financial markets, a hypercompetitive field with the promise of immense riches.”
Really? Why were these overachievers necessarily confined to these goals? What if the teaching and discussion had included some consideration of ethics, public good, and social responsibility -- along with the pursuit of individual prosperity? It’s important to remember that there were good alternatives. For example, Cheit’s Berkeley approach with the Ford Foundation project was to create curiosity, exploration, and reasonable doubt about our national obsession with business.
The Harvard joint program, especially its business school component, emphasized the sharpening of decision-making tools, especially in finance. Each, of course, has its place. But if a concern of a university is to ask, “Knowledge for what?,” it is Cheit’s Berkeley model more than Harvard’s joint program that is sorely needed for the thoughtful leadership, whether in business or politics, required for the early 21st century. I’ll be thinking about that on my way to the polls on Election Day in November.
John R. Thelin is a professor at the University of Kentucky. He was a graduate student at the University of California at Berkeley from 1969 to 1974. He is author of A History of American Higher Education (2011).
Of the many strange things in Gulliver’s Travels that make it hard to believe anyone ever considered it a children’s book, the most disturbing must be the Struldbruggs, living in the far eastern kingdom of Luggnagg, not covered by Google Maps at the present time.
Gulliver’s hosts among the Luggnaggian aristocracy tell him that a baby is born among them, every so often, with a red dot on the forehead -- the sign that he or she is a Struldbrugg, meaning an immortal. Our narrator is suitably amazed. The Struldbruggs, he thinks, have won the cosmic lottery. Being “born exempt from that universal Calamity of human Nature,” they “have their Minds free and disengaged, without the Weight and Depression of Spirits caused by the continual Apprehension of Death.”
The traveler has no trouble imagining the life he might lead as an immortal, given the chance. First of all, Gulliver tells his audience at dinner, he would spend a couple of hundred years accumulating the largest fortune in the land. He’d also be sure to master all of the arts and sciences, presumably in his spare time. And then, with all of that out of the way, Gulliver could lead the life of a philanthropic sage, dispensing riches and wisdom to generation after generation. (A psychoanalytic writer somewhere uses the expression “fantasies of the empowered self,” which just about covers it.)
But then the Luggnaggians bring him back to reality by explaining that eternal life is not the same thing as eternal youth. The Struldbruggs “commonly acted like Mortals, till about thirty Years old,” one of Gulliver’s hosts explains, “after which by degrees they grew melancholy and dejected, increasing in both till they came to four-score.” The expression “midlife crisis” is not quite the one we want here, but close enough.
From the age of eighty on, “they had not only all the Follies and Infirmities of other old Men, but many more which arose from the dreadful Prospect of never dying.” Forget the mellow ripening of wisdom: Struldbruggs “were not only Opinionative, Peevish, Covetous, Morose, Vain, Talkative, but incapable of Friendship, and dead to all natural Affection.”
It gets worse. Their hair and teeth fall out. “The Diseases they were subject to still continuing without increasing or diminishing,” Gulliver tells us. “In talking they forgot the common Appellation of Things, and the Names of Persons, even of those who are their nearest Friends and Relations. For the same Reason they never can amuse themselves with reading, because their Memory will not serve to carry them from the beginning of a Sentence to the end; and by this Defect they are deprived of the only entertainment whereof they might otherwise be capable.”
It is a vision of hell. Either that, or a prophecy of things to come, assuming the trends of the last few decades continue. Between 1900 and 2000, the average life expectancy in the United States rose from 49 to 77 years; between 1997 and 2007, it grew by 1.4 years. This is not immortality, but it beats dying before you reach 50. The span of active life has extended as well. The boundary markers of what counts as old age keep moving out.
From a naïve, Gulliverian perspective, it is all to the good. But there’s no way to quantify changes in the quality of life. We live longer, but it's taking longer to die as well. Two-thirds of deaths among people over the age of 65 in the United States are caused by three chronic conditions: heart disease, cancer, and stroke. The “life” of someone in a persistent vegetative state (in which damage to the cerebral cortex is so severe and irreversible that cognitive functions are gone for good) can be prolonged indefinitely, if not forever.
More horrific to imagine is the twilight state of being almost vegetative, but not quite, with some spark of consciousness flickering in and out -- a condition of Struldbruggian helplessness and decay. “I grew heartily ashamed of the pleasing Visions I had formed,” says Gulliver, “and thought no Tyrant could invent a Death into which I would not run with Pleasure from such a Life.”
Howard Ball’s book At Liberty to Die: The Battle for Death With Dignity in America (New York University Press) is a work of advocacy, as the subtitle indicates. The reader will find not the slightest trace of Swiftian irony in it. Ball, a professor emeritus of political science at the University of Vermont, is very straightforward about expressing bitterness -- directing it at forces that would deny “strong-willed, competent, and dying adults who want to die with dignity when faced with a terminal illness” their right to do so.
The forces in question fall under three broad headings. One is the religious right, which Ball sees as being led, on this issue at least, by the Roman Catholic Church. Another is the Republican Party leadership, particularly in Congress, which he treats as consciously “politicizing the right-to-die issue” in a cynical manner, as exemplified by the memo of a G.O.P. operative on “the political advantage to Republicans [of] intervening in the case of Terri Schiavo.” (For anyone lucky enough to have forgotten: In 1998, after Terri Schiavo had been in a persistent vegetative state for eight years, her husband sought to have her feeding tube removed, setting off numerous rounds of litigation, as well as several pieces of legislation that included bills in the US Congress. The feeding tube was taken out and then reinserted twice before being finally removed in 2005, after which Schiavo died. The website of the University of Miami's ethics program has a detailed timeline of the Schiavo case.)
The third force Ball identifies is that proverbial 800-pound gorilla known as the Supreme Court of the United States. Its rulings in Washington v. Glucksberg and Vacco v. Quill in 1997 denied the existence of anything like a constitutionally protected right to physician-assisted death (PAD). States can outlaw PAD -- or permit it, as Montana, Oregon, and Washington do at present. In the epigraph to his final chapter, Ball quotes a Colorado activist named Barbara Coombs Lee: “We think the citizens of all fifty states deserve death with dignity.” But the Supreme Court of the United States will not be making that a priority any time soon.
“The central thesis of the book,” states Ball, “is that the liberty found in the U.S. Constitution’s Fifth and Fourteenth ‘Due Process’ Amendments extends... [to] the terminally ill person's right to choose to die with dignity -- with the passive assistance of a physician -- rather than live in great pain or live a quality-less life.” The typical mode of “passive assistance” would be “to give painkilling medications to a terminally ill patient, with the possibility that the treatment will indirectly hasten the patient’s death.”
Ball notes that a Pew Research Center survey from 2005 showed that an impressive 84 percent of respondents “approved of patients being able to decide whether or not to be kept alive through medical treatment or choosing to die with dignity.”
Now, for whatever it’s worth, that solid majority of 84 percent includes this columnist. If the time for it ever comes, I’d want my doctor to supersize me on the morphine drip without breaking any laws. Throughout Ball's narrative of the successes and setbacks of the death-with-dignity cause, I cheered at each step forward, and felt appalled all over again while reading the chapter he calls “Terri Schiavo’s Tragic Odyssey,” although it did seem like the more suitable adjective would be “grotesque.” Tragedy implies at least some level of dignity.
The author also introduced me to a blackly humorous expression, “death tourist,” which refers to a person "visiting" a state to take advantage of physician-assisted suicide being legal there.
As a member of the choir, I liked Ball's preaching, but it felt like the sermon was missing an index card or two. As mentioned earlier, the book’s “central thesis” is supposed to be that the due-process guarantees in the Constitution extend to the right to death with dignity. And so the reader has every reason to expect a sustained and careful argument for why that legal standard applies. None is forthcoming. The due-process clauses did come up when the Supreme Court heard oral arguments in 1997, but it rejected them as inapplicable. This would seem to be the point in the story where that central thesis would come out swinging. The author would show, clearly and sharply, why the Court was wrong to do so. He doesn't. It is puzzling.
Again, it sounds very categorical when Ball cites that Pew survey from 2005 showing 84 percent agreement that individuals had a right to choose an exit strategy if medical care were not giving them a life they felt worth living. But the same survey results show that when asked whether they believed it should be legal for doctors to "assist terminally ill patients in committing suicide," only 44 percent favored it, while 48 percent were opposed. With the matter phrased differently -- surveyors asking if it should be legal for doctors to "give terminally ill patients the means to end their lives" -- support went up to 51 percent, while 40 percent remained opposed. This reveals considerably more ambivalence than the 84 percent figure would suggest.
The notion that a slippery slope will lead from death with dignity to mass programs of euthanasia clearly exasperates Ball, and he can hardly be faulted on that score. A portion of the adult population is prepared to believe that any given social change will cause the second coming of the Third Reich, this time on American soil. (Those who do not forget the History Channel are condemned to repeat fairly dumb analogies.) But the slippery-slope argument will more likely be refuted in practice than through argument. Whether or not the law recognizes it, the right to make decisions about one’s own mortality or quality of life will exist any time someone claims it. One of the medical profession’s worst-kept secrets for some time now is that plenty of physicians will oblige a suffering patient with the means to end their struggle. (As Ball notes, this came up in the Supreme Court discussions 15 years ago.)
And the demand is bound to grow, as more and more of us live long enough to see -- like Gulliver -- that there are worse fates than death. Brilliant legal minds should apply themselves to figuring out how to make an ironclad case for the right to a decent departure from this mortal coil. At Liberty to Die is useful as a survey of some obstacles standing in the way. But in the meantime, people will find ways around those obstacles, even if it means taking a one-way, cross-continental trip to the Pacific Northwest. There are worse ways to go.