It is my view that most of us engaged in education at our nation’s leading research universities focus our attention upon the wrong issues. These universities are wondrously complex institutions that defy easy analysis or understanding. We therefore tend to concentrate upon their most visible components, such as scientific research, star professors, state-of-the-art facilities and technology, economic development, international impact, and football and basketball teams.
It has become a cliché that American universities are the best in the world. This claim, while valid in important dimensions, can lead to complacency and neglect of serious problems.
Much of our international reputation is based upon two outstanding features of American universities: unrelenting commitment to an atmosphere of free and open inquiry, and excellence in scientific research. These twin advantages attract the best talent from around the world to American universities, not only to our graduate programs but increasingly to our undergraduate colleges as well.
In other aspects of our enterprise, however, we find ourselves hard-pressed. Our funding model, first of all, is under severe duress. States have repeatedly reduced their support of public universities, most severely in the past five years, a disinvestment that now threatens to erode their quality and competitiveness.
Some public universities have understandably attempted to make up the deficit in state support by raising undergraduate tuition aggressively and increasing the proportion of out-of-state students. But this strategy undermines the public mission of providing access, creates anger in the state, meets resistance in the legislature, and has now attracted the attention of the White House. As states have shifted the burden of paying for college from their general funds to students and their families, the perception has grown that higher education, once seen as a public good, has become a private interest. And these coping mechanisms, if continued, will lead to general deterioration in the quality of undergraduate education, the very part of our universities that depends most upon state support.
At private universities, tuition and fees plus room and board have, in some cases, reached $55,000 per year. Although most students do not pay that full cost, and though generous financial aid policies and endowment spending have actually brought down the real costs for the average student over the past five years, a degree carrying a price tag of well over $200,000 creates automatic sticker shock in the public. It also raises real questions about whether we have been paying enough attention to holding down expenses.
The airwaves are rife with predictions of disruptive change coming to the economic model of higher education. It is no wonder that parents paying and borrowing for a college education steer their children toward practical majors that seem to promise instant employment, and discourage them from studying the liberal arts and sciences in pursuit of a well-balanced education. A private interest in education today means a purely economic one.
From this inversion of values flows our second problem: a redefinition of the purpose of undergraduate education. Fifty years ago, when I started college, there was a widely shared view in America that the purpose of a college education was to prepare students to become educated citizens capable of contributing to society. College was in the public interest because it gave graduates an understanding of the world and developed their critical faculties.
Today, many Americans believe that the sole purpose of going to college is to get a job -- any job. The governors of Texas and Florida are quite clear on this point, and draw the corollaries that college should be cheap and vocational, even when delivered at major research universities like the Universities of Texas and Florida. A university education is more than ever seen as strictly utilitarian. The reasons are clear: a) as more students and families pay a large share of the costs, they naturally want a ready return on their investment; b) the most desirable jobs in this highly competitive job market require a college degree; and c) the gap in lifetime earnings between college and high school degree holders is huge.
Today, as many Americans hold a purely instrumentalist view of undergraduate education, they want a detailed accounting of its value. Hence our third problem: close public scrutiny and political accountability. Parents want to know: What did my daughter learn, and how does it contribute to her career? State legislatures want to know: What is the graduation rate at our university? How many undergraduate students do faculty members teach? And much more.
These questions put us in an uncomfortable position, because in some cases we do not know the answers, and in others we know them but do not like them. Many of us have eschewed the use of instruments assessing the value of general education, particularly at our major universities. We have, often for good reason, lacked confidence that such instruments are reliable measures of the value of a research university education, particularly if they are based on a one-size-fits-all approach.
However, given the level of scrutiny and skepticism in the public and in state houses, research universities need to take this issue seriously.
The professionalization of the professoriate has been crucially beneficial for research and graduate training at many institutions, but at most large universities, it has been problematic for undergraduate education. Several recent studies, some flawed but still indicative, have revealed that a significant percentage of students do not improve their critical thinking and writing much at all in the first two years of college. This should come as no surprise, given the dearth of small classes requiring active participation and intellectual interaction.
Too many students are adrift in a sea of courses having little to do with one another. Many courses, even at the upper division level, have no prerequisites, and many require no debate or public speaking or the writing of papers that receive close attention and correction. A student’s curriculum is a mélange of courses drawn almost haphazardly from dozens of discrete academic departments. And there is substantial evidence that students are fleeing demanding majors in favor of easier ones that have the added lure of appearing to promise immediate access to jobs.
The combination of drastic state disinvestment in public universities, student careerism, and pedagogical failings of our own has serious consequences for the country. To take one significant example, we now know that more than 50 percent of the students starting college with a stated desire to major in science or engineering drop out of those majors before graduating.
We can no longer blame this problem entirely on the nation’s high schools. A substantial body of research demonstrates conclusively that the problem is frequently caused by poor undergraduate teaching in physics, chemistry, biology, math, and engineering, particularly in the freshman and sophomore years. Students are consigned to large lecture courses that offer almost no engagement, no monitoring, and little support and personal attention. The combination of poor high school preparation and uninspiring freshman and sophomore pedagogy has produced a stunning dearth of science and engineering majors in the U.S. Our country now falls well behind countries like China and India in turning out graduates with strong quantitative skills.
According to the Organisation for Economic Co-operation and Development, the U.S. in 2009 ranked 27th among developed nations (ahead of only Brazil) in the proportion of college students receiving undergraduate degrees in science or engineering. As a result, American students are a dwindling proportion of our graduate enrollments in science and engineering. An administration report not only states that foreign students are earning more than half of U.S. doctoral degrees in engineering, physics, computer sciences, and economics but also estimates that the United States, under current assumptions, will in the next decade underproduce college graduates in STEM fields by one million.
I fear the practical as well as intellectual consequences of these trends. However, I am not a pessimist; I am a realist. In this, the 150th anniversary year of the Morrill Act, I think we can do something to reverse these trends, if we muster our collective will to do so. The anticipated report of the National Research Council on the state of our research universities will, I hope, focus national attention on the problems and opportunities confronting these vital institutions.
But over time, the renewed public investment in higher education that our country needs is unlikely if we do not acknowledge our own shortcomings and begin to address them. First, we need to say loudly and clearly that improving undergraduate education will receive our closest attention and best efforts. We need to alter faculty incentives by making undergraduate teaching at least equal to research and graduate teaching in prestige, evaluation, and reward. And we need to practice research-based teaching that takes account of, and advantage of, the extensive findings of cognitive science on how students learn. In brief, they learn by doing, not by just listening to someone else; they learn by solving problems, not by passively absorbing concepts; they learn best in groups of peers working things out together.
Fortunately, some of our best universities are leading the way. Initiatives at such institutions as Johns Hopkins University, Stony Brook University, the University of Michigan, Stanford, Yale, and others offer great encouragement. The remarkable thing about them is the acknowledgment by faculty that we need to focus much more attention on undergraduate education, and that we need to deliver it more effectively than we have been doing. I find these examples exhilarating and promising.
At the Association of American Universities, we hope to disseminate the findings of such research across our universities, both public and private, and thus to stimulate more students to persist in their study of math and science and engineering. We have embarked on a five-year project led by top scientists and experts in science pedagogy designed to help science departments implement these new teaching methods. One of my hopes for the future of research universities is that student learning will be at the center of faculty concern, research will inform teaching, undergraduate classrooms will be places of engaged, participatory learning, and a university education will be not just a means to an entry-level job, but an invitation to a lifetime of learning.
I am well aware of the difficulty of changing those cultures. It will take a broad and deep effort to realize serious and sustainable gains. The stakes are high, not just for our universities but for the country. In the global knowledge economy, an educated public is essential not just to economic competitiveness but to national well-being.
Hunter Rawlings is president of the Association of American Universities. This article is adapted from a speech delivered on February 28, 2012, at the De Lange Conference at Rice University.
Our younger child just finished the college admissions sweepstakes. He got into one of his top choice schools, but he says he feels more unburdened than proud. Now he can get on with his life, enjoying the things he loves to do. He no longer has to worry about marketing his “admissions package,” as if he were the latest toothpaste or laundry detergent.
Our family last went through the admissions experience eight years ago when our older child applied to college. Although he ended up at one of the “hot” Ivy League universities, we sadly concluded that the selective college admissions process had no redeeming social value. You just lived through it, hoped your child survived unscathed, and prepared to hand over your bank account.
Unfortunately, it has gotten worse since then. More than ever, higher education seems like a commodity, as selective colleges market themselves shamelessly, increase applicant demand, and manage enrollments as if they were commercial enterprises. And, in response, an industry of expensive services and consultants to teach applicants how to game the admissions system is booming. Uncalculated is the toll on students, integrity and fundamental fairness.
This time around, college planning started just before ninth grade, when the college counselor at our son’s school met with parents and students to advise on the importance of course selection over the next four years. The message was to take diverse and challenging courses if you hope to get into a selective college -- loosely defined as the top 50 colleges and universities in the U.S. News & World Report annual survey. No big deal: Anyone who is interested in a rigorous liberal arts education for their child would probably take this advice anyway.
Then came 10th grade’s pre-pre-college admissions testing regimen: the PSAT, given by the College Board, and the PLAN, from ACT Inc. This was to get students ready to take the same tests again in 11th grade, to get them ready to take the tests that count big time in college admissions, the SAT and ACT. Although originally devised as alternatives, counselors now tell students to take both the SAT and the ACT and submit the score of the one they do best on. These tests are in addition to at least three SAT II “achievement” tests and, of course, a battery of Advanced Placement exams for those rigorous courses they are counseled to take. Pile on top of these the now de rigueur SAT and ACT review courses -- at, not incidentally, anywhere from $700 to $3,000 a pop.
Our son, a motivated student with top grades and a challenging academic program, is a very good, but not spectacular, standardized test-taker. Friends with children at other schools told us that kids had to have SAT scores of 1500 to be in the admissions hunt at top-echelon colleges. Looking at the median test scores published by colleges and information services all over the Internet, this notion did not seem completely off-base. But even if it meant going to a lesser member of the “nifty 50” group of colleges, our son eschewed review courses on the grounds that he already had a heavy schedule and would rather read some good books than spend hours taking boring SAT or ACT prep classes. Obviously, we had done something right in his education, but we were definitely out of the mainstream.
He opted not to take the SAT at all, and ended up scoring in the 99th percentile on the ACT after doing some test prep at home on his own. This he was proud of, because, as he said, he isn’t a whiz at standardized tests, and he didn’t take an expensive prep course. I suppose it was a kind of reverse snobbery (“anyone can do well if they take a prep course, but I did it on my own”) and a real sign of the times in the selective college admissions world.
Fate was cruel to him in other ways. The night before the first AP exam in his junior year, he developed golf-ball-sized lymph nodes all over his neck and groin that looked suspiciously like lymphoma. It took four days to determine that he had mono, not cancer. This scare did put the whole college admissions lunacy in perspective for us.
On the other hand, our son endured AP and SAT II exams while suffering from mono. Now he had a new dilemma. Does he tell colleges he took the exams while sick? Does he take tests over in the fall? No matter how well he did, would he have done better if he had not had mono? In the end, he decided to accept fate. He did reasonably well on the tests, there were limits to how much of his life he was prepared to devote to getting into the “perfect” college, and he did not like making excuses, even good ones.
Our son’s college application experience was tame compared to children of a lot of upwardly mobile, well-educated, Baby Boom parents. For starters, the popularity of private “college consultants,” notwithstanding their ludicrous fees, took us by surprise. One family we know had a consultant on retainer from the time the child was in seventh grade. This was in addition to the cost of SAT prep courses and the professional editor for the college essay. The total bill for these services was more than $30,000.
An acquaintance we bumped into at a wedding last summer informed us she had just opened a private college consulting business, having recently retired from her position as a highly successful college counselor at an elite prep school. She offers a four-year package for about $15,000, or the college-application-only option for the all-important senior year for about $5,000. Her phone was ringing off the hook. Could this possibly be worth the extraordinary expense?
More important, what message does it send to children about their worth and competence when we act as if the only way they can make it into a selective college is to hire high-priced help to package and market them? Is the admissions prize worth this psychological price? Just as bad, are we raising a generation of young cynics?
Looking for Help
A quick Internet search revealed no shortage of expensive, fear-mongering consultants to guide students and their families through what they imply is the minefield of selective college admissions. After reading these sites, we wondered if a mere mortal could possibly fill out an application for an elite college, never mind actually get in. I went to Amazon.com and did a search for books on college admissions. The first book that turned up was A Is for Admission: The Insider’s Guide to Getting into the Ivy League and Other Top Colleges (Warner Books, 1999), the controversial, tell-all exposé of selective college admissions by Michelle A. Hernandez. Hernandez is a former Ivy League admissions officer who now has -- you guessed it -- a college consulting business. I ordered the book and read it cover to cover.
She confirmed what our older son had learned from an admissions office friend at his Ivy League university: You are lucky if an admissions reader devotes 15 minutes to the application your child labored over for months. It might even be more like 10 minutes. Hernandez also explained how, by calculating a so-called “academic index,” the selective college admissions office will reduce your child’s entire high school career to one number, weighted heavily in favor of standardized tests. The book had the ring of truth, not the least because it confirmed my by-now-cynical view of the selective college admissions process.
Hernandez also instructed how to play the admissions game, with specific coaching like: play down economic advantages; play up work experience, especially hard manual labor; show long-term passion about a few things; choose teachers for recommendations who you know can write with style; and most importantly (was this tongue-in-cheek?) be yourself. Her follow-on volume, Acing the College Application: How to Maximize Your Chances for Admission to the College of Your Choice, was prescriptive about how to fill out an application, including how to do the “brag sheet,” the list of activities and interests that is required in the Common Application now used by most colleges.
Of course, her example of a brag sheet, taken from one of her clients, made the applicant sound like a combination of Albert Schweitzer and Steven Spielberg. If this was the competition, it was very discouraging. Her advice on college interviews was sensible and contained a list of common interview questions. (Spot on, according to our son, after having gone through six interviews.) You can retain Ms. Hernandez for what is undoubtedly thousands of dollars, or you can buy the books for a total of about $25. We chose the cheap alternative.
One of the great eye-openers in the college admissions experience was the amount of disingenuousness involved in writing the college essay. Our son’s school spends a few weeks in English class early in the senior year working on crafting personal essays in order to prepare for college applications, so we naively assumed that students wrote their own college essays.
Not necessarily. As we spoke to parents in other places who had lived through the senior year with their children, we personally came to know of a father who wrote his daughter’s college essay, a father who had his son’s college essay written by an employee of the father’s business, and parents who hired professional editors or writers to “help” with the college essay. The worst part is that in every case, these children got into their first choice schools.
We live in a small town in upstate New York and thought we were immune to what we viewed as these metro-area ethical challenges. Wrong again. The summer before our son’s senior year, we received a glossy brochure from a professional writer in our town. He has gone into the business of helping students to “find their voices” in the “all-important” college essay, a service for which he charges the mere pittance of $1,500. Isn’t your child’s future worth it? There seems to be so much deception in college essay writing that I have come to the conclusion that essays should be eliminated from applications in favor of a personal essay question administered in a controlled environment by the College Board or ACT and forwarded by them to colleges. Ironically, I never imagined I would find myself advocating for yet another college admissions test.
The same family that spent more than $30,000 on college consultants claimed that the college counseling staff at their well-regarded country day school advised that if the family was of a charitable bent, the application year would be a good time to make a significant donation to their child’s first-choice college. The family said they pledged half a million. An old friend who has been on the faculty of an elite liberal arts college in New England for a quarter century confirmed that over the past five years it has become well known that a contribution of $500,000 to $1 million to a selective college can secure a spot in the class for a student who is academically qualified.
Since 90 percent of applicants to such colleges are academically qualified and most of them are not admitted, the wealthy who are prepared to be generous at the right time appear to be able to buy admission for their children. Off the record, some selective college administrators we know demur that you have to pledge to rebuild the library in order to influence an admissions decision. Whatever the price, the dirty little secret seems to be that admission is for sale in what sounds like a pretty straightforward, if expensive, transaction.
Toward the end of our son’s wait to hear from colleges, he had a nightmare that notification finally came but merely said, “No conclusion.” Did it mean he was consigned to college admissions purgatory forever? This was a fate worse than death. Happily, he awoke and was eventually admitted. Just as happily, we will never have to live through this experience again.
But we cannot help wondering if the selective college admissions process is losing integrity with every passing year. Reading thousands of applications at ten or fifteen minutes apiece, can admissions officers really see through anything but the most obvious and overblown applicant marketing? How can we believe their universal representation that each application is carefully reviewed? And what happens to families whose children go to schools with under-staffed and overburdened guidance offices and who cannot afford private college consultants, clever essay editors, test prep courses and mammoth charitable contributions?
These questions raise issues of fairness that go far beyond the current debates about affirmative action. Let’s hope the colleges are trying to answer them.
Deirdre Henderson is a mother and lawyer who lives in upstate New York.
Doctoral education in the United States has changed rapidly over the last 30 years, with increasing specialization and the emergence of new sub-fields for graduate study. Depending on the nature and size of a university, some of these fields and sub-disciplines fit into traditional academic departments, while others demand their own departments or even colleges. Examples of the former abound, such as post-colonial studies, which may often find a comfortable home in an English or comparative literature department. In the latter category are fields like criminal justice, public policy, social work and nano-scale science and engineering -- highly developed fields that attract increasingly large numbers of students and significant government and foundation funding.
We live within an academic marketplace of ideas, and the best institutions respond to the emergence of new areas of inquiry with vigor. Indeed, research universities can be judged by their ability to recognize and institutionalize new areas and disciplines, supporting excellence within them and nurturing their growth. Scholars typically lead administrators in these efforts, writing books that outline possible boundaries of a new field, establishing journals to define the area, or gathering colleagues for forward-looking conferences intended to advance cutting-edge approaches and methods.
From our standpoint, the more fields and defined areas of doctoral study, the better: Formal establishment of these areas is typically the result of tremendously high student and scholarly demand. New areas of study are also the result of significant investments by universities and, in the case of public institutions, taxpayer dollars. Not surprisingly, state officials and the public are eager for an accounting of how their investments stack up against others. This issue becomes all the more important when, as is true with the National Research Council ratings project, participating public as well as private institutions must pay to be part of the study.
Given that pushing the research envelope is one of the central tenets of any great university, it seems ironic that a survey designed to evaluate the quality and breadth of research would leave so much of our nation’s research untouched. Despite the imagination, interdisciplinarity, and fluidity one finds across the academy in recognizing emerging fields, our most prominent rating system -- the NRC Assessment of Research-Doctorate Programs -- has not responded, and in fact has resisted the change we see around us.
This fall the NRC released its new taxonomy, listing the fields that would be assessed and those that would not be studied. The new taxonomy reflects our worst fears for the assessment of Ph.D. programs: It fails to recognize a large number of thriving and vitally important fields where some of the most talented researchers in the world can be found. Among these fields are criminal justice, public administration and policy, social work, information science, gender studies, education, and public health. We have expressed our strong objections about these exclusions to Ralph J. Cicerone, president of the National Academy of Sciences, and to Charlotte Kuh, study director for the NRC Assessment. We have received no response, and other academic leaders have been treated with the same disregard when they have challenged plans for the new assessment. Such behavior seems especially problematic given the importance of the NRC study for institutions and researchers.
Placing fields like gender studies and information studies in the new, nebulous "emerging fields" category -- fields that will not be rated -- does not solve the problem in the least, but simply sweeps important scholarly endeavors into a giant black box. The justification of the taxonomy boldly notes that "emerging areas of study may be transitory," hence it is risky to evaluate them with the same rigor used for other fields. From what we can discern, information science and the study of race, ethnicity, sexuality, and gender have already emerged, and have profoundly changed the academy for the better. The scholars in these fields are not transitory in the least: A large number of them hold endowed chairs, run centers, manage departments, edit journals, lead foundations and run major institutions.
To make matters even more painful for us, for our faculty, and for colleagues around the nation, the National Academies recently asked for our financial support for the project -- a contribution of $20,000 for larger research universities like our own. We felt compelled to pay the price, but we did so reluctantly and over the strong objections of leading scholars on our campus.
One can critique numerous aspects of the NRC rating system, and a variety of leaders in higher education have done so quite eloquently for more than a decade since the last report. The data collection takes years to compile, and these data quickly become outdated as faculty members move and institutions change. We understand that the new system will involve online questionnaires and include a database that can be updated annually, and we appreciate the National Academies’ efforts in this regard. Another difficulty with earlier NRC studies has been the inclusion of reputational surveys. The forthcoming NRC study has promised to eliminate the reputational rankings from its rating system, and this, too, is an improvement. Among the worst offenses of the system has been the bias toward large programs; much of the variance in previous ratings can be explained elegantly by department size. (The 600-pound gorilla of a department, even with many unproductive scholars, will come out ahead of smaller, higher-quality programs.) Perhaps the questionnaires planned for institutions and admitted-to-candidacy doctoral students in selected fields will help add a new dimension to program quality that will compensate in some programs for differences in size.
But these revisions, while potentially significant, only make the intentional and unexplained omission of major fields of knowledge, critical to the development of the academy, more inexplicable. Methodological change is not much of an advance if one is not measuring the right population of fields and disciplines. A social science parallel, from public opinion research, is relevant here: You can refine a survey instrument all you like, sweating over question wording, order effects, and non-response to the survey. But if you are asking respondents about banal issues of little political import, why bother?
There is an even more troubling irony in the current effort. The NRC has chosen not to include “those fields for which much research is directed toward the improvement of practice,” such as Ph.D. programs in “social work, public policy, nursing, public health, business, architecture, criminology, kinesiology, and education.” This approach, of course, flies in the face of a recently released report on “The Responsive Ph.D.” by the Woodrow Wilson National Fellowship Foundation. This report identifies the principle of “a cosmopolitan doctorate” as central to the future of the Ph.D. The report emphasizes that such a doctorate “will benefit enormously by a continuing interchange with the worlds beyond academia” and calls upon doctoral education to “open to the world and engage social challenges more generously.” The NRC assessment, by excluding so many well-established Ph.D. programs, will simply have the effect of reifying the status quo at research universities, instead of helping us respond boldly to the loud and chronic public call for an open and responsive academy.
The Taxonomy Committee argues that the task of evaluating research in these fields lies “beyond the capacity of the current or proposed methodology.” We do not accept this argument as valid, particularly given the proposed scope and expense of the projected NRC study. Further, the taxonomy displays no systematic logic with regard to which applied and interdisciplinary programs are included and which are excluded. Why include nutrition or pharmacology, clearly applied fields, but not criminal justice? Why is the study of sexuality not included while German linguistics and Latin American literature both are? There is no decision rule in sight, and the taxonomy does not even come close to matching the current landscape of the academy. Perhaps if the NRC had retained the reputational measures, it might have been able to mount an argument for excluding particular fields. But, ironically, the new approach makes the taxonomy more distant from reality. It is removed from the marketplace of ideas, and it excludes the voice of the scholarly community.
Apparently the NRC is not open to arguments like the ones above, and as a result, the ratings they will eventually produce will not reflect a great deal of the most important scholarship in higher education today. Not only will the final report have gaping holes, ignoring the work of thousands of scholars, but the NRC will also fail to recognize that interdisciplinary research with practical application matters immensely.
We predict that this next round of results will be received -- whenever it is complete -- as a dinosaur, an artifact of uneven logic and old-fashioned thinking about what constitutes true scholarly discovery. We are grateful that other assessment systems are appearing and regret that the NRC will spend over $5 million on a quickly outdated effort to assess graduate education. Thankfully, such short-sightedness will not stop our best scholars from developing new approaches, forging innovative fields, training hungry students, and changing the world for the better through their work. We call on the National Academies -- yet again -- to reconsider their taxonomy, so that leaders in higher education can demonstrate to our public officials that we are capable of evaluating the very research enterprise with which we have been entrusted.
Kermit L. Hall and Susan Herbst
Kermit L. Hall is the president and Susan Herbst is the provost of the State University of New York at Albany.
American colleges and universities, especially those that define themselves as public institutions because they are owned by states, carry on a continuous conversation with their faculty, students, trustees, legislators, alumni and friends about the distribution of benefits and costs between private and public entities. This conversation of many decades has gained considerable visibility lately in the form of a question: Are America’s public universities becoming private? Although this question is surely worth the extended and often highly perceptive analysis it receives, it sometimes helps to reconfigure the debate slightly to gain another perspective.
It’s not that anyone misses the central point -- the public, tax-supported percentage of public university budgets has been in decline for over a decade, even though the public investment in public higher education in total dollars continues to rise as more and more students enter postsecondary education. Rather, we often let our words define our view of the world when our words may not mean exactly what we take them to mean.
When we say public universities, we immediately bring a prototypical institution to mind, usually a substantial state flagship university, often from a Midwestern frame of reference, perhaps modeled after Iowa or Indiana or Wisconsin. When we say private university we also have a prototype in mind, perhaps Stanford, Yale or Duke. From these prototypes we develop a conversation about the convergence of public and private that leads us to worry about the loss of public purpose and investment in American higher education.
In the real world, most of public higher education takes place in state and community colleges that often remain 80 to 90 percent funded by public sources. For these institutions, the issue of public versus private is mostly irrelevant, and while they celebrate every small gift and modest grant, their primary focus is on their states and localities in the endless effort to sustain their operations. They are not at risk of becoming private.
Similarly, in the real world, the notion that private universities are somehow separate and independent from the obligations of public institutions by virtue of their funding sources is also not entirely accurate. Private universities, even those with exceptional endowments, exist to a large extent on the public’s account. Their endowments succeed by virtue of public tax exemptions. The gifts that build the endowment enjoy a public tax exemption. The property and campuses of these private universities enjoy a public tax exemption. The federal government provides extensive tax-supported, need-based financial aid to private institutions, revenue that subsidizes those institutions’ tuition and fees.
Private research universities, like their public counterparts, receive federal grants and contracts whose overhead pays some portion of the research costs, a direct taxpayer subsidy. Private universities in many states receive a per-student subsidy for every in-state student they enroll, again a public subsidy. And on occasion, private universities succeed in persuading their states to invest in economic development activities that support the academic objectives of the private institution (either by subsidizing research or helping defray the costs of facilities).
America’s private institutions are a public trust. While they can evade many of the considerable bureaucratic and regulatory costs and obligations that public universities endure, they are nonetheless publicly subsidized institutions with private governance.
This is not a bad thing. It is just how we do business in America.
However, higher education itself (public or private as defined by institutional governance) is both a public good and a private good for most of its participants. Students in particular may attend college for wisdom and knowledge, but primarily they attend college to acquire the skills and credentials needed for the good life. Publication after publication calculates and compares the differential lifetime earnings of college graduates compared to high school graduates, demonstrating over and over again the exceptionally high personal, private value that a college education confers. The private benefit justifies tuition and fees, the lost income for the years of college attendance, and the loan indebtedness incurred by students and their families. The data tell us that these private benefits more than sufficiently compensate for the costs parents and students assume, a conclusion the behavior of students and parents verifies.
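The differential-earnings calculation the essay refers to can be sketched in a few lines. The figures below are hypothetical round numbers chosen purely for illustration; they are not data from any of the publications the essay mentions.

```python
# Illustrative back-of-the-envelope comparison of lifetime earnings.
# All dollar amounts and career lengths are hypothetical, for illustration only.

def lifetime_earnings(annual_salary, working_years):
    """Total nominal earnings over a career (no raises or discounting)."""
    return annual_salary * working_years

# Hypothetical high school graduate: 47 working years at $35,000.
hs_total = lifetime_earnings(35_000, 47)

# Hypothetical college graduate: four years of forgone income leaves 43
# working years at $55,000, less an assumed $80,000 in total college costs.
college_costs = 80_000
college_total = lifetime_earnings(55_000, 43) - college_costs

# The private earnings premium under these assumptions.
premium = college_total - hs_total
print(premium)  # 640000
```

Even this crude sketch, which ignores raises, taxes, and the time value of money, shows why such comparisons are taken to demonstrate a large private return on the costs students and families assume.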
What’s the argument about then? If this is such a good deal, why do we have a controversy about the withdrawal of public support from public institutions? The controversy is less about the withdrawal of support than it is about the amount of subsidy individuals should receive as they acquire the private benefit of a college education. The consequence of a decline in the taxpayer subsidy of public higher education is an increase in the net cost of higher education to students. This increase in the net cost has many consequences, of course. As an example, for students whose families are at the margin of the American economic dream, any increase in the net cost may well put some forms of higher education, but usually not all forms, out of reach.
The changing emphasis away from the general benefit colleges and universities bring to society and toward the particular benefit they bring individuals encourages a tendency toward complex pricing. For selective institutions, as everyone knows, the sticker price of higher education (whether at a “public” or “private” institution) reflects what we could call a reference price. This is not the actual price charged every student; instead, a reference price marks the highest price a student should have to pay to acquire the private benefit of attending the institution. To arrive at the real price, the institution and the student engage in a private negotiation to set the net actual price based on an evaluation of what the institution can give the individual student and what the individual student can give the institution. This is not a public transaction that applies to all students; it is a private transaction that negotiates a private price between the supplier (the institution) and the individual consumer (the student).
Although this transactional model is well known to students and parents, it reflects a larger tendency in the American political and social environment to disaggregate a general public good (such as a university enterprise) into a collection of private goods (such as specific college degrees, different majors, or special programs) and then negotiate separate agreements and pricing mechanisms for the production and delivery of these private goods.
Many public and private universities find it easier to persuade legislators to buy particular fragments of the institution’s purposes than to acquire general funding for the overall purpose of the institution. We can get an earmark for an honors program, for a remedial program for students from disadvantaged backgrounds, for enhancement of science and math, for the improvement of writing skills, or for a research building tied to a specific economic development objective long before we can get an increase in the general fund to support general education and research for all students and faculty.
This particularistic approach to higher education affects public and private institutions in other areas of funding as well. When either type of institution asks a donor for institutional support, more and more donors want very specific agreements about the exact use of their funds, even if the funds are placed in an endowment that will last forever. They do not give money for the improvement of education; they give funds for art history, for the development of specific scientific sub-disciplines, or for the recruitment of basketball players. These transactions, like the student transactions, are individual, private, and specific.
We can speculate on the many reasons why our society has drifted into seeing higher education as a retail consumer product (whether owned by the public or by a private nonprofit corporation). We can worry about the lack of faith in the institutions’ integrity and consistency of purpose that encourages specific and detailed transactions rather than satisfaction with general commitments. We can feel outrage at the retreat from supporting higher education because it is good for the nation into a safe haven that sees higher education as a privately acquired ticket to prosperity. Yet every institution, public or private, finds itself accelerating these trends by its policies and practices.
I’m often reminded of an effort some years ago by a distinguished association of public and private research institutions to band together and refuse to participate in such retail negotiations, associated in this instance with federal earmarks. The academic leadership at that time employed remarkable eloquence in the defense of common approaches to peer-reviewed grant making as being the best possible means of achieving the public good of merit-based awards of scientific and other educational support. After the meeting, the institutions appeared to increase their practice of securing as many earmarks in the federal budget as possible, employed high-powered lobbyists to improve their chances, and kept score on how much money their local legislators helped them bring home.
All of us in American higher education, especially at the high end of America’s 170 or so research institutions, are in the public and private sectors. We all seek public funds and private funds, we all deal with our students on the basis of selling a publicly subsidized product at individually negotiated prices, we all show great creativity in disaggregating our products and services into the smallest retail units needed for sale to our many private purchasers. We might prefer a different system, but this one, for all its faults, is the one we’ve helped invent and continue to refine.
The analysis of citations -- examining what scholars and scientists publish for the purpose of assessing their productivity, impact, or prestige -- has become a cottage industry in higher education. And it is an endeavor that needs more scrutiny and skepticism. This approach has been taken to extremes both for the assessment of individuals and of the productivity and influence of entire universities or even academic systems. Pioneered in the 1950s in the United States, bibliometrics was invented as a tool for tracing research ideas, the progress of science, and the impact of scientific work. Developed for the hard sciences, it was expanded to the social sciences and humanities.
Citation analysis, relying mostly on the databases of the Institute for Scientific Information, is used worldwide. Increasingly sophisticated bibliometric methodologies permit ever more fine-grained analysis of the articles included in the ISI corpus of publications. The basic idea of bibliometrics is to examine the impact of scientific and scholarly work, not to measure quality. The somewhat questionable assumption is that if an article is widely cited, it has an impact, and also is of high quality. Quantity of publications is not the main criterion. A researcher with one widely cited article may be considered influential, while another scholar with many uncited works is not.
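The impact-versus-quantity distinction described above can be sketched in a few lines. The citation counts below are invented for illustration; they do not come from the ISI databases or any real scholars.

```python
# Minimal sketch of the impact-vs-quantity distinction in citation analysis.
# All citation counts are invented for illustration only.

def total_citations(citation_counts):
    """Crude 'impact' measure: sum of citations across a scholar's papers."""
    return sum(citation_counts)

# Scholar A: a single, widely cited article.
scholar_a = [310]

# Scholar B: seven papers, almost none of them cited.
scholar_b = [2, 0, 1, 0, 3, 0, 1]

# Under a citation-based measure, A counts as far more influential than B,
# even though B has published seven times as many papers.
print(len(scholar_a), total_citations(scholar_a))  # 1 310
print(len(scholar_b), total_citations(scholar_b))  # 7 7
```

The sketch also makes the essay’s later point concrete: any measure built on summed citation counts will mechanically favor fields, and languages, where citation volumes are high.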
Bibliometrics plays a role in the sociology of science, revealing how research ideas are communicated, and how scientific discovery takes place. It can help to analyze how some ideas become accepted and others discarded. It can point to the most widely cited ideas and individuals, but the correlation between quality and citations is less clear.
The bibliometric system was invented to serve American science and scholarship. Although the citation system is now used by an international audience, it remains largely American in focus and orientation. It is exclusively in English -- due in part to the predominance of scientific journals in English and in part because American scholars communicate exclusively in English. Researchers have noted that Americans largely cite the work of other Americans in U.S.-based journals, while scholars in other parts of the world are more international in their research perspectives. American insularity further distorts the citation system in terms of both language and nationality.
The American orientation is not surprising. The United States dominates the world’s R&D budget -- around half of the world’s R&D funds are still spent in the United States, although other countries are catching up, and a large percentage of the world’s research universities are located in the United States. In the 2005 Times Higher Education Supplement ranking, 31 of the world’s top 100 (research-focused) universities were located in the United States. A large proportion of internationally circulated scientific journals are edited in the United States, because of the size and strength of the American academic market, the predominance of English, and the overall productivity of the academic system. This high U.S. profile enhances the academic and methodological norms of American academe in most scientific fields. While the hard sciences are probably less prone to an American orientation and are by their nature less insular, the social sciences and some other fields often demand that authors conform to the largely American methodological norms and orientations of journals in those fields.
The journals included in the databases used for citation analysis are a tiny subset of the total number of scientific journals worldwide. They are, for the most part, the mainstream English-medium journals in the disciplines. The ISI was established to examine the sciences, and it is not surprising that the hard sciences are overrepresented and the social sciences and humanities less prominent. Further, scientists tend to cite more material, thus boosting the numbers of citations of scientific articles and presumably their impact.
The sciences produce some 350,000 new, cited references weekly, while the social sciences generate 50,000 and the humanities 15,000. This means that universities with strength in the hard sciences are deemed more influential and are seen to have a greater impact -- as are individuals who work in these fields. The biomedical fields are especially overrepresented because of the numbers of citations that they generate. All of this means that individuals and institutions in developing countries, where there is less strength in the hard sciences and less ability to build expensive laboratories and other facilities, are at a significant disadvantage.
It is important to remember that the citation system was invented mainly to understand how scientific discoveries and innovations are communicated and how research functions. It was not, initially, seen as a tool for the evaluation of individual scientists or entire universities or academic systems. The citation system is useful for tracking how scientific ideas in certain disciplines are circulated among researchers at top universities in the industrialized countries, as well as how individual scientists use and communicate research findings.
A system invented for quite limited functions is used to fulfill purposes for which it was not intended. Hiring authorities, promotion committees, and salary-review officials use citations as a central part of the evaluation process. This approach overemphasizes the work of scientists -- those with access to publishing in the key journals and those with the resources to do cutting-edge research in an increasingly expensive academic environment. Another problem is the overemphasis on academics in the hard sciences relative to those in the social sciences and, especially, the humanities. Academics in many countries are urged, or even forced, to publish their work in journals that are part of a citation system -- the major English-language journals published in the United States and a few other countries. This forces them into the norms and paradigms of these journals and may well keep them from conducting research and analysis of topics directly relevant to their own countries.
Citation analysis, along with other measures, is used prominently to assess the quality of departments and universities around the world and is also employed to rank institutions and systems. This practice, too, creates significant distortions. Again, the developing countries and small industrialized nations that do not use English as the language of higher education are at a disadvantage. Universities strong in the sciences have an advantage in the rankings, as do those where faculty members publish in journals within the citation systems.
The misuse of citation analysis distorts the original reasons for creating bibliometric systems. Inappropriately stretching bibliometrics is grossly unfair to those being evaluated and ranked. The “have-nots” in the world scientific system are put at a major disadvantage. Creative research in universities around the world is downplayed because of the control of the narrow paradigms of the citation analysis system. This system overemphasizes work written in English. The hard sciences are given too much attention, and the system is particularly hard on the humanities. Scholarship that might be published in “nonacademic” outlets, including books and popular journals, is ignored. Evaluators and rankers need to go back to the drawing board to think about a reliable system that can accurately measure the scientific and scholarly work of individuals and institutions. The unwieldy and inappropriate use of citation analysis and bibliometrics for evaluation and ranking does not serve higher education well -- and it entrenches existing inequalities.
Philip G. Altbach
Philip G. Altbach is director of the Center for International Higher Education, at Boston College.
Ward Churchill should be fired for academic misconduct -- that’s the decision made by the interim chancellor at the University of Colorado at Boulder, after receiving a report from a faculty committee concluding that Churchill is guilty of falsification, fabrication and plagiarism. That report shows that, even under difficult political conditions, it’s possible to do a good job dealing with charges of research misconduct. The Colorado report on Churchill provides a striking contrast to the flawed 2002 Emory University report on Michael Bellesiles, the historian of gun culture in America, who was found guilty of “falsification” in one table. The contrast says a lot about the ways universities deal with outside pressure demanding that particular professors be fired.
Churchill is the Native American activist and professor of ethnic studies at Colorado who famously declared that some of the people killed in the World Trade Center on 9/11 were “little Eichmanns.” In the furor that followed, the governor of Colorado demanded that the university fire Churchill; the president of the university defended his right to free speech, but then -- facing a series of controversies -- resigned. Churchill’s critics then raised charges that his writings were full of fabrications and plagiarism, and the university appointed a committee of faculty members to evaluate seven charges of specific instances of research misconduct. Their 124-page report, released on May 16, concluded that Churchill’s misconduct was serious and was not limited to a few isolated cases, but was part of a pattern. The panel divided on an appropriate penalty: one recommended revoking his tenure and dismissing him, two recommended suspension without pay for five years, while two others recommended that he be suspended without pay for two years.
One key instance of “falsification and fabrication” was Churchill’s writing about the Mandan, an Indian tribe living in what is now North Dakota, who were decimated by a smallpox epidemic in 1837. The Mandan, Churchill argues, provide one example of how American Indians were the victims of genocide. In an essay titled “An American Holocaust?,” he wrote that the U.S. Army infected the Mandan with smallpox by giving them contaminated blankets in a deliberate effort to “eliminate” them. Churchill footnoted several sources as providing evidence for this claim, including UCLA anthropologist Russell Thornton’s book American Indian Holocaust and Survival. But Thornton’s book says the opposite: the Army did not intentionally give infected blankets to the Mandan. None of Churchill’s other sources provide support for his claim. Nevertheless Churchill repeated his argument in six publications over a period of ten years, during which his claims about official U.S. policy toward the Mandan “generally became more extreme.” He refused to admit to the committee that his claims were not supported by the evidence he cited. Therefore, the committee concluded, Churchill was guilty of “a pattern of deliberate academic misconduct involving falsification [and] fabrication.” The panel members came to similar conclusions regarding five other charges.
The five-member Colorado committee worked under a cloud: The only reason they were asked to look at his academic writing was that powerful political voices outside the university wanted Churchill fired for his statement about 9/11. After the university refused to fire him for statements protected by the First Amendment, his critics raised charges of research misconduct, hoping to achieve their original goal. What are the responsibilities of an investigating committee in such a highly-charged political situation?
In this respect the Ward Churchill case has some striking similarities to the case of Michael Bellesiles, who was an Emory University historian when he wrote Arming America, a book that won considerable scholarly praise when it first appeared -- and that aroused a storm of outrage because of its argument that our current gun culture was not created by the Founding Fathers. Pro-gun activists demanded that Emory fire Bellesiles, raising charges of research misconduct. Historians, too, sharply criticized some of his research. Emory responded by appointing a committee that found “evidence of falsification”; Bellesiles then resigned his tenured position.
Although the cases have some striking similarities, starting with the political pressures that gave rise to the investigations and concluding with findings of “falsification,” the differences are significant and revealing. The Emory committee concluded that Bellesiles’ research into probate records was “unprofessional and misleading” as well as “superficial and thesis-driven,” and that his earlier explanations of errors “raise doubts about his veracity.” But the panel found “evidence of falsification” only on one page: Table 1, “Percentage of probate inventories listing firearms.” They did not find that he had “fabricated data.” The “falsification” occurred when Bellesiles omitted two years from the table, which covered almost a century -- 1765 to 1859. The two years, 1774 and 1775, would have shown more guns, evidence against his thesis that Americans had few guns before the Civil War.
But the Emory committee failed to consider how significant this omission was for the book as a whole. In fact the probate research criticized by the committee was referred to in only a handful of paragraphs in Bellesiles’s 400-page book, and he cited the problematic Table 1 only a couple of times. If Bellesiles had omitted all of the probate data that the committee (and others) criticized, the book’s argument would still have been supported by a wide variety of other relevant evidence that the committee did not find to be fraudulent.
The Colorado committee, in contrast, made it a point to go beyond the narrow charges they were asked to adjudicate. They acknowledged that the misconduct they found concerned “no more than a few paragraphs” in an “extensive body of academic work.” They explicitly raised the question of “why so much weight is being assigned to these particular pieces.” They went on to evaluate the place of the misconduct they found in Churchill’s “broader interpretive stance,” and presented evidence of “patterns of academic misconduct” that were intentional and widespread.
The two committees also took dramatically different approaches to the all-important question of sanctions. At Emory the committee members never said what they considered an appropriate penalty for omitting 1774 and 1775 from his Table 1. They did not indicate whether any action by Emory was justified -- or whether the harsh criticism Bellesiles received from within the profession was penalty enough.
The Colorado committee members, in contrast, devoted four single-spaced pages to “The Question of Sanctions.” They insisted that the university “resist outside interference and pressures” when a final decision on Churchill was made. Those favoring the smallest penalty, suspension without pay for two years, declared they were “troubled by the circumstances under which these allegations have been made,” and concerned that dismissal “would have an adverse effect on the ability of other scholars to conduct their research with due freedom.” These important issues needed to be raised, and they were.
Finally, the Colorado committee explicitly discussed the political context of their work, while the Emory committee failed to do so. The Colorado report opened with a section titled simply “Context.” It said “The committee is troubled by the origins of, and skeptical concerning the motives for, the current investigation.” The key, they said, was that their investigation “was only commenced after, and perhaps in some response to, the public attack on Professor Churchill for his controversial publications.” But, they said, because the claims of academic misconduct were serious, they needed to be investigated fully and fairly.
The basic problem with the Emory report was that it accepted the terms of debate set by others, and thereby abdicated responsibility to work independently and to consider the significance of the findings. Their inquiry should have been as sweeping as the stakes were high; instead they limited their examination to a few pages in a great big book. Colorado shows how to avoid the kind of tunnel vision that marred the Emory report. The report on Ward Churchill demonstrates that charges of research misconduct that arise in a heated political environment can be addressed with intelligence and fairness.