Earlier this month on Inside Higher Ed, I laid out my suggestions for terrain that would-be reformers of higher education should avoid at all costs, including such combustible subjects as intercollegiate athletics, faculty tenure and accreditation. That’s one way of staying focused, since most campaigns to transform American higher education have kept adding tangential issues to their already overcrowded agendas. Another essential way to ensure discipline is by keeping in mind the French novelist Georges Bernanos’s warning: the worst, most corrupting lies are problems poorly stated.
In this second essay I will specify, as precisely as I can, the issues, challenges, and problems that belong on a reform agenda.
It’s not something the academy is comfortable talking about and certainly not something it is acting upon -- but learning really does belong at the top of any higher education reform agenda for a variety of reasons. In the first place, learning is the academy’s core business, and it has been for a long time, indeed, for something like a thousand years.
It’s what we do! For all the importance attached to research and service, most faculty spend most of their time teaching or preparing to teach or learning new things to teach to their students. What this teaching intends is learning, on the part of students as well as faculty. But there is a growing suspicion -- and in some quarters an already angry conviction -- that all is not well with the learning enterprise. Some worry that students today are not learning enough or not learning the right things or not proving capable of applying learning’s lessons to new problems and tasks. Underlying these doubts is the worry that today’s faculty -- that is, us -- do not know how to teach, or teach so badly, that our students inevitably learn little. Derek Bok tapped into this largely undigested glob of worries, suspicions, and doubts in Our Underachieving Colleges.
When discussions of learning do take place, they most often focus on just a pair of issues. First, has the form of what is to be learned and hence taught changed? Those who argue that is exactly what has happened point out that successful learning today is less about the static acquisition of knowledge and more about the dynamic mastering of the skills needed to acquire and use knowledge.
Once the successful college student could be thought of in encyclopedic terms: all the facts, formulas, and theories neatly organized for quick and reliable retrieval. Today, though, a segment of the academy argues that the successful college student is much more a clever librarian -- that is, someone who knows how to ask the right questions and to recognize good answers. This reformulation of knowledge, they say, is the practical recognition that no one has sufficient time or gray matter to master a knowledge base that is growing exponentially.
Discussions of the changing nature of knowledge often morph into debates over what a successful learning outcome would be if detailed content were actually becoming less important than a well-executed learning process. The former is static; the latter is dynamic in the sense that learning processes change as the learner seeks new knowledge and tackles new problems.
My guess is that most faculty at most institutions would not know, if asked, what exactly is expected of them when critics and pundits push the importance of critical thinking, analytic reasoning, and problem solving. Doesn’t the traditional physics course with its lectures, labs, and discussion sections, along with its focus on mastering the laws of thermodynamics, teach critical thinking, analytic reasoning, and, above all, problem solving focused on what physics itself is all about? Doesn’t calculus or history or English or sociology or economics do just that -- teach the basics of a specific discipline? Isn’t that what’s really important -- not some kind of pop sociology or physics for poets or English literature for accountants?
This line of reasoning (or strategy of counterattack) leads almost inevitably to a second problem latent in discussions of collegiate learning. How is anybody to know exactly what the student is either learning or actually preparing to do? Those who champion older, more content-centered and fact-specific definitions of learning have a simple answer: test the student, often and rigorously. Recognize that the testing regime is itself a learning process; students learn what they don’t know and proceed, if sufficiently prepared and motivated, to learn it so that they can do better on the next test.
Experts in the process of learning just shake their heads. They talk about the silliness of rote learning and how this static definition of learning leads inexorably to teachers teaching to the test. Many of them doubt whether the tests themselves can ever prove to be productive learning experiences. For example, coaching that includes mastering test-taking strategies can greatly improve a student’s score on the SAT/ACT and the more general sections of the GRE; however, it is not at all clear whether those improved scores also signal any improved capacity to learn -- either the static knowledge advocated by the traditionalists or the learning processes (for example, creative thinking) championed by their opposite numbers.
These sorts of discussions are linguistic cul-de-sacs, with both sides asserting that their concepts and methods of teaching and learning produce superior results. And there’s the rub: There is simply not nearly enough (I am almost tempted to write no) data telling us how or what today’s college students are learning. It is hard to understand, but it is nonetheless the case: the discussions of learning outcomes so popular today have proceeded without any agreed-upon means of either defining or testing for a learning outcome.
Some would argue the absence of testable learning outcomes manifests the academy’s reluctance to face the fact that today’s students are not learning enough of what they ought to be learning (whatever that happens to be). In fact, the absence of adequately defined, testable learning outcomes reflects something simpler: getting a good answer to the question has to date not proved very important. The United States continues to invest vast sums of money in an enterprise whose most tangible outcomes are only tangentially related to learning. Were this country -- or any country -- to decide it was important to rethink those investments, I think the academy would suddenly get very good at evaluating which teaching and learning modalities were the best. The question then becomes how best to create those conditions.
There are many experiments under way at places like Carnegie Mellon University and the University of Minnesota Rochester designed to better understand how the brain physically learns, and if that work takes hold, the debate about learning I have just outlined will be swept away. Chicken Little learned a long time ago just how foolish it is to predict the end of one world or the beginning of another. Underlying those predictions too often are false signs, badly interpreted omens, and underestimates of the staying power of the status quo. But every once in a while education’s prophets prove prescient. Is this one of those times? I don’t know, but in contemplating how to recast collegiate learning, it seems prudent to keep in mind that we may all be learning differently in the decades ahead.
The numbers tell the story. For the first time in history, the 2000 decennial census reported that most Americans 25 years or older had at least some college education. A quarter of the population had earned a baccalaureate degree, and nearly 1 in 10 Americans 25 years or older had an advanced degree. In the decade following the 1990 decennial census, the United States had increased its stock of college-educated residents by more than 12 million baccalaureate degree holders -- the vast majority of whom had proceeded directly from high school to college.
And yet, despite this substantial growth in the number of college-educated Americans, the United States had actually slipped -- some would say slipped badly -- in those international comparisons that have become the metrics of globalization. The U.S. still ranks highly among developed countries in the proportion of adults aged 35 to 64 with associate degrees or higher. But by 2003, the country had slipped to seventh place among developed countries in terms of the proportion of young people aged 25 to 34 with a college degree -- just 39 percent. Ahead of the United States were Canada, Japan, Korea, Finland, Norway, Sweden, and Belgium.
College completion rates have come to play an increasing role in judging the efficiency of both individual institutions and the American system of higher education writ large. The fact that so many start and then do not finish a degree, either a two- or a four-year degree, is taken as a measure of the system’s inefficiency and hence its capacity to waste resources: the student’s time, energy, and money; the institution’s manpower; and the funding agencies’ direct and indirect appropriations.
Efficiency pundits write and talk eloquently about the dropout problem and the toll it takes on institutional budgets and state subsidies. From these lamentations it is easy to draw the conclusion that if fewer students dropped out, or, better yet, if those students most likely to drop out had never started, then the overall cost of the system would be substantially reduced -- though how that reduction would be translated into leaner institutional budgets is never made clear.
The flip side of high attrition and hence low attainment rates is remediation. Traditionally remediation has been narrowly defined as noncredit, pre-collegiate courses principally in math and composition. Most institutions use a combination of high school grades in specified courses plus a student’s SAT or ACT score to decide whether that student is exempt from the placement exams that, upon entrance, identify which students need remedial work in math, composition, or both. The actual number of students requiring remediation is a function of how high the institution sets the bar -- either in terms of who is exempt from taking the placement exams or what constitutes a passing grade on those exams.
Remediation and developmental education programs are often described as the pathway by which students without the full panoply of required academic skills can gain access to a baccalaureate degree. For nearly all community colleges, remediation and developmental education form a core responsibility. Based on the Learning Alliance’s Pennsylvania study and Dave Veazey’s study of the alignment between high school curricula and the California State University requirements, my best estimate is that between 50 and 70 percent of entering community college students require some remediation -- as much as four courses over their first two semesters, although the average is more likely between two and three courses in their first year.
In the California State University system, the proportion of first-time enrollees requiring remediation ranges upward from one-third to one-half of each entering cohort. Middle-market institutions also regularly teach a limited set of remedial courses in composition and pre-collegiate math. Perhaps as many as one in six first-year students in these middle-market institutions enroll in one or two remedial courses in their first semester. Medallion and name-brand institutions, however, offer little remediation, and then only in conjunction with programs designed to support students admitted under special circumstances. Many do not even test their freshmen, using instead SAT or ACT scores and As or Bs in high school math and English courses to establish basic proficiency.
What gives the discussion of attrition and remediation its edge is not the sense of wasted resources, though they are substantial, but the fact that who succeeds in college -- that is, who attains a college degree -- remains too much a function of the ethnicity and socioeconomic status of the student. Here, again, the numbers tell the unhappy story.
Among white students who enrolled in a four-year private college or university in 2000, two-thirds had earned a baccalaureate degree six years later. Among Hispanic students enrolling for the first time in 2000 in a private college or university, 59 percent had earned a baccalaureate degree six years later. The comparable number for black non-Hispanic students in private institutions was 45.9 percent after six years. In comparative terms, the non-completion rate for black non-Hispanic students in these institutions was 60 percent higher than that of their white counterparts.
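The "60 percent higher" figure follows from comparing non-completion rates (one minus the graduation rates quoted above) rather than the graduation rates themselves. A minimal sketch of that arithmetic:

```python
# Six-year graduation rates quoted above for private four-year institutions.
white_grad = 2 / 3      # two-thirds of white students graduated
black_grad = 0.459      # 45.9 percent of black non-Hispanic students graduated

# Compare the corresponding non-completion rates.
white_noncompletion = 1 - white_grad    # about 0.333
black_noncompletion = 1 - black_grad    # 0.541

ratio = black_noncompletion / white_noncompletion
print(f"{(ratio - 1) * 100:.0f} percent higher")  # about 62 percent, i.e. roughly 60
```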
There was a comparable gap for students attending a public four-year college or university. Among white students entering in 2000, 57.1 percent graduated in six years; 48 percent of Hispanic students graduated in six years; and 40.8 percent of black non-Hispanic students graduated in six years. Students attending public, two-year institutions were dramatically less likely to achieve an associate degree or its equivalent -- and here, too, the student’s ethnicity was a major predictor of attainment. One-fourth of the white students in these institutions had earned an associate degree three years after enrolling, but only 17.9 percent of Hispanic students and 15.2 percent of black non-Hispanic students had similarly earned their associate degree in the same length of time.
Many of the strongest advocates for underrepresented and disadvantaged populations see in these numbers proof positive that affirmative action, whatever its political risks, is the only way to produce equal access to educational opportunity. From their perspective, too many people of color have been denied meaningful educational access. It is hard to argue that American higher education is in fact colorblind -- the numbers tell a fundamentally different story. Affirmative action, however, even when it is accompanied by student support services, has not solved the problem despite the historical persuasiveness of its rationale. More recently, affirmative action has become a political minefield. Only among the nation’s most prestigious and selective institutions, where the losses are beginning to exceed the wins, does the push for affirmative action have significant traction. Given that the hallmark of a medallion college or university -- in addition to its high price tag -- is a six-year graduation rate of 85 percent or more, these institutions are also aggressive recruiters in search of students of color who will succeed academically once on campus.
Those who believe educational attainment is principally a function of economic status use these same numbers to bolster their case for increased student aid in general and more money for Pell Grants in particular. A college education’s ever-increasing sticker price actually discourages young people with limited means from thinking that a college degree is for them. The method of payment compounds the problem. For young people of limited means, borrowed money is not an investment but a harrowing burden to be avoided at all costs. As student debts rise, the inevitable result is a more immediate search for additional employment that first saps educational energies and then ultimately leads to dropping out of college altogether. The answer: put more public money in the pipeline, and the result will be more graduates.
This argument, for me at least, no longer holds water. I really do believe that the number of young Americans who are being shut out of higher education because the price is too high or the loans too great is, as the Learning Alliance’s Pennsylvania study documented, about 8 percent of the population of college-ready high school graduates. Though that number is troublesome, it is dwarfed by the number of those not prepared for college. Because of the ready availability of student aid funds, many of these young people start college only to find themselves overwhelmed by what is being demanded of them academically. The growing number of students in remedial programs is but one measure of the problem's magnitude.
To put the matter more bluntly, the higher education attainment gap is in fact a preparation gap. For the next decade or more, the battle to make a college education equally attainable must necessarily be waged in the nation’s middle and secondary schools. The fact that those schools have become more ethnically and economically segregated over the last decade makes the struggle that much more difficult and important. That doesn’t mean that race and wealth don’t matter in 21st century America; rather, they matter most where basic skills are acquired as well as the appetite and motivation for further learning.
Focusing on preparation rather than access per se helps explain why neither federal nor state programs of student aid have closed the attainment gap. What is required is not more money for student financial aid but heftier appropriations designed to improve middle and secondary schools, along with more uniform access to the kinds of low-risk educational alternatives represented by community colleges.
Advocates for the nation’s underrepresented populations do not like this conclusion, in part because it seems to let the nation’s colleges and universities off the hook.
Hardly. Higher education bears significant responsibility for the state of America’s middle and secondary schools. Higher education sets the standards, trains the teachers, and determines how K-12 education aligns with postsecondary education. Given that perspective, improving access and attainment is everyone’s responsibility. Charlie Reed, chancellor of the California State University (CSU), knows that and has committed his 23 campuses to building, sustaining, and mutually reinforcing partnerships with their feeder high schools. The goal is to cut in half the proportion of first-time freshmen entering CSU who fail one or more placement exams while at the same time increasing the number of high school students who are prepared to succeed in higher education.
No one should take my argument that improving the rates at which disadvantaged Americans attain a higher education depends more on increasing the supply of college-ready high school graduates than it does on increasing the supply of federal or state student financial aid to mean that money doesn’t matter -- or that money matters less -- in higher education. In truth, a money crisis now most threatens the continued success of the American system of higher education.
Higher education lobbyists, along with the organizations that employ them, are fierce in their advocacy of more money for higher education: more money for student aid, more money for research, more money for capital projects, more money for operations -- or simply more money. They point out, quite rightly, that the public subsidy of higher education on a per-student basis has been declining, often precipitously.
But should the norm be the 1950s, when fewer than 1 in 10 Americans achieved a college degree? The 1980s, after the great expansion of public systems of higher education and the introduction of state scholarship programs to support students at private as well as public colleges and universities? Or should the norm be the late 1990s, after most state legislatures had figured out that public treasuries did not have the resources -- nor, given the growing anti-tax mood of the voting public, would ever again have sufficient resources -- to provide a fully subsidized college education to all or even most of their citizens? Europe and a host of developed Asian countries are grappling now with this issue; there are not sufficient funds to pay for the massification of higher education. Ultimately students and their families have to pay an ever-increasing share of the cost of their higher education.
Had I been writing in the first years of this millennium, I would no doubt have argued that, through trial and error, American colleges and universities, along with the public officials responsible for federal and state higher education policy, had evolved a financial system that, while appearing awkward and at times counterintuitive, nonetheless provided a stable financial environment for the nation’s colleges and universities.
The basic principle was one of shared responsibility. Everybody paid something, and was entitled to some help if they could not afford the market prices colleges and universities had begun charging. State government provided direct appropriations to their public systems of higher education. Local and state governments provided base funding for community colleges and, in some states, postsecondary technical colleges. The federal government provided the necessary capital, grants plus loans, that ensured a smoothly functioning market for higher education.
At the same time, the federal government underwrote most costs associated with high-end science principally by funding and managing a competitive market for sponsored research. Most colleges and universities—but increasingly not all providers of postsecondary education—remained eleemosynary institutions entitled to a host of benefits as tax-exempt institutions. Their real property was exempt from local and state taxes, as were the revenues they earned by providing educational and research services and the returns they earned on their endowments. Gifts to colleges similarly provided tax benefits to donors.
Students paid their share of their college costs in a variety of ways. Many worked, both on and off campus. Most borrowed, usually at advantageous rates underwritten by public guarantees and subsidies. No one was expected to borrow more than the earning premium a particular degree was expected to provide. Parents were similarly expected to borrow to help underwrite the cost of their children’s college educations. Some, but certainly not all, of the capital underwriting these loans was provided by the federal government, again at advantageous rates.
While there is legitimate debate about whether this mix of self-help, subsidies, savings, and debt was equitable, this much is clear: the system helped make possible an extraordinary expansion of the market for higher education.
That was then, and this is now. The arrangements that once appeared settled have suddenly been called into question by a string of unexpected events and miscalculations that have left everyone a lot less certain of how to finance the nation’s higher education system. First, the underlying assumption that a college education was in fact a good economic investment came under inadvertent attack. Students and their families had assumed that tuition was more than worth the loans they had taken out to pay it. In late 2007, Harvard University’s new president, Drew Faust, announced a major change in the price of a Harvard undergraduate education. Starting with the class matriculating in the fall of 2008, students from families with incomes of less than $180,000 would pay a maximum of 10 percent of that income annually to attend Harvard. It was breathtaking -- suddenly “ability to pay” and not the market would determine the price of a Harvard education for middle-class families. For families with greater incomes, the old rules would apply.
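As described, the new rule amounts to a simple income-based price cap. A minimal sketch of that logic -- the income figures and sticker price below are illustrative placeholders, not Harvard’s actual charges:

```python
def capped_price(family_income, sticker_price,
                 income_threshold=180_000, cap_rate=0.10):
    """Income-capped pricing as described above: families below the
    income threshold pay at most cap_rate times income; families above
    it pay the full sticker price."""
    if family_income < income_threshold:
        return min(sticker_price, cap_rate * family_income)
    return sticker_price

# Hypothetical numbers for illustration only:
print(capped_price(120_000, 50_000))   # 12000.0 -- capped at 10% of income
print(capped_price(200_000, 50_000))   # 50000   -- old rules, full sticker price
```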
The announcement created havoc in the market. No institution save Harvard and a dozen other medallion colleges and universities had the endowment to match Harvard’s largesse. Cynics, and I was certainly among them, began asking what Harvard was really up to. It had no shortage of students, no evidence that its outsized price was discouraging middle-class students from either applying or attending. Rather, Harvard had other problems -- political problems occasioned by the sheer size of its endowment, $35 billion in 2007, and the fact that its success in the market was annually yielding double-digit increases in value. What exactly was the endowment’s purpose? Was it to offset educational and scholarly costs? Guarantee the university’s educational quality for an indefinite future? Or had the Harvard endowment morphed into a tax-free hedge fund whose principal purpose was to make money with money?
Suddenly, colleges and universities of every stripe were being asked about the Harvard initiative. Would they give similar price breaks to middle-class families? Couldn’t they use their endowment returns to provide more scholarship aid to needy students? Never mind that almost no one had an endowment that came close to matching Harvard’s.
The second dislodging event that winter was the continued ripening of the payola scandal involving loan companies and collegiate financial aid officers, which kept drawing in more institutions and more questionable practices. It was becoming embarrassingly clear just how much loose money was involved in higher education’s various financing schemes. Congress quickly moved to reduce the subsidies being paid to banks and other loan originators participating in the federal government’s student aid programs. The banks reacted by threatening to abandon the student loan market in favor of more profitable pursuits.
The third event in this tale of woe was the collapse of a housing market too dependent on the continued availability of sub-prime loans. Overnight, or so it seemed, a family’s net worth was in freefall. The equity vested in the family’s home would no longer be sufficient to support borrowing for a child’s college education. Actually, the sub-prime crisis had a much more immediate and potentially disastrous effect on a higher educational financial system that was beginning to look a lot like a house of cards. Everyone began wondering if the banks would still be in the educational loan business, given the new rules limiting subsidies and the general contraction of credit markets. At the same time, a spreading recession dampened the prospect of more money from state appropriations along with the prospect that the states hardest hit by the recession were likely to further reduce their support for higher education.
Like most crises -- and kidney stones -- this too would pass eventually. As of this writing, the student loan market has stabilized, most big student lenders have continued to make loans available, and the federal government has assumed the role of lender of last resort and, under the Obama administration, is promising to become the sole lender.
But for one brief, tantalizing moment everything seemed in doubt -- the supply of credit, the capacity of families to borrow, the ability of institutions to set prices in response to shifting market conditions. It was a crisis that curtailing higher education’s wasteful practices, whatever they might be, would not have solved, just as it was a crisis that was not about affordability -- though had higher education and its governmental sponsors not responded successfully, a college education could have ended up a whole lot more expensive for a whole lot of middle-class families.
The crisis revealed an extraordinary vulnerability -- to the machinations of a market leader like Harvard, to the sheer size of the student loan business and the nefarious schemes that amount of loose change attracted, and to a credit market that was suddenly in trouble. That vulnerability will continue unless and until higher education collectively reorders its economic practices and policies.
Governments will have to rethink the role of endowments and the tax-free advantages enjoyed by not-for-profit higher education -- and in the process decide how for-profit higher education ought to mesh with an enterprise that sees itself promoting not just individual advantage but also the public good. Institutions, both individually and collectively, will have to rethink how they set prices and whether in fact alternate ways of doing business will allow a substantial reduction in the prices they charge. And families will once more have to explore whether savings can play a bigger role in the financing of a college education -- provided there is an adequate supply of investment vehicles promising both safety and growth.
Not Four Horsemen -- but a Trio of Tough Issues
One irony underlying the three issues I have placed on higher education’s to-do list is that in defining them better -- less as slogans and more as problems in search of solutions -- I have made clear just how much needs to be done and done quickly.
The learning revolution is upon us; higher education needs to understand its import in terms of what can be done now and what will require colleges and universities in general and their faculties in particular to change how they do business. Higher education will have to rethink what it means to be a learning enterprise, including the role the new electronic technologies and insights from the neurosciences have to play in recasting what happens in the classroom, laboratory, and library.
The gap between advantaged and disadvantaged students is large and getting larger. Colleges and universities will be required to do more to remediate past learning deficiencies. The larger challenge for higher education will be to learn to do more to help students in middle and high schools become college-ready learners.
Finally, higher education’s current financial system does look like a house of cards -- too dependent on tax breaks that are likely to be called into question, too dependent on credit markets that can suddenly contract, too unsure of the rationale by which it sets prices and offers discounts, and at the same time unable to imagine alternate production functions that could in fact yield substantial price rollbacks.
Learning, attainment, money -- a trio of truly tough issues.