Graduation rates

Redefining Community College Success

Education Dept. panel's work to revamp government method of judging two-year institutions spurs disagreement about whether too much is being asked of them.

Car Sales and College Graduation

What do new auto sales data and the U.S. Education Department’s Digest of Education Statistics have in common?

Not enough.

Both are eagerly anticipated and extensively covered by the news media – particularly during spring months. Both contain key indicators necessary for informed, high-stakes decisions that affect the nation’s economy. But that’s where the similarities end.

The Digest of Education Statistics is an indispensable handbook for education analysts around the country, with detailed information on everything from elementary school enrollments to postsecondary institutions' revenue sources. Unfortunately, the data are neither very current nor very granular. While graduation season for 2010-11 is coming to a close at most institutions around the country, the latest national data for the aggregate number of students earning degrees are from 2008-9 -- two full years ago.

In contrast, the auto sales report shows sales by manufacturer and car models monthly. This week we learn that auto sales fell in May for the first time in eight months due to a combination of high gas prices and a shortage of fuel-efficient models created by Japan’s record earthquake. Industry-wide sales fell 3.7 percent to 1.06 million vehicles sold in May, down from 1.1 million a year earlier. The Japanese car shortage benefited Chrysler, which enjoyed a 10 percent increase in sales.

These are the kinds of data carmakers look at constantly to help make decisions about production volume, plant closings (or openings), and pricing. Policymakers are also looking at these data to see whether the bailout of the auto industry was worth the price tag, and whether tax incentives and mileage regulations are changing the mix of models sold in the U.S.

Similar data on bachelor's and associate degrees awarded, or engineering and music degrees, are nearly two years old. So what do we know about the kind of short-term credentials students turned to in response to the nose-diving economy? Or about which states and institutions stepped up their game to meet the ambitious goal President Obama set to have the highest proportion of college graduates in the world by 2020? Not much.

Further, there is an important debate right now between those who see higher education's mission primarily as preparing graduates for jobs and economic success, and others who worry about the erosion of general education in today's colleges and universities. Tracking the number of jobs created each month as the economy recovers serves as a starting point for talking about the quality of those jobs -- are they temporary jobs, low-paid service-sector jobs, highly skilled professional jobs, jobs in new businesses or old industries?

We should be having a similar national conversation about our college graduates. Wouldn't it be nice to know whether there have been any recent changes, perhaps as a result of the economy or of external pressures on colleges, in the mix of vocationally oriented and broader liberal arts degrees awarded around the country?

Granted, college graduates are not cars, and industrial analogies to higher education often ring hollow. But aren't graduates far more important? Although higher education represents an increasing share of students' budgets and of the American economy, and is a linchpin of job creation, we're largely navigating in the dark. New governors and legislators are trying to get their bearings and figure out how to meet ambitious goals with the scarce resources available. A new Congress is starting to talk about whether we can afford Pell grants as we currently know them. And they're having those conversations with two-year-old data. Imagine setting national policy for the car industry based on GM sales in June 2009. Something needs to change.

Reliable Estimates Are Possible in the Current System

As a former state-level official who was responsible for getting this kind of data out in Florida, I understand some of the reasons for the delay in reporting college completion data. The university system I worked for could be only as fast and as accurate as its slowest and least accurate member.

So imagine the situation of the federal National Center for Education Statistics (NCES), where instead of the 11 universities we had to worry about, they have 7,000. As I write this, someone at NCES or one of its contractors is still trying to figure out how a new beauty school in Alaska managed to award 50 nuclear engineering doctorates last year -- or something similarly strange.

But that size is also an advantage. If Florida State University didn’t report reliable numbers, the accuracy of any state-level report would be severely compromised. If Florida State doesn’t report to NCES, on the other hand, it’s a rounding error. And the Florida States of the world rarely cause problems -- it’s the rural community college that just lost its lone institutional researcher, or the new beauty school in Alaska that doesn’t yet have one.

Releasing Data Earlier Is Possible

Reporting more current college completion data is possible. Here’s how. NCES collects data on degrees awarded once a year. The deadline for 2009-10 completions was last October 28. By early February, preliminary numbers are available for institutional researchers’ use, but the final numbers are released in the summer, a full year after the prior year’s students have actually graduated.

The preliminary numbers available in February have proven to be a reliable estimate. In my experience, there is usually very little difference between the totals that can be calculated with the preliminary data and what eventually comes out several months later. In fact, reliable estimates for the nation and for most states could probably be reported as early as November or December, based on “early returns” from the fall survey. While that’s still a few months after the peak of graduation season, it’s gaining nearly two years of valuable time and information.

Reporting more current college completion data is worth doing. Some states recognize this and make every effort to get data out early. Kentucky, for example, recently reported that public and private credentials awarded in that state are up 11 percent in 2010-11, largely at the associate and certificate level. Such timely reporting is noteworthy -- and rare. But even precocious data gatherers like Kentucky will find their numbers hard to interpret without knowing whether the trend in other states is up 15 percent or down 3 percent. And it will remain a strictly local news story, timed differently in every state, rather than an occasion for national reflection on the state of higher education. By contrast, consider the healthy competition for business and jobs under way among governors. When new unemployment numbers come out weekly, governors aren't just looking at their own states, they're measuring themselves against their neighbors and against the nation as a whole.

The timely national release of top-line completion numbers would put a day on the calendar to spark a recurring national discussion about how we’re doing across state lines and relative to one another and to our ambitious goals. We can imagine states and colleges themselves vying to be among the top-performing institutions and using it in their marketing and recruitment efforts. The competition for new jobs is fierce among states, and a number of governors have tried using tax policy to poach business from other states. Wouldn’t it be nice to see similar competition based on recent state trends in numbers of highly skilled graduates?

Complete College Completion Data

In addition to being more timely, it’s important for college completion data to be more comprehensive. Most states have good data on their public institutions’ graduates well before NCES releases national numbers, but information on private colleges (both nonprofit and for-profit) is spotty. And yet none of the big attainment goals set by states, the White House, or the Lumina Foundation for Education can be achieved unless private higher education contributes a big share of the needed graduates.

Making good strategic higher education decisions at the state level requires analysis of both public and private institution data. Perhaps a steep drop in nursing graduates at public colleges is spurring discussion of financial incentives to graduate more RNs. But if nursing degrees at private institutions are booming, that may not seem such a wise use of public funds. And if the trend is the same at private colleges, then perhaps the incentives should be available there as well.

As our state and federal elected officials continue making difficult policy and budget choices, we should hope that they are doing so based on data that are current and that bring to the surface trends in higher education that can guide informed, effective budgeting and policy making. If we can generate detailed auto sales data monthly, unemployment claims weekly, and stock market updates by the second, we should be able to produce college completion data sooner than two years after the fact.

Nate Johnson served as executive director of planning and analysis for the State University System of Florida and as associate director of institutional research at the University of Florida. He is currently a senior consultant for HCM Strategists, a health and education public policy and advocacy firm.

High Enrollers

New data from the Education Department show booming enrollments (with for-profit colleges leading the pack) and steady graduation rates.

Consensus or Groupthink?

INDIANAPOLIS -- The last two years have seen the emergence of the closest thing to a national higher education agenda the United States has had in arguably 50 years.

State Universities' Tradition of Attrition

Among all the issues in higher education today, retention once again captures our attention. Most influential is the publication of Crossing the Finish Line, a study of college completion at America's public universities, written by William G. Bowen, Matthew M. Chingos, and Michael S. McPherson. It's reinforced by the June 2009 report, "Diplomas and Dropouts: Which Colleges Actually Graduate Their Students (and Which Don't)," by Frederick M. Hess, Mark Schneider, Kevin Carey, and Andrew P. Kelly of the American Enterprise Institute. The two studies have rekindled our concern about the percentage of undergraduates who fail to complete their bachelor's degrees.

It's not just a source of concern to higher education researchers. My own provost, for example, last year declared a "War On Attrition" -- a campaign slogan that elevated stopping college dropouts to the alarm status usually associated with such national crises as the "war on drugs" or the "war on terrorism."

The concern is, of course, with current problems and future remedies. Bowen, Chingos, and McPherson's sobering data about low degree completion at state universities confirm why their discipline of economics has been called the dismal science. The finding that few state universities graduate more than about 65 percent of their undergraduates in six years is particularly problematic because it indicates a decline from the retention and graduation rates at the same institutions twenty years ago. What's important about this last point is its suggestion that history matters. "How we are doing" in graduating students means, at least in part, "Are we doing better or worse than in the past?"

One difficulty, though, is that the databases on which economists and social scientists usually rely in studying higher education issues today do not extend far back in time. HEGIS, the predecessor of today's IPEDS, was first compiled in the late 1960s. So we are left with the question of historical context: How do college graduation rates of today fare when compared with, let's say, those of about a century ago?

It’s an important question because one temptation for academic leaders today is to presume that in the early 1900s college students enrolled full time and then graduated in four years. But was that so? To connect past and present I propose what Hollywood producers call a “prequel” -- a backward look that provides context for our present discussions.

I gathered and analyzed enrollment, retention, and graduation data at a number of colleges from the period 1890 to 1910. This includes a mix of public and private institutions -- Harvard, Brown, Amherst College, William & Mary, Transylvania University, and the University of Kentucky. I looked at enrollment trends in two ways: first, by relying on the annual summaries that colleges published in their official catalogs; and second, for selected cases, by applying contemporary attrition-retention-graduation tracking methods retroactively -- compiling a name-by-name roster of an entering freshman class, then following those students name-by-name for four years.

These samples suggest that undergraduate retention and graduation a century ago varied greatly among colleges. It also tempers nostalgia, as even some prestigious, established colleges lost a large percentage of students on the way from freshman orientation to commencement exercises over four years.

Consider the entering class of Brown University in fall 1900 -- 157 freshmen. Four years later, Brown's catalog listed 113 students in the senior class, with 103 receiving bachelor's degrees. That is a four-year retention rate of 72 percent, with 66 percent receiving a degree in four years. Not bad.

But look again! If one tracks those freshmen name-by-name, the record is not so impressive. In fact, only 86 of the original 157 students were still enrolled as seniors -- and 78 received bachelor's degrees. The four-year retention rate actually was 55 percent -- and 50 percent received degrees at the end of four years.

The annual rosters, then, included a substantial number of students in the senior year who had not been there three years earlier. In other words, some 27 students within an entering class of about 150 either were dropouts who had returned to Brown -- or students who had transferred from other colleges.
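The gap between the two counting methods is plain arithmetic. A minimal sketch in Python of both calculations, using the Brown figures quoted above (the function and variable names are illustrative, not from any source):

```python
# Contrast between catalog-based and cohort-tracking retention rates,
# using the Brown University entering class of fall 1900.

def pct(part, whole):
    """Percentage of `part` relative to `whole`, rounded to a whole percent."""
    return round(100 * part / whole)

entering_freshmen = 157

# Catalog method: compare the published senior-class roster to the
# entering-class size, without asking who the seniors actually are.
seniors_listed = 113
degrees_awarded = 103
catalog_retention = pct(seniors_listed, entering_freshmen)    # 72
catalog_graduation = pct(degrees_awarded, entering_freshmen)  # 66

# Cohort-tracking method: follow the original freshmen name by name,
# so transfers-in and returning dropouts no longer pad the count.
original_still_enrolled = 86
original_with_degrees = 78
cohort_retention = pct(original_still_enrolled, entering_freshmen)   # 55
cohort_graduation = pct(original_with_degrees, entering_freshmen)    # 50

# The difference between the two senior counts is the number of students
# who joined the class along the way (transfers or returning stop-outs).
transfers_in = seniors_listed - original_still_enrolled  # 27
```

The catalog method answers "how big is the senior class relative to the entering class," while cohort tracking answers "how many of the original freshmen are still here" -- which is why transfers-in and returning stop-outs inflate the first number but not the second.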

A comparable pattern holds at the University of Kentucky. If one relies on the president's annual reports, the 124 freshmen who started their studies in fall 1907 showed a high persistence rate of 93 percent into the sophomore year, followed by 65 percent into the junior year and 54 percent into the senior year -- with 52 percent receiving the bachelor's degree in spring 1911. This is borderline acceptable -- but, unfortunately, on closer inspection, the news gets worse. When one tracks each of the entering students name-by-name, the retention rate drops dramatically: 59 percent persisted into the sophomore year, 36 percent into the junior year, and only 30 percent reached the senior year and received degrees.

In the early 1900s, students enrolled in Harvard College typically showed a four-year retention and graduation rate of about 65 to 75 percent. Amherst College, in contrast, underwent a dramatic change around 1900, with a persistent decline in graduation rate from about 75 to 85 percent in the 1890s to a range of about 50 to 60 percent between 1900 and 1905. Why this drop took place warrants close examination. In one year, there was an interesting explanation: most seniors refused to accept their degrees as a protest against the Board of Trustees for firing a president whom the students liked.

The College of William & Mary provides one of the most puzzling cases. Today, as indicated in the two recent studies, William & Mary has one of the best graduation rates among all public universities -- 91 percent in six years. In the period 1900 to 1905, retention after the first year seems consistent with that record, as more than 90 percent of freshmen returned for the sophomore year.

The surprising trend is that only about half the students then returned for the junior year. And, a year later at commencement, only a handful of students received the bachelor of arts degree. The explanation for this bizarre syndrome is that most students were from impoverished families and needed to earn a living. The Commonwealth of Virginia allowed undergraduates to receive, after two years, the "L.I.," or License of Instruction, which certified its holder to teach in Virginia's public schools.

Evidently the prospect of starting a teaching career and earning a salary after two years trumped the goal of completing a bachelor’s degree. What it meant was that for an extended period, William & Mary was enrolling an unconventional group of two-year college students within the structure and customs of a traditional four-year bachelor’s degree institution.

What these historical case studies show is that retention was relatively low, at least when analyzed by the expectations of higher education researchers today. In the period 1890 to 1910, one liberal arts college had an attrition rate of 50 percent after the freshman year, and at the end of four years its percentage of degree completions seldom surpassed 15 percent. At the high end, seldom does one find a college with a four-year graduation rate of more than 65 to 75 percent. One of the most unexpected findings revealed by student cohort tracking is the sign of substantial transfer into a college, along with stopping out and dropping out -- contrary to the notion of full-time undergraduates persisting at the same college for four years.

This story from a century ago does not at all dispel or contradict Crossing the Finish Line or "Diplomas and Dropouts." It does give a rich context as prelude to dissecting student attrition as a crisis in the early 21st century. A century ago, college presidents usually exaggerated or overestimated retention rates in their annual summaries -- whether by accident or design. One provocative suggestion is that college dropouts are a perennial problem in American higher education.

A Search for Answers

How to explain these surprising trends from a century ago?

Was the price of going to college causing students to stop their studies? This does not appear to be the case. Even though this was an allegedly "elite" era in access to higher education, college tuition charges were relatively low -- and showed scant increases over a two-decade period.

One intriguing explanation rests with the values of the student culture of the era. In the late 19th and early 20th centuries, one of the most popular banners found in dormitory rooms nationwide proclaimed, "Don't Let Your Studies Interfere With Your Education!" Evidently a lot of freshmen heeded this advice. At Yale, each class vied for the honor of having the lowest academic rating. In one yearbook the Class of 1904 boasted "more gentlemen and fewer scholars than any other class in the memory of man." Not to be outdone, the Class of 1905 countered with the self-congratulatory claim:

Never since the Heavenly Host
With all the Titans fought
Saw they a class whose scholarship
Approached so close to naught!

This herd instinct away from academic achievement evidently endured. Jumping ahead to the 1920s at Harvard, the dean reminded freshmen that the key to college persistence was “Three C’s, a D -- and keep your name out of the newspaper.” This could hardly be called academia’s “Great Expectations.” What it does suggest is a variation on the theme of what Bowen, Chingos, and McPherson called the syndrome of “under matching” in which a student succumbs to the low academic priorities of a campus culture.

What about the large state universities that started to emerge between World Wars I and II -- and which are central to the 21st century studies? My hunch is that the extension of modest admissions requirements combined with relatively low tuition charges created intolerable overcrowding that was not relieved until the campus construction boom of the 1960s. In 1936 the University of Wisconsin offered an introductory economics course in a lecture hall that was filled with 800 students. After World War II, academic officials at the University of California at Berkeley stated matter-of-factly that they preferred undergraduates to have a lecture course with 500 students and an esteemed professor, rather than have a small class with a lesser academic star.

One dysfunctional legacy was the oft-repeated episode where a professor at State U. starts the semester by looking out over a crowded lecture hall and reminds the freshmen, “Just because we have to take you doesn’t mean we have to keep you!”

Faculty and administrators appear to have been unconcerned about attrition until the early 1970s. Indeed, at some colleges and universities, a high dropout rate often was a source of perverse pride, taken as proof that a department had high academic standards. But that was then. Now, the reminder from the two recent reports is that the failure of students to complete the bachelor's degree is seen by higher education officials as a vexing problem with no obvious solutions.

John R. Thelin is University Research Professor in the History of Higher Education & Public Policy at the University of Kentucky.

