Graduation rates


Book Smarts and Basketball -- A Trend?

America's smaller colleges and universities are rarely given much chance of victory in the NCAA basketball tournament. But they call it March Madness for a reason, in large part because of the upsets in which an underdog takes on a heavy favorite and wins. Bucknell has just 3,500 students, but last year we enjoyed the thrill of taking on a far larger school and succeeding when we upset Kansas in the first round.

Since we earned a second straight bid to the NCAA men's basketball tournament last week, the media have praised our players' talent and tenacity. The Los Angeles Times described Bucknell as the "Duke of the Susquehanna." Added the Arkansas Democrat-Gazette: "Think (of a) bigger, stronger, more talented and more athletic Princeton."

As Bucknell's president, I can tell you we mean what we say at Bucknell -- on the court and, more important, in the classroom. Our basketball success has demonstrated that impressive academic and athletic achievements are not mutually exclusive. In fact, athletics directors and academic administrators at colleges and universities around the nation need to understand the new realities of Division I basketball competition.

Earlier this week, the Institute for Diversity and Ethics in Sport at the University of Central Florida released yet another study showing a disturbing disparity between basketball players' and all students' graduation rates. Of the 65 teams competing in the Big Dance, only Bucknell could boast a 100 percent graduation rate (one team, Penn, does not report such data). I want to suggest, however, that rather than remaining an anomaly in Division I athletics, Bucknell's program -- and many in the Patriot League -- should be taken as a model for the next big thing in college sports.

That is, our Bison have confirmed that fielding a team of smart players can create a competitive advantage.

There are enough bright students with basketball skills who want to play major college basketball to permit schools like Bucknell, with a driving focus on quality education, to succeed in the NCAA tournament.

Consider Kevin Bettencourt, who scored a game-high 23 points in last week's Patriot League tournament final. He's an American history major who chose Bucknell for its "great academic reputation along with Division I athletics." Chris McNaughton, the 6-11 center whose graceful hook shot sent Kansas home early last March, traveled all the way from Germany to avail himself of Bucknell's nationally celebrated electrical engineering program. Kevin, Chris, and Patriot League Player of the Year Charles Lee all earned G.P.A.s of 3.4 or better this fall.

When we recruit students like these, they choose us because they receive a great education and a great basketball opportunity, yet they always know that academics will come first. On Friday, Arkansas will meet a Bucknell team populated with future scientists, engineers, writers, and businessmen who, happily, also love to play basketball.

In the future, being a "big time" sports school is going to provide less competitive advantage than it used to, in part owing to the NCAA's academic reform plan. Bucknell supports reform efforts because we want all students, not just ours, to graduate with a solid education that prepares them for life.

Also, the schools that traditionally have dominated television coverage now have competition for viewers.  The championship games of all the conferences -- not just the ACC, SEC, Big East, or Big Ten -- are being broadcast. And the trend is accelerating with broadband coverage and new stations such as ESPNU and CSTV.

As the lesser-known basketball programs enjoy greater exposure, quality players will increasingly opt to attend institutions like Bucknell, knowing they will have a reasonable shot at two hours (or more) of fame every March. But more important, they will join the ranks of those alumni who are CEOs, COOs, university professors, doctors, and lawyers, ensuring far more than two hours in the spotlight.

Just ask Les Moonves, a 1971 Bucknell graduate. He runs CBS, and CBS runs all the Big Dance games.

Brian C. Mitchell

Brian C. Mitchell is the president of Bucknell University in Lewisburg, Pa.

A Compromise on Unit Records

Of all the ideas to come out of Margaret Spellings's Commission on the Future of Higher Education, the one from the final report that has proved most contentious inside the DC Beltway is the proposal for a unit-records database. Plenty of other controversial ideas were floated in the commission's hearings, briefing papers, and report drafts, but the one bureaucratic detail that most vexed private colleges and student associations over the past year is the idea that the federal government would keep track of every student enrolled in every college and university in the country. Given reports this year about the Pentagon hiring a marketing firm to collect data on teens and college students, the possibility that Big Brother would know every student's grades and financial aid package has worried privacy advocates.

Fortunately, privacy and accountability do not need to be at odds.

The proposal for a unit-records database was floated in a 2005 report that the U.S. Department of Education commissioned. Advocates have argued that the current system of reporting graduation data through the Integrated Postsecondary Education Data System (IPEDS) captures only the experiences of first-time, full-time students who stay in a single college or university for their undergraduate education. How do we capture the experiences of those who transfer, or those who accumulate credits from more than one institution? In theory, we could trace such educational paths by tracking individuals, using a Social Security number or another identifier to link records.

Charles Miller, who led the Spellings commission, was one of the unit-records database advocates and pushed it through the commission's deliberations. Community-college organizations liked the idea, because it would allow them to gain credit for the degrees earned by their alumni. But the National Association of Independent Colleges and Universities, the U.S. Student Association, and other organizations opposed the unit-records database, and in its current form the proposal is certainly dead on arrival as far as Congress is concerned.


There are three problems with a unit-records database. The first is privacy. I just don't believe that the federal government would keep my children's college records secure. An October report by the House Committee on Government Reform documents data losses by 19 agencies, including financial aid records for which the U.S. Department of Education is responsible. Who trusts that the federal Department of Education could keep records safe?

The second problem is accuracy. I have worked with the individual-level records of Florida, which has had a student-level database in elementary and secondary education since the early 1990s. If any state could have worked the kinks out, Florida should have. But the database is not perfectly accurate. I have seen records of first graders who are in their 30s (or 40s) and records of other students whose birthdays (as recorded in the database) are in 2008 and 2010. The problem is not that the shepherds of the database system are incompetent but that the management task is overwhelming, and there are insufficient resources to maintain the database. Poorly paid data-entry clerks spend their time entering students into the rolls and recording grades, withdrawals, and dozens of other small bits of information. We could probably have a nearly perfect unit-records system if we were willing to spend billions of dollars on maintenance, editing, and auditing. In all likelihood, a national unit-records system for all of higher education would push most of the costs onto colleges and universities, with insufficient resources to ensure complete accuracy.

The third problem with such a database is that the structure and size would be unwieldy. Florida and some other states have extensive experience with unit records, and very few researchers use the data that exist in such states. The structures of the data sets are complicated, and beyond the fact that using the data taxes the resources of even the fastest computers, the expertise needed to understand and work with the structures is specialized. Such experts live in Florida's universities and produce reports because they are the experts. But few others are. There would be no huge bonanza of research that would come from a national unit-records database.

A Solution: Anonymous Diploma Registration

Most of the problems with the unit-records database proposal can be solved if we follow the advice of statistician Steven Banks (from The Bristol Observatory) and change the fundamental orientation away from the question "Who graduated?" and toward the question "How many graduated?" The first question requires an invasion of privacy, expensive efforts to build and maintain a database, and a complex data structure that few will use. But the second question -- "How many graduated?" -- is the one to answer for accountability purposes. It's the question that community colleges want answered about their alumni. And it does not require keeping track of enrollment, course-taking, or financial aid every semester for every student in the country.

All we need is post-graduation reporting of diploma recipients by institutions, with birthdates, sex, and some other information, but without personal identifiers that would allow easy record linkage. Such a diploma registration system would fit with the process colleges and universities already go through in processing graduations. An anonymous diploma registration system could also identify prior institutions -- the high schools from which students graduated and the other colleges where they earned credits that transferred and counted toward graduation. Such an additional part of the system could be phased in, so that colleges and universities record the information when they evaluate the transcripts of transfer students and other applicants. Recording prior institutions would address community colleges' need to find out where their alumni went and how many graduated with baccalaureate degrees.

Under such a system, any college or university could calculate how many students graduated and the average time to degree (as my institution in Florida already can). Any college or university could also count how many of its students who transferred to other institutions eventually graduated. High schools would be able to identify how many of their own graduates finished college at either in-state or out-of-state institutions. Institutions could figure out what types of programs helped students graduate, and the public would have information that is more accurate and fairer than the current IPEDS graduation statistics. All of these benefits would come without having to identify a single student in a new database.
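To make the idea concrete, here is a minimal sketch of how an anonymous registry could answer "How many graduated?" for a feeder institution. Everything in it -- the record fields, degree labels, and institution names -- is invented for illustration; no actual registry format exists.

```python
# Hypothetical anonymous diploma records: each carries the degree-granting
# institution, degree level, year, and prior institutions recorded at
# graduation, but no name, Social Security number, or other identifier.
diplomas = [
    {"institution": "State U", "degree": "bachelor", "year": 2006,
     "prior": ["Riverside CC", "Central HS"]},
    {"institution": "State U", "degree": "bachelor", "year": 2006,
     "prior": ["Central HS"]},
    {"institution": "Tech U", "degree": "bachelor", "year": 2006,
     "prior": ["Riverside CC", "North HS"]},
]

def graduates_from_prior(diplomas, prior_name, degree_level="bachelor"):
    """Count degree recipients who passed through a given feeder school,
    without identifying any individual student."""
    return sum(1 for d in diplomas
               if d["degree"] == degree_level and prior_name in d["prior"])

# Riverside CC learns that two of its former students earned bachelor's
# degrees somewhere; Central HS can ask the same question of its graduates.
print(graduates_from_prior(diplomas, "Riverside CC"))  # -> 2
print(graduates_from_prior(diplomas, "Central HS"))    # -> 2
```

Because the records contain no identifiers, the aggregate count is all that ever leaves the registry; no record linkage across institutions is needed.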

A short column is not the place to describe the complete structure for such a system or to address the inevitable questions. I am presenting the idea in more depth this afternoon at the Minnesota Population Center, and I have established an online tutorial describing the idea of anonymous diploma registration in more detail. But I am convinced that the unit-records database idea is wasteful, dangerous, and unnecessary. Anonymous diploma registration is sufficient to address the most critical questions of how many graduate from institutions, and it does not threaten privacy.

Sherman Dorn

Sherman Dorn is an historian of education at the University of South Florida. He edits Education Policy Analysis Archives and writes about education policy and other matters in his professional blog.

Reject the 'Finish in 4' Fad

"Finish in four, I promise!" That is what Northern Arizona University is telling its incoming students. With a little better advising and a binding contract to take 15 credits per semester, the university promises that students can complete their undergrad degrees in four years. Utah State University, the University of Iowa and the University of Colorado at Boulder are also offering similar guarantees.

Now, there are some strings. As the Tucson Citizen notes, "It doesn't hold if students change majors midway through college or drop or flunk several courses. A few majors, such as engineering, are excluded because some students need to take pre-college math courses that can extend graduation beyond four years." So, do it right, make no changes, make no mistakes, and you can move efficiently through the university.

As someone who has to report to my university’s provost about what we will do to get our students to graduate in four years, I am sensitive to this newest fad. It affects how our institutions will be ranked and how parents will select the perfect place for their children to study. Yet, as a five-year undergrad myself, I am not sure why this is even a good goal. Yes, our federal loan money, and our state subsidies, will go to more students if we can push them through, but that is exactly what we would be doing ... pushing. And is that what we are here to do? For that matter, is efficiency a worthwhile measure of a college? Of a student?

When I attend events to recruit new students, I rejoice in those who don't know what they want to do. They come to the experience open for adventure, exploration, excitement, and challenge. I tell them that they will probably do better than those who have their future planned out. Why? Because most students change their majors. And, at a public university like mine, students are even more likely to change their majors than their private college counterparts.

Why do students change their majors? I think it is because students have little idea about (a) what jobs exist, (b) what majors correspond with what jobs, (c) what they are good at, and (d) what course of study would best use their abilities.

Hell, when I attend college major recruitment fairs, almost all the students and their parents line up for business, pre-med, and pre-law. (Working class folks tend to go for health sciences and business, because they hear there are jobs there.) I am tempted to just hand out fliers that say, "Business majors have to take accounting and advanced math. Pre-med (and health sciences) folks have to take a LOT of science courses... with labs! When you find you don't like those courses, or you fail a few of them because you actually have no special ability in advanced math or science, come check us out!"

That is how we get our majors, for the most part: students realize that they picked a major for some bogus reason -- say, they knew someone with X job who made a lot of money -- and, as they take more classes in that area, they discover that it is not what they originally thought or that it does not suit them. Then they look for something that actually suits their interests and talents. So the parents who pushed them into their original major gnash their teeth and complain when their children have to take additional courses to meet our requirements, which differ from those of the original major, and their time is extended. Yet, while this can be more costly, it is such a bargain in the long term. Better to make the change as an undergrad than to figure out, after earning the degree, that you are ill-suited for the professions for which you were prepared.

So, among those who don't finish in four, we first have the confused. Add to this number the students who party too much, who attend a college that doesn't suit them (that was my error), who have adjustment issues transitioning to undergraduate life, whose mental illness expresses itself during college, who have personal traumas in their lives (also my issue), whose families face financial downturns, who face discrimination or harassment, and/or who just bomb a class or two. Suddenly, our numbers look terrible! See how few students we graduate in four years!?! (And we aren't even counting the transfer students -- the year-to-degree numbers only count students who entered as freshmen. If we included those folks in our numbers, we would see how few students really graduate in four years.)

If we still have a perverse need to measure time to degree rates, we should extend the bar to six years of full-time study, as we do for athletes and for some federal reporting requirements. (Athletes are not the only ones balancing academics with other interests!) We should exclude students who move to part-time status from our count. But I would hope that we would not use these data to rate institutions.

Finish in four sends the wrong message. It says that college is simply utilitarian, a means to a financial end. We should recognize that college is not high school. It is about self-discovery, the investigation of different majors and fields, and intellectual exploration and development. Let's reject this fad and focus on the long-term goals: producing graduates who can write, read, and think critically, and who can contribute to our society.


Lesboprof is the pseudonym of a faculty member and administrator at a public university in the Midwest where the official line is that four years and out is a good thing.

Making Graduation Rates Matter

Education Secretary Margaret Spellings recently wrote a letter to the editor of The Detroit News in defense of her higher education commission's proposal for a national “student unit record” system to track all college entrants and produce a more accurate picture of degree completion. “Currently,” she said, “we can tell you anything about first-time, full-time college students who have never transferred -- about half of the nation's undergraduates.” It took a long time to bring Education Department officials to a public acknowledgment of what its staff always knew: that the so-called “Congressional Methodology” of our national college graduation rate survey doesn't pass the laugh test. If the Secretary's Commission on the Future of Higher Education made one truly compelling recommendation, it was for a fuller and better accounting through student unit records.

But it was well known that the establishment of a national student unit record system was a non-starter in Congress due to false worries about privacy and data security. So one wonders why the department hasn't simply proposed a serious revision of the process and formula for determining graduation rates. Having edited and analyzed most of the department's postsecondary data sets, may I offer an honest and doable formula?

There are four bins of graduates in this formula, and they account for just about everyone the Secretary justly wants us to count. They count your daughter’s friends who start out as part-time students -- who are not counted now. They count your 31-year-old brother-in-law who starts in the winter term -- who is not counted now.  They count active duty military whose first college courses are delivered by the University of Maryland’s University College at overseas locations -- who are not counted now.  They count your nephew who transferred from Oklahoma State University to the University of Rhode Island when he became interested in marine biology -- and who is not counted now.  And so forth.  How do you do it, dear Congress, when you reauthorize the Higher Education Amendments this year?

First, define an “academic calendar year” as July 1 through the following June 30, and use this as the reference period instead of the fall term only. Second, define the tracking cohort as all who enter a school (college, community college, or trade school) as first-time students at any point during that period and who enroll for 6 or more semester-equivalent credits in their first term (thus excluding incidental students).

Automatically, institutions would be tracking students who enter in winter and spring terms and those who enter part-time. Your brother-in-law, along with other non-traditional students, is now in the denominator along with your daughter.  Ask our colleges to divide this group between dependent traditional age beginners (under age 24) and independent student beginners (age 24 and up), and to report their graduation rates separately. After all, your daughter and your brother-in-law live on different planets, in case you haven’t noticed. You now have two bins.

Third, establish another bin for all students who enter a school as formal transfers. The criteria for entering that bin are (a) a transcript from the sending institution and (b) a signed statement of transfer by the student (both of which are usually part of the application protocol). These criteria exclude the nomads who are just passing through town. 

At the present moment, community colleges get credit for students who transfer, but the four-year colleges to which they transfer get no credit when these transfer students earn a bachelor’s degree, as 60 percent of traditional-age community college transfers do.  At the present moment, 20 percent of the bachelor’s degree recipients who start in a four-year school earn the degree from a different four-year school.  That we aren’t counting any of these transfers-in now is a travesty -- and makes it appear that the U.S. has a much lower attainment rate than, in fact, we do.  All this hand-wringing about international comparisons that puts us on the short end of the stick just might take a different tone.

Fourth, ask our postsecondary institutions to report all students in each of the three bins who graduate, at two intervals: for associate degree-granting institutions, at 4 and 6 years; for bachelor's degree-granting institutions, at 6 and 9 years. For institutions awarding credentials below the associate degree, a single two-year graduation rate will suffice. Transfers-in are more difficult, because they enter an institution with different amounts of credit, but we can put them all on the same reporting schedule as community colleges, i.e., 4 and 6 years.

These intervals will account for non-traditional students (including both active duty military and veterans) who move through the system more slowly due to part-time terms and stop-out periods, but ultimately give due credit to the students for persisting. These intervals will also present a more accurate picture of what institutions enrolling large numbers of non-traditional students, e.g. the University of Texas at Brownsville, DePaul University in Chicago, and hundreds of community colleges, actually do for a living.

Colleges, community colleges, and trade schools have all the information necessary to produce this more complete account of graduation rates now. They have no excuse not to provide it. With June 30 census dates for both establishing the tracking cohort and counting degrees awarded, the algorithms are easy to write, and data systems can produce the core reports within a maximum of two months. It's important to note that the tracking cohort report does not replace the standard fall term enrollment report, the purposes of which are very different.
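The cohort and bin logic described above really is easy to write, as this sketch suggests. The records, field names, and helper functions here are invented for illustration; they are not drawn from any Department of Education specification.

```python
from datetime import date

# Hypothetical student records: entry date, age at entry, first-term
# credits, transfer status, and graduation date (None if not graduated).
students = [
    {"entered": date(2000, 9, 1), "age_at_entry": 18, "first_term_credits": 12,
     "transfer": False, "graduated": date(2004, 5, 15)},
    {"entered": date(2001, 1, 10), "age_at_entry": 31, "first_term_credits": 6,
     "transfer": False, "graduated": date(2008, 12, 20)},  # winter-term entrant
    {"entered": date(2000, 9, 1), "age_at_entry": 20, "first_term_credits": 3,
     "transfer": False, "graduated": None},                # incidental: excluded
    {"entered": date(2000, 9, 1), "age_at_entry": 22, "first_term_credits": 15,
     "transfer": True, "graduated": date(2003, 5, 15)},    # formal transfer-in
]

def in_calendar_year(d, start_year):
    # Academic calendar year: July 1 of start_year through June 30 following
    return date(start_year, 7, 1) <= d <= date(start_year + 1, 6, 30)

def bin_of(s):
    if s["first_term_credits"] < 6:
        return None                      # incidental student, not tracked
    if s["transfer"]:
        return "transfer-in"
    return "traditional" if s["age_at_entry"] < 24 else "independent"

def grad_rate(students, start_year, bin_name, years):
    """Graduates and cohort size for one bin at one reporting interval."""
    cohort = [s for s in students
              if in_calendar_year(s["entered"], start_year)
              and bin_of(s) == bin_name]
    done = [s for s in cohort if s["graduated"]
            and (s["graduated"] - s["entered"]).days <= years * 365.25]
    return len(done), len(cohort)

print(grad_rate(students, 2000, "traditional", 6))   # -> (1, 1)
print(grad_rate(students, 2000, "transfer-in", 6))   # -> (1, 1)
```

The winter-term entrant lands in the independent bin's denominator and shows up as a graduate only at the longer nine-year interval, which is exactly the point of reporting two intervals per bin.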

But there is one more step necessary to judge institutions' contribution to the academic attainment of the students who start out with them.

So, in rewriting the graduation rate formula in the coming reauthorization of the Higher Education Amendments, Congress should also ask all institutions to make a good faith effort to find the students who left their school and enrolled elsewhere to determine whether these students, too, graduated.  The National Student Clearinghouse will help in many of these cases, the Consortium for Student Retention Data Exchange will help in others, state higher education system offices will help in still others, and we might even get the interstate compacts (e.g. the Western Interstate Commission on Higher Education) into the act.  Require our postsecondary institutions to report the students they find in a fourth bin.  They will not be taking credit for credentials, but will be acknowledged as contributing to student progress.

No, this is not as full an account as we would get under a student unit record system, but it would be darned close -- and all it takes is a rewriting of a bad formula.

Clifford Adelman

After 27 years of research for the U.S. Department of Education, Clifford Adelman recently left to be a senior associate at the Institute for Higher Education Policy. His last monograph for the department was The Toolbox Revisited: Paths to Degree Completion from High School Through College (2006).

Start With a Number...

I am a faculty member, and so began my career with an almost inborn distaste for assessment, which seemed like the advanced jargon of administrators with a quixotic envy for corporate processes. The only model for assessment that I could think of was legislatively or decanally mandated, and therefore it smacked of makework. Over the past two years, though, I've come round quite a bit, and now see assessment as both politically inevitable and pedagogically useful -- if done correctly. That it is politically inevitable doesn't mean it's wrong -- higher education should become more transparent to interested parties. Would you rather a legislator, donor, or prospective student base decisions on incomplete data, hearsay, and idiosyncratic assumptions? Of course not.

This essay is about a number, the kind of number that made me take an interest in assessment's possibilities. While John Lombardi is rightly skeptical about the National Survey of Student Engagement surveys, which measure student satisfaction, there is a wealth of data in those surveys that, when appropriately framed, can help us think creatively about our work with students.

Like many regional comprehensive universities, the institution where I teach worries about its six-year graduation rates. Our mission of providing access to first-generation and other precarious aspirants to higher education is imperiled if we cannot help these students graduate. Our numbers haven't always been great, but a series of initiatives over the past few years may have started nudging the percentages in the right direction.

Many faculty members respond -- I have responded -- to attention to graduation rates in a couple of different ways: first, to blame others (the students!), and second, to assume that we will be asked to make the curriculum less rigorous. It sounds like an attack: How can you be doing your job if so few students finish?

But at a recent meeting about assessment, I learned the following tantalizing datum: Sixty-three percent of our full-time students who complete their first semester with a 3.0 or better grade-point average graduate within six years. When full-time students finish the first semester with a GPA below 2.0, only 9 percent graduate within six years.

This sort of tracking, conceived and performed by experts in assessment and statistical analysis, ought to spur professors to think about their mission, about their individual courses, and about their institutions' political status in a state or system. What are we teaching our students? How can we convey to first-year students the seriousness of creditable habits? How can we discuss seriously with outside stakeholders the challenges posed by teaching adults?


For some time now, the great fetish of assessment gurus has been so-called "value-added" assessment: You can't just test what students know at the end of a semester or a program of study, because such a test can't discriminate between knowledge gained during the course and outside of it. Many professors and institutions use a combination of pre- and post-assessment as a kludge: "Here's what the students know at the start of the semester" and "Here's what they know at the end." This is a start, but it's still somewhat indirect, since improvement on such metrics doesn't always capture causal relationships.

The 63/9 percent statistic might call into question the value of pre- and post-assessments that aren't specifically about bodies of knowledge, since it suggests that differences in student performance arise from factors external to the particular class or course of study. The student with a 3.5 in her first semester doesn't need to be taught critical thinking; she is already an adept critical thinker, and will simply be refining that skill and adding to her base of knowledge. The student, by contrast, who struggles to achieve a 1.4 could very well improve -- we all know students who have done so, and perhaps some of us have even been that student. It's also possible that the student might have performed better on a different measure than grades. But it might also be the case that that student needs to pull away from college for a while. Perhaps she needs to try again in a semester when her childcare is more stable, or after she's saved up money, or after her father has weathered his major surgery. Or maybe he needs to come back after some time away, having reflected on what makes college success possible. (Again, some of us might have been this student.) Perhaps she needs to rethink whether college is, at present, as necessary to her career path as she believes. Is it the right thing to aspire to keep all such students on campus at all costs? Could a low graduation or retention rate mean that the institution helps students make good long-term decisions, even if sometimes that decision is that they need to put off higher education?

To put all of this slightly more directly: The consistency of outcomes from first semester to sixth-year graduation suggests that we need to take a deep breath and think about what we're doing. Blaming K-12 educators for delivering us poor students isn't very credible when, to a surprising extent, we simply validate their outcomes.


Surveys of student engagement repeatedly indicate that first-year students put in nothing like the mythical two to three hours of out-of-class preparation for each hour in class. Indeed, many students spend fewer hours studying outside of class than they spend in class during the week. The 63/9 split is relevant here: Do you pitch your course to those students who will do the work outside of class? ("Teaching to the six," as Michael Bérubé once called it.) Or do you try to make the course manageable by more students?

The split suggests that the latter strategy is a good example of the fallacy of good intentions. You can craft an intro course such that more students pass it, but such strategies smack of social promotion -- students not adept at managing college work in the first semester are going to continue to struggle. What's necessary instead is a pedagogy that bootstraps students into desired study habits. Technology can help: required posts to a class discussion board or blog, the use of social bookmarking tools to create a community of inquiry, the capacity of course management software to grade simple quizzes for you -- all of these things can help students learn how to prepare without necessarily sucking up vast quantities of time.

We can decry a generation brought up believing in the myth of multi-tasking (and that myth has done our students real harm), but unless we systematically design courses to inculcate sustained attention -- and then reward that attention by making class time intellectually meaningful -- we're not really contributing much beyond gripes and moans.


Assessment in college is different from assessment in elementary and secondary education, since college isn't mandatory. We control much less about our students than did the parents and teachers who have taught them (or not) over the previous 18 or more years. The choices of young adults drive their success far more than anything we offer.

It's true that legislators, tuition-payers, and future employers of our graduates have the right to demand effective teaching. But we can't teach students who are forced to work 35 hours a week while they're in college. We can't teach students who don't have access to affordable, reliable daycare. We can't teach students who have significant health concerns. The rhetoric of assessment is all too frequently pitched at whipping those tenured layabouts -- or, worse, tenured radicals -- into compliance. But turning any college into a legislators' paradise -- 5/5 teaching loads taught by contingent faculty -- won't have demonstrable results on student success. Effective assessment of colleges and universities needs to be thought of as promoting learning, not as disciplining the unruly faculty.

Many faculty are suspicious about assessment, whether for ideological reasons or because they perceive it as an unfunded administrative mandate. And faculty hear numbers, especially subpar numbers, as an indictment of their expertise or their empathy for students. I have reacted this way myself. Now, however, I try to remember that numbers are an opening salvo, not the final word: We've got a measurement -- how do we improve it? That number looks bad -- but what are its causes? Is the instrument measuring the right thing? Are we administering it in the best way? Are we making sure there's a tight fit between assessment measures and intended learning outcomes? Until we begin to think clearly -- both within departments, across schools, and even across peer institutions -- about what our students are up to, our own cultural position will continue to seem in crisis.

Jason B. Jones

Jason B. Jones is an associate professor of English at Central Connecticut State University. His book, Lost Causes: Historical Consciousness in Victorian Literature, was published in 2006 by Ohio State University Press. Online, he maintains a blog at The Salt-Box and contributes regularly to PopMatters and Bookslut.

What to Measure and Reward at Community Colleges

At a time when postsecondary education is a requirement for an increasing number of U.S. jobs, community colleges provide broad access to higher education, enrolling nearly half of the nation’s undergraduates. But is access enough? Fewer than half of degree-seeking community college students achieve their goals. Do we want merely to get students to attend college, or are we committed to seeing them through to graduation?

One might think that states, in order to reap the economic benefits of a more educated workforce, would offer incentives for more students to complete their education. But most states link their support of community colleges to enrollment levels, not to student progress or success. Public funding rewards getting students into the college, independent of whether any given student is achieving his or her educational goal or is on the road to dropping out.

Over the years, a number of states have experimented with financial incentives based on performance measures like graduation rates; but a newly approved program in Washington state takes a bold and different approach. The State Board for Community and Technical Colleges decided that institutions might be more motivated to improve performance by rewards for student progress past key “momentum points,” as well as for completion. Under the new plan, Washington will reward community and technical colleges for every student who achieves particular research-based benchmarks leading up to and including graduation.

Washington's community and technical colleges will receive extra money for students who earn their first 15 and first 30 college credits, earn their first 5 credits of college-level math, pass a pre-college writing or math course, make significant gains on certain basic skills tests, or earn a degree or complete a certificate. Colleges also will be rewarded for students who earn a GED through their programs. All of these benchmarks are important accomplishments that help propel students forward on the road of higher education.
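The benchmark list above amounts to a simple points-and-payments scheme: each momentum point a student reaches earns the college a fixed reward. The sketch below illustrates the logic only -- the State Board's actual point values and dollar amounts are not given in this piece, so every number here is invented.

```python
# Hypothetical sketch of a "momentum points" funding model like Washington's
# Student Achievement Initiative. The benchmark names follow the article;
# the point values and the dollars-per-point rate are invented.

MOMENTUM_POINTS = {
    "first_15_credits": 1,
    "first_30_credits": 1,
    "first_5_college_math_credits": 1,
    "pre_college_writing_or_math": 1,
    "basic_skills_gain": 1,
    "ged_earned": 1,
    "degree_or_certificate": 1,
}

def college_award(students, dollars_per_point=100):
    """Sum points earned across all students and convert to dollars.

    students: an iterable of sets, each set naming the benchmarks
    one student reached this year.
    """
    total_points = 0
    for benchmarks in students:
        total_points += sum(MOMENTUM_POINTS[b] for b in benchmarks)
    return total_points * dollars_per_point

# Two students: one finished 15 credits and a pre-college math course,
# one earned a degree. 3 points x $100 = $300.
students = [
    {"first_15_credits", "pre_college_writing_or_math"},
    {"degree_or_certificate"},
]
print(college_award(students))  # 300
```

The design point worth noticing is that a student who never graduates can still generate revenue at each intermediate benchmark, which is exactly the incentive shift the article describes.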

Washington State’s Student Achievement Initiative rewards its colleges for helping students continue moving forward regardless of where they start or how far they may be from attaining their educational goals. Successful students take many intermediate steps between enrollment and graduation, each accomplishment building a foundation for future success. Washington state’s plan recognizes the importance of supporting students as they achieve these intermediate milestones and rewards colleges for doing so. A student who is unable to pass a pre-college math course, for example, cannot continue on to college-level work, much less earn a degree.

We know there are key points along students’ educational journeys where they may be more likely to discontinue or postpone their studies. Students who are underprepared for college-level work are less likely to graduate than their peers who move directly into college classes, for example. However, an analysis of data from Achieving the Dream: Community Colleges Count, a national initiative to help more community college students succeed, shows that students who successfully completed any developmental course in their first semester were actually more likely than their peers to persist and succeed. Washington’s plan seeks to focus colleges’ attention on some of these key educational turning points and improve the odds of success at each step.

Knowing that the success of the Student Achievement Initiative depends upon buy-in at the institutional level, from CEOs down to classroom faculty, the State Board pursued an inclusive design process and is reaching out to every college in the state. During the design phase, presidents, trustees, business and civic leaders, faculty representatives and others -- both supportive and skeptical -- were consulted. In the current year, when the new system will be tested before full implementation, video conferences have been held with faculty members, administrators, and other staff at every college.

This incentive program is a good fit in Washington, which is among 15 states across the country participating in the Achieving the Dream initiative. Participating colleges make five specific commitments, which align well with Washington’s new benchmarks. The colleges pledge to increase the percentage of students who complete developmental courses, complete introductory college courses, complete any courses they take with a “C” or better, re-enroll from one academic term to the next, and earn certificates and degrees. For each commitment, colleges analyze data to measure their progress with support and guidance from the initiative.

Currently, six of Washington’s 34 community and technical colleges participate in Achieving the Dream and can serve as a learning laboratory for the entire system. The state’s incentive plan gives colleges the freedom to figure out how best to improve their students’ success rates, and being able to learn from peers who have already analyzed the effectiveness of various strategies will help them make more informed decisions.

Washington isn’t the only state where such an incentive system can work. With more than 80 participating colleges, Achieving the Dream provides an existing support network for efforts to improve student success rates. And offering student success incentives need not be confined to Achieving the Dream states. More states should implement similar programs, altering incentives in ways that will compel colleges to action. With so many students in community colleges and so many of today’s jobs requiring higher-level skills, it just makes sense.

George R. Boggs and Marlene B. Seltzer

George R. Boggs is president and CEO of the American Association of Community Colleges. Marlene B. Seltzer is president and CEO of Jobs for the Future. Both of their groups are among nine national organizations working together as part of Achieving the Dream: Community Colleges Count.

Failure in Urban Universities

As a resident of the District of Columbia, it's been fascinating to watch the ascendant rock stardom of Michelle Rhee, the D.C. public schools chancellor. A 38-year-old Harvard grad and single mother of two, she's been profiled in Newsweek, interviewed by the Wall Street Journal, and featured on Charlie Rose. Her panel at the Democratic National Convention drew capacity crowds. All because she's trying to reform an urban school system legendary for incompetence, corruption, and failure. And she's not alone: Big city mayors across the country have seized control of their school systems in recent years, risking political capital on the premise that schools can serve predominantly low-income and minority students far better than they have in the past. Those schools and students have become the central K–12 education challenge of our time.

Washington’s public school system is not, however, the only public education institution in the city. There's another with very similar problems: deteriorating facilities, shrinking enrollment, rock-bottom graduation rates, and a troubled history rife with tales of mismanagement and worse. It's the University of the District of Columbia. But while the recent announcement of a new UDC president garnered respectful coverage in the local newspaper, it's a safe bet that Allen Sessoms -- a Yale-educated physics professor and former leader of Delaware State University and Queens College -- won't be making the national media rounds anytime soon. Urban higher education simply doesn't generate the urgency and attention directed to K–12, even though it faces many of the same challenges and educates many of the same students. This is a huge problem, and a quick look at graduation rates for the less selective public urban universities on the table below shows why:

City | University | Enrollment, Fall 2007 | 6-Year Graduation Rate | Black 6-Year Graduation Rate | Hispanic 6-Year Graduation Rate | % of Students in Graduation Rate Data | Transfer-Out Rate
Chicago | Chicago State U. | 5,217 | 16% | 16% | 13% | 30% | 30%
Chicago | Northeastern Illinois U. | 10,285 | 19% | 8% | 17% | 41% | 37%
Washington | U. of District of Columbia | 5,137 | 19% | 18% | 17% | 46% | N/A
Denver | Metropolitan State College | 21,425 | 23% | 18% | 24% | 38% | 4%
El Paso | U. of Texas at El Paso | 16,769 | 29% | 27% | 28% | 58% | 33%
San Antonio | U. of Texas at San Antonio | 24,705 | 30% | 28% | 31% | 65% | N/A
Los Angeles | California State U. at Los Angeles | 16,046 | 31% | 16% | 29% | 41% | 42%
Indianapolis | Indiana U.-Purdue U. at Indianapolis | 21,202 | 31% | 26% | 23% | 93% | N/A
Detroit | Wayne State U. | 21,145 | 32% | 10% | 20% | 50% | N/A
Memphis | U. of Memphis | 15,984 | 34% | 27% | 48% | 55% | 19%
Boston | U. of Mass at Boston | 10,008 | 35% | 28% | 37% | 37% | 5%
New York City | CUNY City College | 11,181 | 36% | 40% | 27% | 59% | 30%
Denver | U. of Colorado at Denver | 11,702 | 39% | 24% | 24% | 19% | N/A
Milwaukee | U. of Wisconsin at Milwaukee | 4,395 | 41% | 15% | 27% | 69% | N/A
Las Vegas | U. of Nevada at Las Vegas | 21,975 | 41% | 32% | 36% | 53% | N/A
Nashville | Tennessee State U. | 7,132 | 41% | N/A | N/A | 99% | 12%
San Jose | San Jose State U. | 24,390 | 42% | 25% | 36% | 46% | 39%
Houston | U. of Houston | 27,572 | 43% | 40% | 39% | 50% | 24%
St. Louis | U. of Missouri at St. Louis | 12,432 | 43% | 33% | N/A | 10% | N/A
San Francisco | San Francisco State U. | 25,134 | 44% | 23% | 38% | 49% | 37%
These self-reported numbers (courtesy of NCES) come with many caveats. They're six-year graduation rates, and some students graduate in more than six years. They don't count students who transfer and graduate elsewhere, and some universities -- those in California stand out -- produce more transfers than graduates. They include only students who start full-time (the "% of Students in Graduation Rate Data" column shows those students as a percentage of all students).
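The caveats above can be made concrete with a small sketch. The cohort rules follow the standard federal convention described in the paragraph -- only first-time, full-time entrants are counted, and students who transfer out are counted against the rate even if they graduate elsewhere. The data and helper function are invented for illustration.

```python
# Hypothetical sketch of how a six-year graduation rate like those in the
# table is computed. All data are invented.

def six_year_rate(cohort):
    """Fraction of the entering full-time cohort that graduated from
    this institution (or, under the broader OECD convention, from any
    institution) within six years. Transfers with unknown outcomes
    count against the rate."""
    if not cohort:
        return 0.0
    grads = sum(1 for s in cohort if s["graduated"])
    return grads / len(cohort)

# 10 full-time entrants: 3 graduate here, 3 transfer out (fate unknown),
# 4 leave without a degree.
cohort = (
    [{"graduated": True, "transferred": False}] * 3
    + [{"graduated": False, "transferred": True}] * 3
    + [{"graduated": False, "transferred": False}] * 4
)
print(six_year_rate(cohort))  # 0.3 -- the transfers drag the rate down
```

This is why a high transfer-out rate, as at the California institutions in the table, can depress a published graduation rate without telling you what actually happened to those students.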

But even taking all of those things into account, it's clear that a great many students are entering urban universities and never completing a degree. There's a good chance that including part-timers would make graduation rates worse. And in most cases, the numbers for black and Latino students are particularly bad. Among the 20 universities on this list -- institutions that collectively enroll over 300,000 undergraduates -- the median six-year graduation rate for black students is 25 percent. No amount of extensions, adjustments or allowances would raise that number to a level that anyone should accept as good enough. (Increasing the timeline from six to eight years at Wayne State University, for example, boosts the black graduation rate from 10 percent to 20 percent -- twice as good, but still very bad.) One constantly hears policymakers lament the fact that barely half of minority students graduate from high school on time. For these universities, that would be a huge improvement.

These catastrophic failure rates are certainly not all the universities' fault. The latest UDC schedule of classes shows the fallout of the K–12 district's historical failure. The math department is offering:

  • 16 sections of "Basic Mathematics"
  • 13 sections of "Introductory Algebra"
  • 9 sections of "General College Math I"
  • 7 sections of "General College Math II"
  • 4 sections of "Intermediate Algebra"
  • 2 sections each of "Pre Calc with Trig I," "Pre Calc with Trig II," "Calculus I," "Calculus II," and "Calculus III"
  • 1 section each of "Differential Equations," "Number Theory," "Linear Algebra," "Advanced Calculus," etc.

Any number of high schools in the DC metropolitan area offer proportionately more advanced math. Overall, nearly 70 percent of incoming UDC freshmen need some remediation. Like too many colleges and universities, UDC is often forced to be an essentially secondary -- not postsecondary -- institution.

UDC's budget was also slashed during the city's financial restructuring in the mid-1990s. Most UDC students juggle work and family while trying to pay for college with limited means. All commute; there are no dorms. The small campus of nameless, numbered concrete buildings, rendered in the brutalist style, has been allowed to crumble.

But UDC is also an institution that is often described as "poorly run" and worse. The average age of the unionized, highly-tenured faculty is 68. Despite having a relatively small student body with concentrated academic needs, UDC offers a range of degree programs that grant very few degrees. More than 30 years after being created through the forced marriage of a local teachers college, city college, and technical institute, old institutional divisions remain.

To varying degrees, these problems are mirrored in urban universities nationwide -- academically unprepared students, insufficient funding, and the worst of city politics and higher education administration put together in one tangled mass of dysfunction. There are exceptions, of course, institutions and departments doing great things despite many challenges. But on the whole, the odds are stacked against many city college students, and the outcome data reflect the end result.

Beyond specific problems of preparation, funding, administration and teaching, the terrible success rates at urban universities reflect the fundamental difference in the way K–12 and college students are viewed. The underlying premise of any conversation about elementary and secondary education is that the schools bear significant responsibility for student success. But the moment students walk off the high school graduation stage, they are magically transformed in the public eye into fully actualized adults who bear 100 percent of the burden for any and all educational outcomes that subsequently occur -- or don't occur. As Peter Smith, founding president of California State University at Monterey Bay, said of high college drop-out rates in his book The Quiet Crisis: How Higher Education Is Failing America:

"In colleges and universities, the institution is not at fault; I, as president, am blameless. The traditional model of college tells us that it is the students who have failed, not the college. They bear the shame."

No wonder political leaders aren't throwing their weight and money behind improving urban universities. If the onus of success or failure falls entirely on the students, what's the point?

So we find ourselves, in a time when more students want and need college than ever before, herding large numbers of academically at-risk, disproportionately low-income students into urban universities built on a traditional model that doesn't serve them well. They are the very same students whom we're trying so hard to get through high school -- only to turn our attention away from them just a few months or even weeks before they falter in college. All because of the strange and dangerous idea that educational institutions bear little responsibility for how much their students learn or whether those students earn degrees. Until that changes, the quiet crisis of urban higher education will continue, and much of the best work of K–12 reformers will come to naught.

Kevin Carey

Kevin Carey is the research and policy manager of Education Sector. He blogs about K–12 and higher education policy issues at The Quick and the Ed. An archive of his Outside the Circle columns may be found here.

The Propaganda of International Comparisons

The latest rhetorical trope in the bad news presentation of U.S. higher education is to say -- even when home front improvements are acknowledged -- “Wait a minute! But other countries are doing better!" and rush out a litter of population ratios from the Organization for Economic Co-operation and Development (OECD) that show the U.S. has “fallen” from 2nd to 9th or 3rd to 15th place in whatever indicator of access, participation and attainment is at issue.

The trope is not new. It's part and parcel of the enduring propaganda of numbers. Want to wake up a culture glued to the newspaper brackets of the Final Four, that places bets on Oscar nominees, checks the Nielsen ratings weekly, and still follows the Top 40? Tell them someone big is down. In the metrics of international economic comparisons we treat trade balances, GDP, and currency exchange rates the same way, even though the World Economic Forum continues to rank the U.S. No. 1 in competitiveness, and the recent strength of the dollar should tell anyone with an ounce of common sense that the markets endorse that judgment in the midst of grave economic turmoil.

Except in matters of education, the metrics of the trope are false, and our use of them both misguided and unproductive. The Spellings Commission, ETS, ACT, the Education Commission of the States, the Alliance for Excellent Education, and, most recently, the annual litany of "Measuring Up" and the College Board’s "Coming to Our Senses" all lead off their reports and pronouncements on higher education with population ratios (and national rankings) drawn from OECD’s Education at a Glance, and assume these ratios were passed down from Mt. Sinai as the tablets by which we should be judged.

The population ratios, particularly those concerning higher education participation and attainment for the 25-34 age cohort, well serve the preferred tendency of these august bodies and their reports to engage in a national orgy of self-flagellation that purposefully neglects some very basic and obvious facts.

To be sure, U.S. higher education is not doing as well as we could or should in gross participation and attainment matters, but on the tapestry of honest international accounts, we are doing better than the propaganda allows. When you read reports from other countries’ education ministries that worry about their horrendous dropout rates and problems of access, you would think they don’t take population ratios seriously.

Indeed, they don’t, and one doesn’t need more than 4th grade math to see the problems with population ratios, particularly in the matter of the U.S., which is, by far, the most populous country among the 30 OECD member states.

None of our domestic reports using OECD data bothers to recognize the relative size of our country, or the relative diversity of races, ethnicities, nativities, religions, and native languages -- and the cultures that come with these -- that characterize our 310 million residents. Though it takes a lot to move a big ship with a motley crew, these reports all would blithely compare our educational landscape with that of Denmark, for example, a country of 5.4 million, where 91 percent of the inhabitants are of Danish descent, and 82 percent belong to the same church.

For an analogous common sense case, Japan and South Korea don't worry about students from second language backgrounds in their educational systems. Yes, France, the UK, and Germany are all much larger and more culturally diverse than Denmark, but they offer nowhere near the concentration of diversities found in the U.S. It's not that we shouldn't compare our records to theirs; it's just that population ratios are not the way to do it.

OECD has used census-based population ratios to bypass a host of inconsistencies in the ways its 30 member countries report education data, but, as it turns out, the 30 member countries also employ different census methodologies, so the components of the denominator from Sweden are not identical with the components of the denominator from Australia. With the cooperation of UNESCO and Eurostat’s European Union Labor Force Survey, and occasionally drawing on microdata from what is known as the Luxembourg Income Study, OECD has made gallant efforts to overcome the inconsistencies, but you can’t catch all of them.

When ordinary folk who have no stake in education propaganda look at those 30 countries and start asking questions about fertility rates, population growth rates, net immigration rates, and growth in foreign born populations, they cannot help but observe that the U.S. lives on another planet. Only 4 countries out of the 30 show a fertility rate at or greater than replacement (2.0): France, New Zealand, Mexico, and the U.S. -- and of these, Mexico has a notable negative net migration rate. Out of those 30 countries, 7 have negative or zero population growth rates and another 5 show growth rates that might as well be zero. On the other hand, the U.S. population growth rate, at 0.9 percent, is in the top five. In net immigration through 2008, only Australia, Canada, and Ireland were ahead of us (and we count only legal immigrants). Triangulating net immigration, one can examine the percentage growth in foreign-born populations over the past 15 years. In this matter, the Migration Policy Institute shows the U.S. at 45.7 percent -- which is more than double the rate for Australia and Canada (I don't have the figures for Ireland).

It is no state secret that our immigrant population is (a) young, (b) largely schooled in other countries with lower compulsory schooling ages, and (c) pushing the U.S. population denominator up in the age brackets subject to higher education output analysis. Looking ahead to 2025 (the College Board's target "accountability" date), Census projections show an increase of 4.3 million in the U.S. 25-34 age bracket. Of that increase 74 percent will be Latino, and another 12 percent Asian. Can you find another country, OECD or otherwise, where an analogous phenomenon is already in the cards -- or is even somewhere in the deck, waiting to be dealt? As noted: the U.S. lives on a different demographic planet.

We are often compared with Finland in higher education matters -- and to our considerable disadvantage. I will give the Finnish education system a lot of credit, particularly in its pre-collegiate sector, but the comparison is bizarre. Like Denmark, Finland is a racially and linguistically homogeneous (mandatory bilingual, to be sure, in Finnish and Swedish) country of 5 million, with a population growth rate of 0.1 percent and a net immigration rate of 1 percent (principally from Eastern Europe).

In the 1990s, Finland increased the capacity of its higher education system by one-third, opening 11 new polytechnic institutions known as AMKs (for the U.S. to do something equivalent would require establishing 600 new AASCU-type 4-year colleges). So the numerator of participation in higher education increased considerably, bolstered by fully-subsidized tuition (surprise, anyone?), while the denominator remained flat. Last time you looked, what happens to percentages when numerators rise and denominators don’t?

And there is more to the Finnish comparison: the median age of entrance to higher education in Finland is 23 (compared with 19 in the U.S.) and the median age at which Finnish students earn bachelor’s degrees is 28 (compared with 24-25 in the U.S.). In our Beginning Postsecondary Students longitudinal study of 1995-2001, those entering 4-year colleges in the U.S. at age 23 or higher constituted about 5 percent of 4-year college entrants, and finished bachelor’s degrees within 6 years at a 22 percent rate (versus 65 percent for those entering directly from high school). Is comparing Finnish and U.S. higher education dynamics a fair sport? If you left it up to the folks who produced the Spellings Commission report, Measuring Up, and Coming to Our Senses, it is.

International data comparisons on higher education are very slippery territory, and nobody has really mastered them yet, though Eurostat (the statistical agency for the 27 countries in the European Union) is trying, and we are going to hear more about that at a plenary session panel of our Association for Institutional Research next June. What does one do, for example, with sub-baccalaureate degrees such as our "associate"? Some countries have them -- they are often called "short-cycle" degrees -- and some don't. In some countries they can be considered terminal degrees (as we regard the associate), in other countries they are not considered higher education at all, and in still others they are regarded as part of the bachelor's degree.

Instead of or in addition to "short-cycle" degrees, some countries offer intermediate credentials such as the Swedish Diploma, awarded after the equivalent of two-thirds of a baccalaureate curriculum. Are these comparable credentials? What's counted and what is not counted varies from country to country. I just finished plowing through three German statistical reports on higher education from different respected German sources in which the universe of "beginning students" changed from table to table. A German friend provided a gloss on the differences, but the question of what gets into the official reporting protocol went unanswered. You can be sure that the people who put together the Spellings Commission report, Measuring Up, and Coming to Our Senses never thought about such things.

Why is all this important? First, to repeat the 4th grade math, which Jane Wellman tried to bring to the attention of U.S. higher education with her Apples and Oranges in the Flat World, issued by ACE last year. When denominators are flat or declining and numerators remain stable or rise slightly, percentages rise; and vice-versa when denominators rise faster than numerators. So if you use population ratios, and include the U.S., it's going to look like we're "declining" -- which is the preferred story of the public crisis reports. Ironically, trying to teach basic math and human geography to the U.S. college-educated adults who wrote these reports is like talking to stones. They don't want to hear it. Wellman made a valiant effort. So did Kaiser and O'Heron in Europe in 2005 (Myths and Methods on Access and Participation in International Comparison. Twente, NL: Center for Higher Education Policy Studies), but we're going to have to do it again.
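The arithmetic in question is easy to verify for yourself. A tiny sketch with invented numbers shows how identical numerator growth produces a "declining" attainment ratio in a fast-growing country and a rising one in a country with a flat population:

```python
# The "4th grade math": the same graduate growth looks very different
# depending on what the population denominator is doing.
# All figures are invented for illustration.

def attainment_ratio(graduates, population):
    return graduates / population

# Country A: graduates grow 10 percent, but population grows 20 percent.
a_before = attainment_ratio(300_000, 1_000_000)   # 0.30
a_after  = attainment_ratio(330_000, 1_200_000)   # 0.275 -- ratio "declines"

# Country B: the same 10 percent graduate growth, flat population.
b_before = attainment_ratio(300_000, 1_000_000)   # 0.30
b_after  = attainment_ratio(330_000, 1_000_000)   # 0.33 -- ratio rises

print(a_after < a_before, b_after > b_before)  # True True
```

Country A educated 30,000 more people than before, yet ranks as "falling behind" -- which is precisely the U.S. situation given its immigration-driven denominator growth.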

Second, it’s like the international comparisons invoked by business columnists. The BRIC (Brazil, India, China, and Russia) countries’ GDPs have been growing much faster than ours (though some are now declining faster than ours), but none of those GDPs save that of China match the GDP of California. It’s that big ship again: the U.S. starts with a much higher base---of everything: manufacturing, productivity, technological innovation. Both growth and contraction will be slower than in economies that start from a much lower base. Where we have demonstrable faults, the most convincing reference points for improvement, the most enlightening comparisons, are to be found within our systems, not theirs. So it is with higher education, where the U.S. massified long before other countries even thought about it. Now, in a world where knowledge has no borders, if other countries are learning more, we all benefit. The U.S. does not---and should not---have a monopoly on learning or knowledge. Does anyone in the house have a problem with this?

Third, OECD itself understands the limitations of population ratios for education a lot better in 2008 than it did a scant five years ago, and is now offering such indicators as cohort survival rates in higher education. I had hoped the authors of Measuring Up 2008 might have used those rates, and read all the footnotes in OECD's 2008 Education at a Glance, so that one could see what was really comparable with what. Had they done so, they would have seen that our 6-year graduation rate for students who started full-time in a 4-year college and who graduated from any institution (not just the first institution attended) is roughly 64 percent, which, compared with other OECD countries that report the same way (e.g. the Netherlands and Sweden), is pretty good (unfortunately, you have to find this datum in Appendix 3 of Education at a Glance 2008). In Coming to Our Senses, the College Board at least read the basic cohort survival rate indicator, 58 percent, but didn't catch the critical footnote that took it to 64 percent, or the footnotes on periods of reporting (Sweden, for example, uses a 7-year graduation marker, not 6). Next time, I guess, we'll have to make sure the U.S. footnotes are more prominent.

Driving this new sensibility concerning cohort survival rates, both in OECD and Eurostat, is the Bologna Process in 46 European countries, under which, depending on country, anywhere from 20 percent to 80 percent of university students are now on a 3-year bachelor's degree cycle. Guess what happens to the numerator of graduation rates when one moves from the old four- and five-year degrees to the new three-year cycle? Couple this trend with declining population bases (the UK, for example, projects a drop of 13 percent in the 18-20 year-old population going forward), and some European countries' survival rates will climb to stratospheric levels. We'll be complaining about our continual international slippage well into the 2030s. That will suit the crisis-mongers just fine, except none of it will help us understand our own situation, or where international comparisons truly matter.

And that’s the fourth -- and most important -- point. The numbers don’t help us do what we have to do. They steer us away from the task of making the pieces of paper we award into meaningful documents, representing learning that helps our students compete in a world without borders. Instead of obsession with ratios, we should look instead to what other countries are doing to improve the efficiency and effectiveness of their higher education systems in terms of student learning and enabling their graduates to move across that world. In this respect the action lines of the Bologna Process stand out: degree qualification frameworks, a “Tuning” methodology that creates reference points for learning outcomes in the disciplines, the discipline-based benchmarking statements that tell the public precisely what learning our institutions should be accountable for, Diploma Supplements that warrantee student attainment, more flexible routes of access, and ways of identifying under-represented populations and targeting them for participation through geocoding.

These features of Bologna are already being imitated (not copied) in Latin America, Australia and North Africa. Slowly but surely they are shaping a new global paradigm for higher education, and in that respect, other countries are truly doing better. Instead of playing the slippery numbers and glitz rankings, we should be studying the substance of Bologna -- where it has succeeded, where our European colleagues have learned they still have work to do, where we can do it better within our own contexts -- perhaps experiencing an epiphany or two about how to turn the big ship on which we travel into the currents of global reform.

Now that would be a constructive use of international comparisons.

Clifford Adelman

Clifford Adelman’s The Bologna Club: What U.S. Higher Education Can Learn from a Decade of European Reconstruction and Learning Accountability from Bologna: a Higher Education Policy-Primer can be found on the Web site of the Institute for Higher Education Policy, where he is a senior associate. The analysis and opinions in this essay are those of the author, and should not be interpreted as reflecting those of the institute.

Retention Matters

The past academic year has been a roller coaster ride for those of us who work at colleges. Increasing costs, the economic meltdown, and high unemployment have many in the higher-education sphere wondering what the future will bring. Indications are that by 2020 some institutions may not be in business.

In the small liberal arts college world (aka the privates), the cost of our product is already near the highest in the marketplace, and unfortunately the pool of potential clients is near its lowest. What can we do to improve the odds that our institutions not only survive but thrive over the next 10 years? Solutions seem elusive. But one key means of improving the picture already lies within our grasp. It can be summed up in one word: retention.

Consider the cost of a college degree through the frame of the strategic service concept: "the benefits perceived by the customer against total price in the context of alternatives." While the product is excellent at most small liberal arts colleges, the competitors’ product is also outstanding. Large privates, small and large publics, and community colleges are all good choices today. The problem for many privates is that their price is already out of reach for most Americans, and going in the wrong direction -- while many publics charge much less.

The total annual cost at many privates is between $40,000 and $50,000. While tuition tends to go up an average of 4 percent a year, the increase barely covers the concomitant rise in fixed expenses (salaries, fuel, inflation, debt service, etc.).

By 2020, then, the total cost for most privates – if current trends continue -- will be $60,000-$70,000 per year. The current average yearly cost at many publics -- $10,000-$20,000 -- should rise by 2020 to about $17,500-$27,500 – still a lot of money for students and families, but clearly a significant price break from the privates.
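The projection is simple compound growth. A quick sketch (assuming a 2009 baseline and the essay's stated 4 percent average annual increase; the baseline year is an assumption) shows how the figures extrapolate:

```python
# Compound-growth sketch of annual college costs, using the essay's
# 4 percent average yearly increase and an assumed 2009 baseline year.

def project_cost(base: float, rate: float, years: int) -> float:
    """Projected cost after `years` of compounding at `rate`."""
    return base * (1 + rate) ** years

years = 2020 - 2009  # 11 years of increases
for base in (40_000, 50_000):
    projected = project_cost(base, 0.04, years)
    print(f"${base:,} today -> ${projected:,.0f} in 2020")
```

The lower baseline lands inside the essay's $60,000-$70,000 range; the higher one slightly overshoots it, which is the nature of compounding from the top of the range.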

Many students emerge from college with major debt. It is commonplace for students to graduate from privates with $50,000, $75,000, or over $100,000 in loans. This is not a sustainable model as costs continue to increase. At what price point do families determine that the cost/benefit analysis does not make sense?

The clear challenge is to hold costs in higher education. It may be clear but it is far from simple. To hold or decrease expenses without limiting the product has so far seemed impossible. To increase revenues without raising tuition has been equally daunting.

Institutions need to look within first -- and retention is the place to start.

Out of approximately 2.8 million first-year college students each year, more than 450,000 -- roughly one in six -- do not return for a second year to the college or university where they started, according to 2008 statistics. What other industry do we know that successfully recruits hundreds of thousands of new clients each year, plans for the loss of one in six of them, and accepts this as business as usual?

Significant improvement in the retention of current clients is the low-hanging fruit of revenue increases for colleges and universities.

For privates, improving retention rates is one of the best solutions for reducing cost increases and maintaining revenue streams. Though the retention rates on average for privates are better than for large publics, the financial impact of each student lost is greater. A 20 percent attrition rate for a private can mean 100 or more students lost, at $30,000 or more per student. So long as freshman classes have remained sufficiently full, the strategy has been to replace lost students rather than commit more resources to retain them.
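The arithmetic behind that example can be made concrete with a short sketch. The 500-student entering class is a hypothetical figure implied by the essay's numbers; the 20 percent rate and $30,000 per student are the essay's own:

```python
# Back-of-the-envelope attrition cost. The 20 percent rate and
# $30,000 of net revenue per student come from the essay; the
# 500-student entering class is a hypothetical illustration.

def attrition_revenue_loss(class_size: int, attrition_rate: float,
                           revenue_per_student: float) -> float:
    """Annual revenue lost when a share of the entering class leaves."""
    students_lost = class_size * attrition_rate
    return students_lost * revenue_per_student

loss = attrition_revenue_loss(500, 0.20, 30_000)
print(f"{500 * 0.20:.0f} students lost -> ${loss:,.0f} per year")
```

On these assumptions, the 100 departing students represent $3 million in annual revenue, recurring for every cohort that attrites at the same rate.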

First-year students are the key to significant retention improvement, and based on the available data, the first six weeks is the most critical time for a successful transition to the college environment. It is the make-or-break period for many students regarding their academic, social, and emotional engagement with their chosen institution.

Unlike corporate America, which long ago discovered the return on training employees extensively before they start the job, many institutions of higher education attempt to teach new students the keys to success only after they arrive on campus and are fully immersed. Most transition education amounts to teaching someone to swim in the deep end.

The call is for privates to redefine the orientation and preparation process for first-semester students, and to commit sufficient resources to preparing their newest clients for success before they arrive on campus. Privates can and should shift attitudes and perceptions by minimizing doubt and uncertainty and building the confidence of entering first-year students. Freshman orientation -- a blur of information and indoctrination compressed into a few days -- is not a training process that prepares students. Colleges should shift their emphasis to the months preceding each new academic year, and commit to providing new students with effective college-readiness training. For most colleges this will require new technologies that reach the entering cohort in flexible ways, with minimal additional demands on the time and energy of both students and colleges.

The intent is to empower students with information for success, and ultimately to improve retention rates. Pennsylvania's public college system long ago committed to improved retention, including performance indicators and rewards for retention outcomes. In October 2007, Kenn Marshall, chair of the system's board, noted that system universities received a combined $38.7 million in performance funding as a reward for showing improvement in key areas related to student achievement and efficiency.

Traditionally, many at the privates have felt that retention is not their role, and have lacked the will to fully commit to improving it. They can no longer afford the luxury of that attitude. As 2020 and $70,000-per-year costs approach, and institutions look for new revenue streams, it is time for privates to reconsider their strategies for retaining more of their current students. Privates may find that significant improvement in retention is one of the few paths to cost containment and financial survival.

Bryan Matthews

Bryan Matthews is director of athletics and associate vice president for administrative services at Washington College.

