In the spotlight more than ever before, community colleges are increasingly being asked to do more with less -- facing greater pressure to produce more college graduates at the same time that state funding is being reduced.
For example, Arizona's state spending on community colleges in fiscal 2013 dropped 7 percent, from $71 million to about $66 million, in spite of a 7 percent increase in community college enrollment. In Virginia, the average state funding per student at community colleges fell 36 percent, from $4,602 to $2,946, between 2006 and 2011. And, in the last four years, demand for community college education in California has increased while the budgets have been cut by 12 percent. Many institutions nationwide cite such hurdles to justify three-year graduation rates dipping as low as 15 percent, saying it’s impossible to do better. But that’s not true.
Even in the face of all these challenges, there are community colleges across the country doing a superb job of achieving student success at scale. The sector is inventing programs that show promising results, yet community colleges are still being recognized more for their challenges than their successes. What community colleges need is a better sense of where to look for examples of excellence in the sector in order to raise the bar, not only for college completion, but also for student learning outcomes and employment after college.
In July, the Aspen Institute published data that offer some pointers on where to look for solutions. Performance and improvement metrics were released that detail which 120 community colleges are doing best -- and improving the most -- in terms of graduation rates, retention rates and degrees awarded, for all students and for minority populations that have historically performed at lower levels. The data are used to determine the top U.S. community colleges that are eligible for the 2013 Aspen Prize for Community College Excellence.
The data set Aspen released, which is based on the Integrated Postsecondary Education Data System (IPEDS), does not tell the whole story, but it tells an important one. It shows community colleges across the country what levels of student success are possible, as well as some places they ought to look to as models. For example, the data show that:
Walla Walla Community College (Wash.) boasts a 54 percent three-year graduation and transfer rate, well above the national average of 40 percent.
Santa Barbara City College (Calif.) has a three-year graduation and transfer rate of 48 percent for Hispanic students, who make up over 30 percent of its student body.
Lake Area Technical Institute (S.D.) has a three-year graduation and transfer rate of 76 percent, even though over 40 percent of its students have incomes low enough to qualify for Pell Grants.
Not every example on this list of 120 is relevant to every community college. But every two-year college in the country can find examples in the Aspen data set of a place that looks a lot like it does, yet achieves higher levels of student graduation, retention, degrees awarded, or minority student success. It can then work to figure out what those colleges are doing that allows them to be so successful and examine the programs that are working well -- helping students learn, complete programs and obtain degrees.
For example, even after consecutive years of budget cuts in California, Santa Barbara City College has maintained an excellent range of programs to improve student success, including an accelerated track that helps speed the neediest students through developmental math and an exceptional writing center that prepares students for the rigors of upper-division classes if and when they transfer to a four-year college. Walla Walla Community College has developed very strong systems for advising students to ensure that they complete degrees, employing quarterly advising by case managers and excellent online tools to monitor progress towards credentials with strong labor market value. Lake Area Technical Institute also prevents students from slipping through the cracks by enrolling all students in cohort-based, block-scheduled programs, where students progress together through each semester knowing exactly what courses, degree and career lie ahead.
Valencia College, the winner of the first Aspen Prize in 2011, achieved its 51 percent three-year graduation rate with a highly diverse student population -- 46 percent of its students are Hispanic or African-American. While many significant and scaled initiatives have contributed to Valencia’s exceptional student outcomes, the college’s success is built in substantial part on a culture of learning among professors and staff, fueled by a completely revamped tenure process that rewards professors for improving their teaching.
These institutions, as well as the others on the list of 120, have awakened to the reality that we cannot continue to deliver higher education in the same way we always have in this country and expect better student outcomes. And community college outcomes need to improve. The national full-time graduation rate of 28 percent is unacceptably low, and student success rates remain under 40 percent even if you count students who transfer to a four-year college without ever completing community college. And, as has been widely reported, graduation rates are even lower for the large number of students who enter community college needing remedial education.
But understanding the need to improve is only the first step. By examining the quantitative outcomes of the 120 colleges on this list, all community colleges should be able to see that much higher levels of student success are possible. Aspen will help over the coming year by releasing toolkits and providing briefings about what is happening at the 10 finalist community colleges vying for the 2013 prize -- which were just announced. Our hope is that increasing attention will be paid to these exemplars of student success, and that more and more people will recognize them as excellent, as deserving of our investments, and as places that offer institution-wide solutions to the challenges community colleges face.
Joshua Wyner is executive director of the Aspen Institute’s College Excellence Program.
Our problem with the new report, "The College Advantage: Weathering the Economic Storm," on the employment of university graduates since the start of the Great Recession, begins even before the first word of text. In the first paragraph of the acknowledgments, speaking of those who financed the study, the Lumina and Gates Foundations, the authors -- Anthony Carnevale and associates at Georgetown University -- observe, "We are honored to be partners in the mission of promoting postsecondary access and completion for all Americans."
Thus this report is about promoting a mission, a policy position, not about achieving a dispassionate, objective and complete analysis of the evidence. It is better viewed as a piece of PR, agitprop musings as it were, than as a serious academic study. Certainly, we doubt any reputable peer-reviewed academic journal in economics would touch this study in its current form.
This brings up a bigger problem: isn't there an inherent, huge conflict of interest in university researchers issuing reports favoring positions that are in their own self-interest? Is it not true that Georgetown and other universities gain marketing advantages (and maybe higher tuition fees) by promoting the idea that “it pays to go to college”? The subjective bias is further revealed as the authors at the very beginning decry "attacks" by higher education "cost-cutters,” as if trying to improve efficiency in a low productivity industry is somehow bad.
Getting to the evidence, the Georgetown team is probably correct that, on average, college graduates fared better in labor markets in the Great Recession and its slow recovery than did those with lesser degrees or diplomas. But where are the control variables accounting for the fact that college graduates are, on average, brighter, more disciplined, and more ambitious than those with less education? A typical high school graduate is a less desirable employee than a typical college graduate for reasons independent of the formal postsecondary education acquired.
Moreover, while the members of the Carnevale team agree that those working only part-time jobs are not truly “employed,” they draw no such distinction for those trained for relatively highly skilled work who are now doing menial labor. College biology graduates driving taxi cabs are considered fully employed by the definitions that are used. Yet in a real economic sense, they are underemployed or mal-employed, and their human capital utilization is well below the expectations of both the worker and, arguably, society as a whole.
“Employment” is not in any meaningful economic context a simple binary variable like pregnancy (you are, or you are not), but a continuum reflecting variations in both hours worked and the meaningfulness of the labor performed. Our guess, based on looking at other labor market data, is that “human capital utilization” among college graduates has fallen a fair amount more than “employment” in recent years, as college graduates increasingly take low-paying (reflecting low productivity) jobs. According to the most recent report by a Northeastern University professor for the Associated Press, using Current Population Survey data, roughly 53 percent of recent college graduates are underemployed, instead of the 8 percent reported in the Carnevale report.
All of this suggests that data are subject to an altogether different interpretation than used by the Georgetown team. Consider, for example, the argument advanced by the Georgetown group that "Even in traditionally blue-collar industries, better educated workers fared better." To us, that basically says overqualified people with college degrees appear to be crowding out others in the market for low-skilled jobs, showing that the "underemployment" problem amongst college graduates is considerable.
Similarly, the authors are lumping together those with a high school diploma and those with less than a high school diploma in the statistical comparisons -- so the analysis differs from the traditional comparisons of high school- and college-educated individuals.
Those who did not graduate from high school make up 24 percent of the sample for the “High school or less” category and 33 percent of the unemployed according to the May 2012 Bureau of Labor Statistics data. Those who attained only a high school diploma are much more likely to still have a job (8 percent unemployment) than are those with less than a high school education (13 percent unemployment).
Worse, the authors fail to do what even undergraduate economics students writing papers would be expected to do -- seriously relate costs to benefits. Suppose, even after controlling for everything under the sun, college graduates have a clear employment security advantage during turbulent economic times -- a conclusion that we acknowledge is plausible, maybe even expected, if colleges do what they claim to do.
Is the value of that job security advantage big enough to offset the costs of college attendance, where "costs" include not only cash outlays by students and the income foregone while studying rather than working, but also the total cost to society of the various government subsidies associated with a college education, and the high risk arising from the fact that a majority of college students either do not graduate at all or fail to do so in the advertised (four-year) time needed to complete the degree?
All of that aside, however -- a huge "aside" -- there are hints in the data that the college advantage is becoming frayed. Look, for example, at Figure 2 in the executive summary, which seems to show that the college degree earnings advantage (to us a vastly overused and flawed statistic) peaked around 2005 or so and has declined modestly since. Between 2008 and 2010, Census Bureau data show real earnings fell a good deal for full-time male college-educated workers, unlike, for example, those with less than a high school education. On September 7 the Census Bureau will release 2011 data, which will give a further indication of whether the most recent figures are the beginning of a longer-term trend.
Our reading of the evidence is that a truly dispassionate examination of the data by those without any vested interest in the conclusions, controlling for other factors involved in determining unemployment and earnings, might well yield a radically different conclusion from the one found in this public relations effort by Tony Carnevale and his team at Georgetown. The assumption of this report -- that those with little education dramatically improve their job security by deciding to go to college -- is certainly not adequately demonstrated.
Richard Vedder directs the Center for College Affordability and Productivity, teaches economics at Ohio University, and is an adjunct scholar at the American Enterprise Institute. Daniel Garrett is an honors undergraduate economics major at Ohio University.
The author and philosopher Thomas Merton once said that “the self-fulfilling prophecy perpetuates a reign of error.” The self-fulfilling prophecy that remedial education has failed now leads us to such a reign of error. The news media, policy makers and various higher education agencies are using flawed interpretations of data about remediation to make unsupported assertions and repeat them frequently, thus leading to erroneous policy decisions.
It began with a 2007 report by Martorell and McFarlin. Supported by the RAND Corporation, these researchers used a regression-discontinuity design to study a large sample of Texas college students whose scores placed them just above and just below the placement level for remedial courses in mathematics and reading. They found that those who just missed the cut score and placed into remedial courses did no better in terms of performance in college-level classes, graduation rates, transfer rates and earnings than those students who just made the cut score. This finding was used to support the authors’ conclusion that remediation was of questionable value.
There is a major flaw in this conclusion. The authors assume that participation in remedial courses should result in participants performing better than students who did not take them. The purpose of remedial courses, however, is to level the academic playing field for underprepared students, not to enable them to outperform prepared students. Given that, the fact that there is little difference in performance between the two groups would indicate that the purpose of remedial courses had been accomplished.
A similar study using a regression-discontinuity design was conducted by Calcagno and Long (2008), with similar results. Students in this study who scored just below the cut score and participated in remediation did no better in the corresponding college-level course than those who scored slightly higher and bypassed remediation. Calcagno and Long were more tentative about the meaning of their findings than Martorell and McFarlin. They pointed out that “[t]he results of this study suggest that remediation has both benefits and drawbacks as a strategy to address the needs of underprepared students.” They also admit that “the research design we used only allows the identification of the effect of remediation on a subset of students who scored just above and just below the cutoff score. Estimates should not be extrapolated to students with academic skills so weak that they scored significantly below the cutoff point.”
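For readers unfamiliar with the design, the following is a minimal, purely illustrative sketch in Python of the regression-discontinuity logic both studies rely on. The cutoff, sample size and pass probabilities are invented for illustration and are not drawn from either study's data.

```python
# Illustrative sketch only: simulated placement-test scores and subsequent
# college-level course outcomes. Nothing here comes from the
# Martorell-McFarlin or Calcagno-Long data sets.
import random

random.seed(0)

CUTOFF = 60       # hypothetical placement-test cut score
BANDWIDTH = 5     # compare only students within 5 points of the cutoff

students = []
for _ in range(20_000):
    score = random.randint(30, 90)
    remediated = score < CUTOFF   # below the cutoff -> assigned to remediation
    # Hypothetical pass probability: rises with score; remediation is assumed
    # here to roughly offset the weaker preparation of below-cutoff students.
    p_pass = 0.35 + 0.006 * score + (0.03 if remediated else 0.0)
    passed = random.random() < p_pass
    students.append((score, remediated, passed))

just_below = [s for s in students if CUTOFF - BANDWIDTH <= s[0] < CUTOFF]
just_above = [s for s in students if CUTOFF <= s[0] < CUTOFF + BANDWIDTH]

rate_below = sum(p for _, _, p in just_below) / len(just_below)
rate_above = sum(p for _, _, p in just_above) / len(just_above)

print(f"Pass rate just below the cutoff (took remediation): {rate_below:.1%}")
print(f"Pass rate just above the cutoff (bypassed it):      {rate_above:.1%}")
```

Similar pass rates for the two groups near the cutoff are what both studies report; whether that similarity means remediation "doesn't work" or that it has leveled the playing field is precisely the interpretive dispute at issue here.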
Neither study explored the performance of students with lower assessment test scores in later college-level classes. And neither study is generalizable to the entire range of remedial courses and students. Yet these are the major studies used to justify the claim that all remediation has failed. Although none of the authors of these studies makes this claim, their work is used to justify it.
Meanwhile, there are other studies of remediation leading to different conclusions. In a 2006 study using the 1988 National Education Longitudinal Study database, Attewell, Lavin, Domina and Levey found that “two-year college students who successfully passed remedial courses were more likely to graduate than equivalent students who never took remediation.” They also found that students who took remedial reading or writing were more likely to graduate from community colleges than students who did not take these courses. Bahr later studied the effect of remediation on a large sample of students at 107 California community colleges. He found that students who successfully completed math remediation were just as successful in college-level math courses as those who did not require remediation. He concluded that “these two groups of students experience comparable outcomes, which indicates that remedial math programs are highly effective at resolving skill deficiencies.”
Boatman and Long, in a 2010 study, reviewed a sample of 12,200 students enrolled at public institutions in Tennessee. Although they found many negative effects for remediation, they also found some positive effects depending upon the subject area and the degree of student underpreparedness. Among their conclusions was that postsecondary decision makers should “not treat remediation as a singular policy but instead consider it as an intervention that might vary in its impact according to student needs.”
If we look at all the major studies of remediation, we find conflicting findings and inconclusive results. Given these findings, it is difficult to understand how any credible scholar familiar with the available evidence can decisively conclude that remediation has failed. Nevertheless, there are those who misinterpret or ignore the available evidence to make this claim and are then widely quoted by others. It does not take long for the press and policymakers to echo these quotes. In fact, there is little data to justify this assertion and the studies on which the assertion is based do not support it.
A prime example showing the perpetuation of the reign of error can be found in a recent report from Complete College America. Titled “Remediation: Higher Education’s Bridge to Nowhere,” the report contends at the outset that the “current remediation system is broken” and that “remediation doesn’t work.” As evidence for this assertion, the report claims “research shows that students who skip their remedial assignments do just as well in gateway courses as those who took remediation first.” This erroneously suggests that all remediation has failed. Because the authors do not bother to cite a reference for this claim, we can only assume that they are using the Martorell and McFarlin, Boatman and Long, and Calcagno and Long studies to justify it. As previously noted, that is not what any of those publications actually says.
The “Bridge to Nowhere” report goes on to project that, according to its data, only 9.5 percent of those who take remedial courses will graduate from a community college within three years, while 13.9 percent of those who do not take remedial courses will graduate within three years. The authors cite these figures as further evidence of the failure of remediation. We do not disagree with these figures, but we do disagree with the interpretation of them. The authors of “Bridge to Nowhere” appear to be arguing that it is participation in remediation that accounts for this difference in graduation rates. This argument rests on the assumption that correlation implies causality, a well-known fallacy among researchers. Furthermore, as Bettinger and Long point out in a 2005 study, “a simple comparison of students in and out of remediation is likely to suggest negative effects due to important academic differences in the underlying populations.” Students placing into remediation are disproportionately characterized by known risk factors such as being minority, low-income, first-generation and underprepared. For such students, it is likely that these factors account for low graduation rates more than participation in remediation does.
We do not argue with the data provided in any of these reports, nor do we question the methodology used in obtaining the data. We also concur with many of the recommendations in “Bridge to Nowhere.” We would further agree that remediation as currently delivered in U.S. community colleges is very much in need of reform. However, we disagree that all remediation has failed everywhere for all students, as many policymakers and news reporters seem to believe. There is simply no credible scientific evidence to support this belief. Unfortunately, that does not stop Complete College America and other organizations from asserting that remediation has failed, thus creating a self-fulfilling prophecy.
A more recent example took place in the Connecticut Legislature this year. Based on the misinterpretations of research and the fallacious arguments discussed here, the Legislature required that remediation be limited to one semester in all of its colleges and replaced by embedded support services in gateway courses. This also happens to be one of the major recommendations of Complete College America. Although we agree that this might be a good solution for some students, particularly those at the top of the remediation score distribution, it is not a good solution for all students. In fact, there is no single solution for all underprepared college students. There are many tools, validated by varying amounts of research, available to address the needs of underprepared college students through improved remedial courses and a variety of separate or embedded support services.
Neither colleges and universities nor policymakers, then, should conclude that all remediation has failed and engage in knee-jerk attempts to eliminate it. We need to reform remediation and guide our reform efforts with accurate data and sound research. We need to explore various alternatives, including some of those proposed by Complete College America and others. Nonetheless, we disagree that eliminating all remedial courses is a wise course of action. As Bahr points out, remediation “is so fundamental to the activities of the community college that significant alterations to remedial programs would change drastically the educational geography of these institutions. This should give pause to those who advocate the elimination of remediation … as the implication of such changes would be profound indeed.” We believe that arguing for such a profound change as the elimination of remediation for all students on the basis of so little evidence is not only ill-advised but also likely to undermine the goal of improving college completion.
Hunter R. Boylan is the director of the National Center for Developmental Education and a professor of higher education at Appalachian State University. Alexandros Goudas is an English instructor at Delta Community College.