Imagine yourself emerging from the Way Back Machine in London, England. It’s 1526. Henry VIII is on the throne. You furtively duck into a shop, and quickly head to the back room. You’ve come to buy an English translation of the New Testament. The mere possession of this book is punishable by death.
In the 1520s, having open access to books (knowledge) was a dangerous game. It threatened the establishment. It meant that ordinary people could see for themselves what the elite had guarded so closely.
Enter Thomas Cranmer, Archbishop of Canterbury, who commissioned the publication of the “Great Bible” in English and had it placed in every church, chained to the pulpit to ensure it remained accessible and didn’t “disappear.” While the publisher paid for this access with his life, within three short years readers were provided in the churches so that everyone, even the illiterate, could hear the Word of God proclaimed in their native English.
Fast-forward 472 years. You’re a college student. You’ve taken advantage of some amazing opportunities in the online world. You’ve listened to Nobel laureates discuss the Eurozone crisis and explain how current difficulties relate (or not) to classical theories of economics. You’ve worked through the underlying physics and chemistry for nearly every episode of "MythBusters." You regularly watch the TED lectures. And you’ve even taken courses from the Open Learning Initiative and from OpenCourseWare at MIT. Now you want the academic credit for those forms of learning.
Although you won’t actually be burned at the stake as Cranmer was, you have a very good chance of experiencing the modern version of this torture because it is equally threatening to the elite. It goes something like this.
First, you’ll be asked to produce the sacred document, otherwise known as a transcript, indicating that you officially took the course. No transcript, you say? Sorry — your learning is considered “illegitimate,” and you are cast out into the night, where there is weeping and gnashing of teeth as you stumble back to the very beginning of college to start over.
While an exaggeration, the point stands: today -- through such outlets as TED, various open-source course initiatives, and primary sources from digital content providers -- we all have access to knowledge that was previously the province of academia. Just as the English New Testament gave otherwise uneducated people a direct line to the very heart of Christianity, this access is “dangerous.” It threatens the central notion of what a college or university exists to do, and so, by extension, threatens the very raison d’être of faculty and staff.
Threats to a well-entrenched status quo are not well-received. But the funny thing about many of them — whether books or ideas — is that they often quickly become the mainstream.
Higher education is facing the very situation that confronted our colleagues in the P-12 world when home schooling threatened the world order. Initially considered a fringe activity of substandard quality, the sector figured out that if appropriate standards (i.e., learning outcomes) were agreed upon and stated clearly, it didn’t really matter what path students took to get to the knowledge destination.
Higher education needs to take a lesson from that experience and work much harder on specifying our analog of the Common Core State Standards. The tools are there, and have been there for a very, very long time. It just has not been in our self-interest to develop and agree on them. But we’d better, and we’d better do it now. Otherwise, it will be done to us.
What Thomas Cranmer figured out was that it was impossible to execute people fast enough to stem the desire to access the new sources of knowledge. So he wholeheartedly adopted the reform, and made it his own. What we need to learn from that is to accept the reality that anyone can access the same information we academics used to carefully mete out, so the best approach is to adapt and make that reality our own. We need to create a higher educational system that embraces competency-based achievement, realign the milestones by which we gauge increasing levels of knowledge/competence, and redefine degrees on this basis.
We have an instructive example. Standard 14 of the Middle States’ Characteristics of Excellence pertains to the assessment of student learning. The standard requires that students be told what “…knowledge, skills, and competencies [they] are expected to exhibit upon successful completion of a course, academic program, co-curricular program, general education requirement, or other specific set of experiences… .”
As stated in the Standard, the objective is “…to answer the question, ‘Are our students learning what we want them to learn?’ ” Such assessment is “an essential component of the assessment of institutional effectiveness” (Characteristics of Excellence, p. 63). The description then goes on to discuss how learning outcome assessment should be designed and how its results should be used. Nowhere is there a discussion of credits.
Given that we already have an accreditation system based on the assessment of student learning (i.e., knowledge/competence acquisition), then it is a rather straightforward matter of taking the existing approach to the next step to complete the conversion process from one grounded on credit accumulation (irrespective of learning) to one based on demonstrated learning outcomes.
More specifically, we need to adopt the approach already taken in many professions: clearly articulate what students are supposed to know and be able to demonstrate at various levels of educational attainment, and create accreditation standards and metrics that reflect it. This would put real teeth in the assessment of student learning outcomes by attaching consequences to doing it poorly, and would focus attention on where content comes from and on the quality assurance underlying the knowledge/competence we expect students to acquire.
When that happens, the recognition of prior learning becomes very straightforward, and its source becomes irrelevant as long as the appropriate competencies are shown. In other words, we already have all the basic elements necessary to take the Cranmeresque step of moving from banning to the immediate and unquestioned acceptance of demonstrated knowledge/competence, creating the postsecondary equivalent of the Book of Common Prayer.
The Brave New World
Adopting an accreditation system predicated on the authentic assessment of student learning outcomes liberates faculty to serve a much more important role — that of academic mentor and guide for the student’s learning and knowledge/competence acquisition process. In a way, this returns us to the past: through the judicious use of technology, faculty will be able to provide far more individualized instruction to many more students than the current system could ever possibly allow or support. In another way, it means that the kind of individualized attention we give to doctoral students can be extended to all. This would be a major improvement for students and faculty alike.
To continue to have legitimacy, accreditation must focus on the core issue — student learning. Accreditation must begin certifying that students actually learn, and that what they learn matches the stated objectives of a course, an academic program, or a specific set of objectives (such as in general education). In short, accreditation must move from certifying that an institution claims that it is doing what it is supposed to do to certifying that students are learning and progressing in their acquisition of knowledge/competence.
Because people can simply wander around the Web and pick up content that is neither amalgamated by a content provider nor verified for accuracy, it will become necessary for some entity to engage in quality assurance in terms of learning outcomes. The job of verifying bona fide knowledge/competencies and establishing where along the continuum of knowledge/competence acquisition a student falls can become the province of organizations that resemble LearningCounts.org, or even broader entities.
In both cases (i.e., using content offered by a certified provider or doing it on your own with no official guidance), a credential or type of certification would be provided each time a new level of knowledge/competence is reached. The student would then deposit those credentials or certifications into a credential bank for future reference. The student -- not the registrar’s office -- owns the credential.
Degrees Deconstructed and Decoupled
We get to this alternate accreditation world in two ways: by clearly defining what each degree means, and by aligning accreditation with content providers (not the institutions that confer degrees).
This requires that we come to quick agreement on what different types of degrees mean. In the United States the TuningUSA effort is just beginning the work of more clearly articulating what knowledge/competencies a student is supposed to demonstrate before being awarded a postsecondary degree.
This is in contrast with the current practice of awarding degrees based on a student's spending a specified minimum amount of clock-defined time amassing an arbitrary number of credits and obtaining a minimum set of grades. Nothing in the current definition says anything about what knowledge or competencies the student actually demonstrates. To test this, look up the degree requirements for English literature degrees across a variety of institutions and compare them. This loose approach contrasts with efforts in other parts of the world, such as Europe, where degree qualifications discussions have been ongoing for over a decade.
Once the accreditation focus is placed on student learning outcomes for real, accreditation becomes tied to learning and is decoupled from institutions granting degrees. Accreditation then becomes aligned with entities that provide content and the parcels or “courses” in which they are delivered. The seal of accreditation would then be placed on the separate pieces of content offered by content providers who demonstrate that the content offered comes with embedded authentic assessment of learning. To be sure, most of these providers will still be postsecondary institutions, but the accreditation umbrella is extended more broadly to reflect the current reality that content comes from many sources.
In such a system, regional accreditation no longer gives thumbs-up or thumbs-down only on the traditional degree-granting institution. Rather, it focuses on what is provided by any entity that wants to claim it’s in the business of offering content. If and only if that content meets certain standards would it be “accredited.”
Shifting the focus from the institutional level to the content level would strengthen the link between accreditation and federal financial aid eligibility. If and only if a student was using content from an accredited source would the student be able to apply for and receive federal financial aid. Likewise, if the student has amassed knowledge/competencies from self-instruction or from noncertified sources and wants to convert that into “certified learning,” then federal financial aid could be spent only at accredited entities in that business.
Charting a Future Course
The possible future I have described here is both scary and exciting. We can choose to sit down in the captain’s chair and help chart our own course by fully embracing new opportunities while getting serious about quality, defined as the authentic assessment of the acquisition of knowledge/competence. Or we can put up the shields, claim that the way we provide access to knowledge now must remain immutable for all time, warn that change will condemn us to eternal damnation, and wait for a modern equivalent of Thomas Cranmer to bring it all crashing down.
It’s up to us. Shields will not work. We have only one real option if we want to build on the true legacy and meaning of education: to boldly go where accreditation has never gone before.
John C. Cavanaugh is Chancellor of the Pennsylvania State System of Higher Education. This essay is adapted from a speech he gave Tuesday at the annual meeting of the Middle States Commission on Higher Education.
Way back in 1973, during my second year as a faculty member at a historically black college, I was teaching a descriptive statistics course to a class of about 15 master's degree students in urban planning. One very bright student, whom I will only identify by his first name, was particularly resistant. One day after he had sounded off in class yet again, I asked him why he refused to learn this material. Here's the reply he gave me that was burned into my brain forever.
Roger: Don't you know that statistics is the white man's trick bag?
Me (stunned, slack-jawed): Why ... why do you say that, Roger?
Roger: Well, have you ever met any statistics that had anything good to say about black people?
Our class was hip deep into the infamous "Moynihan Report", various poverty reports, census reports, and other sources of the negative indicators of what used to be called "The Urban Crisis." Indeed, I had become a professor of urban planning at a black college so that I could teach black students how to change some of those bad numbers. Believing as I do today that you can't fix a problem if you don't admit you have the problem, Roger's blunt challenge left me speechless because it was my first confrontation with an admonition that would assume shifting forms over the following decades; and I was nowhere near as sure of my own position back then as I am today.
Sometimes the admonition sounded like: "We already know what's wrong, so we don't need any more studies." Other times it would appear as "If you say those things now that we have black superintendents, black mayors, black congressmen, etc., etc., etc., our adversaries will use your data to demean our black leaders." From time to time advocates of affirmative action would caution against discussing low graduation rates or achievement gaps because such talk might undermine their efforts. And sometimes the admonition would appear in its most seductive form as "We really need better presentations of our successes, not more documentations of our failures."
Almost 50 years after the civil rights revolution, black progress has been substantial, but nowhere near as substantial as we had hoped it would be and nowhere near as substantial as we needed it to be. By this time, as the Great Recession grinds more and more middle class black families into poverty, and as the black-white gaps in student achievement persist and sometimes widen at all educational levels, you would think that the problem deniers would admit that their strategy hasn't worked, that it has undermined our efforts to achieve the full promise of the victorious revolution led by Dr. King and his courageous peers.
You would think so, but you would be wrong. The problem deniers are still out there, still encouraging us to send our children buck naked into the world's jungles like so many black Harry Potters protected only by maternal fantasies that they can be anything they want to be. No, they can't. It still takes a hell of a lot more guts and talent and other resources for a black man or a black woman to achieve the same level of success as for a white man or a white woman.
I concede that substantial, albeit insufficient, black progress has been made since the 60s and 70s. That's the good news. But this progress has not been equally achieved by all schools and colleges that educate black students, or by all black businesses, or by all black professionals. In other words, there has also been substantial variance in black achievements. From a statistical viewpoint, this is great news because statistical analysis thrives on variance. Case studies of success by themselves can be exceedingly counterproductive because they can lull their readers into believing that things are really OK, that it's just the "bias" in the media that makes us think that things aren't progressing as rapidly as we expected them to.
However, if 80 percent of a black population of students, businesses, or professionals fall below some specified levels of satisfactory performance, then well-constructed case studies of a few of the high achieving 20 percent could provide valuable clues to the factors that contributed to their success. In other words, given the fact that all other things are still unequal, we need to figure out how these high achievers attained their greater success. Then we have to devise policies that encourage the low performers to adopt (and adapt) the best practices of the high achievers. This strategy has worked in a variety of other contexts with an important caveat. Best practices must usually be adapted, i.e., modified to fit the local culture. Carbon copies are rarely effective.
With regards to the critical tasks of identifying the factors that contribute to black achievement, I say it's high time for the problem deniers to put away their magic wands, leave Hogwarts, and join the rest of us in the real world. There's nothing magical about black high achievers, but we can't assume that we already know the secrets of their success.
For example, even the most cursory examination of the six-year graduation rates among the nation's 105 historically black colleges is sufficient to blow away any notion that it's just a matter of budgets. The richest HBCU does not have the highest graduation rate, nor does the poorest HBCU have the lowest; and this disparity grows much larger when one considers graduation rates vs. per-student dollar expenditures. Some HBCUs are making far more productive use of their tuition dollars than others. We need good case studies coupled with statistical analysis to figure out what the more productive HBCUs are doing right; then we have to encourage the lower performing HBCUs to adopt (and adapt) these best practices.
Nor should we confine our analysis to HBCUs. At this point almost 90 percent of America's black students attend integrated, mainstream colleges and universities, not black colleges. And the gross statistics show that some of these colleges and universities have far more impressive retention rates and six-year graduation rates for their black students than others. Indeed some colleges and universities have achieved comparable rates for their black and white students. So once again the question becomes: What are the high performers doing that enables them to perform substantially better than the rest? And how can we encourage the lower performers to adopt (and adapt) the high performers' best practices?
Similar statistical analyses and case studies could be conducted with regard to highly productive black students and highly productive black instructors at all educational levels.
The good news is that efforts to identify best practices with regard to black achievement are gaining more support; and, of course, the bad news is that they aren't gaining enough support. But the substantial progress that black Americans have made since the 60s and 70s assures us that there is far more variability in the circumstances of black Americans today than back then. This greater variability should facilitate a golden age of statistical policy analysis because it gives us better opportunities than in decades past to figure out what really works for black Americans and why. Instead of being the white man's trick bag, statistics can become the black man's leverage.
Much attention has been directed at college completion rates in the past two years, since President Obama announced his goal that the United States will again lead the world with the highest proportion of college graduates by 2020. The most recent contribution to this dialogue was last month’s release of "Time Is the Enemy" by Complete College America.
Much in the introduction to this report is welcome. Expanding completion rate reporting to include part-time students, recognizing that more students are juggling employment and family responsibilities with college, acknowledging that many come to college unprepared for college-level work -- such awareness should inform our policy choices. All in higher education share the desire expressed by Complete College America that more students complete their programs, and do so in less time.
The graduation rates for two-year institutions included in "Time Is the Enemy" show, however, just how inadequate our current measures are for assessing community college student degree progress -- a shortfall also acknowledged by the appointment of the federal Committee on Measures of Student Success, which is charged with making recommendations to the U.S. education secretary by April. Our current national completion measures for community colleges underestimate the true progress of students, presenting a misleading picture of the performance of these open-admissions institutions.
The following suggestions might inform a new set of national metrics for assessing student performance at two-year institutions.
Completion Rates for Community Colleges Should Include Transfers to Baccalaureate Institutions. Although community colleges usually advise students aiming for a bachelor’s degree to complete their associate degree before transferring, to reap the benefits of additional tuition savings and attain a credential, transferring before attaining the associate degree is, for many students, a rational decision. Accepting admission and assimilating into competitive baccalaureate programs and institutions, establishing mentorships with professors in the intended baccalaureate major, or embracing the residential college experience may all lead students to transfer before completing the associate degree. In addition, for a variety of reasons, universities may delay admission of incoming freshmen to the spring semester and advise them to start in the fall at a community college. These students are not seeking degrees at the community college, and will transfer after one semester. Thus, for two-year institutions, preparing students for transfer to a four-year institution should be considered an outcome as favorable as a student earning an associate degree.
The appropriate completion measure for community colleges is a combined graduation-transfer rate. The preferred metric is the percentage of students in the initial cohort who have graduated and/or transferred to a four-year institution. It is important to include transfers to out-of-state institutions in these calculations. In Maryland, a fourth of the community college transfers to baccalaureate institutions enroll in colleges and universities outside of Maryland. Reliance on state reporting systems that do not utilize national databases such as the National Student Clearinghouse to report this metric results in serious underestimates of student success. The need to track transfers across state lines is a major reason for the so-far-unsuccessful push for a national unit record system.
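A minimal sketch of the combined graduation-transfer rate described above (the field names and sample data are hypothetical, not drawn from any actual reporting system). The key point is that a student who both graduates and transfers is counted only once in the numerator:

```python
# Combined graduation-transfer rate: share of the initial cohort who
# graduated and/or transferred to a four-year institution.
# Fields 'graduated' and 'transferred' are illustrative assumptions.

def graduation_transfer_rate(cohort):
    """cohort: list of dicts with boolean 'graduated'/'transferred' flags."""
    if not cohort:
        return 0.0
    # Union of outcomes: a graduate who later transfers counts once.
    successes = sum(1 for s in cohort if s["graduated"] or s["transferred"])
    return successes / len(cohort)

cohort = [
    {"graduated": True,  "transferred": False},  # earned the associate degree
    {"graduated": True,  "transferred": True},   # graduated, then transferred
    {"graduated": False, "transferred": True},   # transferred before graduating
    {"graduated": False, "transferred": False},  # neither outcome (yet)
]
print(graduation_transfer_rate(cohort))  # 0.75
```

Note the denominator is the full initial cohort, which is why tracking out-of-state transfers (e.g., via a national database) matters: a transfer the state system cannot see is silently counted as a failure.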
Comparisons of completion rates at community colleges and four-year institutions, where transfer is not included in the community college measure, are inappropriate. Reports such as "Time Is the Enemy" that report graduation rates for community colleges, with table labels such as “Associate Degree-seeking Students,” are misleading in that these calculations include many students who are pursuing baccalaureate transfer programs with no intention of earning the associate.
Completion Rate Calculations Should Exclude Students Not Seeking Degrees. Community colleges serve many students not seeking a college degree, and these students should be excluded from the calculation of completion rates. A student’s stated intent at entry is not adequate to identify degree-seekers, since students may be uncertain about their goals and goals may change. Enrollment in a degree program is not adequate, since students without a degree goal must declare a program in order to be eligible for financial aid, and many colleges force students to choose a major in order to gauge student interest for advising purposes.
A better way to define degree-seeking status is based on student behavior. Have students demonstrated pursuit of a degree by enrolling in more than two or three classes? A minimum number of attempted hours is the preferred way of defining the cohort to study. In Maryland, to be included in the denominator of graduation-transfer rates, a student must attempt at least 18 hours within two years of entry. Hours in developmental or remedial courses are included. This way of defining the cohort has several benefits. It does not exclude students beginning as part-time students, as IPEDS does. It eliminates transient students with short-term job skill enhancement or personal enrichment motives. By using attempted hours as the threshold, rather than earned credits as in some other states, this definition does not bias the sample toward success. Students who fail all their courses and earn zero credits will still be in the cohort if they have attempted 18 hours. And finally, it seems reasonable that students show some evidence of effort to persist if institutions are to be held accountable for their degree attainment.
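The behavioral cohort definition above can be sketched as follows. The 18-hour threshold and two-year window are Maryland's, per the text; the record layout and the assumption that two years spans six terms (including summers) are illustrative:

```python
# Attempted-hours cohort filter: a student enters the degree-seeking
# cohort by attempting enough hours within a window after entry.
# Hours are *attempted*, not earned, so students who fail every course
# still count; developmental/remedial hours are included in the total.

def in_cohort(records, entry_term, hours_threshold=18, terms_window=6):
    """records: list of (term_index, attempted_hours) tuples for one
    student. Returns True if attempted hours within the window reach
    the threshold (window of 6 terms ~ two years, an assumption)."""
    attempted = sum(hours for term, hours in records
                    if entry_term <= term < entry_term + terms_window)
    return attempted >= hours_threshold

# A part-time student who attempts 6 hours in each of 3 terms enters
# the cohort, even if zero credits were earned:
records = [(0, 6), (1, 6), (2, 6)]
print(in_cohort(records, entry_term=0))  # True
```

Keying the filter to attempted rather than earned hours is what keeps the definition from biasing the sample toward success, as the paragraph above argues.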
Recognize that Community College Students Who Start Full-time Typically Do Not Remain Full-time. A number of studies suggest that the majority of community college students initially enrolling full-time switch to part-time attendance. This contrasts with students at most four-year institutions, who start and remain full-time. For example, 52 percent of students at community colleges that participate in the Achieving the Dream project began as full-time students. Yet only 31 percent attended full-time for the entire first year. Studies of Florida’s community colleges find similar results. Most students end up with a combination of full-time and part-time attendance, regardless of their initial status. Among students enrolled at least three additional semesters, only 30 percent of Florida’s “full-time” community college students enrolled full-time every semester. As a Florida College System report concludes, “Expecting a ‘full-time’ student to complete an associate degree in two years or even three assumes that the student remains full-time and this is most often not the case. As a result, students will progress at rates slower than assumed by models that consider initial full-time students to be full-time throughout their time in college.” Thus, comparisons of completion rates at 2-year and 4-year institutions, even controlling for full-time status in the first semester, are misleading. Studies at my college suggest that completion rates of community college students who start full-time and continuously attend full-time without interruption are comparable to completion rates attained at many four-year institutions.
Extend the Time for Assessing Completion to at Least Six Years. “Normal time” to completion excludes most associate degree completers. Due to part-time attendance, interrupted studies, and the need to complete remedial education, most associate degree graduates take more than three years to complete. Completion rates calculated at the end of three or four years will undercount true completion. It is not uncommon for a third of associate degree completers to take more than four years to complete their degree. At my institution, fully 5 percent of our associate degree recipients take 10 or more years to complete their “two-year” degree. These students are not failures; they are heroes. Yes, we would all like students to complete their degrees more quickly. But if life circumstances dictate a slower pace, let us support these students in their remarkable persistence. And, in our accountability reporting, recognize that our completion rate statistics are time-bound and fail to account for all who will eventually succeed in their degree pursuit.
When Comparing Completion Rates, Compare Institutions with Similar Students. Differences in completion rates among institutions largely reflect differences in student populations. Community college students who are similar to students at four-year institutions in academic preparation, and in their ability to consistently attend full-time, achieve completion rates comparable to those at many four-year institutions. In Maryland, if you include transfer as a community college completion, community colleges have four-year completion rates equal to or higher than the eight-year bachelor’s degree graduation rates at a majority of the state’s four-year institutions with open or low-selectivity admissions. And the completion rate of college-ready community college students -- those not needing developmental education — is similar to all but the most selective four-year schools. At my college, 88 percent of the students in our honors program have graduated with an associate degree in two years. This graduation rate is comparable with that of Johns Hopkins and above that of the flagship University of Maryland at College Park.
Students at four-year institutions who are similar in profile to the typical community college student have completion rates similar to those attained at community colleges. This is not a new finding. A March 1996 report, "Beginning Postsecondary Students: Five Years Later," identified the following “risk factors” affecting bachelor’s degree completion: delayed enrollment in higher education, being a GED recipient, being financially independent, having children, being a single parent, attending part-time, and working full-time while enrolled. Fifty-four percent of the students who had none of these risk factors earned the bachelor’s degree within five years. The graduation rate for students with just one of these risk factors fell to 42 percent. For students with two risk factors the bachelor’s degree graduation rate was 21 percent, and for those with three or more the graduation rate was 13 percent.
Readers of this essay who work at community colleges are probably smiling to themselves. For most community colleges, the majority, if not the overwhelming majority, of students are coping with several of these risk factors. And this list does not account for the need of most community college students for developmental or remedial education. The comparability of completion rates at two- and four-year institutions, when student characteristics are controlled for, should not be a surprising finding.
If we must compare completion rates, it is incumbent upon analysts to account for differences in the academic preparation and life circumstances of student populations. This can be done by sophisticated statistical analysis, or in the selection of peer groups of institutions with similar admissions policies and student body demographics.
Support Hopeful Signs at the Federal Level. The work to date of the Committee on Measures of Student Success authorized by the Higher Education Act of 2008 is encouraging. The committee is to make recommendations to the Secretary of Education by April 2012 regarding the accurate reporting of completion rates for community colleges.
A number of the recommendations in the committee’s draft report issued September 2, 2011 would greatly improve reporting of completion statistics for community colleges:
Defining the degree-seeking cohort for calculating completion rates by looking at student behavior, such as a threshold number of hours attempted.
Recognizing that “preparing students for transfer to a four-year institution is an equally positive outcome as a student earning an associate’s degree.”
Reporting a combined graduation-transfer rate as the primary outcome measure for degree-seeking students.
Creating an interim, persistence measure combining lateral transfer with retention at the initial institution.
These recommendations show an understanding of the student populations served by community colleges. Inclusion of these definitions and measures in federal IPEDS reporting would provide more meaningful peer, state, and national benchmarks for all community colleges.
Americans are infatuated with rankings. Or at least they seem to love arguing about them. Whether it’s the Bowl Championship Series or the 10 best-dressed list, debate rages. Mostly, this is harmless fun (not counting the Texas Congressman who called the Bowl Championship Series “communism”). But trying to rank colleges and universities in the same way we do football teams has the potential to seriously confuse the public about issues of real importance to our society.
The recent suggestions that Clemson manipulates data to improve its standing in the U.S. News and World Report (U.S. News) rankings are a case in point. I don’t believe any of the charges. First, according to the reports of journalists who were there, the accusations incorrectly describe one of the most important elements of the university’s reporting to the magazine (whether benefits are included in salary calculations) -- if that one is wrong, it’s hard to see the others as credible. Second, my organization (the South Carolina Commission on Higher Education) has experienced nothing but the highest integrity from Clemson on data and on all other issues.
The controversy not only reveals a distressing misunderstanding of the key facts, it also illustrates how the rankings can fail to represent what universities really do. To explain this I need to describe Clemson’s strategic plan.
I first encountered Clemson’s strategic planning some seven years ago, when I was working in another state and visited South Carolina to review proposals for its endowed chairs program. At that time, Clemson had the best planning process I’d seen. I’ve not read the planning documents of every university in the country, of course, but I’m confident most academic leaders would agree Clemson’s is one of the best.
What makes the plan so good?
First, instead of the abstract rhetoric that characterizes the planning of too many of the nation’s research universities, Clemson’s plan is specific and pragmatic, with clear and measurable goals.
More important, Clemson does a wonderful job of focusing its goals on tangible benefits to students and the state. One of many examples is the Clemson University International Center for Automotive Research (CU-ICAR), which ties a goal of increasing research funding to a strategically selected emphasis area focused on automotive and transportation technology. CU-ICAR advances the state’s economic development and leverages faculty expertise to increase the quality of vehicles and associated products while preparing students for jobs.
Plans are good but implementation is essential, and Clemson’s annual “report card” shows dramatic progress -- and also demonstrates that the University isn’t focused on magazine rankings. Of the 27 items in the report card, only eight reference U.S. News -- things such as high graduation rates that are priorities of all universities. Put another way, I’m confident Clemson’s strategic plan would be largely the same if the magazine and its rankings didn’t exist.
Why does Clemson reference the U.S. News rankings at all, then? One likely reason is that the public expects it. Another is that the magazine’s metrics conform relatively well to Clemson’s category -- the selective research university.
But that brings up the other problem with the rankings. One of the strengths of U.S. higher education -- one of the few areas where our nation consistently leads the world -- is in options available to students. We have institutions that serve a wide variety of needs and support a great range of public purposes.
If you did exceptional work in high school and want to be in a research-focused college environment, we have great options, including Clemson and the University of South Carolina in my state. Excellent students can also choose from smaller liberal arts focused universities and/or less expensive research institutions. If you’re a late bloomer and need help getting to your fullest potential, we have wonderful choices there also. Ditto if you need to commute.
And then there are internal focus areas such as great books, military studies, community service, and many more. Trying to rank all these variations would be as silly as the Bowl Championship Series attempting a single ranking across all sports, not just football. It would also be pointless.
The narrowness of the rankings’ focus shows in the weighting: U.S. News allocates 15 percent of its points to the quality of the incoming student body, 10 percent to financial resources, and most of the rest to factors that are strongly related to funding.
What if we rated universities on how they do with less-prepared students and shoestring funding? U.S. News wouldn’t be much help here. It does give 5 percent to a “value added” graduation rate, but that is only half the weight given to financial resources. If we used those criteria, I think all of our universities in South Carolina would excel (also our technical colleges, but that category isn’t rated by U.S. News). And, of course, my state isn’t alone in this.
If ranking is so problematic, why are we facing a blizzard of new comparative measures in higher education? The data-centric emphasis appears to stem from the thinking of an emerging group of national experts who regularly describe higher education as being in crisis, and therefore in desperate need of reform. More numbers, they suggest, will guide us to a solution.
I have two concerns about this approach. First, from my historian’s perspective, the whole reform agenda seems askew. American higher education is what it always has been -- overall very successful in what it does. The core problem is that the importance of higher education to society has changed, and colleges and universities are being asked to serve larger numbers of students, many of whom are poorly prepared and/or lack the stable financial background and family belief in the value of education that enhance persistence.
My second concern is with the idea that more data and more rankings and new formulas will lead to substantive change. I’m all in favor of more efficiency (Lumina’s Making Opportunity Affordable is a great initiative) and clearly we need to do a better job in accountability for student learning. I also agree that there are areas where additional data are needed. But I’m afraid that huge efforts at measuring and ranking will lead to at best marginal improvements in things that matter -- notably student learning and success to graduation.
It’s also the case that the emphasis on data -- and the inevitable controversy -- has the potential to distract us from some core issues. For example, funding really does matter (U.S. News certainly thinks so). The reality is that public colleges and universities have been doing more with less for a very long time. But you can’t do that forever and now we’re getting less for less. Using our meager resources to compile more data just in case it might be useful and spending scarce time tinkering with funding formulas and creating ever more elaborate rankings won’t solve the underlying fiscal crisis.
So what is the solution if higher education is to meet the nation’s new needs? My list has just three categories.
First, continuous improvement in student learning and in operational efficiency -- with a recognition that since campuses are largely maxed out on their own, greater efficiency will likely require additional multi-institutional approaches.
Second, more excellent planning of the focused and pragmatic type that Clemson has implemented (fortunately, I think this is very much in process with the current generation of presidents).
And, finally, there must be public recognition that, because higher education is essential to our economic development and quality of life, we can’t afford to have it keep sinking as a state funding priority. This last is one area where I think ranking does matter.
Garrison Walters is executive director of the South Carolina Commission on Higher Education.
At a time that community colleges are under growing pressure to collect and analyze data to improve what they do, their capabilities in institutional research are far behind where they should be, according to a new report.