Both are eagerly anticipated and extensively covered by the news media – particularly during spring months. Both contain key indicators necessary for informed, high-stakes decisions that affect the nation’s economy. But that’s where the similarities end.
The Digest of Education Statistics is an indispensable handbook for education analysts around the country, with detailed information on everything from elementary school enrollments to postsecondary institutions' revenue sources. Unfortunately, the data are neither very current nor very granular. While graduation season for 2010-11 is coming to a close at most institutions around the country, the latest national data on the aggregate number of students earning degrees are from 2008-09 -- two full years ago.
In contrast, the auto sales report shows monthly sales by manufacturer and model. This week we learn that auto sales fell in May for the first time in eight months, due to a combination of high gas prices and a shortage of fuel-efficient models created by Japan’s record earthquake. Industry-wide sales fell 3.7 percent, to 1.06 million vehicles, down from 1.1 million a year earlier. The Japanese car shortage benefited Chrysler, which enjoyed a 10 percent increase in sales.
These are the kinds of data carmakers look at constantly to help make decisions about production volume, plant closings (or openings), and pricing. Policymakers are also looking at these data to see whether the bailout of the auto industry was worth the price tag, and whether tax incentives and mileage regulations are changing the mix of models sold in the U.S.
Similar data on bachelor's and associate degrees awarded, or engineering and music degrees, are nearly two years old. So what do we know about the kind of short-term credentials students turned to in response to the nose-diving economy? Or about which states and institutions stepped up their game to meet the ambitious goal President Obama set to have the highest proportion of college graduates in the world by 2020? Not much.
Further, there is an important debate right now between those who see higher education's mission primarily as preparing graduates for jobs and economic success, and others who worry about the erosion of general education in today's colleges and universities. Tracking the number of jobs created each month as the economy recovers serves as a starting point for talking about the quality of those jobs -- are they temporary jobs, low-paid service-sector jobs, highly skilled professional jobs, jobs in new businesses or old industries?
We should be having a similar national conversation about our college graduates. Wouldn't it be nice to know whether or not there have been any recent changes, perhaps as a result of the economy or of external pressures on colleges, in the mix of vocationally oriented and broader liberal arts degrees awarded around the country?
Granted, college graduates are not cars, and industrial analogies to higher education often ring hollow. But aren’t they much more important? While higher education represents an increasing share of students’ budgets and of the American economy, and is a linchpin to job creation, we’re largely navigating in the dark. New governors and legislators are trying to get their bearings and figure out how to meet ambitious goals with the scarce resources available. A new Congress is starting to talk about whether we can afford Pell grants as we currently know them. And they’re having those conversations with two-year-old data. Imagine setting national policy for the car industry based on GM sales in June 2009. Something needs to change.
Reliable Estimates Are Possible in the Current System
As a former state-level official who was responsible for getting this kind of data out in Florida, I understand some of the reasons for the delay in reporting college completion data. The university system I worked for could be only as fast and as accurate as its slowest and least accurate member.
So imagine the situation of the federal National Center for Education Statistics (NCES), where instead of the 11 universities we had to worry about, they have 7,000. As I write this, someone at NCES or one of its contractors is still trying to figure out how a new beauty school in Alaska managed to award 50 nuclear engineering doctorates last year -- or something similarly strange.
But that size is also an advantage. If Florida State University didn’t report reliable numbers, the accuracy of any state-level report would be severely compromised. If Florida State doesn’t report to NCES, on the other hand, it’s a rounding error. And the Florida States of the world rarely cause problems -- it’s the rural community college that just lost its lone institutional researcher, or the new beauty school in Alaska that doesn’t yet have one.
Releasing Data Earlier Is Possible
Reporting more current college completion data is possible. Here’s how. NCES collects data on degrees awarded once a year. The deadline for 2009-10 completions was last October 28. By early February, preliminary numbers are available for institutional researchers’ use, but the final numbers are released in the summer, a full year after the prior year’s students have actually graduated.
The preliminary numbers available in February have proven to be a reliable estimate. In my experience, there is usually very little difference between the totals that can be calculated with the preliminary data and what eventually comes out several months later. In fact, reliable estimates for the nation and for most states could probably be reported as early as November or December, based on “early returns” from the fall survey. While that’s still a few months after the peak of graduation season, it’s gaining nearly two years of valuable time and information.
Reporting more current college completion data is worth doing. Some states recognize this and make every effort to get data out early. Kentucky, for example, recently reported that public and private credentials awarded in that state are up 11 percent in 2010-11, largely at the associate and certificate level. Such timely reporting is noteworthy – and rare. But even precocious data gatherers like Kentucky will find their numbers hard to interpret without knowing whether the trend in other states is up 15 percent or down 3 percent. And it will remain a strictly local news story, timed differently in every state, rather than an occasion for national reflection on the state of higher education. By contrast, consider the healthy competition for business and jobs under way among governors. When new unemployment numbers come out weekly, governors aren't just looking at their own states, they’re measuring themselves against their neighbors and against the nation as a whole.
The timely national release of top-line completion numbers would put a day on the calendar to spark a recurring national discussion about how we’re doing across state lines and relative to one another and to our ambitious goals. We can imagine states and colleges themselves vying to be among the top-performing institutions and using it in their marketing and recruitment efforts. The competition for new jobs is fierce among states, and a number of governors have tried using tax policy to poach business from other states. Wouldn’t it be nice to see similar competition based on recent state trends in numbers of highly skilled graduates?
Complete College Completion Data
In addition to being more timely, it’s important for college completion data to be more comprehensive. Most states have good data on their public institutions’ graduates well before NCES releases national numbers, but information on private colleges (both nonprofit and for-profit) is spotty. And yet none of the big attainment goals set by states, the White House, or the Lumina Foundation for Education can be achieved unless private higher education contributes a big share of the needed graduates.
Making good strategic higher education decisions at the state level requires analysis of both public and private institution data. Perhaps a steep drop in nursing graduates at public colleges is spurring discussion of financial incentives to graduate more RNs. But if nursing degrees at private institutions are booming, that may not seem such a wise use of public funds. And if the trend is the same at private colleges, then perhaps the incentives should be available there as well.
As our state and federal elected officials continue making difficult policy and budget choices, we should hope that they are doing so based on data that are current and that bring to the surface trends in higher education that can guide informed, effective budgeting and policy making. If we can generate detailed auto sales data monthly, unemployment claims weekly, and stock market updates by the second, we should be able to produce college completion data sooner than two years after the fact.
Nate Johnson served as executive director of planning and analysis for the State University System of Florida and as associate director of institutional research at the University of Florida. He is currently a senior consultant for HCM Strategists, a health and education public policy and advocacy firm.
The Council for Higher Education Accreditation (CHEA) has carefully followed the Health, Education, Labor and Pensions (HELP) Committee hearings on for-profit higher education, especially as attention has focused on accreditation. As a national institutional membership organization charged with coordination of nongovernmental accreditation, CHEA has also paid close attention to Congressional concerns about the credit hour and accreditation as reflected in Representative George Miller’s hearing this past summer and Senator Richard Durbin’s recent request of accrediting organizations for information about completion, retention and student defaults, among other issues.
A single message is emerging from Congress in these inquiries: based on examination of a small number of institutions and of actions by accrediting organizations, members of Congress appear to have determined that the entire accreditation enterprise is failing to do its job.
How did we move so quickly from identification of a modest number of concerns to a wholesale condemnation of the accreditation enterprise? How is this judgment warranted? While we must address the concerns that have been raised, are we on a path that could, at the same time, significantly diminish the value of accreditation to students, society and government?
Issues that members of Congress have identified over the past year require careful and immediate attention from educational institutions and accreditors. These have included allegations of aggressive and misleading recruitment practices, deceptive marketing, inadequate rates of persistence, completion and graduation, and rapid enrollment growth. We agree on important points:
Some decisions made by accrediting organizations have been questionable.
We are all alarmed about the combination of loan debt and failure to achieve educational goals for some students.
Everyone is concerned about the need to assure reliable evidence of quality.
Federal money should be available only to effective higher education institutions.
The federal government has a legitimate interest in holding federally recognized accrediting organizations accountable.
However, we need some context. Some 7,400 institutions are accredited by recognized accreditors. Of these, attention has been focused on 15 campuses involving five of the 80 recognized accrediting organizations. Do members of Congress think that what has been found on some campuses applies to all of higher education? To all accreditation activity?
The Value of Accreditation
We believe that Congress is overlooking compelling evidence of the value and longstanding effectiveness of accreditation.
Accreditation has a long relationship with the federal government, serving as a reliable authority on academic quality and providing access to federal funds. Called “gatekeeping,” this responsibility goes back to the 1950s. Would the government have continued to rely on accreditors for almost 60 years if these organizations had been perceived as incompetent?
Accreditation plays several important roles in U.S. society, in addition to assisting the federal government. State governments rely on accreditation when making judgments related to funding and licensure. The private sector depends on accreditation when recruiting graduates or providing funds for research or tuition assistance. Students and the public rely on accreditation to signal threshold quality and some likelihood of successful transfer of credit. Where would decades of such reliance have come from, if not from public confidence in accreditation?
Internationally, accredited status is the first and fundamental indicator that a U.S. institution or program has value. U.S. accrediting organizations are in demand and the U.S. process is replicated in many places around the world. Would this be the case if accreditation were the shoddy enterprise sometimes depicted?
We can avoid the hyperbole of “best in the world” and still point to the United States as sustaining one of the strongest, most effective and diverse higher education enterprises in the world, simultaneously providing extraordinary -- not perfect -- access and quality. No one is claiming that accreditation caused this to happen. But -- how can one deny that accreditation is part of this enormous success?
Accreditation is grounded in and reflects the core academic values of higher education: the importance of institutional academic leadership, the power and effectiveness of peer/professional review, and the centrality of academic freedom. These values are building blocks that have been essential to the success of the higher education enterprise mentioned above. Does it make sense to ignore this?
Building on this history, accreditation has also accepted responsibility for change. Responding to calls from the higher education community, government and foundations, accreditation is enhancing its transparency and strengthening its attention to student achievement. CHEA, through its publications, conferences and recognition process, has promoted these changes for more than a decade. There is progress here, even as the pace of change may be too slow for some.
It’s About Accountability
CHEA is concerned that this wholesale condemnation of accreditation is part of an emerging approach to accountability on the part of government that, if fully implemented, will be problematic for students, for colleges and universities, and for government itself. This approach dates from the 2006 report of the Secretary of Education’s Commission on the Future of Higher Education and is based on what some members of the commission considered to be the limitations of both higher education and accreditation. The modest number of instances in which, arguably, institutions and accreditors in the for-profit arena have been lacking appears to reinforce the perceived wisdom of this emerging approach.
Until five years ago, government, through the U.S. Department of Education and the National Advisory Committee on Institutional Quality and Integrity, held accreditation accountable through evidence that (1) accrediting organizations set expectations of quality through their standards and processes, (2) institutions and programs worked to meet the standards and (3) accreditors would take action when standards were not met, up to and including denial or removal of accredited status. The federal government invested in this peer/professional review process because it was considered to be an effective means to examine the academic operation of colleges and universities, whether curriculums, academic standards, or faculty. The courts, in case after case, sustained this position of deference to professional judgment of quality by the academy. The system worked.
This system is vanishing. Accreditor judgment is being augmented or supplanted by government judgment.
Government now questions whether simply holding accreditors accountable for having and maintaining standards and processes is sufficient. Officials are more and more inclined to decide the standards and processes for which accreditors are accountable. Government is taking the next step: determining the content and level of expectation of accreditation standards and how various accreditation processes are to be carried out.
There is a context for this shift. The large amount of money flowing into higher education is often cited as a primary factor in government’s expanded examination of colleges, universities and accreditation. Congress is appropriately concerned about the return on its major investment of federal financial aid dollars: millions of students receive a total of $150 billion in aid annually. And, especially in today’s world of economic difficulty, there are many questions about funding higher education access unless there is a high probability of student success, usually defined in terms of completion, graduation or job acquisition. Other issues -- international competitiveness and the importance of postsecondary education to the economic well-being of students and society -- are brought up as well.
It is reasonable to ask, however, whether this emerging approach to accountability is good for students, for higher education and for the country. Do we want Congress making decisions about the number of credit hours that students can earn, instead of faculty who have been making these determinations for generations? Do we want Congress, instead of academic administrators, attending to enrollment growth in individual colleges and universities? Do we want Congress, rather than governing boards of colleges and universities, scrutinizing executive compensation? Why is Congress, not the decision-making commissions and councils, designing accreditation appeal processes?
We are addressing the serious concerns about a small number of institutions and accreditation actions. However, this does not require an approach to accountability that can fundamentally undermine the worth of accreditation and destroy its value to students, society and government.
Judith S. Eaton is president of the Council for Higher Education Accreditation.
Numbers fascinate and inform. Numbers add precision and authority to an observation (although not necessarily as much as often perceived). The physical sciences revolve around the careful measurement of precise and repeatable observations, usually in carefully controlled experiments.
The social sciences, on the other hand, face a much more challenging task, dealing with the behavior of people who have an unfortunate tendency to think for themselves, and who refuse to behave in a manner predicted by elegant theories.
Under the circumstances, it's really quite remarkable that statistical predictions are as useful as they are. Advertisers ignore, at their peril, conclusions based on data gathered on large numbers of people acting alike. Supermarket shoppers or football fans behave in much the same way, no matter the infinite number of ways each member of the population differs in other respects. In their interaction with the location of shelved foods -- or forward passes caught -- few of these variations make a difference.
Population samples composed of large numbers of uniform members can be defined, observations made, statistical calculations performed, and policy deduced with astonishing accuracy.
Efforts have been made to extend this methodology to the classroom, and trillions of data elements have been gathered over the past 30 years describing K-12 activities, students, inputs, and outcomes. But judging from the state of K-12 education, little in the way of useful policy or teaching strategy has emerged. The reason is not immediately clear, but one surmises that while the curriculum path for K-12 children is similar, the natural variation among children, in teachers, in social circumstances and in school environment makes it impossible to create a uniform population out of which samples can be drawn.
At the postsecondary level, the problem facing the number gatherer is greatly exacerbated. Every student is different, almost intentionally so. A college might have 25 different majors each with three or four concentrations. Students take different core courses in different order, from different teachers. They mature differently, experience life differently and approach their studies differently. When all the variables which relate to college learning are taken into account, there is no broad student population. Put another way, the maximum size of the population to be examined is one!
This reality informed traditional accreditation. Experts in a field spoke to numbers of students, interviewed faculty, observed classroom lectures, and, using their own experience and expertise as backdrop, arrived at a holistic conclusion. There was nothing "scientific" about the process, but it proved remarkably successful. This is the accreditation that is universally acknowledged to have enabled American colleges and universities to remain independent, diverse, and the envy of the world.
In 1985, or thereabouts, voices were heard offering a captivating proposal. Manufacturers, they said, are able to produce vast numbers of items successfully, with ever-decreasing numbers of defects, using counting and predictive strategies. Could not similar approaches enhance higher education, provided there were sufficient outcome data available? Some people, including then-Secretary of Education William Bennett, swallowed the argument whole. Others resisted, and the controversy played itself out (and was recorded!) in the proceedings of the National Advisory Committee on Accreditation and Institutional Eligibility (predecessor of the current National Advisory Committee on Institutional Quality and Integrity) between 1986 and 1990.
Advocates persisted, and states, one by one, were convinced of the necessity to measure student learning. And measure they did! Immense amounts of money, staff time, and energy went into gathering and storing numbers. Numbers that had no relevance to higher education, to effectiveness, to teaching or to learning. "Experts" claimed that inputs didn't count, and those who objected were derided as the accreditors who, clipboard in hand, wandered around "counting books in the library."
At one point, the U.S. Department of Education also adopted the quantitative "student outcomes" mantra, and accrediting agencies seeking recognition by the education secretary were told to "assess." "Measure student learning outcomes," the department ordered, "and base decisions on the results of these measurements."
Under duress, accreditors complied and subsequently imposed so-called accountability measures on defenseless colleges and universities. In essence, the recognition function was used as a club to force accreditation to serve as a conduit, instead of barrier, to government intrusion into the affairs of independent postsecondary institutions.
Today, virtually all those who headed accreditation agencies in the 1990s are gone, and the new group of accreditors arrived with measured student learning outcomes and assessment requirements firmly in place. Similarly, college administrators hired in the last decade must profess fealty to the data theology. Both in schools and in accrediting agencies, a culture of assessment for its own sake has settled in.
But cautionary voices remain, arguing that the focus on quantitative measures, and the use of rubrics that have never been substantiated for reliability and validity, are costly to the goals of teaching and learning.
Numbers displace. Accreditors have been forced to rely on irrelevant numerical measures, rather than on the intense direct interaction that is one of the essentials of peer review. If there are failings to accreditation, they are at least partially due to decisions made on the basis of "data," rather than the intensely human interaction between site visitors and students, faculty, alumni, and staff.
Numbers mislead. Poor schools are able to provide satisfactory numbers, because the proxies proposed as establishing institutional success are, at best, remotely connected to quality and are therefore easily gamed. Bad schools can almost invariably produce good numbers.
Numbers distort. Participants at a national conference sponsored a few years ago by the U.S. Department of Education were astonished to learn that colleges had paid students to take the Collegiate Learning Assessment. Other researchers pointed out that seniors attributed no importance to the CLA and performed indifferently. Under the circumstances, it is impossible to use CLA results as a basis for a value-added conclusion. Can we legitimately have a national conversation about the "lack of evidence of growth of critical thinking" in college, based on such data?
Numbers distract. The focus on assessment has captured the center stage of national educational groups for almost two decades. A quick review of annual meeting agendas of major national education conferences reveals that pervasive assessment topics have moved educators away from their proper concentration on learning and teaching. Seemingly, many people believe that effective assessment will result in improved teaching and learning. One observer compared this leap in logic to improving the health of a deathly ill person by taking his temperature. The current emphasis on "better" measures, then, would correspond to using an improved thermometer.
Numbers divert. Faculty members spend an untold number of hours outside of classroom time on useless assessment exercises. At least some of this time would otherwise have been available for engagement with students. Numbers divert our focus in other ways as well. Instead of conversations about deep thinking, lifelong learning, and carefully structured small experiments to address achievement gaps, faculty must focus on assessment and measurement!
Assessment has become a recognizable cost center at some institutions, still without any policy outcomes or improvements to teaching and learning, in spite of almost thirty years of effort.
This is not to be taken as a blanket attack on numbers. There are fields, particularly those with an occupational component, for which useful correlations between numerical outcomes and quality can be made. There are accrediting agencies that are instituting numerical measures in a carefully controlled, modest fashion, establishing correlations, testing them against reality, and building from there. Finally, there are fields with discrete, denumerable outcomes for which numbers can contribute to an understanding and a measure of effectiveness. But many other accreditors have been forced to impose measuring protocols, which speak to the flaws noted above.
It's time to restore balance. Government must begin to realize that while it is bigger than anyone else, it is not wiser. And those who triggered this thirty-year, devastatingly costly experiment should have the decency to admit they were wrong (as did one internationally known proponent at the February 4th NACIQI meeting, stating "with respect to measuring student learning outcomes, we are not there yet").
The past should serve as an object lesson for the future, particularly in view of the recently released Degree Qualifications Profile (DQP) bearing all the signs of another "proxy" approach to the judgment of quality.
Our costly "numbers" experience tells us that nothing should be done to implement this DQP until after a multi-year series of small experiments and pilot programs has been in place and preliminary conclusions drawn. Should benefits emerge, an iterative process with ever more relevant features can be presented to the postsecondary community. If not, not.
But no more should a social experiment be imposed on the American people, without the slightest indication of reliability, validity or even relevance to reality.
Bernard Fryshman is an accreditor and a professor of physics.