Bachelor's degree recipients in 2007-8 who began their postsecondary education at a community college took almost 20 percent longer to complete their degrees than did those who started out at a four-year institution; those who began at four-year private colleges finished faster than did those at four-year public and for-profit institutions; and those who delayed entry into college by more than a year after high school took almost 60 percent longer to complete their degrees than did those who went directly to college.
That a large number of academically gifted and economically affluent students (or their parents) have become savvy consumers, getting their first two years of general education courses out of the way at low-cost community colleges rather than pricier state schools and liberal arts colleges?
That by doing so, these would-be competitive admissions students are taking up a large number of slots at community colleges that would otherwise be filled by less academically gifted or less economically affluent students?
That private nonprofit schools, meanwhile, are maintaining their competitive admissions edge by providing more merit-based tuition discounts rather than need-based tuition discounts? That by doing so, these schools become less and less of an option for those less fortunate?
And that, as the number of well-paying blue collar jobs shrinks in response to the changing nature of the economy, the American middle class must either contract, or the skills needed to gain and retain a well-paying job must somehow expand?
I hope we can find consensus around those points. Most people can at least agree on the connection between college education and well-paying jobs, and the need to up-skill the American workforce in order to defend a society in which the benefits of middle class living are widely shared and enjoyed. Most can also agree that higher education access is shrinking in response to a variety of external pressures, including state budget cuts to higher education and a more consumer-savvy insistence on tuition dollar value.
Now we reach the question on which many people disagree. Do less academically prepared, less affluent individuals deserve an opportunity to receive a higher education? And, if so, should they attend institutions best situated to respond to their particular academic, social and emotional needs, or should they be forced to accept whatever public school option may be available -- regardless of the institution’s track record in retaining and graduating students?
These are the questions at the heart of the current debate surrounding private sector colleges and universities (PSCUs). These institutions cost the student more to attend than a public school does, but only because taxpayers, through generous subsidies, pay the bulk of education costs at community colleges; as a result, the absolute cost of postsecondary attendance is actually lower at the private sector alternative. The Institute for Higher Education Policy recently issued a report about low-income adults in postsecondary education, noting -- as many in higher education have long been aware -- that a significant percentage of low-income and minority students attend PSCUs and community colleges. From the perspective of our critics, PSCUs “target” these students while community colleges “serve” them.
Both types of institutions operate in what is largely an open admissions environment (although my own institution does not). Both serve the adult student, who is often financially independent. Both strive to provide students with an education that facilitates career-focused employment (although community colleges wear many other postsecondary hats as well). Both use advertising as well as word of mouth referrals to attract students. But many PSCU students have already attended a community college and opted out for various reasons, including the long waits to enter the most popular programs, large class sizes and inflexible schedules. These problems are all made worse by state budget cuts to higher education.
PSCU students do pay more out of their own pockets than do community college students, but PSCU students see the cost justified by what they receive in return. This value expresses itself in greater individual attention and support ... in having confidence in academic skills restored where they may be flagging ... in gaining new motivation to succeed and seeing that motivation reinforced through success itself ... and in making the connection between classroom learning and employable skills real and direct.
Two-year PSCU institutions graduate students at three times the rate of community colleges. Placement rates are the bottom line on career-focused education, however, and while community colleges offer lower-cost career programs without outcome metrics, PSCUs must match their career education offerings with real placement of students in relevant jobs. Again, PSCU students see this outcomes-based approach as a difference worth paying for.
In this broader context, the irony of PSCUs being accused of “targeting” students becomes clear. Apparently where some see targeting of low income and minority students unable to make informed decisions about their futures, we see tailoring of postsecondary education to suit a nontraditional student population -- and a better fit all around.
Arthur Keiser is chairman of the Association of Private Sector Colleges and Universities and chancellor of Keiser University.
Do majors matter? Since students typically spend more time in their area of concentration than anywhere else in the curriculum, majors ought to live up to their name and produce really major benefits. But do they?
Anthony P. Carnevale, the director of Georgetown’s Center on Education and the Workforce, has recently provided a clear answer. Majors matter a lot -- a lot of dollars and cents. In a report entitled “What’s It Worth?” he shows how greatly salaries vary by major, from $120,000 on average for petroleum engineers down to $29,000 for counseling psychologists.
But what if one asked whether majors make differing contributions to students’ cognitive development? The answer is once again yes, but the picture looks very different from the one in the Georgetown study.
A few years ago, Paul Sotherland, a biologist at Kalamazoo College in Michigan, asked an unnecessary question and got not an answer but a tantalizing set of new questions. It was unnecessary because most experts in higher education already knew the answer, or thought they did: as far as higher-order cognitive skills are concerned, it doesn’t matter what you teach; it’s how you teach it.
What Sotherland found challenged that conventional wisdom and raised new questions about the role of majors in liberal education. Here’s what he did. Kalamazoo had been using the Collegiate Learning Assessment (CLA) to track its students’ progress in critical thinking and analytical reasoning. After a few years it became clear that Kalamazoo students were making impressive gains from their first to their senior years. Sotherland wondered if those gains were across the board or varied from field to field.
Since gains in CLA scores tend to follow entering ACT or SAT scores, Sotherland and his colleagues “corrected” the raw data to see what gains might be attributed to instruction. They found significant differences among the divisions, with the largest gains (over 200 points) in foreign languages, about half that much in the social sciences, still less in the fine arts and in the humanities, and least of all in the natural sciences.
How was this to be explained? Could reading Proust somehow hone critical thinking more than working in the lab? (Maybe so.)
But the sample size was small and came from one exceptional institution, one where students in all divisions did better than their SAT scores would lead one to expect, and where the average corrected gain on CLA is 1.5 standard deviations, well above the national average. (Perhaps Inside Higher Ed should sponsor the “Kalamazoo Challenge,” to see if other institutions can show even better results in their CLA data.)
The obvious next step was to ask Roger Benjamin of the Collegiate Learning Assessment if his associates would crunch some numbers for me. They obliged, with figures showing changes over four years for both parts of the CLA -- the performance task and analytical writing. Once again, the figures were corrected on the basis of entering ACT or SAT scores.
The gains came in clusters. At the top was sociology, with an average gain of just over 0.6 standard deviations. Then came multi- and interdisciplinary studies, foreign languages, physical education, math, and business with gains of 0.50 SDs or more.
The large middle cluster included (in descending order) education, health-related fields, computer and information sciences, history, psychology, law enforcement, English, political science, biological sciences, and liberal and general studies.
Behind them, with gains between 0.30 and 0.49 SDs, came communications (speech, journalism, television, radio etc.), physical sciences, nursing, engineering, and economics. The smallest gain (less than 0.01 standard deviations) was in architecture.
The list seemed counterintuitive to me when I first studied it, just as the Kalamazoo data had. In each case, ostensibly rigorous fields, including most of the STEM disciplines (math was the exception), had disappointing results. Once again the foreign languages shone, while most other humanistic disciplines cohabited with unfamiliar bedfellows such as computer science and law enforcement. Social scientific fields scattered widely, from sociology at the very top to economics close to the bottom.
When one looks at these data, one thing is immediately clear. The fields that show the greatest gains in critical thinking are not the fields that produce the highest salaries for their graduates. On the contrary, engineers may show only small gains in critical thinking, but they often command salaries of over $100,000. Economists may lag as well, but not at salary time, when, according to “What’s It Worth?” their graduates enjoy median salaries of $70,000. At the other end, majors in sociology and in French, German and other commonly taught foreign languages may show impressive gains, but they have to be content with median salaries of $45,000.
But what do these data tell us about educational practice? It seems unlikely that one subject matter taken by itself has a near-magical power to result in significant cognitive gains while another does nothing of the sort. If that were the case, why do business majors show so much more progress than economics majors? Is there something in the content of a physical education major (0.50 SDs) that makes it inherently more powerful than a major in one of the physical sciences (0.34 SDs)? I doubt it.
Since part of the CLA is based on essays students write during the exam, perhaps the natural science majors simply had not written enough to do really well on the test. (That’s the usual first reaction, I find, to unexpected assessment results -- "there must be something wrong with the test.") That was, however, at best a partial explanation, since it didn’t account for the differences among the other fields. English majors, for example, probably write a lot of papers, but their gains were no greater than those of students in computer sciences or health-related fields.
Another possibility is that certain fields attract students who are ready to hone their critical thinking skills. If so, it would be important to identify what it is in each of those fields that attracts such students. Are there, for example, “signature pedagogies” that have this effect? If so, what are they, and how can their effects be maximized? Or is it that certain pedagogical practices, whether or not they attract highly motivated students, increase critical thinking capacities -- and others as well? For example, the Wabash national study has identified four clusters of practices that increase student engagement and learning in many areas (good teaching and high-quality interactions with faculty, academic challenge and high expectations, diversity experiences, and higher-order, integrative, and reflective learning).
Some fields, moreover, may encourage students to “broaden out” -- potentially important for the development of critical thinking capacities as one Kalamazoo study suggests. Other disciplines may discourage such intellectual range.
One other hypothesis, I believe, also deserves closer consideration. The CLA is a test of post-formal reasoning. That is, it does not seek to find out if students know the one right answer to the problems it sets; on the contrary, it rewards the ability to consider the merits of alternative approaches. That suggests that students who develop the habit of considering alternative viewpoints, values and outcomes and regularly articulate and weigh alternative possibilities may have an advantage when taking the CLA exam, and quite possibly in real-life settings as well.
Since the study of foreign languages constantly requires the consideration of such alternatives, their study may provide particularly promising venues for the development of such capacities. If so, foreign languages have a special claim on attention and resources even in a time of deep budgetary cuts. Their "signature pedagogies," moreover, may provide useful models for other disciplines.
These varying interpretations of the CLA data open up many possibilities for improving students’ critical thinking. But will these possibilities be fully utilized without new incentives? The current salary structure sends a bad signal when it puts the money where students make very small gains in critical thinking, and gives scant reward to fields that are high performers in this respect. (For example, according to the College & University Professional Association for Human Resources, full professors in engineering average over $114,000, while those in foreign languages average just over $85,000.)
Isn’t it time to shift some resources to encourage experimentation in all fields to develop the cognitive as well as the purely financial benefits of the major?
W. Robert Connor
W. Robert Connor is senior advisor to the Teagle Foundation.
Almost every college or university publishes a number called the student/faculty ratio as an indicator of undergraduate instructional quality. Among the many spurious data points exploited by commercial ranking agencies, this one holds a special place.
The mythology would have it that a low ratio, say 10 students per faculty member, indicates a university whose undergraduates take most of their instruction in small groups with a faculty instructor, and presumably learn best in those conditions. In contrast, a high number, say 25 students per faculty member, might lead us to think of large classes and less effective, impersonal instruction.
These common impressions are mostly public relations. The ratio means none of this, because the numbers used to calculate it are usually unreliable for comparing different universities or colleges and because the basic premise about small classes is flawed.
To illustrate the meaninglessness of the ratio, imagine two universities with exactly the same number of students, say 5,000, and the same number of faculty, say 500. Both institutions would report a student/faculty ratio of 10, and following common wisdom, we might imagine that both have the same teaching environment. The data do not show, however, what the faculty do with their time.
Imagine that the first university has faculty of high prestige by virtue of their research accomplishments, and that these faculty spend half of their time in the classroom and half in research activities, a pattern typical of research institutions. Imagine, too, that the second university in our example has faculty less active in research but fully committed to the teaching mission of their college. Where the research-proficient faculty at our first institution spend only half their time in class, the teaching faculty in the second institution spend all of their time in the classroom.
Correcting the numbers to reflect the real commitment of faculty to teaching would give an actual student to teaching-faculty ratio of 20 to 1 for the research institution and 10 to 1 for the teaching college. The official reported ratio is wildly misleading at best.
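The correction described above is simple arithmetic; here is a minimal sketch in Python (the function name and the figures are illustrative, drawn from the hypothetical universities in the example, not from any reported data):

```python
def effective_ratio(students, faculty, teaching_fraction):
    """Student/teaching-faculty ratio after discounting faculty time
    spent outside the classroom (e.g., on research)."""
    return students / (faculty * teaching_fraction)

# Both universities report 5,000 students and 500 faculty,
# so the official student/faculty ratio is 10 for each.
research_u = effective_ratio(5000, 500, 0.5)  # faculty teach half-time
teaching_c = effective_ratio(5000, 500, 1.0)  # faculty teach full-time
print(research_u, teaching_c)  # 20.0 10.0
```

Printed side by side, the two "identical" institutions come out at 20 to 1 and 10 to 1 -- precisely the gap the official ratio conceals.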
The official student/faculty ratio is suspect for yet another reason. It appears as an indicator of something valuable in an institution's teaching and learning process. The reported ratio implies that having a small number of students in a class indicates high instructional quality and effective learning. It may be that in K-12 settings, small class sizes help struggling students learn. In reasonably high-quality colleges and universities, however, the evidence is different.
In some classes, for example those that teach beginning languages or performance studios in music, students do learn better when taught in small to very small groups. In the core business curriculum, in basic economics, in art and music appreciation, in history and psychology introductory courses, and many other subjects, students learn as much in large classes of more than 100 as they do in small classes of fewer than 25. In real life, smart universities mix large and small classes so that students can get small classes when small size makes a difference and find a place in large classes when that format works just as well.
A Different Way?
If universities really cared to give students, prospective students and parents a picture of the instructional pattern at their institutions, they would erase the unhelpful student/faculty ratio and instead, provide a more useful measure.
They could analyze the transcripts of their most recent graduating class and report the pattern of large and small classes actually experienced by graduating seniors.
How many courses did the graduates take in their major with fewer than 20 students? How many general education courses did they take with over 50 or over 100? How many of their courses during their undergraduate years had a tenure-track faculty instructor, and how many had a visitor, a part-time faculty member, or a teaching assistant as an instructor?
This kind of report would encourage institutions to explain why the nontenure track instructors teach as well as the tenure-track faculty, and it would give parents and prospective students an accurate understanding of the actual teaching mix they should expect during their undergraduate years.
Such accuracy might not be as good advertising as the misleading student/faculty ratio, but it would have the virtue of reflecting reality, and it would encourage us to talk clearly about the design and the delivery of the undergraduate education we provide.
Unfortunately, some of us are old enough to have passed through various incarnations of the accountability movement in higher education. Periodically university people or their critics rediscover the notion of accountability, as if the notion of being accountable to students, parents, legislators, donors, federal agencies, and other institutional constituencies were something new and unrecognized by our colleagues. We appear to have entered another cycle, signaled by the publication last month of a call to action by the State Higher Education Executive Officers (SHEEO) association, with support from the Ford Foundation, called "Accountability for Better Results."
The SHEEO report has the virtue of recognizing many of the reasons why state-level accountability systems fail, and it focuses its attention primarily on the issue of access and graduation rates. While this is a currently popular and important topic, the SHEEO report illustrates why the notion of "accountability" by itself has little meaning. Universities and colleges have many constituencies, consumers, funding groups, interested parties, and friends. Every group expects the university to do things in ways that satisfy its goals and objectives, and seeks "accountability" from the institution to ensure that its priorities drive the university’s performance. While each of these widely differentiated accountability goals may be appropriate for each group, the sum of these goals does not approach anything like "institutional accountability."
Accountability has special meaning in public universities where it usually signifies a response to the concerns of state legislators and other public constituencies that a campus is actually producing what the state wants with the money the state provides. This is the most common form of accountability, and often leads to accountability systems or projects that attempt to put all institutions of higher education into a common framework to ensure the wise expenditure of state money on the delivery of higher education products to the people.
In this form, accountability is usually a great time sink with no particular value, although it has the virtue of keeping everyone occupied generating volumes of data of dubious value in complex ways that will exhaust the participants before having any useful impact. The SHEEO report is particularly clear on this point.
This form of accountability has almost no practical utility because state agencies cannot accurately distinguish one institution of higher education from another for the purposes of providing differential funding. If the state accountability system does not provide differential funding for differential performance, then the exercise is more in the nature of an intense conversation about what good things the higher education system should be doing than a process for creating a system that could actually hold institutions accountable for their performance.
Public agencies rarely hold institutions accountable because to do so requires that they punish the poor performers or at least reward the good performers. No institution wants a designation as a poor performer. An institution with problematic performance characteristics as measured by some system will mobilize every political agent at its disposal (local legislators, powerful alumni and friends, student advocates, parents) to modify the accountability criteria to include sufficient indicators on which they can perform well.
In response to this political pressure, and to accommodate the many different kinds, types and characteristics of institutions, the accountability system usually ends up with 20, 30 or more accountability measures. No institution will do well on all of them, and every institution will do well on many of them, so in the end, all institutions will qualify as reasonably effective to very effective, and all will remain funded more or less as before.
The lifecycle of this process is quite long and provides considerable opportunity for impassioned rhetoric about how well individual institutions serve their students and communities, how effective the research programs are in enhancing economic development, how much the public service activities enhance the state, and so on. At the end, when most participants have exhausted their energy and rhetoric, and when the accountability system has achieved stasis, everyone will declare a victory and the accountability impulse will go dormant for several years until rediscovered again.
Often, state accountability systems offer systematic data reporting schemes with goals and targets defined in terms of improvement, but without incentives or sanctions. These systems assume that the value of measuring alone will motivate institutions to improve to avoid being marked as ineffective. This kind of system has value in identifying the goals and objectives of the state for its institutions, but often relegates the notion of accountability to the reporting of data rather than the allocation of money, where it could make a significant difference.
If an institution, state, or other entity wants to insist on improved performance from universities, they must specify the performance they seek and then adjust state appropriations to reward those who meet or exceed the established standard. Reductions in state budgets for institutions that fail to perform are rare for obvious political reasons, but the least effective system is one that allocates funds to poorly performing institutions with the expectation that the reward for poor performance will motivate improvement. One key to effective performance improvement, reinforced in the SHEEO report, is strictly limiting the number of key indicators for measuring improvement. If the number of indicators exceeds 10, the exercise is likely to find all institutions performing well on some indicator and therefore all deserving of continued support.
Often the skepticism that surrounds state accountability systems stems from a mismatch between the goals of the state (with an investment of perhaps 30 percent or less of the institutional budget) and those of the institutions. Campuses may seek nationally competitive performance in research, teaching, outreach, and other activities. States may seek improvement in access and student graduation rates as the primary determinants of accountability. Institutions may see the state’s efforts as detracting from the institution’s drive toward national reputation and success. Such mismatches in goals and objectives often weaken the effectiveness of state accountability programs.
Universities are very complex and serve many constituencies with many different expectations about the institutions’ activities. Improvement comes from focusing carefully on particular aspects of an institution’s performance, identifying reliable and preferably nationally referenced indicators, and then investing in success. While the selection of improvement goals and the development of good measures are essential, the most important element in all improvement programs is the ability to move money to reward success.
If an accountability system only measures improvement and celebrates success, it will produce a warm glow of short duration. Performance improvement is hard work and takes time, while campus budgets change every year. Effective measurement is often time consuming and sometimes difficult, and campus units will not participate effectively unless there is a reward. The reward that all higher education institutions and their constituent units understand is money. This is not necessarily money reflected in salary increases, although that is surely effective in some contexts.
Primarily what motivates university improvement, however, is the opportunity to enhance the capacity of a campus. If a campus teaches more students, and as a result earns the opportunity to recruit additional faculty members, this financial reward is of major significance and will motivate continued improvement. At the same time, the campus that seeks improvement cannot reward failure. If enrollment declines, the campus should not receive compensatory funding in hopes of future improvement. Instead, a poorly performing campus should work harder to get better so it too can earn additional support.
In public institutions, the small proportion of state funding within the total budget limits the ability of state systems to influence campus behavior by reallocating funding. In particular, in many states, most of the public money pays for salaries, and reallocating funds proves difficult. Nonetheless, most public systems and legislatures can identify some funds to allocate as a reward for improved performance. Even relatively small budget increases represent a significant reward for campus achievements.
Accountability, as the SHEEO report highlights, is a word with no meaning until we define the measures and the purpose. If we mean accountability to satisfy public expectations for multiple institutions on many variables, we can expect that the exercise will be time consuming and of little practical impact. If we mean accountability to improve the institution’s performance in specific ways, then we know we need to develop a few key measures and move at least some money to reward improvement.
John V. Lombardi
John V. Lombardi, chancellor and professor of history at the University of Massachusetts Amherst, writes Reality Check every two weeks.
At the annual meeting of one of the regional accrediting agencies a few years ago, I wandered into the strangest session I’ve witnessed in any academic gathering. The first presenter, a young woman, reported on a meeting she had attended that fall in an idyllic setting. She had, she said, been privileged to spend three days “doing nothing but talking assessment” with three of the leading people in the field, all of whom she named and one of whom was on this panel with her. “It just doesn’t get any better than that!” she proclaimed. I kept waiting for her to pass on some of the wisdom and practical advice she had garnered at this meeting, but it didn’t seem to be that kind of presentation.
The title of the next panel I chose suggested that I would finally learn what accrediting agencies meant by “creating a culture of assessment.” This group of presenters, four in all, reenacted the puppet show they claimed to have used to get professors on their campus interested in assessment. The late Jim Henson, I suspect, would have advised against giving up their day jobs.
And thus it was with all the panels I tried to attend. I learned nothing about what to assess or how to assess it. Instead, I seemed to have wandered into a kind of New Age revival at which the already converted, the true believers, were testifying about how great it was to have been washed in the data and how to spread the good news among non-believers on their campus.
Since that time, I’ve examined several successful accreditation self-studies, and I’ve talked to vice presidents, deans, and faculty members, but I’m still not sure what a “culture of assessment” is. As nearly as I can determine, once a given institution has arrived at a state of profound insecurity and perpetual self-scrutiny, it has created a “culture of assessment.” The self-criticism and mutual accusation sessions favored by Communist hardliners come to mind, as does a passage from a Creedence Clearwater Revival song: “Whenever I ask, how much should I give? The only answer is more, more!”
Most of the faculty resistance we face in trying to meet the mandates of the assessment movement, it seems to me, stems from a single issue: professors feel professionally distrusted and demeaned. The much-touted shift in focus from teaching to student learning at the heart of the assessment movement is grounded in the presupposition that professors have been serving their own ends rather than meeting the needs of students. Some fall into that category, but whatever damage they do is greatly overstated, and there is indeed a legitimate place in academe for professors who are not for the masses. A certain degree of quirkiness and glorious irrelevance was once considered par for the course, and students used to be expected to take some responsibility for their own educations.
Clearly, from what we are hearing about the new federal panel studying colleges, the U.S. Department of Education believes that higher education is too important to be left to academics. What we are really seeing is the re-emergence of the anti-intellectualism endemic to American culture and a corresponding redefinition of higher education in terms of immediately marketable preparation for specific jobs or careers. The irony is that the political party that would get big government off our backs has made an exception of academe.
This is not to suggest, of course, that everything we do in the name of assessment is bad or that we don’t have an obligation to determine that our instruction is effective and relevant. At the meeting of the National Association of Schools of Art and Design, I heard a story that illustrates how the academy got into this fix. It seems an accreditor once asked an art faculty member what his learning outcomes were for the photography course he was teaching that semester. The faculty member replied that he had no learning outcomes because he was trying to turn students into artists and not photographers. When asked then how he knew when his students had become artists, he replied, “I just know.”
Perhaps he did indeed “just know.” One of the most troubling aspects of the assessment movement, to my mind, is the tendency to dismiss the larger, slippery issues of sense and sensibility and to measure educational effectiveness only in terms of hard data, the pedestrian issues we can quantify. But, by the same token, every photographer must master the technical competencies of photography and learn certain aesthetic principles before he or she can employ the medium to create art. The photography professor in question was being disingenuous. He no doubt expected students to reach a minimal level of photographic competence and to see that competence reflected in a portfolio of photographs that rose to the level of art. His students deserved to have these expectations detailed in the form of specific learning outcomes.
Thus it is, or should be, with all our courses. Everyone who would teach has a professional obligation to step back and to ask himself or herself two questions: What, at a minimum, do I want students to learn, and how will I determine whether they have learned it? Few of us would have a problem with this level of assessment, and most of us would hardly need to be prompted or coerced to adjust our methods should we find that students aren’t learning what we expect them to learn. Where we fall out, professors and professional accreditors, is over the extent to which we should document or even formalize this process.
I personally have heard a senior official at an accrediting agency say that “if what you are doing in the name of assessment isn’t really helping you, you’re doing it wrong.” I recommend that we take her at her word. In my experience -- first as a chair and later as a dean -- it is helpful for institutions to have course outlines that list the minimum essential learning outcomes and which suggest appropriate assessment methods for each course. It is helpful for faculty members and students to have syllabi that reflect the outcomes and assessment methods detailed in the corresponding course outlines. It is also helpful to have program-level objectives and to spell out where and how such objectives are met.
All these things are helpful and reasonable, and accrediting agencies should indeed be able to review them in gauging the effectiveness of a college or university. What is not helpful is the requirement to keep documenting the so-called “feedback loop” -- the curricular reforms undertaken as a result of the assessment process. The presumption, once again, would seem to be that no one’s curriculum is sound and that assessment must be a continuous process akin to painting a suspension bridge or a battleship. By the time the painters work their way from one end to the other, it is time to go back and begin again. “Out of the cradle, endlessly assessing,” Walt Whitman might sing if he were alive today.
Is it any wonder that we have difficulty inspiring more than grudging cooperation on the part of faculty? Other professionals are largely left to police themselves. Not so academics, at least not any longer. We are being pressured to remake ourselves along business lines. Students are now our customers, and the customer is always right. Colleges used to be predicated on the assumption that professors and other professionals have a larger frame of reference and are in a better position than students to design curricula and set requirements. I think it is time to reaffirm that principle; and, aside from requiring the “helpful” documents mentioned above, it is past time to allow professors to assess themselves.
Regarding the people who have thrown in their lot with the assessment movement, to each his or her own. Others, myself included, were first drawn to the academic profession because it alone seemed to offer an opportunity to spend a lifetime studying what we loved, and sharing that love with students, no matter how irrelevant that study might be to the world’s commerce. We believed that the ultimate end of what we would do is to inculcate both a sensibility and a standard of judgment that can indeed be assessed but not guaranteed or quantified, no matter how hard we try. And we believed that the greatest reward of the academic life is watching young minds open up to that world of ideas and possibilities we call liberal education. To my mind, it just doesn’t get any better than that.
Edward F. Palm
Edward F. Palm is dean of social sciences and humanities at Olympic College, in Bremerton, Wash.
College officials and members of the public are watching with intense interest -- and, in some quarters, trepidation -- the proceedings of the U.S. Secretary of Education's Commission on the Future of Higher Education. Given that interest, the following is a memorandum that the panel's chairman, Charles Miller, wrote to its members offering his thinking about one of its thorniest subjects: accountability. As always on Inside Higher Ed, comments are welcomed below.
To: Members, The Secretary of Education’s Commission on the Future of Higher Education
From: Charles Miller, Chairman
Dear Commission Members:
The following is a synopsis of several ongoing efforts, in support of the Commission, in one of our principal areas of focus, "Accountability." The statements and opinions presented in the memo are mine and are not intended to be final conclusions or recommendations, although there may be a developing consensus.
I would appreciate feedback, directly or through the staff, in any form that is most convenient. This memo will be made public in order to promote and continue an open dialogue on measuring institutional performance and student learning in higher education.
As a Commission, our discussions to date have shown a number of emerging demands on the higher education system, which require us to analyze, clarify and reframe the accountability discussion. Four key goals or guiding principles in this area are beginning to take shape.
First, more useful and relevant information is needed. The federal government currently collects a vast amount of information, but unfortunately policy makers, universities, students and taxpayers continue to lack key information to enable them to make informed decisions.
Second, we need to improve, and even fix, current accountability processes, such as accreditation, to ensure that our colleges and universities are providing the highest quality education to their students.
Third, we need to do a much better job of aligning our resources to our broad societal needs. In order to remain competitive, our system of higher education must provide a world-class education that prepares students to compete in a global knowledge economy.
And finally, we need to ensure, through access to sufficient information -- particularly in the area of student learning -- that the American public understands what it is getting for its investment in a college education.
Commission Meeting (12/6/05)
At our Nashville meeting, the Commission heard three presentations from a panel on “Accountability.” Panelists represented national, state and institutional perspectives. In the subsequent discussion, an informal consensus developed that there is a critical need for improved public information systems to measure and compare institutional performance and student learning in consumer-friendly formats, defining consumers broadly as students, families, taxpayers, policy makers and the general public.
Needs for a Modern University Education
The college education needed for the competitive, global environment in the future is far more than specific, factual knowledge; it is about capability and capacity to think and develop and continue to learn. An insightful quote from an educator describes the situation well:
“We are attempting to educate and prepare students (hire people in the workforce) today so that they are ready to solve future problems, not yet identified, using technologies not yet invented, based on scientific knowledge not yet discovered.”
--Professor Joseph Lagowski, University of Texas at Austin
Trends in Measuring Student Learning
There is gathering momentum for measuring through testing what students learn or what skills they acquire in college beyond a traditional certificate or degree.
Very recently, new testing instruments have been developed which measure an important set of skills to be acquired in college: critical thinking, analytic reasoning, problem solving, and written communications.
The Commission is reviewing promising new developments in the area of student testing, which indicate a significant improvement in measuring student learning and related institutional performance. Three independent efforts have shown promise:
A multi-year trial by the Rand Corporation, which included 122 higher education institutions, led to the development of a test measuring critical thinking, analytic reasoning and other skills. As a result of these efforts, a new entity called the Collegiate Learning Assessment has been formed by the researchers involved, and the tests will now be further developed and marketed widely.
A new test measuring college level reading, mathematics, writing and critical thinking has been developed by the Educational Testing Service and will begin to be marketed in January 2006. This test is designed for colleges to assess their general education outcomes, so the results may be used to improve the quality of instruction and learning.
The National Center for Public Policy and Higher Education developed a new program of testing student learning in five states, which has provided highly promising results and which suggests expansion of such efforts would be clearly feasible.
An evaluation of these new testing regimes provides evidence of a significant advancement in measuring student learning -- especially in measuring the attainment of skills most needed in the future.
Furthermore, new educational delivery models are being created, such as the Western Governors University, which uses a variety of built-in assessment techniques to determine the achievement of certain skills being taught, rather than hours-in-a-seat. These new models are valid alternatives to the older models of teaching and learning and may well prove to be superior for some teaching and learning objectives in terms of cost effectiveness.
There are constructive examples of leadership in higher education in addressing the issues of accountability and student learning, such as the excellent work by the Association of American Colleges and Universities.
The AAC&U has developed a unique and significant approach to accountability and learning assessment, discussed in two recent reports, “Our Students’ Best Work” (2004) and “Liberal Education Outcomes” (2005).
The AAC&U accountability model focuses on undergraduate liberal arts education and emphasizes learning outcomes. The primary purpose is to engage campuses in identifying the core elements of a quality liberal arts education experience and measuring students’ experience in achieving these goals -- core learning and skills that anyone with a liberal arts degree should have. AAC&U specifically does not endorse a single standardized test, but acknowledges that testing can be a useful part of the multiple measures recommended in their framework.
In this model, departments and faculty are expected to be given the primary responsibility to define and assess the outcomes of the liberal arts education experience.
Federal and State Leadership
The federal government currently collects a great deal of information from the higher education system. It may be time to re-examine what the government collects to make sure that it’s useful and helpful to the consumers of the system.
Many states are developing relevant state systems of accountability in order to measure the performance of public higher education institutions. In its recommendations about accountability in higher education, the State Higher Education Executive Officers group has endorsed a focus on learning assessment.
Institutional Performance Measurement
What is clearly lacking is a nationwide system for comparative performance purposes, using standard formats. Private ranking systems, such as the U.S. News and World Report “Best American Colleges” publications, use a limited set of data, which is not necessarily relevant for measuring institutional performance or providing the public with information needed to make critical decisions.
The Commission, with assistance of its staff and other advisors and consultants, is attempting to develop the framework for a viable database to measure institutional performance in a consumer-friendly, flexible format.
Historically, accreditation has been the nationally mandated mechanism to improve institutional quality and assure a basic level of accountability in higher education.
In the view of many, accreditation and related issues of articulation are in need of serious reform, especially a shift toward more outcomes-based approaches. Also in need of substantial improvement are the regional variability in standards, the independence of accreditation, its usefulness for consumers, and its response to new forms of delivery such as internet-based distance learning.
The Commission is reviewing the various practices of institutional and programmatic accreditation. A preliminary analysis will be presented and various possible policy recommendations will be developed.
My old friend Archilochus, the Greek lyric poet who has been resting comfortably since the Seventh Century B.C., has been getting a lot of rousing attention lately. And that’s a good thing considering what’s been happening recently in Washington, D.C.
A new federal commission formed by Education Secretary Margaret Spellings has been pushing the idea of holding colleges more accountable for the outcomes of their undergraduate education, which has prompted talk of a federally mandated assessment. I don’t know anything that would make it harder to improve student learning than a national or federal assessment. And that’s where Archilochus can help.
Years ago Sir Isaiah Berlin picked up the Greek poet’s famous aphorism, “The fox knows many things, but the hedgehog knows one big thing,” and used it as the basis of his famous essay “The Hedgehog and the Fox.” Now Philip Tetlock, in his new book, Expert Political Judgment: How Good Is It? (Princeton University Press, 2005), has classified pundits into two categories: Hedgehogs, who have a single big idea or explanation, and Foxes, who look for a lot of intersecting causes. (He found that, by and large, the Foxes do better at predicting what’s to come, except once in a while when the prickly Hedgehogs see something really important and don’t get distracted, no matter what.)
Most of us in academe are foxes, but I want to suggest that we think like hedgehogs for a while, and concentrate on one thing and one thing only -- student learning. Although we can’t ignore the political context, we shouldn’t do this in reaction to the perceived pressure from the federal commission. We should do it, instead, because it’s the one thing on which the flourishing of liberal education most depends right now. We need to do it for our students and for ourselves as educators.
When I became president of the Teagle Foundation two and a half years ago, I worried a lot about the alleged decline and fall of liberal education. The figures I studied showed a decreasing percentage of undergraduates majoring in the traditional disciplines of the liberal arts; some colleges that I visited, or whose leaders I met, seemed to be turning their backs on liberal education; short term marketing strategies seemed to be eclipsing long term educational values.
Recently, however, I’ve experienced another eclipse, one in which three tendencies I have been observing block out my old worries. The three trends are:
A shift in goals from content to cognition
The demand for accountability
A new knowledge base for teaching
None of these is an unambiguous Good Thing, and there are enough tricks and traps in each of these trends to challenge both foxes and hedgehogs. But in my view -- on balance -- the collision of these trends presents the opportunity to take liberal education to a new level.
It is now possible, in ways that were out of our reach just a few years ago, to teach better and greatly to invigorate student engagement and learning. We can do that, I am convinced, while recommitting ourselves and our institutions to the core educational values of liberal education.
This all comes with a big “IF.” We can reach that higher level only if we focus, focus, focus on student learning -- all of us, faculty, deans, presidents, foundation officers. We all have to become hedgehogs.
Let me explain why I feel so confident that if we focus in this way, liberal education can reach that new level of excellence. In my explanation I will say a few words about each of the three tendencies to which I just alluded, and then try to imagine what liberal education could be like if they are brought together in an integrated system.
1. First, “from content to cognition,” that is, a shift in the stated goals of liberal education from certain subject matter that every educated person should know to certain cognitive capacities that ought to be developed in all students. Over the past few decades, many colleges and universities have come to define their goals as the development of cognitive capacities such as analytical reasoning, critical thinking, clarity of written and oral expression, and moral reasoning. Over the same period the idea that all students should become acquainted with certain texts, topics, and aspects of human experience has pretty much disappeared from curricular thinking.
Curmudgeonly old classicist that I am, I find it hard to imagine a liberal education in which students do not meet Socrates and confront his insistence that the unexamined life is not worth living. Nor can I convince myself that these cognitive goals can be attained in total abstraction, without the specificity and challenge contributed by disciplinary knowledge. Content still matters.
But the shift from content to cognition does have one great benefit: It compels us to think hard about what we want students to have gained once they complete a course or a curriculum. It should make us be explicit about how each course, maybe each assignment, contributes to one cognitive goal or another. In educational jargon, it makes us more “intentional” and thereby much more likely to succeed.
2. Accountability. We are also witnessing a widening demand in many sectors of American society for greater accountability. We owe it all to our friends at Enron, and all the other wonderful playgrounds of corporate greed and corruption. But education is not going to escape the demand for accountability, nor will assessment be restricted to K-12 education. As my friend Steve Wheatley, of the American Council of Learned Societies, put it, “The train is a-comin’ and its name is assessment.”
More systematic assessment of the results of higher education is, as you well know, being demanded by accrediting agencies, governing boards, state legislators, and increasingly the general public. Now, with a federal commission on board the roar of the engine is getting louder and closer.
You and your colleagues may not like to see that train bearing down on your tranquil campus. And you may well share my anger if Congress tells engineers from the Department of Education to run the train. They tried that in K-12 education and I’m not sure whether the results are a disaster or a joke. The best defense is clearly to get out ahead and do assessment right, and do it now.
This top down pressure for assessment naturally provokes skepticism and resistance, especially from faculty members. What happens if we can reverse the direction and look at assessment from the ground up? Let me tell you a story. When the Teagle Foundation began to ask whether it should undertake some initiative in the assessment area, we convened one of our “Listenings,” bringing together for a few days faculty, administrators and experts in assessment to advise us. There was plenty of skepticism and some hostility. I began to think maybe this was not such a good idea.
But late in the gathering, two people stood up to speak from the floor. One said in effect, “As scholars we value knowledge. How as teachers can we reject something that might let us know more about our students’ learning?” Another speaker said, “Maybe we can teach better if we know more. It’s worth a try.” For me, and for others at that session, that turned the day. Now the Teagle Foundation has made faculty-led, ground-up assessment one of its top priorities. Nothing, I believe, has greater potential for invigorating student learning in the liberal arts.
All this is built around one essential point: We can teach better and students can learn better if their learning is systematically and appropriately assessed.
3. The third trend is the one that makes me confident that we have nothing to fear from properly crafted assessment. Today we know far more about how students learn and what works in teaching than we did just a few years ago. We know what works -- first-year seminars, inclusion of undergraduates in research projects, problem-based learning, collaborative projects, coordination of service learning, internships and overseas study with courses and curricula, lots of writing and speaking opportunities with prompt and thorough faculty feedback, capstone experiences in the senior year and so on. (See Section Six of Liberal Education Outcomes, a 2005 publication from the Association of American Colleges and Universities.)
These are not just bright ideas from educational theorists. They have been tested and usually rigorously evaluated. And although graduate schools keep it a well-hidden secret, the cat is now out of the bag. This new knowledge has been drawn together, concisely summarized, and made easily accessible in Derek Bok’s brand new book, Our Underachieving Colleges (Princeton University Press, 2006). Every professor should read this book: Its greatest merit is that Bok demolishes the excuses we academics have used to avoid change.
Let me give one example. My friend David Porter, former president of Skidmore College and now a classics professor at Williams College, defines a liberal education as “what you have learned once you have forgotten the facts.” How long would you guess it takes to forget those facts?
Bok has the answer: “… [T]he average student will be unable to recall most of the factual content of a typical lecture within fifteen minutes after the end of class. In contrast, interests, values and cognitive skills are all likely to last longer, as are concepts and knowledge that students have acquired … through their own mental efforts.”
Fifteen minutes! You might say, “We’ve known that, more or less, for a long time.” Then why is lecturing still the dominant mode of instruction in so many settings? Bok offers several answers, the most damaging of which is complacency. He points out, for example, that one poll of faculty members found that 90 percent thought they were “above average” teachers. Welcome to Lake Wobegon.
Can these three trends -- cognitive capacities replacing content, accountability, the new knowledge base for college teaching -- come together and reinforce one another? The key question is whether academic leaders will focus on this and make it happen.
Imagine what such convergence can do for an institution that sets clear, assessable goals for itself in the development of its students’ cognitive capacities. It doesn’t matter whether the institution is multibillionaire Harvard or a struggling college far from the River Charles: There’s no group of college students whose frontal lobes won’t benefit from some additional exercise.
The institution that I am imagining does some testing to establish a baseline and then looks at every aspect of student learning to see how each part can contribute to those goals. It finds out what its students need and what the Big Questions of value and meaning are that can invigorate their engagement with liberal education. It uses the new knowledge base to change some of its practices and try out new ideas. It searches out appropriate means of assessment; if NSSE, the National Survey of Student Engagement, or CLA, the Collegiate Learning Assessment, doesn’t seem quite right for its setting, there are others -- or, if need be, the institution develops its own.
But whatever means of assessment it chooses, it doesn’t let the results sit in the office of Institutional Research; it uses them in an iterative process, steadily ratcheting up its effectiveness. The students see this; they understand better why they are studying what might otherwise seem remote or irrelevant material. Their enthusiasm increases; they tell their friends and younger siblings. The director of admissions smiles somewhat more often. So do the fund raisers. The alumni and friends of the institution see what is happening; their pride makes them more generous to alma mater. Maybe eventually even U.S. News sees that something is happening, and it is not prestige, pecking order, or wealth. It’s called “student learning.”
This systematic, iterative process of change will do a lot for an institution, for its students and for its faculty. I bet it will make hedgehogs out of them -- focused on, excited by, renewed through their concern for student learning. Most of us went into college teaching for complex reasons, but one of them, I believe, was that we knew it would be a joy to help young people develop their mental capacities. It’s easy to forget that as we get older, to wander away, to end up forgetting that we have something to profess. But the satisfaction is waiting there where we suspected it was when we started -- in helping those students learn and grow.
Now, thanks to this convergence of changes, we can rediscover that satisfaction. We can teach better and students can learn better. That should make hedgehogs very happy indeed.
I hear someone muttering: “Not on my campus; my faculty will never buy into that kind of change.” Don’t be so sure. In my old job at the National Humanities Center, when we were developing programs to let new knowledge in the humanistic disciplines invigorate K-12 and college teaching, Richard Schramm, the talented designer of those programs, told me that he could not recall ever being turned down by an NHC fellow or former fellow when he asked them to help with this work. That matches what we are finding at the Teagle Foundation in developing our new College Community Connections program.
Scholars of great distinction have been willing to roll up their sleeves and pitch in, working with kids in disadvantaged neighborhoods in New York, where public schools are often part of the problem rather than part of the solution. These busy, much-sought-after academics were, I concluded, looking for something fresh, well designed, and capable of renewing their satisfaction in helping students learn. You may find that some of your colleagues are hungry and thirsty for renewal of this sort and that they are ready to try out new ways of invigorating student learning.
Every environment is different, but here’s a suggestion about how one might build momentum and consensus. Try this on your campus. Get your dean to call Princeton University Press and order copies of Derek Bok’s Our Underachieving Colleges for every departmental chair. Ask them to read it and discuss it with their colleagues and then to meet with you and let you know what the response is. If 413 pages or $29.95 is too much for already strained attention spans or budgets, print out a copy of this article and ask your faculty colleagues whether they agree or disagree. Let them rip it apart. Let them be as prickly as … as prickly as hedgehogs. They may well have a better idea than any of these. The important thing is to focus on that one crucial idea: We can teach better and students can learn better. The only question is how, and the only way to answer it is by being hedgehogs focused on that one crucial thing: improving student learning.
W. Robert Connor
W. Robert Connor is president of the Teagle Foundation. This essay was adapted from a speech given to the American Conference of Academic Deans in January.
Accountability, not access, has been the central concern of this Congress in its fitful efforts to reauthorize the Higher Education Act. The House of Representatives has especially shown itself deaf to constructive arguments for improving access to higher education for the next generation of young Americans, and dizzy about what sensible accountability measures should look like. The version of the legislation approved last week by House members has merit only because it lacks some of the strange and ugly accountability provisions proposed during the past three years, though a few vestiges of these bad ideas remain.
Why should colleges and universities be subject to any scheme of accountability? Because the Higher Education Act authorizes billions of dollars in grants and loans for lower-income students as it aims to make college accessible for all. This aid goes directly to students selecting from among a very broad array of institutions: private, public and proprietary; small and large; residential, commuter and on-line. Not unreasonably, the federal government wants to ensure that the resources being provided are used only at credible institutions. Hence, its insistence on accountability.
The financial limits on student aid were largely set in February when Congress hacked $12 billion from loan funds available to many of those same low-income students. With that action, the federal government shifted even more of the burden of access onto families and institutions of higher education, despite knowing that the next generation of college aspirants will be both significantly more numerous and significantly less affluent.
Now the Congress is at work on the legislation’s accountability provisions, and despite allocating far fewer dollars, members of both chambers are considering still more intrusive forms of accountability. They appear to have been guided by no defensible conception of what appropriate accountability should look like.
Colleges and universities serve an especially important role for the nation -- a public purpose -- and they do so whether they are public or private or proprietary in status. The nation has a keen interest in their success. And in an era of heightened economic competition from the European Union, China, India and elsewhere, never has that interest been stronger.
In parallel with other kinds of institutions that serve the public interest, colleges and universities should make themselves publicly accountable for their performance in four dimensions: Are they honest, safe, fair, and effective? These are legitimate questions we ask about a wide variety of businesses: food and drug companies, banks, insurance and investment firms, nursing homes and hospitals, and many more.
Are they honest? Is it possible to read the financial accounts of colleges and universities to see that they conduct their business affairs honestly and transparently? Do they use the funds they receive from the federal government for the intended purposes?
Are they safe? Colleges and universities can be intense environments. Especially with regard to residential colleges and universities, do students face unacceptable risks due to fire, crime, sexual harassment or other preventable hazards?
Are they fair? Do colleges and universities make their programs genuinely available to all, without discrimination on grounds irrelevant to their missions? Given this nation’s checkered history with regard to race, sex, and disability, this is a kind of scrutiny that should be faced by any public-serving institution.
Existing federal laws already, and quite appropriately, govern all of these issues. For the most part, accountability in each area can best be accomplished by asking colleges and universities to disclose information about their performance in a common and, one hopes, simple manner. No doubt the measures governing this required disclosure could be improved. But these three questions have not been the focus of debate during this reauthorization.
On the other hand, Congress has devoted considerable attention to a question that, while completely legitimate, has been poorly understood:
Are they effective? Do students who enroll really learn what colleges and universities claim to teach? This question should certainly be front and center in the debate over accountability.
Institutions of higher education deserve sharp criticism for past failure to design and carry out measures of effectiveness. Broadly speaking, the accreditation process has been our approach to asking and answering this question. For too long, accreditation focused on whether a college or university had adequate resources to accomplish its mission. This was later supplanted by a focus on whether an institution had appropriate processes. But over the past decade, accreditation has finally come to focus on what it should -- assessment of learning.
An appropriate approach to the question of effectiveness must be multiple, independent and professionally grounded. We need multiple measures of whether students are learning because of the wide variety of missions in American higher education; institutions do not all have identical purposes. Whichever standards a college or university chooses to demonstrate effectiveness, they should not be a creation of the institution itself -- nor of government officials -- but rather the independent development of professional educators joined in widely recognized and accepted associations.
Earlham College has used the National Survey of Student Engagement since its inception. We have made significant use of its findings both for re-accreditation and for improvement of what we do. We are also now using the Collegiate Learning Assessment. I believe these are the best new measures of effectiveness, but we need many more such instruments so that colleges and universities can choose the ones most appropriate to assessing fulfillment of learning in the scope of their particular missions.
Until the 11th hour, the House version of the Higher Education Act contained a provision that would have allowed states to become accreditors, a role they are ill equipped to play. Happily, that provision now has been eliminated. Meanwhile, however, the Commission on the Future of Higher Education, appointed by U.S. Secretary of Education Margaret Spellings, is flirting with the idea of proposing a mandatory one-size-fits-all national test.
Much of the drama of the accountability debate has focused on a fifth and inappropriate issue: affordability. Again until the 11th hour, the House version of the bill contained price control provisions. While these largely have been removed, the bill still requires some institutions that increase their price more rapidly than inflation to appoint a special committee that must include outsiders to review their finances. This is an inappropriate intrusion on autonomy, especially for private institutions.
Why is affordability an inappropriate aspect of accountability? Because in the United States we look to the market to “get the prices right,” not heavy-handed regulation or accountability provisions. Any student looking to attend a college or university has thousands of choices available to him or her at a range of tuition rates. Most have dozens of choices within close commuting distance. There is plenty of competition among higher education institutions.
Let’s keep the accountability debate focused on these four key issues: honesty, safety, fairness, and effectiveness. With regard to the last and most important of these, let’s put our best efforts into developing multiple, independent, professionally grounded measures. And let’s get back to the other key issue, which is: How do we provide access to higher education for the next generation of Americans?
Douglas C. Bennett is president and professor of politics at Earlham College, in Indiana.
The details of accreditation are so arcane and complex that the entire topic is confusing and controversial throughout all of education. When we're immersed in the details of accreditation, it's often exceedingly difficult to see the forest for all the trees. But at the core, accreditation is a very simple concept: Accreditation is a process of self-regulation that exists solely to serve the public interest.
When I say "public interest" I mean the interests of three overlapping but identifiably distinct groups:
The interests of members of the general public in their own personal health, safety, and economic well-being.
The interests of government and elected officials at all levels in assuring wise and effective use of taxpayer dollars.
The consumer interests of students and their families in "getting what they pay for" -- certifications in their chosen fields that genuinely qualify them for employment and for practicing their professions competently and honestly.
Saying that a particular program or degree or institution is "accredited" should and must convey to these publics strong assurance that it meets acceptable minimum standards of quality and integrity.
Aside from the public interest, what other interests are there? Well, there are the interests of the accredited institutions, the interests of existing professional practitioners and their industry groups, and the interests of the accrediting organizations themselves. There is no automatic assurance that these latter interests are always and everywhere consistent with the public interest, so self-regulation (accreditation) necessarily involves consistent and vigilant management of this inherent conflict of interest. It is an inherent conflict because the general public, the government, and the students do not have the technical expertise to set curricular and other educational standards and monitor compliance.
I assume it is generally agreed that it is inconceivable to have anyone other than medical professionals defining the necessary elements and performance standards of medical education. Does the American Medical Association do a good job of protecting the public from fraud and incompetence? Yes, for the most part. But you don't need to talk to very many people to hear cynicism. It is the worst behaviors and the lowest standards of professional competence that create this cynicism, and it taints all doctors as well as the AMA. That is why our standards at the bottom or threshold level are so very important. I submit that the bedrock principle and the highest priority for everyone involved in higher education (the institutions, the professional groups, the accrediting organizations, and those who recognize or certify the accreditors) should be and must be to manage these conflicts of interest in ways that are transparent, and that place the public interest ahead of our own several self-interests.
If I could draw an analogy: Think about why the names Enron and WorldCom are so familiar. Publicly owned corporations must open their books to independent accounting firms that are expected to examine them and issue reports assuring the public that acceptable financial reporting and business practices are being followed, and warning the public when they are not. But there is an inherent conflict of interest in this process: The companies being audited are the customers of the accounting firms. This presents an apparent disincentive to look too closely or report too diligently lest the accounting firms lose clients to other firms who are more willing to apply loose standards. Obviously, this conflict was not well-managed by the accounting industry and, as a result, one of the world's largest and previously most respected accounting firms no longer exists, and all U.S. corporations (honest and otherwise) are saddled with an extraordinarily complex and expensive set of new government regulations.
If we don't manage our conflicts well, rest assured one or more of our publics -- the students, the government, or the public at large -- will rise up and take care of it for us in ways that will be expensive, burdensome, poorly designed, and counterproductive. That would be in no one's best interest -- ironically, not even in the public's best interest.
I must acknowledge that our current system of self-regulation is, by and large, working very well, just as most accounting firms and most companies are, and always have been, honest. Some of us, especially in the public sector of higher education, wonder how much more accountability we could possibly stand, and what, if any, value-added there could possibly be if more were imposed on us. At the University of Wisconsin at Madison, for example, we offer 409 differently named degrees -- 136 majors at the bachelor's level, 156 at the master's level, 109 at the Ph.D. level, and 8 professional degrees, 7 of which carry the term "doctor," a point I will return to later.
By Board of Regents policy, every one of our degree programs gets a thorough review at least every 10 years, so we are conducting about 40 program reviews every year, and one full cycle of reviews involves just about every academic official on campus. These internal reviews carry negligible out-of-pocket cost, but conservatively consume about 20 FTE of people's time annually. We are also required by the legislature to report annually on a long list of performance indicators that includes things like time-to-degree, access and affordability, and graduation rates, among many other things. In addition, about 100 of our degree programs are accredited by 32 different special accreditors and, of course, the entire university is accredited by the North Central Association. One complete cycle of these accreditations costs about $5,000,000 and the equivalent of 35 FTE of year-round effort. (Annualized, it is about $850,000 and 6 FTE).
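The annualized figures above can be sanity-checked with simple division. A minimal sketch, assuming (my inference, not stated in the essay) that the annual numbers are the per-cycle numbers averaged over roughly a six-year accreditation cycle:

```python
# Back-of-the-envelope check of the accreditation cost figures.
# The ~6-year divisor is inferred from the numbers themselves,
# not stated in the original text.
cycle_cost_dollars = 5_000_000  # one complete cycle of all accreditations
cycle_fte = 35                  # full-time-equivalent staff-years per cycle
annual_cost_dollars = 850_000   # stated annualized cost
annual_fte = 6                  # stated annualized effort

# Both ratios should imply roughly the same cycle length if the
# annualized figures are consistent with the per-cycle figures.
implied_years_by_cost = cycle_cost_dollars / annual_cost_dollars  # ~5.9
implied_years_by_fte = cycle_fte / annual_fte                     # ~5.8
```

Both ratios land near six years, so the stated annualized figures are internally consistent.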
I mention the costs, not to complain about these reviews as expensive burdens, but to emphasize that we put a great deal of real money and real effort into self-examination and accountability. Far from being a burden, accreditation and self-study reviews form the central core of our institutional strategic planning and quality improvement programs. The major two-year-long self-study we do for our North Central accreditation, in particular, forms the entire basis for the campus strategic plan, priorities, goals, and quality improvements we adopt for the next 10-year period. As such, it is the most important and valuable exercise we undertake in any 10-year period, and we honestly and sincerely attribute most of the improvements we've made in recent decades to things learned in these intensive self-studies. I think all public universities and established private universities could give similar testimony. Having said all this, let me turn, now, to some of the reasons for the growing public cries for better accountability, and some of the problems I think we need to address in our system of self-regulation:
1. Even in the best-performing universities, there is still considerable room for improvement. To mention one high-visibility area, I think it is nothing short of scandalous that, in 2006, the average six-year graduation rate is only around 50 percent nationwide. Either we are doing a disservice to under-prepared or unqualified students by admitting them in the first place, or we are failing perfectly capable students by not giving them the advising and other help they need to graduate. Either way, we are wasting money and human capital inexcusably. Even at universities like mine, where the graduation rate is now 80 percent, if there are peer institutions doing better (and there are), then 80 percent should be considered unacceptably low.
Now, if we were pressured to increase that number quickly to 85 percent or 90 percent and threatened with severe sanctions for failing to do so, we could meet any established goal by lowering our graduation standards, or by fudging our numbers in plausibly defensible ways, or by doing any number of other things that would satisfy our self-interest but fail the public-interest test. Who's to stop us? Well, I submit these are exactly the sorts of conflicts of interest the accrediting organizations should be expected to monitor and resolve in the public interest. The public interest is in a better-educated public, not in superficial compliance with some particular standard. The public relies on accreditors to keep their eye on the right ball. More generally, accrediting organizations are in an excellent -- maybe even unique -- position to identify best practices and transfer them from one college to another, improving our entire system of higher education.
2. A second set of problems involves accreditation of substandard or even fraudulent schools and programs. Newspapers have been full of reports of such institutions, many of them operating for years, without necessarily providing a good education to their students. For years, I have listened to the complaints of our deans of education, business, allied health, and some other areas, that "fly-by-night" schools or "motel schools" were competing unfairly with them or giving absurd amounts of credit for impossibly small amounts of work or academic content.
I must admit that I usually dismissed these complaints lightly, telling them they should pay more attention to the quality and value of their own programs, and let free enterprise and competition drive out the low value products. I felt they (our deans) had a conflict of interest, and they wanted someone to enforce a monopoly for them. More recently I have concluded that our deans were, in fact, the only ones paying attention to the public interest. Our schools of education (not the motel schools) are the ones being held responsible for the quality of our K-12 teachers, and they are tired of being told they are turning out an inferior product when shabby but accredited programs are an increasingly large part of the problem. The public school teachers, themselves, have a conflict of interest: They are required to earn continuing education credits from accredited programs, and it is in their interest to satisfy this requirement at the lowest possible cost to themselves. So the quality of the cheapest or quickest credit is of great importance in the public interest, and the only safeguard for that public interest is the vigilance of the accrediting organizations. I lay this problem squarely at the feet of the U.S. Department of Education, the state departments of public instruction, and the education accreditors. They all need to clean up their acts in the public interest.
3. Cost of education. There is currently lots of hand-wringing on the topic of the "cost of education." What is really meant by the hand-wringers is not the cost of education, but the price of education to the students and their families: the fact that tuition rates are inflating at a far faster rate than the CPI. I've made a very important distinction here: the distinction between cost and price. If education were a manufactured product sold to a homogeneous class of customers in a competitive market with multiple providers, then it would be reasonable to assume there is a simple cause-and-effect relationship between cost and price. But that is not the case.
Very few students pay tuition that covers the actual cost of their education. Most students pay far less than the true cost, and some pay far more. In aggregate, the difference is made up by donors (endowment income) at private colleges, and by state taxpayers at public institutions. Since public colleges enroll more than 75 percent of all students, the overall picture -- the price of higher education to students and their parents -- is heavily influenced by what's going on in the public sector, and the picture is not pretty.
In virtually every state in the country, governors and legislators are providing a smaller share of operating funds for higher education than they used to, and partially offsetting the decrease by super-inflationary increases in tuition. They tell themselves this is not hurting higher education because, after all, the resulting tuitions are still much lower than the advertised tuitions at comparable private colleges, so their public institutions are still a "bargain." This view represents a fundamental misunderstanding of the nature of the "private model." Private institutions do not substitute high tuition for state support. They substitute gifts and endowment income for state support, and discount their tuitions to the tune of nearly 50 percent on the average.
There is a very good reason why there are so few large private universities: very few schools can amass the endowments required to make the private model work. Of the 100 largest postsecondary schools in the country, 92 are public, and ALL of the 25 largest institutions are public. There is no way the private model can be scaled up to educate a significant fraction of all the high school graduates in the country. Substituting privately financed endowments for public taxpayer support nationwide would require aggregate endowments totaling $1.3 trillion, or about six times more than the total of all current endowments of public and private colleges and universities in the country. This simply is not going to happen.
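The scaling claim above implies a figure the essay does not state outright: the current aggregate endowment total. A minimal sketch of that inference (the ~$217 billion result is my arithmetic, not a number from the essay):

```python
# The essay says scaling the private model nationwide would require
# $1.3 trillion in endowments, about six times the current total.
# Working backward gives the implied current aggregate endowment.
required_endowment_dollars = 1.3e12  # stated requirement
stated_multiple = 6                  # "about six times" current total

# Implied total of all current public and private endowments.
implied_current_total = required_endowment_dollars / stated_multiple
# roughly $2.2e11, i.e. on the order of $217 billion
```

In other words, the country would need to raise roughly five times the current total over again, which supports the essay's conclusion that the substitution "is not going to happen."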
So, to the extent that states are pursuing an impossible dream, they are endangering the health and future of our entire system of higher education. Whose responsibility is it to red-flag this situation? Who is responsible for looking out for the overall health of a large, decentralized, diverse public/private system of higher education? When public (or, for that matter, private) colleges point out the hazards of our current trends, they are vulnerable to charges of self-interest. We are accused of waste and inefficiency, and told that we simply need to tighten our belts and become more businesslike.
I don't know of a single university president who wouldn't welcome additional suggestions for genuinely useful efficiencies that have not already been implemented. Is there a legitimate role here for the U.S. Department of Education and the accrediting organizations? To the extent that accrediting organizations take this seriously and use their vast databases of practices and indicators to disseminate best practices nationwide, we would all be better off. Accreditors should be applauding institutions that are on the leading edge of efficiency, and helping, warning, and eventually penalizing waste and inefficiency, all in the spirit of protecting the public interest. Instead, I'm afraid many accreditors are pushing us in entirely different directions.
4. Another category of problem area is what I will call "protectionism." I have already said there is an inherent conflict of interest in that professional experts must be relied upon to define and control access to the professions. This means that the special accreditors have a special burden to demonstrate that their accreditation standards serve the best interests of the public, and not just the interests of the accredited programs or the profession. Chancellors and provosts get more complaints and see more abuses in this area of accreditation than any other. I will start with a hypothetical and then mention only a small sampling of examples.
In Wisconsin, we are under public and legislative pressure to produce more college-educated citizens -- more bachelor's, master's, and doctoral degrees. Suppose the University of Wisconsin announced next week that any students who completed our 60 credits, or two years, of general education would be awarded a bachelor's degree; that completing two more years in a major would result in a master's degree; and that one year of graduate school would produce a degree entitling the graduate to be called "doctor."
I hope and assume this would be met with outrage. I hope and assume it would result in an uproar among alumni who felt their degrees had been cheapened. I hope and assume it would result in legislative intervention. I even hope and assume it would result in loss of all our accreditations.
That's an extreme example, and most of what I hope and assume would probably happen. But we are already seeing this very phenomenon of degree inflation, and it is being caused by the professions themselves! This is particularly problematic in the health professions, where, it seems, everyone wants to be called "doctor." I have no problem whatsoever with the professional societies and their accreditors telling us what a graduate must know to practice safely and professionally. I have a big problem, though, when they hand us what amounts to a master's-level curriculum and tell us the resulting degree must be called a "doctor of X." This is a transparently self-interested ploy by the profession, and I see no conceivable argument that it is in the public interest. All it does is further confuse an already confusing array of degree names and titles, to no useful purpose.
I asked some of my fellow presidents and chancellors to send me their favorite examples, and I got far too many to include here. Interestingly, and tellingly, most people begged me to hide their institutional identity if I used their examples. I'll let you decide why they might fear being identified. Here are a few:
A business accreditor insisting that no other business-related courses may be offered by any other school or college on campus.
An allied health program at the bachelor's level (offered at a branch campus of an integrated system) that had to be discontinued because the accreditors decreed they could only offer programs at the bachelor's level if they also offered programs at the master's level at the same campus.
An architecture program that was praised for the strength and quality of its curriculum, its graduates, and its placements, and then had its accreditation period halved for a number of trivial resource items such as the sizes of their brand-new drafting tables that had been selected by their star faculty.
Some years ago, the American Bar Association was sanctioned by the U.S. Department of Justice for using accreditation in repeated attempts to drive up faculty salaries in law schools.
The Committee on Institutional Cooperation (the Big Ten universities plus the University of Chicago) publishes a brochure suggesting reasonable standards for special accreditation. The suggested standards are common-sense things that any reasonable person would agree protect the public interest while not unreasonably constraining the institution or holding accredited status hostage for increased resources or status when the existing resources and status are clearly adequate. They focus on results rather than inputs or pathways to those results. Similar guidelines have been adopted by other associations of universities.
So, when I was provost, I routinely handed copies of that brochure to site-visit teams when they started their reviews, saying "Please don't tell me this program needs more faculty, more space, higher salaries, or a different reporting line. Just tell me whether or not they are doing a good job and producing exemplary graduates." Inevitably, or at least more often than not, at the exit interview, I heard "This program has a decades-long record of outstanding performance and exemplary graduates, but their continued accreditation is endangered unless they get (some combination of) more faculty, higher salaries, a higher S&E budget, larger offices, more space in general, greater independence, a different reporting line, their own library, a very specific degree for the chair or director, tenure for (whomever), ... etc." Often, the program was put on some form of notice such as interim review with a return visit to check for such improvements.
Aside: It is perfectly natural for the faculty members of site-visit teams to feel a special bond with the colleagues whose program they are evaluating. It is natural for the evaluators to want to "help" these colleagues in what they perceive as the zero-sum resource struggles that occur everywhere. It is also natural for them to want to enhance the status of programs associated with their field. But, resource considerations should be irrelevant to accreditation status unless the resources being provided are demonstrably below the minimum needed to deliver high-quality education and outcomes. Similarly, "status" considerations are out of place unless the current status or reporting line demonstrably harms the students or the public interest. It is the responsibility of the professional staffs of accrediting organizations to provide faculty evaluators with warnings about conflict of interest and guidelines on ethical conduct of the evaluation.
Let me end with one of the most egregious examples I have yet encountered, and a current one from the University of Wisconsin. Our medical school spent more than a year in serious introspection and strategic planning, with special attention on its role in addressing the national crisis in health care costs. What topic could be more front-and-center in the public interest? The medical school faculty and administration concluded (among other things) that it is in the public interest for medical schools to pay more attention to public health and prevention, and try to reduce the need for acute and expensive interventions after preventable illnesses have occurred. To signal this changed emphasis, they voted to change the name of the school from "The School of Medicine" to "The School of Medicine and Public Health." They simultaneously developed a formal public health track for their M.D. curriculum.
I am told that we cannot have this school accredited as a school of public health because the accreditation organization insists that schools of public health must be headed by deans who are distinct from, and at the same organizational level as, deans of medicine. In particular, deans of public health may not be subordinate to, nor the same as, deans of medicine. This, despite the fact that the whole future of medicine may evolve in the direction of public health emphasis, and this may well be in the best interests of the country. Ironically, to the best of my knowledge, our current dean of medicine is the only M.D. on our faculty who holds a commission as an officer in the Public Health Service.
I have used some extreme examples and maybe some extreme characterizations intentionally. Often, important points of principle are best illuminated by extreme cases and examples. If there are any readers who are not offended by anything here, then I have failed. I hope everyone was offended by at least one thing. I also hope I am provably wrong about some things I've said. But, most of all, I hope to stimulate a vigorous debate on this vitally important topic.
John D. Wiley is chancellor of the University of Wisconsin at Madison. This essay is a revised version of a talk Wiley gave at the annual meeting of the Council on Higher Education Accreditation.