Accreditation


Do Majors Matter?

Do majors matter? Since students typically spend more time in their area of concentration than anywhere else in the curriculum, majors ought to live up to their name and produce really major benefits. But do they?

Anthony P. Carnevale, director of Georgetown's Center on Education and the Workforce, has recently provided a clear answer. Majors matter a lot -- in dollars and cents. In a report entitled "What's It Worth?," he shows how greatly salaries vary by major, from $120,000 on average for petroleum engineering majors down to $29,000 for counseling psychology majors.

But what if one asked whether majors make differing contributions to students’ cognitive development? The answer is once again yes, but the picture looks very different from the one in the Georgetown study.

A few years ago, Paul Sotherland, a biologist at Kalamazoo College in Michigan, asked an unnecessary question and got not an answer but a tantalizing set of new questions. It was unnecessary because most experts in higher education already knew the answer, or thought they did: as far as higher-order cognitive skills are concerned, it doesn’t matter what you teach; it’s how you teach it.

What Sotherland found challenged that conventional wisdom and raised new questions about the role of majors in liberal education. Here's what he did. Kalamazoo had been using the Collegiate Learning Assessment (CLA) to track its students' progress in critical thinking and analytical reasoning. After a few years it became clear that Kalamazoo students were making impressive gains from their first to their senior years. Sotherland wondered if those gains were across the board or varied from field to field.

So he and his associates tabulated their CLA results for each of the five divisions of the college’s curriculum -- fine arts, modern and classical languages and literatures, humanities, natural sciences and mathematics, and social sciences.

Since gains in CLA scores tend to follow entering ACT or SAT scores, they "corrected" the raw data to see what gains might be attributed to instruction. They found significant differences among the divisions, with the largest gains (over 200 points) in foreign languages, about half that much in the social sciences, still less in the fine arts and in the humanities, and least of all in the natural sciences.
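Neither the article nor the Kalamazoo figures spell out exactly how this "correction" was done. The sketch below shows one common way such an adjustment works: fit expected gains as a simple linear function of entering scores and treat the residual as the "corrected" gain. Everything here is illustrative -- the numbers are invented, and the use of SAT alone and of a plain linear fit are assumptions, not the CLA's actual methodology.

    # Minimal sketch: adjusting raw CLA gains for entering ability.
    # Assumption: a simple linear regression of gain on entering SAT;
    # the data are invented for illustration only.
    import numpy as np

    # (entering SAT, raw first-year-to-senior CLA gain) for a handful of students
    sat = np.array([1050, 1120, 1200, 1280, 1350, 1400], dtype=float)
    gain = np.array([140, 155, 170, 150, 190, 210], dtype=float)

    # Fit expected gain as a linear function of entering SAT
    slope, intercept = np.polyfit(sat, gain, 1)
    expected = slope * sat + intercept

    # "Corrected" gain = actual gain minus what entering scores alone would predict
    corrected = gain - expected

    for s, g, c in zip(sat, gain, corrected):
        print(f"SAT {s:.0f}: raw gain {g:.0f}, corrected gain {c:+.1f}")

    # Averaging the corrected gains within each division (or major) yields the
    # kind of adjusted, between-field comparison the article describes.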

How was this to be explained? Could reading Proust somehow hone critical thinking more than working in the lab? (Maybe so.)

But the sample size was small and came from one exceptional institution, one where students in all divisions did better than their SAT scores would lead one to expect, and where the average corrected gain on CLA is 1.5 standard deviations, well above the national average. (Perhaps Inside Higher Ed should sponsor the “Kalamazoo Challenge,” to see if other institutions can show even better results in their CLA data.)

The obvious next step was to ask Roger Benjamin of the Collegiate Learning Assessment if his associates would crunch some numbers for me. They obliged, with figures showing changes over four years for both parts of the CLA -- the performance task and analytical writing. Once again, the figures were corrected on the basis of entering ACT or SAT scores.

The gains came in clusters. At the top was sociology, with an average gain of just over 0.6 standard deviations. Then came multi- and interdisciplinary studies, foreign languages, physical education, math, and business with gains of 0.50 SDs or more.

The large middle cluster included (in descending order) education, health-related fields, computer and information sciences, history, psychology, law enforcement, English, political science, biological sciences, and liberal and general studies.

Behind them, with gains between 0.30 and 0.49 SDs, came communications (speech, journalism, television, radio etc.), physical sciences, nursing, engineering, and economics. The smallest gain (less than 0.01 standard deviations) was in architecture.

The list seemed counterintuitive to me when I first studied it, just as the Kalamazoo data had. In each case, ostensibly rigorous disciplines, including most of the STEM fields (the exception was math), had disappointing results. Once again the foreign languages shone, while most other humanistic disciplines cohabited with unfamiliar bedfellows such as computer science and law enforcement. Social scientific fields scattered widely, from sociology at the very top to economics close to the bottom.

When one looks at these data, one thing is immediately clear. The fields that show the greatest gains in critical thinking are not the fields that produce the highest salaries for their graduates. On the contrary, engineers may show only small gains in critical thinking, but they often command salaries of over $100,000. Economics majors may lag as well, but not at salary time, when, according to "What's It Worth?," they enjoy median salaries of $70,000. At the other end, majors in sociology and in French, German and other commonly taught foreign languages may show impressive gains, but they have to be content with median salaries of $45,000.

But what do these data tell us about educational practice? It seems unlikely that one subject matter taken by itself has a near-magical power to result in significant cognitive gains while another does nothing of the sort. If that were the case, why do business majors show so much more progress than economics majors? Is there something in the content of a physical education major (0.50 SDs) that makes it inherently more powerful than a major in one of the physical sciences (0.34 SDs)? I doubt it.

Since part of the CLA is based on essays students write during the exam, perhaps the natural science majors simply had not written enough to do really well on the test. (That’s the usual first reaction, I find, to unexpected assessment results -- "there must be something wrong with the test.") That was, however, at best a partial explanation, since it didn’t account for the differences among the other fields. English majors, for example, probably write a lot of papers, but their gains were no greater than those of students in computer sciences or health-related fields.

Another possibility is that certain fields attract students who are ready to hone their critical thinking skills. If so, it would be important to identify what it is in each of those fields that attracts such students. Are there, for example, "signature pedagogies" that have this effect? If so, what are they and how can their effects be maximized? Or is it that certain pedagogical practices, whether or not they attract highly motivated students, increase critical thinking capacities -- and other capacities as well? For example, the Wabash National Study has identified four clusters of practices that increase student engagement and learning in many areas (good teaching and high-quality interactions with faculty, academic challenge and high expectations, diversity experiences, and higher-order, integrative, and reflective learning).

Some fields, moreover, may encourage students to “broaden out” -- potentially important for the development of critical thinking capacities as one Kalamazoo study suggests. Other disciplines may discourage such intellectual range.

One other hypothesis, I believe, also deserves closer consideration. The CLA is a test of post-formal reasoning. That is, it does not seek to find out whether students know the one right answer to the problems it sets; on the contrary, it rewards the ability to consider the merits of alternative approaches. That suggests that students who develop the habit of regularly articulating and weighing alternative viewpoints, values and outcomes may have an advantage when taking the CLA exam, and quite possibly in real-life settings as well.

Since the study of foreign languages constantly requires the consideration of such alternatives, it may provide a particularly promising venue for the development of these capacities. If so, foreign languages have a special claim on attention and resources even in a time of deep budgetary cuts. Their "signature pedagogies," moreover, may provide useful models for other disciplines.

These varying interpretations of the CLA data open up many possibilities for improving students' critical thinking. But will these possibilities be fully utilized without new incentives? The current salary structure sends a bad signal when it puts the money where students make very small gains in critical thinking and gives scant reward to fields that are high performers in this respect. (For example, according to the College & University Professional Association for Human Resources, full professors in engineering average over $114,000, while those in foreign languages average just over $85,000.)

Isn’t it time to shift some resources to encourage experimentation in all fields to develop the cognitive as well as the purely financial benefits of the major?


W. Robert Connor is senior advisor to the Teagle Foundation.

Why States Shouldn't Accredit

In my work as Oregon’s college evaluator, I am often asked why state approval is not "as good as accreditation" or "equivalent to accreditation."

We may be about to find out, to our sorrow: One version of the Higher Education Act reauthorization legislation moving through Congress quietly allows states to become federally recognized accreditors. A senior official in the U.S. Department of Education has confirmed that one part of the legislation would eliminate an existing provision that says state agencies can be recognized as federally approved accreditors only if they were recognized by the education secretary before October 1, 1991. Only one, the New York State Board of Regents, met the grandfather provision. By striking the grandfather provision, any state agency would be eligible to seek recognition.

If such a provision becomes law, we will see exactly why some states refuse to recognize degrees issued under the authority of other states: It is quite possible to be state-approved and a low-quality degree provider. Which states approve poor institutions to issue degrees?

Here are the Seven Sorry Sisters: Alabama (split authority for assessing and recognizing degrees), Hawaii (poor standards, excellent enforcement of what little there is), Idaho (poor standards, split authority), Mississippi (poor standards, political interference), Missouri (poor standards, political interference), New Mexico (grandfathered some mystery degree suppliers) and of course the now infamous Wyoming (poor standards, political indifference or active support of poor schools).

Wyoming considers degree mills and other bottom-feeders to be a source of economic development. You'd think that high oil prices would relieve the state of any need to support degree mills. Even the Japanese television network NHK sent a crew to Wyoming to warn Japanese citizens about the cluster of supposed colleges there: Does the state care so little about foreign trade that it is untroubled that 10 percent of the households in Japan saw that program? You'd think that Vice President Dick Cheney and U.S. Senator Mike Enzi, who now chairs the committee responsible for education, would care more about the appalling reputation of their home state. Where is Alan Simpson when we need him?

In the world of college evaluation, these seven state names ring out like George Carlin’s “Seven Words You Can’t Say On Television,” and those of us responsible for safeguarding the quality of degrees in other states often apply some of those words to so-called “colleges” approved to operate in these states -- so-called “colleges” like Breyer State University in Alabama and Idaho (which “State” does this for-profit represent, anyway?).

There are some dishonorable mentions, too, such as California, where the standards are not bad but enforcement has been lax and the process awash in well-heeled lobbyists.  The new director of California’s approval agency, Barbara Ward, seems much tougher than recent placeholders -- trust someone trained as a nurse to carry a big needle and be prepared to use it.

The obverse of this coin is that in some states, regulatory standards are higher than the standards of national accreditors, as Oregon discovered when we came across an accredited college with two senior officials sporting fake degrees.  The national accreditors, the Accrediting Commission of Career Schools and Colleges of Technology and the Accrediting Bureau of Health Education Schools, had not noticed this until we mentioned it to them. What exactly do they review, if they completely ignore people’s qualifications?

The notion that membership in an accrediting association is voluntary is, of course, one of the polite fictions that higher education officials sometimes say out loud when they are too far from most listeners to inspire a round of laughter. In fact, losing accreditation is not far removed from a death sentence for almost any college, because without accreditation, students are not eligible for federal financial aid, and without such aid, most of them can't go to school -- at least to that school.

For this reason, if Congress ever decoupled aid eligibility from accreditation by one of the existing accreditors -- for example, by allowing state governments to become accreditors -- the “national” accreditors of schools would dry up and blow away by dawn the next day: They serve no purpose except as trade associations and milking machines for federal aid dollars.

The Libertarian View of Degrees

One view of the purpose and function of college degrees suggests that the government need not concern itself with whether a degree is issued by an accredited college or even a real college. This might be considered the classic libertarian view: that employers, clients and other people should come to their own conclusions, based on their own research, regarding whether a credential called a “degree” by the entity that issued (or printed) it is appropriate for a particular job or need.  This view is universally propounded by the owners of degree mills, who become wealthy by selling degrees to people who think they can get away with using them this way.

The libertarian view is tempting, but it presupposes a capacity and an inclination to evaluate that most employers have always lacked and always will; the average private citizen is even further removed from that ability and inclination. Who will actually do the research that the hypothetical perfect employer should do?

Consider the complexities of the U.S. accreditation system and the proliferation of fake accreditors, complete with names nearly identical to real ones (there were at least two fake DETCs, imitating the real Distance Education and Training Council, in 2005), phone numbers, carefully falsified lists of approved schools, Web sites showing buildings far from where the owners had ever been, and other accoutrements.

To the morass of bogus accreditors in the U.S., add the world. Hundreds of jurisdictions, mostly not English-speaking, issuing a bewildering array of credentials under regimens not quite like American postsecondary education. Add a layer of corruption in some states and countries, a genial indifference in others, a nearly universal lack of enforcement capacity and you have a recipe for academic goulash that even governments are hard-pressed to render into proper compartments.  In the past 10 days my office has worked with national officials in England, Sweden, The Netherlands, Canada and Australia to sort out suspicious degree validations. Very few businesses and almost no private citizens are capable of doing this without an exhausting allocation of time and resources. It does not and will not happen.

Should state governments accredit colleges?

State governments, not accreditors or the federal government, are the best potential guarantors of degree program quality at all but the major research universities, but only if they take their duty seriously, set and maintain high standards and keep politicians from yanking on the strings of approval as happens routinely in some states. Today, fewer than a dozen states have truly solid standards, most are mediocre and several, including the Seven Sorry Sisters, are quite poor.

If Congress is serious about allowing states to become accreditors, there must be a reason. I can think of at least two. First, such an action would kill off many existing accreditors without having their work added to the U.S. Department of Education (which no one in their right mind, Democrat, Republican or Martian, wants to enlarge). This would count as devolutionary federalism (acceptable to both parties under the right conditions).

The second reason is the one that is never spoken aloud. There will be enormous, irresistible pressure on many state governments to accredit small religious schools that could never get accredited even by specialized religious accreditors today. The potential bounty in financial aid dollars for all of those church-basement colleges is incalculable.

Remember that another provision of the same proposed statute would prohibit even regionally accredited universities from screening out transfer course work based on the nature of the accreditor.  Follow the bread crumbs and the net result will be a huge bubble of low-end courses being hosed through the academic pipeline, with the current Congressional leadership cranking the nozzle.

The possibility of such an outcome should provide impetus to the discussions that have gone on for many years regarding the need for some uniformity (presumably at a level higher than that of the Seven Sorry Sister states) in standards for state approval of colleges. We need a “model code” for state college approvals, something that leading states can agree to (with interstate recognition of degrees) and that states with poor standards can aspire to.

The universe of 50 state laws, some excellent and some abysmal, allows poor schools to venue-shop and then claim that their state approval makes them good schools when they are little better than diploma mills. We must do better.

Should states accredit colleges? Only if they can do it well. Today’s record is mixed, and Congress should not give states the power to accredit (or allow the Department of Education to give states the power) until they have proven that their own houses are in order. That day has not yet come.


Alan L. Contreras has been administrator of the Oregon Office of Degree Authorization, a unit of the Oregon Student Assistance Commission, since 1999. His views do not necessarily represent those of the commission.

Classify Programs, Not Colleges

The new Carnegie classifications have emerged from gestation, showing a great deal of thought and energy, which is too bad. Once again we are classifying the boxes and not the fruit.

The education establishment works very hard to say that the classifications are not intended to represent a pecking order among institutions, but the rest of the world instantly uses them that way. The gold rush mentality causes perfectly respectable regional colleges (e.g., Western Oregon University, in my neck of the woods) to wriggle and stretch through all manner of political hoops to become "Universities," even faux flagships such as the magically relabeled Missouri State University (another perfectly respectable regional college all tarted up with nowhere to go).

There are other classification systems. Although the nation's system of college accreditation and state approval is not exactly a college classification system, in some ways it functions as one. There are overlapping hierarchies of academic seraphim, cherubim and what Jack Aubrey, in one of Patrick O'Brian's novels, calls "ordinary foremast angels." Regional accreditors, national accreditors, state agencies and licensing boards all watch with proprietary care the shifting Cassini divisions between their roles and jurisdictions.

It is time to recognize that these boxes, too, are not that different in their basic descriptions except at the lowest levels, and that what matters is the quality of programs colleges contain as related to their mission.  

All college degree programs are not created equal, nor are they equal today. This may be obvious to my friends who hold senior faculty positions at the University of Oregon, Illinois, Northwestern and elsewhere in the upper strata of research institutions. These, after all, are major research universities, formally authorized to condescend by their role as the top layer in the Carnegie classification system. Likewise, my friends who hold positions at Washington & Jefferson, Reed, Davidson and other fine liberal arts colleges can nod politely from their elfin perch in the canopy layer, content to consort with fine young minds.  

The distinction is less obvious to those who work in and attend the great bulk of American colleges and universities, but it is nonetheless true. All colleges glaze the clay that they are given. The clay is largely formed by the time it reaches college, but the nature of both the formed clay and the available glaze differs widely, and society expects the resultant china to perform differently under different conditions. Let us recognize this reality and stop comparing unlike things.

Meandering through the pages of any college catalog looking at degree programs is much like walking the streets of an old western ghost town (or a movie set of one). All of the programs are excellent, leading their field and cutting edge -- apparently all the faculty trained at Lake Wobegon U. The main drag of programs consists of an impressive array of two-story buildings, all of similar appearance on the front. Some have two stories of solid building behind them, full of rooms and people. Others are mainly false fronts, behind which awaits what amounts to a conveyor belt: “this way to the Egress.”  

It is time to stop classifying colleges and start classifying degree programs. Today this is done on an occasional basis for certain doctoral programs by the National Research Council, but other programs are largely ignored except by specialized accreditors in certain fields. All college degrees issued in the United States should be formally classified according to the nature of the work necessary to obtain them. Classification should be mandatory and no college degree should be exempt from it. 

Such a classification scheme would allow students, employers and all other interested people to decide whether a particular college degree is what they want, either as a learning experience or in an employee, co-worker or colleague. There are other classification schemes already in existence, but they do not provide the right kind of information needed by most students, potential students and employers.

Each degree-granting program operating legally in the United States should be classified according to the strength of its program as determined by experts in its field.  This system should not be applied to colleges, only to degree programs individually, because there is so much variation among programs at each school except the very best and the abysmal.

Note that this system says nothing about admission standards, only about program quality. There is no reason for a program to adjust its quality and expectations based on who enrolls in it: programs should decide what level they should most sensibly be at and stay there. Students will, for the most part, self-select based on program type just as they do now. In this system, programs would be classified as follows, as determined by peers in the field, with U and G representing undergraduate and graduate programs (an illustrative sketch of the scheme follows the list):

Honors (U). The best undergraduate programs, maintaining the highest expectations of students, and using the most difficult and complex curricula. Intended to provide superlative undergraduate learning for its own value and, secondarily, to prepare students to study in Research programs.

Research (G). The highest level graduate programs, intended to train professional researchers and faculty for colleges and universities, exclusive of licensed professional fields. There is no such thing as a "research institution"; there are only research-level programs, and it is time for the higher education establishment to admit this.

Professional (U, G). Programs that train students to practice in licensed professions.  Programs of this nature can most effectively be evaluated by professionals in the field, in part using professional licensure rates and reputational surveys within licensed professions. The Carnegie system already has a similar category.

Standard (U, G). Programs designed for a wide variety of students, but not as challenging as Honors programs, with less ambitious expectations. These programs are not designed to prepare students to obtain doctoral degrees in Research programs, although some top students may succeed in such programs.

Basic (U, G). Programs that meet the basic expectations of a college-level degree program but do not meet the requirements for a Standard designation owing to some academic deficiencies.

Nonstandard (U, G). Programs that do not meet the basic expectations of a college-level degree program, or which decline to be evaluated.

New. Designation of “New” can be applied to any program, only at its own request, during its first five years of operation, as a qualifier for any other classification.  Few programs show their true colors right out of the box.
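For readers who think in data terms, here is a minimal, purely illustrative sketch of how such program-level classifications might be recorded in a registry. The categories are taken from the proposal above; the institution, program name and field layout are hypothetical, not part of any existing system.

    # Illustrative sketch only: the proposed program-level classifications as data.
    from dataclasses import dataclass
    from enum import Enum

    class Level(Enum):
        UNDERGRADUATE = "U"
        GRADUATE = "G"

    class Classification(Enum):
        HONORS = "Honors"            # undergraduate only
        RESEARCH = "Research"        # graduate only
        PROFESSIONAL = "Professional"
        STANDARD = "Standard"
        BASIC = "Basic"
        NONSTANDARD = "Nonstandard"

    @dataclass
    class DegreeProgram:
        institution: str
        name: str
        level: Level
        classification: Classification
        new_program: bool = False    # the optional "New" qualifier, first five years

    # A hypothetical entry in such a registry:
    program = DegreeProgram(
        institution="Example State University",
        name="B.S. in Biology",
        level=Level.UNDERGRADUATE,
        classification=Classification.STANDARD,
    )
    print(f"{program.name} at {program.institution}: "
          f"{program.classification.value} ({program.level.value})")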

In another of Patrick O'Brian's novels, Stephen Maturin reminds us that "the kinds of happiness cannot be compared." In some ways, neither can the kinds of degrees. The first necessary step, however, is to recognize that differences exist and to acknowledge them, rather than pretending that all regionally accredited colleges produce the same kind of degree-earning experience, or that degrees issued for one purpose are comparable to those issued for another. This is fiction.

Let us stop using institutional classifications of dubious meaning and start classifying academic programs using a system that is honest, based on evaluation by faculty and helps people understand college degree programs, as well as pointing out which emperors are naked and which paupers wear cloth of gold.


Alan L. Contreras has been administrator of the Oregon Office of Degree Authorization, a unit of the Oregon Student Assistance Commission, since 1999. His views do not necessarily represent those of the commission.

No Professor Left Behind

At the annual meeting of one of the regional accrediting agencies a few years ago, I wandered into the strangest session I’ve witnessed in any academic gathering. The first presenter, a young woman, reported on a meeting she had attended that fall in an idyllic setting. She had, she said, been privileged to spend three days “doing nothing but talking assessment” with three of the leading people in the field, all of whom she named and one of whom was on this panel with her. “It just doesn’t get any better than that!” she proclaimed. I kept waiting for her to pass on some of the wisdom and practical advice she had garnered at this meeting, but it didn’t seem to be that kind of presentation.

The title of the next panel I chose suggested that I would finally learn what accrediting agencies meant by “creating a culture of assessment.” This group of presenters, four in all, reenacted the puppet show they claimed to have used to get professors on their campus interested in assessment. The late Jim Henson, I suspect, would have advised against giving up their day jobs.  

And thus it was with all the panels I tried to attend. I learned nothing about what to assess or how to assess it. Instead, I seemed to have wandered into a kind of New Age revival at which the already converted, the true believers, were testifying about how great it was to have been washed in the data and how to spread the good news among non-believers on their campus.

Since that time, I've examined several successful accreditation self-studies, and I've talked to vice presidents, deans, and faculty members, but I'm still not sure what a "culture of assessment" is. As nearly as I can determine, once a given institution has arrived at a state of profound insecurity and perpetual self-scrutiny, it has created a "culture of assessment." The self-criticism and mutual accusation sessions favored by Communist hardliners come to mind, as does a passage from a Creedence Clearwater Revival song: "Whenever I ask, how much should I give? The only answer is more, more!"

Most of the faculty resistance we face in trying to meet the mandates of the assessment movement, it seems to me, stems from a single issue: professors feel professionally distrusted and demeaned. The much-touted shift in focus from teaching to student learning at the heart of the assessment movement is grounded in the presupposition that professors have been serving their own ends and not meeting the needs of students. Some fall into that category, but whatever damage they do is greatly overstated, and there is indeed a legitimate place in academe for those professors who are not for the masses. A certain degree of quirkiness and glorious irrelevance were once considered par for the course, and students used to be expected to take some responsibility for their own educations.

Clearly, from what we are hearing about the new federal panel studying colleges, the U.S. Department of Education believes that higher education is too important to be left to academics. What we are really seeing is the re-emergence of the anti-intellectualism endemic to American culture and a corresponding redefinition of higher education in terms of immediately marketable preparation for specific jobs or careers. The irony is that the political party that would get big government off our backs has made an exception of academe.  

This is not to suggest, of course, that everything we do in the name of assessment is bad or that we don’t have an obligation to determine that our instruction is effective and relevant.  At the meeting of the National Association of Schools of Art and Design, I heard a story that illustrates how the academy got into this fix. It seems an accreditor once asked an art faculty member what his learning outcomes were for the photography course he was teaching that semester. The faculty member replied that he had no learning outcomes because he was trying to turn students into artists and not photographers. When asked then how he knew when his students had become artists, he replied, “I just know.”

Perhaps he did indeed “just know.” One of the most troubling aspects of the assessment movement, to my mind, is the tendency to dismiss the larger, slippery issues of sense and sensibility and to measure educational effectiveness only in terms of hard data, the pedestrian issues we can quantify. But, by the same token, every photographer must master the technical competencies of photography and learn certain aesthetic principles before he or she can employ the medium to create art. The photography professor in question was being disingenuous. He no doubt expected students to reach a minimal level of photographic competence and to see that competence reflected in a portfolio of photographs that rose to the level of art. His students deserved to have these expectations detailed in the form of specific learning outcomes.

Thus it is, or should be, with all our courses. Everyone who would teach has a professional obligation to step back and to ask himself or herself two questions: What, at a minimum, do I want students to learn, and how will I determine whether they have learned it? Few of us would have a problem with this level of assessment, and most of us would hardly need to be prompted or coerced to adjust our methods should we find that students aren’t learning what we expect them to learn. Where we fall out, professors and professional accreditors, is over the extent to which we should document or even formalize this process.

I personally have heard a senior official at an accrediting agency say that "if what you are doing in the name of assessment isn't really helping you, you're doing it wrong." I recommend that we take her at her word. In my experience -- first as a chair and later as a dean -- it is helpful for institutions to have course outlines that list the minimum essential learning outcomes and that suggest appropriate assessment methods for each course. It is helpful for faculty members and students to have syllabi that reflect the outcomes and assessment methods detailed in the corresponding course outlines. It is also helpful to have program-level objectives and to spell out where and how such objectives are met.

All these things are helpful and reasonable, and accrediting agencies should indeed be able to review them in gauging the effectiveness of a college or university. What is not helpful is the requirement to keep documenting the so-called “feedback loop” -- the curricular reforms undertaken as a result of the assessment process. The presumption, once again, would seem to be that no one’s curriculum is sound and that assessment must be a continuous process akin to painting a suspension bridge or a battleship. By the time the painters work their way from one end to the other, it is time to go back and begin again. “Out of the cradle, endlessly assessing,” Walt Whitman might sing if he were alive today.

Is it any wonder that we have difficulty inspiring more than grudging cooperation on the part of faculty? Other professionals are largely left to police themselves. Not so academics, at least not any longer. We are being pressured to remake ourselves along business lines. Students are now our customers, and the customer is always right. Colleges used to be predicated on the assumption that professors and other professionals have a larger frame of reference and are in a better position than students to design curricula and set requirements. I think it is time to reaffirm that principle; and, aside from requiring the “helpful” documents mentioned above, it is past time to allow professors to assess themselves.

Regarding the people who have thrown in their lot with the assessment movement, to each his or her own. Others, myself included, were first drawn to the academic profession because it alone seemed to offer an opportunity to spend a lifetime studying what we loved, and sharing that love with students, no matter how irrelevant that study might be to the world’s commerce. We believed that the ultimate end of what we would do is to inculcate both a sensibility and a standard of judgment that can indeed be assessed but not guaranteed or quantified, no matter how hard we try. And we believed that the greatest reward of the academic life is watching young minds open up to that world of ideas and possibilities we call liberal education. To my mind, it just doesn’t get any better than that.


Edward F. Palm is dean of social sciences and humanities at Olympic College, in Bremerton, Wash.

Memo From the Chairman

College officials and members of the public are watching with intense interest -- and, in some quarters, trepidation -- the proceedings of the U.S. Secretary of Education's Commission on the Future of Higher Education. Given that interest, the following is a memorandum that the panel's chairman, Charles Miller, wrote to its members offering his thinking about one of its thorniest subjects: accountability.

      

To: Members, The Secretary of Education’s Commission on the Future of Higher Education

From: Charles Miller, Chairman   

Dear Commission Members:

The following is a synopsis of several ongoing efforts, in support of the Commission, in one of our principal areas of focus, "Accountability." The statements and opinions presented in the memo are mine and are not intended to be final conclusions or recommendations, although there may be a developing consensus.
   
I would appreciate feedback, directly or through the staff, in any form that is most convenient. This memo will be made public in order to promote and continue an open dialogue on measuring institutional performance and student learning in higher education.
 

Overview

As a Commission, our discussions to date have shown a number of emerging demands on the higher education system, which require us to analyze, clarify and reframe the accountability discussion. Four key goals or guiding principles in this area are beginning to take shape. 
 
First, more useful and relevant information is needed. The federal government currently collects a vast amount of information, but unfortunately policy makers, universities, students and taxpayers continue to lack key information to enable them to make informed decisions.
 
Second, we need to improve, and even fix, current accountability processes, such as accreditation, to ensure that our colleges and universities are providing the highest quality education to their students. 
 
Third, we need to do a much better job of aligning our resources to our broad societal needs. In order to remain competitive, our system of higher education must provide a world-class education that prepares students to compete in a global knowledge economy.  
 
And finally, we need to ensure that the American public understands, through access to sufficient information, particularly in the area of student learning, what it is getting for its investment in a college education.
 

Commission Meeting (12/6/05)

At our Nashville meeting, the Commission heard three presentations from a panel on "Accountability." Panelists represented the national, state and institutional perspectives, and in the subsequent discussion an informal consensus developed that there is a critical need for improved public information systems to measure and compare institutional performance and student learning in consumer-friendly formats, defining consumers broadly as students, families, taxpayers, policy makers and the general public.

 

Needs for a Modern University Education

The college education needed for the competitive, global environment of the future is about far more than specific, factual knowledge; it is about the capability and capacity to think, to develop and to continue to learn. An insightful quote from an educator describes the situation well:

“We are attempting to educate and prepare students (hire people in the workforce) today so that they are ready to solve future problems, not yet identified, using technologies not yet invented, based on scientific knowledge not yet discovered.”    

--Professor Joseph Lagowski, University of Texas at Austin

 

Trends in Measuring Student Learning

There is gathering momentum for measuring through testing what students learn or what skills they acquire in college beyond a traditional certificate or degree.

Very recently, new testing instruments have been developed which measure an important set of skills to be acquired in college: critical thinking, analytic reasoning, problem solving, and written communications.

The Commission is reviewing promising new developments in the area of student testing, which indicate a significant improvement in measuring student learning and related institutional performance. Three independent efforts have shown promise:

  • A multi-year trial by the Rand Corporation, which included 122 higher education institutions, led to the development of a test measuring critical thinking, analytic reasoning and other skills. As a result of these efforts, a new entity called the Collegiate Learning Assessment has been formed by the researchers involved, and the tests will now be further developed and marketed widely.
  • A new test measuring college level reading, mathematics, writing and critical thinking has been developed by the Educational Testing Service and will begin to be marketed in January 2006. This test is designed for colleges to assess their general education outcomes, so the results may be used to improve the quality of instruction and learning.
  • The National Center for Public Policy and Higher Education developed a new program of testing student learning in five states, which has provided highly promising results and which suggests expansion of such efforts would be clearly feasible.

An evaluation of these new testing regimes provides evidence of a significant advancement in measuring student learning -- especially in measuring the attainment of skills most needed in the future. 
   
Furthermore, new educational delivery models are being created, such as the Western Governors University, which uses a variety of built-in assessment techniques to determine the achievement of certain skills being taught, rather than hours-in-a-seat. These new models are valid alternatives to the older models of teaching and learning and may well prove to be superior for some teaching and learning objectives in terms of cost effectiveness.

 

Institutional Leadership

There are constructive examples of leadership in higher education in addressing the issues of accountability and student learning, such as the excellent work by the Association of American Colleges and Universities.

The AAC&U has developed a unique and significant approach to accountability and learning assessment, discussed in two recent reports, “Our Students’ Best Work” (2004) and “Liberal Education Outcomes” (2005).

The AAC&U accountability model focuses on undergraduate liberal arts education and emphasizes learning outcomes. The primary purpose is to engage campuses in identifying the core elements of a quality liberal arts education experience and measuring students’ experience in achieving these goals -- core learning and skills that anyone with a liberal arts degree should have. AAC&U specifically does not endorse a single standardized test, but acknowledges that testing can be a useful part of the multiple measures recommended in their framework.

In this model, departments and faculty are expected to be given the primary responsibility to define and assess the outcomes of the liberal arts education experience.

Federal and State Leadership    

The federal government currently collects a great deal of information from the higher education system. It may be time to re-examine what the government collects to make sure that it’s useful and helpful to the consumers of the system.

Many states are developing relevant state systems of accountability in order to measure the performance of public higher education institutions. In its recommendations about accountability in higher education, the State Higher Education Executive Officers group has endorsed a focus on learning assessment.

Institutional Performance Measurement

What is clearly lacking is a nationwide system for comparative performance purposes, using standard formats. Private ranking systems, such as the U.S. News and World Report "America's Best Colleges" publications, use a limited set of data, which is not necessarily relevant for measuring institutional performance or providing the public with information needed to make critical decisions.

The Commission, with assistance of its staff and other advisors and consultants, is attempting to develop the framework for a viable database to measure institutional performance in a consumer-friendly, flexible format.

Accreditation

Historically, accreditation has been the nationally mandated mechanism to improve institutional quality and assure a basic level of accountability in higher education. 

Accreditation and related issues of articulation are, in the view of many, in need of serious reform, especially through more outcomes-based approaches. Also in need of attention are regional variability in standards, the independence of accreditation, its usefulness for consumers, and its response to new forms of delivery such as Internet-based distance learning.

The Commission is reviewing the various practices of institutional and programmatic accreditation. A preliminary analysis will be presented and various possible policy recommendations will be developed.


Deaf and Dizzy Lawmakers

Accountability, not access, has been the central concern of this Congress in its fitful efforts to reauthorize the Higher Education Act. The House of Representatives has especially shown itself deaf to constructive arguments for improving access to higher education for the next generation of young Americans, and dizzy about what sensible accountability measures should look like. The version of the legislation approved last week by House members has merit only because it lacks some of the strange and ugly accountability provisions proposed during the past three years, though a few vestiges of these bad ideas remain.

Why should colleges and universities be subject to any scheme of accountability? Because the Higher Education Act authorizes billions of dollars in grants and loans for lower-income students as it aims to make college accessible for all. This aid goes directly to students selecting from among a very broad array of institutions: private, public and proprietary; small and large; residential, commuter and on-line. Not unreasonably, the federal government wants to ensure that the resources being provided are used only at credible institutions. Hence, its insistence on accountability.  

The financial limits on student aid were largely set in February when Congress hacked $12 billion from loan funds available to many of those same low-income students. With that action, the federal government shifted even more of the burden of access onto families and institutions of higher education, despite knowing that the next generation of college aspirants will be both significantly more numerous and significantly less affluent.

Now the Congress is at work on the legislation's accountability provisions, and despite allocating far fewer dollars, members of both chambers are considering still more intrusive forms of accountability. They appear to have been guided by no defensible conception of what appropriate accountability looks like.

Colleges and universities serve an especially important role for the nation -- a public purpose -- and they do so whether they are public or private or proprietary in status. The nation has a keen interest in their success. And in an era of heightened economic competition from the European Union, China, India and elsewhere, never has that interest been stronger.

In parallel with other kinds of institutions that serve the public interest, colleges and universities should make themselves publicly accountable for their performance in four dimensions: Are they honest, safe, fair, and effective? These are legitimate questions we ask about a wide variety of businesses: food and drug companies, banks, insurance and investment firms, nursing homes and hospitals, and many more.

Are they honest? Is it possible to read the financial accounts of colleges and universities to see that they conduct their business affairs honestly and transparently? Do they use the funds they receive from the federal government for the intended purposes?

Are they safe? Colleges and universities can be intense environments. Especially with regard to residential colleges and universities, do students face unacceptable risks due to fire, crime, sexual harassment or other preventable hazards?  

Are they fair? Do colleges and universities make their programs genuinely available to all, without discrimination on grounds irrelevant to their missions? Given this nation’s checkered history with regard to race, sex, and disability, this is a kind of scrutiny that should be faced by any public-serving institution.

Existing federal laws quite appropriately govern measures dealing with all of these issues already. For the most part, accountability in each area can best be accomplished by asking colleges and universities to disclose information about their performance in a common and, hopefully, simple manner. No doubt measures for dealing with this required disclosure could be improved. But these three questions have not been the focus of debate during this reauthorization.

On the other hand, Congress has devoted considerable attention to a question that, while completely legitimate, has been poorly understood:

Are they effective? Do students who enroll really learn what colleges and universities claim to teach? This question should certainly be front and center in the debate over accountability.    

Institutions of higher education deserve sharp criticism for past failure to design and carry out measures of effectiveness. Broadly speaking, the accreditation process has been our approach to asking and answering this question. For too long, accreditation focused on whether a college or university had adequate resources to accomplish its mission. This was later supplanted by a focus on whether an institution had appropriate processes. But over the past decade, accreditation has finally come to focus on what it should -- assessment of learning.  

An appropriate approach to the question of effectiveness must be multiple, independent and professionally grounded. We need multiple measures of whether students are learning because of the wide variety of missions in American higher education; institutions do not all have identical purposes. Whichever standards a college or university chooses to demonstrate effectiveness, they should not be a creation of the institution itself -- nor of government officials -- but rather the independent development of professional educators joined in widely recognized and accepted associations.

Earlham College has used the National Survey of Student Engagement since its inception. We have made significant use of its findings both for re-accreditation and for improvement of what we do. We are also now using the Collegiate Learning Assessment. I believe these are the best new measures of effectiveness, but we need many more such instruments so that colleges and universities can choose the ones most appropriate to assessing fulfillment of learning in the scope of their particular missions.

Until the 11th hour, the House version of the Higher Education Act contained a provision that would have allowed states to become accreditors, a role they are ill equipped to play. Happily, that provision now has been eliminated.  Meanwhile, however, the Commission on the Future of Higher Education, appointed by U.S. Secretary of Education Margaret Spellings, is flirting with the idea of proposing a mandatory one-size-fits-all national test.  

Much of the drama of the accountability debate has focused on a fifth and inappropriate issue: affordability. Again until the 11th hour, the House version of the bill contained price control provisions. While these largely have been removed, the bill still requires some institutions that increase their price more rapidly than inflation to appoint a special committee that must include outsiders to review their finances. This is an inappropriate intrusion on autonomy, especially for private institutions.  

Why is affordability an inappropriate aspect of accountability? Because in the United States we look to the market to "get the prices right," not to heavy-handed regulation or accountability provisions. Any student looking to attend a college or university has thousands of choices available to him or her at a range of tuition rates. Most have dozens of choices within close commuting distance. There is plenty of competition among higher education institutions.

Let’s keep the accountability debate focused on these four key issues: honesty, safety, fairness, and effectiveness. With regard to the last and most important of these, let’s put our best efforts into developing multiple, independent, professionally grounded measures. And let’s get back to the other key issue, which is: How do we provide access to higher education for the next generation of Americans?     


Douglas C. Bennett is president and professor of politics at Earlham College, in Indiana.

Conflicting Interests

The details of accreditation are so arcane and complex that the entire topic is confusing and controversial throughout all of education. When we're immersed in the details of accreditation, it's often exceedingly difficult to see the forest for all the trees. But at the core, accreditation is a very simple concept: Accreditation is a process of self-regulation that exists solely to serve the public interest.

When I say "public interest" I mean the interests of three overlapping but identifiably distinct groups:

  • The interests of members of the general public in their own personal health, safety, and economic well-being.
  • The interests of government and elected officials at all levels in assuring wise and effective use of taxpayer dollars.
  • The consumer interests of students and their families in "getting what they pay for" -- certifications in their chosen fields that genuinely qualify them for employment and for practicing their professions competently and honestly.

Saying that a particular program or degree or institution is "accredited" should and must convey to these publics strong assurance that it meets acceptable minimum standards of quality and integrity.

Aside from the public interest, what other interests are there? Well, there are the interests of the accredited institutions, the interests of existing professional practitioners and their industry groups, and the interests of the accrediting organizations themselves. There is no automatic assurance that these latter interests are always and everywhere consistent with the public interest, so self-regulation (accreditation) necessarily involves consistent and vigilant management of this inherent conflict of interest. It is an inherent conflict because the general public, the government, and the students do not have the technical expertise to set curricular and other educational standards and monitor compliance.

I assume it is generally agreed that it is inconceivable to have anyone other than medical professionals defining the necessary elements and performance standards of medical education. Does the American Medical Association do a good job of protecting the public from fraud and incompetence? Yes, for the most part. But you don't need to talk to very many people to hear cynicism. It is the worst behaviors and the lowest standards of professional competence that create this cynicism, and that taints all doctors as well as the AMA. That is why our standards at the bottom or threshold level are so very important.

I submit that the bedrock principle and the highest priority for everyone involved in higher education (the institutions, the professional groups, the accrediting organizations, and those who recognize or certify the accreditors) should be and must be to manage these conflicts of interest in ways that are transparent, and that place the public interest ahead of our own several self-interests.

If I could draw an analogy: Think about why the names Enron and WorldCom are so familiar. Publicly owned corporations must open their books to independent accounting firms that are expected to examine them and issue reports assuring the public that acceptable financial reporting and business practices are being followed, and warning the public when they are not. But there is an inherent conflict of interest in this process: The companies being audited are the customers of the accounting firms. This presents an apparent disincentive to look too closely or report too diligently lest the accounting firms lose clients to other firms who are more willing to apply loose standards. Obviously, this conflict was not well-managed by the accounting industry and, as a result, one of the world's largest and previously most respected accounting firms no longer exists, and all U.S. corporations (honest and otherwise) are saddled with an extraordinarily complex and expensive set of new government regulations.

If we don't manage our conflicts well, rest assured one or more of our publics -- the students, the government, or the public at large -- will rise up and take care of it for us in ways that will be expensive, burdensome, poorly designed, and counterproductive. That would be in no one's best interest -- ironically, not even in the public's best interest.

I must acknowledge that our current system of self-regulation is, by and large, working very well, just as most accounting firms and most companies are, and always have been, honest. Some of us, especially in the public sector of higher education, wonder how much more accountability we could possibly stand, and what, if any, value-added there could possibly be if more were imposed on us. At the University of Wisconsin at Madison, for example, we offer 409 differently named degrees -- 136 majors at the bachelor's level, 156 at the master's level, 109 at the Ph.D. level, and 8 professional degrees, 7 of which carry the term "doctor," a point I will return to later.

By Board of Regents policy, every one of our degree programs gets a thorough review at least every 10 years, so we are conducting about 40 program reviews every year, and one full cycle of reviews involves just about every academic official on campus. These internal reviews carry negligible out-of-pocket cost, but conservatively consume about 20 FTE of people's time annually. We are also required by the legislature to report annually on a long list of performance indicators that includes things like time-to-degree, access and affordability, and graduation rates, among many other things. In addition, about 100 of our degree programs are accredited by 32 different special accreditors and, of course, the entire university is accredited by the North Central Association. One complete cycle of these accreditations costs about $5,000,000 and the equivalent of 35 FTE of year-round effort. (Annualized, it is about $850,000 and 6 FTE).
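
A minimal back-of-envelope sketch (in Python, purely for illustration) shows how the per-cycle figures translate into the annualized ones. The roughly six-year average cycle length is an assumption chosen only because it reconciles the per-cycle and annualized numbers above; it is not a reported figure.

    # Back-of-envelope annualization of the accreditation figures cited above.
    # ASSUMPTION: an average accreditation cycle of about 6 years -- not a reported
    # figure, just roughly what reconciles the per-cycle and annualized numbers.

    COST_PER_CYCLE_USD = 5_000_000   # one complete cycle of all accreditations
    EFFORT_PER_CYCLE_FTE = 35        # year-round FTE effort for one complete cycle
    ASSUMED_AVG_CYCLE_YEARS = 6      # illustrative assumption

    annual_cost = COST_PER_CYCLE_USD / ASSUMED_AVG_CYCLE_YEARS
    annual_effort = EFFORT_PER_CYCLE_FTE / ASSUMED_AVG_CYCLE_YEARS

    print(f"Annualized cost:   ${annual_cost:,.0f}")      # about $833,000 (essay cites ~$850,000)
    print(f"Annualized effort: {annual_effort:.1f} FTE")  # about 5.8 FTE (essay cites ~6 FTE)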

I mention the costs, not to complain about these reviews as expensive burdens, but to emphasize that we put a great deal of real money and real effort into self-examination and accountability. Far from being a burden, accreditation and self-study reviews form the central core of our institutional strategic planning and quality improvement programs. The major two-year-long self-study we do for our North Central accreditation, in particular, forms the entire basis for the campus strategic plan, priorities, goals, and quality improvements we adopt for the next 10-year period. As such, it is the most important and valuable exercise we undertake in any 10-year period, and we honestly and sincerely attribute most of the improvements we've made in recent decades to things learned in these intensive self-studies. I think all public universities and established private universities could give similar testimony. Having said all this, let me turn, now, to some of the reasons for the growing public cries for better accountability, and some of the problems I think we need to address in our system of self-regulation:

1. Even in the best-performing universities, there is still considerable room for improvement. To mention one high-visibility area, I think it is nothing short of scandalous that, in 2006, the average six-year graduation rate is only around 50 percent nationwide. Either we are doing a disservice to under-prepared or unqualified students by admitting them in the first place, or we are failing perfectly capable students by not giving them the advising and other help they need to graduate. Either way, we are wasting money and human capital inexcusably. Even at universities like mine, where the graduation rate is now 80 percent, if there are peer institutions doing better (and there are), then 80 percent should be considered unacceptably low.

Now, if we were pressured to increase that number quickly to 85 percent or 90 percent and threatened with severe sanctions for failing to do so, we could meet any established goal by lowering our graduation standards, or by fudging our numbers in plausibly defensible ways, or by doing any number of other things that would satisfy our self-interest but fail the public-interest test. Who's to stop us? Well, I submit these are exactly the sorts of conflicts of interest the accrediting organizations should be expected to monitor and resolve in the public interest. The public interest is in a better-educated public, not in superficial compliance with some particular standard. The public relies on accreditors to keep their eye on the right ball. More generally, accrediting organizations are in an excellent -- maybe even unique -- position to identify best practices and transfer them from one college to another, improving our entire system of higher education.

2. A second set of problems involves accreditation of substandard or even fraudulent schools and programs. Newspapers have been full of reports of such institutions, many of them operating for years without necessarily providing a good education to their students. For years, I have listened to the complaints from our deans of education, business, allied health, and some other areas that "fly-by-night" schools or "motel schools" were competing unfairly with them or giving absurd amounts of credit for impossibly small amounts of work or academic content.

I must admit that I usually dismissed these complaints lightly, telling them they should pay more attention to the quality and value of their own programs, and let free enterprise and competition drive out the low-value products. I felt they (our deans) had a conflict of interest, and that they wanted someone to enforce a monopoly for them. More recently I have concluded that our deans were, in fact, the only ones paying attention to the public interest. Our schools of education (not the motel schools) are the ones being held responsible for the quality of our K-12 teachers, and they are tired of being told they are turning out an inferior product when shabby but accredited programs are an increasingly large part of the problem. The public school teachers themselves have a conflict of interest: They are required to earn continuing education credits from accredited programs, and it is in their interest to satisfy this requirement at the lowest possible cost to themselves. So the quality of the cheapest or quickest credit is of great importance in the public interest, and the only safeguard for that public interest is the vigilance of the accrediting organizations. I lay this problem squarely at the feet of the U.S. Department of Education, the state departments of public instruction, and the education accreditors. They all need to clean up their acts in the public interest.

3. Cost of education. There is currently lots of hand-wringing on the topic of the "cost of education." What is really meant by the hand-wringers is not the cost of education, but the price of education to the students and their families: the fact that tuition rates are inflating at a far faster rate than the CPI. I've made a very important distinction here: the distinction between cost and price. If education were a manufactured product sold to a homogeneous class of customers in a competitive market with multiple providers, then it would be reasonable to assume there is a simple cause-and-effect relationship between cost and price. But that is not the case.

Very few students pay tuition that covers the actual cost of their education. Most students pay far less than the true cost, and some pay far more. In aggregate, the difference is made up by donors (endowment income) at private colleges, and by state taxpayers at public institutions. Since public colleges enroll more than 75 percent of all students, the overall picture -- the price of higher education to students and their parents -- is heavily influenced by what's going on in the public sector, and the picture is not pretty.

In virtually every state in the country, governors and legislators are providing a smaller share of operating funds for higher education than they used to, and partially offsetting the decrease by super-inflationary increases in tuition. They tell themselves this is not hurting higher education because, after all, the resulting tuitions are still much lower than the advertised tuitions at comparable private colleges, so their public institutions are still a "bargain."

This view represents a fundamental misunderstanding of the nature of the "private model." Private institutions do not substitute high tuition for state support. They substitute gifts and endowment income for state support, and discount their tuitions to the tune of nearly 50 percent on the average.

There is a very good reason why there are so few large private universities: It is because very few schools can amass the endowments required to make the private model work. Of the 100 largest postsecondary schools in the country, 92 are public, and ALL of the 25 largest institutions are public. There is no way the private model can be scaled up to educate a significant fraction of all the high school graduates in the country. Substituting privately financed endowments for public taxpayer support nationwide would require aggregate endowments totaling $1.3 trillion, or about six times the total of all current endowments of public and private colleges and universities in the country. This simply is not going to happen.
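
A minimal sketch (again in Python, purely illustrative) shows the scale involved, assuming a conventional endowment payout rate of about 5 percent and roughly $65 billion a year in state support nationwide -- both illustrative assumptions, not reported figures; only the $1.3 trillion total and the comparison to current endowments appear above.

    # Illustrative arithmetic only: how large an aggregate endowment would be needed
    # to replace annual state support, at a conventional endowment spending rate.
    # Both inputs are illustrative assumptions, not figures reported in the essay.

    ASSUMED_ANNUAL_STATE_SUPPORT_USD = 65e9   # assumed ~$65 billion/year in state support
    ASSUMED_PAYOUT_RATE = 0.05                # assumed 5% endowment spending rate

    required_endowment = ASSUMED_ANNUAL_STATE_SUPPORT_USD / ASSUMED_PAYOUT_RATE
    print(f"Required aggregate endowment: ${required_endowment / 1e12:.2f} trillion")  # ~$1.30 trillion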

So, to the extent that states are pursuing an impossible dream, they are endangering the health and future of our entire system of higher education. Whose responsibility is it to red-flag this situation? Who is responsible for looking out for the overall health of a large, decentralized, diverse public/private system of higher education? When public (or, for that matter, private) colleges point out the hazards of our current trends, they are vulnerable to charges of self-interest. We are accused of waste and inefficiency, and told that we simply need to tighten our belts and become more businesslike.

I don't know of a single university president who wouldn't welcome additional suggestions for genuinely useful efficiencies that have not already been implemented. Is there a legitimate role here for the U.S. Department of Education and the accrediting organizations? To the extent that accrediting organizations take this seriously and use their vast databases of practices and indicators to disseminate best practices nationwide, we would all be better off. Accreditors should be applauding institutions that are on the leading edge of efficiency, and helping, warning, and eventually penalizing those that are wasteful and inefficient, all in the spirit of protecting the public interest. Instead, I'm afraid many accreditors are pushing us in entirely different directions.

4. Another category of problem area is what I will call "protectionism." I have already said there is an inherent conflict of interest in that professional experts must be relied upon to define and control access to the professions. This means that the special accreditors have a special burden to demonstrate that their accreditation standards serve the best interests of the public, and not just the interests of the accredited programs or the profession. Chancellors and provosts get more complaints and see more abuses in this area of accreditation than any other. I will start with a hypothetical and then mention only a small sampling of examples.

In Wisconsin, we are under public and legislative pressure to produce more college-educated citizens -- more bachelor's, master's, and doctoral degrees. Suppose the University of Wisconsin announced next week that any students who completed our 60 credits, or two years, of general education would be awarded a bachelor's degree; that completing two more years in a major would result in a master's degree; and that one year of graduate school would produce a degree entitling the graduate to be called "doctor."

I hope and assume this would be met with outrage. I hope and assume it would result in an uproar among alumni who felt their degrees had been cheapened. I hope and assume it would result in legislative intervention. I even hope and assume it would result in loss of all our accreditations.

That's an extreme example, and most of what I hope and assume would probably happen. But we are already seeing this very phenomenon of degree inflation, and it is being caused by the professions themselves! This is particularly problematic in the health professions, where, it seems, everyone wants to be called "doctor." I have no problem whatsoever with the professional societies and their accreditors telling us what a graduate must know to practice safely and professionally. I have a big problem, though, when they hand us what amounts to a master's-level curriculum and tell us the resulting degree must be called a "doctor of X." This is a transparently self-interested ploy by the profession, and I see no conceivable argument that it is in the public interest. All it does is further confuse an already confusing array of degree names and titles, to no useful purpose.

I asked some of my fellow presidents and chancellors to send me their favorite examples, and I got far too many to include here. Interestingly, and tellingly, most people begged me to hide their institutional identity if I used their examples. I'll let you decide why they might fear being identified. Here are a few:

  • A business accreditor insisting that no business-related courses may be offered by any other school or college on campus.
  • An allied health program at the bachelor's level (offered at a branch campus of an integrated system) that had to be discontinued because the accreditors decreed that programs could be offered at the bachelor's level only if programs were also offered at the master's level on the same campus.
  • An architecture program that was praised for the strength and quality of its curriculum, its graduates, and its placements, and then had its accreditation period halved over a number of trivial resource items, such as the sizes of the brand-new drafting tables its star faculty had selected.

Some years ago, the American Bar Association was sanctioned by the U.S. Department of Justice for using accreditation in repeated attempts to drive up faculty salaries in law schools.

The Committee on Institutional Cooperation (the Big Ten universities plus the University of Chicago) publishes a brochure suggesting reasonable standards for special accreditation. The suggested standards are common-sense things that any reasonable person would agree protect the public interest while not unreasonably constraining the institution or holding accredited status hostage for increased resources or status when the existing resources and status are clearly adequate. They focus on results rather than inputs or pathways to those results. Similar guidelines have been adopted by other associations of universities.

So, when I was provost, I routinely handed copies of that brochure to site-visit teams when they started their reviews, saying "Please don't tell me this program needs more faculty, more space, higher salaries, or a different reporting line. Just tell me whether or not they are doing a good job and producing exemplary graduates." Inevitably, or at least more often than not, at the exit interview, I heard "This program has a decades-long record of outstanding performance and exemplary graduates, but their continued accreditation is endangered unless they get (some combination of) more faculty, higher salaries, a higher S&E budget, larger offices, more space in general, greater independence, a different reporting line, their own library, a very specific degree for the chair or director, tenure for (whomever), ... etc." Often, the program was put on some form of notice such as interim review with a return visit to check for such improvements.

Aside: It is perfectly natural for the faculty members of site-visit teams to feel a special bond with the colleagues whose program they are evaluating. It is natural for the evaluators to want to "help" these colleagues in what they perceive as the zero-sum resource struggles that occur everywhere. It is also natural for them to want to enhance the status of programs associated with their field. But, resource considerations should be irrelevant to accreditation status unless the resources being provided are demonstrably below the minimum needed to deliver high-quality education and outcomes. Similarly, "status" considerations are out of place unless the current status or reporting line demonstrably harms the students or the public interest. It is the responsibility of the professional staffs of accrediting organizations to provide faculty evaluators with warnings about conflict of interest and guidelines on ethical conduct of the evaluation.

Let me end with one of the most egregious examples I have yet encountered, and a current one from the University of Wisconsin. Our medical school spent more than a year in serious introspection and strategic planning, with special attention on its role in addressing the national crisis in health care costs. What topic could be more front-and-center in the public interest? The medical school faculty and administration concluded (among other things) that it is in the public interest for medical schools to pay more attention to public health and prevention, and try to reduce the need for acute and expensive interventions after preventable illnesses have occurred. To signal this changed emphasis, they voted to change the name of the school from "The School of Medicine" to "The School of Medicine and Public Health." They simultaneously developed a formal public health track for their M.D. curriculum.

I am told that we cannot have this school accredited as a school of public health because the accreditation organization insists that schools of public health must be headed by deans who are distinct from, and at the same organizational level as, deans of medicine. In particular, deans of public health may not be subordinate to, nor the same as, deans of medicine. This, despite the fact that the whole future of medicine may evolve in the direction of public health emphasis, and this may well be in the best interests of the country. Ironically, to the best of my knowledge, our current dean of medicine is the only M.D. on our faculty who holds a commission as an officer in the Public Health Service.

I have used some extreme examples and maybe some extreme characterizations intentionally. Often, important points of principle are best illuminated by extreme cases and examples. If there are any readers who are not offended by anything here, then I have failed.  I hope everyone was offended by at least one thing. I also hope I am provably wrong about some things I've said. But, most of all, I hope to stimulate a vigorous debate on this vitally important topic.

John D. Wiley is chancellor of the University of Wisconsin at Madison. This essay is a revised version of a talk Wiley gave at the annual meeting of the Council on Higher Education Accreditation.

Accreditation: Why We Must Change

Accreditation has been high on the agenda of the Secretary of Education’s Commission on the Future of Higher Education -- and not in very flattering ways. In “issue papers” and in-person discussions, members of the commission and others have offered many criticisms of current accreditation practice and expressed little faith or trust in accreditation as a viable force for quality for the future.  

In response, accreditation and higher education officials have questioned the legitimacy of a number of the commission’s criticisms and pointed to the successful history and considerable capacity of accreditation as a reliable authority on higher education quality. Other officials are shrugging off the commission’s conversation with a “this too shall pass” response.

But just as it would be a mistake for the commission to ignore or sideline accreditation as a force for quality, it would be a mistake for the accreditation and higher education communities to ignore the concerns and calls for change from the commission. All of us who believe in the importance and ultimate value of accreditation need to take seriously what we have heard.

That doesn’t mean that I agree with all of what’s been said in the commission’s deliberations to “improve accreditation” or to “transform accreditation” -- especially when these comments are based on an (erroneous) perception of accreditation as a failed system. But I do think that we should heed some of the criticism -- calls for accreditation to pay more attention to institutional performance and student learning outcomes, to additional transparency, to increased rigor in accreditation standards (moving toward “world class”), and to expanded support for innovation, especially in the for-profit sector.

There is an additional -- and quite worrisome -- call from the commission: to aggressively nationalize the accreditation and quality discussion, captured by concepts such as the “National Accreditation Foundation,” the “National Accreditation Working Group,” and the “National Accreditation Framework” in the commission documents. These constructs are cause for concern because they can easily lead to a single set of national standards by which to judge all of higher education quality, or to a federalizing of accreditation that expands direct federal control and prescriptiveness with regard to standards, policy, and practice.

Short of nationalizing or federalizing, accreditation has a good deal of capacity already in place, so we can continue to respond to some of these calls and to sustain our leadership in academic quality. Accreditors have already done much work in some of these areas, such as giving more attention to student learning outcomes and institutional performance in accreditation standards, and to transparency. The Council for Higher Education Accreditation and the U.S. Department of Education, the two external review bodies that scrutinize accreditation for quality (because they “recognize” accreditors), have standards that include expectations that accreditors will address these and other issues, such as innovation and public participation.

I think nationalizing or federalizing accreditation would take us down the wrong road. But I also part ways with some of my colleagues in accreditation and higher education, from whom we’re hearing comments like “leave us alone,” “trust us” and “you don’t understand us.” Some are saying that an accreditation change agenda should proceed -- but should consist only of changes we like on a timetable acceptable to us. There is little acknowledgment that, in today’s society, a self-regulatory enterprise such as accreditation may now require a higher level of evidence and transparency than we are currently providing. There are few nods to the importance of additional effort to sustain faith and trust in the enterprise.

Yet it is all too easy to envisage a scenario in which nationalization, federalization, loss of leadership, or loss of faith and trust might come about. Suppose, for example, that the calls from the commission continue to gather attention and support. Suppose that the pace of change established by accreditation is simply not swift enough to constitute a viable response. Suppose that actors in the private sector step in and develop new mechanisms to gather information about higher education quality in a more transparent and evidence-based way, sidelining accreditation. Even worse, the federal government might decide that it can proceed with federalizing a “single set of standards” approach to quality, even within the legal and regulatory framework provided by the current Higher Education Act.

There is an alternative scenario. We in accreditation and higher education can use the commission as a constructive external stimulus. We can acknowledge the commission’s message, making sure that we are the leaders for change. It is in our best interest to convert the national attention that the commission has brought to accreditation from a negative to a positive.

For example, accreditation and higher education can commit to progressive proposals that address several of the commission’s calls. We can agree to:

  • Accelerate the current accreditation emphasis on evidence of institutional performance and student learning outcomes, assuring that the language of accreditation standards converts into energetic development and use of evidence of the results of teaching and learning.
  • Break the current impasse in our debate on additional transparency about accredited status, committing ourselves to more fully inform the public about what it means to be accredited: What are institutional strengths? What might be improved?  What does an accreditation review tell students about the services they receive from an institution?
  • Build national capacity for comparability of the key features of accredited institutions and programs, agreeing to a small set of indicators of quality that the public can use to compare institutions.
  • Focus on moving from threshold accreditation standards to greater rigor, especially as this relates to general education and the undergraduate curriculum, as part of a national effort to increase global competitiveness.

Making progress on such proposals will not be easy. First, it will require that accreditation and higher education give greater priority to directly serving the public interest than in the past. Second, we will need to confront the all-too-human tendencies toward complacency, defensiveness and resistance to change. Third and most important, it may require that accreditors and higher education leaders alike face fundamental questions about how much we value and support a strengthened accreditation system. Accreditation will have limited capacity to change unless higher education supports such efforts.

We need public faith and trust in accreditation as a force for quality in the future. We need to sustain and enhance our leadership for academic quality. We need to consider some changes in the conduct of the business of our enterprise.

Judith S. Eaton is president of the Council for Higher Education Accreditation, an association of 3,000 colleges and universities that recognizes 60 institutional and programmatic accrediting organizations.
