(Editor’s note: When Congress renewed the Higher Education Act last summer, lawmakers scuttled the National Advisory Committee on Institutional Quality and Integrity, which advises the U.S. secretary of education on accreditation issues. The panel is soon to be reconstituted with new members appointed by the secretary and by leaders in the House and Senate. The column below offers advice to those making the selections, and to their eventual selections.)
Accreditation long preceded the establishment of the U.S. Department of Education, and it is only within the past 50 years that a nexus was established between accreditation and the ability of a postsecondary institution to award federal student financial aid. Congress, in helping needy students attend college, needed to determine which schools are of sufficient quality to participate in programs under Title IV of the Higher Education Act.
Since we do not have a Ministry of Education, this determination required that the Department of Education frequently look over the shoulder of accrediting agencies to read their lists of quality institutions. Accrediting agencies receive no federal support and therefore are, in a very real sense, simply doing the department a favor in sharing these privately determined and privately supported lists.
There is another side to the story, of course. Accrediting agencies are supported by the schools and colleges they accredit, and are eager to have the department consult their lists, thereby keeping their accredited institutions eligible to participate.
One further element. Since anyone can create an accrediting agency, there must be standards whereby an accrediting agency is determined to be a “reliable authority” -- i.e., accrediting agencies must be recognized by the Secretary of Education. This has spawned a series of regulations surrounding accreditation; every recognized agency must meet a number of requirements, submit appropriate reports, and petition for renewal of recognition every five years.
The Department of Education has developed a great deal of expertise in executing this process, staffed by policy analysts who examine petitions, participate in site visits, draw conclusions, and prepare a report.
All of this plays itself out before the National Advisory Committee on Institutional Quality and Integrity, known as NACIQI. Although this is nominally an advisory committee, the Secretary of Education has usually followed the recommendations of this body, so the individuals sitting on NACIQI have a great deal of influence on the process, and therefore, on all of American higher education.
Suffice it to say that the precarious balance between a government needing to consult the lists of recognized accrediting agencies, and the desire of accrediting agencies to be so recognized, can be greatly nuanced by the nature, knowledge and personal outlooks of the members of NACIQI.
Against this background one must admire the businessmen, legislators, lawyers and financial aid officers who will serve on NACIQI, often armed with little more than their own personal exposure to college, as background to the task of recognizing accrediting agencies.
If past experience is prelude, these will be intelligent, involved, alert and sensitive people. By the time their three-year term of office is over, many, perhaps most, will have begun to understand the complexities of accreditation. They will have sensed the nuances of difference between agencies and between fields, understood the nature of peer review, and appreciated the manner in which accreditation enhances quality and leads to change -- and sometimes to improvement.
They will also have begun to appreciate the conditions leading to successful student outcomes and the role of faculty, students and facilities (inputs!) in a college’s successfully completing its mission.
Until these revelations happen, members will know little more than what they were told in Department of Education training sessions. They will know to watch for conflicts of interest, they will be introduced to the government lawyer who will be present to help them navigate the regulations, and they will hear about the conduct and impact of their sessions.
But they won’t get to meet accreditors firsthand, and will not have an opportunity to participate in an actual, on-site accreditation visit. They will possibly be addressed by people who will cast doubt on peer review, and who claim that expert judgment is not sufficient to establish the quality of programs and institutions. Numbers, they will be told, are needed instead. No matter how irrelevant, incomplete, inconsequential, or purposeless the numbers happen to be.
NACIQI members, at least in the early years, may not be in a position to challenge those assumptions, with questions like: Why were these particular measurements chosen? What have such numbers shown in the past? What improvements to teaching, learning, and policy resulted from their use? Where is evidence of the validity, reliability, and relevance of these measurements?
They probably will not have participated in conversations that questioned the cost/benefit of all this measurement. Nor will they realize that “accountability” has not been defined, operationally or otherwise, and that the theories that govern public education policy have rarely, if ever, been subjected to experimentation and testing to scientific standards.
Polite, attentive NACIQI members will hear presentations from persuasive speakers who work at the peripheries of higher education, but not in it. They will not hear from the people who teach philosophy and English, art and accounting and engineering to ever increasing numbers of indifferently prepared students.
The Department of Education staff is not at fault: It’s simply that the accreditation agenda has been taken over by those who bring a regulatory mindset to the process. Hence, NACIQI rarely hears words such as “scholarly,” “deep study,” “well read,” and “erudite.” They do hear about measures, templates, benchmarks and graduation rates.
As in the past, NACIQI stands to be diverted, unless new members are prepared to defend the academy, to preserve accreditation, and, from the very outset, to challenge, to question, and to examine.
Transparency, in certain frameworks, is a blessing. In accreditation it can convert the site visitor (who benefits from open, frank and revealing interactions) to a regulator (who is reduced to checking off boxes in a rigid grid). “Does transparency have a chilling effect on the site visitor’s report?” is a question that might be appropriate for a NACIQI member.
When hearing about graduation rates, NACIQI might profitably ask whether human interactions (a legislator’s effectiveness, marriage, religious experience), and enterprises (hospitals, jails, retirement communities, colleges and universities) can, indeed, be reduced to numbers.
In a phrase, NACIQI must have people at the ready, from day one, to push back. When they hear talk about student engagement, for example, they must be able to ask questions like: “Does engagement create successful students or are successful students engaged?” And to ask about the experiments, conducted to scientific standards, that support the answers.
NACIQI members must not hesitate to go beyond the surface to seek out the ideas and complexities of accreditation. They should not be intimidated by words like “regulations” and “statutes.” Law as applied to education is a guide, not a barrier or a set of blinders. The Higher Education Act expects that NACIQI members will be people of a variety of backgrounds who are fair and perceptive, with good judgment. NACIQI was not intended to be made up entirely of lawyers.
Education is an analog process that cannot be properly understood or described in terms of discrete digital elements. Even if a qualitative question can be posed, NACIQI members should be prepared for a quantitative response.
At one particularly painful period, accrediting agencies were being asked about the number of applicant schools they rejected. Some members of NACIQI seemed impatient with anything other than a simple numeric answer. Actually, rejection often takes place before a formal application is submitted. The head of the school makes contact, gets to spend a long time with an accreditor, learns enough about the process to know what may be missing, and how to get ready for the future. By the time most schools apply, they’re ready to succeed. A number tells a distorted story, if that!
There are many other such pitfalls; fortunately transcripts of previous NACIQI meetings are readily available, so that anyone can relive some of the highlights (or lowlights) of the past. Attending a conference of the Council for Higher Education Accreditation and/or spending time with accreditors will be invaluable as well.
In the end, the success of NACIQI will depend upon the selection of people who are independent thinkers, knowledgeable, confident, and able to clash in the world of ideas -- knowing that every word they utter will be recorded, transcribed, and made available to everyone, everywhere.
Oh yes, by the way, welcome.
Bernard Fryshman is an accreditor and a professor of physics.
From time to time there is discussion in higher ed circles about the desirability of developing a system of college approval using interstate reciprocity based on a model code. The reason this subject comes up more and more often is that more colleges are operating outside their original state of licensure. Schools end up complying with a dozen different sets of state laws and, in many cases, pay significant fees to multiple jurisdictions. All of this has the net effect of increasing the cost of serving students.
Because of the exceptionally decentralized system of college operations and approvals in the U.S., there is no meaningful federal approval that can be relied on to guarantee that certain standards are met.
Reliance on accrediting bodies does not work for a number of reasons. First, accreditors are membership-based organizations; they are not set up to operate as enforcement agents. Also, they are not structurally or legally capable of resolving student complaints, which is a significant role that states currently handle. They have standards that vary somewhat from group to group. In many cases they do not have frequent enough contact with schools. Finally, they are not answerable to the public in any reasonably direct way.
I have heard college leaders argue that they should not be answerable to the public. It is important to remember that although faculty require the freedom to pursue truth where it may lead them without political interference, colleges as a whole are indeed answerable to the public. In fact, only a government can give them degree-granting power, under U.S. law. This is our only bulwark against diploma mills, and the admirable recent actions by the Wyoming and Alabama governments to snuff out some dubious colleges demonstrate its necessity.
I have heard accreditors argue that because their standards are acceptable to the U.S. Department of Education, states should treat those standards as automatically acceptable. This assumes that the Department of Education has sufficient academic standards that it requires accreditors to enforce, which it does not. The feds do a fairly good job of making sure that colleges who get federal aid are capable of handling it, but they are not in the academic program oversight business. I do not think that any discussion of interstate standards or reciprocity should get tangled up in a discussion of what accreditors or the feds do.
But what do the states do? I work as principal college evaluator for Oregon, and have also done evaluations for several other states. The things that states focus on, and which any interstate agreements would have to incorporate, tend to be detailed and prescriptive, unlike the bulk of accreditation standards.
For example, every three years Oregon requires our approved private-college programs to provide my office with detailed qualification information for every faculty member, full-time and part-time. We look at exactly what their degrees are, what their experience is, and what courses they teach. We often find colleges using faculty to teach in fields in which they are not qualified. We fix that problem.
That is just one example, but it is something that no other type of agency, state or federal, does, except in certain narrow contexts such as evaluation of grant applicants. Why do we do it? Because states are legally responsible for the quality of the educational programs at all colleges, public and private, that operate in their jurisdictions, and in many cases only the state has that responsibility. We have to do it because no one else does or can. We take that role seriously and for the most part (California and Hawaii being the most obvious exceptions) we do it well.
It is time for states to look carefully at each other’s laws and figure out a way to recognize each other’s work when it meets certain minimum standards. What should those standards include? Although there are many possible things to evaluate about a college, the core of any model code upon which reciprocity could be based would have to include the following.
Faculty qualifications. Without a careful look at who is teaching what, and whether they are qualified to do so, meaningful evaluation of a college’s quality is not possible.
Curriculum. Are the programs in each field structured in a reasonable way, comparable to the norm at similar institutions?
Award of credit. Is credit awarded based on an appropriate amount of student work (for example, are schools prevented from giving a degree based on a weekend’s work)? Is credit awarded primarily based on teaching by the school’s own faculty? Is transfer credit limited to schools of demonstrably similar quality? Is credit by examination limited? Is so-called “life experience” credit strictly limited and carefully evaluated?
Admissions. Are admitted students capable of performing college-level work? Are they provided accurate information during the recruitment and admission process? Are any job placement claims backed by solid data?
Finances. Is the college solvent? Does it have adequate reserves to get through periods of falling enrollment? Are fees established and assessed in an appropriate manner, and only on a term-by-term basis? Are refunds available on an appropriate schedule, also term-by-term?
Are there other issues? Certainly: student services, library access and the experience of college managers, among others. However, the five categories shown above have proven to be the crucial ones in my years of experience as an evaluator. The reason is that a failure of performance in any one of these five almost certainly means that the college is not acting appropriately, cannot succeed and is likely to founder. Indeed, a major failure in any of these five should lead the responsible state government to take action to make certain that the college cleans up its act or is closed.
If I could be certain that another state was doing a good job of enforcement in the five core categories, would I be willing to allow a college based in that state to operate in Oregon without going through my own state’s detailed and expensive evaluation process? Yes, with a couple of provisos.
First, faculty teaching only at the Oregon branch would have to be evaluated by someone, either my office or the state of origin. That is a fairly straightforward task and could be handled by either state, though if the faculty are local residents it probably makes more sense for them to be screened by the state where they teach.
The larger issue is that of student complaints. One of the reasons that offices like mine exist is to provide students who have a bad experience owing to inappropriate actions by a college with a way to get complaints resolved without resorting to litigation. In effect, we are a mediator with a very large stick in the closet. In my ten years as Oregon’s chief evaluator, I have rarely had to use the stick, though I have occasionally cast an ostentatious glance in its direction for effect. Sometimes student complaints are simply not justified or don’t violate any state rule. Sometimes a student complaint uncovers a very significant issue that a college needs to fix. A state can compel corrective action.
It is impractical to expect a student in Oregon to get complaints resolved by a state agency in Indiana or Texas. It seems clear that any streamlined state approval reciprocity would need to leave a significant chunk of problem-solving in the hands of the state where the problem happened. That in turn would require that a model code and reciprocity agreement include arrangements for interstate cooperation in such issues. In practice, I work with my colleagues in other states (and several Canadian provinces) quite often already. We help each other with various kinds of issues. I have no doubt that states willing to sign a reciprocity agreement would be willing to help each other make it work.
So how do we begin? Well-meaning education organizations with little knowledge of the practicalities of how state approvals actually work will decide that they should simply invent such a system without bothering to involve actual regulators. To preclude this kind of bumblehandedness, we need the states to simply get to work on this project and develop a workable model code. An attempt to do this happened in the 1970s, but it was not timely. Today, with so many schools operating across state lines, the need has never been greater.
Alan Contreras works for the State of Oregon. His views do not necessarily represent those of his employer.
Skipping lightly over 350 years of history, we will take the bachelor's degree as the true given of American higher education. Since we do not have a ministry of education, the definition of this degree is imprecise, variable and sometimes even fluid, as independent and autonomous institutions experiment, compete, modify, and adapt.
There are bounds, of course. States and accreditors ensure that level and scope remain consistent with commonly understood and accepted standards. Not fixed, but clearly recognizable.
Lest we be diverted, we will also have to disregard the 100 year chronology of the Carnegie Unit (CU) and focus on its present role.
In the life of a student and in the career of a teacher, four years is a very long time. The term is a more tractable unit, and at a time when colleges all offered similar programs of limited variety, it probably sufficed. Students moved through college in lock step and there was less of a need for a measure of accomplishment on a course-by-course basis.
That's not true anymore, and the Carnegie Unit has emerged as a means of identifying accomplishment and progress towards a degree. But it is not a mute measure of elapsed time!
Again, the fundamental given of higher education is the totality of the degree. This involves a content path, a variety of teachers, courses building on one another, and it involves strategies developed over 100 or 150 years of experience. The degree structure encourages a slow but steady change in topics and emphasis to enable courses to remain relevant, fresh, and comprehensive. This in turn enables students to leave college and enter a job, a career, a graduate program, or the professions.
The degree encompasses a generally agreed upon quantity of material delivered over four years, and the CU provides an orderly means of breaking down this totality. The degree presents a coherent track, an organized program, and a series of accomplishments, and so too must the CU.
But the Carnegie Unit is intended to be used only in conjunction with other information. Along with curriculum, catalog description, prerequisites, grades, grade point average, and distribution requirements, credit hours can help describe effort, content, and accomplishment.
A credit hour makes sense only when there is a background of distribution requirements, junior or senior level course work, a degree map, and a topic under discussion. Making progress toward a degree depends on all of these characteristics. The three credits earned in Chem I cannot be offered to replace three credits in Accounting. Indeed, poor planning could leave a student with 150 or more credits and no degree!
Everyone within higher education understands the limits of the Carnegie Unit as well as its usefulness as a medium of exchange, or a lingua franca.
The CU enables useful and easy conversations to take place between departments and among schools. Without this shorthand, every interaction would have to encompass consideration of content, rigor, and intellectual challenge. With it, a course map can be agreed upon that lists 40 courses across 10 departments in an easily comprehensible manner, without the details and minutiae of each course.
The allocation of credit hours is neither mindless nor haphazard. Credit hours are determined by the quantity of material a student is expected to acquire, by the rigor, and by the intellectual challenge he or she will face. So too are the time and effort required. History, experience and professional judgment all influence the decision to assign a certain number of credit hours to a certain quantity and intensity of material.
There is within each field and each subject area a commonly understood level of prerequisites, skills, and competencies that students must bring to bear in addressing a conventional college course. A norm is established, and vigorously protected. Experts in any given field can usually look at a curriculum, a textbook, or an examination and deduce whether or not the credits being offered for the course are consistent with common usage.
There are variations, of course, but within reason. A student presenting three credits in Calculus 101 will find acceptance of these credits virtually everywhere.
There is a cross-fertilization enhancing the norm that occurs as students become faculty members elsewhere, as faculty members change campuses, as students transfer, as textbooks become widely available, as graduates enter professional programs, and as accrediting team visitors travel to different campuses. Comparisons and conversations, as well as experience, all help create a common credit hour currency which speaks volumes in academe, but only haltingly everywhere else.
For the most part, it is a tenured (and largely jaundiced) faculty that jealously protects the integrity of the courses they teach and the degree that they stand for. While there may be grade inflation in some schools, there is rarely credit hour inflation.
A teacher who does not complete his/her course's goals will at one point or another hear from a colleague teaching a more advanced course to students who do not have all the necessary prerequisites. A school whose students do poorly on the bar examination will examine all aspects of the program leading to graduation. This is all part of a vast self-correcting mechanism that protects students and protects the enterprise.
Also important is the textbook marketplace, in which a handful of texts have gained widespread acceptability at least partially because there is an excellent fit with commonly agreed-upon course descriptions and the number of credits assigned.
Difficulty with allocating credit hours appears at the peripheries. Programs that are unusual in their content or structure, weekend programs, accelerated programs, study abroad, experimental and innovative programs all have a common burden: assigning a certain number of credit hours which signifies accomplishment and progress towards a degree, and which are consistent with the norms of higher education.
Courses offered in conventional format are usually associated with a certain quantity of seat time. This, in turn, provides a template against which courses delivered in unfamiliar or unconventional format can be measured. Teachers who have offered the same course on campus and online know what to expect of students at the end of a course and will also be an excellent source of information for assigning a reasonable number of credit hours to the online (or weekend, or study abroad) course.
We do not live in a wild west environment. While there are innovative and nontraditional programs of all kinds, accomplishment is almost invariably measured against existing bricks and mortar classroom time. There are glitches -- but that's all they are. With several million different courses (and assigned Carnegie Units) offered each year, a handful of extravagant claims do not a national emergency make!
In this connection it's worth noting that it is not the role of the accreditor to "give guidance" in matters relating to credit hours. The assignment of credit hours is a faculty's prerogative and responsibility. It is the accreditor's role to ensure that this was done in a reasonable manner consistent with the field and with broad norms. This, of course, is why site visit teams are composed of experts capable of making such judgments.
For completeness, it's important to mention two parties which, by design, use the credit hour as a simple measure of time.
College administration is one. Different courses require different talents, unequal effort ("I fill up two blackboards each period while he plays movies."), dissimilar exam grading burdens.... None of these factors plays a role in determining faculty salaries -- nor should they. The system works, with every faculty member teaching the same number of credit hours.
Government is the other. Student financial aid takes no notice of differences in subject area, level, course title, rigor, challenge, or school reputation. The Department of Education has avoided interfering with the internal workings of postsecondary schools and should continue to do so.
But herein lies a danger, because there is a disconnect between the Department's usage of the CU and the manner in which it is understood and used in higher education. This is the reason we are being asked to define the credit hour as a simple measure, when as noted above, its use and understandings are quite comprehensive and quite complex.
Defining the credit hour will undo the easy exchange, the ready conversations, and the fuzziness which sometimes enables us to coexist. It will generate considerations of other indicators which will similarly have to be precisely defined. Will we accelerate students according to their grade point averages? Will a 4.0 student in physics trump a 4.0 in sociology (or vice versa)?
Will teachers be paid according to the number of students they teach? According to their brightness? Their preparation for the course? Will students pay more for a 45-hour philosophy seminar than for a survey course?
In our litigious society, we have to keep an eye on the inevitable aggrieved student. Did he fail the course because the teacher did not provide a full 50 minutes of teaching? Does humor constitute a part of the teaching hour? Does the time spent by some other student talking constitute “teaching”? How about the time spent by the teacher walking up and down the aisles watching as students work a computer assignment? And how do we determine that a student has devoted two hours to out-of-class work? Will we have cameras, beepers, monitors?
Defining a credit hour has implications that could conceivably cause great dislocation and misunderstanding in higher education.
If there are concerns, they are rare. And these unusual episodes should not be allowed to drive higher education, just as the rare high school “diploma mill” student should not be permitted to impose on all of higher education another costly, time-consuming, and unnecessary burden.
Bernard Fryshman is an accreditor and a professor of physics.
The agenda for change before U.S. higher education is already very long. But with its recent reports on three regional accrediting agencies, the Office of the Inspector General of the Department of Education has moved the definition of the credit hour closer to the top than I had ever imagined.
If the community procrastinates and the out-of-date Carnegie Unit becomes the default definition applied by the department, accrediting agencies and the institutions and programs they accredit will experience greater upset and confusion than they expect or want.
Based on my experience in higher education, I know that for decades faculties assigned credit hours according to a fairly complex although unwritten matrix. But perhaps I received the wrong introduction to the collegiate credit hour as long ago as 1962.
That year Lewis & Clark College, my alma mater, ran a breathtaking experiment. With several other freshman colleagues, I spent my first collegiate semester in Japan. I took four three-credit courses, two of which were completely independent study and two of which involved about six weeks of face-to-face instruction. I took exams in the latter two and turned in lengthy papers in all except Japanese language.
I could not discern any mathematical formula based on seat time and/or study time that made these each three-credit courses. Nor did it bother me that the actual workload for each three-credit course seemed different. I assumed the faculty of record, as well as the L&C faculty as a whole, must have agreed to the assignment of credit hours. Looking back on that experience, I can also testify that had time on task alone been the measure of learning, I probably deserved four credits each in a couple of those courses.
The “flexibility” of the credit hour continued throughout my collegiate career. In finishing my undergraduate studies, I had a three-credit honors thesis course that had no structured time commitments. It prepared me for graduate school where, after finishing a sequence of courses, I registered faithfully each semester for credit-bearing “independent” courses for my doctoral research. I assumed that everyone in the academy understood that the use of credit hours to measure student learning often was not tied to seat time or study time.
My decade as a classroom instructor essentially confirmed that understanding. During it I experienced my share of faculty squabbles over losses of a class day -- and the contact hours it represented -- to such things as post-Thanksgiving Fridays and campus-wide days devoted to the discussions of the issue of the moment. Differences among faculty opinions most often were ironed out in curriculum committees and faculty senates. Sometimes contact hours figured into those debates; sometimes other faculty expectations of student activity counted more heavily. But most faculties seemed to have a basic understanding of how to assign credits.
As I moved from campus to campus in the 1970s, I saw that this understanding apparently carried across institutional boundaries. I moved from institutions with 15-week semesters to others with 10-week semesters. I created courses for the four- to six-week courses in a “4-1-4” or a “4-4-1” academic calendar, and once I taught summer school sessions on a six-week calendar. Calendars shifted, but allocation of credit hours, at least to me, appeared to follow some well-understood “industry standards” related to mastery of course content and only loosely tied to the contact hours of a Carnegie Unit.
The fact is that professional judgment by the faculty long ago supplanted seat and study time in the determination of the award of credit hours. Faculties, drawing on education and experience, determine what knowledge and skills a student should master; faculties determine how to break the learning processes necessary for that mastery into courses and modules; and faculties determine the rigor, content, and examination strategies appropriate to the award of a specified number of credit hours. Individual members of the faculty might propose the course and the credit it should bear, but most often it is their faculty colleagues who make the final determination through curriculum approval processes. It has proven to be a decent system that provides a way to tally up learning while allowing for considerable flexibility in delivering education and evaluating learning.
Colleges and universities that serve adult learners by recognizing achieved learning through portfolio evaluations or ACE credit equivalency determinations or CLEP testing have for decades unbundled credit hours from a rigid formula of seat time and study time. Colleges and universities that have integrated work-study and community service into their credit-bearing courses have as well. In making these important educational pathways work, expert judgments by faculties determine the award of credit hours, either by assigning those hours directly or accepting them in transfer.
I think back on the times that credit hours influenced accreditation actions when I was with the Higher Learning Commission. To be sure, truncation of a standard academic calendar most often triggered concern. Frequently, however, the key issues had less to do with time on task than with the rigor of expected learning. Inevitably, the expert judgment of faculty rather than contact/study hours informed the decision about the appropriateness of the challenged credit award. Evaluation team members pored over course syllabuses, evaluated the rigor of the assigned work and study, talked with teaching faculty and students, and sometimes reviewed samples of student work. In some cases they concluded that the award of credit was pretty much in line with industry standards; sometimes they proposed that the accrediting agency require an institution to rework its internal systems for determining the award of credit; and sometimes they found the disconnect between achieved learning and assigned credit to be so out of whack that they recommended denial or withdrawal of accreditation.
The Office of the Inspector General prefers auditable measures of performance. It reads the Higher Education Act, with its multiple references to credit hours, to demand such measures. It appears to propose that the Carnegie Unit is a pretty good place to start. It has little patience with the difficulty of translating professional judgment into some readily auditable matrix. Considering how little the OIG really understands about higher education, I was only a little surprised by how much weight that office placed on such a weak reed.
I was surprised by how quickly voices from the academy and the department proposed that educational quality should, indeed, probably be linked to the Carnegie Unit. A yardstick based on seat time and supposedly related study time is simply the wrong tool for measuring collegiate learning.
Years ago others wiser than I said it was time to find a new way to measure achieved learning. That advice was prompted not by the time-on-task mentality of the OIG but instead by growing discontent over the lack of dependable transfer of credits from one college to another. Credit hours in too many transfer debates become separated from the actual learning achieved by the student. Faculties in receiving institutions are more likely to question the fit of the curriculum represented by the credits than they are to question the award of the credit hours themselves. Frequently when credits transfer, they just don’t count toward the degree. But the transfer issue has not gained enough traction to bring about a community-wide review of the credit hour.
The current OIG challenge ought to be sand under the spinning wheels of the higher education community on this matter. If the inspector general decides that when it comes to credit hours the law requires something more measurable than professional judgment and if the department agrees, then instead of retreating to the old time-on-task formulas, the higher education community must hold up for review and major revision the credit hour system of measuring learning. The community has too much experience in assigning credit hours to very different learning experiences to try to return to artificial formulas based on contact and study hours.
Clearly no one is particularly interested in having the Department of Education lead this important exercise. Thanks to the much-vaunted decentralization of higher education in the United States, leadership for the endeavor is difficult to identify. But a dozen leaders from higher education associations, accrediting agencies, SHEEOs, faculty organizations, and interested foundations must find a way to create a process as important to higher education in this century as the National Education Association and Carnegie Foundation efforts were to the last century. After all, the Carnegie Unit and the credit hour resulted from that seminal work.
With the Carnegie Unit hanging around as the weighty fallback in these resurrected discussions of the credit hour, we must move with dispatch to recast this academic measurement to fit contemporary higher education and the learning achieved by students in it.
Steven D. Crow
Steven D. Crow is CEO of S.D.Crow & Co., which consults on accreditation and other issues, and former president of the Higher Learning Commission of the North Central Association of Colleges and Schools.
As the Council for Higher Education Accreditation gathers for its 2010 Annual Conference this week in Washington, I want to urge a change in the way we do accreditation in the United States to strengthen how it supports public accountability.
Let’s require that every institution of higher education prepare a "public learning audit" as part of its accreditation materials. This would be a stand-alone statement disclosing the evidence that the institution has regarding student learning, and commenting on why the institution takes the approach to assessing learning that it does. The college or university would itself prepare the statement. The statement would be reviewed by the visiting team during periodic accreditation cycles, and approved by the relevant accrediting body. The learning audit would be publicly available to anyone, placed on the Web sites of the institution in question and of the accrediting body.
Earlham College’s public learning audit would highlight some aspects of our results from the National Survey of Student Engagement and the Collegiate Learning Assessment, and it would provide a link to our full scores on these instruments. It would also discuss the quite varied program-by-program assessments we do in each of our majors. It might discuss our growing interest in and experimentation with electronic portfolios of student work. And finally it would say something about what we have learned from assessment that is leading us to make changes to improve the education we offer. These elements are an honest reflection of what we are doing about assessment at Earlham, an approach we believe works well for our liberal arts mission. Another institution, one with a different mission or even a different perspective on how to carry through good assessment, would include other elements in its public learning audit.
Why do we need such public learning audits? An exchange between Peter Ewell (vice president of the National Center for Higher Education Management Systems) and the members of the Spellings Commission in the spring of 2006 was one of the more interesting but less constructive moments in the hearings that preceded the commission’s report. Ewell patiently reminded commission members that virtually every step toward better and more widely conducted assessment of learning in higher education over the previous quarter century had happened because of accreditation. He showed that a good deal of progress had been made. Both at that session and in their final report, commission members, on the other hand, made it clear they had lost patience with accreditation.
“Accreditation, the large and complex public-private system of federal, state, and private regulators, has significant shortcomings,” (p. 14) concluded the Spellings Commission in its final report. No, not everyone would agree that the parenthetical phrase is an accurate, brief description of accreditation. Nevertheless, “significant shortcomings” is a blunt, clear conclusion. The report goes on to say “Accreditation reviews are typically kept private, and those that are made public still focus on process reviews more than bottom-line results for learning and costs” (p. 14). And later in the report, “Higher education institutions should make aggregate summary results of all postsecondary learning measures, e.g., test scores, certification and licensure attainment, time to degree, graduation rates, and other relevant measures, publicly available in a consumer-friendly form as a condition of accreditation” (p. 23).
There has been steadily rising agreement from all sides that we need to put more focus on learning outcomes in accreditation, but on the question of making accreditation more public we have been in a stalemate for years. Critics (such as those on the commission) have insisted that accreditation self-studies and visit reports be made public. Defenders of the current system argue that making these documents public would undermine the candor needed to make these documents useful for institutional improvement. Besides, note defenders, accreditation self-studies and visit reports are lengthy, complex documents; no one in the public would really want to read them.
Public learning audits would provide a middle ground and break the stalemate. Accrediting agencies would provide a rubric for preparing these statements. Colleges and universities would have considerable latitude within these rubrics about how to assess student learning. That latitude would allow an institution to shape its report around its mission(s), and to make its own choices regarding instruments or approaches for assessing learning. These statements would be prepared as stand-alone documents, with institutions knowing that these statements would become public.
Ten years (the normal length of time before an accreditation needs to be renewed) would be too long a shelf life for these learning audits, so the public learning audits would be revised every two years. At the time of re-accreditation, visiting teams would look back over the sequence of statements to assure that these were forthcoming and adequate.
We do not need to require that all the materials prepared for accreditation be made public. But we need to recognize, all of us, that the effectiveness of an institution of higher education with regard to student learning is a legitimate focus of public accountability. Any college or university enrolling students who receive federal financial aid should be subject to such accountability. Public learning audits would be a useful vehicle for providing such accountability. We do not need Congress or the Department of Education to act. Accrediting bodies could make the change themselves.
Douglas C. Bennett
Douglas C. Bennett is president of Earlham College.