At the annual meeting of one of the regional accrediting agencies a few years ago, I wandered into the strangest session I’ve witnessed in any academic gathering. The first presenter, a young woman, reported on a meeting she had attended that fall in an idyllic setting. She had, she said, been privileged to spend three days “doing nothing but talking assessment” with three of the leading people in the field, all of whom she named and one of whom was on this panel with her. “It just doesn’t get any better than that!” she proclaimed. I kept waiting for her to pass on some of the wisdom and practical advice she had garnered at this meeting, but it didn’t seem to be that kind of presentation.
The title of the next panel I chose suggested that I would finally learn what accrediting agencies meant by “creating a culture of assessment.” This group of presenters, four in all, reenacted the puppet show they claimed to have used to get professors on their campus interested in assessment. The late Jim Henson, I suspect, would have advised against giving up their day jobs.
And thus it was with all the panels I tried to attend. I learned nothing about what to assess or how to assess it. Instead, I seemed to have wandered into a kind of New Age revival at which the already converted, the true believers, were testifying about how great it was to have been washed in the data and how to spread the good news among non-believers on their campus.
Since that time, I’ve examined several successful accreditation self-studies, and I’ve talked to vice presidents, deans, and faculty members, but I’m still not sure about what a “culture of assessment” is. As nearly as I can determine, once a given institution has arrived at a state of profound insecurity and perpetual self-scrutiny, it has created a “culture of assessment.” The self-criticism and mutual accusation sessions favored by Communist hardliners come to mind, as does a passage from a Creedence Clearwater Revival song: “Whenever I ask, how much should I give? The only answer is more, more!”
Most of the faculty resistance we face in trying to meet the mandates of the assessment movement, it seems to me, stems from a single issue: professors feel professionally distrusted and demeaned. The much-touted shift in focus from teaching to student learning at the heart of the assessment movement is grounded in the presupposition that professors have been serving their own ends and not meeting the needs of students. Some fall into that category, but whatever damage they do is greatly overstated, and there is indeed a legitimate place in academe for those professors who are not for the masses. A certain degree of quirkiness and glorious irrelevance was once considered par for the course, and students used to be expected to take some responsibility for their own educations.
Clearly, from what we are hearing about the new federal panel studying colleges, the U.S. Department of Education believes that higher education is too important to be left to academics. What we are really seeing is the re-emergence of the anti-intellectualism endemic to American culture and a corresponding redefinition of higher education in terms of immediately marketable preparation for specific jobs or careers. The irony is that the political party that would get big government off our backs has made an exception of academe.
This is not to suggest, of course, that everything we do in the name of assessment is bad or that we don’t have an obligation to determine that our instruction is effective and relevant. At the meeting of the National Association of Schools of Art and Design, I heard a story that illustrates how the academy got into this fix. It seems an accreditor once asked an art faculty member what his learning outcomes were for the photography course he was teaching that semester. The faculty member replied that he had no learning outcomes because he was trying to turn students into artists and not photographers. When asked then how he knew when his students had become artists, he replied, “I just know.”
Perhaps he did indeed “just know.” One of the most troubling aspects of the assessment movement, to my mind, is the tendency to dismiss the larger, slippery issues of sense and sensibility and to measure educational effectiveness only in terms of hard data, the pedestrian issues we can quantify. But, by the same token, every photographer must master the technical competencies of photography and learn certain aesthetic principles before he or she can employ the medium to create art. The photography professor in question was being disingenuous. He no doubt expected students to reach a minimal level of photographic competence and to see that competence reflected in a portfolio of photographs that rose to the level of art. His students deserved to have these expectations detailed in the form of specific learning outcomes.
Thus it is, or should be, with all our courses. Everyone who would teach has a professional obligation to step back and to ask himself or herself two questions: What, at a minimum, do I want students to learn, and how will I determine whether they have learned it? Few of us would have a problem with this level of assessment, and most of us would hardly need to be prompted or coerced to adjust our methods should we find that students aren’t learning what we expect them to learn. Where we fall out, professors and professional accreditors, is over the extent to which we should document or even formalize this process.
I personally have heard a senior official at an accrediting agency say that “if what you are doing in the name of assessment isn’t really helping you, you’re doing it wrong.” I recommend that we take her at her word. In my experience -- first as a chair and later as a dean -- it is helpful for institutions to have course outlines that list the minimum essential learning outcomes and suggest appropriate assessment methods for each course. It is helpful for faculty members and students to have syllabi that reflect the outcomes and assessment methods detailed in the corresponding course outlines. It is also helpful to have program-level objectives and to spell out where and how such objectives are met.
All these things are helpful and reasonable, and accrediting agencies should indeed be able to review them in gauging the effectiveness of a college or university. What is not helpful is the requirement to keep documenting the so-called “feedback loop” -- the curricular reforms undertaken as a result of the assessment process. The presumption, once again, would seem to be that no one’s curriculum is sound and that assessment must be a continuous process akin to painting a suspension bridge or a battleship. By the time the painters work their way from one end to the other, it is time to go back and begin again. “Out of the cradle, endlessly assessing,” Walt Whitman might sing if he were alive today.
Is it any wonder that we have difficulty inspiring more than grudging cooperation on the part of faculty? Other professionals are largely left to police themselves. Not so academics, at least not any longer. We are being pressured to remake ourselves along business lines. Students are now our customers, and the customer is always right. Colleges used to be predicated on the assumption that professors and other professionals have a larger frame of reference and are in a better position than students to design curricula and set requirements. I think it is time to reaffirm that principle; and, aside from requiring the “helpful” documents mentioned above, it is past time to allow professors to assess themselves.
Regarding the people who have thrown in their lot with the assessment movement, to each his or her own. Others, myself included, were first drawn to the academic profession because it alone seemed to offer an opportunity to spend a lifetime studying what we loved, and sharing that love with students, no matter how irrelevant that study might be to the world’s commerce. We believed that the ultimate end of what we would do is to inculcate both a sensibility and a standard of judgment that can indeed be assessed but not guaranteed or quantified, no matter how hard we try. And we believed that the greatest reward of the academic life is watching young minds open up to that world of ideas and possibilities we call liberal education. To my mind, it just doesn’t get any better than that.
Edward F. Palm
Edward F. Palm is dean of social sciences and humanities at Olympic College, in Bremerton, Wash.
College officials and members of the public are watching with intense interest -- and, in some quarters, trepidation -- the proceedings of the U.S. Secretary of Education's Commission on the Future of Higher Education. Given that interest, the following is a memorandum that the panel's chairman, Charles Miller, wrote to its members offering his thinking about one of its thorniest subjects: accountability. As always on Inside Higher Ed, comments are welcomed below.
To: Members, The Secretary of Education’s Commission on the Future of Higher Education
From: Charles Miller, Chairman
Dear Commission Members:
The following is a synopsis of several ongoing efforts, in support of the Commission, in one of our principal areas of focus, "Accountability." The statements and opinions presented in the memo are mine and are not intended to be final conclusions or recommendations, although there may be a developing consensus.
I would appreciate feedback, directly or through the staff, in any form that is most convenient. This memo will be made public in order to promote and continue an open dialogue on measuring institutional performance and student learning in higher education.
As a Commission, our discussions to date have shown a number of emerging demands on the higher education system, which require us to analyze, clarify and reframe the accountability discussion. Four key goals or guiding principles in this area are beginning to take shape.
First, more useful and relevant information is needed. The federal government currently collects a vast amount of information, but unfortunately policy makers, universities, students and taxpayers continue to lack key information to enable them to make informed decisions.
Second, we need to improve, and even fix, current accountability processes, such as accreditation, to ensure that our colleges and universities are providing the highest quality education to their students.
Third, we need to do a much better job of aligning our resources to our broad societal needs. In order to remain competitive, our system of higher education must provide a world-class education that prepares students to compete in a global knowledge economy.
And finally, we need to assure that the American public understands, through access to sufficient information -- particularly in the area of student learning -- what it is getting for its investment in a college education.
Commission Meeting (12/6/05)
At our Nashville meeting, the Commission heard three presentations from a panel on “Accountability.” Panelists represented the national, state and institutional perspectives. In the subsequent discussion, an informal consensus developed that there is a critical need for improved public information systems to measure and compare institutional performance and student learning in consumer-friendly formats, defining consumers broadly as students, families, taxpayers, policy makers and the general public.
Needs for a Modern University Education
The college education needed for the competitive, global environment of the future is far more than specific, factual knowledge; it is about the capability and capacity to think, to develop, and to continue learning. An insightful quote from an educator describes the situation well:
“We are attempting to educate and prepare students (hire people in the workforce) today so that they are ready to solve future problems, not yet identified, using technologies not yet invented, based on scientific knowledge not yet discovered.”
--Professor Joseph Lagowski, University of Texas at Austin
Trends in Measuring Student Learning
There is gathering momentum for measuring through testing what students learn or what skills they acquire in college beyond a traditional certificate or degree.
Very recently, new testing instruments have been developed which measure an important set of skills to be acquired in college: critical thinking, analytic reasoning, problem solving, and written communications.
The Commission is reviewing promising new developments in the area of student testing, which indicate a significant improvement in measuring student learning and related institutional performance. Three independent efforts have shown promise:
A multi-year trial by the Rand Corporation, which included 122 higher education institutions, led to the development of a test measuring critical thinking, analytic reasoning and other skills. As a result of these efforts, the researchers involved have formed a new entity called the Collegiate Learning Assessment, and the tests will now be further developed and marketed widely.
A new test measuring college level reading, mathematics, writing and critical thinking has been developed by the Educational Testing Service and will begin to be marketed in January 2006. This test is designed for colleges to assess their general education outcomes, so the results may be used to improve the quality of instruction and learning.
The National Center for Public Policy and Higher Education developed a new program of testing student learning in five states, which has provided highly promising results and which suggests expansion of such efforts would be clearly feasible.
An evaluation of these new testing regimes provides evidence of a significant advancement in measuring student learning -- especially in measuring the attainment of skills most needed in the future.
Furthermore, new educational delivery models are being created, such as the Western Governors University, which uses a variety of built-in assessment techniques to determine the achievement of certain skills being taught, rather than hours-in-a-seat. These new models are valid alternatives to the older models of teaching and learning and may well prove to be superior for some teaching and learning objectives in terms of cost effectiveness.
There are constructive examples of leadership in higher education in addressing the issues of accountability and student learning, such as the excellent work by the Association of American Colleges and Universities.
The AAC&U has developed a unique and significant approach to accountability and learning assessment, discussed in two recent reports, “Our Students’ Best Work” (2004) and “Liberal Education Outcomes” (2005).
The AAC&U accountability model focuses on undergraduate liberal arts education and emphasizes learning outcomes. The primary purpose is to engage campuses in identifying the core elements of a quality liberal arts education experience and measuring students’ experience in achieving these goals -- core learning and skills that anyone with a liberal arts degree should have. AAC&U specifically does not endorse a single standardized test, but acknowledges that testing can be a useful part of the multiple measures recommended in their framework.
In this model, departments and faculty are expected to be given the primary responsibility to define and assess the outcomes of the liberal arts education experience.
Federal and State Leadership
The federal government currently collects a great deal of information from the higher education system. It may be time to re-examine what the government collects to make sure that it’s useful and helpful to the consumers of the system.
Many states are developing relevant state systems of accountability in order to measure the performance of public higher education institutions. In its recommendations about accountability in higher education, the State Higher Education Executive Officers group has endorsed a focus on learning assessment.
Institutional Performance Measurement
What is clearly lacking is a nationwide system for comparative performance purposes, using standard formats. Private ranking systems, such as the U.S. News and World Report “Best American Colleges” publications, use a limited set of data, which is not necessarily relevant for measuring institutional performance or providing the public with information needed to make critical decisions.
The Commission, with assistance of its staff and other advisors and consultants, is attempting to develop the framework for a viable database to measure institutional performance in a consumer-friendly, flexible format.
Historically, accreditation has been the nationally mandated mechanism to improve institutional quality and assure a basic level of accountability in higher education.
In the view of many, accreditation and related issues of articulation are in need of serious reform, especially a shift toward more outcomes-based approaches. Also in need of substantial improvement are the regional variability in standards, the independence of accreditation, its usefulness for consumers, and its response to new forms of delivery such as internet-based distance learning.
The Commission is reviewing the various practices of institutional and programmatic accreditation. A preliminary analysis will be presented and various possible policy recommendations will be developed.
Accountability, not access, has been the central concern of this Congress in its fitful efforts to reauthorize the Higher Education Act. The House of Representatives has especially shown itself deaf to constructive arguments for improving access to higher education for the next generation of young Americans, and dizzy about what sensible accountability measures should look like. The version of the legislation approved last week by House members has merit only because it lacks some of the strange and ugly accountability provisions proposed during the past three years, though a few vestiges of these bad ideas remain.
Why should colleges and universities be subject to any scheme of accountability? Because the Higher Education Act authorizes billions of dollars in grants and loans for lower-income students as it aims to make college accessible for all. This aid goes directly to students selecting from among a very broad array of institutions: private, public and proprietary; small and large; residential, commuter and on-line. Not unreasonably, the federal government wants to ensure that the resources being provided are used only at credible institutions. Hence, its insistence on accountability.
The financial limits on student aid were largely set in February when Congress hacked $12 billion from loan funds available to many of those same low-income students. With that action, the federal government shifted even more of the burden of access onto families and institutions of higher education, despite knowing that the next generation of college aspirants will be both significantly more numerous and significantly less affluent.
Now Congress is at work on the legislation’s accountability provisions and, despite allocating far fewer dollars, members of both chambers are considering still more intrusive forms of accountability. They appear to have been guided by no defensible conception of what is appropriate accountability.
Colleges and universities serve an especially important role for the nation -- a public purpose -- and they do so whether they are public or private or proprietary in status. The nation has a keen interest in their success. And in an era of heightened economic competition from the European Union, China, India and elsewhere, never has that interest been stronger.
In parallel with other kinds of institutions that serve the public interest, colleges and universities should make themselves publicly accountable for their performance in four dimensions: Are they honest, safe, fair, and effective? These are legitimate questions we ask about a wide variety of businesses: food and drug companies, banks, insurance and investment firms, nursing homes and hospitals, and many more.
Are they honest? Is it possible to read the financial accounts of colleges and universities to see that they conduct their business affairs honestly and transparently? Do they use the funds they receive from the federal government for the intended purposes?
Are they safe? Colleges and universities can be intense environments. Especially with regard to residential colleges and universities, do students face unacceptable risks due to fire, crime, sexual harassment or other preventable hazards?
Are they fair? Do colleges and universities make their programs genuinely available to all, without discrimination on grounds irrelevant to their missions? Given this nation’s checkered history with regard to race, sex, and disability, this is a kind of scrutiny that should be faced by any public-serving institution.
Existing federal laws already, and quite appropriately, govern all three of these areas. For the most part, accountability in each area can best be accomplished by asking colleges and universities to disclose information about their performance in a common and, one hopes, simple manner. No doubt the measures for this required disclosure could be improved. But these three questions have not been the focus of debate during this reauthorization.
On the other hand, Congress has devoted considerable attention to a question that, while completely legitimate, has been poorly understood:
Are they effective? Do students who enroll really learn what colleges and universities claim to teach? This question should certainly be front and center in the debate over accountability.
Institutions of higher education deserve sharp criticism for past failure to design and carry out measures of effectiveness. Broadly speaking, the accreditation process has been our approach to asking and answering this question. For too long, accreditation focused on whether a college or university had adequate resources to accomplish its mission. This was later supplanted by a focus on whether an institution had appropriate processes. But over the past decade, accreditation has finally come to focus on what it should -- assessment of learning.
An appropriate approach to the question of effectiveness must be multiple, independent and professionally grounded. We need multiple measures of whether students are learning because of the wide variety of kinds of missions in American higher education; institutions do not all have identical purposes. Whichever standards a college or university chooses to demonstrate effectiveness, they should not be a creation of the institution itself -- nor of government officials -- but rather the independent development of professional educators joined in widely recognized and accepted associations.
Earlham College has used the National Survey of Student Engagement since its inception. We have made significant use of its findings both for re-accreditation and for improvement of what we do. We are also now using the Collegiate Learning Assessment. I believe these are the best new measures of effectiveness, but we need many more such instruments so that colleges and universities can choose the ones most appropriate to assessing learning within the scope of their particular missions.
Until the 11th hour, the House version of the Higher Education Act contained a provision that would have allowed states to become accreditors, a role they are ill equipped to play. Happily, that provision now has been eliminated. Meanwhile, however, the Commission on the Future of Higher Education, appointed by U.S. Secretary of Education Margaret Spellings, is flirting with the idea of proposing a mandatory one-size-fits-all national test.
Much of the drama of the accountability debate has focused on a fifth and inappropriate issue: affordability. Again until the 11th hour, the House version of the bill contained price control provisions. While these largely have been removed, the bill still requires some institutions that increase their price more rapidly than inflation to appoint a special committee that must include outsiders to review their finances. This is an inappropriate intrusion on autonomy, especially for private institutions.
Why is affordability an inappropriate aspect of accountability? Because in the United States we look to the market to “get the prices right,” not heavy-handed regulation or accountability provisions. Any student looking to attend a college or university has thousands of choices available to him or her at a range of tuition rates. Most have dozens of choices within close commuting distance. There is plenty of competition among higher education institutions.
Let’s keep the accountability debate focused on these four key issues: honesty, safety, fairness, and effectiveness. With regard to the last and most important of these, let’s put our best efforts into developing multiple, independent, professionally grounded measures. And let’s get back to the other key issue, which is: How do we provide access to higher education for the next generation of Americans?
Douglas C. Bennett is president and professor of politics at Earlham College, in Indiana.
The details of accreditation are so arcane and complex that the entire topic is confusing and controversial throughout all of education. When we're immersed in the details of accreditation, it's often exceedingly difficult to see the forest for all the trees. But at the core, accreditation is a very simple concept: Accreditation is a process of self-regulation that exists solely to serve the public interest.
When I say "public interest" I mean the interests of three overlapping but identifiably distinct groups:
The interests of members of the general public in their own personal health, safety, and economic well-being.
The interests of government and elected officials at all levels in assuring wise and effective use of taxpayer dollars.
The consumer interests of students and their families in "getting what they pay for" -- certifications in their chosen fields that genuinely qualify them for employment and for practicing their professions competently and honestly.
Saying that a particular program or degree or institution is "accredited" should and must convey to these publics strong assurance that it meets acceptable minimum standards of quality and integrity.
Aside from the public interest, what other interests are there? Well, there are the interests of the accredited institutions, the interests of existing professional practitioners and their industry groups, and the interests of the accrediting organizations themselves. There is no automatic assurance that these latter interests are always and everywhere consistent with the public interest, so self-regulation (accreditation) necessarily involves consistent and vigilant management of this inherent conflict of interest. It is an inherent conflict because the general public, the government, and the students do not have the technical expertise to set curricular and other educational standards and monitor compliance.
I assume it is generally agreed that it is inconceivable to have anyone other than medical professionals defining the necessary elements and performance standards of medical education. Does the American Medical Association do a good job of protecting the public from fraud and incompetence? Yes, for the most part. But you don't need to talk to very many people to hear cynicism. It is the worst behaviors and the lowest standards of professional competence that create this cynicism, tainting all doctors as well as the AMA. That is why our standards at the bottom or threshold level are so very important. I submit that the bedrock principle and the highest priority for everyone involved in higher education (the institutions, the professional groups, the accrediting organizations, and those who recognize or certify the accreditors) should be and must be to manage these conflicts of interest in ways that are transparent, and that place the public interest ahead of our own several self-interests.
If I could draw an analogy: Think about why the names Enron and WorldCom are so familiar. Publicly owned corporations must open their books to independent accounting firms that are expected to examine them and issue reports assuring the public that acceptable financial reporting and business practices are being followed, and warning the public when they are not. But there is an inherent conflict of interest in this process: The companies being audited are the customers of the accounting firms. This presents an apparent disincentive to look too closely or report too diligently lest the accounting firms lose clients to other firms who are more willing to apply loose standards. Obviously, this conflict was not well-managed by the accounting industry and, as a result, one of the world's largest and previously most respected accounting firms no longer exists, and all U.S. corporations (honest and otherwise) are saddled with an extraordinarily complex and expensive set of new government regulations.
If we don't manage our conflicts well, rest assured one or more of our publics -- the students, the government, or the public at large -- will rise up and take care of it for us in ways that will be expensive, burdensome, poorly designed, and counterproductive. That would be in no one's best interest -- ironically, not even in the public's best interest.
I must acknowledge that our current system of self-regulation is, by and large, working very well, just as most accounting firms and most companies are, and always have been, honest. Some of us, especially in the public sector of higher education, wonder how much more accountability we could possibly stand, and what, if any, value-added there could possibly be if more were imposed on us. At the University of Wisconsin at Madison, for example, we offer 409 differently named degrees -- 136 majors at the bachelor's level, 156 at the master's level, 109 at the Ph.D. level, and 8 professional degrees, 7 of which carry the term "doctor," a point I will return to later.
By Board of Regents policy, every one of our degree programs gets a thorough review at least every 10 years, so we are conducting about 40 program reviews every year, and one full cycle of reviews involves just about every academic official on campus. These internal reviews carry negligible out-of-pocket cost, but conservatively consume about 20 FTE of people's time annually. We are also required by the legislature to report annually on a long list of performance indicators that includes things like time-to-degree, access and affordability, and graduation rates, among many other things. In addition, about 100 of our degree programs are accredited by 32 different special accreditors and, of course, the entire university is accredited by the North Central Association. One complete cycle of these accreditations costs about $5,000,000 and the equivalent of 35 FTE of year-round effort. (Annualized, it is about $850,000 and 6 FTE).
I mention the costs, not to complain about these reviews as expensive burdens, but to emphasize that we put a great deal of real money and real effort into self-examination and accountability. Far from being a burden, accreditation and self-study reviews form the central core of our institutional strategic planning and quality improvement programs. The major two-year-long self-study we do for our North Central accreditation, in particular, forms the entire basis for the campus strategic plan, priorities, goals, and quality improvements we adopt for the next 10-year period. As such, it is the most important and valuable exercise we undertake in any 10-year period, and we honestly and sincerely attribute most of the improvements we've made in recent decades to things learned in these intensive self-studies. I think all public universities and established private universities could give similar testimony. Having said all this, let me turn, now, to some of the reasons for the growing public cries for better accountability, and some of the problems I think we need to address in our system of self-regulation:
1. Even in the best-performing universities, there is still considerable room for improvement. To mention one high-visibility area, I think it is nothing short of scandalous that, in 2006, the average six-year graduation rate is only around 50 percent nationwide. Either we are doing a disservice to under-prepared or unqualified students by admitting them in the first place, or we are failing perfectly capable students by not giving them the advising and other help they need to graduate. Either way, we are wasting money and human capital inexcusably. Even at universities like mine, where the graduation rate is now 80 percent, if there are peer institutions doing better (and there are), then 80 percent should be considered unacceptably low.
Now, if we were pressured to increase that number quickly to 85 percent or 90 percent and threatened with severe sanctions for failing to do so, we could meet any established goal by lowering our graduation standards, or by fudging our numbers in plausibly defensible ways, or by doing any number of other things that would satisfy our self-interest but fail the public-interest test. Who's to stop us? Well, I submit these are exactly the sorts of conflicts of interest the accrediting organizations should be expected to monitor and resolve in the public interest. The public interest is in a better-educated public, not in superficial compliance with some particular standard. The public relies on accreditors to keep their eye on the right ball. More generally, accrediting organizations are in an excellent -- maybe even unique -- position to identify best practices and transfer them from one college to another, improving our entire system of higher education.
2. A second set of problems involves accreditation of substandard or even fraudulent schools and programs. Newspapers have been full of reports of such institutions, many of them operating for years without necessarily providing a good education to their students. For years, I have listened to the complaints of our deans of education, business, allied health, and some other areas that "fly-by-night" schools or "motel schools" were competing unfairly with them or giving absurd amounts of credit for impossibly small amounts of work or academic content.
I must admit that I usually dismissed these complaints lightly, telling them they should pay more attention to the quality and value of their own programs, and let free enterprise and competition drive out the low value products. I felt they (our deans) had a conflict of interest, and they wanted someone to enforce a monopoly for them. More recently I have concluded that our deans were, in fact, the only ones paying attention to the public interest. Our schools of education (not the motel schools) are the ones being held responsible for the quality of our K-12 teachers, and they are tired of being told they are turning out an inferior product when shabby but accredited programs are an increasingly large part of the problem. The public school teachers, themselves, have a conflict of interest: They are required to earn continuing education credits from accredited programs, and it is in their interest to satisfy this requirement at the lowest possible cost to themselves. So the quality of the cheapest or quickest credit is of great importance in the public interest, and the only safeguard for that public interest is the vigilance of the accrediting organizations. I lay this problem squarely at the feet of the U.S. Department of Education, the state departments of public instruction, and the education accreditors. They all need to clean up their acts in the public interest.
3. Cost of education. There is currently lots of hand-wringing on the topic of the "cost of education." What is really meant by the hand-wringers is not the cost of education, but the price of education to the students and their families: the fact that tuition rates are inflating at a far faster rate than the CPI. I've made a very important distinction here: the distinction between cost and price. If education were a manufactured product sold to a homogeneous class of customers in a competitive market with multiple providers, then it would be reasonable to assume there is a simple cause-and-effect relationship between cost and price. But that is not the case.
Very few students pay tuition that covers the actual cost of their education. Most students pay far less than the true cost, and some pay far more. In aggregate, the difference is made up by donors (endowment income) at private colleges, and by state taxpayers at public institutions. Since public colleges enroll more than 75 percent of all students, the overall picture -- the price of higher education to students and their parents -- is heavily influenced by what's going on in the public sector, and the picture is not pretty.
In virtually every state in the country, governors and legislators are providing a smaller share of operating funds for higher education than they used to, and partially offsetting the decrease by super-inflationary increases in tuition. They tell themselves this is not hurting higher education because, after all, the resulting tuitions are still much lower than the advertised tuitions at comparable private colleges, so their public institutions are still a "bargain." This view represents a fundamental misunderstanding of the nature of the "private model." Private institutions do not substitute high tuition for state support. They substitute gifts and endowment income for state support, and discount their tuitions to the tune of nearly 50 percent on the average.
There is a very good reason why there are so few large private universities: It is because very few schools can amass the endowments required to make the private model work. Of the 100 largest postsecondary schools in the country, 92 are public, and ALL of the 25 largest institutions are public. There is no way the private model can be scaled up to educate a significant fraction of all the high school graduates in the country. Substituting privately financed endowments for public taxpayer support nationwide would require aggregate endowments totaling $1.3 trillion, about six times the total of all current endowments of public and private colleges and universities in the country. This simply is not going to happen.
So, to the extent that states are pursuing an impossible dream, they are endangering the health and future of our entire system of higher education. Whose responsibility is it to red-flag this situation? Who is responsible for looking out for the overall health of a large, decentralized, diverse public/private system of higher education? When public (or, for that matter, private) colleges point out the hazards of our current trends, they are vulnerable to charges of self-interest. We are accused of waste and inefficiency, and told that we simply need to tighten our belts and become more businesslike.
I don't know of a single university president who wouldn't welcome additional suggestions for genuinely useful efficiencies that have not already been implemented. Is there a legitimate role here for the U.S. Department of Education and the accrediting organizations? To the extent that accrediting organizations take this seriously and use their vast databases of practices and indicators to disseminate best practices nationwide, we would all be better off. Accreditors should be applauding institutions that are on the leading edge of efficiency, and helping, warning, and eventually penalizing waste and inefficiency, all in the spirit of protecting the public interest. Instead, I'm afraid many accreditors are pushing us in entirely different directions.
4. Another category of problem area is what I will call "protectionism." I have already said there is an inherent conflict of interest in that professional experts must be relied upon to define and control access to the professions. This means that the special accreditors have a special burden to demonstrate that their accreditation standards serve the best interests of the public, and not just the interests of the accredited programs or the profession. Chancellors and provosts get more complaints and see more abuses in this area of accreditation than any other. I will start with a hypothetical and then mention only a small sampling of examples.
In Wisconsin, we are under public and legislative pressure to produce more college-educated citizens -- more bachelor's, master's, and doctoral degrees. Suppose the University of Wisconsin announced next week that any students who completed our 60 credits, or two years, of general education would be awarded a bachelor's degree; that completing two more years in a major would result in a master's degree; and that one year of graduate school would produce a degree entitling the graduate to be called "doctor."
I hope and assume this would be met with outrage. I hope and assume it would result in an uproar among alumni who felt their degrees had been cheapened. I hope and assume it would result in legislative intervention. I even hope and assume it would result in loss of all our accreditations.
That's an extreme example, and most of what I hope and assume would probably happen. But we are already seeing this very phenomenon of degree inflation, and it is being caused by the professions themselves! This is particularly problematic in the health professions, where, it seems, everyone wants to be called "doctor." I have no problem whatsoever with the professional societies and their accreditors telling us what a graduate must know to practice safely and professionally. I have a big problem, though, when they hand us what amounts to a master's-level curriculum and tell us the resulting degree must be called a "doctor of X." This is a transparently self-interested ploy by the profession, and I see no conceivable argument that it is in the public interest. All it does is further confuse an already confusing array of degree names and titles, to no useful purpose.
I asked some of my fellow presidents and chancellors to send me their favorite examples, and I got far too many to include here. Interestingly, and tellingly, most people begged me to hide their institutional identity if I used their examples. I'll let you decide why they might fear being identified. Here are a few:
A business accreditor insisting that no business-related courses may be offered by any other school or college on campus.
An allied health program at the bachelor's level (offered at a branch campus of an integrated system) that had to be discontinued because the accreditors decreed they could only offer programs at the bachelor's level if they also offered programs at the master's level at the same campus.
An architecture program that was praised for the strength and quality of its curriculum, its graduates, and its placements, and then had its accreditation period halved over a number of trivial resource items, such as the sizes of the brand-new drafting tables its star faculty had selected.
Some years ago, the American Bar Association was sanctioned by the U.S. Department of Justice for using accreditation in repeated attempts to drive up faculty salaries in law schools.
The Committee on Institutional Cooperation (the Big Ten universities plus the University of Chicago) publishes a brochure suggesting reasonable standards for special accreditation. The suggested standards are common-sense things that any reasonable person would agree protect the public interest while not unreasonably constraining the institution or holding accredited status hostage for increased resources or status when the existing resources and status are clearly adequate. They focus on results rather than inputs or pathways to those results. Similar guidelines have been adopted by other associations of universities.
So, when I was provost, I routinely handed copies of that brochure to site-visit teams when they started their reviews, saying "Please don't tell me this program needs more faculty, more space, higher salaries, or a different reporting line. Just tell me whether or not they are doing a good job and producing exemplary graduates." Inevitably, or at least more often than not, at the exit interview, I heard "This program has a decades-long record of outstanding performance and exemplary graduates, but their continued accreditation is endangered unless they get (some combination of) more faculty, higher salaries, a higher S&E budget, larger offices, more space in general, greater independence, a different reporting line, their own library, a very specific degree for the chair or director, tenure for (whomever), ... etc." Often, the program was put on some form of notice such as interim review with a return visit to check for such improvements.
Aside: It is perfectly natural for the faculty members of site-visit teams to feel a special bond with the colleagues whose program they are evaluating. It is natural for the evaluators to want to "help" these colleagues in what they perceive as the zero-sum resource struggles that occur everywhere. It is also natural for them to want to enhance the status of programs associated with their field. But, resource considerations should be irrelevant to accreditation status unless the resources being provided are demonstrably below the minimum needed to deliver high-quality education and outcomes. Similarly, "status" considerations are out of place unless the current status or reporting line demonstrably harms the students or the public interest. It is the responsibility of the professional staffs of accrediting organizations to provide faculty evaluators with warnings about conflict of interest and guidelines on ethical conduct of the evaluation.
Let me end with one of the most egregious examples I have yet encountered, and a current one from the University of Wisconsin. Our medical school spent more than a year in serious introspection and strategic planning, with special attention to its role in addressing the national crisis in health care costs. What topic could be more front-and-center in the public interest? The medical school faculty and administration concluded (among other things) that it is in the public interest for medical schools to pay more attention to public health and prevention, and to try to reduce the need for acute and expensive interventions after preventable illnesses have occurred. To signal this changed emphasis, they voted to change the name of the school from "The School of Medicine" to "The School of Medicine and Public Health." They simultaneously developed a formal public health track for their M.D. curriculum.
I am told that we cannot have this school accredited as a school of public health because the accreditation organization insists that schools of public health must be headed by deans who are distinct from, and at the same organizational level as, deans of medicine. In particular, deans of public health may not be subordinate to, nor the same as, deans of medicine. This, despite the fact that the whole future of medicine may evolve in the direction of public health emphasis, and this may well be in the best interests of the country. Ironically, to the best of my knowledge, our current dean of medicine is the only M.D. on our faculty who holds a commission as an officer in the Public Health Service.
I have used some extreme examples and maybe some extreme characterizations intentionally. Often, important points of principle are best illuminated by extreme cases and examples. If there are any readers who are not offended by anything here, then I have failed. I hope everyone was offended by at least one thing. I also hope I am provably wrong about some things I've said. But, most of all, I hope to stimulate a vigorous debate on this vitally important topic.
John D. Wiley
John D. Wiley is chancellor of the University of Wisconsin at Madison. This essay is a revised version of a talk Wiley gave at the annual meeting of the Council for Higher Education Accreditation.
In response, accreditation and higher education officials have questioned the legitimacy of a number of the commission’s criticisms and pointed to the successful history and considerable capacity of accreditation as a reliable authority on higher education quality. Other officials are shrugging off the commission’s conversation with a “this too shall pass” response.
But just as it would be a mistake for the commission to ignore or sideline accreditation as a force for quality, it would be a mistake for the accreditation and higher education communities to ignore the concerns and calls for change from the commission. All of us who believe in the importance and ultimate value of accreditation need to take seriously what we have heard.
That doesn’t mean that I agree with all of what’s been said in the commission’s deliberations to “improve accreditation” or to “transform accreditation” -- especially when these comments are based on an (erroneous) perception of accreditation as a failed system. But I do think that we should heed some of the criticism -- calls for accreditation to pay more attention to institutional performance and student learning outcomes, to provide additional transparency, to increase the rigor of accreditation standards (moving toward “world class”), and to expand support for innovation, especially in the for-profit sector.
There is an additional -- and quite worrisome -- call from the commission: to aggressively nationalize the accreditation and quality discussion, captured by concepts such as the “National Accreditation Foundation,” the “National Accreditation Working Group,” and the “National Accreditation Framework” in the commission documents. These constructs are cause for concern because they can easily lead to a single set of national standards by which to judge the quality of all of higher education, or to a federalizing of accreditation that expands direct federal control and prescriptiveness with regard to standards, policy and practice.
Short of nationalizing or federalizing, accreditation has a good deal of capacity in place, so we are and can continue to be responsive to some of these calls and sustain our leadership in academic quality. Accreditors have already done much work in some of these areas, such as giving more attention in accreditation standards to student learning outcomes, institutional performance, and transparency. The Council for Higher Education Accreditation and the U.S. Department of Education, the two external review bodies that scrutinize accreditation for quality (because they “recognize” accreditors), have standards that include expectations that accreditors will address these and other issues, such as innovation and public participation.
I think nationalizing or federalizing accreditation would take us down the wrong road. But I also part ways with some of my colleagues in accreditation and higher education, from whom we’re hearing comments like “leave us alone,” “trust us” and “you don’t understand us.” Some are saying that an accreditation change agenda should proceed -- but should consist only of changes we like on a timetable acceptable to us. There is little acknowledgment that, in today’s society, a self-regulatory enterprise such as accreditation may now require a higher level of evidence and transparency than we are currently providing. There are few nods to the importance of additional effort to sustain faith and trust in the enterprise.
Yet it is all too easy to envisage a scenario in which nationalization, federalization, loss of leadership, or loss of faith and trust comes about. Suppose, for example, that the calls from the commission continue to gather attention and support. Suppose that the pace of change established by accreditation is simply not swift enough to constitute a viable response. Suppose that actors in the private sector step in and develop new mechanisms to gather information about higher education quality in a more transparent and evidence-based way, sidelining accreditation. Even worse, the federal government might decide that it can proceed with federalizing a “single set of standards” approach to quality, even within the legal and regulatory framework provided by the current Higher Education Act.
There is an alternative scenario. We in accreditation and higher education can use the commission as a constructive external stimulus. We can acknowledge the commission’s message, making sure that we are the leaders for change. It is in our best interest to convert the national attention that the commission has brought to accreditation from a negative to a positive.
For example, accreditation and higher education can commit to progressive proposals that address several of the commission’s calls. We can agree to:
Accelerate the current accreditation emphasis on evidence of institutional performance and student learning outcomes, assuring that the language of accreditation standards converts into energetic development and use of evidence of the results of teaching and learning.
Break the current impasse in our debate on additional transparency about accredited status, committing ourselves to more fully inform the public about what it means to be accredited: What are institutional strengths? What might be improved? What does an accreditation review tell students about the services they receive from an institution?
Build national capacity for comparability of the key features of accredited institutions and programs, agreeing to a small set of indicators of quality that the public can use to compare institutions.
Focus on moving from threshold accreditation standards to greater rigor, especially as this relates to general education and the undergraduate curriculum, as part of a national effort to increase global competitiveness.
Making progress on such proposals will not be easy. First, it will require that accreditation and higher education give greater priority to directly serving the public interest than in the past. Second, we will need to confront the all-too-human tendencies toward complacency, defensiveness and resistance to change. Third and most important, it may require that accreditors and higher education leaders alike face fundamental questions about how much we value and support a strengthened accreditation system. Accreditation will have limited capacity to change unless higher education supports such efforts.
We need public faith and trust in accreditation as a force for quality in the future. We need to sustain and enhance our leadership for academic quality. We need to consider some changes in the conduct of the business of our enterprise.
Judith S. Eaton
Judith S. Eaton is president of the Council for Higher Education Accreditation, an association of 3,000 colleges and universities that recognizes 60 institutional and programmatic accrediting organizations.
A recent report released by the Secretary of Education’s Commission on the Future of Higher Education recommends some major changes in the way accreditation operates in the United States. Perhaps the most significant of these is a proposal that a new accrediting framework “require institutions and programs to move toward world-class quality” using best practices and peer institution comparisons on a national and world basis. Lovely words, and utterly fatal to the proposal.
The principal difficulty with this lofty goal is that outside of a few rarefied contexts, most people do not want our educational standards to get higher. They want the standards to get lower. The commission’s problem is that public commissions are not allowed to say this out loud, because we who make policy and serve in leadership roles are supposed to pretend that people want higher standards.
In fact, postsecondary education for most people is becoming a commodity. Degrees are all but generic, except for those people who want to become professors or enter high-income professions and who therefore need to get their degrees from a name-brand graduate school.
The brutal truth is that higher standards, applied without regard for politics or any kind of screeching in the hinterlands, would result in fewer colleges, fewer programs, and an enormous decrease in the number and size of the schools now accredited by national accreditors. The commission’s report pretends that the concept of regional accreditation is outmoded and that accreditors ought to in essence be lumped together in the new Great Big Accreditor, which is really Congress in drag.
This idea, when combined with the commitment to uniform high standards set at a national or international level, results in an educational cul-de-sac: It is not possible to put the Wharton School into the same category as a nationally accredited degree-granting business college and say “aspire to the same goals.”
The commission attempts to build a paper wall around this problem by paying nominal rhetorical attention to the notion of differing institutional missions. However, this is a classic question-begging situation: If the missions are so different, why should the accreditor be the same for the sake of sameness? And if all business schools should aspire to the same high standards based on national and international norms, do we need the smaller and the nationally accredited business colleges at all?
The state of Oregon made a similar attempt to establish genuine, meaningful standards for all high school graduates starting in 1991 and ending, for most purposes, in 2006, with little but wasted money and damaged reputations to show for it. Why did it fail? Statements of educational quality goals issued by the central bureaucracy collided with the desire of communities to have every student get good grades and a diploma, whether or not they could read, write or meet minimal standards. Woe to any who challenge the Lake Wobegon Effect.
So let us watch the commission, and its Congressional handlers, as it posits a nation and world in which the desire for higher standards represents what Americans want. This amiable fiction follows in a long history of such romans à clef written by the elite, for the elite and of the elite while pretending to be what most people want. They have no choice but to declare victory, but the playing field will not change.
Alan L. Contreras
Alan L. Contreras has been administrator of the Oregon Office of Degree Authorization, a unit of the Oregon Student Assistance Commission, since 1999. His views do not necessarily represent those of the commission.
In more than 30 years of involvement in accreditation and postsecondary education, I have rarely seen a body of any kind stimulate so much debate, discussion and review as the Secretary of Education's Commission on the Future of Higher Education.
This is probably to be expected, given the seriousness with which the Secretary is following the deliberations of the commission, the quality of its members, and the sometimes provocative proposals that have emerged, particularly with respect to accreditation.
Judith S. Eaton, president of the Council for Higher Education Accreditation, suggested in these pages an aggressive response, noting that "actors in the private sector [could] step in and develop new mechanisms to gather information about higher education quality in a more transparent and evidence-based way, sidelining accreditation."
Eaton makes a strong case for change in the "conduct of the business of our enterprise," but I believe a great deal of calm and dispassionate debate is in order before recommending change.
First some context. The relationship between government and accrediting agencies is that of partners -- wary partners, but partners. There is a dynamic equilibrium in effect, ensuring that if change is to take place, it will be done responsibly, with careful review, and with the input of the entire postsecondary community.
Viewed through that prism, a federal accrediting system that is not of the academy itself, and that does not enjoy the confidence of the schools being visited, would quickly reduce to a regulatory system -- and a regulatory system will simply not work. Schools are open and frank when talking to colleagues; regulators never learn about the limitations and deficiencies that are regularly discussed with accreditors. Federally operated accreditation would be adversarial in nature and would not allow the professional judgment that is so central to higher education. When we take into account the possibility of political input, it is clear that federal accreditation would fail.
A national accrediting system could conceivably work, but not with the outcomes that have made American higher education the envy of the world, and not with the successes that bring other nations to study accreditation and to emulate it. Accreditation is not just a paper process. For one thing, the institutions and programs being accredited play a key role in determining the standards and policies under which they are recognized. At the same time, accreditation agency staff know much more about schools and programs than appears on reports. There are personal relationships that add immeasurably to an agency's ability to assess a school and its function. These reasons call for smaller agencies rather than a single national body.
In addition, the multiplicity of accrediting bodies helps create intellectual ferment, diverse approaches, experimentation, the sharing of strategies and techniques, and the cross-fertilization of ideas that lead to improvement in accreditation. Conferences and papers sponsored by the Council for Higher Education Accreditation are often both stimulus and venue for this healthy interaction, which in turn leads to responsible, carefully monitored, and targeted change.
Our thoughtful and reasoned response to the commission should point out that for over two decades, states, accreditors and scholars alike have sought valid indicators of institutional quality, without success. Similarly, those seeking measures of student learning have failed to identify strategies that accomplish this accurately and comprehensively. Suffice it to add that should such valid measures surface, accreditors and schools will enthusiastically adopt them. Note the emphasis on the word valid.
We might also explain that calls for transparency will make schools reluctant to be open and straightforward with accreditors, and will influence site visitors to write defensively. We would emphasize the importance of accreditation in establishing a threshold for quality, and in fostering a culture where institutions and programs seek to improve beyond that threshold.
And we would make it clear that in accreditation, improvement does not require structural change, and not all change is improvement.
Bernard Fryshman
Bernard Fryshman is executive vice president of the Association of Advanced Rabbinical and Talmudic Schools' Accreditation Commission.
Assessment will make higher education accountable. That’s the claim of many federal and state education policy makers, as illustrated by the Commission on the Future of Higher Education. Improved assessment has become for many the lever to control rising tuition and to inform the public about how much students might learn (and whether they learn at all). But many in higher education worry that assessment can become a simplistic tool -- producing data and more work for colleges, but potentially little else.
Has the politicization of assessment deepened the divide between higher education and the public? How can assessment play the role wished for by policy makers -- to gauge accountability and affordability -- and also be a powerful tool for faculty members, presidents and provosts to use to improve quality and measure competitiveness? Successful policies will include practices that lead to confidence, trust and satisfaction -- confidence by faculty members in the multiple roles of assessment, trust by the public that assessment will bring accountability, and satisfaction by leaders such as presidents that assessment will restore the public’s confidence in higher education. A tall order to be sure, but we believe assessment -- done correctly -- can play a pivotal role in the resolution of the current debate on cost and quality.
For confidence, trust and satisfaction to occur, higher education and public officials must each take two steps. Higher education must first recognize that public accountability is a fact and an appropriate expectation. This means muting the calls by public higher education for more autonomy from state and federal government based simply on the declining percentage of the annual higher education budget provided by public sources. This argument may help gain the attention of policy makers regarding the financial conundrums in higher education, but it is not a suitable argument against accountability. Between federal and state sources, billions of dollars have been invested in higher education over the nearly 150 years of public higher education. The public deserves to know that its investments of the past are being used well today -- efficiently and effectively.
In response, federal and state policy makers need to publicly embrace the notion advocated as early as 1997 that quality is based on “high standards not standardization.” Higher education’s differentiation is a great gift to America. The cornerstone of American higher education -- institutions with a diversity of missions -- is meeting the educational needs of different kinds of students with different levels of preparation and ability to pay. It is important to recognize that assessment must match and reinforce the pluralism of American higher education. America is graced with many different kinds of colleges -- private, public, religious, secular, research, etc. It is important to have an assessment system that encourages colleges and universities to pursue unique missions.
A second step is for higher education to make transparent the evidence of quality that the public needs in order to trust higher education. “Just trust us” is no longer sufficient now that higher education has flexed its independence by setting ever-increasing tuition rates that the public believes are excessive. Trust is built on transparency of evidence, not mere declarations of quality. Practically speaking, a few indicators of quality that cut across higher education will be required. For example, surrogate and indirect measures of learning and development captured by student surveys, the amount of need-based financial assistance, dollars per student invested in advising services, and dollars per faculty member dedicated to instructional and curricular development are some possibilities. Public opinion is heavily on the side of legislators and members of Congress on this issue.
For public policy makers, it is imperative to accept the notion that to assess is to share the evidence and then to care. Caring requires action and support, not just criticism. Public policy makers must educate themselves about the complexity of higher education teaching, research and public engagement. This means accepting that the indicators of quality of the work of the academy are complex, as they should be. Whatever indicators are chosen, the benchmarks will vary by type of college or university. Take graduation rates as an example. Inevitably, highly selective colleges and universities are much more likely to have higher graduation rates than those with access as a goal. The students being admitted to the highly selective colleges and universities already have demonstrated their ability to achieve and have the study skills and background to be successful in college. Open access colleges and universities, on the other hand, have a greater percentage of students who are at risk, need to develop study skills in college, and are in general less prepared for the rigors of college study when compared to those with high achievement records out of high school. But these characteristics -- which frequently also result in lower graduation rates -- do not make these colleges and universities inadequate or not worthy of public support. Many great thinkers have said that a nation can be judged by how it treats its poor; this same argument works for education. The goal for everyone is to do better, starting where the students are -- not where we would like them to be when admitted.
With both sides changing their approaches, the public and higher education can productively focus on how together they can use assessment as an effective tool to determine quality and foster improvement. In doing so, we offer eight recommendations that if followed can offer the faculty the confidence they demand that assessment is a valid tool for communicating the evidence of student learning and development, the presidents the satisfaction that when all is said and done, it will have been worth the effort, and the public the trust that higher education is responsive to its concerns.
1. Recognize that assessment can serve both those within the academy and those outside of it, but different approaches to assessment are required. Faculty members and students can use assessment to provide the feedback that creates patterns and provides insight for their own discussion and decision making. To them, assessment should not be some distant mechanical process far removed from teaching and learning. On the other hand, parents, prospective students, collaborators, and policy makers also can benefit from the results of assessment, but the evidence they need is very different. Through institutional assessment, they can know that specific colleges and universities are more or less effective as places to educate students, which types of students they best serve, and the best fit for jointly tackling society’s problems.
2. Focus on creating a culture of evidence as opposed to a culture of outcomes. Language and terms are important in this endeavor. The latter implies a rigidity of ends, whereas the former reflects the dynamic nature of learning, student development and solution making. A “teaching for the test” mentality cannot be the goal for most academic programs. We know from experience that assessment strategies that have relied most heavily on external standardized measures of achievement have been inadequate for detecting with any precision the complex learning and developmental goals of higher education, e.g., critical thinking, commitment and values.
3. Accept that measurement of basic academic and vocationally oriented skills and competencies may be appropriate for segments of the student population. For example, every time we get on an airplane we think of the minimum (and, we hope, high) standards of the training of the pilots and the rigorous assessment procedures that “guarantee” quality assurance.
4. Avoid generic comparisons between colleges and universities as much as possible. A norm-referenced approach to testing guarantees that one half of the colleges and universities will be below average. The goal is not to be above average on some arbitrary criterion, but to achieve the unique mission and purpose of the specific college and university. A better strategy is to build off one’s strengths -- at both the individual and institutional level. Doing so reinforces an asset rather than a deficit view of both individual and institutional behavior leading to positive change and pride in institutional purpose. In order to benchmark progress, identify similar institutions. Such practices will encourage more differentiation in higher education and work to stem the tide of institutions clamoring to catch up with or be like what is perceived as a more prestigious college or university. "Be what you are, do it exceptionally well, and we will do what we can to fund you" would be a good state education policy.
5. Focus on tools that assess a range of student talent, not just one type or set of skills or knowledge. Multiple perspectives are critical to portraying the complexity of students’ achievements and the most effective learning and development environments for the enrolled students. All components of the learning environment, including student experiences outside the classroom and in the community, must be assessed. We must measure what is meaningful, not give meaning to what we measure or test. Sometimes simple quantitative data such as graduation rates and records of employment are sufficient and essential for accountability purposes. But to give a full portrayal of student learning and development and environmental assessment, many types of evidence in addition to achievement tests are needed. Sometimes portfolio assessment will be appropriate, and at other times standardized exams will be sufficient.
6. Connect assessment with development and change. Assessment has been most useful when driven by commitment to learn, create and develop, not when it has been mandated for purposes of administration and policy making. Assessment is the means, not the end. It is an important tool to be sure, but it always needs to point to some action by the participating stakeholders and parties.
7. Create campus conversations about establishing effective environments for the desirable ends of a college education. Assessment can contribute to this discussion. In its best form, assessment focuses discussion; it does not make decisions. People do that, and people need to be engaged in conversations and dialogue in ways that focus not on the evidence but on the solutions. As we stated earlier, to assess is to share and care. When groups of faculty get together to discuss the evaluations of their students they initially focus, somewhat defensively, on the assessment evidence (and the biases inherent in such endeavors), but as they get to know and trust each other they focus on how to help each other improve.
8. Emphasize assessment’s role in “value added” strategies. Assessment should inform the various publics about how the educational experiences of students, and the institution’s engagement in the larger society, bring value to students and to society. All parties need to get used to the idea that education can be conceptualized and interpreted in terms of a return on investment. But this can only be accomplished if we know what we are aiming for. This will be different for each college and university, and that is why the dialogue with policy makers is so crucial. For some, the primary goal of college will focus on guiding students in their self-discovery and contributing to society; for others it will be more on making a living; for yet others on understanding the world in which we live.
When both the public and higher education accept and endorse the principle that assessment is less about compliance or standardization and more about sharing, caring and transparency, then confidence, trust and satisfaction will be more likely. We believe that higher education must take the lead by focusing on student learning and development and engage with the public in collaborative decision making. If not, policy makers may conclude that they have only the clubs of compliance and standardization to get higher education’s attention.
Larry Braskamp and Steven Schomberg
Larry A. Braskamp, formerly senior vice president for academic affairs at Loyola University Chicago, is professor of education at the university. Steven Schomberg retired in 2005 as vice chancellor for public engagement and institutional relations at the University of Illinois at Urbana-Champaign.
The world of genuine education awoke to a rude surprise on July 29, 2006, for on the previous day Mr. Justice Eady of the British High Court ruled that the Daily Mirror newspaper had libeled a celebrity hypnotist by saying (in articles in 1997 and again in 2003) that his Ph.D. from LaSalle University of Louisiana was bogus. The hypnotist, one Paul McKenna, who performs on television and works with many famous clients, had no other degrees at the time. The judge said that the newspaper had not shown that its statements about LaSalle were “substantially true.”
This is nonsense. Seek in vain for LaSalle of Louisiana, unless you seek among the records of the Federal Bureau of Investigation, U.S. Postal Inspection Service, the courts of Louisiana or the records of the federal prison in Beaumont, Tex., where LaSalle’s owner, who used various names, served time for running the fraudulent college from which McKenna acquired his degree.
Having read the entire opinion, I can say that Eady deserves an award for having listened to such a peculiar case filled with half-truths, quarter-truths and untruths and actually written an exceptionally clear, thoughtful opinion about it. That his conclusions are fundamentally mistaken on the question of whether LaSalle degrees are degrees in the usual sense of the term has as much to do with the exceptional obscurity of American education law as it does with Mr. McKenna’s actions.
It also sets a very bad precedent for the international use of degrees.
Anyone interested in the actual history of LaSalle can read about it in Allen Ezell and John Bear’s book Degree Mills (Prometheus, 2005). LaSalle, its history, its brethren and its spawn are all detailed there. A similar account is available in the new, now ironically named publication “Guide to Bogus Institutions and Documents” (June, 2006) from the American Association of Collegiate Registrars and Admission Officers.
McKenna argued successfully that he did not know that LaSalle’s accreditation was baseless, and the judge agreed that it was unreasonable for him to have been expected to know. That is doubtful but not outrageous, considering that Mr. McKenna is an uneducated person who had to go outside the U.K. to acquire any degree. But the judge went on to say, as quoted by Michael Herman in the Times of London:
“…whatever one may think of the academic quality of his work, or of the degree granted by LaSalle, it would not be accurate to describe it as ‘bogus.’”
These words appear mild and even judicial in temperament, but consider what they mean in reality. A judge in one country has declared that an entity in another country is a legitimate doctoral institution, contrary to the universally understood status of the degree supplier in its home country. The misunderstanding comes about because the judge does not realize that authority to issue degrees in the U.S. comes from states that may in fact have no meaningful standards for such programs. This was the case in Louisiana in the 1990s, as it is in Mississippi today. It can therefore be true, as the judge found, that LaSalle was issuing degrees legally under the laws of Louisiana, but also true, which the judge did not grasp, that the degrees issued did not represent academic learning.
The term “bogus” as tossed around in this case did not refer to the legality of the institution; it referred to the nature of its product relative to the standard expectations of a college degree.
The McKenna case therefore sets a strange precedent for who decides the international use of degrees. Until now, we could generally assume that each country got to decide what is and is not a meaningful college degree within its own boundaries. The fact that LaSalle was briefly allowed to operate as a religious exempt institution in Louisiana (a status acquired by building a church on its lawn) became irrelevant on the day that its owner was convicted of degree fraud, and of course its Ph.D.s were risible from the beginning.
All degrees are by definition academic credentials. Doctoral degrees issued by LaSalle are invalid academic credentials. LaSalle never issued what anyone in American education would accept as genuine degrees. The fact that a few U.S. employers mistakenly allowed such degrees to be used speaks of shoddy screening practices at employers who hired LaSalle degree holders, not of degree acceptability. The fact that McKenna claimed to have sent course work to the owner of LaSalle does not make its degrees genuine.
Eady’s opinion assumes that any entity claiming to be a college is capable of issuing genuine doctoral degrees, provided that it can produce the barest mist of a holographic image of pillars around itself. I readily concede that he has the right to do that within the norms of British law, provided that he is making that decision about a British degree grantor. In the McKenna case, he made that decision about a U.S. degree-grantor, which he should not have, and he got it wrong.
In paragraph 36 of the opinion, the judge wrote that whether a LaSalle degree is “scholarship worthy of academic recognition” is not the matter being litigated. The fact that the judge italicized the word “academic” only emphasizes the underlying problem: All Ph.D.s are academic, and must be so to be genuine. There is no such thing as a nonacademic Ph.D. In paragraph 60, Eady repeats this odd view when he mentions the distinction between the academic value of the Ph.D. and “its practical use.” It is not difficult to get some practical use out of a bogus Ph.D. -- for a while. If that were the standard upon which issuance of Ph.D.s were to be based, we’d all be calling each other Doctor.
It is less important that Eady got it wrong than that he made a determination about a foreign college contrary to how the home nation treats that college. I hope that other British judges do not make the same error, and that this case remains not an anomaly but an utterly freakish result, as it is widely viewed in the education community.
Finally, it is important to consider the difference between this case and the recent cases involving fake schools in Liberia. In the St. Regis case, which involved the issuance of documents that appeared genuine, a U.S. court was presented with evidence that the entity’s approval to issue Liberian degrees had been obtained through fraudulent means and bribery, and that the approval was therefore invalid, as were the degrees issued by the entities. Any nation should have the right to decide that degrees issued by a so-called college in a foreign country are substandard or fake, and therefore unusable in the receiving nation, based on evidence supporting that view, because degrees cannot be imported like coal: Degrees are not commodities and do not contain the same ingredients. All nations need the right to protect their citizens from fakes.
No nation has the right to compel acceptance of degrees issued by a fake school in another country, simply because someone thought it was a real school. Mr. Justice Eady has done the academic community a favor by saying that the Mirror newspaper had not shown that LaSalle was sufficiently bogus. This should wake up the British ministry in charge of postsecondary education, which will, I hope, establish a meaningful screening system for grossly substandard degrees issued by fly-by-night suppliers in other countries.
Alan L. Contreras
Alan L. Contreras has been administrator of the Oregon Office of Degree Authorization, a unit of the Oregon Student Assistance Commission, since 1999. His views do not necessarily represent those of the commission.
Now that Education Secretary Margaret Spellings is using the report of her Commission on the Future of Higher Education to stake out accreditation as the de rigueur battlefront/seed ground/hammer/hoe, we are seeing institutions and accrediting agencies and higher education associations alike scrambling to raise their hands high to the Department of Education in a show-and-tell fest, unprecedented since another commission’s report card, "A Nation at Risk: The Imperative for Educational Reform," was sent home nearly a quarter of a century ago.
While faculty, deans, and provosts are earnestly trying to address the accountability issue and to apply a wide range of instructional and enrollment patterns made possible through new uses of technology -- such as wholly online courses and degree programs; hybrid courses and programs that blend face-/seat-time and online work with traditional campus-based learning; collaborative learning tools; and immersive simulation learning environments (see the EDUCAUSE Learning Initiative 7 Things You Should Know About… series) -- they face the challenges of decreasing resources, increasing enrollments, more demands for non-traditional courses, and a growing entry-level population who arrive in class without the basic skills needed to succeed.
To be successful, major academic redesign efforts often require the involvement of individuals with skills and knowledge not available at the department level where most of the discipline-specific work is done. While experts in technology, in assessment, in teaching methodology, and in course and program design are sometimes made available to faculty and academic offices, the registrar is, unfortunately, rarely involved in these discussions from the earliest stages.
Such an omission can be costly because the registrar can often be a critical component in academic transformation. No matter which of the many possible outcomes of the accountability movement we are talking about -- whether a national unit record system; new metrics for gauging academic progress and graduation rates; adaptable information systems for new forms of instructional design; discipline-specific measures of learning outcomes; mission-, demographic-, and Carnegie class-specific success standards; or a more direct match between learning outcomes, assessment and grading criteria -- in each instance new support systems and policy changes will often be required, and in each instance the registrar is a key agent for any changes that may be required.
In the role of translator, arbiter, influencer, recorder, encoder, manipulator, and implementer of academic policy and grading protocols, and as keeper of official transcript records, privacy policies, enterprise information system architecture, real and virtual classroom usage rules, and academic calendar parameters, the registrar is involved in a wide array of campus activities below the radar of most faculty and many administrators. The registrar, however, can play a vital role in academic innovation by providing invaluable policy counsel and advice about the degree to which information systems can be customized, and, ultimately, can grease the tracks of academic innovation.
The role of the registrar in academic innovation
The registrar has, in fact, a major role to play in four of the most basic academic initiatives found on many campuses:
Redesigning and improving the quality of courses and curricula.
Enhancing the processes of course management and delivery to create more options and increased flexibility.
Translating academic policies into efficient and easily used procedures and refining campus-wide inter-departmental records management procedures accordingly.
Maintaining official academic records and related processes in accord with state and federal privacy legislation while providing faculty and students with the information they require for quality advising and decision-making.
At far too many institutions, academic support, management, and information systems have simply been unable to keep up with the demands and requirements of faculty and academic units as they explore new applications of technology and new patterns of teaching and learning to improve the retention of students, to increase the involvement of students in the community, and to improve the quality and effectiveness of their academic programs.
The problem is a basic one. Many of the academic procedures and structures we now use were developed in a time when colleges and universities were far different than they are today. The challenges were fewer, the instructional capabilities of today’s technology not even dreamed of, the students far more homogenous and motivated, and interaction between the disciplines was the exception and not the rule, with most instruction taking place on campus in the classroom, the library, or the laboratory. It was a far less complex world for students, faculty, administrators, and staff.
Typical efforts to redesign courses and curricula involve faculty working alone or on a team with other faculty in the discipline. Experience has shown, however, that the most effective projects include, in addition to the stakeholder faculty members, others who bring to the table expertise in areas not found in most departments. Without this broader participation, key questions will often go unasked and unanswered, and important options will remain unexplored.
Serving on the core team should be the key faculty members, and an instructional designer or faculty member from another discipline who understands the process of change and brings to the table a knowledge of the research on teaching and learning and the ability and willingness to ask hard questions and to test assumptions. Available to the team should be experts on assessment, on technology, and -- though often overlooked -- the registrar, to anticipate and assist in making the necessary adjustments that will be required in academic regulations and system support.
The common issues
When comprehensive course or curriculum redesign efforts get underway at either the graduate or undergraduate level a number of fundamental questions need to be addressed. Among them:
What assumptions are faculty making about the students entering their courses and degree programs, and how accurate are those assumptions?
What knowledge and skills do students actually bring to particular classes or programs? (If students enter an introductory course with a wide range of knowledge and competencies, why should they all start at the same place? If students have advanced skills or knowledge, can they be exempted from certain units within a course or curriculum?)
Must all students move through a course or program at the same pace? If some students require more time to complete a unit, how can we handle grades at the end of the semester when the work is not yet complete? When students move at different rates, have different requirements based on prior knowledge and experience, and carry work over from semester to semester, how can we handle credits, grades, student charges and faculty loads, not to mention various student-aid issues?
The Syracuse experience offers three key lessons that can guide other campuses.
First, without the registrar as a key player from the start, no easy synergy can be developed between instructional innovation, academic policy, records procedures, and system adaptation. If those directing the project, whether the focus be on-campus, off-campus or a combination of both settings, are building on the latest research on teaching and learning and are “thinking outside of the box,” new administrative systems will be required, and these changes will be impossible to implement without the active participation of the registrar’s office.
Second, new technology innovations such as e-portfolios and course/learning management systems are often implemented under accelerated timelines, jeopardizing compliance with external privacy regulations -- problems the registrar could have anticipated.
Third, unless an individual or a design organization (e.g., the registrar or a teaching and learning support unit) becomes a visible proponent of the opportunity to adapt technology and policy, new visions will chafe against tradition and sputter at best. The registrar often brings to the project a knowledge of the institutional change culture and of the political and technical history of the institution, and a memory of what has worked and why.
Without the active involvement of the registrar, schools, colleges and academic departments attempting to significantly improve the quality of their academic programs can anticipate inefficient or stalled progress.
Robert M. Diamond and Peter B. DeBlois
Robert M. Diamond is president of the National Academy for Academic Leadership and professor emeritus at Syracuse University, where he played a major role in the development of the flexible credit and continuous registration system. Peter B. DeBlois, currently director of communications and publishing at EDUCAUSE, served as university registrar at Syracuse University from 1985 to 2001. Before that, he served as director of registration and records and assistant director of freshman English. He helped design and implement Syracuse’s flexible credit and continuous registration system.