My old friend Archilochus, the Greek lyric poet who has been resting comfortably since the Seventh Century B.C., has been getting a lot of rousing attention lately. And that’s a good thing considering what’s been happening recently in Washington, D.C.
A new federal commission formed by Education Secretary Margaret Spellings has been pushing the idea of holding colleges more accountable for the outcomes of their undergraduate education, which has prompted talk of a federally mandated assessment. I don’t know of anything that would make it harder to improve student learning than a national or federal assessment. And that’s where Archilochus can help.
Years ago Sir Isaiah Berlin picked up the Greek poet’s famous aphorism, “The fox knows many things, but the hedgehog knows one big thing,” and used it as the title of a celebrated essay. Now Philip Tetlock, in his new book, Expert Political Judgment: How Good Is It? (Princeton University Press, 2005), has classified pundits into two categories: Hedgehogs, who have a single big idea or explanation, and Foxes, who look for many intersecting causes. (He found that, by and large, the Foxes do better at predicting what’s to come, except once in a while when the prickly Hedgehogs see something really important and don’t get distracted, no matter what.)
Most of us in academe are foxes, but I want to suggest that we think like hedgehogs for a while, and concentrate on one thing and one thing only -- student learning. Although we can’t ignore the political context, we shouldn’t do this in reaction to the perceived pressure from the federal commission. We should do it, instead, because it’s the one thing on which the flourishing of liberal education most depends right now. We need to do it for our students and for ourselves as educators.
When I became president of the Teagle Foundation two and a half years ago, I worried a lot about the alleged decline and fall of liberal education. The figures I studied showed a decreasing percentage of undergraduates majoring in the traditional disciplines of the liberal arts; some colleges that I visited, or whose leaders I met, seemed to be turning their backs on liberal education; short-term marketing strategies seemed to be eclipsing long-term educational values.
Recently, however, I’ve experienced another eclipse, one in which three tendencies I have been observing block out my old worries. The three trends are:
A shift in goals from content to cognition
The demand for accountability
A new knowledge base for teaching
None of these is an unambiguous Good Thing, and there are enough tricks and traps in each to challenge both foxes and hedgehogs. But in my view -- on balance -- the collision of these trends presents the opportunity to take liberal education to a new level.
It is now possible, in ways that were out of our reach just a few years ago, to teach better and greatly to invigorate student engagement and learning. We can do that, I am convinced, while recommitting ourselves and our institutions to the core educational values of liberal education.
This all comes with a big “IF.” We can reach that higher level only if we focus, focus, focus on student learning -- all of us, faculty, deans, presidents, foundation officers. We all have to become hedgehogs.
Let me explain why I feel so confident that if we focus in this way, liberal education can reach that new level of excellence. In my explanation I will say a few words about each of the three tendencies to which I just alluded, and then try to imagine what liberal education could be like if they are brought together in an integrated system.
1. First, “from content to cognition,” that is, a shift in the stated goals of liberal education from certain subject matter that every educated person should know to certain cognitive capacities that ought to be developed in all students. Over the past few decades, many colleges and universities have come to define their goals as the development of cognitive capacities such as analytical reasoning, critical thinking, clarity of written and oral expression, and moral reasoning. Over the same period the idea that all students should become acquainted with certain texts, topics, and aspects of human experience has pretty much disappeared from curricular thinking.
Curmudgeonly old classicist that I am, I find it hard to imagine a liberal education in which students do not meet Socrates and confront his insistence that the unexamined life is not worth living. Nor can I convince myself that these cognitive goals can be attained in total abstraction, without the specificity and challenge contributed by disciplinary knowledge. Content still matters.
But the shift from content to cognition does have one great benefit: It compels us to think hard about what we want students to have gained once they complete a course or a curriculum. It should make us be explicit about how each course, maybe each assignment, contributes to one cognitive goal or another. In educational jargon, it makes us more “intentional” and thereby much more likely to succeed.
2. Accountability. We are also witnessing a widening demand in many sectors of American society for greater accountability. We owe it all to our friends at Enron, and all the other wonderful playgrounds of corporate greed and corruption. But education is not going to escape the demand for accountability, nor will assessment be restricted to K-12 education. As my friend Steve Wheatley, of the American Council of Learned Societies, put it, “The train is a-comin’ and its name is assessment.”
More systematic assessment of the results of higher education is, as you well know, being demanded by accrediting agencies, governing boards, state legislators, and increasingly the general public. Now, with a federal commission on board the roar of the engine is getting louder and closer.
You and your colleagues may not like to see that train bearing down on your tranquil campus. And you may well share my anger if Congress tells engineers from the Department of Education to run the train. They tried that in K-12 education and I’m not sure whether the results are a disaster or a joke. The best defense is clearly to get out ahead and do assessment right, and do it now.
This top-down pressure for assessment naturally provokes skepticism and resistance, especially from faculty members. But what happens if we reverse the direction and look at assessment from the ground up? Let me tell you a story. When the Teagle Foundation began to ask whether it should undertake some initiative in the assessment area, we convened one of our “Listenings,” bringing together for a few days faculty members, administrators, and experts in assessment to advise us. There was plenty of skepticism and some hostility. I began to think maybe this was not such a good idea.
But late in the gathering, two people stood up to speak from the floor. One said in effect, “As scholars we value knowledge. How as teachers can we reject something that might let us know more about our students’ learning?” Another speaker said, “Maybe we can teach better if we know more. It’s worth a try.” For me, and for others at that session, that turned the day. Now the Teagle Foundation has made faculty-led, ground-up assessment one of its top priorities. Nothing, I believe, has greater potential for invigorating student learning in the liberal arts.
All this is built around one essential point: We can teach better and students can learn better if their learning is systematically and appropriately assessed.
3. The third trend is the one that makes me confident that we have nothing to fear from properly crafted assessment. Today we know far more about how students learn and what works in teaching than we did just a few years ago. We know what works -- first-year seminars, inclusion of undergraduates in research projects, problem-based learning, collaborative projects, coordination of service learning, internships, and overseas study with courses and curricula, lots of writing and speaking opportunities with prompt and thorough faculty feedback, capstone experiences in the senior year, and so on. (See Section Six of Liberal Education Outcomes, a 2005 publication from the Association of American Colleges and Universities.)
These are not just bright ideas from educational theorists. They have been tested and usually rigorously evaluated. And although graduate schools keep it a well-hidden secret, the cat is now out of the bag. This new knowledge has been drawn together, concisely summarized, and made easily accessible in Derek Bok’s brand-new book, Our Underachieving Colleges (Princeton University Press, 2006). Every professor should read this book: Its greatest merit is that Bok demolishes the excuses we academics have used to avoid change.
Let me give one example. My friend David Porter, former president of Skidmore College and now a classics professor at Williams College, defines a liberal education as “what you have learned once you have forgotten the facts.” How long would you guess it takes to forget those facts?
Bok has the answer: “… [T]he average student will be unable to recall most of the factual content of a typical lecture within fifteen minutes after the end of class. In contrast, interests, values and cognitive skills are all likely to last longer, as are concepts and knowledge that students have acquired … through their own mental efforts.”
Fifteen minutes! You might say, “We’ve known that, more or less, for a long time.” Then why is lecturing still the dominant mode of instruction in so many settings? Bok offers several answers, the most damaging of which is complacency. He points out, for example, that one poll of faculty members found that 90 percent thought they were “above average” teachers. Welcome to Lake Wobegon.
Can these three trends -- cognitive capacities replacing content, accountability, the new knowledge base for college teaching -- come together and reinforce one another? The key question is whether academic leaders will focus on this and make it happen.
Imagine what such convergence can do for an institution that sets clear, assessable goals for itself in the development of its students’ cognitive capacities. It doesn’t matter whether the institution is multibillionaire Harvard or a struggling college far from the River Charles: There’s no group of college students whose frontal lobes won’t benefit from some additional exercise.
The institution that I am imagining does some testing to establish a baseline and then looks at every aspect of student learning to see how each part can contribute to those goals. It finds out what its students need and what the Big Questions of value and meaning are that can invigorate their engagement with liberal education. It uses the new knowledge base to change some of its practices and try out new ideas. It searches for appropriate means of assessment; if NSSE, the National Survey of Student Engagement, or the CLA, the Collegiate Learning Assessment, doesn’t seem quite right for its setting, there are others -- or, if need be, the institution develops its own.
But whatever means of assessment it chooses, it doesn’t let the results sit in the office of Institutional Research; it uses them in an iterative process, steadily ratcheting up its effectiveness. The students see this; they understand better why they are studying what might otherwise seem remote or irrelevant material. Their enthusiasm increases; they tell their friends and younger siblings. The director of admissions smiles somewhat more often. So do the fund raisers. The alumni and friends of the institution see what is happening; their pride makes them more generous to alma mater. Maybe eventually even U.S. News sees that something is happening, and it is not prestige, pecking order, or wealth. It’s called “student learning.”
This systematic, iterative process of change will do a lot for an institution, for its students and for its faculty. I bet it will make hedgehogs out of them -- focused on, excited by, renewed through their concern for student learning. Most of us went into college teaching for complex reasons, but one of them, I believe, was that we knew it would be a joy to help young people develop their mental capacities. It’s easy to forget that as we get older, to wander away, to end up forgetting that we have something to profess. But the satisfaction is waiting there where we suspected it was when we started -- in helping those students learn and grow.
Now, thanks to this convergence of changes, we can rediscover that satisfaction. We can teach better and students can learn better. That should make hedgehogs very happy indeed.
I hear someone muttering: “Not on my campus; my faculty will never buy into that kind of change.” Don’t be so sure. In my old job at the National Humanities Center, when we were developing programs to let new knowledge in the humanistic disciplines invigorate K-12 and college teaching, Richard Schramm, the talented designer of those programs, told me that he could not recall ever being turned down by an NHC fellow or former fellow when he asked them to help with this work. That matches what we are finding at the Teagle Foundation in developing our new College Community Connections program.
Scholars of great distinction have been willing to roll up their sleeves and pitch in, working with kids in disadvantaged neighborhoods in New York, where public schools are often part of the problem rather than part of the solution. These busy, much-sought-after academics were, I concluded, looking for something fresh, well designed, and capable of renewing their satisfaction in helping students learn. You may find that some of your colleagues are hungry and thirsty for renewal of this sort and that they are ready to try out new ways of invigorating student learning.
Every environment is different, but here’s a suggestion about how one might build momentum and consensus. Try this on your campus. Get your dean to call Princeton University Press and order copies of Derek Bok’s Our Underachieving Colleges for every departmental chair. Ask the chairs to read it and discuss it with their colleagues, and then to meet with you and let you know what the response is. If 413 pages or $29.95 is too much for already strained attention spans or budgets, print out a copy of this article and ask your faculty colleagues whether they agree or disagree. Let them rip it apart. Let them be as prickly as … as prickly as hedgehogs. They may well have a better idea than any of these. The important thing is to focus on that one crucial idea: We can teach better and students can learn better. The only question is how, and the only way to answer it is by being hedgehogs focused on that one crucial thing, improving student learning.
W. Robert Connor
W. Robert Connor is president of the Teagle Foundation. This essay was adapted from a speech given to the American Conference of Academic Deans in January.
Accountability, not access, has been the central concern of this Congress in its fitful efforts to reauthorize the Higher Education Act. The House of Representatives has especially shown itself deaf to constructive arguments for improving access to higher education for the next generation of young Americans, and dizzy about what sensible accountability measures should look like. The version of the legislation approved last week by House members has merit only because it lacks some of the strange and ugly accountability provisions proposed during the past three years, though a few vestiges of these bad ideas remain.
Why should colleges and universities be subject to any scheme of accountability? Because the Higher Education Act authorizes billions of dollars in grants and loans for lower-income students as it aims to make college accessible for all. This aid goes directly to students selecting from among a very broad array of institutions: private, public and proprietary; small and large; residential, commuter and on-line. Not unreasonably, the federal government wants to ensure that the resources being provided are used only at credible institutions. Hence, its insistence on accountability.
The financial limits on student aid were largely set in February when Congress hacked $12 billion from loan funds available to many of those same low-income students. With that action, the federal government shifted even more of the burden of access onto families and institutions of higher education, despite knowing that the next generation of college aspirants will be both significantly more numerous and significantly less affluent.
Now the Congress is at work on the legislation’s accountability provisions, and despite allocating far fewer dollars, members of both chambers are considering still more intrusive forms of accountability. They appear to have been guided by no defensible conception of what constitutes appropriate accountability.
Colleges and universities serve an especially important role for the nation -- a public purpose -- and they do so whether they are public or private or proprietary in status. The nation has a keen interest in their success. And in an era of heightened economic competition from the European Union, China, India and elsewhere, never has that interest been stronger.
In parallel with other kinds of institutions that serve the public interest, colleges and universities should make themselves publicly accountable for their performance in four dimensions: Are they honest, safe, fair, and effective? These are legitimate questions we ask about a wide variety of businesses: food and drug companies, banks, insurance and investment firms, nursing homes and hospitals, and many more.
Are they honest? Is it possible to read the financial accounts of colleges and universities to see that they conduct their business affairs honestly and transparently? Do they use the funds they receive from the federal government for the intended purposes?
Are they safe? Colleges and universities can be intense environments. Especially with regard to residential colleges and universities, do students face unacceptable risks due to fire, crime, sexual harassment or other preventable hazards?
Are they fair? Do colleges and universities make their programs genuinely available to all, without discrimination on grounds irrelevant to their missions? Given this nation’s checkered history with regard to race, sex, and disability, this is a kind of scrutiny that should be faced by any public-serving institution.
Existing federal laws quite appropriately already govern all of these issues. For the most part, accountability in each area can best be accomplished by asking colleges and universities to disclose information about their performance in a common and, hopefully, simple manner. No doubt the measures for this required disclosure could be improved. But these three questions have not been the focus of debate during this reauthorization.
On the other hand, Congress has devoted considerable attention to a question that, while completely legitimate, has been poorly understood:
Are they effective? Do students who enroll really learn what colleges and universities claim to teach? This question should certainly be front and center in the debate over accountability.
Institutions of higher education deserve sharp criticism for past failure to design and carry out measures of effectiveness. Broadly speaking, the accreditation process has been our approach to asking and answering this question. For too long, accreditation focused on whether a college or university had adequate resources to accomplish its mission. This was later supplanted by a focus on whether an institution had appropriate processes. But over the past decade, accreditation has finally come to focus on what it should -- assessment of learning.
An appropriate approach to the question of effectiveness must be multiple, independent and professionally grounded. We need multiple measures of whether students are learning because of the wide variety of kinds of missions in American higher education; institutions do not all have identical purposes. Whichever standards a college or university chooses to demonstrate effectiveness, they should not be a creation of the institution itself -- nor of government officials -- but rather the independent development of professional educators joined in widely recognized and accepted associations.
Earlham College has used the National Survey of Student Engagement since its inception. We have made significant use of its findings both for re-accreditation and for improvement of what we do. We are also now using the Collegiate Learning Assessment. I believe these are the best new measures of effectiveness, but we need many more such instruments so that colleges and universities can choose the ones most appropriate to assessing fulfillment of learning within the scope of their particular missions.
Until the 11th hour, the House version of the Higher Education Act contained a provision that would have allowed states to become accreditors, a role they are ill equipped to play. Happily, that provision now has been eliminated. Meanwhile, however, the Commission on the Future of Higher Education, appointed by U.S. Secretary of Education Margaret Spellings, is flirting with the idea of proposing a mandatory one-size-fits-all national test.
Much of the drama of the accountability debate has focused on a fifth and inappropriate issue: affordability. Again until the 11th hour, the House version of the bill contained price control provisions. While these largely have been removed, the bill still requires some institutions that increase their price more rapidly than inflation to appoint a special committee that must include outsiders to review their finances. This is an inappropriate intrusion on autonomy, especially for private institutions.
Why is affordability an inappropriate aspect of accountability? Because in the United States we look to the market to “get the prices right,” not heavy-handed regulation or accountability provisions. Any student looking to attend a college or university has thousands of choices available to him or her at a range of tuition rates. Most have dozens of choices within close commuting distance. There is plenty of competition among higher education institutions.
Let’s keep the accountability debate focused on these four key issues: honesty, safety, fairness, and effectiveness. With regard to the last and most important of these, let’s put our best efforts into developing multiple, independent, professionally grounded measures. And let’s get back to the other key issue, which is: How do we provide access to higher education for the next generation of Americans?
Douglas C. Bennett is president and professor of politics at Earlham College, in Indiana.
The details of accreditation are so arcane and complex that the entire topic is confusing and controversial throughout all of education. When we're immersed in the details of accreditation, it's often exceedingly difficult to see the forest for all the trees. But at the core, accreditation is a very simple concept: Accreditation is a process of self-regulation that exists solely to serve the public interest.
When I say "public interest" I mean the interests of three overlapping but identifiably distinct groups:
The interests of members of the general public in their own personal health, safety, and economic well-being.
The interests of government and elected officials at all levels in assuring wise and effective use of taxpayer dollars.
The consumer interests of students and their families in "getting what they pay for" -- certifications in their chosen fields that genuinely qualify them for employment and for practicing their professions competently and honestly.
Saying that a particular program or degree or institution is "accredited" should and must convey to these publics strong assurance that it meets acceptable minimum standards of quality and integrity.
Aside from the public interest, what other interests are there? Well, there are the interests of the accredited institutions, the interests of existing professional practitioners and their industry groups, and the interests of the accrediting organizations themselves. There is no automatic assurance that these latter interests are always and everywhere consistent with the public interest, so self-regulation (accreditation) necessarily involves consistent and vigilant management of this inherent conflict of interest. It is an inherent conflict because the general public, the government, and the students do not have the technical expertise to set curricular and other educational standards and monitor compliance.
I assume it is generally agreed that it is inconceivable to have anyone other than medical professionals defining the necessary elements and performance standards of medical education. Does the American Medical Association do a good job of protecting the public from fraud and incompetence? Yes, for the most part. But you don't need to talk to very many people to hear cynicism. It is the worst behaviors and the lowest standards of professional competence that create this cynicism, which taints all doctors as well as the AMA. That is why our standards at the bottom, or threshold, level are so very important. I submit that the bedrock principle and the highest priority for everyone involved in higher education (the institutions, the professional groups, the accrediting organizations, and those who recognize or certify the accreditors) should be and must be to manage these conflicts of interest in ways that are transparent, and that place the public interest ahead of our own several self-interests.
If I could draw an analogy: Think about why the names Enron and WorldCom are so familiar. Publicly owned corporations must open their books to independent accounting firms that are expected to examine them and issue reports assuring the public that acceptable financial reporting and business practices are being followed, and warning the public when they are not. But there is an inherent conflict of interest in this process: The companies being audited are the customers of the accounting firms. This presents an apparent disincentive to look too closely or report too diligently lest the accounting firms lose clients to other firms who are more willing to apply loose standards. Obviously, this conflict was not well-managed by the accounting industry and, as a result, one of the world's largest and previously most respected accounting firms no longer exists, and all U.S. corporations (honest and otherwise) are saddled with an extraordinarily complex and expensive set of new government regulations.
If we don't manage our conflicts well, rest assured that one or more of our publics -- the students, the government, or the public at large -- will rise up and take care of it for us in ways that will be expensive, burdensome, poorly designed, and counterproductive. That would be in no one's best interest -- ironically, not even in the public's best interest.
I must acknowledge that our current system of self-regulation is, by and large, working very well, just as most accounting firms and most companies are, and always have been, honest. Some of us, especially in the public sector of higher education, wonder how much more accountability we could possibly stand, and what, if any, value-added there could possibly be if more were imposed on us. At the University of Wisconsin at Madison, for example, we offer 409 differently named degrees -- 136 majors at the bachelor's level, 156 at the master's level, 109 at the Ph.D. level, and 8 professional degrees, 7 of which carry the term "doctor," a point I will return to later.
By Board of Regents policy, every one of our degree programs gets a thorough review at least every 10 years, so we are conducting about 40 program reviews every year, and one full cycle of reviews involves just about every academic official on campus. These internal reviews carry negligible out-of-pocket cost, but conservatively consume about 20 FTE of people's time annually. We are also required by the legislature to report annually on a long list of performance indicators that includes things like time-to-degree, access and affordability, and graduation rates, among many other things. In addition, about 100 of our degree programs are accredited by 32 different special accreditors and, of course, the entire university is accredited by the North Central Association. One complete cycle of these accreditations costs about $5,000,000 and the equivalent of 35 FTE of year-round effort. (Annualized, it is about $850,000 and 6 FTE).
I mention the costs, not to complain about these reviews as expensive burdens, but to emphasize that we put a great deal of real money and real effort into self-examination and accountability. Far from being a burden, accreditation and self-study reviews form the central core of our institutional strategic planning and quality improvement programs. The major two-year-long self-study we do for our North Central accreditation, in particular, forms the entire basis for the campus strategic plan, priorities, goals, and quality improvements we adopt for the next 10-year period. As such, it is the most important and valuable exercise we undertake in any 10-year period, and we honestly and sincerely attribute most of the improvements we've made in recent decades to things learned in these intensive self-studies. I think all public universities and established private universities could give similar testimony. Having said all this, let me turn, now, to some of the reasons for the growing public cries for better accountability, and some of the problems I think we need to address in our system of self-regulation:
1. Even in the best-performing universities, there is still considerable room for improvement. To mention one high-visibility area, I think it is nothing short of scandalous that, in 2006, the average six-year graduation rate is only around 50 percent nationwide. Either we are doing a disservice to under-prepared or unqualified students by admitting them in the first place, or we are failing perfectly capable students by not giving them the advising and other help they need to graduate. Either way, we are wasting money and human capital inexcusably. Even at universities like mine, where the graduation rate is now 80 percent, if there are peer institutions doing better (and there are), then 80 percent should be considered unacceptably low.
Now, if we were pressured to increase that number quickly to 85 percent or 90 percent and threatened with severe sanctions for failing to do so, we could meet any established goal by lowering our graduation standards, or by fudging our numbers in plausibly defensible ways, or by doing any number of other things that would satisfy our self-interest but fail the public-interest test. Who's to stop us? Well, I submit these are exactly the sorts of conflicts of interest the accrediting organizations should be expected to monitor and resolve in the public interest. The public interest is in a better-educated public, not in superficial compliance with some particular standard. The public relies on accreditors to keep their eye on the right ball. More generally, accrediting organizations are in an excellent -- maybe even unique -- position to identify best practices and transfer them from one college to another, improving our entire system of higher education.
2. A second set of problems involves accreditation of substandard or even fraudulent schools and programs. Newspapers have been full of reports of such institutions, many of them operating for years, without necessarily providing a good education to their students. For years, I have listened to the complaints of our deans of education, business, allied health, and some other areas, that "fly-by-night" schools or "motel schools" were competing unfairly with them or giving absurd amounts of credit for impossibly small amounts of work or academic content.
I must admit that I usually dismissed these complaints lightly, telling them they should pay more attention to the quality and value of their own programs, and let free enterprise and competition drive out the low value products. I felt they (our deans) had a conflict of interest, and they wanted someone to enforce a monopoly for them. More recently I have concluded that our deans were, in fact, the only ones paying attention to the public interest. Our schools of education (not the motel schools) are the ones being held responsible for the quality of our K-12 teachers, and they are tired of being told they are turning out an inferior product when shabby but accredited programs are an increasingly large part of the problem. The public school teachers, themselves, have a conflict of interest: They are required to earn continuing education credits from accredited programs, and it is in their interest to satisfy this requirement at the lowest possible cost to themselves. So the quality of the cheapest or quickest credit is of great importance in the public interest, and the only safeguard for that public interest is the vigilance of the accrediting organizations. I lay this problem squarely at the feet of the U.S. Department of Education, the state departments of public instruction, and the education accreditors. They all need to clean up their acts in the public interest.
3. Cost of education. There is currently lots of hand-wringing on the topic of the "cost of education." What is really meant by the hand-wringers is not the cost of education, but the price of education to the students and their families: the fact that tuition rates are inflating at a far faster rate than the CPI. I've made a very important distinction here: the distinction between cost and price. If education were a manufactured product sold to a homogeneous class of customers in a competitive market with multiple providers, then it would be reasonable to assume there is a simple cause-and-effect relationship between cost and price. But that is not the case.
Very few students pay tuition that covers the actual cost of their education. Most students pay far less than the true cost, and some pay far more. In aggregate, the difference is made up by donors (endowment income) at private colleges, and by state taxpayers at public institutions. Since public colleges enroll more than 75 percent of all students, the overall picture -- the price of higher education to students and their parents -- is heavily influenced by what's going on in the public sector, and the picture is not pretty.
In virtually every state in the country, governors and legislators are providing a smaller share of operating funds for higher education than they used to, and partially offsetting the decrease by super-inflationary increases in tuition. They tell themselves this is not hurting higher education because, after all, the resulting tuitions are still much lower than the advertised tuitions at comparable private colleges, so their public institutions are still a "bargain." This view represents a fundamental misunderstanding of the nature of the "private model." Private institutions do not substitute high tuition for state support. They substitute gifts and endowment income for state support, and discount their tuitions to the tune of nearly 50 percent on the average.
There is a very good reason why there are so few large private universities: It is because very few schools can amass the endowments required to make the private model work. Of the 100 largest postsecondary schools in the country, 92 are public, and ALL of the 25 largest institutions are public. There is no way the private model can be scaled up to educate a significant fraction of all the high school graduates in the country. Substituting privately financed endowments for public taxpayer support nationwide would require aggregate endowments totaling $1.3 trillion, or about six times more than the total of all current endowments of public and private colleges and universities in the country. This simply is not going to happen.
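The $1.3 trillion figure above can be checked with back-of-envelope arithmetic. The annual state-support total and the endowment payout rate in this sketch are my own assumptions for illustration, not numbers taken from the essay:

```python
# Back-of-envelope check of the endowment-substitution argument.
# Assumed figures (not from the essay): roughly $65 billion per year
# in state appropriations for public higher education, and a typical
# endowment spending (payout) rate of 5 percent per year.
state_support = 65e9   # dollars per year of state support to replace (assumed)
payout_rate = 0.05     # fraction of endowment spent annually (assumed)

# Endowment needed so that annual payout equals current state support.
required_endowment = state_support / payout_rate
print(f"${required_endowment / 1e12:.1f} trillion")  # → $1.3 trillion
```

Under these assumed inputs, the arithmetic reproduces the order of magnitude the essay cites: replacing taxpayer support with endowment income would require an endowment far larger than anything the sector has ever accumulated.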
So, to the extent that states are pursuing an impossible dream, they are endangering the health and future of our entire system of higher education. Whose responsibility is it to red-flag this situation? Who is responsible for looking out for the overall health of a large, decentralized, diverse public/private system of higher education? When public (or, for that matter, private) colleges point out the hazards of our current trends, they are vulnerable to charges of self-interest. We are accused of waste and inefficiency, and told that we simply need to tighten our belts and become more businesslike.
I don't know of a single university president who wouldn't welcome additional suggestions for genuinely useful efficiencies that have not already been implemented. Is there a legitimate role here for the U.S. Department of Education and the accrediting organizations? To the extent that accrediting organizations take this seriously and use their vast databases of practices and indicators to disseminate best practices nationwide, we would all be better off. Accreditors should be applauding institutions that are on the leading edge of efficiency, and helping, warning, and eventually penalizing waste and inefficiency, all in the spirit of protecting the public interest. Instead, I'm afraid many accreditors are pushing us in entirely different directions.
4. Another category of problem area is what I will call "protectionism." I have already said there is an inherent conflict of interest in that professional experts must be relied upon to define and control access to the professions. This means that the special accreditors have a special burden to demonstrate that their accreditation standards serve the best interests of the public, and not just the interests of the accredited programs or the profession. Chancellors and provosts get more complaints and see more abuses in this area of accreditation than any other. I will start with a hypothetical and then mention only a small sampling of examples.
In Wisconsin, we are under public and legislative pressure to produce more college-educated citizens -- more bachelor's, master's, and doctoral degrees. Suppose the University of Wisconsin announced next week that any students who completed our 60 credits, or two years, of general education would be awarded a bachelor's degree; that completing two more years in a major would result in a master's degree; and that one year of graduate school would produce a degree entitling the graduate to be called "doctor."
I hope and assume this would be met with outrage. I hope and assume it would result in an uproar among alumni who felt their degrees had been cheapened. I hope and assume it would result in legislative intervention. I even hope and assume it would result in loss of all our accreditations.
That's an extreme example, and most of what I hope and assume would probably happen. But we are already seeing this very phenomenon of degree inflation, and it is being caused by the professions themselves! This is particularly problematic in the health professions, where, it seems, everyone wants to be called "doctor." I have no problem whatsoever with the professional societies and their accreditors telling us what a graduate must know to practice safely and professionally. I have a big problem, though, when they hand us what amounts to a master's-level curriculum and tell us the resulting degree must be called a "doctor of X." This is a transparently self-interested ploy by the profession, and I see no conceivable argument that it is in the public interest. All it does is further confuse an already confusing array of degree names and titles, to no useful purpose.
I asked some of my fellow presidents and chancellors to send me their favorite examples, and I got far too many to include here. Interestingly, and tellingly, most people begged me to hide their institutional identity if I used their examples. I'll let you decide why they might fear being identified. Here are a few:
A business accreditor insisting that no other business-related courses may be offered by any other school or college on campus.
An allied health program at the bachelor's level (offered at a branch campus of an integrated system) that had to be discontinued because the accreditors decreed they could only offer programs at the bachelor's level if they also offered programs at the master's level at the same campus.
An architecture program that was praised for the strength and quality of its curriculum, its graduates, and its placements, and then had its accreditation period halved over a number of trivial resource items, such as the sizes of the brand-new drafting tables that had been selected by its star faculty.
Some years ago, the American Bar Association was sanctioned by the U.S. Department of Justice for using accreditation in repeated attempts to drive up faculty salaries in law schools.
The Committee on Institutional Cooperation (the Big Ten universities plus the University of Chicago) publishes a brochure suggesting reasonable standards for special accreditation. The suggested standards are common-sense things that any reasonable person would agree protect the public interest while not unreasonably constraining the institution or holding accredited status hostage for increased resources or status when the existing resources and status are clearly adequate. They focus on results rather than inputs or pathways to those results. Similar guidelines have been adopted by other associations of universities.
So, when I was provost, I routinely handed copies of that brochure to site-visit teams when they started their reviews, saying "Please don't tell me this program needs more faculty, more space, higher salaries, or a different reporting line. Just tell me whether or not they are doing a good job and producing exemplary graduates." Inevitably, or at least more often than not, at the exit interview, I heard "This program has a decades-long record of outstanding performance and exemplary graduates, but their continued accreditation is endangered unless they get (some combination of) more faculty, higher salaries, a higher S&E budget, larger offices, more space in general, greater independence, a different reporting line, their own library, a very specific degree for the chair or director, tenure for (whomever), ... etc." Often, the program was put on some form of notice such as interim review with a return visit to check for such improvements.
Aside: It is perfectly natural for the faculty members of site-visit teams to feel a special bond with the colleagues whose program they are evaluating. It is natural for the evaluators to want to "help" these colleagues in what they perceive as the zero-sum resource struggles that occur everywhere. It is also natural for them to want to enhance the status of programs associated with their field. But, resource considerations should be irrelevant to accreditation status unless the resources being provided are demonstrably below the minimum needed to deliver high-quality education and outcomes. Similarly, "status" considerations are out of place unless the current status or reporting line demonstrably harms the students or the public interest. It is the responsibility of the professional staffs of accrediting organizations to provide faculty evaluators with warnings about conflict of interest and guidelines on ethical conduct of the evaluation.
Let me end with one of the most egregious examples I have yet encountered, and a current one from the University of Wisconsin. Our medical school spent more than a year in serious introspection and strategic planning, with special attention on its role in addressing the national crisis in health care costs. What topic could be more front-and-center in the public interest? The medical school faculty and administration concluded (among other things) that it is in the public interest for medical schools to pay more attention to public health and prevention, and try to reduce the need for acute and expensive interventions after preventable illnesses have occurred. To signal this changed emphasis, they voted to change the name of the school from "The School of Medicine" to "The School of Medicine and Public Health." They simultaneously developed a formal public health track for their M.D. curriculum.
I am told that we cannot have this school accredited as a school of public health because the accreditation organization insists that schools of public health must be headed by deans who are distinct from, and at the same organizational level as, deans of medicine. In particular, deans of public health may not be subordinate to, nor the same as, deans of medicine. This, despite the fact that the whole future of medicine may evolve in the direction of public health emphasis, and this may well be in the best interests of the country. Ironically, to the best of my knowledge, our current dean of medicine is the only M.D. on our faculty who holds a commission as an officer in the Public Health Service.
I have used some extreme examples and maybe some extreme characterizations intentionally. Often, important points of principle are best illuminated by extreme cases and examples. If there are any readers who are not offended by anything here, then I have failed. I hope everyone was offended by at least one thing. I also hope I am provably wrong about some things I've said. But, most of all, I hope to stimulate a vigorous debate on this vitally important topic.
John D. Wiley
John D. Wiley is chancellor of the University of Wisconsin at Madison. This essay is a revised version of a talk Wiley gave at the annual meeting of the Council on Higher Education Accreditation.
Some skepticism by the academy is understandable. Those on the outside sometimes fail to recognize just how much those of us on many college campuses are already talking seriously about the need to measure what we do and to be constructively critical of ourselves. And some of us may not like the tone of the voices that insist on greater accountability or some of the related ideas, including suggestions to eliminate regional accreditation, dismantle the federal student-aid system, and test college students to determine what they’ve learned.
But as assessment becomes a national imperative, college and university leaders face a major challenge: Many of our faculty colleagues are skeptical about the value of external mandates to measure teaching and learning, especially when those outside the academy propose to define the measures. Many faculty members do not accept the need for accountability, but the assessment movement’s success will depend upon faculty because they are responsible for curriculum, instruction and research. All of us -- policy makers, administrators and faculty -- must work together to develop language, strategies and practices that help us appreciate one another and understand the compelling need for assessment -- and why it is in the best interest of faculty and students.
Why is assessment important? We know from the work of researchers like Richard Hersh, Roger Benjamin, Mark Chun and George Kuh that college enrollment will be increasing by more than 15 percent nationally over the next 15 years (and in some states by as much as 50 percent). We also know that student retention rates are low, especially among students of color and low-income students. Moreover, of every 10 children who start 9th grade, only seven finish high school, five start college, and fewer than three complete postsecondary degrees. And there is a 20 percent gap in graduation rates between African Americans (42 percent) and whites (62 percent). These numbers are of particular concern given the rising higher education costs, the nation’s shifting demographics, and the need to educate more citizens from all groups.
At present, we do not collect data on student learning in a systematic fashion, and rankings of colleges and universities focus on input measures rather than on student learning in the college setting. Many people who have thought about this issue agree: We need to focus on “value added” assessment as an approach to determine the extent to which a university education helps students develop knowledge and skills. This approach entails comparing what students know at the beginning of their education with what they know upon graduating. Such assessment is especially useful when large numbers of students are not doing well -- it can and should send a signal to faculty about the need to look carefully at the “big picture” involving coursework, teaching, and the level of support provided to students and faculty.
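The entry-versus-graduation comparison described above can be sketched numerically. All scores below are invented for illustration; real value-added models (such as those used with standardized instruments like the Collegiate Learning Assessment) also adjust for incoming preparation rather than relying on raw gains alone:

```python
# Hypothetical illustration of raw "value added" as an entry-to-exit
# gain on a common assessment scale. All figures are made up.
entry_scores = [1050, 980, 1120, 1010]  # hypothetical entering scores
exit_scores = [1180, 1110, 1230, 1150]  # same students at graduation

# Per-student gain, then the institution-level average gain.
gains = [post - pre for pre, post in zip(entry_scores, exit_scores)]
mean_gain = sum(gains) / len(gains)
print(f"mean gain: {mean_gain}")  # → mean gain: 127.5
```

The point of the sketch is only the shape of the calculation: the measure of interest is the change attributable to the institution, not the raw exit score.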
Many in the academy, however, continue to resist systematic and mandated assessment in large part because of problems they see with K-12 initiatives like No Child Left Behind -- e.g., testing that focuses only on what can be conveniently measured, unacceptable coaching by teachers, and limiting what is taught to what is tested. Many academics believe that what is most valuable in the college experience cannot be measured during the college years because some of the most important effects of a college education only become clearer some time after graduation. Nevertheless, more institutions are beginning to understand that value-added assessment can be useful in strengthening teaching and learning, and even student retention and graduation rates.
It is encouraging that a number of institutions are interested in implementing value-added assessment as an approach to evaluate student progress over time and to see how they compare with other institutions. Such strategies are more effective when faculty and staff across the institution are involved. Examples of some best practices include the following:
Constantly talking with colleagues about both the challenges and successful initiatives involving undergraduate education.
Replicating successful initiatives (best practices from within and beyond the campus), in order to benefit as many students as possible.
Working continuously to improve learning based on what is measured -- from advising practices and curricular issues to teaching strategies -- and making changes based on what we learn from those assessments.
Creating accountability by ensuring that individuals and groups take responsibility for different aspects of student success.
Recruiting and rewarding faculty who are committed to successful student learning (including examining the institutional reward structure).
Taking the long view by focusing on initiatives over extended periods of time -- in order to integrate best practices into the campus culture.
We in the academy need to think broadly about assessment. Most important, are we preparing our students to succeed in a world that will be dramatically different from the one we live in today? Will they be able to think critically about the issues they will face, working with people from all over the globe? It is understandable that others, particularly outside the university, are asking how we demonstrate that our students are prepared to handle these issues.
Assessment is becoming a national imperative, and it requires us to listen to external groups and address the issues they are raising. At the same time, we need to encourage and facilitate discussions among our faculty -- those most responsible for curriculum, instruction, and research -- to grapple with the questions of assessment and accountability. We must work together to minimize the growing tension among groups -- both outside and inside the university -- so that we appreciate and understand different points of view and the compelling need for assessment.
Freeman A. Hrabowski III
Freeman A. Hrabowski III is president of the University of Maryland, Baltimore County. This article is adapted from a keynote address he gave at a conference on assessment this month co-sponsored by the Educational Testing Service and the Carnegie Foundation for the Advancement of Teaching.
Assessment will make higher education accountable. That’s the claim of many federal and state education policy makers, as illustrated by the Commission on the Future of Higher Education. Improved assessment has become for many the lever to control rising tuition and to inform the public about how much students might learn (and whether they learn at all). But many in higher education worry that assessment can become a simplistic tool -- producing data and more work for colleges, but potentially little else.
Has the politicization of assessment deepened the divide between higher education and the public? How can assessment play the role wished for by policy makers -- to gauge accountability and affordability -- and also be a powerful tool for faculty members, presidents, and provosts to use to improve quality and measure competitiveness? Successful policies will include practices that lead to confidence, trust and satisfaction -- confidence by faculty members in the multiple roles of assessment, trust by the public that assessment will bring accountability, and satisfaction by leaders such as presidents that assessment will restore the public’s confidence in higher education. A tall order to be sure, but we believe assessment -- done correctly -- can play a pivotal role in resolving the current debate on cost and quality.
For confidence, trust and satisfaction to occur, higher education and public officials must each take two steps. Higher education must first recognize that public accountability is a fact and an appropriate expectation. This means muting the calls by public higher education for more autonomy from state and federal government based simply on the declining percentage of the annual higher education budget provided by public sources. This argument may help gain the attention of policy makers regarding the financial conundrums in higher education, but it is not a suitable argument against accountability. Between federal and state sources, billions of dollars have been invested in higher education over the nearly 150 years of public higher education. The public deserves to know that its investments of the past are being used well today -- efficiently and effectively.
In response, federal and state policy makers need to publicly embrace the notion advocated as early as 1997 that quality is based on “high standards not standardization.” Higher education’s differentiation is a great gift to America. The cornerstone of American higher education -- institutions with a diversity of missions -- is meeting the educational needs of different kinds of students with different levels of preparation and ability to pay. It is important to recognize that assessment must match and reinforce the pluralism of American higher education. America is graced with many different kinds of colleges -- private, public, religious, secular, research, etc. It is important to have an assessment system that encourages colleges and universities to pursue unique missions.
A second step is for higher education to make transparent the evidence of quality that the public needs in order to trust higher education. “Just trust us” is no longer sufficient: higher education has flexed its independence by setting ever-increasing tuition rates in spite of the public’s belief that those increases are excessive. Trust is built on transparency of evidence, not mere declarations of quality. Practically speaking, a few indicators of quality that cut across higher education are going to be required. For example, surrogate and indirect measures of learning and development captured by student surveys, the amount of need-based financial assistance, dollars per student invested in advising services, and dollars per faculty member dedicated to instructional and curricular development are some possibilities. Public opinion is heavily on the side of legislators and members of Congress on this issue.
For public policy makers, it is imperative to accept the notion that to assess is to share the evidence and then to care. Caring requires action and support, not just criticism. Public policy makers must educate themselves about the complexity of higher education teaching, research and public engagement. This means accepting that the indicators of quality of the work of the academy are complex, as they should be. Whatever indicators are chosen, the benchmarks will vary by type of college or university. Take graduation rates as an example. Inevitably, highly selective colleges and universities are much more likely to have high graduation rates than those with access as a goal. The students admitted to highly selective colleges and universities have already demonstrated their ability to achieve and have the study skills and background to be successful in college. Open-access colleges and universities, on the other hand, have a greater percentage of students who are at risk, need to develop study skills in college, and are in general less prepared for the rigors of college study when compared with those with high achievement records out of high school. But these characteristics -- which frequently also result in lower graduation rates -- do not make these colleges and universities inadequate or unworthy of public support. Many great thinkers have said that a nation can be judged by how it treats its poor; the same argument works for education. The goal for everyone is to do better, starting where the students are -- not where we would like them to be when admitted.
With both sides changing their approaches, the public and higher education can productively focus on how, together, they can use assessment as an effective tool to determine quality and foster improvement. In that spirit, we offer eight recommendations that, if followed, can give faculty the confidence they demand that assessment is a valid tool for communicating the evidence of student learning and development; give presidents the satisfaction that, when all is said and done, it will have been worth the effort; and give the public the trust that higher education is responsive to its concerns.
1. Recognize that assessment can serve both those within the academy and those outside of it, but different approaches to assessment are required. Faculty members and students can use assessment to provide the feedback that creates patterns and provides insight for their own discussion and decision making. To them assessment is not to be some distant mechanical process far removed from teaching and learning. On the other hand, parents, prospective students, collaborators, and policy makers also can benefit from the results of assessment but the evidence is very different. Through institutional assessment, they can know that specific colleges and universities are more or less effective as places to educate students, which types of students they best serve, and the best fit for jointly tackling society’s problems.
2. Focus on creating a culture of evidence as opposed to a culture of outcomes. Language and terms are important in this endeavor. The latter implies a rigidity of ends, whereas the former reflects the dynamic nature of learning, student development and solution making. A “teaching for the test” mentality cannot be the goal for most academic programs. We know from experience that assessment strategies that have relied most heavily on external standardized measures of achievement have been inadequate to detect with any precision the complex learning and developmental goals of higher education -- e.g., critical thinking, commitment, values.
3. Accept that measurement of basic academic and vocationally oriented skills and competencies may be appropriate for segments of the student population. For example, every time we get on an airplane we count on the minimum (and, we hope, high) standards of the training of the pilots and the rigorous assessment procedures that “guarantee” quality assurance.
4. Avoid generic comparisons between colleges and universities as much as possible. A norm-referenced approach to testing guarantees that one half of the colleges and universities will be below average. The goal is not to be above average on some arbitrary criterion, but to achieve the unique mission and purpose of the specific college and university. A better strategy is to build off one’s strengths -- at both the individual and institutional level. Doing so reinforces an asset rather than a deficit view of both individual and institutional behavior leading to positive change and pride in institutional purpose. In order to benchmark progress, identify similar institutions. Such practices will encourage more differentiation in higher education and work to stem the tide of institutions clamoring to catch up with or be like what is perceived as a more prestigious college or university. "Be what you are, do it exceptionally well, and we will do what we can to fund you" would be a good state education policy.
5. Focus on tools that assess a range of student talent, not just one type or set of skills or knowledge. Multiple perspectives are critical to portraying the complexity of students’ achievements and the most effective learning and development environments for the enrolled students. All components of the learning environment, including student experiences outside the classroom and in the community, must be assessed. We must measure what is meaningful, not give meaning to what we measure or test. Sometimes simple quantitative data such as graduation rates and employment records are sufficient and essential for accountability purposes. But to give a full portrayal of student learning and development and environmental assessment, many types of evidence in addition to achievement tests are needed. Sometimes portfolio assessment will be appropriate, and at other times standardized exams will be sufficient.
6. Connect assessment with development and change. Assessment has been most useful when driven by a commitment to learn, create and develop, not when it has been mandated for purposes of administration and policy making. Assessment is the means, not the end. It is an important tool to be sure, but it always needs to point to some action by the participating stakeholders and parties.
7. Create campus conversations about establishing effective environments for the desirable ends of a college education. Assessment can contribute to this discussion. In its best form, assessment focuses discussion; it does not make decisions. People do that, and people need to be engaged in conversations and dialogue in ways that focus not on the evidence but on the solutions. As we stated earlier, to assess is to share and care. When groups of faculty get together to discuss the evaluations of their students, they initially focus, somewhat defensively, on the assessment evidence (and the biases inherent in such endeavors), but as they get to know and trust each other they focus on how to help each other improve.
8. Emphasize assessment’s role in “value added” strategies. Assessment should inform the various publics about how the educational experiences of students, or the institution’s engagement in the larger society, bring value to students and society. All parties need to get used to the idea that education can be conceptualized and interpreted in terms of a return on investment. But this can only be accomplished if we know what we are aiming for. That aim will be different for each college and university, and that is why the dialogue with policy makers is so crucial. For some, the primary goal of college will focus on guiding students in their self-discovery and contributing to society; for others it will be more on making a living; for yet others, on understanding the world in which we live.
When both the public and higher education accept and endorse the principle that assessment is less about compliance or standardization and more about sharing, caring and transparency, then confidence, trust and satisfaction will be more likely. We believe that higher education must take the lead by focusing on student learning and development and engage with the public in collaborative decision making. If not, policy makers may conclude that they have only the clubs of compliance and standardization to get higher education’s attention.
Larry Braskamp and Steven Schomberg
Larry A. Braskamp, formerly senior vice president for academic affairs at Loyola University Chicago, is professor of education at the university. Steven Schomberg retired in 2005 as vice chancellor for public engagement and institutional relations at the University of Illinois at Urbana-Champaign.
Many people think they know what we should produce with the process we call a college education. Unfortunately, they don't agree with each other, so the topic of measuring college success provides an endless opportunity for self-assured clarity about what is not at all clear. The current occasion for the revival of this topic, which has had various other high and low points on the national accountability agenda, comes from the Spellings commission's discussions and draft reports that call for colleges and universities to tell their customers what the college will produce for students.
This seemingly reasonable request is like most high level educational principles: dramatic and simple in general and remarkably complicated and difficult in specific. Let’s look at some of the complications.
The product of a college degree is, of course, the student. Many want to assure parents and other customers that their students will emerge from the process of higher education with a specific level of skills and abilities. Recognizing the difficulty and expense of enforcing exit testing on all students, some propose to test a sample of students and infer from the results an achievement score for the institution that customers can then compare with the scores from other institutions. Leaving aside for the moment the touchy question of exactly what we want the students to know, testing that produces a raw institutional score is not likely to work very well by itself.
Everyone knows that smart, well prepared freshmen usually end up as smart, well prepared graduating seniors. If students test well entering the institution, they are very likely to test well exiting the institution. Our egalitarian spirit worries that institutions whose students are less smart and less well prepared will necessarily score low on these exit tests in comparison to elite institutions with very well prepared students. Every institution that works hard to improve its students' abilities should get a good score, because the idea of improvement inspires everyone. A method to ensure that every institution, whatever the initial quality of its students' preparation, can score well on a national scale goes by the term "value added."
Value-added methods attempt to measure the ability and preparation of students when they enter the institution, measure the ability and achievement of the students as they leave the institution, and then calculate an improvement score. Value added ascribes the improvement score to the wisdom and dedication of the institution (even if the achievement is actually the students’).
A value-added score, calculated using the same methodology for all higher education institutions in America, would enable an institution with limited resources that admits students with very poor high school records and very low SAT scores but graduates students who have pretty good GRE scores (as an example of an exit exam) to get a 100% score because the improvement or value-added is large. Colleges with superb facilities and resources that admit students with very high SAT scores and very fine high school preparation and graduate students with very good GRE scores could get a 50% score because the improvement measured by the tests would be modest (from terrific coming in to terrific going out). Then, in the national rankings, the first institution could claim to be a much better institution for improvement than the second one.
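The arithmetic behind the 100 percent and 50 percent figures in the example above can be made concrete with a toy calculation. This is only an illustrative sketch: the score scale, the normalization (improvement as a share of the room left to improve), and the institutions are invented here, not taken from any actual value-added methodology.

```python
# Toy illustration of a "value-added" score: improvement from entry to
# exit, expressed as a fraction of the possible improvement at entry.
# The scale (0-100) and the two hypothetical institutions are invented.

def value_added(entry_score: float, exit_score: float, max_score: float = 100.0) -> float:
    """Improvement as a share of the room left to improve at entry."""
    room = max_score - entry_score
    if room == 0:
        return 0.0  # no room to improve, so no value added by this metric
    return (exit_score - entry_score) / room

# Institution A: poorly prepared entrants, pretty good exit scores.
print(round(value_added(entry_score=20, exit_score=100), 2))  # 1.0 (the "100%" case)

# Institution B: terrific coming in, terrific going out.
print(round(value_added(entry_score=80, exit_score=90), 2))   # 0.5 (the "50%" case)
```

The sketch makes the authors' point visible: the institution with the weaker graduates posts the higher value-added number, because the metric rewards movement rather than the exit level itself.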
This discourse fools no one and would actually tell consumers that the institution they want their students to enroll in is the one that has high scores going in and high scores going out rather than the one that has low scores going in and medium scores going out. What matters, as everyone knows, is the score leaving the institution.
This approach also has the perverse effect of devaluing actual accomplishment and ability in favor of improvement. It implies that a student is doing just as well at an institution that graduates at the middle level of accomplishment (but with lots of improvement) as the student would do at an institution that graduates at the top level of accomplishment (but with less improvement).
It does the employer and the student no good to know that the student attended an institution that produces middle level performance from very poor preparation. The employer wants a graduate who has high performance, high skills, high levels of knowledge and ability. The employer is likely less interested in knowing that the student had to work hard to be a middle level performer and more interested in hiring someone with a high level of performance.
If we measure value added (by whatever means), we have to create a test for the end point: what the graduating student knows about the specific subjects studied, about the specific major completed. When we test for what the student knows about the substance of the various fields of study, on some national scale, then we will have a marker for achievement. Once we have this marker for achievement, no one will care much about the marker at the entry level. Everyone will want their student to be in an institution whose scores demonstrate high levels of graduating achievement. It may give struggling institutions a sense of accomplishment to move students from awful preparation to modest achievement, but it will not change the competitive nature of the marketplace nor will it reduce the incentive to get the very best students who will, even if they don’t improve at all, score high on exit exams.
In this discussion, as is true in all efforts to measure institutional quality and performance, nothing is simple and no single number or measure will achieve that national reference point for total college achievement. College, as so many of us repeat over and over, is a complicated experience. There is no standardized college experience.
What we have is a relatively standardized curriculum and time frame. We have a four to five year actual or virtual educational process for students pursuing a traditional four-year baccalaureate degree, we have a general education requirement and a major requirement, and we have a host of extra or enhanced optional or required experiences for students. Within these large categories, the experience of students, the learning of students, and the engagement of students varies dramatically from discipline to discipline within institutions as well as between institutions.
Much of the emphasis on accountability measurement has as its premise the highly destructive goal of homogenizing the content and process of American higher education so that all students have the same experience and the same process. This centralizing drive comforts regulators, but it does not reflect the reality of the marketplace. As we have emphasized before, the American commitment to universal access to higher education requires a high level of variability in institutions, in the educational process, and in the outcomes. We do need good data from our institutions about what they do and what success their graduates have, but we do not need elaborate, centralized, homogeneity enforced by an ever more intrusive regulatory apparatus.
Look around you. Virtually everyone in the room is engaged in a job different from the one they prepared for in college.
This tells a story of a process that transcends content and curriculum, a process that goes beyond training, to the point where education actually took place. You and your colleagues underwent a transformation in the 1,800 or so hours you spent in the classroom interacting with your peers and with 40 or so faculty members at one level or another. You emerged from college having developed the ability to listen, to assimilate, to learn on your own, to project your own insights, opinions and views.
Some faculty members taught you how to think, how to challenge, to have confidence and to be independent. Most of you acquired the ability to analyze and to synthesize. Many acquired a love of learning for its own sake. You found faculty members with a wide variety of skills and goals; some tried to teach you content, as well as discernment. Others projected a point of view and welcomed a contrary view, if well supported.
In all this time, you also acquired knowledge, most of which is long gone. But you are still a different person from the high school graduate who entered college as a freshman. You learned how to read analytically and critically, you began to appreciate the role of originality and creativity. You know how to formulate and defend a hypothesis. And you learned how to assimilate the ideas of others and to interact, whether to support or to disagree.
There is so much else that you acquired, and when you graduated it was not just because you passed a number of courses. The structure, the faculty, the ever more demanding senior courses, the coherence of your major, and the qualities of mind, marked you as a successful outcome.
You are the reason the colleges are proud of what they do and your accomplishments represent the performance that colleges and universities point to in developing and justifying their reputation. Reputations are not developed in a vacuum. You, your parents, your children, your colleagues and your peers are the living remnants of the college experience. Your success justifies the massive resources poured by private Americans into supporting colleges and universities. And your success validates the vocation that characterizes the role of so many faculty members.
There is something special about American higher education, which continues to produce some of the world's greatest scientists and engineers, thinkers and scholars. There is something unique in the education we offer, which provides a breadth, an intellectual depth to accompany the skills and aptitudes of the specialist. And there are the human successes in sectors whose mission is to produce an involved, thinking citizenry.
Not everyone agrees that American higher education is characterized by success. Numbers are quoted indicating that the quality of graduates is not what it used to be. But the critics forget that sometimes the numbers go down as the numbers go up. As American higher education welcomes people less prepared, less gifted and often less motivated, and as the atmosphere at some colleges becomes less rarefied through the proliferation of remedial education, the average accomplishment will go down.
Nonetheless they insist it is time to measure learning outcomes. We are to select slices of the educational experience -- those slices that can be measured -- and somehow draw conclusions about all learning. Unfortunately, that which can be measured usually excludes the most important characteristics of a person's education. Depending on the consequences of these measurements, colleges will teach to the test and so, too, will faculty. Everyone wants to succeed, and if success is going to be defined by those outside academe, it is learning and teaching that will feel the pain first. In the end all of society will suffer.
Tragically, the intellectual immersion, which you yourselves recognized as characteristic of the totality of your undergraduate experience, will be compromised. That will happen precisely at the time when young people from emerging communities arrive at the gates of our colleges and universities, desperately needing this kind of intellectual immersion.
In the end, higher education has responded to the call for broad measures of learning outcomes. Several national organizations have committed to encouraging member institutions to experiment in this direction. But we must remember we are talking about experiments. These efforts must remain pilot projects subject to validation carried out within academe. We must further insist that the use of such measures be based on inherent value, rather than governmental mandate.
Government has heard from all the others; it is time to hear from us. From you.
Bernard Fryshman is executive vice president of the Association of Advanced Rabbinical and Talmudic Schools’ Accreditation Commission.
The secretary of education's Commission on the Future of Higher Education unequivocally advances the notion that the "business" of colleges and universities -- defined primarily in the final report as "preparation for the work force" -- is best advanced by the disclosure of data allowing institutions to be compared to one another, particularly in measurements of student learning. Standardized testing of all college students would be required to produce those comparative quantitative data. Such universal application of testing is put forward as the guarantee of accountability for what this American democracy requires most essentially from its higher-education institutions. In other words, what has already been applied with mixed success to pre-collegiate education is now to be applied to higher education. In addition to the No Child Left Behind Act, we are to have what might be called No College Left Behind.
In the nation’s current zeal to account for all transfer of teaching and insight through quantitative, standardized testing, perhaps we should advance quantitative measurement into other areas of human meaning and definition. Why leave work undone?
I suggest, for example, that a federal commission propose an accountability initiative for those of faith (not such a wild notion as an increasing number of politicians are calling the traditional separation of church and state unhealthy for the nation). This effort should be titled No God Left Behind. The federal government would demand that places of worship, in order to be deemed successful, efficient and worthy of federal, state and local tax-support exemption, provide quantitative evidence of the effectiveness of their “teaching.” (Places of worship are not unlike colleges and universities in that they are increasing their fund-raising expectations -- their form of “price” -- because of increasing costs.) The faithful, in turn, would be required to provide quantitative evidence of the concrete influence of their respective God upon behaviors within a few years of exposure -- say four years.
And in keeping with the Commission on the Future of Higher Education's suggestion that one test would be appropriate for all types of higher-education institutions regardless of mission -- liberal-arts colleges, private research universities, public research universities, community colleges, for-profit online universities, vocational schools -- a standardized test would be applied to every person of faith, whether Christian, Jewish, Muslim, Hindu or a member of another "approved" religion. Additionally, a pre-test would be given to the faithful upon initial engagement with their respective God and place of worship, and would be followed by a post-test after four years to assess "value added."
Of course, I really don't think No God Left Behind is a good idea. The reasons why also are applicable to No College Left Behind and No Child Left Behind. Most people of faith, I believe, would argue that faith lies beyond mere human quantitative measurement to validate its worth, that it exists in a variety of forms (only the most radical would argue for the exclusion of faiths that fail a test), and that its effects on human beings may not be immediately evident. None of these assertions, of course, makes faith for believers any less real as a source of improving the quality of human life.
My case for faith continuing to flourish for those who wish it, without proof through standardized testing, shares critical affinities with my argument for higher education not being universally subject to quantitative assessment. There are at least four interrelated issues that confound the Commission's absolutism toward quantitative measurement as the solution to the imagined knowledge deficit and lack of contribution to the nation by American higher education.
First, quantitative testing, to be applicable, must have as its subject that which can be empirically assessed. Such a limitation leaves out critical areas of human knowledge, meaning and definition that are not readily subject to immediate empirical assessment during the course of instruction but are, nevertheless, very real: the development of character through trial and error in a residential setting; an appreciation of the arts and aesthetics; a literary and poetic sensibility; a recognition of the responsibilities of citizenship; an appreciation of liberty and freedom; a spirit of business entrepreneurialism; and creativity and inventiveness in the sciences (and I am not talking solely about the short-term acquisition of cultural, historical and political "fact" in these areas).
The commission's recommendations -- with their focus on workforce preparation -- might well reduce the scope of what is taught and discussed in those institutions to only those areas that can be indisputably measured by a test. An abiding respect for learning that is not so obviously technical, and thus not measurable through standardized assessment, is rooted deeply in the intentions for a distinctively American higher education held by our country's founders. Indeed, Benjamin Rush, a patriot, signer of the Declaration of Independence and founder of several colleges, including Dickinson, proclaimed this distinctive American relationship among advanced knowledge, abstract concepts and the future well-being of the nation when he said, "Freedom can exist only in a society of knowledge. Without learning, men are incapable of knowing their rights." The intent of a liberal education is thus defined.
Both propositions are based not on the quantitative assessment of the merely technical, but rather the confidently ambiguous power of existing in a “society of knowledge,” one that would influence learners to a much desired and critically important ideal -- democracy and the diversity of perspective that it secures. There exists in Rush and his co-conspirators, in founding a distinctively American higher education after the end of the revolution, a mature appreciation of the complexity and variety of the instruction necessary to advance a democracy.
Second, and closely related to the perspective of Rush, is that education in America was not intended solely to prepare young people for "the work force" through the empirically demonstrated mastery of a limited set of practical skills. Fundamental literacy, numeracy and scientific knowledge were more properly the task of the grammar schools and the academies (high schools). American higher education historically builds on this "technical" accomplishment and engages students in a democratic way of life through both advanced technical and speculative (creative) learning.
Third, students in the United States at all levels of formal education already are the most “tested” by standardized measurement in the world. Yet, we still seem to be in a position of deficit in improving what students actually know and need to know to function productively in society. Do we truly believe that more testing will lead to improved teaching and learning? Are we so convinced that “to test is to learn” despite so much evidence to the contrary?
Fourth, are we oblivious to the fact that, like the flourishing of spirituality only in societies that are generously supportive, the acquisition of knowledge only advances in political entities for which this activity is esteemed and generally valued? A society and government in which only practical, technical knowledge is lauded and that which is more abstract is derided -- such as the long-term, arduous education for the appreciation of democracy, liberty and freedom -- have little chance of moving a people to take the enterprise seriously.
I have no doubt that Secretary Spellings, the Commission members and the chairman, Charles Miller, intend an American higher education that offers the nation and the world graduates who can confront, with knowledge, skill, creativity and an entrepreneurial spirit, the challenges and the opportunities that the world demands. My caution -- and it is a pointed one -- is that in our rush to secure excellence through the simplistic and misguided notion of increased quantitative assessment of workforce skills, we will destroy the historic distinctiveness of American higher education.
Derek Bok, in Our Underachieving Colleges, cites numerous commentators over the last few decades alarmed at the perversion of American higher education as it progressively leans to practical and technical knowledge at the expense of more generous, less immediately focused ambitions. For example, Diane Ravitch, an education analyst who has frequently criticized the college establishment, states, “American higher education has remade itself into a vast job-training program in which the liberal arts are no longer central.” And Eric Gould in 2003 observes negatively that, “What we now mean by knowledge is information effective in action, information focused on results. We tend to promote the need for a productive [emphasis added] citizenry rather than a critical, socially responsive, reflective individualism.”
We must never forget that a distinctively American higher education, using a wide variety of internal and external assessments already in place, aims to increase competencies and literacies established prior to college (although far greater public transparency is certainly needed). This ambition the United States shares with the rest of the world. American education, however, infuses this globally shared agenda with something extra, something that has secured its distinction for centuries -- to extend beyond factual and technical knowledge and to introduce its students to what Derek Bok describes as, “more ethically discerning … more knowledgeable and active in civic affairs” -- and that cannot be captured through standardized testing at the moment of introduction, for it unfolds over time and with experience.
Lose this ambition and American higher education has lost permanently its distinction as a democratic society of knowledge.
William G. Durden
William G. Durden is president of Dickinson College.
Of all the ideas to come out of Margaret Spellings's Commission on the Future of Higher Education, the final report proposal that has been the most contentious inside the DC Beltway is the proposal for a unit-records database. There are plenty of other controversial ideas floated in the commission's hearings, briefing papers, and report drafts, but the one bureaucratic detail that most vexed private colleges and student associations over the past year is the idea that the federal government would keep track of every student enrolled in every college and university in the country. Given reports this year about the Pentagon hiring a marketing firm to collect data on teens and college students, the possibility that Big Brother would know every student's grades and financial aid package has worried privacy advocates.
Fortunately, privacy and accountability do not need to be at odds.
The proposal for a unit-records database was floated in a 2005 report that the U.S. Department of Education commissioned. Advocates have argued that the current system of reporting graduation data through the Integrated Postsecondary Education Data System (IPEDS) only captures the experiences of first-time, full-time students who stay in a single college or university for their undergraduate education. How do we capture the experiences of those who transfer, or those who accumulate credits from more than one institution? Theoretically, we could trace such educational paths by tracking individuals, using a Social Security number or another identifier to link records.
Charles Miller, who led the Spellings commission, was one of the unit-records database advocates and pushed it through the commission's deliberations. Community-college organizations liked the idea, because it would allow them to gain credit for the degrees earned by their alumni. But the National Association of Independent Colleges and Universities, the U.S. Student Association, and other organizations opposed the unit-records database, and in its current form the proposal is certainly dead on arrival as far as Congress is concerned.
There are three problems with a unit-records database. The first problem is privacy. I just don't believe that the federal government would keep my children's college student records secure. An October report by the House Committee on Government Reform documents data losses by 19 agencies, including financial aid records for which the U.S. Department of Education is responsible. Who trusts that the federal Department of Education can keep records safe?
The second problem is accuracy. I have worked with the individual-level records of Florida, which has had a student-level database in elementary and secondary education since the early 1990s. If any state could have worked the kinks out, Florida should have. But the database is not perfectly accurate. I have seen records of first graders who are in their 30s (or 40s) and records of other students whose birthdays (as recorded in the database) are in 2008 and 2010. The problem is not that the shepherds of the database system are incompetent but that the management task is overwhelming, and there are insufficient resources to maintain the database. Poorly paid data entry clerks spend their time entering students into the rolls, entering grades, withdrawals, and dozens of other small bits of information. We probably could have a nearly perfect unit-records database system, if we were willing to spend billions of dollars on maintenance, editing, and auditing. In all likelihood, a unit-records database system for all higher education in the U.S. would push most of the costs onto colleges and universities, with insufficient resources to ensure complete accuracy.
The third problem with such a database is that the structure and size would be unwieldy. Florida and some other states have extensive experience with unit records, and very few researchers use the data that exist in such states. The structures of the data sets are complicated, and beyond the fact that using the data taxes the resources of even the fastest computers, the expertise needed to understand and work with the structures is specialized. Such experts live in Florida's universities and produce reports because they are the experts. But few others are. There would be no huge bonanza of research that would come from a national unit-records database.
A Solution: Anonymous Diploma Registration
Most of the problems with the unit-records database proposal can be solved if we follow the advice of statistician Steven Banks (from The Bristol Observatory) and change the fundamental orientation away from the question, Who graduated? and toward the question, How many graduated? The first question requires an invasion of privacy, expensive efforts to build and maintain a database, and a complex structure for data that few will use. But the second question -- how many graduated? -- is the one to answer for accountability purposes. It's the question that community colleges want answered for their alumni. And it does not require keeping track of enrollment, course-taking, or financial aid every semester for every student in the country.
All that we need is the post-graduation reporting of diploma recipients by institutions, with birthdates, sex, and some other information but without personal identifiers that would allow easy record linkage. Such a diploma registration system would fit with the process colleges and universities already go through in processing graduations. An anonymous diploma registration system could also identify prior institutions -- the high schools from which students graduated and the other colleges where they earned credits that transferred and were used for graduation. Such an additional part of the system could be phased in, so that colleges and universities record the information when they evaluate transcripts of transfer students and other admissions. The recording of prior institutions would address the need of community colleges to find out where their alumni went and how many graduated with baccalaureate degrees.
Under such a system, any college or university could calculate how many students graduated and the average time to degree (as my institution in Florida already can). Any college or university could also count how many students who transferred to other institutions eventually graduated. High schools would be able to identify how many of their own graduates finished college, from either in-state or out-of-state institutions. Institutions could figure out what types of programs helped students graduate, and the public would have information that is more accurate and fairer than the current IPEDS graduation statistics. All of these benefits would happen without having to identify a single student in a new database.
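The aggregate questions such a registry answers can be sketched in a few lines. This is purely illustrative: the record fields, the institution names, and the data below are hypothetical, not part of any actual registry design. The point is that every statistic is computed without a personal identifier in sight.

```python
# Sketch of aggregate statistics from a hypothetical anonymous diploma
# registry. Each record carries only non-identifying fields; names,
# Social Security numbers, and other identifiers are deliberately absent.

from statistics import mean

diplomas = [
    {"institution": "State U", "years_to_degree": 4.0, "prior_institution": "Valley CC"},
    {"institution": "State U", "years_to_degree": 5.5, "prior_institution": None},
    {"institution": "State U", "years_to_degree": 4.5, "prior_institution": "Valley CC"},
]

# How many graduated from State U, and how long did it take on average?
grads = [d for d in diplomas if d["institution"] == "State U"]
print(len(grads))                                             # 3
print(round(mean(d["years_to_degree"] for d in grads), 2))    # 4.67

# How many of Valley CC's former students earned baccalaureate degrees?
print(sum(1 for d in diplomas if d["prior_institution"] == "Valley CC"))  # 2
```

Counting "how many graduated" rather than "who graduated" is the whole design choice: the registry stores only what the aggregate queries need.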
A short column is not the place to describe the complete structure for such a system or to address the inevitable questions. I am presenting the idea in more depth this afternoon at the Minnesota Population Center, and I have established an online tutorial describing the idea of anonymous diploma registration in more detail. But I am convinced that the unit-records database idea is wasteful, dangerous, and unnecessary. Anonymous diploma registration is sufficient to address the most critical questions of how many graduate from institutions, and it does not threaten privacy.
I often mention in my community college classroom that 150 years ago probably none of us would have been in college. "After all," I say, "there's no point in educating women. It's a waste. They're only going to get married and have babies. Besides, their brains can't take all that academic work." I, especially, wouldn't be in college, I add, being Jewish: "We don't want those people in colleges!" I go on to comment that the Irish are, of course, good only for domestic work and hard labor, all being drunkards and sleeping with the pigs as they do. Asians aren't even human, if you educate blacks they get uppity, and so on. And yet, here we are, all of us capable of receiving and appreciating a college education. Times have changed.
In some ways it was easier when only educated white men went to college. The instructors could assume a certain level of competence in Latin and the classics, a common creed, and agreement on appropriate dress and manners. I long ago gave up expecting students to know anything about Noah, Moses, and Jesus, let alone Socrates, Homer, and Galileo, but I was stunned when no one in a class this semester had any idea who "leaps tall buildings with a single bound," or that the author's allusion to the Man of Steel was sarcastic.
In exchange, however, we have people who contribute experiences that may never have been discussed in those all-white, all-male, four-year college classrooms: veterans of Viet Nam, the Gulf War, and Iraq bring vivid perspectives to texts that treat of war; speakers of languages like Tamil add to my growing certainty that English is the only language in the world that uses an apostrophe to designate the genitive case. The experiences of Tillie Olsen's narrator in "I Stand Here Ironing" come alive when women talk about their own pregnancies, childbirth, and raising children alone.
We still, however, look at an undergraduate college education as something that takes place mainly in a residential school and must be completed in four years. The definition of "success" in any college reflects this no-longer-accurate idea that for everyone ages 18 to 22 (23 at the most), college is the dominant life experience, and the outcome must be a diploma.
At both community colleges where I have taught, a "successful" student completes an associate degree, and does so within three years of entering a program designed to take two years. By this definition, then, the young woman who just transferred to the four-year school with a 3.97 average and a substantial scholarship, 12 credits shy of a community college degree, is officially a failure. She didn't complete a degree program. This example is extreme; however, legislatures and public policy makers have long cited low graduation rates and students who take too long to complete their work as evidence of failure in community colleges. Inside Higher Ed recently reported that the Public Policy Institute of California has issued a report sharply criticizing the retention and transfer rates of California community colleges, concluding that "if community college continues to be the dominant form of higher education for these students, achievement rates for these students must improve."
Why? Or, rather, why is "achievement" built on the old model of the all-white, all-male, four-year residential college? While the report acknowledges the multiple constituencies of community colleges, it still sees the completion of a degree within a certain time limit as the primary goal.
As a non-traditional student, I began my college education at 23 when I took one course in Irish history with John Kelleher (may he rest in peace) at Harvard University Extension. I took it because I had a passion for Irish history, and I took it for credit because I figured, "why not?" Someday in a million years someone might give me a college degree for this. Twelve years later, 8 months pregnant, and having taken a quarter of my college credits in Harvard College (daytime) classes, I received my bachelor's degree with honors. In the meantime, my leisurely pace had enabled me to explore classes thoroughly, two at a time, and evolve into a Jewish studies major who eventually published in national journals. That's something I may not have achieved in a conventional four-year residential program when I was 18.
Comments on the Inside Higher Ed article on the California report point out that community college students have often been out of school for several years. Many, even the 18-year-olds, come with multiple responsibilities; they may be socially, educationally, or economically disadvantaged. They may arrive with physical, emotional, or learning disabilities. I have taught many students who don't succeed at community college because they are taking four or five courses, working 40 hours a week, and raising children or otherwise contributing to their households. Some have little idea why they are in school or what they want from a college education. Attending college part-time would make sense for these people, too many of whom fail and fail and fail class after class.
Why don't they go part-time? For the growing number of students ages 18 to 24, the top reason is health insurance: If they don't take four courses, they are not covered under their parents' health insurance. If the health insurance companies would take a long hard look at this destructive policy, we might not have so many students failing courses in which they are enrolled solely to maintain that full-time status. But the health insurance model is based on the traditional college model: four years, your parents support you, and you're out.
Another reason students don't go part-time is the structure of financial aid. Again, if the federal and state governments took a long hard look at what they are funding, they might decide that funding a part-time A-average student who makes steady progress is as useful as funding the frantic full-time C-average student who regularly fails one course a semester.
That student may be frantic because she is under pressure to finish the degree quickly, and, while the pressure comes from several directions, much of it comes from the academic expectations based on the traditional model and from organizations like the one that issued the report. Admissions departments, enrollment divisions, counseling centers, instructors, and ultimately students are under subtle but constant pressure to "succeed" by having students complete that degree and complete it "on time" -- in the time determined by the model of the all-white, all-male, traditional four-year residential college. Why?
Rodney was a 35-year-old Gulf War veteran, father of three, who passed my intensive remedial reading and writing course having read not one but six books entirely through for the first time in his life. At the end of the year, he decided not to pursue his associate degree, but to transfer to a commercial computer training program. The last time I heard from him, he had graduated from the computer course and been accepted to a much higher paying job, pending a security clearance. Yet he didn't succeed in finishing the college program or transferring to another recognized degree program. He didn't even succeed in achieving his initial goal because, based on his experience, he changed that goal. Is this what failure looks like?
Measuring success in community colleges is not as easy or fast as tallying graduation rates. Colleges may need to make an effort to find out why students like Rodney do not finish or transfer. Because I persisted in investigating, I know that Marc disappeared mid-semester because his mother died and he had to return to his home in the next state to care for his son. Ryan withdrew from all his classes a week before the semester began because his National Guard unit was mobilized, and Karen called me from her husband's new posting to ask for help in transferring credits. Some of these students may return to our college, but the college can hardly be held accountable for the fact that they left. The first step in accountability is to find out why students have not returned.
As for when colleges should be held accountable, perhaps we need to look not at graduation and transfer rates but at what students themselves have gained from their experiences. When Fred, who insisted that he hates poetry, analyzed Robert Bly’s “Gratitude to Old Teachers” with such grace and insight that I carried his final exam around with me for three days because reading it made me happy, the college succeeded. Fred succeeded.
When "the boys came home" in the 1940s, the GI Bill helped to change the college model to include men of modest means, many of whom had served in the military after high school and who helped establish the model of returning adult students who juggle family and school obligations. In the 1960s and 1970s, we changed the college model to include women, blacks, Asians, and others traditionally not entitled to an education. In the 1980s and 1990s we changed the college model again to include people who need ramps and elevators and special testing accommodations to gain access to and complete their college educations. Now it's time to change the model again. A community college is open to all people who want to learn. Success means students achieve what they came for: one class, one semester, or a degree that takes ten years. Perhaps after we've redefined success for community colleges, we can share our new model with students now "failing" in four-year colleges.
Jane Arnold is the reading specialist and an assistant professor of English at Adirondack Community College. She has taught at the community college level for 15 years and has also taught in private four-year colleges.