For more than a decade, the National Survey of Student Engagement (NSSE) and the Community College Survey of Student Engagement (CCSSE) have provided working faculty members and administrators at over 2,000 colleges and universities with actionable information about the extent to which they and their students are doing things that decades of empirical study have shown to be effective. Recently, a few articles by higher education researchers have expressed reservations about these surveys. Some of these criticisms are well-taken and, as leaders of the two surveys, we take them seriously. But the nature and source of these critiques also compel us to remind our colleagues in higher education exactly what we are about in this enterprise.
Keeping purposes in mind is keenly important. For NSSE and CCSSE, the primary purpose always has been to provide data and tools useful to higher education practitioners in their work. That’s substantially different from primarily serving academic research. While we have encouraged the use of survey results by academic researchers, and have engaged in a great deal of it ourselves, this basic purpose fundamentally conditions our approach to “validity.” As cogently observed by the late Samuel Messick of the Educational Testing Service, there is no absolute standard of validity in educational measurement. The concept depends critically upon how the results of measurement are used. In applied settings, where NSSE and CCSSE began, the essential test is what Messick called “consequential validity” -- essentially the extent to which the results of measurement are useful, as part of a larger constellation of evidence, in diagnosing conditions and informing action. This is quite different from the pure research perspective, in which “validity” refers to a given measure’s value for building a scientifically rigorous and broadly generalizable body of knowledge.
The NSSE and CCSSE benchmarks provide a good illustration of this distinction. Their original intent was to provide a heuristic for campuses to initiate broadly participatory discussions of the survey data and implications by faculty and staff members. For example, if data from a given campus reveal a disappointing level of academic challenge, educators on that campus might examine students’ responses to the questions that make up that benchmark (for example, questions indicating a perception of high expectations). As such, the benchmarks’ construction was informed by the data, to be sure, but equally informed by decades of past research and experience, as well as expert judgment. They do not constitute “scales” in the scientific measurement tradition but rather groups of conceptually and empirically related survey items. No one asked for validity and reliability statistics when Art Chickering and Zelda Gamson published the well-known Seven Principles for Good Practice in Undergraduate Education some 25 years ago, but that has not prevented their productive application in hundreds of campus settings ever since.
The purported unreliability of student self-reports provides another good illustration of the notion of consequential validity. When a student is asked to tell us the frequency with which she engaged in a particular activity (say, making a class presentation), it is fair to question how well her response reflects the absolute number of times she actually did so. But that is not how NSSE and CCSSE results are typically used. The emphasis is most often placed instead on the relative differences in response patterns across groups -- men and women, chemistry and business majors, students at one institution and those elsewhere, and so on. Unless there is a systematic bias that differentially affects how the groups respond, there is little danger of reaching a faulty conclusion. That said, NSSE and CCSSE have invested considerable effort to investigate this issue through focus groups and cognitive interviews with respondents on an ongoing basis. The results leave us satisfied that students know what we are asking them and can respond appropriately.
Finally, NSSE and CCSSE results have been empirically linked to many important outcomes including retention and degree completion, grade-point average, and performance on standardized generic skills examinations by a range of third-party multi-institutional validation studies involving thousands of students. After the application of appropriate controls (including incoming ability measures) these relationships are statistically significant, but modest. But, as the work of Ernest Pascarella and Patrick Terenzini attests, such is true of virtually every empirical study of the determinants of these outcomes over the last 40 years. In contrast, the recent handful of published critiques of NSSE and CCSSE are surprisingly light on evidence. And what evidence is presented is drawn from single-institution studies based on relatively small numbers of respondents.
We do not claim that NSSE and CCSSE are perfect. No survey is. As such, we welcome reasoned criticism and routinely do quite a bit of it on our own. The bigger issue is that work on student engagement is part of a much larger academic reform agenda, whose research arm extends beyond student surveys to interview studies and on-campus fieldwork. A prime example is the widely acclaimed volume Student Success in College by George Kuh and associates, published in 2005. To reiterate, we have always enjoined survey users to employ survey results with caution, to triangulate them with other available evidence, and to use them as the beginning point for campus discussion. We wish we had an electron microscope. Maybe our critics can build one. Until then, we will continue to move forward on a solid record of adoption and achievement.
Peter Ewell is senior vice president of the National Center for Higher Education Management Systems and chairs the National Advisory Boards for both NSSE and CCSSE. Kay McClenney is a faculty member at the University of Texas at Austin, where she directs the Center for Community College Student Engagement. Alexander C. McCormick is a faculty member at Indiana University at Bloomington, where he directs the National Survey of Student Engagement.
Bachelor's degree recipients in 2007-8 who began their postsecondary education at a community college took almost 20 percent longer to complete their degrees than did those who started out at a four-year institution; those who began at four-year private colleges finished faster than did those at four-year public and for-profit institutions; and those who delayed entry into college by more than a year after high school took almost 60 percent longer to complete their degrees than did those who went directly to college.
That a large number of academically gifted and economically affluent students (or their parents) have become savvy consumers, getting their first two years of general education courses out of the way at low-cost community colleges rather than pricier state schools and liberal arts colleges?
That by doing so, these would-be competitive admissions students are taking up a large number of slots at community colleges that would otherwise be filled by less academically gifted or less economically affluent students?
That private nonprofit schools, meanwhile, are maintaining their competitive admissions edge by providing more merit-based tuition discounts rather than need-based tuition discounts? That by doing so, these schools become less and less of an option for those less fortunate?
And that, as the number of well-paying blue collar jobs shrinks in response to the changing nature of the economy, the American middle class must either contract, or the skills needed to gain and retain a well-paying job must somehow expand?
I hope we can find consensus around those points. Most people can at least agree on the connection between college education and well-paying jobs, and the need to up-skill the American workforce in order to defend a society in which the benefits of middle class living are widely shared and enjoyed. Most can also agree that higher education access is shrinking in response to a variety of external pressures, including state budget cuts to higher education and a more consumer-savvy insistence on tuition dollar value.
Now we reach the question where many people disagree. Do academically less well-prepared, less affluent individuals deserve an opportunity to receive a higher education? And, if so, should they attend institutions best situated to respond to their particular academic, social and emotional needs, or should they be forced to accept whatever public school option may be available -- regardless of the institution’s track record in retaining and graduating students?
These are the questions at the heart of the current debate surrounding private sector colleges and universities (PSCUs). These institutions cost the student more to attend than a public school does, but only because taxpayers, through generous subsidies, pay the bulk of education costs at community colleges. Counting both student and taxpayer contributions, the absolute cost of postsecondary attendance is actually lower at the private sector alternative. The Institute for Higher Education Policy recently issued a report about low-income adults in postsecondary education, noting -- as many in higher education have long been aware -- that a significant percentage of low-income and minority students attend PSCUs and community colleges. From the perspective of our critics, PSCUs “target” these students while community colleges “serve” them.
Both types of institutions operate in what is largely an open admissions environment (although my own institution does not). Both serve the adult student, who is often financially independent. Both strive to provide students with an education that facilitates career-focused employment (although community colleges wear many other postsecondary hats as well). Both use advertising as well as word-of-mouth referrals to attract students. But many PSCU students have already attended a community college and opted out for various reasons, including the long waits to enter the most popular programs, large class sizes and inflexible schedules. These problems are all made worse by state budget cuts to higher education.
PSCU students do pay more out of their own pockets than do community college students, but PSCU students see the cost justified by what they receive in return. This value expresses itself in greater individual attention and support … in having confidence in academic skills restored where they may be flagging … in gaining new motivation to succeed and seeing that motivation reinforced through success itself ... and in making the connection between classroom learning and employable skills real and direct.
Two-year PSCU institutions graduate students at three times the rate of community colleges. Placement rates are the bottom line on career-focused education, however, and while community colleges offer lower-cost career programs without outcome metrics, PSCUs must match their career education offerings with real placement of students in relevant jobs. Again, PSCU students see this outcomes-based approach as a difference worth paying for.
In this broader context, the irony of PSCUs being accused of “targeting” students becomes clear. Apparently where some see targeting of low income and minority students unable to make informed decisions about their futures, we see tailoring of postsecondary education to suit a nontraditional student population -- and a better fit all around.
Arthur Keiser is chairman of the Association of Private Sector Colleges and Universities and chancellor of Keiser University.
Do majors matter? Since students typically spend more time in their area of concentration than anywhere else in the curriculum, majors ought to live up to their name and produce really major benefits. But do they?
Anthony P. Carnevale, director of Georgetown’s Center on Education and the Workforce, has recently provided a clear answer. Majors matter a lot -- a lot of dollars and cents. In a report entitled “What’s It Worth?,” he shows how greatly salaries vary by major, from $120,000 on average for petroleum engineers down to $29,000 for counseling psychologists.
But what if one asked whether majors make differing contributions to students’ cognitive development? The answer is once again yes, but the picture looks very different from the one in the Georgetown study.
A few years ago, Paul Sotherland, a biologist at Kalamazoo College in Michigan, asked an unnecessary question and got not an answer but a tantalizing set of new questions. It was unnecessary because most experts in higher education already knew the answer, or thought they did: as far as higher-order cognitive skills are concerned, it doesn’t matter what you teach; it’s how you teach it.
What Sotherland found challenged that conventional wisdom and raised new questions about the role of majors in liberal education. Here’s what he did. Kalamazoo had been using the Collegiate Learning Assessment (CLA) to track its students’ progress in critical thinking and analytical reasoning. After a few years it became clear that Kalamazoo students were making impressive gains from their first to their senior years. Sotherland wondered if those gains were across the board or varied from field to field.
Since gains in CLA scores tend to follow entering ACT or SAT scores, Sotherland and his colleagues “corrected” the raw data to see what gains might be attributed to instruction. They found significant differences among the divisions, with the largest gains (over 200 points) in foreign languages, about half that much in the social sciences, still less in the fine arts and in the humanities, and least of all in the natural sciences.
How was this to be explained? Could reading Proust somehow hone critical thinking more than working in the lab? (Maybe so.)
But the sample size was small and came from one exceptional institution, one where students in all divisions did better than their SAT scores would lead one to expect, and where the average corrected gain on CLA is 1.5 standard deviations, well above the national average. (Perhaps Inside Higher Ed should sponsor the “Kalamazoo Challenge,” to see if other institutions can show even better results in their CLA data.)
The obvious next step was to ask Roger Benjamin of the Collegiate Learning Assessment if his associates would crunch some numbers for me. They obliged, with figures showing changes over four years for both parts of the CLA -- the performance task and analytical writing. Once again, the figures were corrected on the basis of entering ACT or SAT scores.
The gains came in clusters. At the top was sociology, with an average gain of just over 0.6 standard deviations. Then came multi- and interdisciplinary studies, foreign languages, physical education, math, and business with gains of 0.50 SDs or more.
The large middle cluster included (in descending order) education, health-related fields, computer and information sciences, history, psychology, law enforcement, English, political science, biological sciences, and liberal and general studies.
Behind them, with gains between 0.30 and 0.49 SDs, came communications (speech, journalism, television, radio etc.), physical sciences, nursing, engineering, and economics. The smallest gain (less than 0.01 standard deviations) was in architecture.
The list seemed counterintuitive to me when I first studied it, just as the Kalamazoo data had. In each case, ostensibly rigorous disciplines, including most of the STEM fields (the exception was math), had disappointing results. Once again the foreign languages shone, while most other humanistic disciplines cohabited with unfamiliar bedfellows such as computer science and law enforcement. Social scientific fields scattered widely, from sociology at the very top to economics close to the bottom.
When one looks at these data, one thing is immediately clear. The fields that show the greatest gains in critical thinking are not the fields that produce the highest salaries for their graduates. On the contrary, engineers may show only small gains in critical thinking, but they often command salaries of over $100,000. Economists may lag as well, but not at salary time, when, according to “What’s It Worth?,” their graduates enjoy median salaries of $70,000. At the other end, majors in sociology and French, German and other commonly taught foreign languages may show impressive gains, but they have to be content with median salaries of $45,000.
But what do these data tell us about educational practice? It seems unlikely that one subject matter taken by itself has a near-magical power to result in significant cognitive gains while another does nothing of the sort. If that were the case, why do business majors show so much more progress than economics majors? Is there something in the content of a physical education major (0.50 SDs) that makes it inherently more powerful than a major in one of the physical sciences (0.34 SDs)? I doubt it.
Since part of the CLA is based on essays students write during the exam, perhaps the natural science majors simply had not written enough to do really well on the test. (That’s the usual first reaction, I find, to unexpected assessment results -- "there must be something wrong with the test.") That was, however, at best a partial explanation, since it didn’t account for the differences among the other fields. English majors, for example, probably write a lot of papers, but their gains were no greater than those of students in computer sciences or health-related fields.
Another possibility is that certain fields attract students who are ready to hone their critical thinking skills. If so, it would be important to identify what it is in each of those fields that attracts such students. Are there, for example, “signature pedagogies” that have this effect? If so, what are they and how can their effects be maximized? Or is it that certain pedagogical practices, whether or not they attract highly motivated students, increase critical thinking capacities -- and others as well? For example, the Wabash national study has identified four clusters of practices that increase student engagement and learning in many areas (good teaching and high-quality interactions with faculty, academic challenge and high expectations, diversity experiences, and higher-order, integrative, and reflective learning).
Some fields, moreover, may encourage students to “broaden out” -- potentially important for the development of critical thinking capacities as one Kalamazoo study suggests. Other disciplines may discourage such intellectual range.
One other hypothesis, I believe, also deserves closer consideration. The CLA is a test of post-formal reasoning. That is, it does not seek to find out if students know the one right answer to the problems it sets; on the contrary, it rewards the ability to consider the merits of alternative approaches. That suggests that students who develop the habit of considering alternative viewpoints, values and outcomes and regularly articulate and weigh alternative possibilities may have an advantage when taking the CLA exam, and quite possibly in real-life settings as well.
Since the study of foreign languages constantly requires the consideration of such alternatives, their study may provide particularly promising venues for the development of such capacities. If so, foreign languages have a special claim on attention and resources even in a time of deep budgetary cuts. Their "signature pedagogies," moreover, may provide useful models for other disciplines.
These varying interpretations of the CLA data open up many possibilities for improving students’ critical thinking. But will these possibilities be fully utilized without new incentives? The current salary structure sends a bad signal when it puts the money where students make very small gains in critical thinking, and gives scant reward to fields that are high performers in this respect. (For example, according to the College & University Professional Association for Human Resources, full professors in engineering average over $114,000, while those in foreign languages average just over $85,000.)
Isn’t it time to shift some resources to encourage experimentation in all fields to develop the cognitive as well as the purely financial benefits of the major?
W. Robert Connor is senior advisor to the Teagle Foundation.
Almost every college or university publishes a number called the student/faculty ratio as an indicator of undergraduate instructional quality. Among the many spurious data points exploited by commercial ranking agencies, this one holds a special place.
The mythology would have it that a low ratio, say 10 students per faculty member, indicates a university whose undergraduates take most of their instruction in small groups with a faculty instructor, and presumably learn best in those conditions. In contrast, a high number, say 25 students per faculty member, might lead us to think of large classes and less effective, impersonal instruction.
These common impressions are mostly public relations. The ratio means none of this, because the numbers used to calculate it are usually unreliable for comparing different universities or colleges and because the basic premise about small classes is flawed.
To illustrate the meaninglessness of the ratio, imagine two universities with exactly the same number of students, say 5,000, and the same number of faculty, say 500. Both institutions would report a student/faculty ratio of 10, and following common wisdom, we might imagine that both have the same teaching environment. The data do not show, however, what the faculty do with their time.
Imagine that the first university has faculty of high prestige by virtue of their research accomplishments, and that these faculty spend half of their time in the classroom and half in research activities, a pattern typical of research institutions. Imagine, too, that the second university in our example has faculty less active in research but fully committed to the teaching mission of their college. Where the research-proficient faculty at our first institution spend only half their time in class, the teaching faculty in the second institution spend all of their time in the classroom.
Correcting the numbers to reflect the real commitment of faculty to teaching would give an actual student to teaching-faculty ratio of 20 to 1 for the research institution and 10 to 1 for the teaching college. The official reported ratio is wildly misleading at best.
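The arithmetic above is easy to make concrete. A small sketch (a hypothetical helper, not any official formula) that recomputes the ratio using only the fraction of faculty time actually spent on instruction:

```python
def effective_ratio(students, faculty, teaching_fraction):
    """Student-to-teaching-faculty ratio, counting only the share of
    faculty time actually devoted to instruction."""
    teaching_fte = faculty * teaching_fraction  # full-time-equivalent teachers
    return students / teaching_fte

# Both universities report the same official ratio: 5000 / 500 = 10.
research_u = effective_ratio(5000, 500, 0.5)   # faculty teach half time
teaching_c = effective_ratio(5000, 500, 1.0)   # faculty teach full time

print(research_u)  # 20.0
print(teaching_c)  # 10.0
```

With identical headcounts, halving the teaching fraction doubles the effective ratio -- precisely the distortion the official number hides.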
The official student/faculty ratio is suspect for yet another reason. It appears as an indicator of something valuable in an institution's teaching and learning process. The reported ratio implies that having a small number of students in a class indicates high instructional quality and effective learning. It may be that in K-12 settings, small class sizes help struggling students learn. In reasonably high-quality colleges and universities, however, the evidence is different.
In some classes, for example those that teach beginning languages or performance studios in music, students do learn better when taught in small to very small groups. In the core business curriculum, in basic economics, in art and music appreciation, in history and psychology introductory courses, and many other subjects, students learn as much in large classes of more than 100 as they do in small classes of fewer than 25. In real life, smart universities mix large and small classes so that students can get small classes when small size makes a difference and find a place in large classes when that format works just as well.
A Different Way?
If universities really cared to give students, prospective students and parents a picture of the instructional pattern at their institutions, they would erase the unhelpful student/faculty ratio and instead, provide a more useful measure.
They could analyze the transcripts of their most recent graduating class and report the pattern of large and small classes actually experienced by graduating seniors.
How many courses did the graduates take in their major that had fewer than 20 students? How many general education courses did they take with over 50 or over 100? How many of their courses during their undergraduate years had a tenure-track faculty instructor, and how many had a visitor, a part-time faculty member, or a teaching assistant as an instructor?
This kind of report would encourage institutions to explain why the nontenure track instructors teach as well as the tenure-track faculty, and it would give parents and prospective students an accurate understanding of the actual teaching mix they should expect during their undergraduate years.
Such accuracy might not be as good advertising as the misleading student/faculty ratio, but it would have the virtue of reflecting reality, and it would encourage us to talk clearly about the design and the delivery of the undergraduate education we provide.
Unfortunately, some of us are old enough to have passed through various incarnations of the accountability movement in higher education. Periodically university people or their critics rediscover the notion of accountability, as if the notion of being accountable to students, parents, legislators, donors, federal agencies, and other institutional constituencies were something new and unrecognized by our colleagues. We appear to have entered another cycle, signaled by the publication last month of a call to action by the State Higher Education Executive Officers (SHEEO) association, with support from the Ford Foundation, called "Accountability for Better Results."
The SHEEO report has the virtue of recognizing many of the reasons why state-level accountability systems fail, and focuses its attention primarily on the issue of access and graduation rates. While this is a currently popular and important topic, the SHEEO report illustrates why the notion of "accountability" by itself has little meaning. Universities and colleges have many constituencies, consumers, funding groups, interested parties, and friends. Every group expects the university to do things in ways that satisfy its goals and objectives, and seeks "accountability" from the institution to ensure that its priorities drive the university’s performance. While each of these widely differentiated accountability goals may be appropriate for each group, the sum of these goals does not approach anything like "institutional accountability."
Accountability has special meaning in public universities where it usually signifies a response to the concerns of state legislators and other public constituencies that a campus is actually producing what the state wants with the money the state provides. This is the most common form of accountability, and often leads to accountability systems or projects that attempt to put all institutions of higher education into a common framework to ensure the wise expenditure of state money on the delivery of higher education products to the people.
In this form, accountability is usually a great time sink with no particular value, although it keeps everyone occupied generating volumes of dubious data in complex ways that exhaust the participants before having any useful impact. The SHEEO report is particularly clear on this point.
This form of accountability has almost no practical utility because state agencies cannot accurately distinguish one institution of higher education from another for the purposes of providing differential funding. If the state accountability system does not provide differential funding for differential performance, then the exercise is more in the nature of an intense conversation about what good things the higher education system should be doing rather than a process for creating a system that could actually hold institutions accountable for their performance.
Public agencies rarely hold institutions accountable because to do so requires that they punish the poor performers or at least reward the good performers. No institution wants a designation as a poor performer. An institution with problematic performance characteristics as measured by some system will mobilize every political agent at its disposal (local legislators, powerful alumni and friends, student advocates, parents) to modify the accountability criteria to include sufficient indicators on which they can perform well.
In response to this political pressure, and to accommodate the many different kinds, types and characteristics of institutions, the accountability system usually ends up with 20, 30 or more accountability measures. No institution will do well on all of them, and every institution will do well on many of them, so in the end, all institutions will qualify as reasonably effective to very effective, and all will remain funded more or less as before.
The lifecycle of this process is quite long and provides considerable opportunity for impassioned rhetoric about how well individual institutions serve their students and communities, how effective the research programs are in enhancing economic development, how much the public service activities enhance the state, and so on. At the end, when most participants have exhausted their energy and rhetoric, and when the accountability system has achieved stasis, everyone will declare a victory and the accountability impulse will go dormant for several years until rediscovered again.
Often, state accountability systems offer systematic data reporting schemes with goals and targets defined in terms of improvement, but without incentives or sanctions. These systems assume that the value of measuring alone will motivate institutions to improve to avoid being marked as ineffective. This kind of system has value in identifying the goals and objectives of the state for its institutions, but often relegates the notion of accountability to the reporting of data rather than the allocation of money, where it could make a significant difference.
If an institution, state, or other entity wants to insist on improved performance from universities, they must specify the performance they seek and then adjust state appropriations to reward those who meet or exceed the established standard. Reductions in state budgets for institutions that fail to perform are rare for obvious political reasons, but the least effective system is one that allocates funds to poorly performing institutions with the expectation that the reward for poor performance will motivate improvement. One key to effective performance improvement, reinforced in the SHEEO report, is strictly limiting the number of key indicators for measuring improvement. If the number of indicators exceeds 10, the exercise is likely to find all institutions performing well on some indicator and therefore all deserving of continued support.
Often the skepticism that surrounds state accountability systems stems from a mismatch between the goals of the state (with an investment of perhaps 30 percent or less of the institutional budget) and those of the institutions. Campuses may seek nationally competitive performance in research, teaching, outreach, and other activities. States may seek improvement in access and student graduation rates as the primary determinants of accountability. Institutions may see the state’s efforts as detracting from the institution’s drive toward national reputation and success. Such mismatches in goals and objectives often weaken the effectiveness of state accountability programs.
Universities are very complex and serve many constituencies with many different expectations about the institutions’ activities. Improvement comes from focusing carefully on particular aspects of an institution’s performance, identifying reliable and preferably nationally referenced indicators, and then investing in success. While the selection of improvement goals and the development of good measures are essential, the most important element in all improvement programs is the ability to move money to reward success.
If an accountability system only measures improvement and celebrates success, it will produce a warm glow of short duration. Performance improvement is hard work and takes time, while campus budgets change every year. Effective measurement is often time consuming and sometimes difficult, and campus units will not participate effectively unless there is a reward. The reward that all higher education institutions and their constituent units understand is money. This is not necessarily money reflected in salary increases, although that is surely effective in some contexts.
Primarily what motivates university improvement, however, is the opportunity to enhance the capacity of a campus. If a campus teaches more students, and as a result earns the opportunity to recruit additional faculty members, this financial reward is of major significance and will motivate continued improvement. At the same time, the campus that seeks improvement cannot reward failure. If enrollment declines, the campus should not receive compensatory funding in hopes of future improvement. Instead, a poorly performing campus should work harder to get better so it too can earn additional support.
In public institutions, the small proportion of state funding within the total budget limits the ability of state systems to influence campus behavior by reallocating funding. In particular, in many states, most of the public money pays for salaries, and reallocating funds proves difficult. Nonetheless, most public systems and legislatures can identify some funds to allocate as a reward for improved performance. Even relatively small budget increases represent a significant reward for campus achievements.
Accountability, as the SHEEO report highlights, is a word with no meaning until we define the measures and the purpose. If we mean accountability to satisfy public expectations for multiple institutions on many variables, we can expect that the exercise will be time consuming and of little practical impact. If we mean accountability to improve the institution’s performance in specific ways, then we know we need to develop a few key measures and move at least some money to reward improvement.
John V. Lombardi
John V. Lombardi, chancellor and professor of history at the University of Massachusetts Amherst, writes Reality Check every two weeks.
At the annual meeting of one of the regional accrediting agencies a few years ago, I wandered into the strangest session I’ve witnessed in any academic gathering. The first presenter, a young woman, reported on a meeting she had attended that fall in an idyllic setting. She had, she said, been privileged to spend three days “doing nothing but talking assessment” with three of the leading people in the field, all of whom she named and one of whom was on this panel with her. “It just doesn’t get any better than that!” she proclaimed. I kept waiting for her to pass on some of the wisdom and practical advice she had garnered at this meeting, but it didn’t seem to be that kind of presentation.
The title of the next panel I chose suggested that I would finally learn what accrediting agencies meant by “creating a culture of assessment.” This group of presenters, four in all, reenacted the puppet show they claimed to have used to get professors on their campus interested in assessment. The late Jim Henson, I suspect, would have advised against giving up their day jobs.
And thus it was with all the panels I tried to attend. I learned nothing about what to assess or how to assess it. Instead, I seemed to have wandered into a kind of New Age revival at which the already converted, the true believers, were testifying about how great it was to have been washed in the data and how to spread the good news among non-believers on their campus.
Since that time, I’ve examined several successful accreditation self-studies, and I’ve talked to vice presidents, deans, and faculty members, but I’m still not sure about what a “culture of assessment” is. As nearly as I can determine, once a given institution has arrived at a state of profound insecurity and perpetual self-scrutiny, it has created a “culture of assessment.” The self-criticism and mutual accusation sessions favored by Communist hardliners come to mind, as does a passage from a Creedence Clearwater Revival song: “Whenever I ask, how much should I give? The only answer is more, more!”
Most of the faculty resistance we face in trying to meet the mandates of the assessment movement, it seems to me, stems from a single issue: professors feel professionally distrusted and demeaned. The much-touted shift in focus from teaching to student learning at the heart of the assessment movement is grounded in the presupposition that professors have been serving their own ends and not meeting the needs of students. Some fall into that category, but whatever damage they do is greatly overstated, and there is indeed a legitimate place in academe for those professors who are not for the masses. A certain degree of quirkiness and glorious irrelevance was once considered par for the course, and students used to be expected to take some responsibility for their own educations.
Clearly, from what we are hearing about the new federal panel studying colleges, the U.S. Department of Education believes that higher education is too important to be left to academics. What we are really seeing is the re-emergence of the anti-intellectualism endemic to American culture and a corresponding redefinition of higher education in terms of immediately marketable preparation for specific jobs or careers. The irony is that the political party that would get big government off our backs has made an exception of academe.
This is not to suggest, of course, that everything we do in the name of assessment is bad or that we don’t have an obligation to determine that our instruction is effective and relevant. At the meeting of the National Association of Schools of Art and Design, I heard a story that illustrates how the academy got into this fix. It seems an accreditor once asked an art faculty member what his learning outcomes were for the photography course he was teaching that semester. The faculty member replied that he had no learning outcomes because he was trying to turn students into artists and not photographers. When asked then how he knew when his students had become artists, he replied, “I just know.”
Perhaps he did indeed “just know.” One of the most troubling aspects of the assessment movement, to my mind, is the tendency to dismiss the larger, slippery issues of sense and sensibility and to measure educational effectiveness only in terms of hard data, the pedestrian issues we can quantify. But, by the same token, every photographer must master the technical competencies of photography and learn certain aesthetic principles before he or she can employ the medium to create art. The photography professor in question was being disingenuous. He no doubt expected students to reach a minimal level of photographic competence and to see that competence reflected in a portfolio of photographs that rose to the level of art. His students deserved to have these expectations detailed in the form of specific learning outcomes.
Thus it is, or should be, with all our courses. Everyone who would teach has a professional obligation to step back and to ask himself or herself two questions: What, at a minimum, do I want students to learn, and how will I determine whether they have learned it? Few of us would have a problem with this level of assessment, and most of us would hardly need to be prompted or coerced to adjust our methods should we find that students aren’t learning what we expect them to learn. Where we fall out, professors and professional accreditors, is over the extent to which we should document or even formalize this process.
I personally have heard a senior official at an accrediting agency say that “if what you are doing in the name of assessment isn’t really helping you, you’re doing it wrong.” I recommend that we take her at her word. In my experience -- first as a chair and later as a dean -- it is helpful for institutions to have course outlines that list the minimum essential learning outcomes and suggest appropriate assessment methods for each course. It is helpful for faculty members and students to have syllabi that reflect the outcomes and assessment methods detailed in the corresponding course outlines. It is also helpful to have program-level objectives and to spell out where and how such objectives are met.
All these things are helpful and reasonable, and accrediting agencies should indeed be able to review them in gauging the effectiveness of a college or university. What is not helpful is the requirement to keep documenting the so-called “feedback loop” -- the curricular reforms undertaken as a result of the assessment process. The presumption, once again, would seem to be that no one’s curriculum is sound and that assessment must be a continuous process akin to painting a suspension bridge or a battleship. By the time the painters work their way from one end to the other, it is time to go back and begin again. “Out of the cradle, endlessly assessing,” Walt Whitman might sing if he were alive today.
Is it any wonder that we have difficulty inspiring more than grudging cooperation on the part of faculty? Other professionals are largely left to police themselves. Not so academics, at least not any longer. We are being pressured to remake ourselves along business lines. Students are now our customers, and the customer is always right. Colleges used to be predicated on the assumption that professors and other professionals have a larger frame of reference and are in a better position than students to design curricula and set requirements. I think it is time to reaffirm that principle; and, aside from requiring the “helpful” documents mentioned above, it is past time to allow professors to assess themselves.
Regarding the people who have thrown in their lot with the assessment movement, to each his or her own. Others, myself included, were first drawn to the academic profession because it alone seemed to offer an opportunity to spend a lifetime studying what we loved, and sharing that love with students, no matter how irrelevant that study might be to the world’s commerce. We believed that the ultimate end of what we would do is to inculcate both a sensibility and a standard of judgment that can indeed be assessed but not guaranteed or quantified, no matter how hard we try. And we believed that the greatest reward of the academic life is watching young minds open up to that world of ideas and possibilities we call liberal education. To my mind, it just doesn’t get any better than that.
Edward F. Palm
Edward F. Palm is dean of social sciences and humanities at Olympic College, in Bremerton, Wash.
College officials and members of the public are watching with intense interest -- and, in some quarters, trepidation -- the proceedings of the U.S. Secretary of Education's Commission on the Future of Higher Education. Given that interest, the following is a memorandum that the panel's chairman, Charles Miller, wrote to its members offering his thinking about one of its thorniest subjects: accountability.
To: Members, The Secretary of Education’s Commission on the Future of Higher Education
From: Charles Miller, Chairman
Dear Commission Members:
The following is a synopsis of several ongoing efforts, in support of the Commission, in one of our principal areas of focus, "Accountability." The statements and opinions presented in the memo are mine and are not intended to be final conclusions or recommendations, although there may be a developing consensus.
I would appreciate feedback, directly or through the staff, in any form that is most convenient. This memo will be made public in order to promote and continue an open dialogue on measuring institutional performance and student learning in higher education.
As a Commission, our discussions to date have shown a number of emerging demands on the higher education system, which require us to analyze, clarify and reframe the accountability discussion. Four key goals or guiding principles in this area are beginning to take shape.
First, more useful and relevant information is needed. The federal government currently collects a vast amount of information, but unfortunately policy makers, universities, students and taxpayers continue to lack key information to enable them to make informed decisions.
Second, we need to improve, and even fix, current accountability processes, such as accreditation, to ensure that our colleges and universities are providing the highest quality education to their students.
Third, we need to do a much better job of aligning our resources to our broad societal needs. In order to remain competitive, our system of higher education must provide a world-class education that prepares students to compete in a global knowledge economy.
And finally, we need to ensure that the American public understands, through access to sufficient information -- particularly in the area of student learning -- what it is getting for its investment in a college education.
Commission Meeting (12/6/05)
At our Nashville meeting, the Commission heard three presentations from a panel on “Accountability.” Panelists represented the national, state and institutional perspectives. In the subsequent discussion, an informal consensus developed that there is a critical need for improved public information systems to measure and compare institutional performance and student learning in consumer-friendly formats, defining consumers broadly as students, families, taxpayers, policy makers and the general public.
Needs for a Modern University Education
The college education needed for the competitive, global environment in the future is far more than specific, factual knowledge; it is about capability and capacity to think and develop and continue to learn. An insightful quote from an educator describes the situation well:
“We are attempting to educate and prepare students (hire people in the workforce) today so that they are ready to solve future problems, not yet identified, using technologies not yet invented, based on scientific knowledge not yet discovered.”
--Professor Joseph Lagowski, University of Texas at Austin
Trends in Measuring Student Learning
There is gathering momentum for measuring through testing what students learn or what skills they acquire in college beyond a traditional certificate or degree.
Very recently, new testing instruments have been developed which measure an important set of skills to be acquired in college: critical thinking, analytic reasoning, problem solving, and written communications.
The Commission is reviewing promising new developments in the area of student testing, which indicate a significant improvement in measuring student learning and related institutional performance. Three independent efforts have shown promise:
A multi-year trial by the RAND Corporation, which included 122 higher education institutions, led to the development of a test measuring critical thinking, analytic reasoning and other skills. As a result of these efforts, a new entity called the Collegiate Learning Assessment has been formed by the researchers involved, and the tests will now be further developed and marketed widely.
A new test measuring college level reading, mathematics, writing and critical thinking has been developed by the Educational Testing Service and will begin to be marketed in January 2006. This test is designed for colleges to assess their general education outcomes, so the results may be used to improve the quality of instruction and learning.
The National Center for Public Policy and Higher Education developed a new program of testing student learning in five states, which has provided highly promising results and which suggests expansion of such efforts would be clearly feasible.
An evaluation of these new testing regimes provides evidence of a significant advancement in measuring student learning -- especially in measuring the attainment of skills most needed in the future.
Furthermore, new educational delivery models are being created, such as the Western Governors University, which uses a variety of built-in assessment techniques to determine the achievement of certain skills being taught, rather than hours-in-a-seat. These new models are valid alternatives to the older models of teaching and learning and may well prove to be superior for some teaching and learning objectives in terms of cost effectiveness.
There are constructive examples of leadership in higher education in addressing the issues of accountability and student learning, such as the excellent work by the Association of American Colleges and Universities.
The AAC&U has developed a unique and significant approach to accountability and learning assessment, discussed in two recent reports, “Our Students’ Best Work” (2004) and “Liberal Education Outcomes” (2005).
The AAC&U accountability model focuses on undergraduate liberal arts education and emphasizes learning outcomes. The primary purpose is to engage campuses in identifying the core elements of a quality liberal arts education experience and measuring students’ experience in achieving these goals -- core learning and skills that anyone with a liberal arts degree should have. AAC&U specifically does not endorse a single standardized test, but acknowledges that testing can be a useful part of the multiple measures recommended in its framework.
In this model, departments and faculty are expected to be given the primary responsibility to define and assess the outcomes of the liberal arts education experience.
Federal and State Leadership
The federal government currently collects a great deal of information from the higher education system. It may be time to re-examine what the government collects to make sure that it’s useful and helpful to the consumers of the system.
Many states are developing relevant state systems of accountability in order to measure the performance of public higher education institutions. In its recommendations about accountability in higher education, the State Higher Education Executive Officers group has endorsed a focus on learning assessment.
Institutional Performance Measurement
What is clearly lacking is a nationwide system for comparative performance purposes, using standard formats. Private ranking systems, such as the U.S. News and World Report “America’s Best Colleges” publications, use a limited set of data, which is not necessarily relevant for measuring institutional performance or providing the public with the information needed to make critical decisions.
The Commission, with assistance of its staff and other advisors and consultants, is attempting to develop the framework for a viable database to measure institutional performance in a consumer-friendly, flexible format.
Historically, accreditation has been the nationally mandated mechanism to improve institutional quality and assure a basic level of accountability in higher education.
In the view of many, accreditation and the related issue of articulation need serious reform, especially through more outcomes-based approaches. Also in need of substantial improvement are the regional variability in standards, the independence of accreditation, its usefulness for consumers, and its response to new forms of delivery such as internet-based distance learning.
The Commission is reviewing the various practices of institutional and programmatic accreditation. A preliminary analysis will be presented and various possible policy recommendations will be developed.
My old friend Archilochus, the Greek lyric poet who has been resting comfortably since the Seventh Century B.C., has been getting a lot of rousing attention lately. And that’s a good thing considering what’s been happening recently in Washington, D.C.
A new federal commission formed by Education Secretary Margaret Spellings has been pushing the idea of holding colleges more accountable for the outcomes of their undergraduate education, which has prompted talk of a federally mandated assessment. I don’t know anything that would make it harder to improve student learning than a national or federal assessment. And that’s where Archilochus can help.
Years ago Sir Isaiah Berlin picked up the Greek poet’s famous aphorism, “The fox knows many things, but the hedgehog knows one big thing,” and took from it the title of his famous essay, The Hedgehog and the Fox. Now Philip Tetlock, in his new book, Expert Political Judgment: How Good Is It? (Princeton University Press, 2005), has classified pundits into two categories: Hedgehogs, who have a single big idea or explanation, and Foxes, who look for a lot of intersecting causes. (He found that, by and large, the Foxes do better at predicting what’s to come, except once in a while when the prickly Hedgehogs see something really important and don’t get distracted, no matter what.)
Most of us in academe are foxes, but I want to suggest that we think like hedgehogs for a while, and concentrate on one thing and one thing only -- student learning. Although we can’t ignore the political context, we shouldn’t do this in reaction to the perceived pressure from the federal commission. We should do it, instead, because it’s the one thing on which the flourishing of liberal education most depends right now. We need to do it for our students and for ourselves as educators.
When I became president of the Teagle Foundation two and a half years ago, I worried a lot about the alleged decline and fall of liberal education. The figures I studied showed a decreasing percentage of undergraduates majoring in the traditional disciplines of the liberal arts; some colleges that I visited, or whose leaders I met, seemed to be turning their backs on liberal education; short term marketing strategies seemed to be eclipsing long term educational values.
Recently, however, I’ve experienced another eclipse, one in which three tendencies I have been observing block out my old worries. The three trends are:
A shift in goals from content to cognition
The demand for accountability
A new knowledge base for teaching
None of these is an unambiguous Good Thing, and there are enough tricks and traps in each of these trends to challenge both foxes and hedgehogs. But in my view -- on balance -- the collision of these trends presents the opportunity to take liberal education to a new level.
It is now possible, in ways that were out of our reach just a few years ago, to teach better and greatly to invigorate student engagement and learning. We can do that, I am convinced, while recommitting ourselves and our institutions to the core educational values of liberal education.
This all comes with a big “IF.” We can reach that higher level only if we focus, focus, focus on student learning -- all of us, faculty, deans, presidents, foundation officers. We all have to become hedgehogs.
Let me explain why I feel so confident that if we focus in this way, liberal education can reach that new level of excellence. In my explanation I will say a few words about each of the three tendencies to which I just alluded, and then try to imagine what liberal education could be like if they are brought together in an integrated system.
1. First, “from content to cognition,” that is, a shift in the stated goals of liberal education from certain subject matter that every educated person should know to certain cognitive capacities that ought to be developed in all students. Over the past few decades, many colleges and universities have come to define their goals as the development of cognitive capacities such as analytical reasoning, critical thinking, clarity of written and oral expression, and moral reasoning. Over the same period the idea that all students should become acquainted with certain texts, topics, and aspects of human experience has pretty much disappeared from curricular thinking.
Curmudgeonly old classicist that I am, I find it hard to imagine a liberal education in which students do not meet Socrates and confront his insistence that the unexamined life is not worth living. Nor can I convince myself that these cognitive goals can be attained in total abstraction, without the specificity and challenge contributed by disciplinary knowledge. Content still matters.
But the shift from content to cognition does have one great benefit: It compels us to think hard about what we want students to have gained once they complete a course or a curriculum. It should make us be explicit about how each course, maybe each assignment, contributes to one cognitive goal or another. In educational jargon, it makes us more “intentional” and thereby much more likely to succeed.
2. Accountability. We are also witnessing a widening demand in many sectors of American society for greater accountability. We owe it all to our friends at Enron, and all the other wonderful playgrounds of corporate greed and corruption. But education is not going to escape the demand for accountability, nor will assessment be restricted to K-12 education. As my friend Steve Wheatley, of the American Council of Learned Societies, put it, “The train is a-comin’ and its name is assessment.”
More systematic assessment of the results of higher education is, as you well know, being demanded by accrediting agencies, governing boards, state legislators, and increasingly the general public. Now, with a federal commission on board, the roar of the engine is getting louder and closer.
You and your colleagues may not like to see that train bearing down on your tranquil campus. And you may well share my anger if Congress tells engineers from the Department of Education to run the train. They tried that in K-12 education and I’m not sure whether the results are a disaster or a joke. The best defense is clearly to get out ahead and do assessment right, and do it now.
This top down pressure for assessment naturally provokes skepticism and resistance, especially from faculty members. What happens if we can reverse the direction and look at assessment from the ground up? Let me tell you a story. When the Teagle Foundation began to ask whether it should undertake some initiative in the assessment area, we convened one of our “Listenings,” bringing together for a few days faculty, administrators and experts in assessment to advise us. There was plenty of skepticism and some hostility. I began to think maybe this was not such a good idea.
But late in the gathering, two people stood up to speak from the floor. One said in effect, “As scholars we value knowledge. How as teachers can we reject something that might let us know more about our students’ learning?” Another speaker said, “Maybe we can teach better if we know more. It’s worth a try.” For me, and for others at that session, that turned the day. Now the Teagle Foundation has made faculty-led, ground-up assessment one of its top priorities. Nothing, I believe, has greater potential for invigorating student learning in the liberal arts.
All this is built around one essential point: We can teach better and students can learn better if their learning is systematically and appropriately assessed.
3. The third trend is the one that makes me confident that we have nothing to fear from properly crafted assessment. Today we know far more about how students learn and what works in teaching than we did just a few years ago. We know what works -- first-year seminars, inclusion of undergraduates in research projects, problem-based learning, collaborative projects, coordination of service learning, internships and overseas study with courses and curricula, lots of writing and speaking opportunities with prompt and thorough faculty feedback, capstone experiences in the senior year and so on. (See Section Six of Liberal Education Outcomes, a 2005 publication from the Association of American Colleges and Universities).
These are not just bright ideas from educational theorists. They have been tested and usually rigorously evaluated. And although graduate schools keep it a well-hidden secret, the cat is now out of the bag. This new knowledge has been drawn together, concisely summarized, and made easily accessible in Derek Bok’s brand new book, Our Underachieving Colleges (Princeton University Press, 2006). Every professor should read this book: Its greatest merit is that Bok demolishes the excuses we academics have used to avoid change.
Let me give one example. My friend David Porter, former president of Skidmore College and now a classics professor at Williams College, defines a liberal education as “what you have learned once you have forgotten the facts.” How long would you guess it takes to forget those facts?
Bok has the answer: “… [T]he average student will be unable to recall most of the factual content of a typical lecture within fifteen minutes after the end of class. In contrast, interests, values and cognitive skills are all likely to last longer, as are concepts and knowledge that students have acquired … through their own mental efforts.”
Fifteen minutes! You might say, “We’ve known that, more or less, for a long time.” Then why is lecturing still the dominant mode of instruction in so many settings? Bok offers several answers, the most damaging of which is complacency. He points out, for example, that one poll of faculty members found that 90 percent thought they were “above average” teachers. Welcome to Lake Wobegon.
Can these three trends -- cognitive capacities replacing content, accountability, the new knowledge base for college teaching -- come together and reinforce one another? The key question is whether academic leaders will focus on this and make it happen.
Imagine what such convergence can do for an institution that sets clear, assessable goals for itself in the development of its students’ cognitive capacities. It doesn’t matter whether the institution is multibillionaire Harvard or a struggling college far from the River Charles: There’s no group of college students whose frontal lobes won’t benefit from some additional exercise.
The institution that I am imagining does some testing to establish a base line and then looks at every aspect of student learning to see how each part can contribute to those goals. It finds out what its students need and what the Big Questions of value and meaning are that can invigorate their engagement with liberal education. It uses the new knowledge base to change some of its practices and try out new ideas. It searches for appropriate means of assessment; if NSSE, the National Survey of Student Engagement, or CLA, the Collegiate Learning Assessment, don’t seem quite right for its setting, there are others or, if need be, the institution develops its own.
But whatever means of assessment it chooses, it doesn’t let the results sit in the office of Institutional Research; it uses them in an iterative process, steadily ratcheting up its effectiveness. The students see this; they understand better why they are studying what might otherwise seem remote or irrelevant material. Their enthusiasm increases; they tell their friends and younger siblings. The director of admissions smiles somewhat more often. So do the fund raisers. The alumni and friends of the institution see what is happening; their pride makes them more generous to alma mater. Maybe eventually even U.S. News sees that something is happening, and it is not prestige, pecking order, or wealth. It’s called “student learning.”
This systematic, iterative process of change will do a lot for an institution, for its students and for its faculty. I bet it will make hedgehogs out of them -- focused on, excited by, renewed through their concern for student learning. Most of us went into college teaching for complex reasons, but one of them, I believe, was that we knew it would be a joy to help young people develop their mental capacities. It’s easy to forget that as we get older, to wander away, to end up forgetting that we have something to profess. But the satisfaction is waiting there where we suspected it was when we started -- in helping those students learn and grow.
Now, thanks to this convergence of changes, we can rediscover that satisfaction. We can teach better and students can learn better. That should make the hedgehog very happy indeed.
I hear someone muttering: "Not on my campus; my faculty will never buy into that kind of change." Don't be so sure. In my old job at the National Humanities Center, when we were developing programs to let new knowledge in the humanistic disciplines invigorate K-12 and college teaching, Richard Schramm, the talented designer of those programs, told me that he could not recall ever being turned down by an NHC fellow or former fellow when he asked for help with this work. That matches what we are finding at the Teagle Foundation in developing our new College Community Connections program.
Scholars of great distinction have been willing to roll up their sleeves and pitch in, working with kids in disadvantaged neighborhoods in New York, where public schools are often part of the problem rather than part of the solution. These busy, much-sought-after academics were, I concluded, looking for something fresh, well designed, and capable of renewing their satisfaction in helping students learn. You may find that some of your colleagues are hungry and thirsty for renewal of this sort and that they are ready to try out new ways of invigorating student learning.
Every environment is different, but here's a suggestion about how one might build momentum and consensus. Try this on your campus. Get your dean to call Princeton University Press and order copies of Derek Bok's book Our Underachieving Colleges for every departmental chair. Ask them to read it and discuss it with their colleagues and then to meet with you and let you know what the response is. If 413 pages or $29.95 is too much for already strained attention spans or budgets, print out a copy of this article and ask your faculty colleagues whether they agree or disagree. Let them rip it apart. Let them be as prickly as ... as prickly as hedgehogs. They may well have better ideas than any offered here. The important thing is to focus on that one crucial idea: We can teach better and students can learn better. The only question is How? And the only way to answer it is by being hedgehogs focused on that one crucial thing, improving student learning.
W. Robert Connor
W. Robert Connor is president of the Teagle Foundation. This essay was adapted from a speech given to the American Conference of Academic Deans in January.