Assessment

A More Complete Completion Picture

A national group's completion data include part-time and other students typically omitted from measures of college success -- and the numbers are not pretty.

Questioning Assumptions

Community college leaders say their campuses can do better, rather than dwelling on the outside forces that are buffeting them.

Measuring Engagement

For more than a decade, the National Survey of Student Engagement (NSSE) and the Community College Survey of Student Engagement (CCSSE) have provided working faculty members and administrators at over 2,000 colleges and universities with actionable information about the extent to which they and their students are doing things that decades of empirical study have shown to be effective. Recently, a few articles by higher education researchers have expressed reservations about these surveys. Some of these criticisms are well-taken and, as leaders of the two surveys, we take them seriously. But the nature and source of these critiques also compel us to remind our colleagues in higher education just exactly what we are about in this enterprise.

Keeping purposes in mind is keenly important. For NSSE and CCSSE, the primary purpose always has been to provide data and tools useful to higher education practitioners in their work. That’s substantially different from primarily serving academic research. While we have encouraged the use of survey results by academic researchers, and have engaged in a great deal of it ourselves, this basic purpose fundamentally conditions our approach to “validity.” As cogently observed by the late Samuel Messick of the Educational Testing Service, there is no absolute standard of validity in educational measurement. The concept depends critically upon how the results of measurement are used. In applied settings, where NSSE and CCSSE began, the essential test is what Messick called “consequential validity” -- essentially the extent to which the results of measurement are useful, as part of a larger constellation of evidence, in diagnosing conditions and informing action. This is quite different from the pure research perspective, in which “validity” refers to a given measure’s value for building a scientifically rigorous and broadly generalizable body of knowledge.

The NSSE and CCSSE benchmarks provide a good illustration of this distinction. Their original intent was to provide a heuristic for campuses to initiate broadly participatory discussions of the survey data and implications by faculty and staff members. For example, if data from a given campus reveal a disappointing level of academic challenge, educators on that campus might examine students’ responses to the questions that make up that benchmark (for example, questions indicating a perception of high expectations). As such, the benchmarks’ construction was informed by the data, to be sure, but equally informed by decades of past research and experience, as well as expert judgment. They do not constitute “scales” in the scientific measurement tradition but rather groups of conceptually and empirically related survey items. No one asked for validity and reliability statistics when Art Chickering and Zelda Gamson published the well-known Seven Principles for Good Practice in Undergraduate Education some 25 years ago, but that has not prevented their productive application in hundreds of campus settings ever since.

The purported unreliability of student self-reports provides another good illustration of the notion of consequential validity. When a student is asked to tell us the frequency with which she engaged in a particular activity (say, making a class presentation), it is fair to question how well her response reflects the absolute number of times she actually did so. But that is not how NSSE and CCSSE results are typically used. The emphasis is most often placed instead on the relative differences in response patterns across groups -- men and women, chemistry and business majors, students at one institution and those elsewhere, and so on. Unless there is a systematic bias that differentially affects how the groups respond, there is little danger of reaching a faulty conclusion. That said, NSSE and CCSSE have invested considerable effort to investigate this issue through focus groups and cognitive interviews with respondents on an ongoing basis. The results leave us satisfied that students know what we are asking them and can respond appropriately.

Finally, NSSE and CCSSE results have been empirically linked to many important outcomes -- including retention and degree completion, grade-point average, and performance on standardized generic skills examinations -- by a range of third-party multi-institutional validation studies involving thousands of students. After the application of appropriate controls (including incoming ability measures), these relationships are statistically significant but modest. But, as the work of Ernest Pascarella and Patrick Terenzini attests, such is true of virtually every empirical study of the determinants of these outcomes over the last 40 years. In contrast, the recent handful of published critiques of NSSE and CCSSE are surprisingly light on evidence. And what evidence is presented is drawn from single-institution studies based on relatively small numbers of respondents.

We do not claim that NSSE and CCSSE are perfect. No survey is. As such, we welcome reasoned criticism and routinely do quite a bit of it on our own. The bigger issue is that work on student engagement is part of a much larger academic reform agenda, whose research arm extends beyond student surveys to interview studies and on-campus fieldwork. A prime example is the widely acclaimed volume Student Success in College by George Kuh and associates, published in 2005. To reiterate, we have always enjoined survey users to employ survey results with caution, to triangulate them with other available evidence, and to use them as the beginning point for campus discussion. We wish we had an electron microscope. Maybe our critics can build one. Until then, we will continue to move forward on a solid record of adoption and achievement.

Peter Ewell is senior vice president of the National Center for Higher Education Management Systems and chairs the National Advisory Boards for both NSSE and CCSSE. Kay McClenney is a faculty member at the University of Texas at Austin, where she directs the Center for Community College Student Engagement. Alexander C. McCormick is a faculty member at Indiana University at Bloomington, where he directs the National Survey of Student Engagement.

Low-Hanging Fruit

Educators consider how they can get "near-completers" to finish up their college degrees.

Paths to the Bachelor's Degree

Bachelor's degree recipients in 2007-8 who began their postsecondary educations at a community college took almost 20 percent longer to complete their degrees than did those who started out at a four-year institution. Those who began at four-year private colleges finished faster than did those at four-year public and for-profit institutions. And those who delayed entry into college by more than a year out of high school took almost 60 percent longer to complete their degrees than did those who went directly to college.

Targeting, or Serving, Needy Students?

Can we agree on this much?

That a large number of academically gifted and economically affluent students (or their parents) have become savvy consumers, getting their first two years of general education courses out of the way at low-cost community colleges rather than pricier state schools and liberal arts colleges?

That by doing so, these would-be competitive admissions students are taking up a large number of slots at community colleges that would otherwise be filled by less academically gifted or less economically affluent students?

That private nonprofit schools, meanwhile, are maintaining their competitive admissions edge by providing more merit-based tuition discounts rather than need-based tuition discounts? That by doing so, these schools become less and less of an option for those less fortunate?

And that, as the number of well-paying blue collar jobs shrinks in response to the changing nature of the economy, the American middle class must either contract, or the skills needed to gain and retain a well-paying job must somehow expand?

I hope we can find consensus around those points. Most people can at least agree on the connection between college education and well-paying jobs, and the need to up-skill the American workforce in order to defend a society in which the benefits of middle class living are widely shared and enjoyed. Most can also agree that higher education access is shrinking in response to a variety of external pressures, including state budget cuts to higher education and a more consumer-savvy insistence on tuition dollar value.

Now we reach the question on which many people disagree. Do less academically prepared, less affluent individuals deserve an opportunity to receive a higher education? And, if so, should they attend institutions best situated to respond to their particular academic, social and emotional needs, or should they be forced to accept whatever public school option may be available -- regardless of the institution’s track record in retaining and graduating students?

These are the questions at the heart of the current debate surrounding private sector colleges and universities (PSCUs). These institutions cost the student more to attend than a public school does, but at community colleges it is taxpayers, through generous subsidies, who pay the bulk of education costs, not students. As a result, the total cost of postsecondary attendance, once those subsidies are counted, is actually less at the private sector alternative. The Institute for Higher Education Policy recently issued a report about low-income adults in postsecondary education, noting -- as many in higher education have long been aware -- that a significant percentage of low-income and minority students attend PSCUs and community colleges. From the perspective of our critics, PSCUs “target” these students while community colleges “serve” them.

Both types of institutions operate in what is largely an open admissions environment (although my own institution does not). Both serve the adult student, who is often financially independent. Both strive to provide students with an education that facilitates career-focused employment (although community colleges wear many other postsecondary hats as well). Both use advertising as well as word of mouth referrals to attract students. But many PSCU students have already attended a community college and opted out for various reasons, including the long waits to enter the most popular programs, large class sizes and inflexible schedules. These problems are all made worse by state budget cuts to higher education.

PSCU students do pay more out of their own pockets than do community college students, but PSCU students see the cost justified by what they receive in return. This value expresses itself in greater individual attention and support … in having confidence in academic skills restored where they may be flagging … in gaining new motivation to succeed and seeing that motivation reinforced through success itself ... and in making the connection between classroom learning and employable skills real and direct.

Two-year PSCU institutions graduate students at three times the rate of community colleges. Placement rates are the bottom line on career-focused education, however, and while community colleges offer lower-cost career programs without outcome metrics, PSCUs must match their career education offerings with real placement of students in relevant jobs. Again, PSCU students see this outcomes-based approach as a difference worth paying for.

In this broader context, the irony of PSCUs being accused of “targeting” students becomes clear. Apparently where some see targeting of low income and minority students unable to make informed decisions about their futures, we see tailoring of postsecondary education to suit a nontraditional student population -- and a better fit all around.

Arthur Keiser

Arthur Keiser is chairman of the Association of Private Sector Colleges and Universities and chancellor of Keiser University.

Do Majors Matter?

Do majors matter? Since students typically spend more time in their area of concentration than anywhere else in the curriculum, majors ought to live up to their name and produce really major benefits. But do they?

Anthony P. Carnevale, the director of Georgetown’s Center on Education and the Workforce, has recently provided a clear answer. Majors matter a lot -- a lot of dollars and cents. In a report entitled “What’s It Worth?”, he shows how greatly salaries vary by major, from $120,000 on average for petroleum engineers down to $29,000 for counseling psychologists.

But what if one asked whether majors make differing contributions to students’ cognitive development? The answer is once again yes, but the picture looks very different from the one in the Georgetown study.

A few years ago, Paul Sotherland, a biologist at Kalamazoo College in Michigan, asked an unnecessary question and got not an answer but a tantalizing set of new questions. It was unnecessary because most experts in higher education already knew the answer, or thought they did: as far as higher-order cognitive skills are concerned, it doesn’t matter what you teach; it’s how you teach it.

What Sotherland found challenged that conventional wisdom and raised new questions about the role of majors in liberal education. Here’s what he did. Kalamazoo had been using the Collegiate Learning Assessment (CLA) to track its students’ progress in critical thinking and analytical reasoning. After a few years it became clear that Kalamazoo students were making impressive gains from their first to their senior years. Sotherland wondered if those gains were across the board or varied from field to field.

So he and his associates tabulated their CLA results for each of the five divisions of the college’s curriculum -- fine arts, modern and classical languages and literatures, humanities, natural sciences and mathematics, and social sciences.

Since gains in CLA scores tend to follow entering ACT or SAT scores, they “corrected” the raw data to see what gains might be attributed to instruction. They found significant differences among the divisions, with the largest gains (over 200 points) in foreign languages, about half that much in the social sciences, still less in the fine arts and in the humanities, least of all in the natural sciences.
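
In practice such a correction is a statistical adjustment. A minimal sketch of one common approach -- regressing raw gains on entering test scores and keeping the residual -- appears below; it illustrates the general idea rather than the exact method Kalamazoo or the CLA's administrators used, and every variable name in it is hypothetical.

```python
# Sketch: "correcting" CLA gains for entering ability by regressing the raw
# gain on entering SAT score and keeping the residual. Illustrative only.
import numpy as np

def corrected_gains(entering_sat, raw_gain):
    """Return the part of each student's CLA gain not predicted by entering SAT score."""
    entering_sat = np.asarray(entering_sat, dtype=float)
    raw_gain = np.asarray(raw_gain, dtype=float)
    slope, intercept = np.polyfit(entering_sat, raw_gain, deg=1)
    predicted = intercept + slope * entering_sat
    return raw_gain - predicted

# A division's "corrected gain" could then be summarized in standard-deviation units:
# residuals = corrected_gains(sat_scores, senior_cla - first_year_cla)
# effect_size = residuals[division_mask].mean() / residuals.std()
```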

How was this to be explained? Could reading Proust somehow hone critical thinking more than working in the lab? (Maybe so.)

But the sample size was small and came from one exceptional institution, one where students in all divisions did better than their SAT scores would lead one to expect, and where the average corrected gain on CLA is 1.5 standard deviations, well above the national average. (Perhaps Inside Higher Ed should sponsor the “Kalamazoo Challenge,” to see if other institutions can show even better results in their CLA data.)

The obvious next step was to ask Roger Benjamin of the Collegiate Learning Assessment if his associates would crunch some numbers for me. They obliged, with figures showing changes over four years for both parts of the CLA -- the performance task and analytical writing. Once again, the figures were corrected on the basis of entering ACT or SAT scores.

The gains came in clusters. At the top was sociology, with an average gain of just over 0.6 standard deviations. Then came multi- and interdisciplinary studies, foreign languages, physical education, math, and business with gains of 0.50 SDs or more.

The large middle cluster included (in descending order) education, health-related fields, computer and information sciences, history, psychology, law enforcement, English, political science, biological sciences, and liberal and general studies.

Behind them, with gains between 0.30 and 0.49 SDs, came communications (speech, journalism, television, radio etc.), physical sciences, nursing, engineering, and economics. The smallest gain (less than 0.01 standard deviations) was in architecture.

The list seemed counterintuitive to me when I first studied it, just as the Kalamazoo data had. In each case, ostensibly rigorous fields, including most of the STEM disciplines (the exception was math), had disappointing results. Once again the foreign languages shone, while most other humanistic disciplines cohabited with unfamiliar bedfellows such as computer science and law enforcement. Social scientific fields scattered widely, from sociology at the very top to economics close to the bottom.

When one looks at these data, one thing is immediately clear. The fields that show the greatest gains in critical thinking are not the fields that produce the highest salaries for their graduates. On the contrary, engineers may show only small gains in critical thinking, but they often command salaries of over $100,000. Economics may lag as well, but not at salary time, when, according to “What’s It Worth?”, its graduates enjoy median salaries of $70,000. At the other end, majors in sociology and in French, German and other commonly taught foreign languages may show impressive gains, but they have to be content with median salaries of $45,000.

But what do these data tell us about educational practice? It seems unlikely that one subject matter taken by itself has a near-magical power to result in significant cognitive gains while another does nothing of the sort. If that were the case, why do business majors show so much more progress than economics majors? Is there something in the content of a physical education major (0.50 SDs) that makes it inherently more powerful than a major in one of the physical sciences (0.34 SDs)? I doubt it.

Since part of the CLA is based on essays students write during the exam, perhaps the natural science majors simply had not written enough to do really well on the test. (That’s the usual first reaction, I find, to unexpected assessment results -- "there must be something wrong with the test.") That was, however, at best a partial explanation, since it didn’t account for the differences among the other fields. English majors, for example, probably write a lot of papers, but their gains were no greater than those of students in computer sciences or health-related fields.

Another possibility is that certain fields attract students who are ready to hone their critical thinking skills. If so, it would be important to identify what it is in each of those fields that attracts such students. Are there, for example, “signature pedagogies” that have this effect? If so, what are they and how can their effects be maximized? Or is it that certain pedagogical practices, whether or not they attract highly motivated students, increase critical thinking capacities -- and others as well? For example, the Wabash national study has identified four clusters of practices that increase student engagement and learning in many areas (good teaching and high-quality interactions with faculty, academic challenge and high expectations, diversity experiences, and higher-order, integrative, and reflective learning).

Some fields, moreover, may encourage students to “broaden out” -- potentially important for the development of critical thinking capacities as one Kalamazoo study suggests. Other disciplines may discourage such intellectual range.

One other hypothesis, I believe, also deserves closer consideration. The CLA is a test of post-formal reasoning. That is, it does not seek to find out if students know the one right answer to the problems it sets; on the contrary, it rewards the ability to consider the merits of alternative approaches. That suggests that students who develop the habit of considering alternative viewpoints, values and outcomes and regularly articulate and weigh alternative possibilities may have an advantage when taking the CLA exam, and quite possibly in real-life settings as well.

Since the study of foreign languages constantly requires the consideration of such alternatives, their study may provide particularly promising venues for the development of such capacities. If so, foreign languages have a special claim on attention and resources even in a time of deep budgetary cuts. Their "signature pedagogies," moreover, may provide useful models for other disciplines.

These varying interpretations of the CLA data open up many possibilities for improving students’ critical thinking. But will these possibilities be fully utilized without new incentives? The current salary structure sends a bad signal when it puts the money where students make very small gains in critical thinking, and gives scant reward to fields that are high performers in this respect. (For example, according to the College & University Professional Association for Human Resources, full professors in engineering average over $114,000, while those in foreign languages average just over $85,000.)

Isn’t it time to shift some resources to encourage experimentation in all fields to develop the cognitive as well as the purely financial benefits of the major?

W. Robert Connor

W. Robert Connor is senior advisor to the Teagle Foundation.

Fuzzy Numbers

Almost every college or university publishes a number called the student/faculty ratio as an indicator of undergraduate instructional quality. Among the many spurious data points exploited by commercial ranking agencies, this one holds a special place.

The mythology would have it that a low ratio, say 10 students per faculty member, indicates a university whose undergraduates take most of their instruction in small groups with a faculty instructor, and presumably learn best in those conditions. In contrast, a high number, say 25 students per faculty member, might lead us to think of large classes and less effective, impersonal instruction.

These common impressions represent mostly pure public relations. The ratio means none of this because the numbers used to calculate it are usually unreliable for comparing different universities or colleges and because the basic premise about small classes is flawed.

To illustrate the meaninglessness of the ratio, imagine two universities with exactly the same number of students, say 5,000, and the same number of faculty, say 500. Both institutions would report a student/faculty ratio of 10, and, following common wisdom, we might imagine that both have the same teaching environment. The data do not show, however, what the faculty do with their time.

Imagine that the first university has faculty of high prestige by virtue of their research accomplishments, and that these faculty spend half of their time in the classroom and half in research activities, a pattern typical of research institutions. Imagine, too, that the second university in our example has faculty less active in research but fully committed to the teaching mission of their college. Where the  research-proficient faculty at our first institution spend only half their time in class, the teaching faculty in the second institution spend all of their time in the classroom.

Correcting the numbers to reflect the real commitment of faculty to teaching would give an actual student to teaching-faculty ratio of 20 to 1 for the research institution and 10 to 1 for the teaching college. The official reported ratio is wildly misleading at best.
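
The arithmetic behind that correction is simple enough to sketch; the 50 percent teaching share for the research faculty is the illustrative assumption from the example above, not a measured figure.

```python
# Nominal vs. effective student/faculty ratio, using the illustrative numbers above.

def effective_teaching_ratio(students, faculty, teaching_share):
    """Students per full-time-equivalent of faculty time actually spent teaching."""
    return students / (faculty * teaching_share)

nominal = 5000 / 500                                          # both campuses report 10:1
research_univ = effective_teaching_ratio(5000, 500, 0.5)      # 20:1 when half of faculty time goes to research
teaching_college = effective_teaching_ratio(5000, 500, 1.0)   # 10:1 when all faculty time goes to teaching
print(nominal, research_univ, teaching_college)               # 10.0 20.0 10.0
```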

The official student/faculty ratio is suspect for yet another reason. It appears as an indicator of something valuable in an institution's teaching and learning process. The reported ratio implies that having a small number of students in a class indicates high instructional quality and effective learning. It may be that in K-12 settings, small class sizes help struggling students learn. In reasonably high quality colleges and universities, however, the evidence is different.

In some classes, for example those that teach beginning languages or performance studios in music, students do learn better when taught in small to very small groups. In the core business curriculum, in basic economics, in art and music appreciation, in history and psychology introductory courses, and many other subjects, students learn as much in large classes of more than 100 as they do in small classes of fewer than 25. In real life, smart universities mix large and small classes so that students can get small classes when small size makes a difference and find a place in large classes when that format works just as well.

A Different Way?

If universities really cared to give students, prospective students and parents a picture of the instructional pattern at their institutions, they would erase the unhelpful student/faculty ratio and instead provide a more useful measure.

They could analyze the transcripts of their most recent graduating class and report the pattern of large and small classes actually experienced by graduating seniors.

How many courses did the graduates take in their major that had fewer than 20 students? How many general education courses did they take with over 50 or over 100 students? How many of their courses during their undergraduate years had a tenure-track faculty instructor, and how many had a visitor, a part-time faculty member, or a teaching assistant as an instructor?
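
Such a transcript audit is straightforward to tabulate. The sketch below is hypothetical: the record fields (enrollment, instructor_type) are assumptions for illustration, not the schema of any particular student-information system.

```python
# Hypothetical sketch of the transcript audit described above. Each record is
# one course taken by one graduating senior.
from collections import Counter

def teaching_mix(records):
    """Tally graduates' courses by class size and by instructor type."""
    sizes, instructors = Counter(), Counter()
    for course in records:
        n = course["enrollment"]
        if n < 20:
            sizes["under 20"] += 1
        elif n <= 50:
            sizes["20-50"] += 1
        elif n <= 100:
            sizes["51-100"] += 1
        else:
            sizes["over 100"] += 1
        # e.g. "tenure-track", "visitor", "part-time", "teaching assistant"
        instructors[course["instructor_type"]] += 1
    return sizes, instructors

# Usage (with hypothetical data):
# sizes, instructors = teaching_mix(records_for_graduating_class)
```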

This kind of report would encourage institutions to explain why the non-tenure-track instructors teach as well as the tenure-track faculty, and it would give parents and prospective students an accurate understanding of the actual teaching mix they should expect during their undergraduate years.

Such accuracy might not be as good advertising as the misleading student/faculty ratio, but it would have the virtue of reflecting reality, and it would encourage us to talk clearly about the design and the delivery of the undergraduate education we provide.

John V. Lombardi

Accountability, Improvement and Money

Unfortunately, some of us are old enough to have passed through various incarnations of the accountability movement in higher education. Periodically university people or their critics rediscover the notion of accountability, as if the notion of being accountable to students, parents, legislators, donors, federal agencies, and other institutional constituencies were something new and unrecognized by our colleagues. We appear to have entered another cycle, signaled by the publication last month of a call to action by the State Higher Education Executive Officers (SHEEO) association, with support from the Ford Foundation, called "Accountability for Better Results."

The SHEEO report has the virtue of recognizing many of the reasons why state-level accountability systems fail, and focuses its attention primarily on the issue of access and graduation rates. While this is a currently popular and important topic, the SHEEO report illustrates why the notion of "accountability" by itself has little meaning. Universities and colleges have many constituencies, consumers, funding groups, interested parties, and friends. Every group expects the university to do things in ways that satisfy its goals and objectives, and seeks "accountability" from the institution to ensure that its priorities drive the university’s performance. While each of these widely differentiated accountability goals may be appropriate for each group, the sum of these goals does not approach anything like "institutional accountability."

Accountability has special meaning in public universities where it usually signifies a response to the concerns of state legislators and other public constituencies that a campus is actually producing what the state wants with the money the state provides. This is the most common form of accountability, and often leads to accountability systems or projects that attempt to put all institutions of higher education into a common framework to ensure the wise expenditure of state money on the delivery of higher education products to the people.

In this form, accountability is usually a great time sink with no particular value, although it has the virtue of keeping everyone occupied generating volumes of data of dubious value in complex ways that will exhaust the participants before having any useful impact. The SHEEO report is particularly clear on this point.  

This form of accountability has almost no practical utility because state agencies cannot accurately distinguish one institution of higher education from the other for the purposes of providing differential funding. If the state accountability system does not provide differential funding for differential performance, then the exercise is more in the nature of an intense conversation about what good things the higher education system should be doing rather than a process for creating a system that could actually hold institutions accountable for their performance.  

Public agencies rarely hold institutions accountable because to do so requires that they punish the poor performers or at least reward the good performers. No institution wants a designation as a poor performer. An institution with problematic performance characteristics as measured by some system will mobilize every political agent at its disposal (local legislators, powerful alumni and friends, student advocates, parents) to modify the accountability criteria to include sufficient indicators on which they can perform well.

In response to this political pressure, and to accommodate the many different kinds, types and characteristics of institutions, the accountability system usually ends up with 20, 30 or more accountability measures. No institution will do well on all of them, and every institution will do well on many of them, so in the end, all institutions will qualify as reasonably effective to very effective, and all will remain funded more or less as before.
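The arithmetic behind that outcome can be seen in a toy simulation (purely illustrative, not drawn from the SHEEO report or any real accountability data): when performance on 30 indicators is essentially random noise, nearly every institution still lands in the top quartile on at least one of them.

```python
# Toy illustration: with many indicators, every institution "excels" at something,
# even when the scores are pure noise.
import random

random.seed(1)
n_institutions, n_indicators = 50, 30
scores = [[random.gauss(0, 1) for _ in range(n_indicators)] for _ in range(n_institutions)]

# Top-quartile cutoff for each indicator.
cutoffs = []
for j in range(n_indicators):
    column = sorted(row[j] for row in scores)
    cutoffs.append(column[int(0.75 * n_institutions)])

# How many indicators put each institution in the top quartile?
top_counts = [sum(row[j] >= cutoffs[j] for j in range(n_indicators)) for row in scores]
print(min(top_counts))  # almost always >= 1: no institution ends up looking like a poor performer
```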

The lifecycle of this process is quite long and provides considerable opportunity for impassioned rhetoric about how well individual institutions serve their students and communities, how effective the research programs are in enhancing economic development, how much the public service activities enhance the state, and so on. At the end, when most participants have exhausted their energy and rhetoric, and when the accountability system has achieved stasis, everyone will declare a victory and the accountability impulse will go dormant for several years until rediscovered again.

Often, state accountability systems offer systematic data reporting schemes with goals and targets defined in terms of improvement, but without incentives or sanctions. These systems assume that the value of measuring alone will motivate institutions to improve to avoid being marked as ineffective. This kind of system has value in identifying the goals and objectives of the state for its institutions, but often relegates the notion of accountability to the reporting of data rather than the allocation of money, where it could make a significant difference. 

If an institution, state, or other entity wants to insist on improved performance from universities, they must specify the performance they seek and then adjust state appropriations to reward those who meet or exceed the established standard. Reductions in state budgets for institutions that fail to perform are rare for obvious political reasons, but the least effective system is one that allocates funds to poorly performing institutions with the expectation that the reward for poor performance will motivate improvement. One key to effective performance improvement, reinforced in the SHEEO report, is strictly limiting the number of key indicators for measuring improvement.  If the number of indicators exceeds 10, the exercise is likely to find all institutions performing well on some indicator and therefore all deserving of continued support.

Differing Directions

Often the skepticism that surrounds state accountability systems stems from a mismatch between the goals of the state (with an investment of perhaps 30 percent or less of the institutional budget) and those of the institutions. Campuses may seek nationally competitive performance in research, teaching, outreach, and other activities. States may seek improvement in access and student graduation rates as the primary determinants of accountability. Institutions may see the state’s efforts as detracting from the institution’s drive toward national reputation and success. Such mismatches in goals and objectives often weaken the effectiveness of state accountability programs. 

Universities are very complex and serve many constituencies with many different expectations about the institutions’ activities. Improvement comes from focusing carefully on particular aspects of an institution’s performance, identifying reliable and preferably nationally referenced indicators, and then investing in success. While the selection of improvement goals and the development of good measures are essential, the most important element in all improvement programs is the ability to move money to reward success.

If an accountability system only measures improvement and celebrates success, it will produce a warm glow of short duration. Performance improvement is hard work and takes time, while campus budgets change every year. Effective measurement is often time consuming and sometimes difficult, and campus units will not participate effectively unless there is a reward. The reward that all higher education institutions and their constituent units understand is money. This is not necessarily money reflected in salary increases, although that is surely effective in some contexts.

Primarily what motivates university improvement, however, is the opportunity to enhance the capacity of a campus. If a campus teaches more students, and as a result earns the opportunity to recruit additional faculty members, this financial reward is of major significance and will motivate continued improvement. At the same time, the campus that seeks improvement cannot reward failure. If enrollment declines, the campus should not receive compensatory funding in hopes of future improvement. Instead, a poorly performing campus should work harder to get better so it too can earn additional support.

In public institutions, the small proportion of state funding within the total budget limits the ability of state systems to influence campus behavior by reallocating funding. In particular, in many states, most of the public money pays for salaries, and reallocating funds proves difficult. Nonetheless, most public systems and legislatures can identify some funds to allocate as a reward for improved performance. Even relatively small budget increases represent a significant reward for campus achievements.

Accountability, as the SHEEO report highlights, is a word with no meaning until we define the measures and the purpose. If we mean accountability to satisfy public expectations for multiple institutions on many variables, we can expect that the exercise will be time consuming and of little practical impact. If we mean accountability to improve the institution’s performance in specific ways, then we know we need to develop a few key measures and move at least some money to reward improvement. 

John V. Lombardi

John V. Lombardi, chancellor and professor of history at the University of Massachusetts Amherst, writes Reality Check every two weeks.

No Professor Left Behind

At the annual meeting of one of the regional accrediting agencies a few years ago, I wandered into the strangest session I’ve witnessed in any academic gathering. The first presenter, a young woman, reported on a meeting she had attended that fall in an idyllic setting. She had, she said, been privileged to spend three days “doing nothing but talking assessment” with three of the leading people in the field, all of whom she named and one of whom was on this panel with her. “It just doesn’t get any better than that!” she proclaimed. I kept waiting for her to pass on some of the wisdom and practical advice she had garnered at this meeting, but it didn’t seem to be that kind of presentation.

The title of the next panel I chose suggested that I would finally learn what accrediting agencies meant by “creating a culture of assessment.” This group of presenters, four in all, reenacted the puppet show they claimed to have used to get professors on their campus interested in assessment. The late Jim Henson, I suspect, would have advised against giving up their day jobs.  

And thus it was with all the panels I tried to attend. I learned nothing about what to assess or how to assess it. Instead, I seemed to have wandered into a kind of New Age revival at which the already converted, the true believers, were testifying about how great it was to have been washed in the data and how to spread the good news among non-believers on their campus.

Since that time, I’ve examined several successful accreditation self-studies, and I’ve talked to vice presidents, deans, and faculty members, but I’m still not sure about what a “culture of assessment” is. As nearly as I can determine, once a given institution has arrived at a state of profound insecurity and perpetual self-scrutiny, it has created a “culture of assessment.” The self-criticism and mutual accusation sessions favored by Communist hardliners come to mind, as does a passage from a Creedence Clearwater song: “Whenever I ask, how much should I give? The only answer is more, more!”

Most of the faculty resistance we face in trying to meet the mandates of the assessment movement, it seems to me, stems from a single issue: professors feel professionally distrusted and demeaned. The much-touted shift in focus from teaching to student learning at the heart of the assessment movement is grounded in the presupposition that professors have been serving their own ends and not meeting the needs of students. Some fall into that category, but whatever damage they do is greatly overstated, and there is indeed a legitimate place in academe for those professors who are not for the masses. A certain degree of quirkiness and glorious irrelevance were once considered par for the course, and students used to be expected to take some responsibility for their own educations.

Clearly, from what we are hearing about the new federal panel studying colleges, the U.S. Department of Education believes that higher education is too important to be left to academics. What we are really seeing is the re-emergence of the anti-intellectualism endemic to American culture and a corresponding redefinition of higher education in terms of immediately marketable preparation for specific jobs or careers. The irony is that the political party that would get big government off our backs has made an exception of academe.  

This is not to suggest, of course, that everything we do in the name of assessment is bad or that we don’t have an obligation to determine that our instruction is effective and relevant.  At the meeting of the National Association of Schools of Art and Design, I heard a story that illustrates how the academy got into this fix. It seems an accreditor once asked an art faculty member what his learning outcomes were for the photography course he was teaching that semester. The faculty member replied that he had no learning outcomes because he was trying to turn students into artists and not photographers. When asked then how he knew when his students had become artists, he replied, “I just know.”

Perhaps he did indeed “just know.” One of the most troubling aspects of the assessment movement, to my mind, is the tendency to dismiss the larger, slippery issues of sense and sensibility and to measure educational effectiveness only in terms of hard data, the pedestrian issues we can quantify. But, by the same token, every photographer must master the technical competencies of photography and learn certain aesthetic principles before he or she can employ the medium to create art. The photography professor in question was being disingenuous. He no doubt expected students to reach a minimal level of photographic competence and to see that competence reflected in a portfolio of photographs that rose to the level of art. His students deserved to have these expectations detailed in the form of specific learning outcomes.

Thus it is, or should be, with all our courses. Everyone who would teach has a professional obligation to step back and to ask himself or herself two questions: What, at a minimum, do I want students to learn, and how will I determine whether they have learned it? Few of us would have a problem with this level of assessment, and most of us would hardly need to be prompted or coerced to adjust our methods should we find that students aren’t learning what we expect them to learn. Where we fall out, professors and professional accreditors, is over the extent to which we should document or even formalize this process.

I personally have heard a senior official at an accrediting agency say that “if what you are doing in the name of assessment isn’t really helping you, you’re doing it wrong.” I recommend that we take her at her word. In my experience -- first as a chair and later as a dean -- it is helpful for institutions to have course outlines that list the minimum essential learning outcomes and suggest appropriate assessment methods for each course. It is helpful for faculty members and students to have syllabi that reflect the outcomes and assessment methods detailed in the corresponding course outlines. It is also helpful to have program-level objectives and to spell out where and how such objectives are met.

All these things are helpful and reasonable, and accrediting agencies should indeed be able to review them in gauging the effectiveness of a college or university. What is not helpful is the requirement to keep documenting the so-called “feedback loop” -- the curricular reforms undertaken as a result of the assessment process. The presumption, once again, would seem to be that no one’s curriculum is sound and that assessment must be a continuous process akin to painting a suspension bridge or a battleship. By the time the painters work their way from one end to the other, it is time to go back and begin again. “Out of the cradle, endlessly assessing,” Walt Whitman might sing if he were alive today.

Is it any wonder that we have difficulty inspiring more than grudging cooperation on the part of faculty? Other professionals are largely left to police themselves. Not so academics, at least not any longer. We are being pressured to remake ourselves along business lines. Students are now our customers, and the customer is always right. Colleges used to be predicated on the assumption that professors and other professionals have a larger frame of reference and are in a better position than students to design curricula and set requirements. I think it is time to reaffirm that principle; and, aside from requiring the “helpful” documents mentioned above, it is past time to allow professors to assess themselves.

Regarding the people who have thrown in their lot with the assessment movement, to each his or her own. Others, myself included, were first drawn to the academic profession because it alone seemed to offer an opportunity to spend a lifetime studying what we loved, and sharing that love with students, no matter how irrelevant that study might be to the world’s commerce. We believed that the ultimate end of what we would do is to inculcate both a sensibility and a standard of judgment that can indeed be assessed but not guaranteed or quantified, no matter how hard we try. And we believed that the greatest reward of the academic life is watching young minds open up to that world of ideas and possibilities we call liberal education. To my mind, it just doesn’t get any better than that.

Edward F. Palm

Edward F. Palm is dean of social sciences and humanities at Olympic College, in Bremerton, Wash.
