Assessment

Assuring Civility or Curbing Criticism?

Higher ed research group calls off panel that would have focused on a controversial issue of its journal featuring articles questioning student engagement surveys.

Too Many Rules

A federal panel asking whether Higher Education Act regulations are burdensome got an earful.

Classroom Styles

When I received my first test score -- a 3 out of 10 -- in college introductory psychology, I realized that I had some hard slogging ahead, especially after the professor told me that "there is a famous Sternberg in psychology and it is obvious there won’t be another one." I eventually pulled a C in the course, which the professor referred to as a "gift." That professor was probably as surprised as I was when I earned an A in his upper-level course, and I certainly was grateful to him when, as chair of the search committee, he hired me back to my alma mater (Yale University) as an assistant professor, where I would remain as a professor for 30 years. My instructor probably wondered, as did I, how I could have done so poorly in the introductory course and so much better in the upper-level course.

There may have been multiple contributing causes to the difference in performance, but one was almost certainly a difference in the styles of learning and thinking that were rewarded in the two courses. The lower-level course was pretty much a straight, memorize-the-book kind of course, whereas the upper-level course was one that encouraged students to formulate their own research studies and to analyze the research studies of others.

Psychologists and educators differ as to whether they believe in the existence of different styles of learning and thinking. Harold Pashler and his colleagues have claimed that the evidence for their existence is weak, but a number of scholars, whose work is summarized in a 2006 book I wrote with Li-fang Zhang entitled The Nature of Intellectual Styles, and in a forthcoming edited Handbook of Intellectual Styles, have provided what we believe to be compelling evidence for the existence and importance of diverse styles of learning and thinking. I have often felt that anyone who has raised two or more children will be aware, at an experiential level, that children learn and think in different ways.

My own thinking about styles of learning and thinking has been driven by my "theory of mental self-government," which I first presented in book format in a volume entitled Thinking Styles. According to this theory, the ways of governments in the world are external reflections of what goes on in people’s minds. There are 13 different styles in the theory, but consider now just three of them. People with a legislative style like to come up with their own ideas and to do things in their own way; people with an executive style prefer to be given more structure and guidance or even told what to do; people with a judicial style like to evaluate and judge things and especially the work of others.

From this point of view, the introductory psychology course I took, like many introductory courses, particularly rewarded students with an executive style – students who liked to memorize what they read in books or heard in lectures. In contrast, the advanced psychology lab course more rewarded students with a legislative or judicial style, in that students came up with ideas for their own experiments and evaluated the research of others.

In a series of studies I conducted with Elena Grigorenko of Yale University and later with Li-fang Zhang of the University of Hong Kong, we had both teachers and students fill out questionnaires based on my theory of mental self-government. In one set of studies with Grigorenko, we then computed a measure of the similarity of the profile of each student to his or her teacher. We also evaluated the styles preferred by the diverse educational institutions on the basis of their mission statements and descriptive literature. There are three findings from that set of studies of particular importance to college classrooms.
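The profile-matching step can be made concrete. Below is a minimal sketch in Python of one way such a similarity measure might be computed, assuming each questionnaire yields a numeric score for each of the 13 styles; the 1-7 scale and the use of Pearson correlation as the similarity index are illustrative assumptions, not the published method.

```python
import numpy as np
from scipy.stats import pearsonr

N_STYLES = 13  # styles in the theory of mental self-government

def profile_similarity(student_profile, teacher_profile):
    """Similarity of a student's style profile to a teacher's.

    Each profile is a length-13 vector of questionnaire scores, one per
    style. Pearson correlation is used here as one plausible similarity
    index; the published studies may have used a different measure.
    """
    student = np.asarray(student_profile, dtype=float)
    teacher = np.asarray(teacher_profile, dtype=float)
    assert student.shape == teacher.shape == (N_STYLES,)
    r, _ = pearsonr(student, teacher)
    return r

# Hypothetical profiles on a 1-7 scale: this student and teacher
# mostly agree, so the similarity should come out high.
rng = np.random.default_rng(42)
teacher = rng.uniform(1, 7, N_STYLES)
student = np.clip(teacher + rng.normal(0, 0.5, N_STYLES), 1, 7)
print(f"profile similarity: {profile_similarity(student, teacher):.2f}")
```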

The first finding was that institutions differ widely in the styles of thinking that they reward. For example, in the study, one institution tended to reward a conservative style (characterizing people who like things to remain more or less the way they are) and to penalize a liberal style (characterizing people who like things to change), whereas another rewarded exactly the opposite pattern. The correlations of styles with academic success were statistically significant in both schools, but in opposite directions. Teachers also value different styles. Hence it is important for students to select a college or university and, to the extent possible, professors who value at least to some degree the kinds of learning and thinking that best characterize a particular student. Similarly, it is important for professors to select a school at which to work that values the ways in which they prefer to think and to teach.

The second relevant finding was that teachers tend to overestimate the extent to which students match their own profile of learning and thinking styles. Teachers often teach in a way that reflects their own preferred styles of learning and thinking, not fully realizing that the styles that they prefer may not correspond to the styles that many of their students prefer. They believe they are teaching in ways that meet the needs of diverse students, when in fact they often are not. In essence, we are at risk for teaching to ourselves rather than to our students.

The third key finding was that teachers tended to grade more highly students whose profiles of learning and thinking better matched their own. In showing this pattern, the teachers were not purposely favoring, nor probably were they even aware they were favoring, people like themselves. But the fundamental principle of interpersonal attraction is that we are more attracted to people who are like ourselves, and so it is not surprising that teachers would value more students who think in the same ways they do. Ideally, teachers will be flexible, both within and between courses. (The psychology professor to whom I referred earlier was flexible between courses, but not within each course.)

These preferences become a particular problem when the styles that lead to success in a particular course do not match the styles that will be needed for success either in more advanced courses in the same discipline, or, worse, in the occupation for which the course prepares students. For example, in most occupations, one does not sit around taking short-answer or multiple-choice tests on the material one needs to succeed in the job. The risk, then, is that schools will reward students whose styles match the way they are taught but not the requirements of the work for which the teaching prepares them. As an example, 35 years after receiving the C in introductory psychology, I was president of the American Psychological Association — the largest association of psychologists in the world — and did not once have to sit down and take fact-based quizzes on the material I needed to succeed on the job. Indeed, the factual content that would be taught in an introductory-psychology course, and in many other courses, had changed radically in the 35 years that had passed since I took the course.

In my own teaching, I have repeatedly run up against the importance of styles. For example, when I first started teaching introductory psychology, I taught it the way I ideally would have liked the course, with lots of emphasis on "legislative" activities — students coming up with their own ideas for structuring their learning. It became obvious to me within a couple of weeks that the course was failing to meet the learning needs of the students. I later realized it was for the same reason that the introductory psychology course I had taken had not worked for me. I was teaching to my own style of learning, not to the diversity of students’ styles of learning. I now try to teach in ways that encourage a mix of legislative, executive, and judicial activities. For example, students come up with their own ideas for papers, but also have to answer some short-answer questions on tests and have to analyze the theories and research of various investigators.

Similarly, in teaching an advanced statistics course, I had pretty much pegged some of the students as "stronger learners" and other students as "weaker learners." One day, I read about how to teach a technique I was covering geometrically rather than in the algebraic way I had been teaching that and other material in the course. When I started teaching the material geometrically, I found that many of the students I had identified as "strong learners" were having difficulty, whereas many of the students I had identified as "weak learners" were easily absorbing the material. I had confounded strength of students’ learning skills with match of their learning style to the way I happened to be teaching.

In sum, styles of learning and thinking matter in the classroom. We best serve our students when we teach in a way that enables all students to capitalize on their preferred styles at least some of the time, but that recognizes that students must acquire flexibility in their use of styles and so cannot always learn in their preferred way. My own institution, Oklahoma State University, has a Learning and Student Success Opportunity Center that intervenes with students in ways specifically oriented toward meeting the needs of their diverse learning and thinking styles. Our Institute for Teaching and Learning Excellence teaches teachers how to meet the stylistic needs of students. Our goal in higher education should be to prepare students for the diverse demands later courses and careers will make on their learning and thinking styles so that they can be successful not just in our courses, but in their later studies and work.


Robert J. Sternberg is provost, senior vice president, and Regents Professor of Psychology and Education at Oklahoma State University.

A More Complete Completion Picture

National group includes part-time and other students typically omitted from measures of college success -- and the numbers are not pretty.

Questioning Assumptions

Community college leaders say their campuses can do better, rather than focusing on outside forces that are buffeting them.

Measuring Engagement

For more than a decade, the National Survey of Student Engagement (NSSE) and the Community College Survey of Student Engagement (CCSSE) have provided working faculty members and administrators at over 2,000 colleges and universities with actionable information about the extent to which they and their students are doing things that decades of empirical study have shown to be effective. Recently, a few articles by higher education researchers have expressed reservations about these surveys. Some of these criticisms are well-taken and, as leaders of the two surveys, we take them seriously. But the nature and source of these critiques also compel us to remind our colleagues in higher education just what we are about in this enterprise.

Keeping purposes in mind is keenly important. For NSSE and CCSSE, the primary purpose always has been to provide data and tools useful to higher education practitioners in their work. That’s substantially different from primarily serving academic research. While we have encouraged the use of survey results by academic researchers, and have engaged in a great deal of it ourselves, this basic purpose fundamentally conditions our approach to “validity.” As cogently observed by the late Samuel Messick of the Educational Testing Service, there is no absolute standard of validity in educational measurement. The concept depends critically upon how the results of measurement are used. In applied settings, where NSSE and CCSSE began, the essential test is what Messick called “consequential validity” -- essentially the extent to which the results of measurement are useful, as part of a larger constellation of evidence, in diagnosing conditions and informing action. This is quite different from the pure research perspective, in which “validity” refers to a given measure’s value for building a scientifically rigorous and broadly generalizable body of knowledge.

The NSSE and CCSSE benchmarks provide a good illustration of this distinction. Their original intent was to provide a heuristic for campuses to initiate broadly participatory discussions of the survey data and implications by faculty and staff members. For example, if data from a given campus reveal a disappointing level of academic challenge, educators on that campus might examine students’ responses to the questions that make up that benchmark (for example, questions indicating a perception of high expectations). As such, the benchmarks’ construction was informed by the data, to be sure, but equally informed by decades of past research and experience, as well as expert judgment. They do not constitute “scales” in the scientific measurement tradition but rather groups of conceptually and empirically related survey items. No one asked for validity and reliability statistics when Art Chickering and Zelda Gamson published the well-known Seven Principles for Good Practice in Undergraduate Education some 25 years ago, but that has not prevented their productive application in hundreds of campus settings ever since.
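To make the distinction concrete, a benchmark of this kind is simply an aggregate of related item responses rather than a psychometric scale. Here is a minimal sketch, assuming (purely as an illustration, not the published NSSE or CCSSE formula) that each item is rescaled to 0-100 and the benchmark is the mean of its items:

```python
import numpy as np

def benchmark_score(item_responses, item_ranges):
    """Aggregate related survey items into a single 0-100 benchmark.

    item_responses: list of raw responses, one per item.
    item_ranges: list of (min, max) raw scales, e.g. (1, 4) for a
        four-point frequency item.
    Each item is linearly rescaled to 0-100 and the benchmark is the
    simple mean -- an illustrative convention, not the exact NSSE or
    CCSSE computation.
    """
    rescaled = [
        100 * (resp - lo) / (hi - lo)
        for resp, (lo, hi) in zip(item_responses, item_ranges)
    ]
    return float(np.mean(rescaled))

# Hypothetical "academic challenge" items on 1-4 and 1-7 scales.
print(benchmark_score([3, 4, 5], [(1, 4), (1, 4), (1, 7)]))
```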

The purported unreliability of student self-reports provides another good illustration of the notion of consequential validity. When a student is asked to tell us the frequency with which she engaged in a particular activity (say, making a class presentation), it is fair to question how well her response reflects the absolute number of times she actually did so. But that is not how NSSE and CCSSE results are typically used. The emphasis is most often placed instead on the relative differences in response patterns across groups -- men and women, chemistry and business majors, students at one institution and those elsewhere, and so on. Unless there is a systematic bias that differentially affects how the groups respond, there is little danger of reaching a faulty conclusion. That said, NSSE and CCSSE have invested considerable effort to investigate this issue through focus groups and cognitive interviews with respondents on an ongoing basis. The results leave us satisfied that students know what we are asking them and can respond appropriately.

Finally, NSSE and CCSSE results have been empirically linked to many important outcomes, including retention and degree completion, grade-point average, and performance on standardized generic skills examinations, by a range of third-party multi-institutional validation studies involving thousands of students. After the application of appropriate controls (including incoming ability measures), these relationships are statistically significant, but modest. Yet, as the work of Ernest Pascarella and Patrick Terenzini attests, such is true of virtually every empirical study of the determinants of these outcomes over the last 40 years. In contrast, the recent handful of published critiques of NSSE and CCSSE are surprisingly light on evidence. And what evidence is presented is drawn from single-institution studies based on relatively small numbers of respondents.

We do not claim that NSSE and CCSSE are perfect. No survey is. As such, we welcome reasoned criticism and routinely do quite a bit of it on our own. The bigger issue is that work on student engagement is part of a much larger academic reform agenda, whose research arm extends beyond student surveys to interview studies and on-campus fieldwork. A prime example is the widely acclaimed volume Student Success in College by George Kuh and associates, published in 2005. To reiterate, we have always enjoined survey users to employ survey results with caution, to triangulate them with other available evidence, and to use them as the beginning point for campus discussion. We wish we had an electron microscope. Maybe our critics can build one. Until then, we will continue to move forward on a solid record of adoption and achievement.

Peter Ewell is senior vice president of the National Center for Higher Education Management Systems and chairs the National Advisory Boards for both NSSE and CCSSE. Kay McClenney is a faculty member at the University of Texas at Austin, where she directs the Center for Community College Student Engagement. Alexander C. McCormick is a faculty member at Indiana University at Bloomington, where he directs the National Survey of Student Engagement.

Low-Hanging Fruit

Educators consider how they can get "near-completers" to finish up their college degrees.

Paths to the Bachelor's Degree

Bachelor's degree recipients in 2007-8 who began their postsecondary educations at a community college took almost 20 percent longer to complete their degrees than did those who started out at a four-year institution; those who began at four-year private colleges finished faster than did those at four-year public and for-profit institutions; and those who delayed entry into college by more than a year after high school took almost 60 percent longer to complete their degrees than did those who went directly to college.

Targeting, or Serving, Needy Students?

Can we agree on this much?

That a large number of academically gifted and economically affluent students (or their parents) have become savvy consumers, getting their first two years of general education courses out of the way at low-cost community colleges rather than pricier state schools and liberal arts colleges?

That by doing so, these would-be competitive admissions students are taking up a large number of slots at community colleges that would otherwise be filled by less academically gifted or less economically affluent students?

That private nonprofit schools, meanwhile, are maintaining their competitive admissions edge by providing more merit-based tuition discounts rather than need-based tuition discounts? That by doing so, these schools become less and less of an option for those less fortunate?

And that, as the number of well-paying blue collar jobs shrinks in response to the changing nature of the economy, the American middle class must either contract, or the skills needed to gain and retain a well-paying job must somehow expand?

I hope we can find consensus around those points. Most people can at least agree on the connection between college education and well-paying jobs, and the need to up-skill the American workforce in order to defend a society in which the benefits of middle class living are widely shared and enjoyed. Most can also agree that higher education access is shrinking in response to a variety of external pressures, including state budget cuts to higher education and a more consumer-savvy insistence on tuition dollar value.

Now we reach the question on which many people disagree. Do less academically prepared, less affluent individuals deserve an opportunity to receive a higher education? And, if so, should they attend institutions best situated to respond to their particular academic, social and emotional needs, or should they be forced to accept whatever public school option may be available -- regardless of the institution’s track record in retaining and graduating students?

These are the questions at the heart of the current debate surrounding private sector colleges and universities (PSCUs). These institutions cost the student more to attend than a public school does, but at community colleges it is taxpayers, through generous subsidies, who pay the bulk of education costs, not students. As a result, the absolute cost of postsecondary attendance -- what students and taxpayers pay combined -- is actually less at the private sector alternative. The Institute for Higher Education Policy recently issued a report about low-income adults in postsecondary education, noting -- as many in higher education have long been aware -- that a significant percentage of low-income and minority students attend PSCUs and community colleges. From the perspective of our critics, PSCUs “target” these students while community colleges “serve” them.

Both types of institutions operate in what is largely an open admissions environment (although my own institution does not). Both serve the adult student, who is often financially independent. Both strive to provide students with an education that facilitates career-focused employment (although community colleges wear many other postsecondary hats as well). Both use advertising as well as word of mouth referrals to attract students. But many PSCU students have already attended a community college and opted out for various reasons, including the long waits to enter the most popular programs, large class sizes and inflexible schedules. These problems are all made worse by state budget cuts to higher education.

PSCU students do pay more out of their own pockets than do community college students, but PSCU students see the cost justified by what they receive in return. This value expresses itself in greater individual attention and support … in having confidence in academic skills restored where they may be flagging … in gaining new motivation to succeed and seeing that motivation reinforced through success itself ... and in making the connection between classroom learning and employable skills real and direct.

Two-year PSCU institutions graduate students at three times the rate of community colleges. Placement rates are the bottom line on career-focused education, however, and while community colleges offer lower-cost career programs without outcome metrics, PSCUs must match their career education offerings with real placement of students in relevant jobs. Again, PSCU students see this outcomes-based approach as a difference worth paying for.

In this broader context, the irony of PSCUs being accused of “targeting” students becomes clear. Apparently where some see targeting of low income and minority students unable to make informed decisions about their futures, we see tailoring of postsecondary education to suit a nontraditional student population -- and a better fit all around.


Arthur Keiser is chairman of the Association of Private Sector Colleges and Universities and chancellor of Keiser University.

Do Majors Matter?

Do majors matter? Since students typically spend more time in their area of concentration than anywhere else in the curriculum, majors ought to live up to their name and produce really major benefits. But do they?

Anthony P. Carnevale, the director of Georgetown’s Center on Education and the Workforce, has recently provided a clear answer. Majors matter a lot -- a lot of dollars and cents. In a report entitled “What’s It Worth?”, he shows how greatly salaries vary by major, from $120,000 on average for petroleum engineers down to $29,000 for counseling psychologists.

But what if one asked whether majors make differing contributions to students’ cognitive development? The answer is once again yes, but the picture looks very different from the one in the Georgetown study.

A few years ago, Paul Sotherland, a biologist at Kalamazoo College in Michigan, asked a seemingly unnecessary question and got not an answer but a tantalizing set of new questions. It seemed unnecessary because most experts in higher education already knew the answer, or thought they did: as far as higher-order cognitive skills are concerned, it doesn’t matter what you teach; it’s how you teach it.

What Sotherland found challenged that conventional wisdom and raised new questions about the role of majors in liberal education. Here’s what he did. Kalamazoo had been using the Collegiate Learning Assessment (CLA) to track its students’ progress in critical thinking and analytical reasoning. After a few years it became clear that Kalamazoo students were making impressive gains from their first to their senior years. Sotherland wondered if those gains were across the board or varied from field to field.

So he and his associates tabulated their CLA results for each of the five divisions of the college’s curriculum -- fine arts, modern and classical languages and literatures, humanities, natural sciences and mathematics, and social sciences.

Since gains in CLA scores tend to follow entering ACT or SAT scores, they “corrected” the raw data to see what gains might be attributed to instruction. They found significant differences among the divisions, with the largest gains (over 200 points) in foreign languages, about half that much in the social sciences, still less in the fine arts and in the humanities, least of all in the natural sciences.
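The “correction” is essentially a residual analysis: predict senior CLA scores from entering test scores, then ask which divisions beat the prediction. Here is a minimal sketch, assuming a simple linear adjustment on entering SAT scores (the actual Kalamazoo procedure may well have been more elaborate):

```python
import numpy as np

def corrected_division_gains(entering_sat, senior_cla, divisions):
    """Mean CLA performance by division after adjusting for entering ability.

    Fits a least-squares line predicting senior CLA scores from entering
    SAT (or ACT-equivalent) scores, then averages the residuals -- actual
    minus predicted -- within each division. A positive mean residual says
    a division's students outperformed what their entering scores alone
    would predict. An illustrative simplification, not the exact method.
    """
    sat = np.asarray(entering_sat, dtype=float)
    cla = np.asarray(senior_cla, dtype=float)
    slope, intercept = np.polyfit(sat, cla, 1)
    residuals = cla - (slope * sat + intercept)
    by_division = {}
    for div, res in zip(divisions, residuals):
        by_division.setdefault(div, []).append(res)
    return {div: float(np.mean(r)) for div, r in by_division.items()}

# Hypothetical data for six students in two divisions.
print(corrected_division_gains(
    [1150, 1300, 1050, 1200, 1250, 1100],
    [1210, 1330, 1150, 1200, 1260, 1190],
    ["languages", "languages", "languages",
     "sciences", "sciences", "sciences"],
))
```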

How was this to be explained? Could reading Proust somehow hone critical thinking more than working in the lab? (Maybe so.)

But the sample size was small and came from one exceptional institution, one where students in all divisions did better than their SAT scores would lead one to expect, and where the average corrected gain on CLA is 1.5 standard deviations, well above the national average. (Perhaps Inside Higher Ed should sponsor the “Kalamazoo Challenge,” to see if other institutions can show even better results in their CLA data.)

The obvious next step was to ask Roger Benjamin of the Collegiate Learning Assessment if his associates would crunch some numbers for me. They obliged, with figures showing changes over four years for both parts of the CLA -- the performance task and analytical writing. Once again, the figures were corrected on the basis of entering ACT or SAT scores.

The gains came in clusters. At the top was sociology, with an average gain of just over 0.6 standard deviations. Then came multi- and interdisciplinary studies, foreign languages, physical education, math, and business with gains of 0.50 SDs or more.

The large middle cluster included (in descending order) education, health-related fields, computer and information sciences, history, psychology, law enforcement, English, political science, biological sciences, and liberal and general studies.

Behind them, with gains between 0.30 and 0.49 SDs, came communications (speech, journalism, television, radio etc.), physical sciences, nursing, engineering, and economics. The smallest gain (less than 0.01 standard deviations) was in architecture.
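For readers unused to effect sizes: “gains of 0.50 SDs” expresses the freshman-to-senior difference in standard-deviation units. Below is a minimal sketch of one common convention, using the pooled standard deviation; the CLA’s own value-added formula differs and also adjusts for entering scores, which this sketch omits.

```python
import numpy as np

def standardized_gain(freshman_scores, senior_scores):
    """Freshman-to-senior gain in pooled-standard-deviation units.

    A simple Cohen's-d-style effect size; the figures quoted above were
    additionally corrected for entering ACT/SAT scores, omitted here.
    """
    f = np.asarray(freshman_scores, dtype=float)
    s = np.asarray(senior_scores, dtype=float)
    pooled_sd = np.sqrt((f.var(ddof=1) + s.var(ddof=1)) / 2)
    return (s.mean() - f.mean()) / pooled_sd

# Hypothetical scores simulating roughly a 0.5-SD shift for seniors.
rng = np.random.default_rng(0)
freshmen = rng.normal(1100, 150, 200)
seniors = rng.normal(1175, 150, 200)
print(f"gain: {standardized_gain(freshmen, seniors):.2f} SDs")
```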

The list seemed counterintuitive to me when I first studied it, just as the Kalamazoo data had. In each case, ostensibly rigorous fields, including most of the STEM disciplines (the exception was math), had disappointing results. Once again the foreign languages shone, while most other humanistic disciplines cohabited with unfamiliar bedfellows such as computer science and law enforcement. Social scientific fields scattered widely, from sociology at the very top to economics close to the bottom.

When one looks at these data, one thing is immediately clear. The fields that show the greatest gains in critical thinking are not the fields that produce the highest salaries for their graduates. On the contrary, engineers may show only small gains in critical thinking, but they often command salaries of over $100,000. Economists may lag as well, but not at salary time, when, according to “What’s It Worth?”, their graduates enjoy median salaries of $70,000. At the other end, majors in sociology and French, German and other commonly taught foreign languages may show impressive gains, but they have to be content with median salaries of $45,000.

But what do these data tell us about educational practice? It seems unlikely that one subject matter taken by itself has a near-magical power to result in significant cognitive gains while another does nothing of the sort. If that were the case, why do business majors show so much more progress than economics majors? Is there something in the content of a physical education major (0.50 SDs) that makes it inherently more powerful than a major in one of the physical sciences (0.34 SDs)? I doubt it.

Since part of the CLA is based on essays students write during the exam, perhaps the natural science majors simply had not written enough to do really well on the test. (That’s the usual first reaction, I find, to unexpected assessment results -- "there must be something wrong with the test.") That was, however, at best a partial explanation, since it didn’t account for the differences among the other fields. English majors, for example, probably write a lot of papers, but their gains were no greater than those of students in computer sciences or health-related fields.

Another possibility is that certain fields attract students who are ready to hone their critical thinking skills. If so, it would be important to identify what it is in each of those fields that attracts such students. Are there, for example, “signature pedagogies” that have this effect? If so, what are they and how can their effects be maximized? Or is it that certain pedagogical practices, whether or not they attract highly motivated students, increase critical thinking capacities – and others as well? For example, the Wabash national study has identified four clusters of practices that increase student engagement and learning in many areas (good teaching and high-quality interactions with faculty, academic challenge and high expectations, diversity experiences, and higher-order, integrative, and reflective learning).

Some fields, moreover, may encourage students to “broaden out” -- potentially important for the development of critical thinking capacities as one Kalamazoo study suggests. Other disciplines may discourage such intellectual range.

One other hypothesis, I believe, also deserves closer consideration. The CLA is a test of post-formal reasoning. That is, it does not seek to find out if students know the one right answer to the problems it sets; on the contrary, it rewards the ability to consider the merits of alternative approaches. That suggests that students who develop the habit of considering alternative viewpoints, values and outcomes and regularly articulate and weigh alternative possibilities may have an advantage when taking the CLA exam, and quite possibly in real-life settings as well.

Since the study of foreign languages constantly requires the consideration of such alternatives, their study may provide particularly promising venues for the development of such capacities. If so, foreign languages have a special claim on attention and resources even in a time of deep budgetary cuts. Their "signature pedagogies," moreover, may provide useful models for other disciplines.

These varying interpretations of the CLA data open up many possibilities for improving students’ critical thinking. But will these possibilities be fully utilized without new incentives? The current salary structure sends a bad signal when it puts the money where students make very small gains in critical thinking, and gives scant reward to fields that are high performers in this respect. (For example, according to the College & University Professional Association for Human Resources, full professors in engineering average over $114,000, while those in foreign languages average just over $85,000.)

Isn’t it time to shift some resources to encourage experimentation in all fields to develop the cognitive as well as the purely financial benefits of the major?


W. Robert Connor is senior advisor to the Teagle Foundation.
