Assessment

Ohio chancellor wants to end remedial education at public universities

Chancellor of Ohio's higher ed system wants to end remedial education at four-year universities. Critics say the policy could hurt minority and low-income students.

Moneycollege

If you’ve seen “Moneyball,” the new baseball film about the unlikely success of the Oakland A’s and their out-of-the-box-thinking general manager, Billy Beane, you may have already drawn parallels to the current state of higher education. If not, we’re pleased to do it for you.

Early in "Moneyball" there’s a funny scene of Billy sitting around a table with his scouts, wise old men of America’s pastime. The scouts jaw on about players’ arms, legs and bodies and their potential. One scout insists that an ugly girlfriend means that a player doesn’t have confidence. The scouts are entranced by the obvious. And when it comes to metrics, the scouts focus on what’s easy to measure. The scouts love high school pitchers: “High school pitchers had brand-new arms, and brand-new arms were able to generate the one asset scouts could measure: a fastball’s velocity,” Michael Lewis writes in the book on which the movie was based.

But Billy isn’t fooled. He decides to bring data to the table in the form of Peter Brand, a Yalie with an economics degree and a statistics-spewing laptop ready at hand.

It turns out that high school pitchers are much less likely to go on to successful major league careers than are comparable pitchers who have attended college. And when you try to correlate a range of statistics to runs scored, batting average is a poor indicator, whereas on-base percentage (OBP) is highly correlated. So Billy and the A’s eschew high school pitchers and focus on OBP; the A’s begin to value and acquire players with a knack for getting on base any way they can, especially by taking walks.

The result, chronicled in the entertaining film based on Lewis's book, is an unlikely group of major leaguers who, during the 2002 season, win 20 games in a row -- still an American League record -- and make the playoffs.

“My only question is if he’s that good a hitter, why doesn’t he hit better?”

-- Billy Beane

Like baseball 10 years ago, higher education is focused on what’s easy to measure. For baseball it may have been body parts, batting averages and the numbers on the radar gun. For higher education, it’s the 3Rs: research, rankings and real estate. Each of these areas is easily quantified or judged: research citations or number of publications in Nature and Science; U.S. News ranking (or any of a plethora of new entrants to the ranking game, including the international ranking by Shanghai Jiao Tong University); and, in terms of real estate, how much has been spent on a new building and how stately, innovative and generally impressive it appears.

Unfortunately, the 3Rs correlate about as closely to student learning and student outcomes as batting average or fastball velocity, which is to say, not at all. Buildings are the “ugly girlfriend” of higher education.

Universities that continue to focus on the 3Rs in the wake of the seismic shifts currently roiling higher education (state budget cuts, increased sticker shock, technology-based learning) are either not serious about improving student learning and student outcomes, or they’re like the baseball fan who has lost her car keys in the stadium parking lot at night. Where does she look for them? Not where she lost them, but under the light because that’s where she can see.

“A young player is not what he looks like, or what he might become, but what he has done.”

-- Billy Beane

Similarly, a university is not what its buildings look like, or what its reputation or rankings say, but what it has done. And by done, we don’t mean research. The link between research and instructional efficacy is unproven at best. We define effective instruction as producing measurable outcomes in student learning and employment.

The first step will be to get the data; before we find the Billy Beane of higher education, we need to find Bill James. With his famous Baseball Abstract, Bill James revolutionized how baseball data were tracked and which metrics were considered most important to the success of teams and individual players. James jump-started a movement, sabermetrics, that gathered data that had never before been systematically collected: the pitch count at the end of at-bats, pitch types and locations, the direction and distance of batted balls.

A report issued last month by Complete College America, an organization funded by the Bill & Melinda Gates Foundation and the Lumina Foundation for Education, demonstrates just how ripe higher education is for sabermetrics. While the report was sobering in the data it did present (e.g., of every 100 students who enroll in a public college in Texas, 79 enroll in a community college -- of these 79, only seven have completed a program in four years’ time), more fundamental are the huge holes in the data -- larger than the holes in the Houston Astros infield! According to Stan Jones, president of Complete College America, the data are incomplete because students who enroll part-time or who transfer are not tracked: “We know they enroll, but we don’t know what happens to them,” he said. “We shouldn’t make policy based on the image of students going straight from high school to college, living on campus, and graduating four years later, when the majority of college students don’t do that.”

“The great thing about college players: they had meaningful stats. They played a lot more games, against stiffer competition, than high school players. The sample size of their relevant statistics was larger, and therefore a more accurate reflection of some underlying reality. You could project college players with greater certainty than you could project high school players.”

-- Michael Lewis, Moneyball

How ironic that we may be doing a better job gathering baseball statistics at colleges than we are at gathering education statistics. It is essential that we begin to track persistence data on part-time and transfer students on a systematic basis. The Department of Education should lead this initiative. Failing that, Gates, Lumina and others undoubtedly will pick up the slack.

Just as the Moneyball approach has narrowed the gap between teams with $40 million payrolls and teams with payrolls three times higher (see, e.g., the Tampa Bay Rays, who stormed back in September to take the American League wild card berth from Boston with a $41 million payroll, roughly a quarter of the Red Sox payroll), finding and tracking the OBP of higher education will do the same for data-driven institutions of all stripes, including those that do not receive state subsidies and those that pay taxes.

With the right data, dozens of would-be Billy Beanes will spring up across the country arguing over what the on-base percentage equivalent for higher education is, coalescing on persistence and completion metrics that are meaningful for all students (i.e., traditional/adult, full-time/part-time, on-ground/online) and helping their institutions reform and restructure to increase “wins.”

Completion Rates in Context

Much attention has been directed at college completion rates in the past two years, since President Obama announced his goal that the United States will again lead the world with the highest proportion of college graduates by 2020. The most recent contribution to this dialogue was last month’s release of "Time Is the Enemy" by Complete College America.

Much in the introduction to this report is welcome. Expanding completion rate reporting to include part-time students, recognizing that more students are juggling employment and family responsibilities with college, acknowledging that many come to college unprepared for college-level work -- such awareness should inform our policy choices. All in higher education share the desire expressed by Complete College America that more students complete their programs, and do so in less time.

The graduation rates for two-year institutions included in "Time Is the Enemy" show, however, just how inadequate our current measures are for assessing community college student degree progress -- a shortfall also acknowledged by the appointment of the federal Committee on Measures of Student Success, which is charged with making recommendations to the U.S. education secretary by April. Our current national completion measures for community colleges underestimate the true progress of students, presenting a misleading picture of the performance of these open-admissions institutions.

The following suggestions might inform a new set of national metrics for assessing student performance at two-year institutions.

Completion Rates for Community Colleges Should Include Transfers to Baccalaureate Institutions. Although community colleges usually advise students aiming for a bachelor’s degree to complete their associate degree before transferring, to reap the benefits of additional tuition savings and attain a credential, transferring before attaining the associate degree is, for many students, a rational decision. Accepting admission and assimilating into competitive baccalaureate programs and institutions, establishing mentorships with professors in the intended baccalaureate major, or embracing the residential college experience may all lead students to transfer before completing the associate degree. In addition, for a variety of reasons, universities may delay admission of incoming freshmen to the spring semester and advise them to start in the fall at a community college. These students are not seeking degrees at the community college, and will transfer after one semester. Thus, for two-year institutions, preparing students for transfer to a four-year institution should be considered an outcome as favorable as a student earning an associate degree.

The appropriate completion measure for community colleges is a combined graduation-transfer rate. The preferred metric is the percentage of students in the initial cohort who have graduated and/or transferred to a four-year institution. It is important to include transfers to out-of-state institutions in these calculations. In Maryland, a fourth of the community college transfers to baccalaureate institutions enroll in colleges and universities outside of Maryland. Reliance on state reporting systems that do not utilize national databases such as the National Student Clearinghouse to report this metric results in serious underestimates of student success. The need to track transfers across state lines is a major reason for the so-far-unsuccessful push for a national unit record system.
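
Stated compactly (the notation here is ours, not the report's): if C is the entering cohort, G the members of C who earn an associate degree within the tracking period, and T those who transfer to a four-year institution (in state or out of state), then the measure is

    \text{combined graduation-transfer rate} = \frac{|G \cup T|}{|C|} \times 100

where the union ensures that a student who both graduates and transfers is counted only once.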

Comparisons of completion rates at community colleges and four-year institutions, where transfer is not included in the community college measure, are inappropriate. Reports such as "Time Is the Enemy" that report graduation rates for community colleges, with table labels such as “Associate Degree-seeking Students,” are misleading in that these calculations include many students who are pursuing baccalaureate transfer programs with no intention of earning the associate.

Completion Rate Calculations Should Exclude Students Not Seeking Degrees. Community colleges serve many students not seeking a college degree, and these students should be excluded from the calculation of completion rates. A student’s stated intent at entry is not adequate to identify degree-seekers, since students may be uncertain about their goals and goals may change. Enrollment in a degree program is not adequate either, since students without a degree goal must declare a program in order to be eligible for financial aid, and many colleges force students to choose a major in order to gauge student interest for advising purposes.

A better way to define degree-seeking status is based on student behavior. Have students demonstrated pursuit of a degree by enrolling in more than two or three classes? A minimum number of attempted hours is the preferred way of defining the cohort to study. In Maryland, to be included in the denominator of graduation-transfer rates, a student must attempt at least 18 hours within two years of entry. Hours in developmental or remedial courses are included. This way of defining the cohort has several benefits. It does not exclude students beginning as part-time students, as IPEDS does. It eliminates transient students with short-term job skill enhancement or personal enrichment motives. By using attempted hours as the threshold, rather than earned credits as in some other states, this definition does not bias the sample toward success. Students who fail all their courses and earn zero credits will still be in the cohort if they have attempted 18 hours. And finally, it seems reasonable that students show some evidence of effort to persist if institutions are to be held accountable for their degree attainment.
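
To make the arithmetic concrete, here is a minimal sketch -- in Python, with invented record fields rather than any actual state reporting schema -- of how a behaviorally defined cohort and its combined graduation-transfer rate might be computed under a Maryland-style rule:

    # Illustrative sketch only: the field names and the 18-hour threshold follow the
    # rule described above; this is not an actual reporting system's schema.
    from dataclasses import dataclass

    @dataclass
    class StudentRecord:
        student_id: str
        attempted_hours_first_two_years: float  # includes developmental/remedial hours
        graduated: bool                          # earned an associate degree in the tracking window
        transferred_to_four_year: bool           # any four-year transfer, in state or out of state

    COHORT_THRESHOLD_HOURS = 18  # minimum attempted hours within two years of entry

    def graduation_transfer_rate(records: list[StudentRecord]) -> float:
        """Percentage of the behaviorally defined cohort that graduated and/or transferred."""
        # Cohort membership depends on attempted, not earned, hours, so students who
        # failed every course remain in the denominator.
        cohort = [r for r in records
                  if r.attempted_hours_first_two_years >= COHORT_THRESHOLD_HOURS]
        if not cohort:
            return 0.0
        # "And/or": a student who both graduates and transfers is counted only once.
        successes = sum(1 for r in cohort
                        if r.graduated or r.transferred_to_four_year)
        return 100.0 * successes / len(cohort)

The design choice that matters, as noted above, is that the denominator is defined by attempted rather than earned hours, so the cohort is not biased toward success.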

Recognize that Community College Students Who Start Full-time Typically Do Not Remain Full-time. A number of studies suggest that the majority of community college students initially enrolling full-time switch to part-time attendance. This contrasts with students at most four-year institutions, who start and remain full-time. For example, 52 percent of students at community colleges that participate in the Achieving the Dream project began as full-time students. Yet only 31 percent attended full-time for the entire first year. Studies of Florida’s community colleges find similar results. Most students end up with a combination of full-time and part-time attendance, regardless of their initial status. Among students who enrolled for at least three additional semesters, only 30 percent of Florida’s “full-time” community college students enrolled full-time every semester. As a Florida College System report concludes, “Expecting a ‘full-time’ student to complete an associate degree in two years or even three assumes that the student remains full-time and this is most often not the case. As a result, students will progress at rates slower than assumed by models that consider initial full-time students to be full-time throughout their time in college.” Thus, comparisons of completion rates at two-year and four-year institutions, even controlling for full-time status in the first semester, are misleading. Studies at my college suggest that completion rates of community college students who start full-time and continuously attend full-time without interruption are comparable to completion rates attained at many four-year institutions.

Extend the Time for Assessing Completion to at Least Six Years. “Normal time” to completion excludes most associate degree completers. Due to part-time attendance, interrupted studies, and the need to complete remedial education, most associate degree graduates take more than three years to finish. Completion rates calculated at the end of three or four years will undercount true completion. It is not uncommon for a third of associate degree completers to take more than four years to complete their degree. At my institution, fully 5 percent of our associate degree recipients take 10 or more years to complete their “two-year” degree. These students are not failures; they are heroes. Yes, we would all like students to complete their degrees more quickly. But if life circumstances dictate a slower pace, let us support these students in their remarkable persistence. And, in our accountability reporting, recognize that our completion rate statistics are time-bound and fail to account for all who will eventually succeed in their degree pursuit.

When Comparing Completion Rates, Compare Institutions with Similar Students. Differences in completion rates among institutions largely reflect differences in student populations. Community college students who are similar to students at four-year institutions in academic preparation, and in their ability to consistently attend full-time, achieve completion rates comparable to those at many four-year institutions. In Maryland, if you include transfer as a community college completion, community colleges have four-year completion rates equal to or higher than the eight-year bachelor’s degree graduation rates at a majority of the state’s four-year institutions with open or low-selectivity admissions. And the completion rate of college-ready community college students -- those not needing developmental education -- is similar to that at all but the most selective four-year schools. At my college, 88 percent of the students in our honors program have graduated with an associate degree in two years. This graduation rate is comparable with that of Johns Hopkins and above that of the flagship University of Maryland at College Park.

Students at four-year institutions who are similar in profile to the typical community college student have completion rates similar to those attained at community colleges. This is not a new finding. A March 1996 report, "Beginning Postsecondary Students: Five Years Later," identified the following “risk factors” affecting bachelor’s degree completion: delayed enrollment in higher education, being a GED recipient, being financially independent, having children, being a single parent, attending part-time, and working full-time while enrolled. Fifty-four percent of the students who had none of these risk factors earned the bachelor’s degree within five years. The graduation rate for students with just one of these risk factors fell to 42 percent. For students with two risk factors the bachelor’s degree graduation rate was 21 percent, and for those with three or more the graduation rate was 13 percent.

Readers of this essay who work at community colleges are probably smiling to themselves. For most community colleges, the majority, if not the overwhelming majority, of students are coping with several of these risk factors. And this list does not account for the need of most community college students for developmental or remedial education. The comparability of completion rates at two- and four-year institutions, when student characteristics are controlled for, should not be a surprising finding.

If we must compare completion rates, it is incumbent upon analysts to account for differences in the academic preparation and life circumstances of student populations. This can be done by sophisticated statistical analysis, or in the selection of peer groups of institutions with similar admissions policies and student body demographics.

Support Hopeful Signs at the Federal Level. The work to date of the Committee on Measures of Student Success, authorized by the Higher Education Opportunity Act of 2008, is encouraging. The committee is to make recommendations to the Secretary of Education by April 2012 regarding the accurate reporting of completion rates for community colleges.

A number of the recommendations in the committee’s draft report issued September 2, 2011 would greatly improve reporting of completion statistics for community colleges:

  • Defining the degree-seeking cohort for calculating completion rates by looking at student behavior, such as a threshold number of hours attempted.
  • Recognizing that “preparing students for transfer to a four-year institution is an equally positive outcome as a student earning an associate’s degree.”
  • Reporting a combined graduation-transfer rate as the primary outcome measure for degree-seeking students.
  • Creating an interim, persistence measure combining lateral transfer with retention at the initial institution.

These recommendations show an understanding of the student populations served by community colleges. Inclusion of these definitions and measures in federal IPEDS reporting would provide more meaningful peer, state, and national benchmarks for all community colleges.

Craig A. Clagett

Assuring Civility or Curbing Criticism?

Higher ed research group calls off a panel that would have focused on a controversial issue of its journal, which featured articles questioning student engagement surveys.

Too Many Rules

A federal panel asking whether Higher Education Act regulations are burdensome got an earful.

Classroom Styles

When I received my first test score -- a 3 out of 10 -- in college introductory psychology, I realized that I had some hard slogging ahead, especially after the professor told me that "there is a famous Sternberg in psychology and it is obvious there won’t be another one." I eventually pulled a C in the course, which the professor referred to as a "gift." That professor was probably as surprised as I was when I earned an A in his upper-level course, and I certainly was grateful to him when, as chair of the search committee, he hired me back to my alma mater (Yale University) as an assistant professor, where I would remain as a professor for 30 years. My instructor probably wondered, as did I, how I could have done so poorly in the introductory course and so much better in the upper-level course.

There may have been multiple contributing causes to the difference in performance, but one was almost certainly a difference in the styles of learning and thinking that were rewarded in the two courses. The lower-level course was pretty much a straight, memorize-the-book kind of course, whereas the upper-level course was one that encouraged students to formulate their own research studies and to analyze the research studies of others.

Psychologists and educators differ as to whether they believe in the existence of different styles of learning and thinking. Harold Pashler and his colleagues have claimed that the evidence for their existence is weak, but a number of scholars, whose work is summarized in a 2006 book I wrote with Li-fang Zhang entitled The Nature of Intellectual Styles, and in a forthcoming edited Handbook of Intellectual Styles, have provided what we believe to be compelling evidence for the existence and importance of diverse styles of learning and thinking. I have often felt that anyone who has raised two or more children will be aware, at an experiential level, that children learn and think in different ways.

My own thinking about styles of learning and thinking has been driven by my "theory of mental self-government," which I first presented in book format in a volume entitled Thinking Styles. According to this theory, the ways of governments in the world are external reflections of what goes on in people’s minds. There are 13 different styles in the theory, but consider now just three of them. People with a legislative style like to come up with their own ideas and to do things in their own way; people with an executive style prefer to be given more structure and guidance or even told what to do; people with a judicial style like to evaluate and judge things and especially the work of others.

From this point of view, the introductory psychology course I took, like many introductory courses, particularly rewarded students with an executive style – students who liked to memorize what they read in books or heard in lectures. In contrast, the advanced psychology lab course more rewarded students with a legislative or judicial style, in that students came up with ideas for their own experiments and evaluated the research of others.

In a series of studies I conducted with Elena Grigorenko of Yale University and later with Li-fang Zhang of the University of Hong Kong, we had both teachers and students fill out questionnaires based on my theory of mental self-government. In one set of studies with Grigorenko, we then computed a measure of the similarity of the profile of each student to his or her teacher. We also evaluated the styles preferred by the diverse educational institutions on the basis of their mission statements and descriptive literature. There are three findings from that study of particular importance to college classrooms.

The first finding was that institutions differ widely in the styles of thinking that they reward. For example, in the study, one institution tended to reward a conservative style (characterizing people who like things to remain more or less the way they are) and tended to penalize a liberal style (characterizing people who like things to change), whereas another rewarded exactly the opposite pattern. The correlations of styles with academic success were statistically significant in both schools, but in opposite directions. Teachers also value different styles. Hence it is important for students to select a college or university and, to the extent possible, professors who value at least to some degree the kinds of learning and thinking that best characterize a particular student. Similarly, it is important for professors to select a school at which to work that values the ways in which they prefer to think and to teach.

The second relevant finding was that teachers tend to overestimate the extent to which students match their own profile of learning and thinking styles. Teachers often teach in a way that reflects their own preferred styles of learning and thinking, not fully realizing that the styles that they prefer may not correspond to the styles that many of their students prefer. They believe they are teaching in ways that meet the needs of diverse students, when in fact they often are not. In essence, we are at risk for teaching to ourselves rather than to our students.

The third key finding was that teachers tended to grade more highly students whose profiles of learning and thinking better matched their own. In showing this pattern, the teachers were not purposely favoring, nor probably were they even aware they were favoring, people like themselves. But the fundamental principle of interpersonal attraction is that we are more attracted to people who are like ourselves, and so it is not surprising that teachers would place greater value on students who think in the same ways they do. Ideally, teachers will be flexible, both within and between courses. (The psychology professor to whom I referred earlier was flexible between courses, but not within each course.)

Where these preferences particularly become a problem is when the styles that lead to success in a particular course do not match the styles that will be needed for success either in more advanced courses in the same discipline, or, worse, in the occupation for which the course prepares students. For example, in most occupations, one does not sit around taking short-answer or multiple-choice tests on the material one needs to succeed in the job. The risk, then, is that schools will reward students whose styles match the way they are taught but not the requirements of the work for which the teaching prepares them. As an example, 35 years after receiving the C in introductory psychology, I was president of the American Psychological Association -- the largest association of psychologists in the world -- and did not once have to sit down and take fact-based quizzes on the material I needed to succeed on the job. Indeed, the factual content that would be taught in an introductory-psychology course, and in many other courses, had changed radically in the 35 years that had passed since I took the course.

In my own teaching, I have had run-ins with the importance of styles. For example, when I first started teaching introductory psychology, I taught it the way I ideally would have liked the course, with lots of emphasis on "legislative" activities — students coming up with their own ideas for structuring their learning. It became obvious to me within a couple of weeks that the course was failing to meet the learning needs of the students. I later realized it was for the same reason that the introductory psychology course I had taken had not worked for me. I was teaching to my own style of learning, not to the diversity of students’ styles of learning. I now try to teach in ways that encourage a mix of legislative, executive, and judicial activities. For example, students come up with their own ideas for papers, but also have to answer some short-answer questions on tests and have to analyze the theories and research of various investigators.

Similarly, in teaching an advanced statistics course, I had pretty much pegged some of the students as "stronger learners" and other students as "weaker learners." One day, I read about how to teach a technique I was covering geometrically rather than in the algebraic way I had been teaching that and other material in the course. When I started teaching the material geometrically, I found that many of the students I had identified as "strong learners" were having difficulty, whereas many of the students I had identified as "weak learners" were easily absorbing the material. I had confounded strength of students’ learning skills with match of their learning style to the way I happened to be teaching.

In sum, styles of learning and thinking matter in the classroom. We best serve our students when we teach in a way that enables all students to capitalize on their preferred styles at least some of the time, but that recognizes that students must acquire flexibility in their use of styles and so cannot always learn in their preferred way. My own institution, Oklahoma State University, has a Learning and Student Success Opportunity Center that intervenes with students in ways specifically oriented toward meeting the needs of their diverse learning and thinking styles. Our Institute for Teaching and Learning Excellence teaches teachers how to meet the stylistic needs of students. Our goal in higher education should be to prepare students for the diverse demands later courses and careers will make on their learning and thinking styles so that they can be successful not just in our course, but in their later studies and work.

Robert J. Sternberg is provost, senior vice president, and Regents Professor of Psychology and Education at Oklahoma State University.

A More Complete Completion Picture

National group includes part-time and other students typically omitted from measures of college success -- and the numbers are not pretty.

Questioning Assumptions

Community college leaders say their campuses can do better, rather than focusing on outside forces that are buffeting them.

Measuring Engagement

For more than a decade, the National Survey of Student Engagement (NSSE) and the Community College Survey of Student Engagement (CCSSE) have provided working faculty members and administrators at over 2,000 colleges and universities with actionable information about the extent to which they and their students are doing things that decades of empirical study have shown to be effective. Recently, a few articles by higher education researchers have expressed reservations about these surveys. Some of these criticisms are well-taken and, as leaders of the two surveys, we take them seriously. But the nature and source of these critiques also compel us to remind our colleagues in higher education just exactly what we are about in this enterprise.

Keeping purposes in mind is keenly important. For NSSE and CCSSE, the primary purpose always has been to provide data and tools useful to higher education practitioners in their work. That’s substantially different from primarily serving academic research. While we have encouraged the use of survey results by academic researchers, and have engaged in a great deal of it ourselves, this basic purpose fundamentally conditions our approach to “validity.” As cogently observed by the late Samuel Messick of the Educational Testing Service, there is no absolute standard of validity in educational measurement. The concept depends critically upon how the results of measurement are used. In applied settings, where NSSE and CCSSE began, the essential test is what Messick called “consequential validity” -- essentially the extent to which the results of measurement are useful, as part of a larger constellation of evidence, in diagnosing conditions and informing action. This is quite different from the pure research perspective, in which “validity” refers to a given measure’s value for building a scientifically rigorous and broadly generalizable body of knowledge.

The NSSE and CCSSE benchmarks provide a good illustration of this distinction. Their original intent was to provide a heuristic for campuses to initiate broadly participatory discussions of the survey data and implications by faculty and staff members. For example, if data from a given campus reveal a disappointing level of academic challenge, educators on that campus might examine students’ responses to the questions that make up that benchmark (for example, questions indicating a perception of high expectations). As such, the benchmarks’ construction was informed by the data, to be sure, but equally informed by decades of past research and experience, as well as expert judgment. They do not constitute “scales” in the scientific measurement tradition but rather groups of conceptually and empirically related survey items. No one asked for validity and reliability statistics when Art Chickering and Zelda Gamson published the well-known Seven Principles for Good Practice in Undergraduate Education some 25 years ago, but that has not prevented their productive application in hundreds of campus settings ever since.

The purported unreliability of student self-reports provides another good illustration of the notion of consequential validity. When a student is asked to tell us the frequency with which she engaged in a particular activity (say, making a class presentation), it is fair to question how well her response reflects the absolute number of times she actually did so. But that is not how NSSE and CCSSE results are typically used. The emphasis is most often placed instead on the relative differences in response patterns across groups -- men and women, chemistry and business majors, students at one institution and those elsewhere, and so on. Unless there is a systematic bias that differentially affects how the groups respond, there is little danger of reaching a faulty conclusion. That said, NSSE and CCSSE have invested considerable effort to investigate this issue through focus groups and cognitive interviews with respondents on an ongoing basis. The results leave us satisfied that students know what we are asking them and can respond appropriately.

Finally, NSSE and CCSSE results have been empirically linked to many important outcomes, including retention and degree completion, grade point average, and performance on standardized generic-skills examinations, by a range of third-party, multi-institutional validation studies involving thousands of students. After the application of appropriate controls (including incoming ability measures), these relationships are statistically significant but modest. But, as the work of Ernest Pascarella and Patrick Terenzini attests, such is true of virtually every empirical study of the determinants of these outcomes over the last 40 years. In contrast, the recent handful of published critiques of NSSE and CCSSE are surprisingly light on evidence. And what evidence is presented is drawn from single-institution studies based on relatively small numbers of respondents.

We do not claim that NSSE and CCSSE are perfect. No survey is. As such, we welcome reasoned criticism and routinely do quite a bit of it on our own. The bigger issue is that work on student engagement is part of a much larger academic reform agenda, whose research arm extends beyond student surveys to interview studies and on-campus fieldwork. A prime example is the widely acclaimed volume Student Success in College by George Kuh and associates, published in 2005. To reiterate, we have always enjoined survey users to employ survey results with caution, to triangulate them with other available evidence, and to use them as the beginning point for campus discussion. We wish we had an electron microscope. Maybe our critics can build one. Until then, we will continue to move forward on a solid record of adoption and achievement.

Peter Ewell is senior vice president of the National Center for Higher Education Management Systems and chairs the National Advisory Boards for both NSSE and CCSSE. Kay McClenney is a faculty member at the University of Texas at Austin, where she directs the Center for Community College Student Engagement. Alexander C. McCormick is a faculty member at Indiana University at Bloomington, where he directs the National Survey of Student Engagement.

Low-Hanging Fruit

Educators consider how they can get "near-completers" to finish up their college degrees.
