
For more than a decade, the National Survey of Student Engagement (NSSE) and the Community College Survey of Student Engagement (CCSSE) have provided working faculty members and administrators at over 2,000 colleges and universities with actionable information about the extent to which they and their students are doing things that decades of empirical study have shown to be effective. Recently, a few articles by higher education researchers have expressed reservations about these surveys. Some of these criticisms are well-taken and, as leaders of the two surveys, we take them seriously. But the nature and source of these critiques also compel us to remind our colleagues in higher education exactly what this enterprise is about.

Keeping purposes in mind is keenly important. For NSSE and CCSSE, the primary purpose has always been to provide data and tools useful to higher education practitioners in their work. That’s substantially different from primarily serving academic research. While we have encouraged the use of survey results by academic researchers, and have engaged in a great deal of such research ourselves, this basic purpose fundamentally conditions our approach to “validity.” As the late Samuel Messick of the Educational Testing Service cogently observed, there is no absolute standard of validity in educational measurement. The concept depends critically upon how the results of measurement are used. In applied settings, where NSSE and CCSSE began, the essential test is what Messick called “consequential validity” -- the extent to which the results of measurement are useful, as part of a larger constellation of evidence, in diagnosing conditions and informing action. This is quite different from the pure research perspective, in which “validity” refers to a given measure’s value for building a scientifically rigorous and broadly generalizable body of knowledge.

The NSSE and CCSSE benchmarks provide a good illustration of this distinction. Their original intent was to give campuses a heuristic for initiating broadly participatory discussions, among faculty and staff members, of the survey data and their implications. For example, if data from a given campus reveal a disappointing level of academic challenge, educators on that campus might examine students’ responses to the questions that make up that benchmark (for example, questions indicating a perception of high expectations). The benchmarks’ construction was informed by the data, to be sure, but equally informed by decades of past research and experience, as well as expert judgment. They do not constitute “scales” in the scientific measurement tradition but rather groups of conceptually and empirically related survey items. No one asked for validity and reliability statistics when Art Chickering and Zelda Gamson published the well-known Seven Principles for Good Practice in Undergraduate Education some 25 years ago, but that has not prevented their productive application in hundreds of campus settings ever since.

The purported unreliability of student self-reports provides another good illustration of the notion of consequential validity. When a student is asked to tell us the frequency with which she engaged in a particular activity (say, making a class presentation), it is fair to question how well her response reflects the absolute number of times she actually did so. But that is not how NSSE and CCSSE results are typically used. The emphasis is most often placed instead on the relative differences in response patterns across groups -- men and women, chemistry and business majors, students at one institution and those elsewhere, and so on. Unless there is a systematic bias that differentially affects how the groups respond, there is little danger of reaching a faulty conclusion. That said, NSSE and CCSSE have invested considerable, ongoing effort in investigating this issue through focus groups and cognitive interviews with respondents. The results leave us satisfied that students know what we are asking them and can respond appropriately.
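To make that reasoning concrete, consider a minimal, hypothetical sketch in Python. The numbers are invented for illustration and are not drawn from NSSE or CCSSE data; the point is simply that a reporting bias shared by all respondents shifts every group’s average but leaves the comparison between groups intact.

# Hypothetical average class presentations per term for two groups
# (invented numbers for illustration, not survey data).
true_average = {"chemistry majors": 4.0, "business majors": 6.0}

# Suppose every student over-reports by roughly the same amount.
uniform_bias = 1.5
reported_average = {group: avg + uniform_bias for group, avg in true_average.items()}

true_gap = true_average["business majors"] - true_average["chemistry majors"]
reported_gap = reported_average["business majors"] - reported_average["chemistry majors"]

# Both gaps equal 2.0: the relative comparison survives the shared bias.
print(true_gap, reported_gap)

A bias that differs across groups is, of course, exactly the kind of systematic distortion that would threaten such a comparison, which is why it is the focus of the cognitive-interview work described above.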

Finally, a range of third-party, multi-institutional validation studies involving thousands of students has empirically linked NSSE and CCSSE results to many important outcomes, including retention and degree completion, grade-point average, and performance on standardized examinations of generic skills. After the application of appropriate controls (including measures of incoming ability), these relationships are statistically significant but modest. But, as the work of Ernest Pascarella and Patrick Terenzini attests, the same is true of virtually every empirical study of the determinants of these outcomes over the last 40 years. In contrast, the recent handful of published critiques of NSSE and CCSSE is surprisingly light on evidence, and what evidence is presented is drawn from single-institution studies based on relatively small numbers of respondents.

We do not claim that NSSE and CCSSE are perfect. No survey is. We therefore welcome reasoned criticism and routinely engage in quite a bit of it ourselves. The bigger issue is that work on student engagement is part of a much larger academic reform agenda, whose research arm extends beyond student surveys to interview studies and on-campus fieldwork. A prime example is the widely acclaimed volume Student Success in College by George Kuh and associates, published in 2005. To reiterate, we have always enjoined survey users to employ survey results with caution, to triangulate them with other available evidence, and to use them as the beginning point for campus discussion. We wish we had an electron microscope. Maybe our critics can build one. Until then, we will continue to move forward on a solid record of adoption and achievement.
