Curriculum and assessment fads of dubious value that waste millions of dollars and endless hours of student time have long afflicted elementary and secondary education. Higher education, which has seen fewer fads, is now playing catch-up via the National Survey of Student Engagement, universally known as NSSE.
NSSE’s growing use reflects the extent to which it -- with endorsements from Margaret Spellings’ Commission on the Future of Higher Education and widespread citations in publications of all types -- has become a widely embraced element of accreditation and accountability discussions. A prime example is the Voluntary System of Accountability (VSA), where NSSE is one of the assessment instruments participants can use to document the experiences of their undergraduate students -- and it is by far the most widely used.
In late October of this year, a gathering in Indianapolis drew together NSSE leaders, college presidents and assessment experts to celebrate NSSE’s success. Even at that celebratory meeting, NSSE partisans raised questions about the way campuses use the assessment -- many noted that campuses often administer NSSE to check an “accountability” box without actually changing any of the practices that NSSE shows to be lagging. The conference focused on the use of NSSE to identify conditions within a college, but, as the VSA example shows, NSSE is also used for comparisons across campuses.
Both uses of NSSE deserve critical evaluation.
'Approved' vs. 'Off-Label' Uses of NSSE
In the United States, the Food and Drug Administration reviews new drug applications to determine if the results of clinical trials support the use of the drug for a specific medical condition. These approved uses are listed on the drug’s label. However, doctors routinely prescribe many drugs for “off-label” use, treating conditions for which the drug has not been officially approved. Sometimes these drugs are effective and sometimes they are not.
Similarly, NSSE has “approved” and “off-label” uses, and it is often deployed more widely, and differently, than its “label” calls for. NSSE is “approved” for analysis within an institution by the Indiana University Center for Postsecondary Research, which is responsible for the survey. But NSSE is increasingly used for cross-institution comparative analysis -- its “off-label” use.
Does NSSE Fulfill Its Approved Use?
Recently, a series of reports and criticisms has emerged suggesting that NSSE is a flawed instrument on which to base corrective actions, even within a single campus. References to effective practices related to student learning saturate NSSE’s literature and provide the “scientific” foundation for many of its claims. However, a surprisingly large number of these assertions rest on correlational studies from the 1970s and 1980s, and many have never been subjected to rigorous analysis.
In 2007, Gary Taubes argued in a feature article in the New York Times Magazine that much “medical wisdom” is based on unreliable epidemiological or observational studies that more rigorous studies later disprove. Similarly, NSSE’s self-validation based on “wisdom” and practice derived from correlational studies is a flimsy base for such a popular assessment. Indeed, in a paper that has been getting wide attention on this site and elsewhere, Stephen Porter argues that scant empirical evidence exists to link NSSE scores to student learning outcomes.
Porter’s critique looks closely at NSSE’s psychometric qualities and concludes that it “fails to meet basic standards for validity and reliability.” Porter’s wide-ranging analysis questions NSSE’s survey practices on a fundamental level. He asks whether students can recall the information they need to answer NSSE’s questions, and he examines the extent to which different students understand the terms those questions use -- exploring, for example, whether students share a common definition of “thinking critically and analytically,” or even of the term “instructor.”
NSSE also seems attuned to a bygone ideal of college life, in which students attend a single college, never transfer, and take classes that meet regularly with one faculty member in a campus classroom. In short, NSSE adheres to an outdated perspective that fails to square with the reality of current college attendance, in which “traditional” students are an ever-shrinking share. Like IPEDS, the nation’s leading source of data about higher education institutions, NSSE seems in danger of missing the fundamental transformation of the student body taking place around us.
These are all serious questions about the degree to which NSSE meets its “approved” use of measuring campus conditions. We will need to rely on the scientific marketplace to render ultimate judgment on NSSE. Unfortunately, that may be a slow process, during which NSSE’s growth will likely continue.
NSSE as a Tool for Institutional Comparisons
I turn next to the growing use of NSSE for institutional comparisons -- an off-label use for which not even correlational evidence exists.
The Center for Postsecondary Research takes pride in how NSSE’s data are being used for institutional comparisons, yet it also distances itself from this use. The ambivalence may reflect the center’s own analysis, which shows how little variance exists across institutions -- making cross-institution comparisons of dubious value.
NSSE’s 2008 annual report notes that “…for almost all of the [NSSE] benchmarks, less than 10% of the total variation in effective educational practices is attributable to institutions. The lion’s share of the variation is among students within institutions.” Figure 2 of the report displays the amount of variance between campuses in the freshman and senior years for each of the five NSSE benchmarks (Academic Challenge, Active and Collaborative Learning, Student-Faculty Interaction, Enriching Educational Experiences, Supportive Campus Environment). For freshmen, the variation across institutions, averaged over the five benchmarks, is just 5 percent; for seniors, 7 percent. (The slightly higher senior figure results from more variance in just one benchmark.)
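To make concrete what “10 percent of the variation” means, statisticians summarize this split with the intraclass correlation: the share of total score variance that lies between institutions rather than among students within them. A minimal sketch, using the report’s own figures:

\[
\rho \;=\; \frac{\sigma^2_{\text{between}}}{\sigma^2_{\text{between}} + \sigma^2_{\text{within}}}
\]

With $\rho$ of roughly 0.05 for freshmen and 0.07 for seniors, knowing which institution a student attends tells you almost nothing about that student’s likely engagement score; at least nine-tenths of the variation lies among students on the same campus.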
In short, almost all of the variance in NSSE scores occurs within institutions and very little occurs between them, making NSSE of questionable value for institutional comparisons. But NSSE reports are replete with endorsements of such comparisons -- most notably in statements regarding the Voluntary System of Accountability, in which NSSE figures so prominently. According to NSSE: “the VSA is designed to help institutions demonstrate accountability, report on educational practices and outcomes, and assemble information that is accessible, understandable, and comparable” (emphasis added).
A prime product of the VSA is the set of “college portraits” that provide comparable information about a growing number of schools. Of the more than 300 institutions that have registered to participate in the VSA, almost all feature NSSE results on their portraits. Despite this widespread use, serious flaws in the NSSE data limit their usefulness in the VSA.
The NSSE data reported in the portraits come from seniors. Since the average graduation rate across all of the nation’s colleges and universities hovers around 50 percent, reporting seniors’ attitudes about their college experiences captures only the “survivors” -- the success stories. This creates a clear self-selection problem: the students included in the senior survey likely hold much more positive attitudes toward the institution than average students do, many of whom are long gone before the senior year.
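The arithmetic of this selection effect is easy to see in a deliberately hypothetical illustration (the percentages are invented purely to show the mechanism, not drawn from NSSE or VSA data). Suppose half of an entering cohort persists to the senior year, 90 percent of persisters rate their experience favorably, and only 50 percent of early leavers would have done so. A senior-only survey reports 90 percent satisfaction, while the figure for the full entering cohort is

\[
0.5 \times 0.90 \;+\; 0.5 \times 0.50 \;=\; 0.70,
\]

that is, 70 percent. The 20-point gap is pure survivorship, not institutional quality.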
An even more fundamental problem is the lack of variation in NSSE measures. Here, for example, are some NSSE results from college portraits of three Oklahoma public institutions:
% of seniors who… | Cameron U. | U. of Central Oklahoma | Oklahoma State U. (main campus)
believe the institution provides support for student success | 98% | 97% | 94%
rated the quality of academic advising at this institution as good or excellent | 78% | 72% | 73%
rated their entire educational experience as good or excellent | 89% | 90% | 86%
Almost all the seniors at each of these institutions believed they received support for student success, and only slightly smaller proportions rated their educational experience as good or excellent. Consistent with NSSE’s own analysis, there is virtually no variance between these institutions. Students’ evaluations of advising were somewhat lower on average, but little variance is evident there, either.
In contrast, on at least one objective measure of student success, there is wide variation: across these institutions graduation rates range from 24 percent to 58 percent.
At the low end, Cameron University has a six-year graduation rate of 24 percent. Cameron’s seniors who filled out the NSSE questionnaire may truly believe that the institution provides support for success or that the quality of academic advising is good or excellent -- but these students survived a system in which most of their peers disappear before earning a diploma; indeed, fewer than half of entering students even make it from the first year to the second.
Central Oklahoma’s graduation rate is 31 percent, so again the survivors are likely a hardy breed. Oklahoma State clocks in at 58 percent -- nearly two and a half times Cameron’s rate and almost twice Central Oklahoma’s -- yet its NSSE scores are all below Cameron’s and lag Central Oklahoma’s on two of the three measures.
The lack of variance in the measures and their lack of relationship with graduation rates should be sobering to those supporting NSSE’s expanding role in institutional comparisons. One other point about the VSA is worth noting: by design, institutions cannot be compared side by side. Even the simple data just presented had to be painstakingly gathered by moving from one portrait page to the next, and the VSA will not provide a spreadsheet of all the scores it is making “public.”
The VSA is not NSSE, so the VSA’s policies and practices that inhibit comparison and transparency cannot be laid at NSSE’s doorstep -- but NSSE waxes enthusiastic about the VSA.
NSSE has yet to resolve another important conflict. On one hand, NSSE in effect endorses using its data for institutional comparisons. Its web site dealing with “Public Reporting of Student Engagement Results” reads, in part:
“NSSE especially supports public reporting of student engagement results in ways that enable thoughtful, responsible institutional comparisons while encouraging and celebrating institutional diversity.” (emphasis added)
Yet at the same time that it endorses comparisons, it has the following disclaimer:
“NSSE does not support the use of student engagement results for the purpose of ranking colleges and universities.”
So institutions may be “compared” but not “ranked”? I confess the difference eludes me. Given the lack of institutional variation, however, the question may be moot -- NSSE data are problematic for comparison and ranking alike.
Conclusion: A Flawed Measure Being Pushed Too Far
Measured by its growth and its place in the world of postsecondary education, NSSE’s success is unprecedented. And given the push for accountability and the scarcity of alternative measures, NSSE’s appeal is understandable. But even NSSE’s leaders admit to being more than a little surprised by its growth.
I fear that this rapid growth has pushed NSSE too far into uses for which it is not suited and may be building “education wisdom” on flawed surveys and results that will not stand up to more rigorous analysis. If evidence from the world of medicine provides any guidance, undoing the lessons “learned” from using this flawed instrument will be a difficult undertaking.