

The angel’s in the idea; the devil’s in the details. In its recently passed 2022 federal omnibus spending legislation, Congress charged the Department of Education with developing a national climate survey asking students about their experiences with domestic violence, dating violence, sexual assault, sexual harassment and stalking, and with overseeing its administration every two years to students at all colleges and universities that accept federal funds. Despite its enormous scope—a biennial survey of tens of millions of students at thousands of colleges and universities—the new requirement has received scant attention in the popular and higher education press, and the Education Department has made no announcements yet about how it intends to comply.

As someone who has worked for many years on both the delivery of climate surveys and the public policy surrounding them, my sense is that a national survey, done well, could provide significant additional information on the impact of harassment and violence on college students. But that lofty goal comes at the end of a road filled with steep challenges. As proposed, the survey could lead to low response rates, confidentiality concerns and questions about the accuracy of data. Instead of rich, meaningful data, institutions will end up with one more bureaucratic compliance obligation and an inappropriate comparison tool built on weak numbers. And they will lose ground on the important and expanding work of conducting meaningful climate surveys that can drive change and increase safety.

Consider the scale and frequency of the federal survey project—the survey could reach 20 million students at well over 5,000 colleges and universities every other year. Only the U.S. Census is larger, and it takes more than 4,000 Census Bureau employees to implement the census successfully, and only once every 10 years.

But while it is “free” for Congress to legislate and require the Education Department to create and administer this survey, the legislation did not contain an appropriation, so it appears the department may have to create it without additional funds (or seek funds separately). Not only will this be expensive for the Education Department, but there are likely to be substantial financial and operational implications for colleges and universities—and especially so for smaller institutions with less bandwidth and research infrastructure. These costs will depend, in large part, on what path the Education Department ultimately takes to comply with the new law.

Length Is the Enemy of Completion

Climate surveys have been an incredible force for higher education’s understanding of the impact of sexual and interpersonal violence on college and university students. With larger and more representative samples, we can draw more granular conclusions about the different experiences of students of different races, ages, gender identities, sexual orientations and beyond. Unfortunately, the large number of topics listed in the legislation will make the survey very lengthy, and implementing it accurately, confidentially and efficiently will be so fraught that it is less likely to yield deeper insights into student experience than current climate surveys already achieve.

In climate surveys, length is the enemy of completion. The longer we make a survey, the more students will close the window before finishing. Worse, attrition is not evenly distributed: students working multiple jobs and those with kids and other obligations are less likely to complete—or even begin—longer surveys.

Unfortunately, Congress has mandated that the Education Department survey students on a large number of subjects, potentially producing the longest climate survey ever administered. This kitchen-sink approach could yield interesting questions that very few people will bother to answer.

Confidentiality and Statistical (In)Significance

Even if the survey could somehow be kept short, the mandated questions about whether students reported incidents of sexual and interpersonal violence, whether investigations were undertaken and what the specific outcomes of those investigations were may yield results that compromise respondents’ confidentiality. The legislation seems to picture most institutions as medium to large liberal arts colleges with dozens or hundreds of reports and investigations annually. But the law covers all colleges and universities that accept federal funds, including thousands of community colleges, technical colleges, trade schools, proprietary institutions and others. Institutions differ: those that serve students taking classes part-time while working full-time (and, for some, parenting) simply have different reporting and investigation profiles than those with thousands or tens of thousands of students living in residence halls or local apartments and taking courses full-time. When an institution has but a handful of reports, the specificity of the questions required by the law will mean that respondents who experienced violence and harassment can be identified.

Behavioral economists warn of the law of small numbers: our tendency to overestimate how well small samples generalize to larger populations. Humans do this all the time: predicting an athlete’s performance on the next play from the prior three, treating one person’s behavior as representative of a larger group or calling an election based on a few conversations. Needless to say, small samples are generally not representative of larger populations, and we make these guesses at our peril. For the vast majority of institutions that do not conduct dozens or hundreds of investigations of harassment and assault per year, the experiences of a handful of survey respondents may not reflect the actual experiences of students. Beyond the risk of identifying the respondents, the variability in the data—both longitudinally, across years at a single institution, and cross-sectionally, compared with other colleges and universities—is simply not statistically meaningful at small numbers.

Not Another Top 10 List!

Frankly, the most troubling aspect of this legislation is the assumption that a survey long intended to measure campus climate and drive improvement at the granular institutional level can instead serve as yet one more point of comparison between similar and disparate institutions, a tool that consumers can use to select a college as “safe.” When the idea of such a survey was first floated years ago, my suggestion to Senate and House staff was to task the U.S. Census Bureau with surveying thousands of college students across the country to gain a deeper understanding of the experiences of narrower and narrower identity groups. The Census Bureau conducts such surveys of different parts of the population all the time and has the ability and knowledge to help policy makers understand the experiences of traditional and post-traditional-aged college students of different gender identities, races and beyond.

But that idea was cast aside in favor of what we will now have: a lengthy, one-size-fits-all, comparative survey distributed to millions of students at very different institutions. It isn’t clear that understanding the varied experiences of our college and university students (generally and specifically) is the intent of this new survey. Rather, following the path of the Clery Act crime-reporting requirements, the plain-language intent of the legislation is comparative—to allow consumers (our students) to pull up a database (such as the College Scorecard referenced in the legislation) and compare the climate survey statistics of various institutions when choosing a college.

This will inevitably lead to U.S. News & World Report–style ratings and listicles offered by the reputable and questionable alike. Clickbait “articles” about the “safest college in America” or the “most dangerous college in Oklahoma” are sure to follow, even though it will often be statistically irresponsible to make such comparisons. These comparators will hold enormous sway. Which is better—a college with few disclosures where students report low trust in the institution to respond appropriately to violence or harassment, or one with a high number of reports but higher levels of trust that reports are addressed appropriately? Unfortunately, some website algorithm or social media influencer will decide.

While the legislation tasks the Education Department with ensuring, “to the maximum extent practicable, that an adequate, random, and representative sample size of students” enrolled at any given institution completes the survey, the legislation contains no plan to limit the published data to cases where sample sizes could yield statistically significant findings. It is not scientifically appropriate (or useful) to compare institutions whose responses are too few to be statistically meaningful, and many thousands of institutions have few or no reports or investigations in a given year.

The effort to move to comparisons and shopping tools on the prevalence of sexual and interpersonal violence will detract from efforts to understand prevalence at an institutional level, identify ways to address it through meaningful changes and understand the experiences of narrower groups of students who share an identity. In pretending that we can compare institutional safety (or, worse, assign restaurant-style A, B and C grades to campuses based on what will generally be small, unrepresentative samples), we will do what we often, tragically, do: take a complex area in which experts work for decades to find meaningful public health–style approaches to seemingly intractable challenges and replace those approaches with buzzy, quick-hit analyses.

Climate surveys are important. Done well, they can provide insights into student and employee experiences that are more accurate than simply counting reports to law enforcement or the institution (since so few violations are reported). Surveys are at their best when they are short, tied to meaningful actions that an institution can take and aimed at determining experiences so we can make those communities safer. Surveys are at their weakest when they are long, complicated, not tied to specific actions, motivated by bureaucratic compliance obligations and used to foster meaningless comparisons that drive clicks but fail to drive change. 
