Someone once said (OK, it might have been me) that when Harvard University itches, everyone else in higher education scratches. Is the same true of the Massachusetts Institute of Technology?
We may soon have an answer to that question. MIT’s announcement that next year it will reinstate its requirement that applicants submit test scores has already generated responses from those who believe it is time for admission tests to disappear altogether, as well as from apologists for the testing industry.
Is MIT’s decision the beginning of a trend, an anomaly or neither? We probably already have the answer, at least in one regard—MIT’s announcement has not opened the floodgates to other institutions reversing their test-optional policies. MIT’s decision was thoughtful and mission-appropriate, even if test skeptics may disagree with the decision, the rationale or MIT’s interpretation of the evidence. But MIT’s decision doesn’t translate to lots and lots of other colleges and universities. This is not the beginning of the end of test optional.
There are, of course, some global reasons why test-optional policies will not go away. One is the decision by the University of California and Cal State systems to no longer use test scores in their admission processes. As a result, colleges that recruit heavily in California will have a hard time reinstating test score requirements. But students outside California may also rebel against colleges that return to requiring test scores. The Ivies may be able to get away with it, but two years ago, when the pandemic accelerated the move to test-optional policies, an admissions dean friend postulated that colleges farther down the food chain might find that students simply refuse to apply to colleges that aren’t test optional.
Then there is the elephant in the room. Perhaps the only force more powerful than the desire to use test scores as an insurance policy in evaluating students’ academic preparation is the pressure on colleges to increase applications and lower admit rates. Can colleges afford to lose application numbers in a climate where selectivity is worshipped as a proxy for quality?
I’m not particularly interested in weighing in on the testing culture wars. I don’t think admission tests are evil, just flawed. But I also am bothered by the worship and misuse of test scores.
“Ethical College Admissions” is always on the lookout for bigger-picture issues, and there are a couple I want to examine.
One is the notion that test scores are an engine of diversity. I have seen that argument made by defenders of admissions tests multiple times, and I wonder if there is evidence for that or if it is a suburban legend.
One defense of test scores I saw argued that without them, things like legacy preferences become more powerful. Maybe, but that’s not evidence for the power of test scores to increase diversity.
The more common articulation is the diamond in the rough argument, which says that test scores identify students with ability who come from different backgrounds and would otherwise be overlooked. The diamond in the rough argument was one of the justifications for the move from College Board exams to the SAT nearly 100 years ago, at a time when the Ivies and other elite Northeastern colleges were looking to broaden their student bodies geographically and enroll more public school students. That was also at a time when the SAT was seen as an objective measure of intelligence.
Today we understand that test scores correlate strongly with family income and that what tests measure is far from clear. But the diamond in the rough argument persists.
So is there any evidence that the diamond in the rough is a real phenomenon? I reached out to Jon Boeckenstedt at Oregon State, who has been known to call out the diamond in the rough argument on Twitter and is also a guru when it comes to aggregating and analyzing data about college admission and higher education in general. Jon wasn’t aware of any data to support the diamond in the rough hypothesis, but he also suggested that it was at some level tautological, that diamonds in the rough with high test scores are the only diamonds that highly rejective colleges tend to take a chance on.
I also reached out to Stuart Schmill, MIT’s dean of admissions, to ask if he had data on how many MIT students qualify as diamonds in the rough. He was gracious enough to respond and admitted that he is a regular “Ethical College Admissions” reader. He stated that he couldn’t name a specific number, but he was convinced that there are students whose test scores help MIT to admit them. He also stated that a number of MIT alums have expressed their belief that they fell into that category.
His response made me wonder if all of us have the same definition of diamond in the rough. In an op-ed for The Washington Post, Bob Schaeffer, the executive director of FairTest: National Center for Fair and Open Testing, defined diamonds in the rough as “applicants with modest high school records but high SAT scores,” pointing out that those students are more often than not affluent Asian or white males. MIT is clearly not admitting those students, leading me to think it is using a different definition (the use of the term “diamond in the rough” was probably mine, not Stu Schmill’s). His answer suggests that MIT is using test scores not to identify students with modest academic records but high scores, but rather to confirm the preparation of students with superb records from academic environments without lots of Advanced Placement or high-level mathematics courses.
I’d love to know if anyone can point me to evidence that diamonds in the rough really exist.
There is one other question I want to consider—under what conditions should colleges use test scores? I suspect my command of the obvious will become apparent in my answers.
- Make sure test scores add predictive validity to admission decisions. The test-optional movement has shown us that it is possible to make decisions without test scores even as there is concern about grade inflation in the wake of COVID-19. There are numerous colleges for which test scores have been an insurance policy rather than adding value in making decisions, and one article I saw suggested that only about half of institutions that require test scores actually have validity study research supporting their use. Test scores at best provide a small increase in predictive value over the high school transcript alone, and only for predicting freshman-year grades. Shouldn’t we look for tools that predict success throughout college and even beyond?
- Don’t fall for the false precision that test scores imply. Test scores are too often treated as precise measures, which they are not. Do we measure the things we value or value the things we can measure? The standard error of measurement for each section of the SAT is more than 30 points, such that there is not a meaningful difference between a 600 and a 630. Test score cutoffs for institutional scholarship consideration or for National Merit Scholarship eligibility (even though the College Board is a National Merit partner) are inappropriate uses of testing.
- Consider test scores in context. Take into account that, even if valid, test scores may not predict equally well for different cohorts of students. The 2008 NACAC Testing Commission Report pointed to research suggesting that test scores overpredict first-year grades for some minority students and may underpredict first-year GPA for some female students. Then there is the impact of test preparation. If two students have identical test scores and one has had hours of expensive test preparation and the other hasn’t, those scores don’t mean the same thing. We should especially guard against test scores becoming barriers to access for students whose high school records otherwise suggest promise. A year ago I heard about (and wrote about) a public flagship that had denied students from less privileged backgrounds it wanted to admit because they had submitted test scores, not realizing that submitting scores was optional. In my opinion there is no excuse for an institution to deny applicants based on test scores alone.
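The false-precision point is worth spelling out with a bit of arithmetic. A minimal sketch, assuming the roughly 30-point standard error of measurement per section cited above: when comparing two students’ scores, the measurement errors compound, so the standard error of the *difference* between two scores is about 42 points, and a 30-point gap (600 versus 630) falls well inside the noise.

```python
import math

# Assumed standard error of measurement (SEM) for one SAT section,
# per the ~30-point figure cited in the text.
SEM = 30.0

# Comparing two independently measured scores, the errors add in
# quadrature: SE(difference) = sqrt(SEM^2 + SEM^2) ≈ 42.4 points.
se_of_difference = math.sqrt(SEM**2 + SEM**2)

def is_meaningful_gap(score_a: int, score_b: int) -> bool:
    """Treat a score gap as meaningful only if it exceeds one
    standard error of the difference between two measured scores."""
    return abs(score_a - score_b) > se_of_difference

print(round(se_of_difference, 1))   # 42.4
print(is_meaningful_gap(630, 600))  # False: 30 points is within the noise
print(is_meaningful_gap(700, 600))  # True: 100 points exceeds the SE
```

By this (deliberately simple) standard, a 600 and a 630 are statistically indistinguishable, which is exactly why hard score cutoffs treat noise as signal.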
I don’t have test scores to use as (debatable) predictive tools, but that won’t stop me from predicting that MIT’s decision will not end the testing culture wars.