Should We Ban College Admission Tests?

Doing so would actually hurt minority and low-income students, write Daniel H. Robinson, Robert A. Bligh and Howard Wainer.

June 7, 2021

During the pandemic, testing large numbers of high school graduates safely was a task of insuperable difficulty. Having students take the tests remotely was not a reliable option, because monitoring for cheating was not possible. As a result, about 600 colleges decided to suspend the requirement that applicants submit admissions test scores. Many are now considering making the suspension permanent.

Some immediate effects of not requiring admissions tests appear, at first glance, to be positive. Selective colleges -- those that admit fewer than 50 percent of applicants -- are currently experiencing substantial increases in applications, especially from first-generation, minority and low-income applicants. As of Feb. 15, only 44 percent of applicants had submitted SAT or ACT scores, compared with 77 percent the year before. This increase in applications may help reverse the 4.5 percent decline in enrollments in the 2020-21 academic year compared with the year before. Given such encouraging results, why do we need admissions tests?

It would seem that colleges that accept all of their applicants, like most community colleges and for-profits such as the University of Phoenix, would have no use for admissions testing. Not so. The scores can still serve a critically important role in helping both the student and the college decide on a curriculum that would best serve the student’s education. At the other extreme, colleges like Harvard, Princeton and Stanford, which accept fewer than 5 percent of applicants, use admissions test scores very differently. Such colleges are searching for students whose academic ability will allow them to flourish in a rigorous environment. Such applicants, although often obvious within their individual high schools, may be harder for highly selective colleges to find. Experience has shown (e.g., the National Merit Scholarship Program) that a well-designed test is an efficient, practical and valid tool for that initial screening and searching.

Imagine a 17-year-old minority girl whose family lives in poverty. She blows the top off the SAT, placing her among the top scorers in the country. Unfortunately, her high school grades do not set her apart, both because of the homogeneity forced on grading policies and because the standards of scholarship at her high school and others like it are not held in high esteem. How would a ban on admissions testing affect her ability to get into a highly selective school?

Jon Boeckenstedt, vice provost of Oregon State University, recently dismissed admissions test scores as pseudo-academic factors that add “almost nothing to an admission officer’s ability to predict an individual student’s academic performance in college.” What are the arguments used to support a ban on admissions tests?

The most common argument, echoing Boeckenstedt, is that the tests do not predict college performance. This is incorrect. The correlation between SAT score and first-year college GPA has been estimated at 0.55, about the same as the correlation for high school grade point average. There is an even stronger positive relationship between a college’s average admissions test score and its six-year graduation rate. Using data from over 1,000 colleges in the National Center for Education Statistics’ Integrated Postsecondary Education Data System, the correlations between graduation rate and ACT composite score, SAT verbal and SAT math are each about 0.8. Thus, the predictive power of an admissions test is impressive compared with that of other proposed alternatives (e.g., applications, recommendation letters and extracurricular activities).

A second argument is that admissions tests are merely a proxy for student wealth (i.e., that they measure the same thing). Indeed, the correlation between a student’s SAT score and socioeconomic status (SES) is 0.42. But do SAT and SES account for basically the same variance in first-year college GPA? Not really. After removing the influence of SES from SAT scores, the overall predictive validity drops by only about 0.03 -- from 0.55 to roughly 0.52. Thus, most of what admissions tests measure is something other than student wealth.

A third argument, the one most prominently portrayed in the media, is that admissions tests are biased. This conclusion is drawn because the tests reveal differences among racial groups, and from that fact critics infer that they prevent underrepresented minorities from getting a college degree. Although the tests do show, on average, racial differences in scores, those differences did not arise suddenly in 12th grade. They are observed far earlier -- even before children enter kindergarten. Achievement tests administered throughout the elementary and high school grades as part of “high-stakes” testing consistently reveal racial differences. Revealing racial differences does not, by itself, make the SAT or ACT biased. For true test bias to occur, one must show that the tests have different predictive validities for different races. The SAT and ACT actually predict college GPAs that are higher than the actual GPAs of Black and Hispanic students, while underpredicting for white and Asian students. There have been numerous attempts to develop cognitive tests that do not show group differences, but none has yet been successful. The consensus of expert opinion is that group differences in test scores will diminish in parallel with the diminution of societal inequalities in the allocation of resources. As for high school grades, they, too, reveal racial differences. There are well-known sex differences in height, but we know of no one who has suggested banning yardsticks.

Compared with high school GPA, admissions tests are fairer. Does anyone doubt that there exist grade givers whose process for assigning grades is, to characterize it generously, idiosyncratic -- allocating grades on some basis other than the student’s merit? Yet there is no way to detect and correct for such practices. In the past there were test items that performed differently in different subpopulations of students, and to a lesser extent there still are. But such items can be detected and removed, so that the test is constantly being improved and made fairer. Testing companies expend considerable resources on fairness reviews by subject matter experts and on statistical analyses looking for differential performance of any item across subgroups.

Historically, fairness was one of the most critical reasons that collegiate admissions testing was implemented in the first place. Henry Chauncey, before he founded the Educational Testing Service, was an admissions officer at Harvard University and was an avid fan of tests, one of which had then been in use at Columbia University for some time. Chauncey proposed its use to Harvard president Abbott Lawrence Lowell, who rejected it because it would not exclude enough Jews. He preferred quotas instead.

College admissions officers are similar to baseball scouts -- both want to identify people who will be successful if admitted or drafted. Admissions officers typically use every metric at their disposal and then make a decision. Baseball scouts do the same, whether they are mainly analytics scouts, as famously depicted in the movie Moneyball; gut-instinct, old-school scouts who rely on in-person observations, as in the movie Trouble With the Curve; or modern scouts who use both. Now imagine telling baseball scouts that you were banning the use of analytics. What would the removal of a potentially useful metric do to the predictive ability of the scouts? If it would be detrimental, then why would we consider doing something similar with admissions tests?

If we get rid of admissions tests, decisions about whom to admit will be based on high school grades (which are also marked by racial differences), extracurricular activities and recommendation letters. Low-income students traditionally have less access to extracurricular activities, and they may not receive recommendation letters as favorable as those of their wealthier counterparts. Think back to the minority student with the near-perfect SAT score. Will her high school grades, ordinary extracurricular record and standard recommendation letters impress Harvard more than that score?

In a June 15, 2020, tweet, President Trump said COVID-19 testing “makes us look bad.” At his campaign rally in Tulsa, Okla., five days later, he said he had asked his “people” to “slow the testing down, please.” Admissions tests do indeed make us look bad in terms of racial differences. But so does high school GPA, which is also more susceptible to bias. Slowing down or getting rid of admissions testing won’t fix anything, any more than not standing on a scale will help you lose weight. Reduced testing, whether for COVID-19 or for academic skills, is likely to make things worse. Ignorance is not bliss.


Daniel H. Robinson is the K-16 Mind, Brain and Education Endowed Chair at the University of Texas at Arlington. Robert A. Bligh is a retired lawyer residing in San Antonio. Howard Wainer is a statistician and author living in Pennington, N.J.
