Last spring, a number of elite colleges and universities announced a return to requiring standardized test scores as part of admissions applications. At the beginning of the pandemic, universities like Brown, Harvard and Dartmouth suspended their standardized testing requirements, making entrance exams optional for applicants. Now students who plan to apply to these highly selective institutions will prepare to take the SAT (or ACT) this fall.
Even though the overwhelming majority (about 80 percent) of the country’s colleges and universities will remain test optional for the Class of 2029, our national conversation about the role of standardized admissions tests in higher education focuses almost exclusively on elite colleges, whose enrollments represent only 1 percent of students. Such a myopic perspective will do little to increase access to higher education. Real change would require us to reckon with the history of standardized tests and how they have persistently disadvantaged large numbers of students, particularly students of color and students with disabilities.
To support their decisions to reinstate testing requirements, elite colleges have recently claimed that standardized test scores are important factors in helping them evaluate a candidate’s potential for success. But other studies consistently show that high school grades are better predictors of college success than standardized tests. Why the difference in the research? It depends on which measures of success we look at and which institutions we focus on. In one study that focuses on elite colleges, measures of success include “attending an elite graduate school” and “working at a prestigious firm.” But if we broaden our focus beyond elite institutions, studies show that high school grades can be better predictors of college GPAs and four-year degree completion.
Those advocating for a return to requiring tests cite research from Dartmouth concluding that test-optional policies were preventing some low-income students from submitting test scores even when doing so would have increased their chances of admission. Requiring standardized test scores, they argue, allows elite colleges to find the diamond in the rough: the student who has enormous potential despite an underprivileged background. Of course, those students deserve a shot, but this perspective ignores the many students who don’t do well on standardized tests, who are disproportionately likely to be poor, to be nonwhite or to have a learning or intellectual disability.
Why so much attention on exceptional outcomes and Ivy League institutions? Americans love the rags-to-riches narrative, in which potential is discovered and a life is changed by the benevolence of the privileged class. It reinforces the myth of meritocracy and assures us that everyone gets what they deserve. But the narrative covers up much deeper problems in a system in which opportunity is based on a test score. Finding a few more students to join the 1 percent who attend Ivy League universities isn’t going to change a system built on ableism and inequity.
Standardized tests are touted as great equalizers, but historically our reliance on them has only led to further segregation. The SAT and its role as a gatekeeper to higher education originated with the work of psychologist Robert Yerkes. As the United States prepared to enter World War I in 1917, Yerkes led a team of psychologists who transformed early versions of the IQ test into a series of multiple-choice questions to identify which recruits should be officers and which should be soldiers. By the end of the war, more than 1.7 million men had taken one of Yerkes’s tests. The results of Yerkes’s exam were widely publicized, setting off debates about the country’s national intelligence. But more resonant than the details or accuracy of the results was the fact that Americans had just experienced their first form of high-stakes standardized testing.
Psychologists like Yerkes convinced the country that we were unable to change what IQ tests tell us about ourselves and our intelligence. Throughout the 20th century, the IQ test was used to make horrific determinations, such as the institutionalization of people with intellectual disabilities, forced sterilization and the exclusion of children from public schools. But the impact of the IQ test isn’t limited to these more extreme outcomes. Its logic has shaped our education system in subtle yet pernicious ways.
After working with Yerkes during World War I, Carl Brigham became secretary of the College Entrance Examination Board, where he designed the Scholastic Aptitude Test, or SAT, first administered to college applicants in 1926. Five of its nine subtests were adapted from Yerkes’s Army exam.
As Nicholas Lemann describes in his history of the SAT, The Big Test (Macmillan, 2000), Brigham was a staunch eugenicist who believed that America’s intelligence and genetic purity were threatened by immigrants and nonwhite Americans. He later disavowed these beliefs, but it was too late to change the course of the country’s elite universities. Intelligence was already defined as a strictly genetic trait concentrated primarily in white males. In the late 1920s, the Army began using the SAT to assess applicants to West Point. Yale, Princeton and Harvard followed.
Soon a fully fledged testing industry developed around the SAT. Arthur Otis, Yerkes’s assistant during World War I, became an editor at the World Book Company, one of the first publishers to recognize the market potential of test-prep material. Other publishers began offering test takers booklets of multiple-choice questions that trained them to make strategic guesses on the exam. Parents and students alike understood that these tests were more than assessments of a person’s natural mental capacity. They were opportunities to get ahead, and those with the most money and privilege paid to add high test scores to their list of social advantages.
In 1957, the psychologist David Wechsler, author of the WISC and WAIS intelligence tests, predicted that by 1960, at least one out of every two persons in the United States between the ages of 5 and 50 would have taken an intelligence test. In this estimate, Wechsler included the several hundred thousand children given IQ tests as part of the adoption process and for admission to private schools. He also included high school students taking college admissions tests like the SAT that he claimed “differ only in part from standard group intelligence tests.” Psychologists point out the strong correlation between IQ scores and performance on other standardized tests, such as the SAT. But this correlation is no coincidence. The IQ test is a good predictor of success because modern life, including our education system, has been shaped to value qualities measured on an IQ test.
One reason high-stakes testing caught on was the convenience of the multiple-choice format. Educational placement, grade promotion and professional outcomes could all be predicted and managed with one straightforward exam. I often notice that testing and learning are one and the same to my students. To them, the point of education is to do well on a test. It reminds me of an essay by Edwin G. Boring in The New Republic from 1923 that declared, “Intelligence as a measurable capacity must at the start be defined as the capacity to do well in an intelligence test.” One hundred years later, academic success is defined as the capacity to do well on a standardized test. The test does not just predict a person’s academic ability. It defines academic ability. And those who have the most time and resources to devote to the test are the ones who will succeed.
Given the origins of the SAT, it is no surprise that children from the country’s richest families continue to be overrepresented at elite universities. About one-third of children from the very richest 1 percent of families scored a 1300 or higher on the SAT between 2011 and 2015. Students whose families were in the top quintile of income earners were seven times as likely to have such a high score as those from the bottom quintile. Just one in five children from the nation’s poorest families took the test at all.
The SAT and other high-stakes standardized tests have a strong hold on our modern education system. Their persistence is all the more astounding when we consider how much higher education has changed. When the SAT emerged in the 1920s, a little less than 6 percent of people aged 25 to 29 had obtained a bachelor’s degree or higher. In 2022, by contrast, 62 percent of students enrolled in college immediately after finishing high school. Who gets to go to college, and what college is for, have changed dramatically in the 100 years since Ivy League hopefuls first began taking the SAT. Does it really make sense to rely on the same tools of sorting and selection to shape the landscape of higher education?
Even though test scores are optional at most colleges, students today are keenly aware of the importance placed on SAT scores. “Marinating in admissions anxiety is just a part of adolescence,” an opinion editor for The New York Times recently claimed. No, it’s not. It doesn’t have to be this way. Part of the anxiety is the pressure to do well on the test, but that is only the tip of the iceberg. There’s anxiety about social hierarchies and fitting in, and worry about how someone will judge your value and your potential by your performance on a test. There’s anxiety about being evaluated with a test on which a learning disability puts you at a disadvantage, or about being tested on material you haven’t had the opportunity to learn. And there is anxiety about whom you will disappoint if you don’t attend an elite college. That is the kind of anxiety our society has chosen to instill in young adults. We could choose otherwise by giving students more control over what they submit as part of their college applications.
The connection between college entrance exams and intelligence tests should lead us to examine how we culturally and socially define intelligence in ways that provide opportunities only for certain kinds of learners. Real, meaningful change in our education system would mean rethinking how we value and measure intelligence.