Essay urges College Board to end rather than tinker with the SAT
The new president of the College Board, David Coleman, has written a letter to College Board members proposing to redesign the SAT. He wants to fix it so the test will "focus on the core knowledge and skills that … are most important to prepare students for the rigors of college." The shift may seem unremarkable, but it represents a revolutionary break from the original test's premise. The old SAT, introduced in 1926, was supposed to be an IQ test, measuring innate ability, not hard-earned, subject-specific knowledge of anything. For eugenicists, the IQ argument was a winner; for private colleges, it gave them bragging rights for selecting students with a nationally normed device that coincidentally had a powerful linear relation with family income. Administrative complacency, faculty ignorance, and business office economics have kept the test in play. Why fiddle with a winner?
Between 1926 and today, the test was "redesigned" only once, in 2005. When the University of California threatened to dump the old SAT because it was statistically weak and socially biased, the College Board kept the university on board by promising a better test – one that would be predictively more powerful and free of the social disparities of the old test.
Instead, the 2005 SAT has been a failure on all counts. The new SAT dropped the dripping-with-social-bias verbal analogies and added an easily coached writing section. It took more time, was more expensive, predicted even less well than the old one, and managed to magnify social disparities. Racial, gender, and socioeconomic status test score gaps widened, instead of narrowing. Nonetheless, the College Board proclaimed the new SAT a success; everything was supposedly rocket-science perfect, until Coleman’s letter last week.
But why does the SAT need fixing if it is already, as Coleman says, “the best standardized measure of college and career readiness currently available”? The administrators of the ACT would dissent, and slightly more of America’s high school seniors now agree with them. Clearly, part of the reason for the remake is a decline in market share. But, paradoxically, another source of pressure on the test comes from new developments inside its true archrival, America’s high schools.
The institution that has done the most to uphold academic standards for generations of America’s college-going youth has not been the College Board; it has been the American high school. Coleman’s claim that the SAT is "the best standardized measure" is a misleading half-truth; the best statistical predictor of college performance is, and always has been, high school grades in college preparatory courses. It is a myth that America’s high schools (but, somehow, not our colleges) are so unreliable that their grades are inflated, meaningless measures of academic achievement.
Even the College Board stipulates in its technical literature that high school grade-point average is the variable most strongly correlated with first-year grades and with cumulative grades. And high school G.P.A. is the best predictor of who will finish a college degree. High school G.P.A. alone performs better than test scores alone, whether one uses the SAT or the ACT; when combined with high school G.P.A., test scores increase our statistical power by one percentage point, as found at DePaul University, using the ACT, or at the University of Georgia, using the SAT. For me, a variable that raises a statistical model’s adjusted R-squared by one point contributes diddly to our predictive powers. And what it contributes that isn’t diddly is the transmission of social inequality. There is no correlation between high school G.P.A. and family income; the same cannot be said for the SAT/ACT.
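The statistical point here is easy to see in a toy regression. Below is a minimal sketch using entirely made-up, simulated data (not the DePaul or Georgia figures): when a second predictor largely tracks the first, adding it to the model barely moves R-squared. All variable names and coefficients are illustrative assumptions.

```python
import numpy as np

# Hypothetical simulation (not real admissions data): college GPA is driven
# mostly by high school GPA, and the "test score" mostly tracks HS GPA.
rng = np.random.default_rng(0)
n = 5000
hs_gpa = rng.normal(3.0, 0.5, n)                       # simulated HS GPA
test = 0.8 * hs_gpa + rng.normal(0, 0.5, n)            # test correlated with HS GPA
college_gpa = 0.7 * hs_gpa + 0.05 * test + rng.normal(0, 0.4, n)

def r_squared(X, y):
    """Fraction of variance in y explained by an OLS fit on the columns of X."""
    X = np.column_stack([np.ones(len(y)), X])          # add intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_gpa = r_squared(hs_gpa, college_gpa)
r2_both = r_squared(np.column_stack([hs_gpa, test]), college_gpa)
print(f"HS GPA alone:        R^2 = {r2_gpa:.3f}")
print(f"HS GPA + test score: R^2 = {r2_both:.3f}")     # only marginally higher
```

In this toy setup the two-predictor model explains only a fraction of a percentage point more variance than high school G.P.A. alone, which is the shape of the argument being made about the real data.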
America’s high schools, in reaction to No Child Left Behind and the Obama Administration’s push for transparency and accountability, have given birth to a "common core" standards movement in math and English that has been adopted by 45 states and the District of Columbia. Coleman is intimately familiar with the common core, as one of its architects, and my hat is off to him for that. But one of the consequences of getting a more nationally uniform curriculum is that high school grades will end up predicting even more powerfully than before how well one will do in college, and aptitude tests will be left further behind. America’s schools are where our youths learn the "knowledge and skills" needed for college level work; test-prep for a Saturday morning’s experience filling in the blanks cannot ever do that job. As America’s schools become more uniform and transparent, the fears of unreliability that the test industry preys upon will dissipate.
Another reason the SAT is on the drawing board again is the success of the test-optional movement in higher education. Pioneered by Bates College, and championed by many others, including my own Wake Forest University, more than one-third of America’s colleges do not require the SAT or ACT of an applicant. It is a myth that we need the SAT/ACT to select youths who are prepared to make the most of an opportunity to get a college degree — just as it is a myth that we have perfected a statistical science for doing college admissions. According to the College Board, our statistical models capture about 22 percent of the variance in college grades; the University of Georgia, where the SAT contributed one point, managed to get a model that explained 31 percent of the differences in undergraduates’ first year grades.
Most of what matters to undergraduate performance, 70 to 80 percent of what’s going on, isn’t captured by our best statistical modeling. Admissions remains more art than science, and colleges that look at the whole applicant in search of the best fit between individual and campus do a valuable service. Test-optional colleges have to look beyond the numbers. The ranks of test-optional colleges have grown in the last four years. A tipping point will come when everyone rushes to jump on board, and the admission by the College Board that its 2005 version of the test was a failure brings that day closer.