The College Board has claimed for decades that the SAT's strength is that it predicts the grades students will earn in the first year of college. But what if, in many cases, it doesn't?
A study released Monday suggests that hundreds of thousands of students a year may have SAT scores that predict they will receive either better or worse grades than they are actually likely to receive. While the SAT may predict accurately for many others, the scholars who have produced the new study say it raises questions about the fairness and reliability of the SAT (including the new version about to be unveiled), which remains a key part of the admissions process at many colleges and universities.
"This is a pattern. This is many, many students and isn't isolated," said Herman Aguinis, the lead author of the study and the John F. Mee Chair of Management and a professor of organizational behavior and human resources at the Kelley School of Business at Indiana University at Bloomington. The research (abstract available here) is forthcoming from the Journal of Educational Psychology.
The College Board released a statement late Monday that did not directly dispute the findings, but argued that they did not undercut the approach taken by the organization over the years.
The new research is the result of years of disputes between College Board researchers on one side and, on the other, Aguinis and his co-authors Steven A. Culpepper of the University of Illinois at Urbana-Champaign and Charles A. Pierce of the University of Memphis. The dispute has worked in the authors' favor: for their most recent paper, they were able to use data the College Board had released to counter an earlier paper of theirs.
They first voiced their theory (based on models, not actual student data) in 2010, and that prompted a response criticizing their original paper. The response paper found -- based on hundreds of thousands of SAT scores and subsequent freshman year grades -- that SAT scores do predict those grades, as the College Board has claimed. The data came from records on 475,000 students who enrolled at 176 colleges, institutions that were distributed across the country and across various levels of competitiveness. The abstract for that paper may be found here.
Aguinis and his team took those data, however, and approached them in a different way. Instead of asking whether SAT scores have predictive validity across all students pooled together, they asked whether the scores predicted accurately at individual colleges -- in part because there is no uniform standard for grading in the United States. What Aguinis found was that the predictions didn't hold at a significant minority of colleges.
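In practical terms, the per-college approach amounts to asking whether a single regression line from SAT score to freshman GPA fits every group of students at a given institution, or whether the intercepts and slopes differ by group. The sketch below, in Python, illustrates that kind of check; the column names (college, sat, gpa, group) and the simple per-college OLS moderation test are assumptions for illustration, not the authors' actual variables or methods.

```python
# A minimal sketch of a per-college differential-prediction check.
# Column names (college, sat, gpa, group) and the simple OLS moderation
# test are illustrative assumptions, not the study's actual analysis.
import pandas as pd
import statsmodels.formula.api as smf

def flag_differential_prediction(students: pd.DataFrame, alpha: float = 0.05) -> pd.DataFrame:
    """For each college, regress freshman GPA on SAT score, group membership,
    and their interaction, then flag colleges where the group terms matter."""
    rows = []
    for college, sub in students.groupby("college"):
        if sub["group"].nunique() < 2:
            continue  # need at least two groups to compare prediction lines
        # gpa ~ sat * C(group) expands to an intercept, a group offset,
        # an SAT slope, and a group-specific SAT slope (the interaction).
        fit = smf.ols("gpa ~ sat * C(group)", data=sub).fit()
        group_terms = [name for name in fit.pvalues.index if "group" in name]
        differs = bool((fit.pvalues[group_terms] < alpha).any())
        rows.append({"college": college, "students": len(sub), "prediction_differs": differs})
    return pd.DataFrame(rows)

# Usage with a hypothetical data frame holding one row per student:
# summary = flag_differential_prediction(students)
# summary["prediction_differs"].mean()  # share of colleges flagged
```

A fuller analysis would also account for sample size, range restriction and multiple comparisons; the sketch only conveys the college-by-college logic rather than the pooled, all-students question.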
Because he was working from College Board data that didn't identify the colleges, he can't say whether the SAT predicts freshman GPAs at any given college -- even though he said in an interview that he's already receiving requests from colleges to find out if the SAT works at their institutions. Likewise, he said, the College Board data don't allow him to generalize whether the SAT is more or less effective at colleges of various levels of competitiveness.
Some of the findings:
- Looking at scores by gender on the mathematics portion of the SAT, he found that at 16 percent of colleges the predictions were inaccurate for either male or female students. He said there wasn't a clear pattern -- sometimes female applicants are hurt by predictions that they will do worse than they actually do, and other times it is the opposite. That would be about 80,000 people in the sample.
- Similarly, when comparing scores for white and Latino students on the mathematics section, he found a lack of predictive accuracy at 19 percent of colleges, affecting about 65,000 people. (The number of students affected by the findings varies with the size of the colleges where the researchers identified problems.)
- And when the researchers looked at the critical-reading section of the SAT and compared black and white applicants' scores, they found that the predictions didn't work at about 20 percent of colleges, again affecting about 65,000 people.
The data did not allow analysis of other groups, such as Asian or Native American students. But Aguinis said the common pattern was that different groups benefited at different colleges and on different parts of the test. The key point, he said, is that at many colleges there is not a single pattern of SAT scores predicting a given grade in the first year. The researchers describe this as "differential prediction," and they argue it raises questions about the reliability of the SAT for all students at all colleges.
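In concrete terms, differential prediction means that the single line a college might fit for all students together can over-predict for one group and under-predict for another, even at the same SAT score. The numbers below are invented purely to illustrate that idea; they are not estimates from the study.

```python
# Invented coefficients, purely to illustrate what differential prediction
# means; these are not estimates from the study.
sat_score = 600

pooled  = 0.90 + 0.0030 * sat_score  # line fit to all students together
group_a = 0.80 + 0.0034 * sat_score  # line that actually fits group A
group_b = 1.00 + 0.0026 * sat_score  # line that actually fits group B

print(f"pooled prediction:  {pooled:.2f}")   # 2.70
print(f"group A's own line: {group_a:.2f}")  # 2.84 -> pooled line under-predicts
print(f"group B's own line: {group_b:.2f}")  # 2.56 -> pooled line over-predicts
```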
With the College Board about to unveil a new version of the SAT, Aguinis said the same problem is likely, because the College Board maintains that given scores predict the same thing for all students at all colleges.
Aguinis said he emailed the study to David Coleman, the College Board's president, on Monday morning and was hoping to be invited to the team that will evaluate the new SAT.
While the SAT may predict freshman grades well at most colleges, he said, the share of colleges where it does not calls for the release of much more information, so those institutions can consider whether they want to rely on SAT scores.
The College Board on Monday released a statement from Jack Buckley, its senior vice president for research.
The new study, Buckley said, "reiterates that high school grades have large predictive variability across college campuses, further underscoring the College Board’s long-recommended position that standardized test scores and high school grades should be used in conjunction with the full context of a student’s performance, opportunities and extracurricular accomplishments. The College Board recognizes that student performance in college -- and the extent to which SAT scores, high school grades and other student characteristics can predict that performance -- reflects each unique campus environment and admission process as well as a host of individual student choices made after enrollment. As a result, the College Board provides colleges with a free service called the Admitted Class Evaluation Service (ACES) that identifies the best combination of measures that will predict a student’s performance at a specific institution."
The statement continued: "Institution-specific validity studies through the College Board ACES service are used by institutions who seek to better understand what factors and in what combinations are most predictive of student success on their own campuses. The College Board is working closely with admissions professionals to improve how admissions decisions are made, including how contextual information about students can support colleges’ holistic review of their applicants. We are committed to working with higher education to improve the information available to all decision makers, and to insure that the college admission process is as fair, unbiased and transparent as possible."