The last year hasn't been a good one for the standardized testing industry, what with SAT scoring errors and more colleges dropping the test as a requirement. But on Thursday, the journal Science published a study backing the reliability of standardized testing in graduate and professional school admissions.

The study, a "meta-analysis" examining thousands of data sets on a range of tests, found that test scores predict graduate and professional school success better than college grades do, since grades may be influenced by grade inflation or the relative competitiveness of different student bodies. The study concluded that the most reliable way to admit students to graduate and professional school is to use a combination of test scores and college grades. In addition, the study found that the tests predict just as well for minority students as for white students.

Groups like the College Board and the Educational Testing Service regularly produce research backing their view that standardized tests can be a valuable part of the admissions process, but testing critics typically label those studies self-serving. While the authors of the new study have ties to testing entities, no testing company paid for the study, and its publication in the prestigious journal Science was seen as a coup for the testing industry, some of whose officials have been talking excitedly about the research for months.

The Science study examined data about all of the major admissions tests for graduate and professional school, including the Graduate Record Examination, which is used for Ph.D. programs, and the tests used for admission to medical, law and business schools.

Nathan Kuncel, an assistant professor of psychology at the University of Minnesota and one of the study's authors, said in an interview Thursday that the project was the "largest and most comprehensive synthesis of graduate school admissions."

The most striking thing about the study, he said, was the consistency of the findings across different graduate admissions tests. "Across all the major standardized tests, they all predict a variety of important outcomes -- grades, licensure passage, obtaining the degree," Kuncel said.

One of the major concerns about standardized admissions tests is their impact on minority enrollments. Black and Latino students, on average, receive lower scores than do white and Asian students. Many educators have questioned the predictive value of the tests for some minority students and urged colleges and graduate programs to place less emphasis on them.

Kuncel said that the review for his study found no evidence of bias in test questions, or any difference in predictive value for different racial or ethnic groups. The study says that while there is evidence that some tests underpredict the performance of women in college, there is no similar evidence for graduate and professional school.

Those who want to know why black and Latino students don't score as well need to stop looking at the tests, Kuncel said. "These tests are acting as a thermometer for other societal issues," he said.

Asked whether his research would discourage colleges from questioning standardized tests, Kuncel said that graduate schools face two separate questions: one concerns what the tests measure, and the other concerns what kind of class a school wants to produce.

"If a law school values students who will pass the Bar at a high rate, the LSAT does a great job of that," Kuncel said. But a law school could make a perfectly legitimate issue to focus on other issues, he added.

While the analysis published Thursday focused on tests for graduate school, Kuncel said he expected similar findings would come from looking at the tests used in undergraduate admissions. "All of these tests are very similar in structure," he said.

Bob Schaeffer, public education director of the National Center for Fair & Open Testing, disputed the findings, calling the study "a meta-analysis of pro-testing meta-analyses" that ignored "considerable research that reaches opposite conclusions." He compared the study to research sponsored by the tobacco industry purporting to show that cigarettes do not cause cancer.

He also noted the ties of the authors to the testing industry.

Kuncel said that the testing industry did not pay for the latest research, and that only one of the earlier analyses included in the meta-analysis received support from a testing unit. His co-author is Sarah A. Hezlett of the Personnel Decisions Research Institutes, which works with a variety of testing entities. Kuncel said that while he serves on advisory boards for two testing organizations, that work amounts to evaluating research proposals at meetings in New Jersey and Washington. "I have not had my pockets lined on the basis of this research," he said.
