What if the educators making important decisions about schools and colleges are acting too much on their guts and not enough based on actual evidence?
To Howard Wainer, that's no hypothetical. He is convinced that, from elementary school through higher education, the best evidence is frequently ignored. His new book, Uneducated Guesses: Using Evidence to Uncover Misguided Education Policies, is about to appear from Princeton University Press. Parts of the book relate to current controversies over standardized testing and the Advanced Placement Program. (Other parts are more focused on K-12.)
Wainer is critical of the movement to make the SAT optional in college admissions, and argues that students who don't submit SAT scores perform worse in college than do those who submit the scores. Of the AP program, he is a fan in general, but he writes that -- despite calls from some to expand AP everywhere -- there are many school districts where students' chance of success is so low that the investment in AP doesn't make sense.
While Wainer has a long history in the testing field -- he was a research scientist at the Educational Testing Service for 21 years -- he said that his work was not financially supported by ETS or the College Board (although both groups provided access to data). Since leaving ETS, Wainer has been a research scientist at the National Board of Medical Examiners and an adjunct professor of statistics at the Wharton School of the University of Pennsylvania.
Asked for his overall views on testing, Wainer said that he is in favor of "competent testing," but not all testing. He said his view on statistics and education policy is reflected in Samuel Johnson's quote: "The modern method is to count. The ancient one was to guess."
The movement to drop SAT and ACT requirements for applicants has grown in recent years, with many colleges reporting that they lose nothing in academic quality or performance (as measured by graduation rates, for instance) by going test-optional. Further, many colleges that have dropped the requirement report that doing so increases the number of minority applicants.
Wainer challenges this logic by comparing students who do and do not submit SAT scores at colleges that do not require the test. In the case of Bowdoin College, which does not require testing but at which most applicants have taken the SAT, he compares the academic performance in college of those who did and did not submit scores. For four other colleges, he looks at the performance of the minority of students who submitted ACT scores instead of SAT scores. (The College Board and ACT agree on a "concordance table" that theoretically converts scores between the two tests, but applicants tend to submit whichever scores make them look best, and students who take both tests don't necessarily score as the table would predict.)
In the case of Bowdoin, all of the members of the Class of 1999 took the SAT -- even though only 72 percent submitted their scores. Using data obtained from the College Board, Wainer found that those who submitted scores outperformed those who didn't on the SAT itself, averaging 1323 versus 1201. So, he writes, those who withheld their scores made a rational decision, since submitting might have resulted in their rejection.
But then he tracked the academic performance of both groups of students in their first year (the first year being key, since the College Board says that the SAT predicts first-year academic success). He found that those who did not submit scores received grades in the first year that were 0.2 points lower than those of students who submitted scores. This suggests, he writes, that the SAT does predict academic performance in a meaningful way.
Then Wainer examined four colleges that let students submit SAT or ACT scores, and for which first-year grades were also available: Barnard and Colby Colleges, Carnegie Mellon University and the Georgia Institute of Technology. At all of these institutions, the students who submitted SAT scores had slightly better first-year grades than those who submitted ACT scores instead.
Wainer argues that these and other data suggest that colleges that seek to enroll those who will perform best in their first year are acting against the evidence when they make the SAT optional. "Making the SAT optional seems to guarantee that it will be the lower-scoring students who perform more poorly, on average, in their first-year college courses, even though the admissions office has found other evidence on which to offer them a spot," he writes.
Robert Schaeffer, public education director of the National Center for Fair and Open Testing, which has encouraged colleges to drop SAT requirements, said that these findings don't challenge the reality that scores of colleges have done in-depth studies in recent years and found that dropping the test requirement has no impact on retention or graduation rates. He noted that Wainer's career "has been spent inside the testing industry" and said that he "ignores evidence" from many other colleges.
Schaeffer said he doubted the findings would "have any significant impact on the continued growth of the test-optional movement."
Skepticism on AP Growth
Another chapter in the book focuses on the AP program, which, on the whole, Wainer supports. He sees the program challenging some students to work harder and learn more than they otherwise would in high school. "AP classes typically have a lot more meat to them," he said, and many high schools assign the best teachers to them. But the issue he explores -- failure rates -- runs counter to some of the AP hype about how quickly (and where) the program can grow.
Failure rates (scores below a 3 on the 1-5 scale) have been attracting increasing attention. Last year, USA Today and The Dallas Morning News ran long articles looking at increases in the rates. Between 2001 and 2009, the passing rate on AP exams fell from 60.8 to 56.5 percent, even as the share of public high school students taking an AP exam rose from 17 percent to 26 percent. As the critical articles noted, however, some high schools had very low pass rates, while others had very high rates.
At the time the articles came out, College Board officials criticized them, saying that the increase in participation made an increase in the failure rate almost inevitable, since a broader cross-section of the high school population (and a less elite subset) was taking the tests. Further, College Board leaders said at the time that AP programs encourage higher standards and so benefit a school even with low passing rates.
Wainer argues that while many students can benefit from AP (including plenty not currently participating), something is seriously wrong when schools report very high failure rates. He also finds a clear relationship between PSAT scores and subsequent success on many AP exams. So he argues that at high schools with relatively low PSAT scores, there are very few benefits to a major expansion of AP.
"A lot of schools use as their criterion of success the number of students who take AP courses. I think they should use the number who pass," he said. It's not that students in AP courses learn nothing if they fail the exam, he said. "It's a triage decision. These schools have limited resources," so putting those resources into AP, with a low success rate, doesn't make sense.
While the College Board has in fact boasted about rising participation rates, Trevor Packer, the head of the AP program, said in an interview that he agreed with Wainer's arguments. There has been "a rush to AP" that isn't always appropriate, Packer said.
Packer argued that there is plenty of room for growth in the program, but he said that careful analysis suggests that "for schools that are expanding access to AP among unprepared students, they are probably using resources that could be better used elsewhere." So he said he was not bothered by the book at all. "It feels very similar to the rhetoric I want the College Board to be using," he said.