WASHINGTON -- If a student's grades in high school best predict how he or she will fare in college, what should admissions officers do when -- just like the children of Lake Wobegon -- all of their applicants are above average?
That tension was made clear at a session on admissions during the College Board's annual forum here Wednesday. Brian Hazlett, Towson University's director of admissions, told attendees that his institution had learned through experience that a student's grade-point average in high school correlated highly with success in college.
Those results have been mirrored widely. High school grade-point average is "consistently the strongest predictor of four-year college outcomes," Saul Geiser, of the Center for Studies in Higher Education at the University of California, Berkeley, wrote in 2007 after analyzing the trajectories of nearly 80,000 students.
Sitting next to Hazlett on Wednesday's panel was Steve Dever, associate director of admissions at Saint Joseph's University in Pennsylvania. Dever described how easy it is to get dazed by applicants' luminous grade-point averages. He recounted how an unnamed high school recently submitted 21 applications to Saint Joseph's; all had a grade-point average of 4.0, with class rankings that ranged from first to 112th, and SAT scores that ran the gamut.
While classroom performance is Saint Joseph's most important metric, Dever said his university weighed several factors to evaluate an applicant, chiefly grade-point average and the rigor of courses -- though he acknowledged that one high school's honors course may differ in rigor from another's. Others on the panel said they weighed class rank, though at least one-third of audience members, judging by a show of hands, said their schools no longer ranked students from top to bottom.
How to reconcile this tension when standards, culture and resources in high schools can be so disparate?
"Admissions is about precision guesswork," said Jeff Rivell, deputy director of admissions at the University of Delaware. Hazlett agreed, describing college admissions as 99 percent science and 1 percent guesswork -- a process that inherently cannot predict outcomes with certainty.
The panelists urged those in the audience, many of them guidance counselors, to clearly spell out the grading scale their schools use, how the scores are distributed across the student body, and what its demographics are. "The more information we have, the better," said Dever. "We have to connect the dots somehow."
Opinion on one of those dots, the predictive value of the SAT and other standardized tests, was split -- an interesting fact at a session sponsored by the organization that runs the SAT. Then again, this lukewarm view of the test was not uncommon at the conference; another session Wednesday was titled "Looking Beyond Scores."
Hazlett sees the SAT as simply confirming what a student's high school grade-point average already suggested. Research ranging from Geiser's 2007 study to the College Board's own work has found that standardized test scores coupled with high school grade-point average predict later college success more accurately than either figure alone. Even so, many institutions -- more than 800 at last count -- have concluded that test scores are not needed to predict college success and have dropped the SAT as a requirement altogether.
Rivell sees the test as necessary. "Is the SAT valid and useful?" Rivell asked rhetorically. "We need some kind of common yardstick."
While the admissions officers saw their sifting of data and admissions decisions as ultimately successful, they were challenged by a member of the audience, Arthur L. Williams, principal of Huron High School in Ann Arbor, Mich. "You're letting students in, thinking they'll complete, but they're not," said Williams, referring to the fact that only about half of students nationwide who enter four-year universities finish within six years.
"When you look at the college completion rate," he asked, "how accurate are you?"