Ethical College Admissions: What We Know

Jim Jump reviews the data on test-optional policies and considers what he would still like to know.

January 7, 2019
 

Those of us in the college admissions world are always looking for events or decisions that augur a change in the landscape or even a paradigm shift. It has been said that when Harvard University itches, everyone scratches, and the current court case involving a challenge to Harvard’s treatment of Asian American applicants certainly has a lot of colleges and universities ready to scratch various parts of the selective, holistic admission process.

Sometimes events that seem world-changing turn out not to be. In a three-day span back in 2006, Harvard, Princeton University and the University of Virginia all announced that they would end their use of early decision, and it was widely assumed that was the beginning of the end for early decision as an idea and a practice. That hasn’t happened, because too many other institutions find early decision an important part of their enrollment management strategy.

It is too early to know the significance of the announcement by the University of Chicago last summer that it was joining the ranks of America’s TOP colleges. That is not a reference to “America’s Top Colleges,” the title for the college rankings produced by Forbes. TOP is an acronym for test-optional policies. Last June Chicago became the most prominent national university to join the test-optional movement. But is that a trend, an anomaly or neither?

The year 2019 marks the 50th anniversary of Bowdoin College’s introduction of test-optional admission. In the succeeding half century, the number of colleges and universities giving at least some applicants the option to submit test scores has grown by leaps and bounds. Rarely does a month go by without another college joining the test-optional movement, and the list of test-optional colleges maintained by the National Center for Fair and Open Testing, better known as FairTest, includes more than 1,000 institutions in 49 states (the outlier is Wyoming).

Fifty years after its introduction, what do we know about test-optional admission? How does it impact colleges in admission, financial aid and graduation rates? How does it impact college choice by students? How might it impact the testing industry? Test-optional policies are clearly no longer a fad, but are they the future?

Those questions are easier to pose than they are to answer, but a report published last spring provided data and analysis that contribute to our understanding of the effects of test-optional policies. The report, “Defining Access: How Test-Optional Works,” was co-authored by Steve Syverson, longtime dean of admissions and financial aid at Lawrence University in Wisconsin (and a past president of the National Association for College Admission Counseling), who has come out of retirement to serve as assistant vice chancellor for enrollment management at the University of Washington Bothell; Valerie Franks, former assistant dean of admissions at Bates College in Maine; and Bill Hiss, longtime dean of admissions and financial aid at Bates College.

The report builds on a study originally published by Franks and Hiss in 2014. It includes 28 institutions that cover the spectrum of colleges that use some form of test-optional admission -- small/large, public/private, selective/nonselective -- and records for nearly one million students. Each participating institution (they are not named) provided data for two years on either side of the decision to go test optional.

The study considered the following questions:

  • What do students who choose not to submit test scores look like?
  • How do those who choose not to submit scores compare with those who do with regard to high school and college achievement?
  • Do institutions treat submitters and nonsubmitters differently in admission and financial aid?
  • How does going test optional impact colleges and universities?

The American philosopher William James argued that philosophy should search for “truths” rather than “Truth,” and the Syverson/Franks/Hiss study concludes that there is not a single “Truth” with regard to test optional.

Perhaps the most common criticism of test-optional admission is that it is designed to increase applications. The admission deans at the 28 participating institutions, some of whom were not in their roles when the decision to go test optional was made, confirmed that increasing applications was a major goal behind the policy. The study showed that all 28 saw application increases following the test-optional implementation, with an average increase of 29 percent at private institutions and 11 percent at public ones.

The researchers acknowledge that most colleges and universities saw application increases during the period in question, so they tried to further distill the impact of test optional by comparing the schools in the study to peer institutions with which they directly competed for students but that were not test optional. Fifty-seven percent of the test-optional colleges had greater proportionate application growth.

Does being test optional help colleges enroll more students from underrepresented student populations? All but one of the institutions in the study attracted more applications from that group after adopting test-optional policies, countering a claim made earlier last year in the book Measuring Success, a defense of testing edited by two current College Board employees and one former employee. Sixty percent of the colleges in the study enrolled more students from that group than their identified peers did, although the enrollment of Pell recipients was comparable. Interestingly, the institution with the least growth in applications from underrepresented minority students had the greatest growth in URM enrollment, while the institution with the lowest enrollment growth from that group was among the colleges with the greatest increase in applications.

What are the characteristics of students who choose not to submit test scores? Twenty-five percent of the students in the study were nonsubmitters. Women choose not to submit scores at higher rates than men. Black or African American students are twice as likely to be nonsubmitters, and underrepresented and low-income applicants are more likely not to submit scores than the general population.

Nonsubmitters are more likely to major in the humanities, social sciences or liberal arts. The study found a surprising number of students with access to good college counseling choosing not to submit, and concluded that these students appear to be “accurately playing the corners” in applying to college. As a counselor at an independent school, I find that interesting, because I have always suspected that admissions officers may assume that nonsubmitters from good public and private schools may have lower scores than is actually the case. I also wonder how many of those students may be appealing applicants because they have low financial need. The other interesting nugget was that a number of coaches at Division III colleges encourage recruits not to submit if they have modest test scores (Division I recruits are required to report test scores).

Students who chose not to submit were admitted at lower rates than submitters but enrolled at substantially higher rates. Their first-year and cumulative GPAs were modestly lower, but their graduation rates were comparable to, perhaps even better than, submitters'.

Instituting test-optional policies may require an investment in financial aid. According to the study, the percentage of students with need did not necessarily increase, but the demonstrated need per student did. Submitters with low or no financial need were more likely to receive merit aid, suggesting that some institutions “buy” students who test well to enhance their profile.

That raises the ultimate question regarding test-optional policies. Do colleges go test optional for philosophical reasons, because test scores don’t provide added value to their prediction equations for student success? (It’s probably time to have a debate about whether freshman-year GPA is an appropriate measure of “student success.”) Or is it for marketing reasons, to increase applications, increase diversity and enhance the profile by removing student groups whose test scores may pull down the institutional numbers? I think I know the answer, but that’s the research I’d like to see.

Bio

Jim Jump is the academic dean and director of college counseling at St. Christopher's School in Richmond, Va. He has been at St. Christopher's since 1990 and was previously an admissions officer, women's basketball coach and philosophy professor at the college level. Jim is a past president of the National Association for College Admission Counseling.
