Grades and test scores have worked well as the prime criteria for evaluating applicants for admission, haven’t they? No! You’ve probably heard that claim over and over, and figured that if the admissions experts believe it, you shouldn’t question them. But that long-held conventional wisdom just isn’t true. Whatever value tests and grades once had has been severely diminished. There are many reasons for this conclusion, including greater diversity among applicants by race, gender, sexual orientation, and other dimensions that interact with career interests. Asking grades and test scores to predict success amid so much variety among applicants demands too much of those former stalwarts of selection. They were never intended to carry such a heavy expectation, and they can no longer do the job, even if they once did. Another reason is purely statistical: we have had about 100 years to figure out how to measure verbal and quantitative skills better, and we just can’t do it.
Grades are even worse predictors of success than tests. The major reason is grade inflation: everyone is getting higher grades these days, in high school, college, graduate, and professional school alike. Students are bunching up at the top of the grade distribution, and we cannot distinguish among them when selecting who would make the best student at the next level.
We need a fresh approach. It is not good enough to feel constrained by the limitations of our current ways of conceiving of tests and grades. Instead of asking, “How can we make the SAT and other such tests better?” or “How can we adjust grades to make them better predictors of success?” we need to ask, “What kinds of measures will meet our needs now and in the future?” We do not need to ignore our current tests and grades; we need to add new measures that expand the potential we can derive from assessment.
We appear to have forgotten why tests were created in the first place. While they were always considered to be useful in evaluating candidates, they were also considered to be more equitable than using prior grades because of the variation in quality among high schools.
Test results should be useful to educators, whether involved in academics or student services, by providing a basis to help students learn better and to analyze their needs. As currently designed, tests do not accomplish these objectives. How many of you have ever heard a colleague say, “I can better educate my students because I know their SAT scores”? We need things from our tests that we are not currently getting: tests that are fair to all, that provide a good assessment of students’ developmental and learning needs, and that are also useful in selecting outstanding applicants. Our current tests don’t do that.
The rallying cry of "all for one and one for all" is often invoked in developing what are thought of as fair and equitable measures. Commonly, the approach to handling diversity is to hone and fine-tune tests so they work equally well for everyone (or at least to try to do that). However, if different groups have different experiences and varied ways of presenting their attributes and abilities, it is unlikely that one could develop a single measure, scale, or test item that could yield equally valid scores for all. If we concentrate on results rather than intentions, we could conclude that it is important to do an equally good job of selection for each group, not that we need to use the same measures for all to accomplish that goal. Equality of results, not of process, is what matters most.
Therefore, we should seek to retain the variance due to culture, race, gender, and other aspects of non-traditionality that may exist across diverse groups in our measures, rather than attempt to eliminate it. I define non-traditional persons as those with cultural experiences different from those of white middle-class males of European descent; those with less power to control their lives; and those who experience discrimination in the United States.
While the term “noncognitive” sounds precise and “scientific,” it has been used to describe a wide variety of attributes. Mostly it has been defined as something other than grades and test scores, including activities, school honors, personal statements, student involvement, and the like. In many cases those espousing noncognitive variables have confused a method (e.g., letters of recommendation) with the variable being measured; one can look for many different things in a letter. Robert Sternberg’s system of viewing intelligence provides a model, but it is important to know what sorts of abilities are being assessed and that those attributes are not just proxies for verbal and quantitative test scores. Noncognitive variables appear to fall in Sternberg’s experiential and contextual domains, while standardized tests tend to reflect the componential domain. While noncognitive variables are useful for all students, they are particularly critical for non-traditional students, since standardized tests and prior grades may provide only a limited view of their potential.
My colleagues, students, and I have developed a system of noncognitive variables that has worked well in many situations. The eight variables in the system are self-concept, realistic self-appraisal, handling the system (racism), long-range goals, strong support person, community, leadership, and nontraditional knowledge. Measures of these dimensions are available at no cost in a variety of articles and in a book, Beyond the Big Test.
This Web site has previously featured how Oregon State University has used a version of this system very successfully to increase diversity and student success. Besides improved student retention, Oregon State has seen better referrals for student services. The system has also been employed in selecting Gates Millennium Scholars. This program, funded by the Bill & Melinda Gates Foundation, provides full scholarships to undergraduate and graduate students of color from low-income families. The SAT scores of those not selected for scholarships were somewhat higher than those of the selected scholars. To date the program has provided scholarships to more than 10,000 students attending more than 1,300 different colleges and universities. Their college GPAs average about 3.25, with a five-year retention rate of 87.5 percent and a five-year graduation rate of 77.5 percent, while attending some of the most selective colleges in the country. About two-thirds are majoring in science and engineering.
The Washington State Achievers program has also employed the noncognitive variable system discussed above in identifying students from certain high schools that have received assistance from an intensive school reform program also funded by the Bill & Melinda Gates Foundation. More than 40 percent of the students in this program are white, and overall the students in the program are enrolling in colleges and universities in the state and are doing well. The program provides high school and college mentors for students. The College Success Foundation is introducing a similar program in Washington, D.C., using the noncognitive variables my colleagues and I have developed.
Recent articles in this publication have discussed programs at the Educational Testing Service for graduate students and at Tufts University for undergraduates that have incorporated noncognitive variables. While I applaud the efforts for the reasons I have discussed here, there are questions I would ask of each program. What variables are you assessing in the program? Do the variables reflect diversity conceptually? What evidence do you have that the variables assessed correlate with student success? Are the evaluators of the applications trained to understand how individuals from varied backgrounds may present their attributes differently? Have the programs used the research available on noncognitive variables in developing their systems? How well are the individuals selected doing in school compared to those rejected or those selected using another system? What are the costs to the applicants? If there are increased costs to applicants, why are they not covered by ETS or Tufts?
Until these and related questions are answered, these two programs seem like interesting ideas worth watching. In the meantime we can learn from the programs described above that have successfully employed noncognitive variables. It is important for educators to resist half measures and to confront fully the many flaws of the traditional ways higher education has evaluated applicants.