As colleges have faced increasing pressure in recent years to demonstrate that students learn something while enrolled, many have turned to tests of learning outcomes, such as the Collegiate Learning Assessment. In that test -- and popular alternatives from ACT and the Educational Testing Service -- small groups of entering and graduating students are tested on their critical thinking and other skills. In theory, comparing the scores of new and graduating students yields evidence of whether students are learning. Many call the difference between the entering and graduating students' performance the "value added" by a college degree.
These test results may be high-stakes for colleges, many of which need to show accreditors and others that they are measuring student learning. But for the students taking the exams, the tests tend to be low stakes -- no one must pass or achieve a certain score to graduate, gain honors or do pretty much anything else.
A new study by three researchers at the Educational Testing Service -- one of the major providers of these value-added exams -- raises questions about whether the tests can be reliable when students have different motivations (or no motivation) to do well on them. The study found that student motivation is a clear predictor of student performance on the tests, and can skew a college's average value-added score. The study recently appeared in Educational Researcher, the flagship journal of the American Educational Research Association, under the title "Motivation Matters: Measuring Learning Outcomes in Higher Education."
The ETS researchers -- Ou Lydia Liu, Brent Bridgeman and Rachel Adler -- gave the ETS Proficiency Profile (including its optional essay) to 757 students from three institutions: a research university, a master's institution and a community college. (The ETS Proficiency Profile is one of the three major tests used for value-added calculations of student learning, with the others being the CLA and ACT's Collegiate Assessment of Academic Proficiency.) The students were representative of their institutions' student bodies in socioeconomics, performance on admissions tests and various other measures.
To test the impact of motivation, the researchers randomly assigned students to groups that received different consent forms. One group of students received a consent form indicating that their scores could be linked to them and (in theory) help them: "[Y]our test scores may be released to faculty in your college or to potential employers to evaluate your academic ability." The researchers referred to this group as having received the "personal condition." After the students took the test and completed a survey, they were debriefed and told the truth, which was that their scores would be shared only with the research team.
The study found that those with a personal motivation did "significantly and consistently" better than other students -- and reported in surveys a much higher level of motivation to take the test seriously. Likewise, these student groups with a personal stake in the tests showed higher gains in the test -- such that if their collective scores were being used to evaluate learning at their college, the institution would have looked like it was teaching more effectively.
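The mechanics here are simple arithmetic: a college's "value added" is the gap between its graduating and entering cohorts' average scores, so anything that selectively boosts one cohort's scores -- such as a motivational manipulation -- shifts the estimate. A minimal sketch of that calculation, using purely hypothetical numbers (not figures from the study):

```python
# Illustrative "value-added" calculation and how a motivation effect can
# skew it. All numbers below are hypothetical, not from the ETS study.

def value_added(entering_mean: float, graduating_mean: float) -> float:
    """Value added = mean graduating score minus mean entering score."""
    return graduating_mean - entering_mean

# Baseline: both cohorts take the test under ordinary low-stakes conditions.
baseline = value_added(entering_mean=450.0, graduating_mean=458.0)

# Same cohorts, but the graduating students believe their scores may be
# shown to employers (the "personal condition"), adding a score boost
# that reflects effort, not learning.
motivation_boost = 10.0
skewed = value_added(entering_mean=450.0,
                     graduating_mean=458.0 + motivation_boost)

print(baseline)  # 8.0
print(skewed)    # 18.0
```

Under these assumed numbers, the motivated cohort makes the institution appear to add more than twice as much learning, even though actual instruction is unchanged -- the skew the study's authors warn about.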
Liu, Bridgeman and Adler write that their findings suggest a serious problem in using such test scores to evaluate colleges' teaching and learning quality. "An important message to policymakers is that institutions that employ different motivational strategies in testing the students should be compared with great caution, especially when the comparison is for accountability purposes," they write. "Accountability initiatives involving outcomes assessment should also take into account the effect of motivation when making decisions about an institution's instructional effectiveness."
What about evaluating higher education as a whole? The authors note that the much-discussed Academically Adrift: Limited Learning on College Campuses (2011, University of Chicago Press) used CLA scores to argue that students learn very little in college. The small learning gains reported in that book are "very similar to the small learning gain" that the authors of the new study found in their control group, they write. But when motivation was added, they found greater learning gains.
Richard Arum, a New York University professor who is co-author of the book, said via e-mail that the "methods employed in the study are so different from ours that the results are not truly comparable." Academically Adrift's study was based on the same students tested at multiple points in their careers, Arum noted, unlike the data in the new study. While he said he agreed that motivation affects student test performance, Arum added that "I personally don't think the research design is similar enough to our longitudinal research methods to serve as an empirical critique."
In an e-mail interview, Liu said that the concerns go beyond standardized tests. "Colleges should be aware of the potential effects of motivation on low-stakes test scores," she said. "They should also try to use strategies to improve students' test taking motivation. We think that the findings from our study not only apply to other standardized low-stakes tests, but also apply to the home-grown assessments employed by many institutions, as the nature of the motivation issue remains similar across the testing situations as long as students don’t see any direct consequence of the test scores. The motivational strategies we used in this study produced significant impacts on students’ test scores on the ETS Proficiency Profile. One recommendation we have for institutions is to emphasize to students the importance of test scores to their home institution: the scores are likely to affect public perception of their institution and therefore affect the value of their diploma."
She noted that ETS is adjusting its Proficiency Profile in part to deal with the issue of motivation. "Starting this month, individuals who take the ETS Proficiency Profile will receive certificates stating the performance level achieved on these tests. All test takers will receive an electronic version of the certificate, allowing the recipient to share the certificate with an unlimited number of academic institutions and prospective employers," she said. "Now, their students will be more motivated to do their best since they earn a certificate designed to have value beyond the classroom."
Roger Benjamin, president of the Council for Aid to Education (which runs the CLA), said via e-mail that the new study "raises significant questions." But he said that he believes that the CLA institutions have had success recruiting students with higher motivation levels than those reported by the new study. He also said that the CLA questions "are designed to be interesting to test-takers," but that the new study's conclusions are "worth investigating and we will do so."
Further, he noted that a new version of the CLA called CLA+ is reliable for measuring individual students, not just institutions, and so could be used as a "moderate stakes" test for students, potentially to "boost motivation" of those taking the test.
George Kuh, director of the National Institute for Learning Outcomes Assessment, called the new paper "an important study, mostly because it empirically confirms what people on campuses know," which is that "motivation matters big-time in terms of student performance." He said he considered the findings significant for all of the general learning tests.
Colleges generally have not found a consistent way to ensure that students taking these tests are motivated, and that failure raises real questions about the results, he said. Motivation affects not only students' performance but also who agrees to take the test in the first place, Kuh said. "One provost told me that students were offered $75 to take the CLA and after 10 minutes several turned in their test and asked for the money," he said.