The Carnegie Foundation for the Advancement of Teaching and the Educational Testing Service are teaming up to develop a new way to evaluate competency-based learning.
The two organizations announced today that they are partnering to create a set of tools designed to assess the qualitative skills that many of today’s employers consider most important—such as creative thinking, work ethic and ability to collaborate.
The organizations assert that such tools could be better indicators of a student’s future success than the traditional Carnegie unit, or credit hour, a measure first introduced in 1906 that correlates proficiency in a subject with the amount of time spent studying it. The new tools would also allow students to account for learning completed outside the classroom, such as at a job or internship.
The initiative comes at a time when traditional testing is under increased scrutiny, especially since the onset of the COVID-19 pandemic, with fewer and fewer colleges requiring applicants to submit SAT or ACT scores. The exams have also been criticized as racially biased against Black test takers and as weak predictors of student success.
ETS, a major testing organization responsible for exams including the GRE, Praxis and TOEFL, is moving away from its focus on testing to become a “data insights” company, Tim Knowles, president of the Carnegie Foundation, said in an interview with Inside Higher Ed. The new competency-based initiative could help ETS fill the testing void.
The project will initially be focused on high school and middle school, Knowles said. But he believes the assessments will ultimately have valuable applications for higher education: not only could they be utilized as a metric in the college admissions process, but they could also eventually be translated into college credits—in much the same way AP exam scores are—that guide course placements for students and determine how quickly they progress through a curriculum.
Knowles also suggested that such assessments could eventually stand in for a college degree, at least in terms of demonstrating competency when applying for jobs.
Higher education’s “real existential threat is whether there are pathways to purposeful careers that don’t depend on a degree,” he said. “And, frankly, if you get the assessment architecture right and you can actually assess whether people know and can do particular things, then where they learned them is totally irrelevant.”
He acknowledged such a shift will take time.
“That won’t be driven by this particular initiative,” he said. “But that’s what I think is the thing that the future could bring that will be very disruptive to the current model.”
New Life for an Old Idea
Conversations about how institutions—both at the K-12 and postsecondary levels—could better assess learning have been ongoing for decades; some institutions have already begun using competency-based evaluations in specific programs. The American Association of Colleges and Universities introduced VALUE rubrics in 2009, which, according to its website, are utilized by 2,700 colleges and universities worldwide. The rubrics are designed to help educators “evaluate student performance reliably and verifiably across sixteen broad, cross-cutting learning outcomes.”
According to Charla Long at the Competency-Based Education Network, fields like teaching and nursing also utilize such assessments in various capacities, such as to evaluate nursing students’ compassion.
Long believes the joint ETS–Carnegie Foundation project is a significant opportunity for the two influential institutions to help legitimize and scale the work already being done in the field of competency-based assessment.
“Them getting into this will be game-changing,” she said. “We just want it to be informed by the great innovations that are already underway.”
Some critics, however, argue that focusing on competency-based methods of measuring learning fails to recognize how existing college systems and assignments impart those same skills; in other words, it doesn’t acknowledge the ways that learning subjects like history, chemistry or Spanish through traditional methods can help students become successful communicators, critical thinkers and collaborators, among other things.
“The purpose of high school, one of its primary purposes, is to educate citizens, and the education of citizens means providing them access to a liberal education,” said Johann Neem, a history professor at Western Washington University and the author of What’s the Point of College? (Johns Hopkins University Press, 2019).
If competency-based assessments become central to students’ education, “we may actually have less well-prepared citizens,” he argued. “We may lose track of the fundamental purpose of public education, which is the education of citizens. The education of workers is a secondary purpose.”
Neem also said that by requiring students to take such assessments, institutions would limit the freedom of teachers and professors to determine learning outcomes in their own courses—though Knowles argues that the assessments would only give teachers additional, valuable tools for supporting their students’ progress.
Overall, Knowles said, educators and state leaders are enthusiastic about competency-based assessments and have indicated that they will embrace such assessments once they exist.
"I don’t see any constituency, whether it’s K-12 educators, state officials, leaders or employers, whatever their partisan sort of footing may be, I don’t see any of them pushing back or leaning back,” he said. “At the root of it is we’re trying to set young people up with the skills they need for success, whether they go right into the workforce after high school, or they go to college and then go to the workforce. People are aligned around that.”
First Steps
The preliminary phase of the project involves answering a deceptively simple question: What skills should institutions focus on assessing? That requires deciding on skills that both predict future success and can be validly and reliably measured, Knowles said. The partners hope to identify these skills within the next four to six weeks.
From there, researchers will investigate how to actually measure those abilities.
“A key part of the research effort is to develop a comprehensive skills framework, identifying and defining the key future skills that matter for work, life, and education. The skills will go beyond traditional cognitive skills to include affective and behavioral skills,” Amit Sevak, the president and CEO of ETS, told Inside Higher Ed in an email. “We will start with a short list of skills related to how learners reason, create, collaborate, and persevere, among others. Given ETS’s position as the world’s leading assessment and measurement organization, the framework will also feature discussions of innovative assessment approaches to measuring learning and experiences gained from both in school instruction and out of school experiences.”
The Carnegie Foundation and ETS hope the project will culminate in a multistate pilot program, starting in about a year. During the pilot, the project leaders will work with educators and other stakeholders to observe the assessments in action and evaluate their fairness, their precision in capturing learned skills and how different classroom conditions affect the results.
While it is unclear what these potential assessment approaches will look like, Sevak noted that they will be “very different from the standardized tests we know today,” focused more on measuring progress than assigning a numerical score.
According to Long, this view aligns with existing work in the realm of competency-based assessment; she said that traditional standardized tests rarely capture a student’s actual capabilities.
She used the example of a teaching candidate: if asked on a multiple-choice quiz how they would manage a pupil who was behaving badly, most education students would choose the correct answer—they would speak calmly to the child about their behavior. But that doesn’t show whether they could actually keep their cool under pressure in the same way a simulation might, Long said.
“We would lean heavily toward performance-based demonstrations of the competency,” she said. “We would want to put them in the situation and watch them do it.”