All eyes were on Mitch Daniels, former governor of Indiana, when he took on the presidency of Purdue University in 2013. How would the politician adjust to life in academe, many wondered, and would he push his standardized testing agenda for K-12 schools up the ladder? But apart from a few scuffles with the faculty -- including his abrupt cancellation of the student common reading program, which he attributed to budget cuts -- Daniels’s tenure had been relatively quiet. Until now, that is.
Two years into the job, Daniels has arrived at a major impasse with Purdue’s faculty: how to prove that students are actually learning something while at the university. Backed by Purdue’s Board of Trustees and inspired by the work of Richard Arum and Josipa Roksa (the authors of Academically Adrift: Limited Learning on College Campuses) and others who argue that undergraduates aren’t learning crucial critical thinking skills, Daniels says the university must be accountable to students, parents, taxpayers and policy makers. He’s tasked a faculty body with choosing just how Purdue will assess gains in critical thinking and other skills after four years there, and he wants to start the assessment process soon -- by the fall.
Purdue wants the student growth assessment “for the same reason that hundreds of other universities are already doing this -- that research has shown that in some cases little to no intellectual growth occurs during the college years,” Daniels said in an interview with Inside Higher Ed. “And the marketplace is saying emphatically that they find far too many college graduates lacking in critical thinking and communication skills and problem solving, etcetera.”
Daniels said he is “very confident learning is happening on our campus,” and that he thinks Purdue will “stack up well” against other institutions in terms of student learning gains. But showing that is a matter of “responsibility and necessity,” he added.
Faculty members, meanwhile, say that the process is too rushed, and that they can’t endorse an assessment instrument they’re not sure is valid. Then there are procedural issues, such as how to choose a representative student sample to take the test as freshmen, and how to get seniors -- who are busier and harder to find -- to take it at all. They’re also concerned about how the university will use the data it gathers from any assessment. Will the data truly be aggregated, as the university has said, professors wonder, or will they somehow be used punitively against faculty members?
“There are a wide variety of issues of concern,” said Patrick Kain, an associate professor of philosophy and a member of both the Student Growth Task Force Oversight Committee, which is studying the assessment issue, and the university’s standing Educational Policy Committee. “One area of concern is whether any of these existing [assessment] instruments are good enough to answer or to begin answering these questions. And I think there are concerns about how this test or results might be used or misused, potentially.... Could they drive decision-making about programs to invest in, or could they be used to recruit for some programs and not others?”
Kain added, “They could affect perceptions about the strength of Purdue compared to other institutions if the test isn’t fairly accurate and fairly useful. They could provide potentially misleading information -- these are the family of concerns I hear people raise.”
The assessment debate actually began about two years ago, when Daniels tasked a joint faculty and administrative committee with recommending an assessment tool to help prove to university “stakeholders” what he said he already knew: that students were learning something at Purdue. Relatively quickly, that committee named an assessment tool: the Collegiate Learning Assessment Plus, run by the Council for Aid to Education. The assessment has been used or is in use at more than 150 institutions to test gains over time in small, representative groups of freshmen and seniors, but it remains controversial. A 2013 study, for example, found that student performance on such tests varies widely based on motivation for taking the test. In other words, a student who has no reason to do well on the test might not take it seriously, and therefore can skew the results negatively for the institution. Others have questioned the appropriateness of basing assessment on small groups of students, and whether the gains are likely to be notable at a university like Purdue that admits well-prepared students.
The faculty-administrative committee included some similar concerns about the test in its report, which soon went to the University Senate and the Student Growth Task Force Oversight Committee for further discussion. That’s where it got held up for about a year and a half, as faculty members debated on and off whether the institution needed an external assessment and, if so, what assessment it might use.
“The Purdue faculty constantly performs a lot of assessments and student assignments -- quizzes, exams, portfolios, journals, internships -- I could go on,” said Patricia Hart, professor of Spanish and chair of the University Senate. “So I guess the first reaction is that we think that any assessment initiative should come from the faculty. The first question we would want to ask is, ‘Is this needed? Is this a good idea?’”
Instead, she said, it feels like faculty members were told, “Go pick a test.”
In December, the project’s faculty oversight committee asked the university’s Board of Trustees for more time -- until fall 2016 -- to answer outstanding questions from the faculty and make a recommendation about a test. But the board rejected that idea, saying it wanted answers by February.
At a University Senate meeting this week, Daniels again made his case to the faculty with a PowerPoint presentation showing that many peer institutions already use the Collegiate Learning Assessment. He said he wanted the assessment to “demonstrate what we know: a Purdue degree has high value,” and that Purdue students gain critical-thinking, reasoning and communication skills. He said he wanted the institution to track its progress over time, and make the information “transparent” to students and potential students, parents, “fellow citizens,” and policy makers. He said the assessment would not be used to rate colleges within Purdue, individual majors, programs or individual faculty.
But faculty members remained unconvinced. They again asked Daniels for more time, and he gave them until April -- not quite the year and a half they’d wanted. Faculty members also asked for the immediate formation of an expert panel to look at all available assessment tools, and to consider whether or not it’s necessary to create a new one, specific to Purdue. (Daniels said an internal tool “won’t fly,” since it’s important to be able to compare Purdue to other institutions.)
Kain said the new deadline wasn’t much time, but it was “some” time. Asked if his oversight committee might be able to make a recommendation by then, he said it hasn’t even been able to meet formally yet to review the results of a pilot study of assessment tools from the fall.
Daniels said he’s confident he’ll “work it out” with the faculty before August, when he plans to begin the new assessment program. He said he didn’t regret taking a “consultative route” to planning, but noted that other institutions have taken a definitively “top-down” approach. In the event that the faculty committee does not make a recommendation in time, he said, “We have a faculty recommendation from an expert committee.”
Referring to this week’s University Senate meeting, he added, “I didn’t hear from anybody who feels we shouldn’t be accountable and shouldn’t be taking any such measurements. I didn’t hear that. I heard discussion about the best ways of doing this. But we’ve already extended things for really two years and I’m not inclined to postpone it further. But we’ll continue talking.”
Hart said that Daniels “can ask as many times as he wants, but the answer is always going to be the same: the faculty is very concerned about student growth and could not be more interested in proving or studying it. But in order to do that you have to design the study.”