The Kingdom of Tonga

This past July I received a subpoena to appear in the Supreme Court of the Kingdom of Tonga. Google Maps suggests that if you flew direct from New Zealand to Hawai’i, you might pass over Tonga about a third of the way up. At stake was -- and still is -- the fate of a small liberal arts academy called the University at ‘Atenisi (ah-ten-EE-SEE) Institute, its name meaning “Athenian” in Tongan, the school’s tribute to ancient Greece.

The local accreditor’s requirements include a standard on student learning outcomes that the university had attempted to comply with in the usual way: by engaging consultants. Nevertheless, ‘Atenisi received in response a terse letter stating it had thus far failed to meet standards and must cease recruiting new students. The university’s dean contacted me during the ensuing legal battle, because of my public critiques of the assessment bureaucracy in American higher education.

I gave testimony via Skype video from my office. The judge and counsel for both parties were collegial, and we spent a couple of hours discussing the practical usefulness of formal learning statements. The “learning outcomes” are just words, I explained -- or, as Bob Shireman put it best, “the insulting reduction of learning to brief blurbs, using a bizarre system of verb-choice rules.”

In the end, the judge agreed with my suggestion that an accreditor ought not be able to withhold accreditation without citing specific problems with the blurbs and allowing the institution the opportunity to resubmit. The resulting judgment lifted the ban on new enrollment but left the accreditor free to assess applications as it sees fit -- the Tonga National Qualifications and Accreditation Board is still empowered to deny accreditation using arbitrary requirements for the form and content of learning outcome statements. Just words or not, ‘Atenisi’s fate depends on how they scan during the final Kafkaesque accounting.

It may be that the accreditation is being withheld as retribution for political discord between ‘Atenisi and the government dating back to the 1990s. If so, the arcane rules of learning outcomes statements are perfect for the task.

Parallel Worlds

Although bureaucratic assessment reporting has been criticized by assessment organizations, and accreditors complain about “cookie-cutter” reports, the staff in most assessment offices are still obliged to at least pretend that learning outcomes are more than just words.

The faculty already have a language about teaching and learning. If you ask a math professor what students will learn in calculus, she may point you to the table of contents in the textbook, where dozens of topics are enumerated. If you ask how the class is going, you might hear that “they were fine with the derivative rules, but related rates problems are killing them.” This language is an integral part of teaching.

The assessment bureaucracy -- those periodic checkboxy reports -- can only be justified if the formal learning outcome statements and their standardized assessments are superior to the native ways faculty know their students. Otherwise we could just ask faculty how the students are doing and use course registrations and grades for data. We could look at the table of contents to find the learning outcomes.

These two worlds -- the report writing and the lived experience -- coexist, but not easily. While the assessment office depends on the informal channels of faculty knowledge to do meaningful work, in most regions of the country each program requires a formal report. These “cookie-cutter” reports fail miserably at generating new knowledge (something you couldn’t learn by just asking faculty members what they think) because they are based on faith in the special meaningfulness of the learning blurbs.

Standardized-testing research methods can work, but the validation of results is difficult and requires time, expertise and enough data. Educational research bears little resemblance to what passes for assessment in report writing.

Rather, assessment reports are judged by how well they follow the rules, like having outcomes statements of the proper form. The poor quality of the empirical work is a consequence of the sheer number of outcomes (hundreds or thousands per institution), the paucity of data produced (usually N < 20), the low reliability of the data and the rudimentary analysis used (often: make a bar graph and circle the short one). This is a truly awful way to do research, and it shouldn’t be a surprise that reviews pointedly ignore data quality issues: everyone would flunk otherwise.
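
To make the small-sample point concrete, here is a back-of-the-envelope sketch (the numbers are simulated for illustration, not taken from any report): with a typical sample of 20 rubric scores spread across a one-to-four scale, the 95 percent confidence interval around the mean comes out roughly a full rubric point wide -- far too coarse to support the comparisons these reports claim to make.

    # Illustration only: simulate 20 rubric scores (1-4 scale) and compute the
    # 95% confidence interval around the mean. The data are made up; the point
    # is the width of the interval at this sample size.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    scores = rng.integers(1, 5, size=20)      # 20 hypothetical rubric scores, 1-4

    mean = scores.mean()
    sem = stats.sem(scores)                   # standard error of the mean
    ci_low, ci_high = stats.t.interval(0.95, df=len(scores) - 1, loc=mean, scale=sem)

    print(f"mean = {mean:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
    # The interval is about a point wide on a four-point scale -- too blunt to
    # tell "meets expectations" from "exceeds expectations" with any confidence.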

So, despite the reductionist appeal of learning outcomes statements as a basis for scientific understanding of learning, the practical effect of ubiquitous assessment has been the creation of arbitrary rules for meeting regulatory standards and cookie-cutter reports that ironically depend on informal faculty ways of knowing for any useful content.

In ‘Atenisi’s case, there aren’t enough students to ever do quantitative research in the way the formalized learning assessments pretend to. Their case is extreme and outrageous, but the same ideology affects most of higher education in the United States. It may not shut down your university, but it’s costing you dearly; higher education needs data science, not faith in formulas.

Fixing Assessment

If you work in the assessment office, I’m not disparaging your work; I’m trying to help you. There is great potential for your office to contribute to your institution’s programs and students -- and its bottom line. The latter benefit is going to matter when we all fall off the “demographic cliff” that’s capturing headlines.

The benefits your office probably already provides include:

  • Facilitation of external program review. This is the natural extension of faculty ways of knowing and is the most authentic way to understand a program: considering, for example, facilities, budgets, faculty numbers and qualifications, and curricula, and reviewing samples of student work.
  • Being an internal consultant for program development, e.g., leading discussions of curriculum coherence or identifying intuitive learning goals that span courses. This leads to more agreement about what students should be accomplishing, and helps the faculty’s natural language converge.
  • Summarizing or modeling data, when there’s enough of it to work with.
  • Coordinating assessment reporting for regulatory purposes using cookie-cutter forms, often entered into expensive software systems.

The last one is the most expensive and time-consuming but provides the least benefit to the institution. We need to get out of the checkbox-reporting business, and the sooner the better.

The magnificent opportunity for assessment offices is to further develop their data analysis potential in support of overworked institutional research offices, with a focus on student success -- not just what students learn in a class, but how they got there: find the pathways to success. There’s never been a better time to develop new skills in this area, and a good place to start is with course grades.

Course grades don’t fit nicely into the learning outcome ideology. You may have been told that they provide only “indirect evidence” and are not useful as primary data for understanding learning. This is, of course, preposterous. Here are some questions you could start with (a sketch of the first two follows the list):

  • What is the distribution of academic performance among students by demographic?
  • What is the distribution of course difficulty by program, or across courses within a program?
  • Does learning suffer when students wait to take introductory courses?
  • How reliable are grade assignments by program?
  • How well do grades predict other things you care about, like standardized tests or other assessment data, internship evaluations and outcomes after graduation?
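
As a starting point, here is a minimal sketch of the first two questions in pandas. The file name (course_grades.csv) and the column names (student_id, demographic, program, course, grade_points) are hypothetical placeholders for whatever your registrar’s extract actually contains.

    # Sketch of the first two questions above, assuming one row per (student, course)
    # and hypothetical columns: student_id, demographic, program, course, grade_points
    # (A = 4.0, ..., F = 0.0).
    import pandas as pd

    grades = pd.read_csv("course_grades.csv")  # hypothetical registrar extract

    # 1. Distribution of academic performance by demographic group
    performance = (
        grades.groupby("demographic")["grade_points"]
              .describe()[["count", "mean", "std", "25%", "50%", "75%"]]
    )
    print(performance)

    # 2. A crude index of course difficulty: the gap between the grade awarded in a
    #    course and the student's overall average grade in the extract.
    student_gpa = grades.groupby("student_id")["grade_points"].transform("mean")
    grades["gpa_gap"] = grades["grade_points"] - student_gpa
    difficulty = (
        grades.groupby(["program", "course"])["gpa_gap"]
              .agg(["mean", "count"])
              .sort_values("mean")             # most "difficult" courses first
    )
    print(difficulty.head(20))

The gpa_gap comparison is only one crude way to separate hard courses from differences in who takes them; treat it as a conversation starter with the faculty, not a finished analysis.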

Contact me directly for methods, results and more ideas.

The first step to move forward is to realize that learning outcome statements are just crude placeholders for describing complex human behaviors. There is no magical power imbued by getting the words just right.

David Eubanks is assistant vice president for institutional effectiveness at Furman University (david.eubanks@furman.edu), where he works with faculty and administrators on internal research projects. He holds a Ph.D. in mathematics from Southern Illinois University and has served variously as a faculty member and administrator at four private colleges. Active research interests include crowdsourced data, assessment of writing, the reliability of measurement and causal inference from nominal data. He writes about data in higher education at http://highered.blogspot.com.
