
If I were feeling a bit more obtuse this morning I might have titled this entry "'Tis the Season for Trying to Assess Student Learning in College." Now more than ever, higher education leaders, the legislative community, and the public are obsessed with having better data about what students actually know and are able to do upon graduation. Earlier this year the blockbuster book Academically Adrift shook the higher education community by showing that students themselves reported being asked to do very little in college and in some cases lost ground educationally. The book landed in the middle of a national conversation about increased accountability in higher education and about tying public funding to evidence of institutional effectiveness. This discourse, affirmed by accreditation agencies, upped the ante on evidence of student learning outcomes at colleges and universities across the nation.

Aside from the fact that the notion of systematically assessing student learning conceptually contradicts the history of higher education, there is one fundamental challenge that must be addressed, or at least acknowledged, as we move forward: technology lag.

While most in higher education seem to agree that having better information about student learning has tremendous value, we must admit that the "how" is stumping us. Perhaps the biggest challenge is reaching sufficient agreement about what exactly we seek to measure. The renowned educator John Dewey claimed that "education is not preparation for life; education is life itself." Education and learning have never been static commodities confined to space, time, curriculum, or a classroom. In fact, it can be difficult to know exactly when learning is taking place or how it changes thinking or future behavior. This is especially true for college students. If I were less philosophical this morning I might claim that we should at least be able to measure content knowledge: understanding or aptitude related to specific subject areas. But how should we measure this?

Over the last decade the number of student surveys, portfolio archetypes, benchmarking tools, and tests aimed at assessing student learning has quadrupled. It seems that assessing student learning is on the verge of becoming an industry itself. Yet none of the technology available today fully satisfies our methodological or conceptual standards for holistically valid evidence of student learning. Instead, we have good information about student engagement, better alignment among courses, educational activities, and intended learning outcomes, and a very healthy debate about methods. Psychometrics is my new buzzword!

One issue is the heavy reliance on self-reports from students. Even with anonymity, can we trust students to accurately report the time spent studying, their level of engagement in a course, or to honestly assess their own learning? More importantly, what kind of inferences should be made based on self-reports? I'm sometimes confused about how we appropriately account for intrinsic motivation, pre-entry characteristics, or human development during a student's college years. The technological sophistication required to satisfy questions about student learning and educational gains does not yet exist. Still, asking tougher questions about student learning ensures that we know much more than we did a generation ago. It also means we are learning how much we don't know about student learning.

Assessment in its purest sense involves a phenomenon that is measurable and a predetermined unit of analysis. And an assessment that involves more than one unit (e.g., courses across different schools within a university) assumes that the units are comparable on some dimensions. Last year I struggled with the idea of conducting systematic assessments of learning in institutions characterized by "organized anarchy." Doesn't the hallmark of academic freedom and the autonomy granted to faculty guarantee significant variations in teaching diverse courses with nuanced learning objectives? Beyond our inability to measure important variables students bring to teaching and learning environments, the learning experience can differ greatly from one course to another in the same department. These variations and the immeasurable human aspects of teaching and learning are what make assessing student learning so difficult. Yet higher education cannot opt out of it. In fact, the pressure and the expectation to demonstrate learning outcomes that are measurable, observable, and quantifiable have never been greater.

For now we will need to live with imperfect measures and continue giving colleges and universities the benefit of the doubt. Those who are frustrated will, like me, need to demonstrate patience with the inherently imperfect science of assessing student learning.

James T. Minor is director of higher education programs at the Southern Education Foundation.
