My first encounter with assessment came in the form of a joke. The seminary where I did my Ph.D. was preparing for a visit from the Association of Theological Schools, and the dean remarked that he was looking forward to developing ways to quantify all the students' spiritual growth. By the time I sat down for my first meeting on assessment as a full-time faculty member in the humanities at a small liberal arts college, I had stopped laughing. Even if we were not setting out to grade someone’s closeness to God on a scale from 1 to 10, the detailed list of "learning outcomes" made it seem like we were expected to do something close. Could education in the liberal arts — and particularly in the humanities — really be reduced to a series of measurable outputs?
Since that initial reaction of shock, I have come to hold a different view of assessment. I am suspicious of the broader education reform movement of which it forms a part, but at a certain point I asked myself what my response would be if I had never heard of No Child Left Behind or Arne Duncan. Would I really object if someone suggested that my institution might want to clarify its goals, gather information about how it’s doing in meeting those goals, and change its practices if they are not working? I doubt that I would: in a certain sense it’s what every institution should be doing. Doing so systematically does bear significant costs in terms of time and energy — but then so does plugging away at something that’s not working. Investing a reasonable number of hours up front in data collection seems like a sensible hedge against wasting time on efforts or approaches that don’t contribute to our mission. By the same token, getting into the habit of explaining why we’re doing what we’re doing can help us to avoid making decisions based on institutional inertia.
My deeper concerns come from the pressure to adopt numerical measurements. I share the skepticism of many of my colleagues that numbers can really capture what we do as educators in the humanities and at liberal arts colleges. I would note, however, that there is much less skepticism that numerical assessment can capture what our students are achieving — at least when that numerical assessment is translated into the alphabetical form of grades. In fact, some have argued that grades are already outcome assessment, rendering further measures redundant.
I believe the argument for viewing grades as a form of outcome assessment is flawed in two ways. First, I simply do not think it’s true that student grades factor significantly in professors’ self-assessment of how their courses are working. Professors who give systematically lower grades often believe that they are holding students to a higher standard, while professors who grade on a curve are simply ranking students relative to one another. Further, I imagine that no one would be comfortable with the assumption that the department that awarded the best grades was providing the best education — many of us would likely suspect just the opposite.
Second, it is widely acknowledged that faculty as a whole have wavered in their dedication to strict grading, due in large part to the increasingly disproportionate real-world consequences grades can have on their students’ lives. The "grade inflation" trend seems to have begun because professors were unwilling to condemn a student to die in Vietnam because his term paper was too short, and the financial consequences of grades in the era of ballooning student loan debt likely play a similar role today. Hence it makes sense to come up with a parallel internal system of measurement so that we can be more objective.
Another frequently raised concern about outcome assessment is that the pressure to use measures that can easily be compared across institutions could lead to homogenization. This suspicion is amplified by the fact that many (including myself) view the assessment movement as part of the broader neoliberal project of creating “markets” for public goods rather than directly providing them. A key example here is Obamacare: instead of directly providing health insurance to all citizens (as nearly all other developed nations do), the goal was to create a more competitive market in an area where market forces have not previously been effective in controlling costs.
There is much that is troubling about viewing higher education as a competitive market. I for one believe it should be regarded as a public good and funded directly by the state. The reality, however, is that higher education is already a competitive market. Even leaving aside the declining public support for state institutions, private colleges and universities have always played an important role in American higher education. Further, this competitive market is already based on a measure that can easily be compared across institutions: price.
Education is currently a perverse market where everyone is in a competition to charge more, because that is the only way to signal quality in the absence of any other reliable measure of quality. There are other, more detailed measures such as those collected by the widely derided U.S. News & World Report ranking system — but those standards have no direct connection to pedagogical effectiveness and are in any case extremely easy to game.
The attempt to create a competitive market based on pedagogical effectiveness may prove unsuccessful, but in principle, it seems preferable to the current tuition arms race. Further, while there are variations among accrediting bodies, most are encouraging their member institutions to create assessment programs that reflect their own unique goals and institutional ethos. In other words, for now the question is not whether we’re measuring up to some arbitrary standard, but whether institutions can make the case that they are delivering on what they promise.
Hence it seems possible to come up with an assessment system that would actually be helpful for figuring out how to be faithful to each school or department’s own goals. I have to admit that part of my sanguine attitude stems from the fact that Shimer’s pedagogy embodies what independent researchers have already demonstrated to be “best practices” in terms of discussion-centered, small classes — and so if we take the trouble to come up with a plausible way to measure what the program is doing for our students, I’m confident the results will be very strong. Despite that overall optimism, however, I’m also sure that there are some things that we’re doing that aren’t working as well as they could, but we have no way of really knowing that currently. We all have limited energy and time, and so anything that can help us make sure we’re devoting our energy to things that are actually beneficial seems all to the good.
Further, it seems to me that strong faculty involvement in assessment can help to protect us from the whims of administrators who, in their passion for running schools "like a business," make arbitrary decisions based on their own perception of what is most effective or useful. I have faith that the humanities programs that are normally targeted in such efforts can easily make the case for their pedagogical value, just as I am confident that small liberal arts schools like Shimer can make a persuasive argument for the value of their approach. For all our justified suspicions of the agenda behind the assessment movement, none of us in the humanities or at liberal arts colleges can afford to unilaterally disarm and insist that everyone recognize our self-evident worth. If we believe in what we’re doing, we should welcome the opportunity to present our case.
Adam Kotsko is assistant professor of humanities at Shimer College.
Historians of this period, possessing the clearsightedness that only time provides, will likely point to online learning as the disruptive technology platform that radically changed higher education, which had remained largely unchanged since the cathedral schools of medieval Europe -- football, beer pong and food courts notwithstanding.
Online learning is already well-understood, well-established and well-respected by those who genuinely know it. But what we now see in higher education is a new wave of innovation that uses online learning, or at least aspects of it, as a starting point. The meteoric growth of the for-profit sector, the emergence of MOOCs, new self-paced competency-based programs, adaptive learning environments, peer-to-peer learning platforms, third-party service providers, the end of geographic limitations on program delivery and more all spring from the maturation of online learning and the technology that supports it. Online learning has provided a platform for rethinking delivery models, and much of accreditation is not designed to account for these new approaches.
Until now, regional accreditation has been based on a review of an integrated organization and its activities: the college or university. These were largely cohesive and relatively easy to understand organizational structures where almost everything was integrated to produce the learning experience and degree. Accreditation is now faced with assessing learning in an increasingly disaggregated world with organizations that are increasingly complex, or at least differently complex, including shifting roles, new stakeholders and participants, various contractual obligations and relationships, and new delivery models. There is likely to be increasing pressure for accreditation to move from looking only at the overall whole, the institution, to include smaller parts within the whole or alternatives to the whole: perhaps programs, providers and offerings other than degrees and maybe provided by entities other than traditional institutions. In other words, in an increasingly disaggregated world does accreditation need to become more disaggregated as well?
Take the emergence of competency-based education, which is more profound – if less discussed – than massive open online courses (MOOCs). Our own competency-based program, College for America (CfA), is the first of its kind to so wholly move from any anchoring to the three-credit-hour Carnegie Unit that pervades higher education (shaping workload, units of learning, resource allocation, space utilization, salary structures, financial aid regulations, transfer policies, degree definitions and more). The irony of the credit hour is that it fixes time while leaving the actual learning variable. In other words, we are really good at telling the world how long students have sat at their desks and we are really quite poor at saying how much they have learned or even what they learned. Competency-based education flips the relationship and says let time be variable, but make learning well-defined, fixed and non-negotiable.
In our CfA program, there are no courses. There are 120 competencies – “can do” statements, if you will – precisely defined by well-developed rubrics. Students demonstrate mastery of those competencies through completion of “tasks” that are then assessed by faculty reviewers using the rubrics. Students can’t “slide by” with a C or a B; they have either mastered the competencies or they are still working on them. When they are successful, the assessments are maintained in a web-based portfolio as evidence of learning. Students can begin with any competency at any level (there are three levels moving from smaller, simpler competencies to higher level, complicated competencies) and go as fast or as slow as they need to be successful. We offer the degree for $2,500 per year, so an associate degree for $5,000 if a student takes two years and for as little as $1,250 if they complete in just six months (an admittedly formidable task for most). CfA is the first program of its kind to be approved by a regional accreditor, NEASC in our case, and is the first to seek approval for Title IV funding through the “direct assessment of learning” provisions. At the time of this writing, CfA has successfully passed the first stage review by the Department of Education and is still moving through the approval process.
The radical possibility offered in the competency-based movement is that traditional higher education may lose its monopoly on delivery models. Accreditors have for some time put more emphasis on learning outcomes and assessment, but the competency-based education movement privileges them above all else. When we excel at both defining and assessing learning, we open up enormous possibilities for new delivery models, creativity and innovation. It’s not a notion that most incumbent providers welcome, but in terms of finding new answers to the cost, access, quality, productivity and relevance problems that are reaching crisis proportions in higher education, competency-based education may be the most dramatic development in higher education in hundreds of years. For example, the path to legitimacy for MOOCs probably lies in competency-based approaches, and while they can readily tackle the outcomes or competency side of the equation, they still face formidable challenges of reliable, trustworthy and rigorous assessment at scale (at least while trying to remain free). Well-developed competency-based approaches can also help undergird the badges movement, demanding that such efforts be transparent about the claims associated with a badge and the assessments used to validate learning or mastery.
Competency-based education may also provide accreditors with a framework for more fundamentally rethinking assessment. It would shift accreditation to looking much harder at learning outcomes and competencies, the claims an entity is making for the education it provides and the mechanisms it uses for knowing and demonstrating that the learning has occurred. The good news here is that such a dual focus would free accreditors from so much attention on inputs, like organization, stakeholder roles and governance, and instead allow for the emergence of all sorts of new delivery models. The bad news is that we are still working on how to craft well-designed learning outcomes and conduct effective assessment. It’s harder than many think. A greater focus on outcomes and assessment also raises other important questions for accreditors:
How will they rethink standards to account for far more complex and disaggregated business models which might have a mix of “suppliers,” some for-profit and some nonprofit, and which look very different from traditional institutions?
Will they only accredit institutions or does accreditation have to be disaggregated too? Might there be multiple forms of accreditation: for institutions, for programs, for courses, for MOOCs, for badges and so on? At what level of granularity?
CBE programs are coming. College for America is one example, but other institutions have announced efforts in this area. Major foundations are lining up behind the effort (most notably the Lumina and Bill and Melinda Gates Foundations), and the Department of Education appears to be relying on accreditors to attest to the quality and rigor of those programs. While the Department of Education is moving cautiously on this question, accreditors might want to think through what a world untethered to the credit hour might look like. Might there be two paths to accreditation: the traditional “institutional path” and the “competency-based education path,” with the former looking largely unchanged and the latter using rigorous outcomes and assessment review to support more innovation than current standards now allow? Innovation theory would predict that a new, innovative CBE accreditation pathway would come to improve the incumbent accreditation processes and standards.
This last point is important: accreditors need to think about their relationship to innovation. If the standards are largely built to assess incumbent models and enforced by incumbents, they must be by their very nature conservative and in service of the status quo. Yet the nation is in many ways frustrated with the status quo and unwilling to support it in the old ways. Frankly, many believe we are failing, though the ways they think we are failing depend on whom you ask. But never has the popular press (and thus the public and policy makers) been so consumed with the problems of traditional higher education and intrigued by the alternatives. In some ways, accreditors are being asked to shift or at least expand their role to accommodate these new models.
If regional accreditors are unable to rise to that challenge, they might see new alternative accreditors emerge and be left tethered to incumbent models that are increasingly less relevant or central to how higher education takes place 10 years from now. There is time. As has been said, we frequently overestimate the amount of change in the next two years and dramatically underestimate the amount of change in the next 10. The time is now for regional accreditors to re-engineer the paths to accreditation. In doing so they can not only be ready for that future, they can help usher it into reality.
Paul J. LeBlanc is president of Southern New Hampshire University. This essay is adapted from writing produced for the Western Association of Schools and Colleges as part of a convening to look at the future of accreditation. WASC has given permission for it to be shared more widely and without restriction.
During a recent conversation about the value of comprehensive student learning assessment, one faculty member asked, “Why should we invest time, money, and effort to do something that we are essentially already doing every time we assign grades to student work?”
Most educational assessment zealots would respond by launching into a long explanation of the differences between tracking content acquisition and assessing skill development, the challenges of comparing general skill development across disciplines, the importance of demonstrating gains on student learning outcomes across an entire institution, blah blah blah (since these are my peeps, I can call it that). But from the perspective of an exhausted professor who has been furiously slogging through a pile of underwhelming final papers, I think the concern over a substantial increase in faculty workload is more than reasonable.
Why would an institution or anyone within it choose to be redundant?
If a college wants to know whether its students are learning a particular set of knowledge, skills, and dispositions, it makes good sense to track the degree to which that is happening. But we make a grave mistake when we require additional processes and responsibilities from those “in the trenches” without thinking carefully about the potential for diminishing returns in the face of added workload (especially if that work appears to be frivolous or redundant). So it would seem to me that any conversation about assessing student learning should emphasize the importance of efficiency so that faculty and staff can continue to fulfill all the other roles expected of them.
This brings me back to what I perceive to be an odd disconnect between grading and outcomes assessment on most campuses. It seems to me that if grading and assessment are both intent on measuring learning, then there ought to be a way to bring them closer together. Moreover, if we want assessment to be truly sustainable (i.e., not kill our faculty), then we need to find ways to link, if not unify, these two practices.
What might this look like? For starters, it would require conceptualizing content learned in a course as the delivery mechanism for skill and disposition development. Traditionally, I think we’ve envisioned this relationship in reverse order – that skills and dispositions are merely the means for demonstrating content acquisition – with content acquisition becoming the primary focus of grading. In this context, skills and dispositions become a sort of vaguely mysterious redheaded stepchild (with apologies to stepchildren, redheads, and the vaguely mysterious). More importantly, if we are now focusing on skills and dispositions, this traditional context necessitates an additional process of assessing student learning.
However, if we reconceptualize our approach so that content becomes the raw material with which we develop skills and dispositions, we could directly apply our grading practices in the same way. One would assign a proportion of the overall grade to the necessary content acquisition, and the rest of the overall grade (apportioned as the course might require) to the development of the various skills and dispositions intended for that course. In addition to articulating which skills and dispositions each course would develop and the progress thresholds expected of students in each course, this means that we would have to be much more explicit about the degree to which a given course is intended to foster improvement in students (such as a freshman-level writing course) as opposed to a course designed for students to demonstrate competence (such as a senior-level capstone in accounting procedures). At an even more granular level, instructors might define individual assignments within a given course to be graded for improvement earlier in the term with other assignments graded for competence later in the term.
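The apportioning described above can be made concrete with a small sketch. Everything here is hypothetical: the 40 percent content weight, the particular skills, and the scores are invented for illustration and do not come from any actual course.

```python
# Hypothetical grade calculation splitting a course grade between
# content acquisition and skill/disposition development.
# All weights, skills, and scores are invented for illustration.

def course_grade(content_score, skill_scores, content_weight=0.4):
    """Combine a content score with skill/disposition scores.

    content_score: 0-100 mastery of the course's content.
    skill_scores: dict mapping each skill or disposition to a 0-100 score.
    content_weight: share of the grade assigned to content; the
        remainder is split evenly among the skills.
    """
    skill_weight = (1 - content_weight) / len(skill_scores)
    skills_part = sum(s * skill_weight for s in skill_scores.values())
    return content_weight * content_score + skills_part

# A course weighted 40% content, with 60% split across three skills.
grade = course_grade(
    content_score=85,
    skill_scores={"written communication": 90,
                  "complex reasoning": 80,
                  "intercultural competence": 70},
)
print(round(grade, 1))  # 82.0
```

An instructor could vary `content_weight` by course level: high for an introductory survey, low for a capstone graded chiefly on demonstrated competence.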
I recognize that this proposal flies in the face of some deeply rooted beliefs about academic freedom that faculty, as experts in their field, should be allowed to teach and grade as they see fit. When courses were about attaining a specific slice of content, every course was an island. Seventeenth-century British literature? Check. The sociology of crime? Check. Cell biology? Check.
In this environment, it’s entirely plausible that faculty grading practices would be as different as the topography of each island. But if courses are expected to function collectively to develop a set of skills and/or dispositions (e.g., complex reasoning, oral and written communication, intercultural competence), then what happens in each course is irrevocably tied to what happens in previous and subsequent courses. And it follows that the “what” and “how” of grading would be a critical element in creating a smooth transition for students between courses.
Now it would be naïve of me to suggest that making such a fundamental shift in the way that a faculty thinks about the relationship between courses, curriculums, learning and grading is somehow easy. Agreeing to a single set of institutionwide student learning outcomes can be exceedingly difficult, and for many institutions, embedding the building blocks of a set of institutional outcomes into the design and delivery of individual courses may well seem a bridge too far.
However, any institution that has participated in reaccreditation since the Spellings Commission in 2006 knows that identifying institutional learning outcomes and assessing students’ gains on those outcomes is no longer optional. So the question is no longer whether institutions can choose to engage in assessment; the question is whether student learning, and the assessment of it, becomes an imposition that squeezes out other important faculty and staff responsibilities or whether there is a way to co-opt the purposes of learning outcomes assessment into a process that already exists.
In the end it seems to me that we already have all of the mechanisms in place to embed robust learning outcomes assessment into our work without adding any new processes or responsibilities to our workload. However, to make this happen we need to 1) embrace all of the implications of focusing on the development of skills and dispositions while shifting content acquisition from an end to a means to a greater end, and 2) accept that the educational endeavor in which we are all engaged is a fundamentally collaborative one and that our chances of success are best when we focus our individual expertise toward our collective mission of learning.
Mark Salisbury is director of institutional research and assessment at Augustana College, in Illinois. This essay is adapted from a post on his campus blog.
From health care to major league baseball, entire industries are being shaped by the evolving use of data to drive results. One sector that remains largely untouched by the effective use of data is higher education. Fortunately, a recent regulation from the Department of Education offers a potential new tool that could begin driving critical income data into conversations about higher education programs and policies.
Last year, the Department of Education put forward a regulation called gainful employment. It was designed to crack down on bad actors in investor-funded higher education (sometimes called for-profit higher education). It set standards for student loan repayment and debt-to-income ratios that institutions must meet in order for students attending a specific institution to remain eligible for federal funds.
In order to implement the debt-to-income metric, the Obama administration created a system by which schools submitted Social Security data for a cohort of graduates from specific programs. As long as the program had over 30 graduates, the Department of Education could then work with the Social Security Administration to produce an aggregated income for the cohort. Department officials used this to determine a program-level debt-to-income metric against which institutions would be assessed. This summer, the income data was released publicly along with the rest of the gainful employment metrics.
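The basic shape of that computation can be sketched as follows. The 30-graduate privacy threshold comes from the process described above; summarizing both sides with a simple mean, and expressing debt as annual loan payments, are assumptions made for illustration rather than details of the department's actual methodology.

```python
# Illustrative sketch of a program-level debt-to-income metric.
# The 30-graduate minimum reflects the cohort-size rule described
# in the text; the use of means and of annual loan payments for
# the debt side are simplifying assumptions for illustration.

MIN_COHORT_SIZE = 30  # no aggregate is reported at or below this size

def debt_to_income(annual_loan_payments, cohort_incomes):
    """Return a program's debt-to-income ratio, or None when the
    cohort is too small to aggregate without privacy risk."""
    if len(cohort_incomes) <= MIN_COHORT_SIZE:
        return None  # cohort too small to report safely
    mean_income = sum(cohort_incomes) / len(cohort_incomes)
    mean_payment = sum(annual_loan_payments) / len(annual_loan_payments)
    return mean_payment / mean_income
```

Under these assumptions, a 31-graduate program whose alumni average $3,000 in annual loan payments against $60,000 in income would show a 5 percent ratio, while a 30-graduate program would report nothing at all.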
Unfortunately, the future of the gainful employment regulation is unclear. A federal court judge has effectively invalidated it. We at Capella University welcome being held accountable for whether our graduates can use their degree to earn a living and pay back their loans. While we think that standard should be applied to all of higher education, we also believe there is an opportunity for department officials to take the lemons of the federal court’s ruling and make lemonade.
They have already created a system by which any institution can submit a program-level cohort of graduates (as long as it has a minimum number of graduates in order to ensure privacy) and receive aggregate income data. Rather than letting this system sit on the shelf and gather dust while the gainful employment regulations wind their way through the courts, they should put it to good use. The Department of Education could open this system up and make it available to any institution that wants to receive hard income data on their graduates.
I’m not proposing a new regulation or a requirement that institutions use this system. It could be completely voluntary. Ultimately, it is hard to believe that any institution, whether for-profit or traditional, would seek to ignore this important data if it were available to them. Just as importantly, it is hard to believe that students wouldn’t expect an institution to provide this information if they knew it was available.
Historically, the only tool for an institution to understand the earnings of its graduates has been self-reported alumni surveys. While we at Capella did the best we could with surveys, they are at best educated guesswork. Now, thanks to gainful employment, any potential student who wants to get an M.B.A. in finance from Capella can know exactly what graduates from that program earned on average in the 2010 tax year, which in this case is $95,459. Prospective students can also compare this and other programs, which may not see similar incomes, against competitors.
For those programs where graduates are earning strong incomes, the data can validate the value of the program and drive important conversations about best practices and employer alignment. For those programs whose graduates are not receiving the kinds of incomes expected, it can drive the right conversations about what needs to be done to increase the economic value of a degree. Perhaps most importantly, hard data about graduate incomes can lead to productive public policy conversations about student debt and student financing across all higher education.
That said, the value of higher education is not only measured by the economic return it provides. For example, some career paths that are critical to our society do not necessarily lead to high-paying jobs. All of higher education needs to come up with better ways to measure a wide spectrum of outcomes, but just because we don’t yet have all those measurements doesn’t mean we shouldn’t seize a good opportunity to use at least one important data point. The Department of Education has created a potentially powerful tool to increase the amount of data around a degree’s return on investment. It should put this tool to work for institutions and students so that everyone can drive toward informed decisions and improved outcomes.
It should become standard practice for incoming college students or adults looking to further their education to have an answer to this simple question: What do graduates from this program earn annually? We welcome that conversation.