Assessment

High-profile problems at highly visible universities get their accreditor's attention

With actions involving U.Va., North Carolina and Florida A&M, among others, Southern agency responds to spate of high-profile controversies and calamities.

WASC raises concerns about Education Department's evaluation of accreditors

A major regional accreditor raises questions about whether the Education Department's methods of evaluating such agencies are truly helpful.

Linking outcomes assessment to grading could build faculty support (essay)

During a recent conversation about the value of comprehensive student learning assessment, one faculty member asked, “Why should we invest time, money, and effort to do something that we are essentially already doing every time we assign grades to student work?”  

Most educational assessment zealots would respond by launching into a long explanation of the differences between tracking content acquisition and assessing skill development, the challenges of comparing general skill development across disciplines, the importance of demonstrating gains on student learning outcomes across an entire institution, blah blah blah (since these are my peeps, I can call it that).  But from the perspective of an exhausted professor who has been furiously slogging through a pile of underwhelming final papers, I think the concern over a substantial increase in faculty workload is more than reasonable.  

Why would an institution or anyone within it choose to be redundant?

If a college wants to know whether its students are learning a particular set of knowledge, skills, and dispositions, it makes good sense to track the degree to which that is happening. But we make a grave mistake when we require additional processes and responsibilities from those “in the trenches” without thinking carefully about the potential for diminishing returns in the face of added workload (especially if that work appears to be frivolous or redundant). So it would seem to me that any conversation about assessing student learning should emphasize the importance of efficiency so that faculty and staff can continue to fulfill all the other roles expected of them.

This brings me back to what I perceive to be an odd disconnect between grading and outcomes assessment on most campuses. It seems to me that if grading and assessment are both intent on measuring learning, then there ought to be a way to bring them closer together. Moreover, if we want assessment to be truly sustainable (i.e., not kill our faculty), then we need to find ways to link, if not unify, these two practices.

What might this look like?  For starters, it would require conceptualizing content learned in a course as the delivery mechanism for skill and disposition development.  Traditionally, I think we’ve envisioned this relationship in reverse order – that skills and dispositions are merely the means for demonstrating content acquisition – with content acquisition becoming the primary focus of grading.  In this context, skills and dispositions become a sort of vaguely mysterious redheaded stepchild (with apologies to stepchildren, redheads, and the vaguely mysterious).  More importantly, if we are now focusing on skills and dispositions, this traditional context necessitates an additional process of assessing student learning.

However, if we reconceptualize our approach so that content becomes the raw material with which we develop skills and dispositions, we could apply our existing grading practices directly to both. One would assign a proportion of the overall grade to the necessary content acquisition, and the rest of the overall grade (apportioned as the course might require) to the development of the various skills and dispositions intended for that course. This means that, in addition to articulating which skills and dispositions each course would develop and the progress thresholds expected of students in each course, we would have to be much more explicit about whether a given course is intended to foster improvement (such as a freshman-level writing course) or to have students demonstrate competence (such as a senior-level capstone in accounting procedures). At an even more granular level, instructors might designate individual assignments within a given course to be graded for improvement earlier in the term, with others graded for competence later in the term.
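To make the arithmetic concrete, here is a minimal sketch of how such an apportioned grade might be computed. The weights, outcome names, and scores below are entirely hypothetical, invented for illustration rather than drawn from any actual course design:

    # Hypothetical apportionment of a course grade between content
    # acquisition and skill/disposition development (Python).
    # All weights and outcome names are invented for this example.

    WEIGHTS = {
        "content": 0.40,                # content as raw material, not the whole grade
        "written_communication": 0.35,  # graded for improvement in a 100-level course
        "complex_reasoning": 0.25,      # graded against a competence threshold
    }

    def course_grade(scores):
        """Weighted average of content and outcome scores, each on a 0-100 scale."""
        return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

    print(course_grade({
        "content": 88,
        "written_communication": 75,
        "complex_reasoning": 82,
    }))  # 81.95 (subject to floating-point rounding)

Because the same outcome scores would feed both the course grade and the institution's assessment data, nothing in this scheme asks faculty to evaluate student work twice.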

I recognize that this proposal flies in the face of a deeply rooted belief, grounded in academic freedom, that faculty, as experts in their fields, should be allowed to teach and grade as they see fit. When courses were about attaining a specific slice of content, every course was an island. Seventeenth-century British literature? Check. The sociology of crime? Check. Cell biology? Check.

In this environment, it’s entirely plausible that faculty grading practices would be as different as the topography of each island. But if courses are expected to function collectively to develop a set of skills and/or dispositions (e.g., complex reasoning, oral and written communication, intercultural competence), then what happens in each course is inextricably tied to what happens in previous and subsequent courses. And it follows that the “what” and “how” of grading would be a critical element in creating a smooth transition for students between courses.

Now it would be naïve of me to suggest that making such a fundamental shift in the way that a faculty thinks about the relationship between courses, curriculums, learning and grading is somehow easy. Agreeing to a single set of institutionwide student learning outcomes can be exceedingly difficult, and for many institutions, embedding the building blocks of a set of institutional outcomes into the design and delivery of individual courses may well seem a bridge too far.

However, any institution that has participated in reaccreditation since the Spellings Commission in 2006 knows that identifying institutional learning outcomes and assessing students’ gains on those outcomes is no longer optional. So the question is no longer whether institutions can choose to engage in assessment; the question is whether student learning, and the assessment of it, becomes an imposition that squeezes out other important faculty and staff responsibilities or whether there is a way to co-opt the purposes of learning outcomes assessment into a process that already exists.

In the end it seems to me that we already have all of the mechanisms in place to embed robust learning outcomes assessment into our work without adding any new processes or responsibilities to our workload.  However, to make this happen we need to 1) embrace all of the implications of focusing on the development of skills and dispositions while shifting content acquisition from an end to a means to a greater end, and 2) accept that the educational endeavor in which we are all engaged is a fundamentally collaborative one and that our chances of success are best when we focus our individual expertise toward our collective mission of learning.

Mark Salisbury is director of institutional research and assessment at Augustana College, in Illinois. This essay is adapted from a post on his campus blog.

Let's use available income data to judge value of college degrees (essay)

From health care to major league baseball, entire industries are being shaped by the evolving use of data to drive results. One sector that remains largely untouched by the effective use of data is higher education. Fortunately, a recent regulation from the Department of Education offers a potential new tool that could begin driving critical income data into conversations about higher education programs and policies.

Last year, the Department of Education put forward a regulation called “gainful employment.” It was designed to crack down on bad actors in investor-funded higher education (sometimes called for-profit higher education). It set standards for student loan repayment and debt-to-income ratios that institutions must meet in order for their students to remain eligible for federal funds.

In order to implement the debt-to-income metric, the Obama administration created a system by which schools submitted Social Security data for a cohort of graduates from specific programs. As long as the program had more than 30 graduates, the Department of Education could then work with the Social Security Administration to produce an aggregate income figure for the cohort. Department officials used this to determine a program-level debt-to-income metric against which institutions would be assessed. This summer, the income data was released publicly along with the rest of the gainful employment metrics.
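For readers who want to see what that metric looks like in practice, here is a minimal sketch of a program-level debt-to-income test. The pass thresholds (annual loan payments at or below 12 percent of average earnings, or at or below 30 percent of discretionary income) reflect my reading of the 2011 rule’s debt measures; the poverty-line figure, loan terms, and function names are my own assumptions, not the department’s actual implementation:

    # Illustrative program-level debt-to-income test, loosely modeled on
    # the 2011 gainful employment rule (Python). The poverty guideline,
    # interest rate, and repayment term are assumptions for this example.

    POVERTY_GUIDELINE = 11_170  # assumed 2012 federal poverty line, one person

    def annual_loan_payment(principal, rate=0.068, years=10):
        """Annual payment on a standard amortized loan at the given rate and term."""
        monthly_rate = rate / 12
        n_payments = years * 12
        monthly = principal * monthly_rate / (1 - (1 + monthly_rate) ** -n_payments)
        return monthly * 12

    def passes_debt_to_income(median_debt, mean_earnings):
        """Pass if payments are <= 12% of earnings or <= 30% of discretionary
        income (earnings above 150 percent of the poverty guideline)."""
        payment = annual_loan_payment(median_debt)
        discretionary = max(mean_earnings - 1.5 * POVERTY_GUIDELINE, 0)
        return payment <= 0.12 * mean_earnings or payment <= 0.30 * discretionary

    # Example: a hypothetical $25,000 median debt set against the $95,459
    # average income cited later in this essay clears both thresholds.
    print(passes_debt_to_income(25_000, 95_459))  # True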

Unfortunately, the future of the gainful employment regulation is unclear. A federal court judge has effectively invalidated it. We at Capella University welcome being held accountable for whether our graduates can use their degrees to earn a living and pay back their loans. While we think that standard should be applied to all of higher education, we also believe there is an opportunity for department officials to take the lemons of the federal court’s ruling and make lemonade.

They have already created a system by which any institution can submit a program-level cohort of graduates (as long as it has a minimum number of graduates in order to ensure privacy) and receive aggregate income data. Rather than letting this system sit on the shelf and gather dust while the gainful employment regulations wind their way through the courts, they should put it to good use. The Department of Education could open this system up and make it available to any institution that wants to receive hard income data on their graduates.

I’m not proposing a new regulation or a requirement that institutions use this system. It could be completely voluntary. Ultimately, it is hard to believe that any institution, whether for-profit or traditional, would seek to ignore this important data if it were available to them. Just as importantly, it is hard to believe that students wouldn’t expect an institution to provide this information if they knew it was available. 

Historically, the only tool an institution has had for understanding the earnings of its graduates has been the self-reported alumni survey. While we at Capella did the best we could with surveys, they are at best educated guesswork. Now, thanks to gainful employment, any potential student who wants to get an M.B.A. in finance from Capella can know exactly what graduates from that program earned on average in the 2010 tax year, which in this case is $95,459. Prospective students can also compare this and other programs, not all of which will show similar incomes, against those at competing institutions.

For those programs where graduates are earning strong incomes, the data can validate the value of the program and drive important conversations about best practices and employer alignment. For those programs whose graduates are not receiving the kinds of incomes expected, it can drive the right conversations about what needs to be done to increase the economic value of a degree. Perhaps most importantly, hard data about graduate incomes can lead to productive public policy conversations about student debt and student financing across all higher education.

That said, the value of higher education is not measured only by the economic return it provides. For example, some career paths that are critical to our society do not necessarily lead to high-paying jobs. All of higher education needs to come up with better ways to measure a wide spectrum of outcomes, but just because we don’t yet have all those measurements doesn’t mean we shouldn’t seize a good opportunity to use at least one important data point. The Department of Education has created a potentially powerful tool to increase the amount of data around a degree’s return on investment. It should put this tool to work for institutions and students so that everyone can drive toward informed decisions and improved outcomes.

It should become standard practice for incoming college students or adults looking to further their education to have an answer to this simple question: What do graduates from this program earn annually? We welcome that conversation.

Scott Kinney is president of Capella University.

6th Annual Sloan-C/MERLOT Emerging Technologies for Online Learning International Symposium

Date: Tue, 04/09/2013 to Thu, 04/11/2013

Location: Planet Hollywood Resort, 3667 Las Vegas Boulevard South, Las Vegas, Nevada 89109, United States

Institute for the Development of Excellence in Assessment Leadership

Date: Tue, 08/06/2013 to Fri, 08/09/2013

Location: Baltimore, Maryland, United States

Study tracks European tracking of university students

Universities and governments on the continent exhibit many of the same data limitations as U.S. colleges in gauging student outcomes, study shows.

International educators debate mass vs. elite higher education

International educators at a meeting of the Organization for Economic Cooperation and Development put a new twist on an old debate, prodded by New York University's provocative president.

Pulse podcast examines Blackboard Analytics for Learn

This month's edition of The Pulse podcast features a conversation with Mark Max, vice president of Blackboard Analytics for Learn.

Data show key role for community colleges in 4-year degree production

Study shows that 45 percent of bachelor's degree recipients studied at two-year institutions first -- as many as three-quarters in some states.
