With great interest, I read the recent news announcing that the American Council on Education (ACE) had evaluated five Coursera MOOCs and recommended them for credit. But I had hoped for something different.
Watching traditional, prestigious institutions open their online content to the world – without, of course, their prestigious credit attached – was an exciting development. A race to post courses ensued. On the surface, it’s an altruistic move to make learning available to anyone, anywhere, for free.
Dig deeper and we are left to ask, how many MOOC courses will really be worth college credit, where will the credits be accepted, and for how long will college credits even be the primary measurement of learning?
Now that ACE has evaluated a few courses, MOOC providers will see how the process goes as students start actually finding proctors and taking tests -- or finding other methods of assessment -- to prove they learned the material. But a few courses will not be enough to really help students earn degrees, and with MOOC courses and providers continuing to proliferate, this does not seem like a viable way to keep up with demand.
Regardless, it is more than likely that the universities that agreed to the ACE CREDIT review will never accept an ACE CREDIT transcript themselves. Students with ACE CREDIT transcripts will instead need to present them to “lesser known” schools outside the elite ranks – colleges with much lower tuition and a willingness to serve post-traditional students.
More troubling is the fact that the ACE process for credit review is still course-based. Will this really be flexible enough in the future? Will it measure competencies and individual learning outcomes? Even if it seems scalable, will it mean all MOOC evaluations have to run through ACE and only ACE? Will students have to wait until ACE has evaluated a MOOC course before they can get credit?
Moreover, this raises the question: Are course evaluations and testing really the best or only way to deal with this new era of learning? What about experiential learning? If someone has college-level learning from life experience, is it invalid unless they take a course?
As Inside Higher Ed points out in its article, this was a fast move in an industry that moves at a glacial pace. But when ice really begins to melt, it can quickly turn into a waterfall. Students have more options for learning, and can get more information, from a variety of sources. So the question for education becomes, how can we best accommodate that?
I would assert that a portfolio assessment of students’ learning is the best way. Just as an artist shows a portfolio to a prospective employer, students should be able to demonstrate learning from wherever they have learned -- work, MOOCs, informal training, military service, volunteer service, and more -- all in one place. And much of this learning will not involve a course at all.
If MOOCs are to be truly disruptive, they must link to competencies, credentials, degrees and/or ultimately jobs. Using a course-by-course, credit hour-by-credit hour approach to do this will not dramatically change the way people earn degrees. And dramatic change that allows for individual demonstrations of competencies is the only way to provide the education quality and agility necessary to truly recognize learning derived from free resources on the web. By focusing on competencies, we can align and accept learning experiences from everywhere.
Pamela Tate is president/CEO of the Council for Adult and Experiential Learning.
During a recent conversation about the value of comprehensive student learning assessment, one faculty member asked, “Why should we invest time, money, and effort to do something that we are essentially already doing every time we assign grades to student work?”
Most educational assessment zealots would respond by launching into a long explanation of the differences between tracking content acquisition and assessing skill development, the challenges of comparing general skill development across disciplines, the importance of demonstrating gains on student learning outcomes across an entire institution, blah blah blah (since these are my peeps, I can call it that). But from the perspective of an exhausted professor who has been furiously slogging through a pile of underwhelming final papers, I think the concern over a substantial increase in faculty workload is more than reasonable.
Why would an institution or anyone within it choose to be redundant?
If a college wants to know whether its students are learning a particular set of knowledge, skills, and dispositions, it makes good sense to track the degree to which that is happening. But we make a grave mistake when we require additional processes and responsibilities from those “in the trenches” without thinking carefully about the potential for diminishing returns in the face of added workload (especially if that work appears to be frivolous or redundant). So it would seem to me that any conversation about assessing student learning should emphasize the importance of efficiency so that faculty and staff can continue to fulfill all the other roles expected of them.
This brings me back to what I perceive to be an odd disconnect between grading and outcomes assessment on most campuses. It seems to me that if grading and assessment are both intent on measuring learning, then there ought to be a way to bring them closer together. Moreover, if we want assessment to be truly sustainable (i.e., not kill our faculty), then we need to find ways to link, if not unify, these two practices.
What might this look like? For starters, it would require conceptualizing content learned in a course as the delivery mechanism for skill and disposition development. Traditionally, I think we’ve envisioned this relationship in reverse order – that skills and dispositions are merely the means for demonstrating content acquisition – with content acquisition becoming the primary focus of grading. In this context, skills and dispositions become a sort of vaguely mysterious redheaded stepchild (with apologies to stepchildren, redheads, and the vaguely mysterious). More importantly, if we are now focusing on skills and dispositions, this traditional context necessitates an additional process of assessing student learning.
However, if we reconceptualize our approach so that content becomes the raw material with which we develop skills and dispositions, we could directly apply our grading practices in the same way. One would assign a proportion of the overall grade to the necessary content acquisition, and the rest of the overall grade (apportioned as the course might require) to the development of the various skills and dispositions intended for that course. This means that, in addition to articulating which skills and dispositions each course would develop and the progress thresholds expected of students in each course, we would have to be much more explicit about whether a given course is intended to foster improvement in students (such as a freshman-level writing course) or designed for students to demonstrate competence (such as a senior-level capstone in accounting procedures). At an even more granular level, instructors might designate individual assignments within a given course to be graded for improvement earlier in the term, with other assignments graded for competence later in the term.
I recognize that this proposal flies in the face of some deeply rooted beliefs about academic freedom: that faculty, as experts in their fields, should be allowed to teach and grade as they see fit. When courses were about attaining a specific slice of content, every course was an island. Seventeenth-century British literature? Check. The sociology of crime? Check. Cell biology? Check.
In this environment, it’s entirely plausible that faculty grading practices would be as different as the topography of each island. But if courses are expected to function collectively to develop a set of skills and/or dispositions (e.g., complex reasoning, oral and written communication, intercultural competence), then what happens in each course is irrevocably tied to what happens in previous and subsequent courses. And it follows that the “what” and “how” of grading would be a critical element in creating a smooth transition for students between courses.
Now it would be naïve of me to suggest that making such a fundamental shift in the way that a faculty thinks about the relationship between courses, curriculums, learning and grading is somehow easy. Agreeing to a single set of institutionwide student learning outcomes can be exceedingly difficult, and for many institutions, embedding the building blocks of a set of institutional outcomes into the design and delivery of individual courses may well seem a bridge too far.
However, any institution that has participated in reaccreditation since the Spellings Commission in 2006 knows that identifying institutional learning outcomes and assessing students’ gains on those outcomes is no longer optional. So the question is no longer whether institutions can choose to engage in assessment; the question is whether student learning, and the assessment of it, becomes an imposition that squeezes out other important faculty and staff responsibilities, or whether there is a way to fold the purposes of learning outcomes assessment into a process that already exists.
In the end it seems to me that we already have all of the mechanisms in place to embed robust learning outcomes assessment into our work without adding any new processes or responsibilities to our workload. However, to make this happen we need to 1) embrace all of the implications of focusing on the development of skills and dispositions while shifting content acquisition from an end to a means to a greater end, and 2) accept that the educational endeavor in which we are all engaged is a fundamentally collaborative one and that our chances of success are best when we focus our individual expertise toward our collective mission of learning.
Mark Salisbury is director of institutional research and assessment at Augustana College, in Illinois. This essay is adapted from a post on his campus blog.
From health care to major league baseball, entire industries are being shaped by the evolving use of data to drive results. One sector that remains largely untouched by the effective use of data is higher education. Fortunately, a recent regulation from the Department of Education offers a potential new tool that could begin driving critical income data into conversations about higher education programs and policies.
Last year, the Department of Education put forward a regulation called gainful employment. It was designed to crack down on bad actors in investor-funded higher education (sometimes called for-profit higher education). It set standards for student loan repayment and debt-to-income ratios that institutions must meet in order for students attending a specific institution to remain eligible for federal funds.
In order to implement the debt-to-income metric, the Obama administration created a system by which schools submitted Social Security data for a cohort of graduates from specific programs. As long as the program had more than 30 graduates, the Department of Education could then work with the Social Security Administration to produce an aggregate income figure for the cohort. Department officials used this to determine a program-level debt-to-income metric against which institutions would be assessed. This summer, the income data was released publicly along with the rest of the gainful employment metrics.
Unfortunately, the future of the gainful employment regulation is unclear. A federal court judge has effectively invalidated it. We, at Capella University, welcome being held accountable for whether our graduates can use their degree to earn a living and pay back their loans. While we think that standard should be applied to all of higher education, we also believe there is an opportunity for department officials to take the lemons of the federal court’s ruling and make lemonade.
They have already created a system by which any institution can submit a program-level cohort of graduates (as long as it has a minimum number of graduates in order to ensure privacy) and receive aggregate income data. Rather than letting this system sit on the shelf and gather dust while the gainful employment regulations wind their way through the courts, they should put it to good use. The Department of Education could open this system up and make it available to any institution that wants to receive hard income data on their graduates.
I’m not proposing a new regulation or a requirement that institutions use this system. It could be completely voluntary. Ultimately, it is hard to believe that any institution, whether for-profit or traditional, would seek to ignore this important data if it were available to them. Just as importantly, it is hard to believe that students wouldn’t expect an institution to provide this information if they knew it was available.
Historically, the only tool for an institution to understand the earnings of its graduates has been self-reported alumni surveys. While we at Capella did the best we could with surveys, they are at best educated guesswork. Now, thanks to gainful employment, any potential student who wants to get an M.B.A. in finance from Capella can know exactly what graduates from that program earned on average in the 2010 tax year, which in this case is $95,459. Prospective students can also compare this and other programs, which may not see similar incomes, against competitors.
For those programs where graduates are earning strong incomes, the data can validate the value of the program and drive important conversations about best practices and employer alignment. For those programs whose graduates are not receiving the kinds of incomes expected, it can drive the right conversations about what needs to be done to increase the economic value of a degree. Perhaps most importantly, hard data about graduate incomes can lead to productive public policy conversations about student debt and student financing across all higher education.
That said, the value of higher education is not measured only by the economic return it provides. For example, some career paths that are critical to our society do not necessarily lead to high-paying jobs. All of higher education needs to come up with better ways to measure a wide spectrum of outcomes, but just because we don’t yet have all those measurements doesn’t mean we shouldn’t seize a good opportunity to use at least one important data point. The Department of Education has created a potentially powerful tool to increase the amount of data around a degree’s return on investment. It should put this tool to work for institutions and students so that everyone can drive toward informed decisions and improved outcomes.
It should become standard practice for incoming college students or adults looking to further their education to have an answer to this simple question: What do graduates from this program earn annually? We welcome that conversation.