USA Funds to Invest in Measuring Value

USA Funds, a large nonprofit student loan guarantor, this week announced $3.5 million in funding for four initiatives to measure the value of college. Recipients include the Indiana Commission for Higher Education, which has developed program-level return-on-investment reports, and the National Center for Higher Education Management Systems, which is comparing degree production to employer demand in specific regions. Projects from the U.S. Chamber of Commerce Foundation and the National Skills Coalition also received grants.

Common threads in the four efforts, USA Funds said in a written statement, are that they operate at the program level -- rather than taking institutionwide looks -- and that they compare the costs of education with returns such as employment and wages, as well as measures like job satisfaction.

"By supporting these new models in 12 states, we are developing powerful new tools to help students find a more direct path through education and training to rewarding careers," said William Hansen, USA Funds president and CEO.

The Waning of the Carnegie Unit (essay)

For a century, the Carnegie Unit -- or credit hour -- served American education very well. Created by the Carnegie Foundation for the Advancement of Teaching in 1906, it is now the nearly universal accounting unit for colleges and schools. It brought coherence and common standards to the chaotic 19th-century high school and college curriculum, established a measure for judging student academic progress, and set the requirements for high school graduation and college admission. But today it has grown outdated and less useful.

The Carnegie Unit is a time-based standard: one unit (or credit) is awarded for every 120 hours of class time. The foundation translated this into one hour of instruction five days a week for 24 weeks. Students have been expected to take four such courses a year for four years in high school, with a minimum of 14 Carnegie Units required for college admission. The Carnegie Unit perfectly mirrored its times and the design of the nation’s schools.
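To make the arithmetic of that definition explicit, as a simple worked check:

\[ 1 \text{ hour/day} \times 5 \text{ days/week} \times 24 \text{ weeks} = 120 \text{ hours} = 1 \text{ Carnegie Unit} \]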

An industrialized America created schools modeled on the technology of the times: the assembly line. With the Carnegie Unit as a basis, schools nationwide adopted a common process for schooling: groups of children, sorted by age, attended for 13 years, 180 days a year, in Carnegie Unit-length courses. Students progressed according to seat time -- how long they were exposed to teaching.

At colleges and universities across the nation, the Carnegie Unit became more commonly referred to as the credit hour. The common semester-long class became three credit hours. The average four-year degree was earned after completing 120 credit hours. Time and process were fixed, and outcomes of schooling were variable. All students were expected to learn the same things in the same period of time. The Carnegie Unit provided the architecture to make this system work.
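The same time-based accounting scales up to the degree. As a quick worked check of the figures above:

\[ \frac{120 \text{ credit hours}}{3 \text{ credit hours per course}} = 40 \text{ courses} \]

That is, five three-credit courses a semester across eight semesters.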

But in the United States’ transition from an industrial to an information economy, the Carnegie Unit is becoming obsolete. The information economy focuses on common, fixed outcomes, yet the process and the time necessary to achieve them are variable. The concern in colleges and schools is shifting from teaching to learning -- what students know and can do, not how long they are taught. Education at all levels is becoming more individualized, as students learn different subjects at different rates and learn best using different methods of instruction.

As a result, educational institutions need a new accounting to replace the Carnegie Unit. A 2015 report by the Carnegie Foundation made this clear, stating the Carnegie Unit “sought to standardize students’ exposure to subject material by ensuring they received consistent amounts of instructional time. It was never intended to function as a measure of what students learned.” States have responded by adopting outcome- or learning-based standards for schools. They are now detailing the skills and knowledge students must attain to graduate and implementing testing regimens, such as fourth- and eighth-grade reading and math exams, to assess student progress and attainment of those standards.

This evolution is causing two problems. First, both the industrial and information economy models of education are being imposed on our educational institutions at the same time. At the moment, the effect is more apparent in our schools than colleges, but higher education can expect to face the same challenges. Today, schools and colleges are being required to use the fixed-process, fixed-calendar and Carnegie Unit accounting system of the industrial era. They are also being required to achieve the information economy’s fixed outcomes and follow its testing procedures. The former is true of higher education, and government is increasingly asking colleges and universities for the latter.

Doing both is not possible, by definition: the industrial model holds time fixed and lets outcomes vary, while the information economy model holds outcomes fixed and lets time vary. Instead, states need to move consciously and systematically to the information economy’s emerging and increasingly dominant model of education, which will prevail in the future. The Carnegie Unit will pass into history.

The second problem is that the steps states have taken to implement standards, outcomes and associated testing are often incomplete and unfinished. They are at best betas, quickly planned and hurriedly implemented, which like all new initiatives demand significant rethinking, redesign and refinement. In the decades to come, today's tests will appear primitive compared to the assessment tools that replace them -- think of the earliest cell phones, crude first versions that needed years of development and refinement.

Unfortunately, however, states’ mandates go beyond the capacity and capabilities of their standards, tests, data systems and existing curricula. For example, despite growing state and federal pressure to evaluate faculty and institutions based on student performance, most states do not have the data or data systems to make this possible.

If Information Age accounting systems for education are to work as well as the Carnegie Unit did, the tasks ahead are these:

  • Define the outcomes or standards students need to achieve to graduate from school and college. While the specific outcomes or standards adopted are likely to vary from state to state, the meaning of each standard or outcome should be common to all states. A current example is coding: today, states, cities and institutions differ profoundly in their requirements; however, it is essential that the meaning of competence in coding be common.
  • Create curricula that mirror each standard and that permit students to advance according to mastery.
  • Develop assessments that measure student progress and attainment of standards or outcomes. Over time, build upon current initiatives in analytics and adaptive learning to embed assessment into curricula so that it functions like a GPS, discovering students’ misunderstandings in real time and providing guidance to get them back on track (a sketch of what that might look like follows this list).
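What might GPS-style embedded assessment look like? A minimal, hypothetical Python sketch follows; the mastery threshold, smoothing weight and all names are illustrative assumptions, not a description of any existing system.

    MASTERY_THRESHOLD = 0.7  # assumed cutoff below which guidance is triggered
    SMOOTHING = 0.3          # assumed weight given to the most recent response

    class EmbeddedAssessment:
        """Tracks a running mastery estimate per topic as students answer items."""

        def __init__(self):
            self.mastery = {}  # topic -> estimated probability of mastery

        def record(self, topic, correct):
            """Update the estimate and return guidance when it dips too low."""
            prior = self.mastery.get(topic, 0.8)  # optimistic starting estimate
            updated = (1 - SMOOTHING) * prior + SMOOTHING * (1.0 if correct else 0.0)
            self.mastery[topic] = updated
            if updated < MASTERY_THRESHOLD:
                return f"Review suggested for '{topic}' (estimate {updated:.2f})"
            return None

    tracker = EmbeddedAssessment()
    for topic, correct in [("fractions", True), ("fractions", False), ("fractions", False)]:
        guidance = tracker.record(topic, correct)
        if guidance:
            print(guidance)  # real-time redirection, GPS-style

The point is not the particular formula but the loop: every response updates the system's picture of the student, and the curriculum responds immediately rather than waiting for an end-of-year exam.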

These three key steps will lay the groundwork for the education demanded by the Information Age. They will provide the clarity, specificity, standardization, reliability and adoptability that made the Carnegie Unit successful. The result will be an educational accounting system for the information economy that is as strong as the Carnegie Unit was for industrial America.

I do not pretend doing this will be easy or quick. It is nothing less than the reinvention of the American education system. It will require bold institutions to lead, as universities like Carnegie Mellon University, the Massachusetts Institute of Technology, Southern New Hampshire University and Western Governors University are doing, to create and test the new models of education for the Information Age. It will take a coalition of state government, educational institutions and professional associations like accreditors to turn the innovations into policy.

We don't have the luxury of turning away from this challenge. Our education system is not working. In contrast to the industrial era, in which national success rested on physical labor and natural resources, information economies require brains and knowledge. The future demands excellent schools and colleges.

Arthur Levine is the president of the Woodrow Wilson National Fellowship Foundation in Princeton, N.J. He served as the president of Teachers College, Columbia University, from 1994 to 2006.

Army restructures educational system to resemble civilian universities

The U.S. Army will model its newly consolidated approach to educational programs on traditional higher education, in part to help soldiers get more college credits for their military experience.

Senate Proposal for Alternative Accreditation Path

Senator Michael Bennet, a Colorado Democrat, and Senator Marco Rubio, a Florida Republican, this week introduced a bill that would create a new "outcomes-based" accreditation system. The proposed legislation, which builds on previous ideas from the two senators, would allow alternative education providers -- as well as traditional colleges and universities -- to access federal financial aid programs if they can meet a bar for high student outcomes. Those measures would include student learning, completion and return on investment.

"We need a new system that encourages, rather than hinders, innovation, promotes higher quality and shifts the focus to student success," Bennet said in a written statement. "The alternative outcomes-based process in this bill will help colleges, new models like competency-based education and innovative providers, and is an important step in shifting the current incentives and creating the 21st-century system of higher education we need."

Rubio, who is seeking the Republican presidential nomination, has hammered on the current higher education accreditation system while speaking on the campaign trail, calling it a "cartel." The alternative system he and Bennet proposed, Rubio said, would be based on higher quality standards.

The bill would allow colleges and providers to bypass the wait for federal-aid eligibility while they seek accreditation, instead enabling them to enter into contracts with the U.S. Department of Education -- but only if the institutions "are generating positive student outcomes."

College Abacus Releases Tool for Low-Income Students

College Abacus is a free online tool for students and families to compare college pricing -- using net-price estimates taken from colleges and federal databases. The tool, which is owned by ECMC Group, a nonprofit loan guarantor, was one of several outside entities the U.S. Department of Education collaborated with on new data from the White House's College Scorecard, released earlier this month. College Abacus got early access to information from the large data sets that undergird the Scorecard, incorporating it into the online tool.

On Monday the group announced the release of a new tool aimed at low-income students. In addition to net-price comparisons, the new Pell Abacus uses data from the Scorecard to display college-specific information on financial factors such as average loan payments for Pell Grant recipients, the percentage of students who receive Pell Grants and the average percentage of monthly income spent on federal loan repayment after college.

“By making this process simple to navigate without tax forms and accessible on mobile phones, we’re removing some of the key barriers preventing low-income students from exploring their full range of college options,” Abigail Seldin, co-founder of College Abacus and vice president of innovation and product management at ECMC Group, said in a written statement.

Essay says data in White House Scorecard is lacking

As president of University of Phoenix, I am instinctively guided to support the principles of greater access to, and better analysis of, data and information. That holds particularly true in the case of data that can help prospective students make informed choices about higher education.

So the White House’s newly released College Scorecard -- and its attendant torrent of new data on colleges -- should be a welcome move. It purports to contain a variety of information that assesses institutions on important metrics, including graduation rates and the income of graduates.

It is no secret, however, that the Scorecard has attracted widespread criticism, not least from my colleagues at large public universities, whose concerns I share regarding broader methodological flaws in it -- particularly the failure to include data on students who did not receive Title IV funds (data currently unavailable to the department under federal law). And even the data about Title IV recipients present major challenges. They paint a skewed view of graduation rates that I believe does a particular disservice to students and prospective working adult learners -- the very people this tool should help.

Just taking University of Phoenix as an example, there is much of which my university can be proud. The data released include findings ranking it sixth in the nation among large, private institutions (more than 15,000 students) in terms of the income of its graduates (and 24th among all large institutions, public and private). This adds to our institution’s latest draft three-year cohort default rate of 13.6 percent, which is comparable to the national average.

But consider the methodology behind the graduation rates that the Scorecard cites -- arguably the most problematic flaw underlying it. For years now the U.S. Department of Education has relied on Integrated Postsecondary Education Data System (IPEDS) graduation rates, which reflect only first-time, full-time undergraduate students. By any measure, the student population of America is more diverse than those who attend college full-time and complete it in a single shot. At the University of Phoenix, 60 percent of students in 2014 were first generation, and 76 percent were working -- 67 percent with dependents. These are the types of students labeled “nontraditional” by a Department of Education that has often talked of empowering them.

Yet for the purposes of the department’s graduation rates, these nontraditional students are effectively invisible, uncounted. In 2014, University of Phoenix’s institutional graduation rate for students with bachelor’s degrees was 42 percent. The department’s new Scorecard puts that figure at 20 percent. Our institutional numbers demonstrate a higher rate of student success, while IPEDS provides an incomplete picture of the university’s performance. In 2014, only 9.3 percent of my university’s students were first-time, full-time students as defined by IPEDS.

These graduation data would be troubling enough on their own, but they are also misinforming the very students the Department of Education claims to be helping. For our graduates, the refusal to accurately calculate these data cheapens their legitimate and hard-earned academic achievements.

Reporting on the Scorecard, National Public Radio suggested that “what the government released … isn’t a scorecard at all -- it’s a data dump of epic proportions.” That is a correct assessment that speaks to the crux of the problem. More data, in this case, is not better. In open phone calls with reporters, department officials have acknowledged the limitations of their data, seemingly citing that very acknowledgment as license to publish them anyway. Yet no such acknowledgment is made clearly on the new Scorecard’s website, where students will access the information to make their decisions.

Now that the floodgates of institutional data have been opened, however, it is incumbent on all of us to improve it, contextualize it and help interpret it so prospective students can be appropriately informed by it. Responding to the Scorecard, the Association of Public and Land-grant Universities called for “Congress through the reauthorization of the Higher Education Act to support a student-level data system for persistence, transfer, graduation and employment/income information to provide more complete data for all institutions.”

The University of Phoenix has long supported these principles and objectives -- not just in pushing for more complete data but also in making clear that the standards must be applied to all institutions of higher learning. We agree with both Republicans and Democrats who want to see more audit-ready data for every college and university so as to validate and verify the foundational basis upon which the department creates and enforces regulations that should be applied to all higher education institutions (last year’s gainful employment rules among them). More can be done to guard against potential political motivations in the presentation of public data.

For our part, University of Phoenix is also clear that we must improve student outcomes, as we generally have year over year. From significant investment in our core campuses to ensuring that first-time undergraduates complete a pathway diagnostic before enrolling in their first credit-bearing course, we are engaged in the work that will help us to continue improving those outcomes and, more generally, to transform into a better, more trusted institution.

In the year I have been president, I have met with thousands of our students and graduates -- the men and women who are the face of that nontraditional category. These are people who are achieving great academic success despite the other demands that contemporary life imposes. They are driven, ambitious, determined and hardworking. And they leave me in no doubt of two things: their success deserves to be appropriately recognized, and their successors deserve better information in picking a college. We can all play a role in securing these basic goals.

Timothy P. Slottow is president of University of Phoenix.

Report Urges Creation of Student-Level Data System

It will be difficult to understand and ultimately improve the performance of American higher education without a better data infrastructure, and a federal student-level data system would be the best method for producing such data, a report from the Institute for Higher Education Policy argues. The report, which grew from a February 2015 meeting of researchers and policy makers on the topic, explores and analyzes seven possible methods of producing better data on student outcomes, such as improving the Education Department's current databases, leaning more heavily on the National Student Clearinghouse, and linking the emerging network of state-level data systems. Creating a federal unit records system would be the best approach, the report asserts, while noting that such a system is currently prohibited by Congress.

University Leaders Push for Better Graduation Data

More than 200 university presidents and chancellors on Monday urged the Obama administration to incorporate voluntary, institution-submitted data on student completion rates into its forthcoming consumer information tool. 

The university leaders said in a letter to Education Secretary Arne Duncan that federal graduation rates -- which currently capture only first-time, full-time students -- are far too incomplete and misrepresent how well colleges perform. More than half of bachelor’s degree recipients attend more than one institution before graduating, and therefore aren’t counted in the federal data.

The letter asks the administration to include more complete graduation rates using the Student Achievement Measure, which is run by a coalition of college groups and tracks a far greater swath of students, including transfer and part-time students. 

The Education Department is currently developing a new consumer information tool it plans to release in the coming weeks in lieu of the controversial college ratings system it had originally proposed. 

Essay responds to criticism of possible changes in accreditation standards for engineering

Recent media coverage of the Accreditation Board for Engineering and Technology’s pre-proposed engineering criteria changes has raised concerns that some of the professional competencies may be removed from our accreditation criteria. In addition, many have incorrectly assumed that such changes are a fait accompli. The reality is there is no intent to reduce the professional competencies at all. Rather, we are in the early stages of discussion and opinion gathering on how to improve our accreditation criteria so they are more appropriately aligned with what students will need in the future to succeed in the evolving global economy.

Although discussions about potential criteria changes are still in process, they have already triggered heated debate regarding the importance of professional skills and abilities. We understand the concern and realize the enormous importance of these skills in an ever-changing multidisciplinary global environment. That is why we introduced them into our criteria in the mid-1990s and have strengthened them ever since. The primary purpose of these recent discussions was to improve the criteria: to make them richer in content, measurable and above all realistic. Additionally, in the spirit of continuous quality improvement, there was a concerted effort to streamline reporting requirements for programs undergoing accreditation.

Twenty years ago, we developed comprehensive criteria that have been adopted throughout the world as the standard for producing engineers who can lead and excel in an increasingly multidisciplinary world. In the intervening two decades, the world has changed, professions have evolved (and new ones have emerged), and the rate of technological advancement has exploded. It is our responsibility, as the global accreditor of technical education, to examine our fundamental tenets -- the criteria -- to ensure they match the reality of today’s world while leading us through the 21st century.

Our accreditation criteria were developed to provide programs with guidance on what’s expected from graduates of modern engineering programs. They were intentionally designed to be nonprescriptive, giving academic programs enough latitude to innovate. We are aware that institutions are constantly examining ways to improve the educational experience for their students, and they must be able to build and modify their programs to meet an ever-changing world. This is a complex task, and for this reason, our criteria committee has been examining these topics very carefully for the past six years.

And while we welcome the vigorous discussions prompted by news coverage and an essay on this site, we want to reassure readers that, as we have done in the past, we will continue to provide opportunities for professional societies, faculty, industry and the general public to offer their input at every stage. For that purpose, a link is available, and we remain committed to engaging in a clear communication process that reaches our key stakeholders.

The wealth of input and opinions is incredibly valuable to our deliberations. This feedback has been influencing our criteria committee members’ decisions throughout this effort. On July 16, the criteria committee recommended selected changes in the proposal. These proposed changes were subsequently approved by the ABET Engineering Accreditation Commission. Now, this work will be sent to the ABET Board of Delegates for the first reading in October. If approved, the proposed changes will be released for public review and comment. We strongly believe that “continuous improvement is more productive than postponed perfection,” as the criteria committee noted during its recent meeting.

In closing, we cannot emphasize enough that it is not too late to weigh in: comments may be submitted at the ABET website at any time.

K. Jamie Rogers, professor of industrial and mechanical systems engineering at the University of Texas at Arlington, is the 2014-15 president of ABET.

Essay describes a retired president's experience with a nontraditional course provider

At the crack of dawn this past May 15, an e-mail hit my inbox from a friend who knows my abiding interest in getting undergraduate education right: “Take a look at this.”

“This” was a link to an article in University Business from the day before: “JumpCourse announces 13 [online] courses recommended for credit by ACE.” Curious about the courses approved by the American Council on Education, I took a look at the article and then the JumpCourse website.

I am a pragmatist with regard to how to make higher education in America more successful at providing students a truly 21st-century education. We should be doing what works, and to be sure, a lot of what is done in traditional on-campus undergraduate education doesn’t work. But in my view that is because way too often we are not doing what we know and what the evidence shows does work: demanding, engaging forms of pedagogy focused both on disciplinary and broader liberal learning goals. If JumpCourse is better than a large portion of standard practice, amen to that.

One of the 13 courses newly approved by ACE is Introductory Sociology -- in my field. So I decided I would take the course and share a report.

The opening paragraph of the ACE CREDIT description on the JumpCourse website says: "JumpCourse believes in greater access to higher quality education. By expanding educational opportunities, we are adapting to the changing needs of college students. We are proud to announce that the American Council on Education's College Credit Recommendation Service (ACE CREDIT®) evaluated and recommends college credit for courses developed by JumpCourse. It is our mission to help students achieve their academic goals affordably and effectively, paving a road to graduation that will allow students to begin planning for their future." (Emphasis mine. This and other examples from the course materials may be found here.)

Note the claim that JumpCourse is providing access to higher quality education, though it doesn’t say higher quality than what. And in a Q&A section on the website -- JumpCourse? How is it made? -- it says: "While we can't tell you all of our little secrets, we can say that each JumpCourse is created by instructional designers, writers, video producers, professional storytellers, subject matter experts and otherwise passionate and talented individuals who want to help expand access and affordability of college education." (Emphasis mine.)

I have a Ph.D. in sociology and taught Introductory Sociology while on the faculty of Carleton College early in my career before spending 23 years as the president of two liberal arts colleges. I have been powerfully influenced by the Association of American Colleges & Universities' now decade-long Liberal Education and America’s Promise (LEAP) initiative and serve as chair of the LEAP Presidents’ Trust. Through LEAP, AAC&U has led a national conversation about what inclusive quality in undergraduate education needs to mean in today’s world -- a conversation that has produced an emerging consensus: in addition to knowledge and competence in specific fields of learning, the education students need for the 21st century must stress higher order learning goals of inquiry and analysis, critical and creative thinking, integrative and reflective thinking, written and oral communication, quantitative literacy, information literacy, intercultural understanding, and teamwork and real-world problem solving.

And this focus on learning goals above and beyond disciplinary content must start right at the beginning of college in challenging introductory courses that serve both as entry points into the disciplines and as part of what we awkwardly call general education.

How does what I encountered in my JumpCourse fit into all of this?

Let me begin by summarizing briefly how the Introductory Sociology course I took and the associated online format are organized. You purchase the course for $149 -- $99 if you are not going to take the proctored online final exam -- and are then given access to it. There is an opening lesson, accompanied at the end by a short quiz that teaches you enough about how the software works so that you can proceed.

Then you begin the first of the units -- 21 of them, in the case of Introductory Sociology. Each unit is divided into eight to 15 lessons. For each lesson you can watch a video and/or read a short -- almost always just a page long -- text. These texts are called “lecture notes,” but they are more like CliffsNotes or what you might find in a bad textbook. The videos are just a person speaking essentially the exact same words you would read if you chose to read the text segment (I verified this by listening to and reading a number of these), along with some catchy visuals.

This is how the website describes the way the course is organized: "Our courses teach you through professionally produced videos, lecture notes in the form of an ebook and interactive quizzes. We also supply course coaches to monitor and encourage you through the course. You get to choose how you learn best: watch, read or practice. Each course is adaptive and molds to the way you learn."

The implication is that watching, reading and practicing -- and here they mean answering practice questions -- represent the varieties of ways people learn. But aren’t engage, write, debate, analyze, critique, research, encounter, participate and other activities also ways of learning that might be best for a given student?

If you need help you can contact an “instructor.” I didn’t contact the instructor, but here is one of three e-mails I received from mine: "June 25, 2015: Hi Daniel, This is your JumpCourse coach. I wanted to take the time and congratulate you on getting started on your Sociology JumpCourse class. Keep up the work and remember if there is anything you get stuck on or need some help with, please don’t hesitate to contact me. That’s what I’m here for! If you keep working hard you will be done with the class before you know it. (Name and telephone number.)"

I assume these periodically sent notes were automated communications, though it might very well be the case that a real person would answer at the phone number provided or respond to an email query seeking help and/or support.

You can also post emails asking questions or seeking advice from other students simultaneously enrolled in the course. Here is the total of what was up on the interstudent email site the day I checked:

  • Jan. 31 at 4:17 p.m.: Intro to Sociology online final exam. Has anyone taken the online final exam for intro to sociology?
  • July 7 at 4:47 p.m.: No, just started studying today.

After you read the short text, you move to a practice test to assess your understanding and short-term memory. The texts very briefly introduce, define and explain terms and concepts and associate them with the sociologists and others who invented them. At the bottom there are always a few references in case you want to read more, but JumpCourse does not actually link to the additional references, so you would have to look them up somewhere else, and I doubt many JumpCourse students do (I didn’t).

The practice tests consist of multiple-choice, fill-in-the-blank or matching questions. If you answer a question correctly, you move to the next one. If you answer at least 90 percent of the questions on the practice test correctly, you move on to the next text. If you do not answer a question correctly, the practice test may ask the same question again later, or ask you another version of it to see if you get it right the second time. It will also return to the question later in the unit, circling back to see if you can get it right after a bit of time separation from the topic. This is the only way in which I can imagine they mean that “each course is adaptive and molds to the way you learn.”

At the end of the unit there is a unit test, and if you get 90 percent or higher on that, JumpCourse unlocks the next unit and you can proceed.
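As I understand those mechanics, the whole apparatus reduces to something like the following sketch (hypothetical Python of my own; the simulated student, the names and the retry details are my assumptions, not JumpCourse’s actual software):

    import random

    PASS_THRESHOLD = 0.90  # the 90 percent gate the course applies

    def answer_correctly(question):
        """Stand-in for a student's response; right 80 percent of the time."""
        return random.random() < 0.8

    def practice_test(questions):
        """Ask each question, re-queueing misses to circle back to later."""
        queue = list(questions)
        while queue:
            question = queue.pop(0)
            if not answer_correctly(question):
                queue.append(question)  # return to the miss after some separation

    def unit_test(questions):
        """Return True when the score meets the unlock threshold."""
        score = sum(answer_correctly(q) for q in questions) / len(questions)
        return score >= PASS_THRESHOLD

    units = [["q1", "q2", "q3"], ["q4", "q5", "q6"]]
    for number, questions in enumerate(units, start=1):
        practice_test(questions)
        while not unit_test(questions):
            practice_test(questions)  # repeat the drill until the unit unlocks
        print(f"Unit {number} passed; next unit unlocked")

Note what such a loop rewards: repetition and short-term recall, a point I return to below.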

I spent roughly an hour or a little more on each of the 21 units. Since I am a sociologist, I remain familiar with the material and could read the text and move quickly to the practice test to demonstrate comprehension, and I have good short-term memory skills. If I had listened to the videos instead of reading the text I think my time investment would have doubled, since I can read much faster than the person on the video spoke.

I believe a truly introductory student might very well struggle with the practice tests and take much longer to move forward, but I don’t think student struggling is evidence that the course is demanding in the sense in which I mean it. I got a question wrong in about half the lessons -- almost always because definitions were unclear or contradictory, or distinctions were made that didn’t seem sensible, at least to me. I think many introductory students will be tripped up by the sloppiness of many of the practice questions.

For example, Lenski’s schema for defining societies at increasing levels of economic and social complexity is presented. His classification of societies in order of increasing complexity, they report, is: hunting and gathering, pastoral, horticultural, agrarian, industrial, and postindustrial. In one place pastoral societies are said to “grow food,” while in another it is said that they have a “more steady food supply” but it is not stated that they actually grow food. In another place pastoral societies are said to be “nomadic” -- a key point -- while elsewhere they are said to “look after livestock.” The final exam, of course, expected precision when it came to those distinctions.

I tried to pretend I was an introductory student as I did this. I know that introducing students to a new field does absolutely require defining terms, giving examples to clarify meaning and explaining some of the history of the ideas in the field. You can’t get to the big stuff if students don’t know what you mean when you use terms like “social system, status, role, group, organization, etc.” But in this course, presentation of terms and concepts is all there is. In fact, I wasn’t challenged at all to think through any complex problems or to use any quantitative reasoning skills.

For example, in the section on demography a projection of what the age distribution by gender of the Chinese population will be in 2030 was presented. We could have been asked to think about it and propose some interpretations of what it meant and how it got that way. But all they asked me to do was complete some matching questions where the answer was in the text.

I loved teaching introductory sociology. Last time I taught it was spring of 1979. I never used a textbook. I wanted students to read the very best, most well-written original books and articles I could find so that they could become inspired by what they read -- excited by the insights sociology could provide regarding the big questions the field was invented to address -- not just introduced to concepts, terms and people. For example, I would ask students to read major sections of a small book -- Evolution and Culture -- by Sahlins and Service to give them a sense not just of the stages of cultural evolution but of the deep insights one can gain by thinking of social history that way.

I was the textbook, but the goal was always to help students achieve insights into big questions once they had developed enough of a sociological vocabulary. Where does inequality come from? What are the consequences for peoples’ lives of their socioeconomic and other social statuses? What are the stages of cultural evolution and the present-day consequences of the fact that just about all of the stages of cultural evolution still exist in real societies around the world? What varieties of social systems exist and how did the differences come about? How do a social system’s subsystems -- polity, economy, community, family, etc. -- affect each other? How do we know any of this?

There were learning goals beyond disciplinary content. To be sure, in my JumpCourse there were one or two sentences in each lesson’s text that addressed things like this, but the coverage was superficial.

And with regard to quantitative reasoning -- a critical 21st-century learning goal -- even in the late 1970s at Carleton there was enough of a computer system to allow me to have my introductory students actually do some quantitative analysis of real data. They did secondary analysis using data from some of the great and pioneering sociologists’ research, which one could obtain from Columbia University’s Bureau of Applied Social Research or the University of Michigan’s Inter-university Consortium for Political and Social Research (ICPSR), and then wrote it up so that they could begin to learn how at least one tribe of sociologists engaged in the search for understanding.

In my course all tests were essay exams, and there were additional writing assignments. Only perhaps once in each practice test in this JumpCourse was I even asked to fill in the blank. I was never asked to write anything -- even if only to regurgitate something.

Some readers of this may say, “Of course you could do these things at a college where maybe 12 students sit around a table,” but back then we didn’t limit the number of students who could take a class. We thought it was the student's decision to choose a small class over a larger one, so my introductory classes ranged in size from 35 to 90.

The content in the Introduction to Sociology JumpCourse, as far as it goes, may be like many of the textbooks I refused to use in my own teaching -- so I don’t think, at the most basic level, that this course constitutes any kind of content malpractice. But it sets the bar way too low. If this course is even a bit typical of the online courses now being certified for credit, we should be asking many more serious questions about the quality of these new providers’ products.

In the course I took there were no expectations of students beyond taking the very simple-minded practice and end-of-unit tests and, if you wanted credit for the course, taking the proctored final exam -- 60 multiple-choice questions to be completed in one and a half hours with a 70 percent, or 42 correct answers, required to pass (I took the final and did miss three questions).

No writing or presenting of any kind, no interaction with an instructor beyond being able to ask questions electronically, no interaction with other students taking the course, no expectation of any kind of higher order thinking, analysis, or grappling with big questions, no inspiring students to want to learn more by showing them the deep and powerful insights into our social world that sociology can provide. I can’t imagine any student being inspired by this course to want to know more about sociology, and I do not believe that any skills beyond improving short-term memory will be developed through taking it.

So what to make of this? I believe my JumpCourse failed on its own terms -- providing the disciplinary content that employers say they want and need, and that students who can least afford higher education will be able to convert into improved life chances and success -- and it failed to come even close to addressing the aims and objectives that employers need higher education institutions to reach.

We have lots of evidence, of course, that standard practice far too often fails on these grounds as well. But standard practice fails in my view when it does not hold to what we already know works to enable students to achieve the learning they need for the 21st century, and when it fails to include serious assessment up to the task of discovering what students actually know and can do relative to the learning goals of liberal education to create a continuous quality improvement feedback loop. When standard practice is a challenging, high-student-engagement process focused both on disciplinary and higher order learning goals, it is also cost-effective because four-year graduation rates go up. Despite their low cost to consumers, disruptive innovations like JumpCourse will inspire no one to learn what they must in a timely way.

One thing almost always missing from debates about “disruptive innovations” like JumpCourse and their comparison to “traditional” higher education is agreement about what the aims and objectives of higher education should be. It is impossible to assess the relative effectiveness and efficiency of an innovation in comparison to standard practice, or whether even the best examples of standard practice cost too much, if the goals being pursued are radically different. If it is job training or minimal fluency with the terms and concepts of disciplines that America wants, then maybe innovations like JumpCourse can make it above the bar someday. But if we want graduates of our colleges to be able to think, analyze, integrate, write and communicate, JumpCourses will never achieve that.

Finally, in medicine it is malpractice to replace standard practice with experimental remedies and procedures until they have proven to be superior and have side effects that are no worse, and it is unethical to experiment on humans without their informed consent. Yet disruptive higher education innovators lobby constantly for the freedom to do both, and like some of our nation’s worst ethical lapses in human experimentation, the subjects are and will be low-income disadvantaged people misled to believe they will be receiving treatment that is superior to standard practice and will cause them no harm.

Let’s conduct the absolutely necessary continuous experimentation to find new ways to improve student learning not by freeing “providers” up to do whatever they want funded with new government subsidies, but within the same kind of framework in place to test new drugs or other medical treatments for efficacy and safety.

Daniel F. Sullivan is president emeritus of St. Lawrence University and senior adviser to the president of the Association of American Colleges and Universities.
