Accreditation

Iowa regulator agreed with Ashford University's complaint about meddling by federal and California agencies

Ashford University cries foul on veterans' agency and California for meddling in Iowa's decision to yank the for-profit's GI Bill eligibility, and newly released emails show an Iowa official shared that view.

Report on for-profits in six countries finds similar problems and few benefits

U.K. report on for-profit colleges in six countries finds few benefits of sector and calls for tighter regulation, while acknowledging lack of data makes it hard to set rules.

IT think tank's call for alternative forms of credentialing and measuring competency

Technology think tank says standardized testing by outside groups and alternative forms of credentialing could create helpful competitive pressure on higher education and the traditional college degree.

WICHE's interstate passport seeks to help students transfer while preserving colleges' autonomy

New interstate network seeks to help students transfer across state lines without losing credits, but also defers to faculty members at each college about how to measure learning.

A new way to improve the available data on student success (essay)

A national outcry regarding the cost of education and the poor performance of institutions in graduating their students has raised questions about the extent to which accreditors are fulfilling their mission of quality assurance. Politicians have expressed outrage, for instance, at the fact that accreditors are not shutting down institutions with graduation rates in the single digits.

At the same time, accreditors and others have noted that the graduation data available from the National Center for Education Statistics’ Integrated Postsecondary Education Data System, familiarly known as IPEDS, include only first-time, full-time student cohorts and, as such, are too limited to serve as the measure of institutional success -- or as the basis on which accreditation is judged. But simply noting this problem does nothing to solve it. The imperative and challenge of getting reliable data on student success must be more broadly acknowledged and acted upon. The WASC Senior College and University Commission (WSCUC) has taken important steps to do just that.

As is well known, IPEDS graduation rates include only those students who enrolled as first-time, full-time students at an institution. Of the approximately 900,000 undergraduate students enrolled at institutions accredited by WSCUC, about 40 percent, or 360,000, fit this category. That means approximately 540,000 students in this region, including all transfer and part-time students, are unaccounted for by IPEDS graduation rate data.

The National Student Clearinghouse provides more helpful data regarding student success: in addition to full-time student cohorts, it tracks part-time students and students who combine the two modes of enrollment, and its data include information on students who are still enrolled, who have transferred and are continuing their studies elsewhere, or who have graduated elsewhere. Six-year student outcomes, however, are still the norm.

Since 2013, WSCUC has worked with a tool developed by one of us -- John Etchemendy, provost at Stanford University and a WSCUC commissioner -- that allows an institution and our commission to get a fuller and more inclusive picture of student completion. That tool, the graduation rate dashboard, takes into account all students who receive an undergraduate degree from an institution, regardless of how they matriculate (first time or transfer) or enroll (full time or part time). It is a rich source of information, enabling institutions to identify enrollment, retention and graduation patterns of all undergraduate students and to see how those patterns are interrelated -- potentially leading to identifying and resolving issues that may be impeding student success.

Here’s how it works.

WSCUC collects six data points from institutions via our annual report, the baseline data tracked for all accredited, candidate and eligible institutions and referenced by WSCUC staff, peer evaluators and the commission during every accreditation review. On the basis of those data points, we calculate two completion measures: the unit redemption rate and the absolute graduation rate. The unit redemption rate is the proportion of units granted by an institution that are eventually “redeemed” for a degree from that institution. The absolute graduation rate is the proportion of students entering an institution who eventually -- a key word -- graduate from that institution.

The idea of the unit redemption rate is easy to understand. Ideally, every unit granted by an institution ultimately results in a degree (or certificate). Of course, no institution actually achieves this ideal, since students who drop out never “redeem” the units they take while enrolled, resulting in a URR below 100 percent. So the URR is an alternative way to measure completion, somewhat different from the graduation rate, since it counts units rather than students. But most important, it counts units that all students -- full time and part time, first time and transfer -- take and redeem.

Interestingly, using one additional data point (the average number of units taken by students who drop out), we can convert the URR into a graduation measure, the absolute graduation rate, which estimates the proportion of students entering a college or university (whether first time or transfer) who eventually graduate. Given the relationship between annual enrollment, numbers of units taken in a given year and the length of time it takes students to complete their degrees -- all of which vary -- the absolute graduation rate is presented as an average over eight years. While not an exact measure, it can be a useful one, especially when used alongside IPEDS data to get a more nuanced and complete picture of student success at an institution.
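
To make the arithmetic concrete, here is a minimal sketch in Python of how these two measures could be computed. The essay does not enumerate the six data points WSCUC actually collects or the commission’s exact formulas, so the inputs, variable names and the back-solved dropout ratio below are illustrative assumptions, not the dashboard’s real method.

```python
# Minimal sketch of the dashboard arithmetic with hypothetical inputs;
# WSCUC's actual data points and formulas are not spelled out in the essay.

def unit_redemption_rate(units_redeemed: float, units_granted: float) -> float:
    """Share of all units granted by the institution that were eventually
    'redeemed' for a degree from that institution."""
    return units_redeemed / units_granted

def absolute_graduation_rate(urr: float,
                             avg_units_per_degree: float,
                             avg_units_per_dropout: float) -> float:
    """Estimate the share of entering students (first-time or transfer,
    full or part time) who eventually graduate, given the URR plus one
    extra data point: the average units taken by students who drop out."""
    # The URR implies a dropout-to-graduate ratio:
    #   dropouts / graduates = avg_units_per_degree * (1 - urr)
    #                          / (avg_units_per_dropout * urr)
    dropouts_per_graduate = (avg_units_per_degree * (1 - urr)
                             / (avg_units_per_dropout * urr))
    return 1 / (1 + dropouts_per_graduate)

# Illustrative only: 50 graduates at 120 units each, 50 dropouts at 30 units each.
urr = unit_redemption_rate(units_redeemed=6_000, units_granted=7_500)    # 0.80
agr = absolute_graduation_rate(urr, avg_units_per_degree=120,
                               avg_units_per_dropout=30)                 # 0.50
print(f"URR = {urr:.0%}, absolute graduation rate = {agr:.0%}")
```

Under those made-up numbers, 80 percent of units are redeemed but only half of entering students graduate, which illustrates why the two measures answer related but different questions.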

What is the advantage to using this tool? For an institution like Stanford -- where enrollments are relatively steady and the overwhelming majority of students enter as first-time, full-time students and then graduate in four years -- there is little advantage. In fact, IPEDS data and dashboard data look very similar for that type of institution: students enter, take roughly 180 quarter credits for an undergraduate degree and redeem all or nearly all of them for a degree in four years. For an institution serving a large transfer and/or part-time population, however, the dashboard can provide a fuller picture than ever before of student success. One of our region’s large public universities has a 2015 IPEDS six-year graduation rate of 30 percent, for example, while its absolute graduation rate for the year was 61 percent.

What accounts for such large discrepancies? For many WSCUC institutions, the IPEDS graduation rate takes into account fewer than 20 percent of the students who actually graduate. The California State University system, for example, enrolls large numbers of students who transfer from community colleges and other institutions. Those students are counted in the absolute graduation rate, but not in the IPEDS six-year rate.

Because the dashboard includes IPEDS graduation rate data as well as the percentage of students included in the first-time, full-time cohort, it makes it possible to get a better picture of an institution’s student population and of the extent to which IPEDS data are reliable indicators of student success at that institution.

Here’s an example: between 2006 and 2013, at California State University Dominguez Hills, the IPEDS six-year graduation rate ranged from 24 percent to 35 percent. Those numbers, however, reflect only a small percentage of the university’s student population. The low of 24 percent in 2011 reflected only 7 percent of its students; the high of 35 percent in 2009 reflected just 14 percent. The eight-year IPEDS total over those years, reflecting 10 percent of the student population, was 30 percent.

In contrast, looking at undergraduate student completion using the dashboard, we see an absolute graduation rate of 61 percent -- double the IPEDS calculation. Clearly, the dashboard gives us a significantly different picture of student completion at that institution.
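
A back-of-the-envelope check with the figures above shows why a rate covering only a small cohort can sit so far from the institution-wide picture. The sketch treats the absolute graduation rate as a simple weighted average of the IPEDS cohort and everyone else, which is a simplification for illustration and not how the dashboard actually computes its rate.

```python
# Rough consistency check, not the dashboard's actual calculation:
# treat the institution-wide rate as a weighted average of the IPEDS
# cohort and all other (transfer and part-time) students.
ipeds_share = 0.10    # share of students in the first-time, full-time cohort
ipeds_rate = 0.30     # their eight-year IPEDS graduation rate
overall_rate = 0.61   # the dashboard's absolute graduation rate

# Graduation rate the other 90 percent of students would need for the
# overall figure to hold.
other_rate = (overall_rate - ipeds_share * ipeds_rate) / (1 - ipeds_share)
print(f"Implied rate outside the IPEDS cohort: {other_rate:.0%}")   # roughly 64%
```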

And there’s more. To complement our work with the dashboard, WSCUC staff members have begun triangulating dashboard data with data from the National Student Clearinghouse and IPEDS to look at student success from various angles. We recognize that all three of these tools have limitations and drawbacks as well as advantages: we’ve already noted the limitations of the IPEDS and National Student Clearinghouse data, as well as the benefit of the latter’s inclusion of transfer students and students still enrolled after the six-year period. In addition, the data from both IPEDS and the clearinghouse can be disaggregated by student subpopulations of gender and ethnicity, as well as by institution type, which can be very beneficial in evaluating institutional effectiveness in supporting student success.

In pilot work, an institution’s IPEDS and dashboard data have been plotted against the clearinghouse data in a box-and-whisker graph showing the regional distribution of graduation rates by quartile, which gives an indication of an institution’s success in graduating its students relative to peer institutions within the region. While care must be taken to understand and interpret the information provided through these data, we do believe that bringing them together in this way can be a powerful source of self-analysis, which can lead to institutional initiatives to improve student completion.
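
As a rough illustration of the kind of display described above, the sketch below draws a box-and-whisker plot of hypothetical regional completion rates and overlays one institution’s IPEDS and dashboard figures. The numbers and plotting choices are assumptions for demonstration only, not the WSCUC pilot’s actual data or code.

```python
# Hypothetical illustration of a box-and-whisker comparison; the rates
# below are made up for demonstration.
import matplotlib.pyplot as plt

# Pretend clearinghouse completion rates for peer institutions in the region.
regional_rates = [0.42, 0.48, 0.51, 0.55, 0.58, 0.61, 0.64, 0.68, 0.72, 0.78]
institution_ipeds_rate = 0.30      # first-time, full-time cohort only
institution_dashboard_rate = 0.61  # absolute graduation rate, all undergraduates

fig, ax = plt.subplots()
ax.boxplot([regional_rates])                      # regional distribution by quartile
ax.set_xticks([1])
ax.set_xticklabels(["Regional peers (clearinghouse)"])
ax.axhline(institution_ipeds_rate, linestyle="--", label="Institution: IPEDS rate")
ax.axhline(institution_dashboard_rate, linestyle="-", label="Institution: dashboard rate")
ax.set_ylabel("Graduation rate")
ax.legend()
plt.show()
```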

As noted, WSCUC has been working with the dashboard since 2013. While we are excited and encouraged about the tool’s ability to provide a more complete and nuanced picture of student success, we also recognize that we have a great deal of work ahead of us to make it as useful as we believe it can be. After two pilot projects involving a limited number of WSCUC-accredited institutions, the required collection of data from all WSCUC colleges and universities in 2015 revealed a number of challenges institutions face in submitting the correct data. The dashboard can be somewhat difficult to understand, especially for institutions with large shifts in enrollment patterns. And unlike National Student Clearinghouse data, dashboard data, at least at this point, cannot be disaggregated to reveal patterns of completion for various student subpopulations.

Such issues notwithstanding, we are encouraged by the value of the dashboard that we have seen to date and are committed to continuing to refine this tool. WSCUC staff members have given presentations both regionally and nationally on the dashboard, including one to IPEDS trainers to show them the possibilities of this tool to extend the data available nationally regarding student completion.

We are hopeful that other accreditors and possibly the NCES will find the dashboard a useful tool and, if so, adopt it as an additional completion measure for institutions across the country. In any case, we will continue to do this work regionally so as to not just complain about the available data but to also contribute to their improvement and usefulness.

Mary Ellen Petrisko is president of the WASC Senior College and University Commission. John Etchemendy is provost of Stanford University.


Southern accreditor puts five colleges on probation

The Southern Association of Colleges and Schools has put four small private colleges and one community college on notice, mostly due to financial problems.

Shareholders to decide fate of U of Phoenix ownership

A decision today by Apollo Education Group's shareholders could determine whether U of Phoenix is sold.

CFPB Lacks Authority Over For-Profit-College Accreditors, Judge Rules

Federal judge rules Consumer Financial Protection Bureau lacks the authority to investigate for-profit-college accreditors.

Essay on how fixation on 'inane' student learning outcomes fails to ensure academic quality

In a recent Century Foundation essay, I raised a concern that accreditors of traditional colleges are allowing low-quality education to go unaddressed while insisting, in a misguided attempt to prove they care about learning, that colleges engage in inane counting exercises involving meaningless phantom creatures they call student learning outcomes, or SLOs.

The approach to quality assurance I recommend, instead, is to focus not on artificially created measures but on the actual outputs from students -- the papers, tests and presentations professors have deemed adequate for students to deserve a degree.

I got a lot of positive feedback on the essay, especially, as it happens, from people involved in some of the processes I was criticizing. Peter Ewell, for example, acknowledged in an email that “the linear and somewhat mindless implementation of SLOs on the part of many accreditors is not doing anybody any good.”

This story began in the 1990s, when reformers thought they could improve teaching and learning in college if they insisted that colleges declare their specific “learning goals,” with instructors defining “the knowledge, intellectual skills, competencies and attitudes that each student is expected to gain.” The reformers’ theory was that these faculty-enumerated learning objectives would serve as the hooks that would then be used by administrators to initiate reviews of actual student work, the key to improving teaching.

That was the idea. But it hasn’t worked out that way. Not even close. Here is one example of how the mindless implementation of this idea distracts from, rather than contributes to, the goal of improved student learning. When a team from the western accreditor, the WASC Senior College and University Commission, visited San Diego State University in 2005, it raised concerns that the school had shut down its review process for college majors, which was supposed to involve outside experts and the review of student work. Ten years later, the most recent WASC review (the team visit is scheduled for this month) finds there are still major gaps, with “much work to be done to ensure that all programs are fully participating in the assessment process.”

What has San Diego State been doing instead of repairing its program review process? It has been writing the meaningless student learning outcome blurbs that accreditors began requiring largely in response to the Spellings Commission in 2006. San Diego State reported its progress in that regard in a self-review it delivered to WASC last year:

Course Learning Outcomes (CLOs) are required for all syllabi; curricular maps relating Degree Learning Outcomes (DLOs) to major required courses are now a required component for Academic Program Review; programs are being actively encouraged to share their DLOs with students and align DLOs with CLOs to provide a broader programmatic context for student and to identify/facilitate course-embedded program assessment.

All this SLO-CLO-DLO gibberish and the insane curriculum map database (really crazy, take a look) are counterproductive, giving faculty members ample ammunition for dismissing the idiocy of the whole process. The insulting reduction of learning to brief blurbs, using a bizarre system of verb-choice rules, prevents rather than leads to the type of quality assurance that has student work at the center.

The benefit of starting instead with student work as the unit of analysis is that it respects the unlimited variety of ways that colleges, instructors and students alike, arriving with different skill levels, engage with the curriculum.

Validating colleges’ own quality-assurance systems should become the core of what accreditors do if they want to serve as a gateway to federal funds. Think of it as an outside audit of the university’s academic accounting system.

With this approach, colleges are responsible for establishing their own systems for the occasional review of their majors and courses by outside experts they identify. Accreditors, meanwhile, have the responsibility of auditing those campus review processes, to make sure that they are comprehensive and valid, involving truly independent outsiders and the examination of student work.

SLO madness has to stop. If accreditors instead focus on the traditional program-review processes, assuring that both program reviews and audits include elements of random selection, no corner of the university can presume to be immune from scrutiny.

Robert Shireman is a senior fellow at the Century Foundation and a former official at the U.S. Department of Education.

Essay on value of student learning outcomes in measuring and ensuring academic quality

Robert Shireman is right. The former official at the U.S. Department of Education correctly wrote recently that there is little evidence that using accreditation to compel institutions to publicly state their desired student learning outcomes (SLOs), coupled with the rigid and frequently ritualistic ways in which many accreditation teams now apply these requirements, has done much to improve the quality of teaching and learning in this country.

But the answer, surely, is not to abolish such statements. It is to use them as they were intended -- as a way to articulate collective faculty intent about the desired impact of curricula and instruction. For example, more than 600 colleges and universities have used the Degree Qualifications Profile (DQP). As one of the four authors of the DQP, I have firsthand experience with dozens of them, and their faculties do not find the DQP proficiency statements to be “brief blurbs” that give them “an excuse to dismiss the process,” as Shireman wrote. Instead, they are using these statements to guide a systematic review of their program offerings, to determine where additional attention is needed to make sure students are achieving the intended skills and dispositions, and to make changes that will help students do so.

As another example, the Accreditation Board for Engineering and Technology (ABET) established a set of expectations for engineering programs that has guided the development of both curricula and accreditation criteria since 2000. Granted, SLOs are easier to establish and use in professional fields than they are in the liberal arts. Nevertheless, a 10-year retrospective study, published about two years ago, provided persuasive empirical evidence that engineering graduates were achieving the intended outcomes and that these outcomes have been supported and used by engineering faculties worldwide.

Shireman also is on point about the most effective way to examine undergraduate quality: looking at actual student work. But what planet has he been living on not to recognize that this method is already in widespread use? Results of multiple studies by the National Institute for Learning Outcomes Assessment (NILOA) and the Association of American Colleges and Universities (AAC&U) indicate that this is how most institutions look at academic quality -- far exceeding the numbers that use standardized tests, surveys or other methods. Indeed, faculty by and large already agree that the best way to judge the quality of student work is to use a common scoring guide or rubric to determine how well students have attained the intended proficiency. Essential to this task is setting forth unambiguous learning outcomes statements. There is simply no other way to do it.

As an example of the efficacy of starting with actual student work, 69 institutions in nine states last year looked at written communications, quantitative fluency and critical thinking based on almost 9,000 pieces of student work scored by faculty using AAC&U’s VALUE rubrics. This was done as part of an ongoing project called the Multi-State Collaborative (MSC) undertaken by AAC&U and the State Higher Education Executive Officers (SHEEO). The project is scaling up this year to 12 states and more than 100 institutions. It’s a good example of how careful multi-institutional efforts to assess learning using student work as evidence can pay considerable dividends. And this is just one of hundreds of individual campus efforts that use student work as the basis for determining academic quality, as documented by NILOA.

One place where the SLO movement did go off the rails, though, was allowing SLOs to be so closely identified with assessment. When the assessment bandwagon really caught on with accreditors in the mid-1990s, it required institutions and programs to establish SLOs solely for the purpose of constructing assessments. These statements otherwise weren’t connected to anything. So it was no wonder that they were ignored by faculty who saw no link with their everyday tasks in the classroom. The hundreds of DQP projects catalogued by NILOA are quite different in this respect, because all of them are rooted closely in curriculum or course design, implementing new approaches to teaching or creating settings for developing particular proficiencies entirely outside the classroom. This is why real faculty members in actual institutions remain excited about them.

At the same time, accreditors can vastly improve how they communicate and work with institutions about SLOs and assessment processes. To begin with, it would help a lot if they adopted more common language. As it stands, they use different terms to refer to the same things and tend to resist reference to external frameworks like the DQP or AAC&U’s Essential Learning Outcomes. As Shireman maintains, and as I have argued for decades, they also could focus their efforts much more deliberately on auditing actual teaching and learning processes -- a common practice in the quality assurance approaches of other nations. Indeed, starting with examples of what is considered acceptable-quality student work can lead directly to an audit approach.

Most important, accreditors need to carefully monitor what they say to institutions about these matters and the consistency with which visiting teams “walk the talk” about the centrality of teaching and learning. Based on volunteer labor and seriously undercapitalized, U.S. accreditation faces real challenges in this arena. The result is that institutions hear different things from different people and constantly try to second-guess “what the accreditors really want.” This compliance mentality is extremely counterproductive and accreditors themselves are only partially responsible for it. Instead, as my NILOA colleagues and I argue in our recent book, Using Evidence of Student Learning to Improve Higher Education, faculty members and institutional leaders need to engage in assessment primarily for purposes of improving their own teaching and learning practices. If they get that right, success with actors like regional accreditors will automatically follow.

So let’s take a step back and ponder whether we can realistically improve the quality of student learning without first clearly articulating what students should know and be able to do as a result of their postsecondary experience. Such learning outcomes statements are essential to evaluating student attainment and are equally important in aligning curricula and pedagogy.

Can we do better about how we talk about and use SLOs? Absolutely. But abandoning them would be a serious mistake.

Peter Ewell is president of the National Center for Higher Education Management Systems (NCHEMS), a research and development center.
