Obama higher education plan signals policy shift

President Obama lays out details for his plan to hold colleges accountable for rising prices, with ramifications beyond the election this fall.

Improving Graduation Rates Is Job One at City Colleges of Chicago

City Colleges of Chicago have a 7 percent graduation rate. If that number doesn't go up, the system's chancellor, presidents and trustees could lose their jobs.

ACICS-Accredited Colleges Meet Federal Deadline

The U.S. Department of Education last month finalized its decision to terminate the Accrediting Council for Independent Colleges and Schools, a controversial national accrediting agency that oversaw Corinthian Colleges, ITT and other failed for-profits.

Before the end of December, all remaining ACICS institutions filed paperwork with the department to retain their federal aid eligibility for 18 months while seeking a new accreditor, the department said this week. The roughly 245 colleges collectively received $4.76 billion in federal aid during 2015.

Ted Mitchell, the U.S. under secretary of education, said in an interview that he was encouraged by the transition process so far for ACICS-accredited colleges.

“The institutions are taking their responsibilities seriously,” he said. “We’re working to make this transition as successful as possible.”

Most of the colleges have begun seeking approval from the Accrediting Commission of Career Schools and Colleges, a national accrediting agency. Michale McComis, the commission’s executive director, said last week that 180 ACICS-accredited institutions have formally initiated the process. He expects that number to grow to 210 colleges by the end of January.

Some experts on for-profit higher education have predicted that substantial numbers of ACICS-accredited institutions will fail to find a new agency home within 18 months. One higher education lawyer said that challenge remains, and that the department had overplayed its celebration of ACICS institutions successfully completing their federal aid extension paperwork.

Mitchell, however, said the process of getting roughly 245 institutions to sign provisional Program Participation Agreements was complex and required collaboration between the feds and ACICS-approved colleges. The agreements include monitoring and reporting requirements the department said are intended to protect taxpayers and students.

In addition, Mitchell said he was confident that well-run institutions among the group “will have the time to secure accreditation.”

ACICS has sued to block the department’s decision to de-recognize the accreditor. A judge last month denied a request from ACICS for a temporary injunction.

It’s unclear whether the incoming Trump administration could overturn the department’s move to eliminate ACICS, or whether it would try.

Wage data's value in higher education is limited by geography and selectivity, study finds

New study dumps cold water on the value of wage data to prospective students who are place-bound and headed to less-selective colleges.

Exploring CiteScore, Elsevier's new journal impact metrics

Elsevier explains the thought process behind CiteScore, its new journal impact metrics. Critics worry about potential conflicts of interest.

Southern accreditor places 10 on probation, including Louisville and new UT campus

Southern accreditor puts 10 colleges on probation, including Louisville for its governance problems, several for-profit art schools for financial woes and a new University of Texas campus for an array of shortcomings.

College Completion Rates Recover After Slide

Overall national college completion rates are rising after a two-year slide, according to new data from the National Student Clearinghouse Research Center, which tracks 97 percent of all college enrollments.

For college students who first enrolled in 2010, the overall six-year graduation rate was 54.8 percent, an increase of 1.9 percentage points from the previous year's students. The new rate is similar to that of students who first enrolled in 2008, but is lower than the 56.1 percent rate for the pre-recession 2007 group.

“We can expect this nationwide recovery in college completion rates to continue in upcoming years,” said Doug Shapiro, the center's executive director.

The recession led to a nationwide surge in college enrollments, the center said, particularly among adult and part-time students. That bump was followed by declining completion rates, which have now partially reversed.

"Dramatic increases in enrollments appear to have leveled off and completion rates are recovering some ground," the report said. "For two-year institutions that could point to overcrowded classrooms to help explain lower completion rates in the previous years, the higher rates for this year’s smaller cohort were perhaps to be expected. For four-year public and nonprofit institutions, however, the rebounding completions rates accomplished with continuing increases in enrollment are a surprising result."

Growing federal role in accreditation will have drawbacks (essay)

For accreditation, 2016 will be remembered as an inflection point, a pivotal moment, the culmination of a multiyear revamping. The result is a space now dominated by two features.

First, the federal government, through the U.S. Department of Education, has consolidated its authority over accreditation. It is now the major actor directing and leading this work. Second, the public, whether members of the news media, consumer protection advocates, think tanks or employers, is now in agreement that the primary task of accreditation is public accountability. That means accredited status is supposed to be about protecting students -- to serve as a signal that what an institution or program says about itself is reliable, that there are reasonable chances of student success and that students will benefit economically in some way from the educational experience.

Both the strengthened federal oversight and expectations of public accountability have staying power. They are not temporary disruptions. They will remake accreditation for the foreseeable future.

At least some government authority over accreditation, and some public concern about the space and about accountability, are not new. What is new, and what makes this moment pivotal, is the extent of the agreement on both the expanded federal role and public accountability. And both are in significant contrast to the longstanding practice of accrediting organizations as independent, nongovernmental bodies accustomed to setting their own direction and determining their own accountability.

This disruption can result in serious drawbacks for accreditation and higher education -- and students. Those drawbacks include a loss of responsible independence for both accreditation and the higher education institutions that are accredited. This independence has been essential to the growth and development of U.S. higher education as an outstanding enterprise, both in quality and in access. There are concerns about maintaining academic freedom, so vital to high-quality teaching and research, in the absence of this independence. We have not, in this country, experimented with government and the public determining quality, absent academics themselves. Government calls for standardization in accreditation can, however unintentionally, undermine the valuable variation of types of colleges and universities, reducing options for students.

Consolidation of Federal Oversight

By way of background, “accreditation is broken” has been a federal government mantra for several years now. For the U.S. Congress, both Democrats and Republicans, as well as the executive branch, messages about perceived deficiencies of accreditation have been driving the push for greater government oversight, whether delivered by a secretary of education describing accreditors as “watchdogs that don’t bite” or an under secretary talking about how accreditors are “asleep at the switch” or a senator maintaining that “too often the accreditation means nothing” or a leading House representative saying accreditors may have to change how they operate in the changing landscape of higher education.

Members of Congress, through various hearings, bills and statements, have called for changes that would focus accreditation more on student learning, create an alternative accreditation system or strengthen government oversight of accreditation, especially in relation to protecting students. Yes, some policy makers are concerned about the department going too far. Crucially, however, the debate is not about what is being done -- greater federal oversight and public accountability -- but about who should have the authority to act.

Both Congress and the department are pushing accreditation to focus more intently on both the performance of institutions and the achievement of students. From a federal perspective, “quality” is now about higher graduation rates, less student debt and default, better jobs, and decent earnings. The Education Department’s Transparency Agenda, announced last fall, has become a major vehicle to assert this federal authority. The Agenda ties judgment about whether accreditation is effective to graduation and default information, with the department, for the first time, publishing such data arrayed by accreditors and publishing accreditors’ student achievement standards -- or identifying the absence of such standards. The department also is taking steps to move accreditors toward standardizing the language of accreditation, toward more emphasis on quantitative standards and toward greater transparency about accreditation decisions.

Consistent with the Agenda, the National Advisory Committee on Institutional Quality and Integrity (NACIQI), the federal body tasked with recommending to the secretary of education whether accrediting organizations are to be federally recognized, is now including attention to graduation and default rates as part of its periodic recognition reviews. Committee meetings involve more and more calls for judging accrediting organizations’ fitness for federal recognition based less on how these organizations operate and more on how well their accredited institutions and programs are doing when it comes to graduation and debt. And NACIQI has been clear that, because of the importance to the public and to protecting students, all activities of accrediting organizations now need to be part of the committee’s purview.

Most recently, Democratic Senators Elizabeth Warren, Dick Durbin and Brian Schatz introduced a bill on accreditation that would upend the process. The bill captures the major issues and concerns that have been raised by Congress and the department during the past few years, offering remedies driven by expanding federal authority over accreditors and institutions: federally imposed student achievement standards, a federal definition of quality, federal design of how accreditation is to operate and federal requirements that accrediting organizations include considerations of debt, default, affordability and success with Pell Grants as part of their standards. While it is unlikely that anything will happen with this bill during the remainder of the year, it provides a blueprint for change in accreditation for the next Congress and perhaps the foundation for the future reauthorization of the Higher Education Act itself.

Moreover, as government plays a more prominent role in accreditation, the process has become important enough to be political. Lawmakers sometimes press individual accrediting organizations to act against specific institutions or to protect certain institutions. Across both the for-profit and nonprofit sectors, lawmakers make their own judgments and are public about whether individual institutions are to have accredited status and how well individual accrediting organizations do their jobs. Now, when accrediting organizations go before NACIQI, not only are they concerned about meeting federal law and regulation, but they are also focused on the politics around any of their institutions or programs.

In short, the shift in Washington -- defining quality expectations for accreditors in contrast to accepting how accreditors define quality, intensive and extensive managing of how accreditors are carrying out their work in contrast to leaving this management to the accreditors, seeking to standardize accreditation practice in contrast to the variation in practice that comes with a decentralized accreditation world of 85 different accrediting organizations -- has placed the federal government in a strong oversight role. There is bipartisan support in Congress and across branches of government for this rearrangement of the accreditation space. It is difficult to imagine that the extent to which the federal government influences the activity and direction of accreditation will diminish any time soon, if at all.

Consolidation of Public Expectations

The pressure on accreditation for public accountability has significant staying power in a climate where higher education is both essential and, for many, expensive, even with federal and state assistance. There is a sense of urgency surrounding the need for at least some higher education for individual economic and social well-being as well as the future competitiveness and capacity of society. At the same time, disturbingly, student loan debt now totals more than $1.3 trillion, and in 2016 the average graduate who took out loans to complete a bachelor’s degree owed more than $37,000. In this environment, the public wants accreditation to focus on students gaining a quality education at a manageable financial level.

Accreditation is now the public’s business. On a weekly basis, multiple articles on accreditation appear in the news media, nationally and internationally. Social media reflect this as well, with any article about accreditation, but especially negative news, engaging large numbers of people in a very short period of time. Think tank reports on accreditation are increasing in number, mostly focused on how it needs to change.

From all sources, the focus is on accreditation and whether it is a reliable source of public accountability. Media attention centers on default rates as too high and graduation rates as too low, on repeated expressions of employer dissatisfaction with employees’ skills, and on whether accredited institutions do a good job of preparing workers. In the face of a constant stream of articles highlighting these concerns, the public increasingly questions what accreditation accomplishes and, in particular, whether it is publicly accountable.

Moreover, where judgments about academic quality were once left to accreditors and institutions, technology now enables the news media and the public to make such judgments on their own. Enormous amounts of data on colleges and universities are readily available, from graduation rates to attrition, retention and transfer rates. Multiple data sources such as the federal government’s College Scorecard, College Navigator and Education Trust’s College Results Online are now available to be used by students, families, employers and journalists. Urgency, concern and widespread opportunity to make one’s own judgment about quality have all coalesced to raise questions about why any reliance on accreditation is needed, unless accreditation carries out this public accountability role. Perhaps the most striking example of this development is Google’s recent announcement that it is working with the College Scorecard to present Scorecard data (e.g., graduation rates, earnings, tuition) as part of a display when people search for a particular college or university.

What’s Next?

This, then, is the revamped accreditation space, with the federal government determining the direction of accreditation and a public that is driving accreditation into a predominantly public accountability role.

Will this revamping be successful? Will students be better served? Only if government, the public, higher education and accreditation can strike a balance. Expanded government oversight should be accompanied by acknowledging and respecting the independence, academic judgment and academic leadership long provided by colleges and universities and central to effective higher education and accreditation. Emphasis on public accountability should be accompanied by valuing the role of academics in determining quality. Until recently, this balance was by and large achieved through the relationship among accreditation, higher education and government. The way forward needs this same balance.

Judith S. Eaton is president of the Council for Higher Education Accreditation, a membership association of 3,000 degree-granting colleges and universities.

Image caption: Packed room during the June meeting of the federal panel that oversees accreditors.

ABA Censures Law School

The American Bar Association, whose accrediting arm oversees law schools across the country, announced this month that it has censured Valparaiso University School of Law and placed the Charlotte School of Law on probation.

According to the ABA's archive, it's the first time the organization has censured a law school since 2013 and the first time it has placed a law school on probation in at least five years.

A censure is one of several sanctions the ABA may impose on a law school; the available penalties range from fines to withdrawal of approval.

Amid criticism this summer from the federal body that oversees higher education accreditors, the ABA has taken a tough stance in several recent oversight decisions. In August, its accrediting arm recommended against approving the new University of North Texas Dallas College of Law (an announcement last week said UNT Dallas would get another chance to earn accreditation). In the same month, it found the admissions practices at Ave Maria Law School in Florida out of compliance with standards. The ABA, however, said those actions were not taken in response to the criticism of its oversight practices.

The notices for both the Valparaiso University and Charlotte schools of law cited lack of compliance with standards requiring that a school only admit applicants who appear likely to succeed in the program and pass the bar. The probation notice for the Charlotte School of Law also cited a standard requiring a school to maintain a rigorous program of legal education.

The Charlotte School of Law responded to the ABA decision in a statement on its website.

How assessment falls significantly short of valid research (essay)

In a rare moment of inattention a couple of years ago, I let myself get talked into becoming the chair of my campus’s Institutional Review Board. Being IRB chair may not be the best way to endear oneself to one’s colleagues, but it does offer an interesting window into how different disciplines conceive of research and the many different ways that scholarly work can be used to produce useful knowledge.

It has also brought home to me how utterly different research and assessment are. I have come to question why anyone with any knowledge of research methods would place any value on the results of typical learning outcomes assessment.

IRB approval is required for any work that involves both research and human subjects. If both conditions are met, the IRB must review it; if only one is present, the IRB can claim no authority. In general, it’s pretty easy to tell when a project involves human subjects, but distinguishing nonresearch from research, as it is defined by the U.S. Department of Health and Human Services, is more complicated. It depends in large part on whether the project will result in generalizable knowledge.

Determining what is research and what is not is interesting from an IRB perspective, but it has also forced me to think more about the differences between research and assessment. Learning outcomes assessment looks superficially like human subjects research, but there are some critical differences. Among other things, assessors routinely ignore practices that are considered essential safeguards for research subjects as well as standard research design principles.

A basic tenet of ethical human subjects research is that the research subjects should consent to participate. That is why obtaining informed consent is a routine part of human subjects research. In contrast, students whose courses are being assessed are typically not asked whether they are willing to participate in those assessments. They are simply told that they will be participating. Often there is what an IRB would see as coercion. Whether it’s 20 points of extra credit for doing the posttest or embedding an essay that will be used for assessment in the final exam, assessors go out of their way to compel participation in the study.

Given that assessment involves little physical or psychological risk, the coercion of assessment subjects is not that big of a deal. What is more interesting to me is how assessment plans ignore most of the standard practices of good research. In a typical assessment effort, the assessor first decides what the desired outcomes in his course or program are. Sometimes the next step is to determine what level of knowledge or skill students bring with them when they start the course or program, although that is not always done. The final step is to have some sort of posttest or “artifact” -- assessmentspeak for a student-produced product like a paper rather than, say, a potsherd -- which can be examined (invariably with a rubric) to determine if the course or program outcomes have been met.

On some levels, this looks like research. The pretest gives you a baseline measurement, and then, if students do X percent better on the posttest, you appear to have evidence that they made progress. Even if you don’t establish a baseline, you might still be able to look at a capstone project and say that your students met the declared program-level outcome of being able to write a cogent research paper or design and execute a psychology experiment.

From an IRB perspective, however, this is not research. It does not produce generalizable knowledge, in that the success or, more rarely, failure to meet a particular course or program outcome does not allow us to make inferences about other courses or programs. So what appears to have worked for my students, in my World History course, at my institution, may not provide any guidance about what will work at your institution, with your students, with your approach to teaching.

If assessment does not offer generalizable knowledge, does assessment produce meaningful knowledge about particular courses or programs? I would argue that it does not. Leaving aside arguments about whether the blunt instrument of learning outcomes can capture the complexity of student learning or whether the purpose of an entire degree program can be easily summed up in ways that lend themselves to documentation and measurement, it is hard to see how assessment is giving us meaningful information, even concerning specific courses or programs.

First, the people who devise and administer the assessment have a stake in the outcome. When I assess my own course or program, I have an interest in the outcome of that assessment. If I create the assessment instrument, administer it and assess it, my conscious or even unconscious belief in the awesomeness of my own course or program is certain to influence the results. After all, if my approach did not already seem to be the best possible way of doing things, as a conscientious instructor, I would have changed it long ago.

Even if I were the rare human who is entirely without bias, my assessment results would still be meaningless, because I have no way of knowing what caused any of the changes I have observed. I have never seen a control group used in an assessment plan. We give all the students in the class or program the same course or courses. Then we look at what they can or cannot do at the end and assume that the course work is the cause of any change we have observed. Now, maybe this is a valid assumption in a few instances, but if my history students are better writers at the end of the semester than they were at the beginning of the semester, how do I know that my course caused the change?

It could be that they were all in a good composition class at the same time as they took my class, or it could even be the case, especially in a program-level assessment, that they are just older and their brains have matured over the last four years. Without some group that has not been subjected to my course or program to compare them to, there is no compelling reason to assume it’s my course or program that’s causing the changes that are being observed.

If I developed a drug and then tested it myself without a control group, you might be a bit suspicious about my claims that everyone who took it recovered from his head cold after two weeks and thus that my drug is a success. But these are precisely the sorts of claims that we find in assessment.
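
To make the arithmetic of that objection concrete, here is a minimal, purely hypothetical sketch; the scores and the comparison group below are invented for illustration, not drawn from any actual assessment. The point is only that a naive pre/post gain bundles the course's effect together with maturation, other courses and everything else in students' lives, and that subtracting the gain of a group that never took the course is the simplest way to see how much of the apparent improvement is plausibly attributable to the course.

    # Hypothetical illustration only: why a pre/post gain by itself says little
    # about whether the course caused the improvement. All numbers are invented.

    course_pre, course_post = 62.0, 74.0           # mean rubric scores, students who took the course
    comparison_pre, comparison_post = 63.0, 72.0   # similar students who did not take it

    naive_gain = course_post - course_pre                # 12.0 -- looks like the course "worked"
    background_gain = comparison_post - comparison_pre   # 9.0  -- improvement that happened anyway
    attributable_gain = naive_gain - background_gain     # 3.0  -- rough estimate of the course's effect

    print(f"Naive pre/post gain:             {naive_gain:.1f}")
    print(f"Gain without the course:         {background_gain:.1f}")
    print(f"Gain attributable to the course: {attributable_gain:.1f}")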

I suspect that most academics are either consciously aware or at least unconsciously aware of these shortcomings and thus uneasy about the way assessment is done. That no one says anything reflects the sort of empty ritual that assessment is. Faculty members just want to keep the assessment office off their backs, the assessment office wants to keep the accreditors at bay and the accreditors need to appease lawmakers, who in turn want to be able to claim that they are holding higher education accountable.

IRBs are not supposed to critique research design unless it affects the safety of human subjects. However, they are supposed to weigh the balance between the risks posed by the study and the benefits of the research. Above all, you should not waste the time or risk the health of human subjects with research that is so poorly designed that it cannot produce meaningful results.

So, acknowledging that assessment is not research and not governed by IRB rules, it still seems that something silly and wasteful is going on here. Why is it acceptable that we spend more and more time and money -- time and money that have real opportunity costs and could be devoted to our students -- on assessment that is so poorly designed that it does not tell us anything meaningful about our courses or students? Whose interests are really served by this? Not students. Not faculty members.

It’s time to stop this charade. If some people want to do real research on what works in the classroom, more power to them. But making every program and every faculty member engage in nonresearch that yields nothing of value is a colossal, frivolous waste of time and money.

Erik Gilbert is a professor of history at Arkansas State University.

