For accreditation, 2016 will be remembered as an inflection point: the culmination of a multiyear revamping that has left this space dominated by two features.
First, the federal government, through the U.S. Department of Education, has consolidated its authority over accreditation; it is now the major actor directing and leading this work. Second, the public -- whether members of the news media, consumer protection advocates, think tanks or employers -- now agrees that the primary task of accreditation is public accountability. That means accredited status is supposed to be about protecting students: to serve as a signal that what an institution or program says about itself is reliable, that students have reasonable chances of success and that they will benefit economically in some way from the educational experience.
Both the strengthened federal oversight and expectations of public accountability have staying power. They are not temporary disruptions. They will remake accreditation for the foreseeable future.
At least some government authority over accreditation, and public concern about the space and about accountability, are not new. What is new, and what makes this moment pivotal, is the extent of agreement on both the expanded federal role and public accountability. Both stand in significant contrast to the longstanding practice of accrediting organizations operating as independent, nongovernmental bodies accustomed to setting their own direction and determining their own accountability.
This disruption can result in serious drawbacks for accreditation, higher education -- and students. Those drawbacks include a loss of responsible independence for both accreditation and the higher education institutions that are accredited. That independence has been essential to the growth and development of U.S. higher education as an outstanding enterprise in both quality and access. There are concerns about maintaining academic freedom, so vital to high-quality teaching and research, in its absence. We have not, in this country, experimented with government and the public determining quality absent academics themselves. Government calls for standardization in accreditation can, however unintentionally, undermine the valuable variation among types of colleges and universities, reducing options for students.
Consolidation of Federal Oversight
By way of background, “accreditation is broken” has been a federal government mantra for several years now. For both Democrats and Republicans in Congress, as well as for the executive branch, messages about accreditation’s perceived deficiencies have driven the push for greater government oversight -- whether delivered by a secretary of education describing accreditors as “watchdogs that don’t bite,” an under secretary saying accreditors are “asleep at the switch,” a senator maintaining that “too often the accreditation means nothing” or a leading House representative saying accreditors may have to change how they operate in the changing landscape of higher education.
Both Congress and the department are pushing accreditation to focus more intently on both the performance of institutions and the achievement of students. From a federal perspective, “quality” is now about higher graduation rates, less student debt and default, better jobs, and decent earnings. The Education Department’s Transparency Agenda, announced last fall, has become a major vehicle to assert this federal authority. The Agenda ties judgment about whether accreditation is effective to graduation and default information, with the department, for the first time, publishing such data arrayed by accreditors and publishing accreditors’ student achievement standards -- or identifying the absence of such standards. The department also is taking steps to move accreditors toward standardizing the language of accreditation, toward more emphasis on quantitative standards and toward greater transparency about accreditation decisions.
Consistent with the Agenda, the National Advisory Committee on Institutional Quality and Integrity (NACIQI), the federal body tasked with recommending to the secretary of education whether accrediting organizations are to be federally recognized, is now including attention to graduation and default rates as part of its periodic recognition reviews. Committee meetings involve more and more calls for judging accrediting organizations’ fitness for federal recognition based less on how these organizations operate and more on how well their accredited institutions and programs are doing when it comes to graduation and debt. And NACIQI has been clear that, because of the importance to the public and to protecting students, all activities of accrediting organizations now need to be part of the committee’s purview.
Most recently, Democratic Senators Elizabeth Warren, Dick Durbin and Brian Schatz introduced a bill on accreditation that would upend the process. The bill captures the major issues and concerns that Congress and the department have raised during the past few years, offering remedies driven by expanding federal authority over accreditors and institutions: federally imposed student achievement standards, a federal definition of quality, federal design of how accreditation is to operate and federal requirements that accrediting organizations include considerations of debt, default, affordability and success with Pell Grants as part of their standards. While it is unlikely that anything will happen with this bill during the remainder of the year, it provides a blueprint for change in accreditation for the next Congress and perhaps the foundation for the future reauthorization of the Higher Education Act itself.
Moreover, as government plays a more prominent role in accreditation, the process has become important enough to be political. Lawmakers sometimes press individual accrediting organizations to act against specific institutions or to protect certain institutions. Across both the for-profit and nonprofit sectors, lawmakers make their own judgments and are public about whether individual institutions are to have accredited status and how well individual accrediting organizations do their jobs. Now, when accrediting organizations go before NACIQI, not only are they concerned about meeting federal law and regulation, but they are also focused on the politics around any of their institutions or programs.
In short, the shift in Washington -- defining quality expectations for accreditors in contrast to accepting how accreditors define quality, intensive and extensive managing of how accreditors are carrying out their work in contrast to leaving this management to the accreditors, seeking to standardize accreditation practice in contrast to the variation in practice that comes with a decentralized accreditation world of 85 different accrediting organizations -- has placed the federal government in a strong oversight role. There is bipartisan support in Congress and across branches of government for this rearrangement of the accreditation space. It is difficult to imagine that the extent to which the federal government influences the activity and direction of accreditation will diminish any time soon, if at all.
Consolidation of Public Expectations
The pressure on accreditation for public accountability has significant staying power in a climate where higher education is both essential and, for many, expensive, even with federal and state assistance. There is a sense of urgency surrounding the need for at least some higher education for individual economic and social well-being as well as the future competitiveness and capacity of society. At the same time, disturbingly, student loan debt now totals more than $1.3 trillion, and in 2016 the average graduate who took out loans to complete a bachelor’s degree owed more than $37,000. In this environment, the public wants accreditation to focus on students gaining a quality education at a manageable financial level.
Accreditation is now the public’s business. On a weekly basis, multiple articles on accreditation appear in the news media, nationally and internationally. Social media reflect this as well, with any article about accreditation, but especially negative news, engaging large numbers of people in a very short period of time. Think tank reports on accreditation are increasing in number, mostly focused on how it needs to change.
From all sources, the focus is on accreditation and whether it is a reliable source of public accountability. Media attention centers on default rates as too high and graduation rates as too low, on repeated expressions of employer dissatisfaction with employees’ skills and on whether accredited institutions do a good job of preparing workers. In the face of a constant stream of articles highlighting these concerns, the public increasingly questions what accreditation accomplishes and, in particular, whether it is publicly accountable.
Moreover, where judgments about academic quality were once left to accreditors and institutions, technology now enables the news media and the public to make such judgments on their own. Enormous amounts of data on colleges and universities are readily available, from graduation rates to attrition, retention and transfer rates. Multiple data sources such as the federal government’s College Scorecard, College Navigator and Education Trust’s College Results Online are now available to be used by students, families, employers and journalists. Urgency, concern and widespread opportunity to make one’s own judgment about quality have all coalesced to raise questions about why any reliance on accreditation is needed, unless accreditation carries out this public accountability role. Perhaps the most striking example of this development is Google’s recent announcement that it is working with the College Scorecard to present Scorecard data (e.g., graduation rates, earnings, tuition) as part of a display when people search for a particular college or university.
This, then, is the revamped accreditation space, with the federal government determining the direction of accreditation and a public that is driving accreditation into a predominantly public accountability role.
Will this revamping be successful? Will students be better served? Only if government, the public, higher education and accreditation can strike a balance. Expanded government oversight should be accompanied by acknowledging and respecting the independence, academic judgment and academic leadership long provided by colleges and universities and central to effective higher education and accreditation. Emphasis on public accountability should be accompanied by valuing the role of academics in determining quality. By and large, this has been accomplished through the relationship between accreditation, higher education and government until recently. The way forward needs this same balance.
Judith S. Eaton is president of the Council for Higher Education Accreditation, a membership association of 3,000 degree-granting colleges and universities.
The American Bar Association, whose accrediting arm oversees law schools across the country, announced this month that it has censured Valparaiso University School of Law and placed the Charlotte School of Law on probation.
According to the ABA's archive, it's the first time the organization has censured a law school since 2013 and the first time it has placed a law school on probation in at least five years.
A censure is one of several possible sanctions the ABA may impose on a law school program, ranging from fines to withdrawal of approval.
Amid criticism this summer from the federal body that oversees higher education accreditors, the ABA has taken a tough stance in several recent oversight decisions. In August, its accrediting arm recommended against approving the new University of North Texas Dallas College of Law (an announcement last week said UNT Dallas would get another chance to earn accreditation). In the same month, it found the admissions practices at Ave Maria Law School in Florida out of compliance with standards. The ABA, however, said those actions were not taken in response to the criticism of its oversight practices.
The notices for both the Valparaiso University and Charlotte schools of law cited lack of compliance with standards requiring that a school only admit applicants who appear likely to succeed in the program and pass the bar. The probation notice for the Charlotte School of Law also cited a standard requiring a school to maintain a rigorous program of legal education.
The Charlotte School of Law responded to the ABA decision in a statement on its website.
In a rare moment of inattention a couple of years ago, I let myself get talked into becoming the chair of my campus’s Institutional Review Board. Being IRB chair may not be the best way to endear oneself to one’s colleagues, but it does offer an interesting window into how different disciplines conceive of research and the many different ways that scholarly work can be used to produce useful knowledge.
It has also brought home to me how utterly different research and assessment are. I have come to question why anyone with any knowledge of research methods would place any value on the results of typical learning outcomes assessment.
IRB approval is required for any work that involves both research and human subjects. If both conditions are met, the IRB must review it; if only one is present, the IRB can claim no authority. In general, it’s pretty easy to tell when a project involves human subjects, but distinguishing nonresearch from research, as it is defined by the U.S. Department of Health and Human Services, is more complicated. It depends in large part on whether the project will result in generalizable knowledge.
Determining what is research and what is not is interesting from an IRB perspective, but it has also forced me to think more about the differences between research and assessment. Learning outcomes assessment looks superficially like human subjects research, but there are some critical differences. Among other things, assessors routinely ignore practices that are considered essential safeguards for research subjects as well as standard research design principles.
A basic tenet of ethical human subjects research is that the research subjects should consent to participate. That is why obtaining informed consent is a routine part of human subjects research. In contrast, students whose courses are being assessed are typically not asked whether they are willing to participate in those assessments. They are simply told that they will be participating. Often there is what an IRB would see as coercion. Whether it’s 20 points of extra credit for doing the posttest or embedding an essay that will be used for assessment in the final exam, assessors go out of their way to compel participation in the study.
Given that assessment involves little physical or psychological risk, the coercion of assessment subjects is not that big of a deal. What is more interesting to me is how assessment plans ignore most of the standard practices of good research. In a typical assessment effort, the assessor first decides what the desired outcomes in his course or program are. Sometimes the next step is to determine what level of knowledge or skill students bring with them when they start the course or program, although that is not always done. The final step is to have some sort of posttest or “artifact” -- assessment-speak for a student-produced product like a paper rather than, say, a potsherd -- which can be examined (invariably with a rubric) to determine if the course or program outcomes have been met.
On some levels, this looks like research. The pretest gives you a baseline measurement, and then, if students do X percent better on the posttest, you appear to have evidence that they made progress. Even if you don’t establish a baseline, you might still be able to look at a capstone project and say that your students met the declared program-level outcome of being able to write a cogent research paper or design and execute a psychology experiment.
From an IRB perspective, however, this is not research. It does not produce generalizable knowledge, in that the success or, more rarely, failure to meet a particular course or program outcome does not allow us to make inferences about other courses or programs. So what appears to have worked for my students, in my World History course, at my institution, may not provide any guidance about what will work at your institution, with your students, with your approach to teaching.
If assessment does not offer generalizable knowledge, does assessment produce meaningful knowledge about particular courses or programs? I would argue that it does not. Leaving aside arguments about whether the blunt instrument of learning outcomes can capture the complexity of student learning or whether the purpose of an entire degree program can be easily summed up in ways that lend themselves to documentation and measurement, it is hard to see how assessment is giving us meaningful information, even concerning specific courses or programs.
First, the people who devise and administer the assessment have a stake in the outcome. When I assess my own course or program, I have an interest in the outcome of that assessment. If I create the assessment instrument, administer it and assess it, my conscious or even unconscious belief in the awesomeness of my own course or program is certain to influence the results. After all, if my approach did not already seem to be the best possible way of doing things, as a conscientious instructor, I would have changed it long ago.
Even if I were the rare human who is entirely without bias, my assessment results would still be meaningless, because I have no way of knowing what caused any of the changes I have observed. I have never seen a control group used in an assessment plan. We give all the students in the class or program the same course or courses. Then we look at what they can or cannot do at the end and assume that the course work is the cause of any change we have observed. Now, maybe this is a valid assumption in a few instances, but if my history students are better writers at the end of the semester than they were at the beginning of the semester, how do I know that my course caused the change?
It could be that they were all in a good composition class at the same time as they took my class, or it could even be the case, especially in a program-level assessment, that they are just older and their brains have matured over the last four years. Without some group that has not been subjected to my course or program to compare them to, there is no compelling reason to assume it’s my course or program that’s causing the changes that are being observed.
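The confounding problem described above can be made concrete with a minimal simulation (purely illustrative; the effect sizes and the `simulate` helper are invented for this sketch, not drawn from any actual assessment data). If every student improves by some fixed "maturation" amount regardless of the course, a pre/post comparison alone cannot tell a real course effect apart from no effect at all:

```python
import random

random.seed(0)

def simulate(course_effect, maturation=5.0, n=100):
    """Return the mean post-minus-pre gain for n simulated students.

    Each student's gain is maturation + course_effect + noise, so the
    pretest score cancels out and tells us nothing about the cause.
    """
    gains = []
    for _ in range(n):
        pre = random.gauss(70, 10)
        post = pre + maturation + course_effect + random.gauss(0, 2)
        gains.append(post - pre)
    return sum(gains) / n

# A course with zero real effect still shows a sizable average gain.
no_effect_gain = simulate(course_effect=0.0)
real_effect_gain = simulate(course_effect=3.0)
print(round(no_effect_gain, 1))   # close to 5: maturation alone
print(round(real_effect_gain, 1)) # close to 8: maturation plus course
```

Both courses show healthy pre/post gains; only comparing a treated group against an untreated control group would isolate the course's contribution, which is exactly the comparison typical assessment plans never make.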
If I developed a drug and then tested it myself without a control group, you might be a bit suspicious about my claims that everyone who took it recovered from his head cold after two weeks and thus that my drug is a success. But these are precisely the sorts of claims that we find in assessment.
I suspect that most academics are consciously, or at least unconsciously, aware of these shortcomings and thus uneasy about the way assessment is done. That no one says anything reflects the sort of empty ritual that assessment is. Faculty members just want to keep the assessment office off their backs, the assessment office wants to keep the accreditors at bay and the accreditors need to appease lawmakers, who in turn want to be able to claim that they are holding higher education accountable.
IRBs are not supposed to critique research design unless it affects the safety of human subjects. However, they are supposed to weigh the balance between the risks posed by the study and the benefits of the research. Above all, you should not waste the time or risk the health of human subjects with research that is so poorly designed that it cannot produce meaningful results.
So, acknowledging that assessment is not research and not governed by IRB rules, it still seems that something silly and wasteful is going on here. Why is it acceptable that we spend more and more time and money -- time and money that have real opportunity costs and could be devoted to our students -- on assessment that is so poorly designed that it does not tell us anything meaningful about our courses or students? Whose interests are really served by this? Not students. Not faculty members.
It’s time to stop this charade. If some people want to do real research on what works in the classroom, more power to them. But making every program and every faculty member engage in nonresearch that yields nothing of value is a colossal, frivolous waste of time and money.
Erik Gilbert is a professor of history at Arkansas State University.
Submitted by Sarah Bray on November 15, 2016 - 3:00am
Is English 101 really just English 101? What about that first lab? Is a B or C in either of those lower-division courses a bellwether of a student’s likelihood to graduate? Until recently, we didn’t think so, but more and more, the data are telling us yes. In fact, insights from our advanced analytics have helped us identify a new segment of at-risk students hiding in plain sight.
It wasn’t until recently that the University of Arizona discovered this problem. As we combed through volumes of academic data and metrics with our partner, Civitas Learning, it became evident that students who seemed poised to graduate were actually leaving at higher rates than we could have foreseen. Why were good students -- students with solid grades in their lower-division foundational courses -- leaving after their first, second or even third year? And what could we do to help them stay and graduate from UA?
There’s a reason it’s hard to identify which students fall into this group: they simply don’t exhibit the traditional warning signs as defined by the retention experts. These students persist into the later years but never graduate despite the fact that they’re strong students. They persist past their first two years, and over 40 percent have GPAs above 3.0 -- so how does one diagnose them as at risk when all metrics indicate that they’re succeeding? Now we’re taking a deeper look at the data from the entire curriculum to find clues about what these students really need and even redefine our notion of what “at risk” really means.
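The kind of flag described above can be sketched in a few lines (a hypothetical illustration only; the course codes, GPA threshold and `flag_hidden_risk` helper are invented for this example and do not reflect UA's or Civitas Learning's actual models): students whose overall GPA looks healthy but who earned a B or C in a lower-division foundational course.

```python
# Hypothetical foundational courses used for the flag.
FOUNDATIONAL = {"ENGL101", "CHEM101L"}

def flag_hidden_risk(students):
    """Return ids of students with a GPA of 3.0 or better who
    nonetheless earned a B or C in a foundational course."""
    flagged = []
    for s in students:
        weak_foundation = any(
            course in FOUNDATIONAL and grade in ("B", "C")
            for course, grade in s["grades"].items()
        )
        if s["gpa"] >= 3.0 and weak_foundation:
            flagged.append(s["id"])
    return flagged

roster = [
    {"id": 1, "gpa": 3.4, "grades": {"ENGL101": "C", "HIST150": "A"}},
    {"id": 2, "gpa": 2.1, "grades": {"ENGL101": "C"}},
    {"id": 3, "gpa": 3.8, "grades": {"ENGL101": "A", "CHEM101L": "A"}},
]
print(flag_hidden_risk(roster))  # [1]: strong GPA, weak foundation
```

Student 2 would be caught by traditional GPA-based warning systems; student 1, the population this essay describes, is visible only when the foundational-course grade is examined alongside the overall GPA.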
Lower-division foundational courses are a natural starting point for us. These are the courses where basic mastery -- of a skill like writing or the scientific process -- begins, and these basics become ever more necessary over the years. Writing, for instance, becomes more, not less, important over students’ academic careers. A 2015 National Survey of Student Engagement at UA indicated that freshmen are assigned 55 pages of writing in the academic year, compared to 76 pages for seniors. As a freshman or sophomore, falling behind even slightly can hurt you later on.
To wit, when a freshman gets a C in English 101, it doesn’t seem like a big deal -- why would it? She’s not at risk; she still has a 3.0, after all. But this student has unintentionally stepped into an institutional blind spot, because she’s a strong student by all measures. Our data analysis now shows that this student may persist until she hits a wall, usually in her major and upper-division courses -- a wall that is often difficult to overcome.
Let’s fast-forward two years, then, when that same freshman is a junior enrolled in demanding upper-level classes. Her problem, a lack of writing command, has compounded into a series of C’s or D’s on research papers. A seemingly strong student is now at risk of not persisting, and her academic life becomes much less clear. We all thought she was on track to graduate, but now what? From that point, she may change her major, transfer to another institution or even exit college altogether. In the past, we would never have considered wraparound support services for students who earned a C in an intro writing course or a B in an intro lab course, but today we understand that we have to be ready and have to think about a deeper level of academic support across the entire life cycle of an undergrad.
Nationally, institutions like ours have developed many approaches to addressing the classic challenges of student success, building an infrastructure of broad institutional interventions like centralized tutoring, highly specialized support staff, supplemental classes and more. Likewise, professors and advisers have become more attuned to responding to the one-on-one needs of students who may find themselves in trouble. There’s no doubt that this high/low approach has made an impact, and our students have measurably benefited from it. But to assist students caught in the middle, those who by all measures are already “succeeding,” we have to develop a more comprehensive institutional approach that works at the intersections of curricular innovation and wider student support.
Today, we at UA are adding a new layer to the institutional and one-to-one approaches already in place. In our courses, we are pushing to ensure that mastery matters more than a final grade by developing metrics and models that are vital to student learning. This, we believe, will lead to increases in graduation rates. We are working hand in hand with college faculty members, administrators and curriculum committees, arming those partners with the data necessary to develop revisions and supplementary support for the courses identified as critical to graduation rather than term-over-term persistence. We are modeling new classroom practices through the expansion of student-centered active classrooms and adaptive learning to better meet the diverse needs of our students.
When mastery is what matters most, the customary objections to at-risk student intervention matter less. Grade inflation by the instructor and performing for the grade by the student become irrelevant. A foundational course surrounded by the support that a student often finds in lower-division courses is not an additional burden to the student but an essential experience. Although the approach adds pressure on the faculty and staff, it has to be leavened with the resources that help both the instructor and the students succeed.
This is a true universitywide partnership to help a population of students who have found themselves unintentionally stuck in the middle. We must be data informed, not data driven, in supporting our students, because when our data are mapped with a human touch, we can help students unlock their potential in ways even they couldn’t have imagined.
Angela Baldasare is assistant provost for institutional research. Melissa Vito is senior vice president for student affairs and enrollment management and senior vice provost for academic initiatives and student success. Vincent J. Del Casino Jr. is provost of digital learning and student engagement and associate vice president of student affairs and enrollment management at the University of Arizona.
Submitted by Paul Fain on October 17, 2016 - 3:00am
The Lumina Foundation on Monday released a revised strategic plan for achieving its goal of 60 percent of Americans holding a college degree, certificate or other high-quality credential by 2025. The foundation has released a new plan every four years since first proposing the goal in 2008.
The latest iteration provides a more detailed breakdown of the 16.4 million Americans who will need to earn a credential to meet the goal. About 4.8 million are traditional-age students who now are not likely to earn a college degree or certificate. Another 6.1 million are potential returning adult students, who attended college but did not earn a credential. The final group is 5.5 million people with no college credits, drawn from the 64 million Americans who fit that description, Lumina said.
"Through the work we’ve done under our first two strategic plans, we have learned what it will take to reach the goal. But we also have learned that the changes that must be made are not mere tweaks. Modest, incremental improvement will not suffice. Indeed, fundamental redesign is required," the report said. "We must move from a system that is centered on institutions and organized around time to one that is centered on students, organized around high-quality learning and focused on closing attainment gaps. In short, we must build a true system of postsecondary learning from the disconnected and fragmented pieces we have now."
While political support in Washington builds slowly for a federal student record database, Indiana and the University of Texas System get creative with their own data on how students fare after college.
Submitted by Paul Fain on October 7, 2016 - 3:00am
B Lab is a nonprofit group that issues a seal of approval to companies across 120 industries that adhere to voluntary standards based on social and environmental performance, accountability and transparency. After two years of work, the group on Friday released a new benchmarking tool for colleges. The voluntary standards are designed to enable comparisons of both nonprofit and for-profit institutions.
"B Lab recognizes that the cost and outcomes of higher education, particularly regarding for-profit institutions, have become increasingly controversial, but regardless of structure institutions should put their students’ needs first," Dan Osusky, standards development manager at B Lab, said in a written statement. "We see our role as the promoter of robust standards of industry-specific performance that can be used by for-profits and nonprofits alike to create the greatest possible positive impact and serve the public interest, ultimately by improving the lives of their students."
A committee of experts, working with HCM Strategists and with funding from the Lumina Foundation, devised the standards. Laureate Education, a global for-profit chain, already uses the assessment tool.