Federal rating system could displace accreditation as judge of higher ed quality (essay)

Despite all the extensive consultation about the Postsecondary Institution Ratings System during the past 18 months -- all the meetings and the many conversations -- we know almost nothing about its likely impact on accreditation, our all-important effort by colleges, universities and accrediting organizations, working together, to define, judge and improve academic quality.

All that the U.S. Department of Education has officially said to date is that the system will “help inform” accreditation -- and we do not know what this means. 

This is worrisome. Ratings create, in essence, a federal system of quality review of higher education, with the potential to upend the longstanding tradition of nongovernmental accreditation that has carried out this role for more than 100 years. And establishing the system may mean the end of more than 60 years of accreditation as a partner with government, the reliable authority on educational quality to which Congress and the Education Department have turned.

Accreditation is about judgment of academic quality in the hands of faculty members and academic administrators. It is about the commitment to peer review -- academics reviewing academics yet accountable to the public -- as the preferred, most effective mode of determining quality. It is about leadership for academic judgment when it comes to such factors as curriculum, programs, standards and strategic direction remaining in the hands of the academic community. 

In contrast, a ratings system is a path to a government model of quality review in place of the current model of academics as the primary judges of quality.

First introduced by President Obama in August 2013 and turned over to the Education Department for development, the ratings system is on track for implementation in 2015-16. Based on the still incomplete information the department has released to the public, the system is intended to rate (read: judge) colleges and universities on three indicators: access, affordability and student outcomes. Institutions will be classified as “high performing,” “low performing” or “in the middle.” Ultimately, the amount of federal student aid funding a college or university receives is intended to be linked to its rating.

A federal ratings system is both an existential and political challenge to accreditation.

First, there is the challenge of a potential shift of ownership of quality. Second, new key actors in judging quality may be emerging. Finally, the relationship between accreditation and the federal government when it comes to quality may be shifting, raising questions about both the gatekeeping role of accreditation in eligibility for federal funds and the agreement about distribution of responsibilities among the parties in the triad -- the federal government, the states and accreditation.

A ratings system means that government owns quality through its indicators and its decisions about what counts as success in meeting the indicators. The indicators replace peer review. 

It means that government officials are key actors in judging quality. Officials replace academics. With all respect to the talent and commitment of these officials, they are not hired for their expertise in teaching and learning, developing higher education curriculum, setting academic standards, or conducting academic research. Yet using a ratings system calls for just these skills.

A ratings system means that the current relationship between accreditors and the federal government, in which accreditors are dominant with regard to quality judgments, may give way to a lesser role for accreditation, with performance on the ratings system perhaps serving as a key determinant of eligibility for federal funds -- in addition to accreditation. Indeed, it is not difficult to envision a scenario in which ratings replace accreditation entirely with regard to institutional eligibility for access to federal financial aid.

We need to know more about what we do not know about the ratings system. Going forward, we will benefit from keeping the following questions in mind as the system -- and its impact on accreditation -- continues to develop.

First, there are questions about the big picture of the ratings system:

  • Has a decision been made that the United States -- with its singularly distinctive system of government-private sector partnership, one that maximizes the responsible independence of higher education -- is now shifting to the model of government dominance of higher education that typifies most of the rest of the world?
  • What reliable information will be available to students and the public through the ratings system that they do not currently have? Will this information be about academic quality, including effective teaching and learning? What is the added value? 

Second, there are questions about the impact of the ratings on accredited institutions:

  • Are the indicators to serve as the future quality profile of a college or university? Will the three indicators that the system uses -- access, affordability and outcomes -- become the baseline for judging academic quality in the future? 
  • Will it be up to government to decide what counts as success with regard to the outcomes indicators for a college or university -- graduation, transfer of credit, entry to graduate school and earnings?
  • To claim quality, will colleges and universities have to provide information not only about their accredited status but also about their ratings, whether “high performing,” “low performing” or “in the middle”?
  • Will institutions be pushed to diminish their investment in accreditation if, ultimately, it is the ratings that matter -- in place of accreditation?

Finally, there are questions about how ratings will affect the day-to-day operation of accrediting organizations and their relationship to the federal government:

  • Will accreditors be required to collect, use or otherwise take into account the information generated by the ratings system? If so, how is this to influence their decisions about institutions and programs, decisions that are currently based on peer review, not ratings?
  • Will performance on the ratings system be joined with formal actions of accrediting organizations, with both required for accredited status and thus eligibility of institutions for federal funds -- in contrast to the current system of reliance on the formal actions of accrediting organizations?
  • How, if at all, will the ratings system affect the periodic federal review of the 52 accrediting organizations that are currently federally recognized? Will the government review now include the ratings of institutions as part of examination and judgment of an accreditor’s effectiveness?

While we cannot answer many of these questions at this time, we can use them to anticipate what may take place in the approaching reauthorization of the Higher Education Act, with bills expected in spring or summer.

We can use them to identify key developments in the ratings that have the potential to interfere with our efforts to retain peer review and nongovernmental quality review in preference to the ratings system.

Judith S. Eaton is president of the Council for Higher Education Accreditation.

Faculty members should drive efforts to measure student learning (essay)

Lumina Foundation recently released an updated version of its Degree Qualifications Profile (D.Q.P.), which helps define what students should know and what skills they should master to obtain higher education degrees.

This revised framework marks a significant step in the conversation about measuring students’ preparedness for the workforce and for life success based on how much they've learned rather than how much time they’ve spent in the classroom. It also provides a rare opportunity for faculty members at colleges and universities to take the lead in driving long-overdue change in how we define student success.

The need for such change has never been greater. As the economy evolves and the cost of college rises, the value of a college degree is under constant scrutiny. No longer can we rely on piled-up credit hours to prove that students are prepared for careers after graduation. We need a more robust -- and more relevant -- way of showing that our work in the classroom yields results.

Stakeholders ranging from university donors to policy makers have pushed for redefining readiness, and colleges and universities have responded to their calls for action. But too often the changes have been driven by the need to placate those demanding reform and produce quick results. That means faculty input has been neglected.

If we’re to set up assessment reform for long-term success, we need to empower faculty members to be the true orchestrators.  

The D.Q.P. provides an opportunity to do that, crystallizing conversations that have been going on among faculty and advisers for years. Lumina Foundation developed the tool in consultation with faculty and other experts from across the globe and released a beta version to be piloted by colleges and universities in 2011. The latest version reflects feedback from the institutions that piloted that beta version -- and captures the iterative, developmental processes of education understood by people who work with students daily.

Many of the professionals teaching in today’s college classrooms understand the need for change. They’re used to adapting to ever-changing technologies, as well as evolving knowledge. And they want to measure students’ preparedness in a way that gives them the professional freedom to own the changes and do what they know, as committed professionals, works best for students.

As a tool, the D.Q.P. encourages this kind of faculty-driven change. Rather than a set of mandates, it is a framework that invites faculty members to be change agents. It allows them to assess students in ways that are truly beneficial to student growth. Faculty members have no interest in teaching to the assessment; they want to use what they glean from assessments to improve student learning.

We’ve experienced the value of using the D.Q.P. in this fashion at Utah State University. In 2011, when the document was still in its beta version, we adopted it as a guide to help us rethink general education and its connection to our degrees and the majors within them. 

We began the process by convening disciplinary groups of faculty to engage them in a discussion about a fundamental question: “What do you think your students need to know, understand and be able to do?” This led to conversations about how students learn and what intellectual skills they need to develop.

We began reverse engineering the curriculum, which forced us to look at how general education and the majors work together to produce proficient graduates. This process also forced us to ask where degrees started, as well as ended, and taught us how important advisers, librarians and other colleagues are to strong degrees.

The proficiencies and competencies outlined in the D.Q.P. provided us with a common institutional language to use in navigating these questions. The D.Q.P.’s guideposts also helped us to avoid reducing our definition of learning to course content and enabled us to stay focused on the broader framework of student proficiencies at various degree milestones.

Ultimately the D.Q.P. helped us understand the end product of college degrees, regardless of major: citizens who are capable of thinking critically, communicating clearly, deploying specialized knowledge and practicing the difficult soft skills needed for a 21st-century workplace.

While establishing these criteria in general education, we are teaching our students to see their degrees holistically. In our first-year program, called Connections, we engage students in becoming “intentional learners” who understand that a degree is more than a major. This program also gives students a conceptual grasp of how to use their educations to become well prepared for their professional, personal and civic lives. They can explain their proficiencies within and beyond their disciplines, and they understand that their soft skills are at a premium.

While by no means a perfect model, what we’ve done at Utah State showcases the power of engaging faculty and staff as leaders to rethink how a quality degree is defined, assessed and explained. Such engagement couldn’t be more critical.

After all, if we are to change the culture of higher learning, we can't do it without the buy-in from those who perform it. Teachers and advisers want their students to succeed, and the D.Q.P. opens a refreshing conversation about success that focuses on the skills and knowledge students truly need.

The D.Q.P. helps give higher education practitioners an opportunity to do things differently. Let’s not waste it.

Norm Jones is a professor of history and chairman of general education at Utah State University. Harrison Kleiner is a lecturer in philosophy at Utah State.

Obama proposal on tuition-free community colleges elevates sound bites over sound policy (essay)

President Obama has jumped on the bandwagon, started in Tennessee, of making community college tuition-free. The proposal is his most recent effort to increase the prominence of the federal government in higher education. While giving higher education more federal visibility may be a good thing, making community colleges tuition-free is also the latest in a series of proposals in which the administration seems to have decided that sound bites trump sound policy.

The cycle began in the administration’s early days when it declared its primary goal in higher education was to “re-establish” the U.S. as having the world’s highest attainment rate -- the proportion of working adults with a postsecondary degree of some sort. 

Never mind that the U.S. has not had the highest rate in the world for at least several decades and that achieving such a distinction now is well nigh impossible given where some other countries are. And also ignore the fact that some countries that have overtaken us, such as South Korea and Japan, have done so in large part because they are educating an increasing share of a declining number of their young people -- a demographic condition we should want to avoid at all costs.

In this effort to be Number One in higher education, the Obama administration is continuing a trend in K-12 education, begun in the Clinton and George W. Bush administrations, in which we as a nation set totally unrealistic goals to be achieved after the incumbent administration has left office. It is not clear why we would want to expand this practice into higher education, but that is what we are doing.

The administration also pushed, in its first year, for a remarkable expansion of Pell Grants as part of the economic stimulus package of 2009. It was certainly good to augment Pell Grants in the midst of a severe recession, when so many students were having a tough time paying their college bills. But rather than doing so on a temporary basis by increasing awards for current recipients, the administration pushed for, and Congress agreed to, a permanent legislative change that increased the number of recipients by 50 percent and doubled long-term funding.

This is the equivalent of changing tax rates in the middle of a recession rather than providing a rebate. It certainly provided more aid for many more students -- nearly one in two undergraduates now receives a Pell Grant. But the expansion in eligibility means less aid is available for the low-income students who most need it. And few seem worried that Pell Grant increases may have led many institutions that package aid to reduce the grants they provide to Pell recipients from their own funds, as reflected in the fact that institutional aid increasingly goes to middle-income students.

The Obama administration’s recent effort to develop a rating system for postsecondary institutions is another example of politics triumphing over sound policy. The rhetoric appeals to the noble notion of making institutions more productive and more affordable, but the metrics the administration has proposed are unlikely to produce the desired results and may well produce unintended bad ones.

Much more troublesome, the administration’s ratings proposal would penalize students based on where they decide to enroll, as those going to colleges that don’t perform well would get less aid. This is illogical as well as counterproductive. Thankfully, there seems to be little chance that the proposal will be adopted, but one is left to wonder why it was suggested and pushed when it would do little to address the many real challenges facing American higher education, such as chronic inequity and unaffordability.

Which brings me to the most recent proposal by President Obama -- to make community colleges tuition-free. At this stage, we know relatively little about what is being proposed other than that it is modeled on what was done in Tennessee, where state lottery funds (not a very good federal model) were used to ensure that students with good grades would not have to pay tuition to go to community college. But since there are so few details as to how this tuition-free package would be structured, there are more questions regarding the president’s proposal than there are answers. These include:

Who will benefit and who will pay? If the administration were to follow the Tennessee plan, current Pell Grant recipients would largely not benefit, as their Pell Grant awards already fully cover the cost of tuition at most community colleges throughout the country. So the beneficiaries would disproportionately be middle-class students, most of whom can afford the roughly $3,300 average annual tuition at community colleges, just as has been the case for the Tennessee plan.

The administration to its credit seems to recognize this potential lack of progressivity, and its spokesmen have declared (to Inside Higher Ed) that the new benefits will be on top of what Pell Grant recipients currently receive. This could be an avenue for a big step forward in federal policy were we to recognize that Pell Grants are largely for living expenses for students whose families cannot afford to pay those expenses, but it means that the federal costs of implementing such a plan will be substantial, probably far more than the $60 billion in additional costs over 10 years now being suggested.
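To make the affordability arithmetic concrete, here is a minimal sketch contrasting a last-dollar design like Tennessee’s with the first-dollar, on-top-of-Pell design the administration’s spokesmen have described. It uses the essay’s $3,300 average tuition figure; the $5,730 maximum Pell Grant (the 2014-15 level) and the function names are illustrative assumptions, not details of any official proposal.

```python
# Illustrative sketch: who gains under last-dollar vs. first-dollar designs.
AVG_TUITION = 3300  # essay's average annual community college tuition
MAX_PELL = 5730     # assumed 2014-15 maximum Pell Grant

def last_dollar_award(pell_aid, tuition=AVG_TUITION):
    """Tennessee-style: the program pays only the tuition left after other grant aid."""
    return max(tuition - pell_aid, 0)

def first_dollar_award(tuition=AVG_TUITION):
    """On-top-of-Pell: the program pays full tuition regardless of other aid."""
    return tuition

print(last_dollar_award(pell_aid=MAX_PELL))  # 0    -> a full-Pell student gains nothing
print(last_dollar_award(pell_aid=0))         # 3300 -> a middle-class student gains full tuition
print(first_dollar_award())                  # 3300 -> every student gains, at far higher federal cost
```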

Also lost in the enthusiasm about making community colleges tuition-free is the reality that the biggest bills for most students are the costs of living while enrolled and the opportunity cost of leaving the job market to enroll in school on more than an occasional basis. Also lost in the hubbub is the question of how these benefits are going to be paid for. This key financing question remains largely unanswered in the administration’s explanations thus far.

What would happen to enrollments in other higher education institutions? Advocates for the Tennessee Promise talk about how it has already boosted enrollments in community colleges. There seems to be little consideration, though, of whether this might come at the expense of enrollments in other colleges and universities. The Obama administration clearly prefers that students go to community colleges rather than for-profit trade schools, but it seems to have little concern that offering more aid for students enrolling in community colleges might adversely affect enrollments in more traditional four-year institutions -- including historically black colleges that could ill afford the drop-off in enrollments.

But federal and state officials have an obligation to recognize that enrollments in higher education are not unlimited and that providing incentives for students to enroll in one sector means that enrollments in other sectors are likely to decline. Is the next step for the federal government to propose a program of support for those institutions that cannot afford to wait for all those new community college students to transfer in two or three years to fill their now empty seats?

Why would community colleges participate? Like many other federal and state policy initiatives, the president’s proposal reflects a tendency to think only in terms of demand and to believe that price reductions will inevitably result in enrollment increases. But the economic reality is that good policy must take into account institutional behavior as well, and it is not at all clear why community colleges would change their behavior in light of the Obama proposal. Under the Obama plan, the federal and state governments would replace funds that families currently spend or loans that students currently borrow for tuition. The likely result of such a policy would be more students enrolling in already overcrowded community colleges, with little or no additional funds provided to the colleges to educate them.

If one truly wants to improve community college financing, a better approach would be one in which governments recognize the additional costs entailed in enrolling additional students and help pay for those costs. But in the absence of such a proposal, the current Obama plan seems more of the same -- more requirements but no more money. As a result, it is hard to understand the enthusiasm of the community college and other national associations for the president’s plan.

Why would states participate? It’s also not immediately clear why states would participate in the Obama plan, as it is aimed primarily or entirely at changing how tuition is financed. As a result, it would not get at the bulk of the community college financing iceberg -- what states and localities spend in support of every student who enrolls. So the question remains: why would states choose to participate in a plan that obligates them to meet a series of new requirements and to pay one-quarter of tuition costs, all while still paying what they do now for operating subsidies?

In sum, what we know of the president’s plan fits a troubling pattern that seems to characterize our higher education policy debates these days. Political considerations trump good policy. The interests of low-income students get second billing to middle-class affordability, or no billing at all. Not enough attention is paid to how things actually would work or why institutions or states would decide to participate.

It all goes to show that, as the economist Milton Friedman famously put it, there is no such thing as a free lunch. One of the problems with the Obama administration’s continuing enthusiasm for higher education policy initiatives is that it doesn’t seem to recognize this basic economic reality.

Arthur M. Hauptman is a public policy consultant specializing in higher education policy and finance. This is the first in a series of articles about how federal and state higher education policies might be changed to produce greater equity, efficiency and effectiveness.

Assessment (of the right kind) is key to institutional revival

Today, leaders of colleges and universities across the board, regardless of size or focus, are struggling to meaningfully demonstrate the true value of their institutions for students, educators and the greater community, because they can’t really prove that students are learning.

Most are using some type of evaluation or assessment mechanism to keep “the powers that be” happy: earnest narratives about goals and findings, interspersed with high-level data tables and colorful bar charts. But this is not scientific, campuswide assessment of student learning outcomes aimed at the valid measurement of competency.

The "Grim March" & the Meaning of Assessment

Campuswide assessment efforts rarely involve the rigorous, scientific inquiry about actual student learning that is aligned from program to program and across general education. Instead, year after year, the accreditation march has trudged grimly on, its participants working hard to produce a plausible picture of high “satisfaction” for the whole, very expensive endeavor.

For the past 20-plus years, the primary source of evidence for a positive impact of instruction has come from tools like course evaluation surveys. Institutional research personnel have diligently combined, crunched and correlated these data with other, mostly indirect measures such as retention, enrollment and grade point averages.

Attempts are made to produce triangulation with samplings of alumni and employer opinions about the success of first-time hires. All of this is called “institutional assessment,” but none of it produces statistical evidence from direct measurement that the university’s instruction is responsible for students’ skill sets. Research measurement methods like chi-square tests and inter-rater reliability, combined with a willingness to assess across the institution, can demonstrate that a change in student learning over time is statistically significant and is the result of soundly delivered curriculum. This is the kind of “assessment” the world at large wants to know about.
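As a rough illustration of the direct-measure statistics named above -- a sketch with invented numbers, not any institution’s actual method -- the following code runs a chi-square test on whether rubric scores shifted between two cohorts and computes Cohen’s kappa for inter-rater reliability between two faculty scorers.

```python
# Illustrative sketch with invented data, not real institutional results.
from scipy.stats import chi2_contingency
from sklearn.metrics import cohen_kappa_score

# Counts of students rated Beginning / Developing / Proficient on the same
# rubric, one row per cohort.
scores = [
    [42, 61, 27],  # earlier cohort
    [28, 55, 49],  # later cohort
]
chi2, p, dof, expected = chi2_contingency(scores)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")  # small p -> the shift is statistically significant

# Two faculty raters scoring the same ten papers (0/1/2 on the rubric).
rater_a = [2, 1, 1, 0, 2, 2, 1, 0, 1, 2]
rater_b = [2, 1, 0, 0, 2, 1, 1, 0, 1, 2]
print(f"Cohen's kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")  # agreement beyond chance
```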

The public is not satisfied with inferentially derived evidence. Given the cost, they yearn to know if their sons and daughters are getting better at things that matter to their long-term success. Employers routinely stoke this fire by expressing doubt about the out-of-the-box skills of graduates.

Who Owns Change Management?

Whose responsibility is it to redirect the march to provide irrefutable reports that higher education is meeting the needs of all its stakeholders? Accreditors now wring their hands and pronounce that reliance on indirect measures will no longer suffice. They punish schools with orders to fix the shortfalls in the assessment of outcomes and dole out paltry five-year passes until the next audit. They will not, however, provide sound, directive steps for the marchers about how to systematically address learning outcomes.

How about the government? The specter of more third-party testing is this group’s usual response. They did it to K-12 and it has not worked there either. Few would be happy with that center of responsibility.

Back to the campus. To be fair, institutional research and institutional effectiveness offices have been reluctant to get involved with direct measures of student performance, for good reasons. Culture dictates that such measures belong to program leaders and faculty; the traditions and rules of “academic freedom” somehow demand this. The problem is that faculty and program leaders, while genuine content experts, are no more versed in effective assessment of student outcomes than anyone else on campus.

This leaves us with campus leaders, who have long suspected something is very wrong, or at least misdirected. To paraphrase one highly placed academic officer: “We survey our students and a lot of other people, and I’m told that our students are ‘happy.’ I just can’t find anyone who can tell me for sure if they’re ‘happy-smarter’ or not!” Immersion in the compliance march does not give these leaders much of a clue about what to do about the dissonance they are feeling.

The Assessment Renaissance

Still, the smart money is on higher ed presidents first and foremost, supported by their provosts and other chief academic officers. If there is to be deep change in the current culture, they are the only ones with the proximal power to make it happen. Many of them have already declared that “disruption” in higher education is now essential.

Leaders looking to end the walking-dead assessment march in a systematic way need to:

  1. Disrupt. This requires a college or university leader to see beyond the horizon and to understand the long-term objective. They don’t need to have all the ideas or the proper procedures, but they must have the vision to be a leader and a disruptor. And they must demand change on a realistic but short timetable.
  2. Get Expertise. Outcomes/competency-based assessment has been a busy field of study over the past half-decade. Staff development and helping hands from outside the campus are needed.
  3. Rally the Movers and Shakers. In almost every industry there are leaders who lack ascribed power but whose drive is undeniable. They are the innovators and the early adopters; enlist them as co-disruptors. On every campus there are faculty and staff members willing to take risks for the greater good of assessment and to challenge the very fabric of institutional assessment. Gather them together and give them the resources, the authority and the latitude to get the job done. Defend them. Cheerlead at every opportunity.
  4. Change the Equation. Change the conversation from GPAs and satisfaction surveys to one essential, unified question: Are students really learning, and can a permanent change in behavior be measurably demonstrated?
  5. Rethink Your Accreditation Assessment Software. Most accreditation software systems rely on processes that are narrative rather than systematic inquiry via data. Universities are full of people who do research for a living. Give them tools (yes, like Chalk & Wire, which my company provides) to investigate learning and thereby rebuild a systematic approach to improving competency.
  6. Find the Carrots. Assume a faculty member in engineering is going to publish. Should a research-based study about teaching and learning in that field count toward rank and tenure? If disruption is the goal, the correct answer is yes.

Assessment is complex, but it’s not complicated. Stop the grim march. Stand still for a time. Think about learning and what assessment really means and then pick a new proactive direction to travel with colleagues.

Geoff Irvine is CEO and founder of Chalk & Wire.

Administrators should work with the faculty to assess learning the right way (essay)

“Why do we have such trouble telling faculty what they are going to do?” said the self-identified administrator, hastening to add that he “still thinks of himself as part of the faculty.”

“They are our employees, after all. They should be doing what we tell them to do.”

Across a vast number of models for assessment, strategic planning, and student services on display at last month’s IUPUI Assessment Institute, it was disturbingly clear that assessment professionals have identified “The Faculty” (beyond the lip service to #notallfaculty, always as a collective body) as the chief obstacle to successful implementation of campuswide assessment of student learning. Faculty are recalcitrant. They are resistant to change for the sake of being resistant to change. They don’t care about student learning, only about protecting their jobs. They don’t understand the importance of assessment. They need to be guided toward the Gospel with incentives and, if those fail, consequences.

Certainly, one can find faculty members of whom these are true; every organization has those people who do just enough to keep from getting fired. But let me, at risk of offending the choir to whom keynote speaker Ralph Wolff preached, suggest that the faculty-as-enemy trope may well be a problem of the assessment field’s own making. There is a blindness to the organizational and substantive implications of assessment, hidden behind the belief that assessment is nothing more than collecting, analyzing, and acting rationally on information about student learning and faculty effectiveness.

Assessment is not neutral. In thinking of assessment as an effort to determine whether students are learning and faculty are being effective, it is imperative that we unpack the implicit subject doing the determining. That should make clear that assessment is first and foremost a management rather than a pedagogical practice. Assessment not reported to the administration meets the requirements of neither campus assessment processes nor accreditation standards, and is thus indistinguishable from non-assessment. As a fundamental principle of governance in higher education, assessment is designed to promote what social scientist James Scott has called “legibility”: the ability of outsiders to understand and compare conditions across very different areas in order to facilitate those outsiders’ capacity to manage.

The Northwest Commission on Colleges and Universities, for example, requires schools to practice “ongoing systematic collection and analysis of meaningful, assessable, and verifiable data” to demonstrate mission fulfillment. That is not simply demanding that schools make informed judgments. Data must be assessable and verifiable so that evaluators can examine the extent to which programs revise their practices using the assessment data. They can’t do that unless the data make sense to them. Administrators make the same demand on their departments through campus assessment processes. In the process a hierarchical, instrumentally rational, and externally oriented management model replaces one that has traditionally been decentralized, value rational, and peer-driven.

That’s a big shift in power. There are good (and bad) arguments to be made in favor of (and opposed to) it, and ways of managing assessment that shift that power more or less than others. Assessment professionals are naïve, however, to think that those shifts don’t happen, and fools to think that the people on the losing end of them will not notice or simply give in without objection.

At the same time, assessment also imposes substantive demands on programs through its demand that they “close the loop” and adapt their curriculums to those legible results regardless of how meaningful those results are to the programs themselves. An externally valid standard might demand significant changes to the curriculum that move the program away from its vision.

In my former department we used the ETS Major Field Test as such a standard. But while the MFT tests knowledge of political science as a whole, competence in political science is specific to subfields. Even at the undergraduate level, students specialize sufficiently to be, for example, fully conversant in international relations and ignorant of political thought. The overall MFT score does not distinguish between competent specialization and broad mediocrity. One solution was to expect that students demonstrate excellence in at least one subfield of the discipline. But the curriculum would then have had to require that students take nearly every course we offered in a subfield, and staffing realities in our program would inevitably have made that field American politics.
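A toy example, with invented subscores, shows why: an overall average in the MFT’s style cannot tell competent specialization from broad mediocrity, while a best-subfield rule can.

```python
# Invented subscores for two hypothetical majors; illustrative only.
specialist = {"American": 85, "IR": 55, "Thought": 50}  # deep in one subfield
generalist = {"American": 64, "IR": 63, "Thought": 63}  # mediocre everywhere

for name, scores in (("specialist", specialist), ("generalist", generalist)):
    overall = sum(scores.values()) / len(scores)
    print(f"{name}: overall = {overall:.1f}, best subfield = {max(scores.values())}")
# Both overall averages come out the same (63.3); only the subfield
# maximum distinguishes the competent specialist from the generalist.
```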

Because the MFT was legible to a retired Air Force officer (the institutional effectiveness director), an English professor (the dean), a chemist (the provost), and a political appointee with no previous experience in higher education (the president), it stayed in place as a benchmark of progress but offered little to guide program management. The main tool we settled on instead was an assessment of the research paper produced in a required junior-level research methods course (one that nearly all students put off to their final semester). That assessment gave a common basis for evaluation (knowledge of quantitative research methods) and allowed faculty to evaluate substantive knowledge in a very narrow range of content through the literature review. But it also shifted emphasis toward quantitative work in the discipline and marginalized political thought altogether, since that subfield isn’t based on empirical methods. We considered adding a political thought assignment, but that would have required students to prioritize political thought over the empirical fields (no other substantive field having a required assignment) rather than putting it on an equal footing.

Evaluating a program with “meaningful, assessable, and verifiable data” can’t be done without changing the program. To “close the loop” based on MFT results required a substantive change in how we saw our mission: from producing well-rounded students to specialists in American politics. To do so with the methods paper required changes in course syllabuses and advising to bring more emphasis on empirical fields, more quantitative rather than qualitative work within those fields, more emphasis on methods supporting conclusions rather than the substance of the conclusions, and less coursework in political thought. We had a choice between these options. But we could not choose an option that would not require change in response to the standard, not just the results.

This is the reality facing those, like the administrator I quoted at the beginning of this essay, who believe that they can tell faculty what to do with assessment without telling them what to do with the curriculum. If assessment requires that a program make changes based on the results of its assessment processes, then the selection of processes defines a domain of curricular changes that can result. Some of these will be unavoidable: a multiple-choice test will require faculty to favor knowledge transmission over synthetic thinking. Others will be completely proscribed: if employment in the subfield of specialization is an assessment measure, the curriculum in political thought will never be reinforced, because people don’t work in political thought. But no process can be neutral among all possible curriculums.

Again, that may or may not be a bad thing. Sometimes a curriculum just doesn’t work, and assessment can be a way to identify it and replace it with something that does. But the substantive influence of assessment is most certainly a thing one way or the other, and that thing means that assessment professionals can’t say that assessment doesn’t change what faculty teach and how they teach it. When they tell faculty members that, they appear at best clueless and at worst disingenuous. With most faculty members having oversensitive BS detectors to begin with, especially when dealing with administrators, piling higher and deeper doesn’t exactly win friends and influence people.

The blindness that comes from belief in organizationally and curricularly neutral assessment is, I think, at the heart of the condescending attitudes toward faculty at the Assessment Institute. In the day two plenary session, one audience member asked, essentially, “What do we do about them?” as if there were no faculty members in the room. The faculty member next to me was quick to tune out as the panel took up the discussion with the usual platitudes about buy-in and caring about learning.

Throughout the conference there was plenty of discussion of why faculty members don’t “get it.” Of how to get them to buy into assessment on the institutional effectiveness office’s terms. Of providing effective incentives -- carrots, yes, but plenty of sticks -- to get them to cooperate. Of how to explain the importance of accreditation to them, as if they are unaware of even the basics. And of faculty paranoia that assessment was a means for the administration to come for their jobs.

What there wasn’t: discussion of what the faculty’s concerns with assessment actually are. Of how assessment processes do in fact influence what happens in classrooms. Of how assessment feeds program review, thus influencing administrative decisions about program closure and the allocation of tenure lines (especially of the conversion of tenure lines to adjunct positions when vacancies occur). Of the possibility that assessment might have unintended consequences that hinder student learning. These are very real concerns for faculty members, and should be for assessment professionals as well.

Nor was there discussion of what assessment professionals can do to work with faculty in a relationship that doesn’t subordinate faculty. Of how assessment professionals can build genuinely collaborative rather than merely cooptive relationships with faculty members. Of, more than anything, the virtues of listening before telling. When it comes to these things, it is the assessment field that doesn’t “get it.”

Let me assure you, as a former faculty member who talks about these issues with current ones: faculty members do care about whether students learn. In fact, many lose sleep over it. Faculty members informally assess their teaching techniques every time they leave a classroom and adjust what they do accordingly. In fact, that usually happens before they walk back into that classroom, not at the end of a two-year assessment cycle. Faculty members most certainly feel disrespected by suggestions they only care for themselves. In fact, it is downright offensive to suggest that they are selfish when in order to make learning happen they frequently make less than their graduates do and live in the places their graduates talk of escaping.

Assessment professionals need to approach faculty members as equal partners rather than as counterrevolutionaries in need of reeducation. That’s common courtesy, to be sure. But it is also essential if assessment is to actually improve student learning.

You do care about student learning, don’t you?

Jeffrey Alan Johnson is assistant director of institutional effectiveness and planning at Utah Valley University.

Ratings and scorecards: the wrong kind of higher ed accountability (essay)

The scorecards and rating systems for higher education institutions that have been floating around Washington, if used for purposes beyond providing comparable consumer information, would make the federal government an arbiter of quality and a judge of institutional performance.

Such a change would undermine the comprehensive, careful scrutiny currently provided by regional accrediting agencies, replacing it with cursory reviews.

Regional accreditors provide a peer-review process that pushes institutions to investigate the key challenges they face and to look beyond symptoms for root causes. It requires all providers of postsecondary education to examine closely every aspect of performance that is crucial to strengthening institutional excellence, improvement, and innovation. If you want to know how well a university is really performing, a graduation rate will only tell you so much.

But the peer-review process conducted by accrediting bodies provides a view into the vital systems of the institution: the quality of instruction, the availability and effectiveness of student support, how the institution is led and governed, its financial management, and how it uses data.

Moreover, as part of the peer-review process, accrediting bodies mobilize teams of expert volunteers to study governance and performance measures that encourage institutions to make significant changes. No government agency can replace this work, can provide the same level of careful review, or has the resources to mobilize such an expert group of volunteers. In fact, the federal government has long recognized its own limitations and, since 1952, has used accreditation by a federally recognized accrediting agency as a baseline for institutional eligibility for Title IV financial-aid programs.

Attacked at times by policy makers as an irrelevant anachronism and by institutions as a series of bureaucratic hoops through which they must jump, the regional accreditors’ approach to quality control has in fact become increasingly cost-effective, transparent, and data- and outcomes-oriented.

Higher education accreditors work collaboratively with institutions to develop mutually agreed-upon common standards for quality in programs, degrees, and majors. In fact, in the Southern region, accreditation has addressed public and policy maker interests in gauging what students gain from their academic experience by requiring, since the 1980s, the assessment of student learning outcomes in colleges. Accreditation agencies also have established effective approaches to ensure that students who attend institutions achieve desired outcomes for all academic programs, not just a particular major.

While the federal government has the authority to take actions against institutions that have proven deficient, it has not used this authority regularly or consistently. A letter to Congress from the American Council on Education and 39 other organizations underscored the inability of the U.S. Department of Education to act with dispatch, noting that last year the Department announced “it would levy fines on institutions for alleged violations that occurred in 1995 -- nearly two decades prior.”

By contrast, consider that in the past decade, the Southern Association of Colleges and Schools Commission on Colleges stripped nine institutions of their accreditation status and applied hundreds of sanctions to all types of institutions (from online providers to flagship campuses) in its region alone. Yet when accreditors have acted boldly in recent times, they have been criticized by politicians for going too far, giving accreditors the sense that we’re “damned if we do, damned if we don’t.”

The Problem With Simple Scores

Our concern about using rating systems and scorecards for accountability is based on several factors. Beyond tilting the system toward the lowest common denominator of quality, rating approaches can create new opportunities for institutions to game the system (as with U.S. News & World Report ratings and rankings) and introduce unintended consequences as we have seen occur in K-12 education.

Over the past decade, the focus on a few narrow measures for the nation’s public schools has not led to significant achievement gains or closing achievement gaps. Instead, it has narrowed the curriculum and spurred the current public backlash against overtesting. Sadly, the data generated from this effort have provided little actionable information to help schools and states improve, but have actually masked -- not illuminated -- the root causes of problems within K-12 institutions.

Accreditors recognize that the complex nature of higher education means that neither accreditors nor the government should dictate how individual institutions meet desired outcomes. No single bright-line measure of accountability is appropriate for the vast diversity of institutions in the field, each with its own unique mission. The fact that students often enter and leave the system, and increasingly earn credits from multiple institutions, further complicates measures of accountability.

Moreover, setting minimal standards will not push institutions that think they are high performing to get better. All institutions -- even those considered “elite” -- need to work continually to achieve better outcomes and should have a role in identifying key outcomes and strategies for improvement that meet their specific challenges.

Accreditors also have demonstrated they are capable of addressing new challenges without strong government action. With the explosion of online providers, accreditors found a solution to address the challenges of quality control for these programs. Accrediting groups partnered with state agencies, institutions, national higher education organizations, and other stakeholders to form the State Authorization Reciprocity Agreements, which use existing regional higher education compacts to allow for participating states and institutions to operate under common, nationwide standards and procedures for regulating postsecondary distance education. This approach provides a more uniform and less costly regulatory environment for institutions, more focused oversight responsibilities for states, and better resolution of complaints without heavy-handed federal involvement.

Along with taking strong stands to sanction higher education institutions that do not meet high standards, regional accreditors are better-equipped than any centralized governmental body at the state or national level to respond to the changing ecology of higher education and the explosion of online providers.

We argue for serious -- not checklist -- approaches to accountability that support improving institutional performance over time and hold institutions of all stripes to a broad array of criteria that make them better, not simply more compliant.

Belle S. Wheelan is president of the Southern Association of Colleges and Schools Commission on Colleges, the regional accrediting body for 11 states and Latin America. Mark A. Elgart is founding president and chief executive officer for AdvancED, the world’s largest accrediting body and parent organization for three regional K-12 accreditors.

