Institutional administration

50 questions about higher education (essay)

I’ve worked in higher education for 23 years, 4 months and 6 days. If you add college and grad school to the mix, I’ve been associated with universities for (let’s see... carry the five... plus two… equals) a long time.

So I’ve had plenty of opportunities to ponder our peculiar industry and consider why things are the way they are.

People always ask me that very question. Really -- they just come up to me at parties, shrug their shoulders and say, “Why?” I try not to think it’s some kind of existential query or it’s because I’m wearing a plaid jacket with a striped shirt and a polka dot tie. I might develop a complex or something.

No, I think we simply have more questions than answers. To wit:

  1. Why does our year end in June (or July, for some) when the rest of the world thinks in terms of, you know, January to December?
  1. Why, when we’re considering change of any sort, is the most frequently uttered phrase, “Because we’ve always done it that way”?
  1. Why, when communicating externally, do we use jargon and buzzwords only we understand?
  1. Why do we aim to obfuscate and befuddle in the Orwellian tradition?
  1. Why do some believe academic freedom extends beyond the normal boundaries of free speech and, for that matter, decorum?
  1. Why do we assume academic freedom doesn’t exist absent tenure?
  1. Why do we think the public understands tuition discounting and won’t have sticker shock?
  1. Why do birds suddenly appear…?
  1. Why don’t TV crews follow athletes from the field to the library after Saturday night’s big game to show that academic ability and athletic prowess can live in true harmony?
  1. Why does every campus community in America complain about parking as if it’s their own private hell?
  1. Why don’t we conclude that if it takes 10 months to fill an important administrative vacancy and the place doesn’t fold in the meantime, then perhaps we could do without it?
  1. Why are there no classes on Fridays?
  1. Why are there classes at 8 a.m.?
  1. Why does the Big 12 have 10 members?
  1. Why does the Big Ten have 14?
  1. Why does the Atlantic Coast Conference think the coast extends to South Bend, Ind.?
  1. Why does the Big East consider Chicago east?
  1. Why are résumés 2 pages and vitae 30?
  1. Why do no decisions get made and no work gets done during the six weeks known as “the holidays”?
  1. Why are we no longer permitted to utter the word “Christmas”?
  1. Why do we hire experienced experts whose first order of business is to hire consultants?
  1. Why do fools fall in love?
  1. Why can’t I find SEC hockey on ESPNU?
  1. Why do adjuncts adjunct under such conditions?
  1. Why does the media pay so much attention to universities that collectively enroll less than 1 percent of our nation’s students?
  1. Why don’t they pay more attention to systemic issues such as those adjuncts?
  1. Why do students never read the syllabus until something goes wrong?
  1. Why do employees never read the employee manual until something goes wrong?
  1. Why do all mission statements sound the same and yet say nothing?
  1. Why aren’t there more bowling scholarships?
  1. Why do we still value seat time over competencies?
  1. Why do we conflate administrative experience with ability?
  1. Why do we need 22 assistant directors of admissions?
  1. Why is an appendix more valuable to a book than to a human body?
  1. Why can’t we be friends?
  1. Why do textbooks cost more than my first car?
  1. Why do textbooks depreciate faster than cars?
  1. Why do people post what they had for lunch on Facebook?
  1. Why do we respond?
  1. Why isn’t college baseball more popular?
  1. Why do we continue blaming rising costs on external regulations?
  1. Why do we need climbing walls?
  1. Why do we celebrate snow days like we’re in middle school?
  1. Why don’t we have more snow days?
  1. Why are the paved pathways across the quad never the shortest route?
  1. Why don’t we do it in the road?
  1. Why do people confuse deciding with doing?
  1. Why do we fuss with the various Latin declensions of “alumni” when it’s easier to say “graduates”?
  1. Why do we all say we recognize charismatic leadership when we see it but can’t seem to define charisma?
  1. Why ask why?

Mark J. Drozdowski is director of university communications at the University of New Haven. This is the latest installment of an occasional humor column, Special Edification.

Imperial College London investigates role of pressure in death of academic

Imperial College London will review policies after death of scholar who is believed to have felt his job was at risk due to failed grant applications.

Ratings and scorecards: the wrong kind of higher ed accountability (essay)

Calls for scorecards and rating systems of higher education institutions have been floating around Washington. If used for purposes beyond providing comparable consumer information, such ratings would make the federal government an arbiter of quality and a judge of institutional performance.

This change would undermine the comprehensive, careful scrutiny currently provided by regional accrediting agencies, replacing it with cursory reviews.

Regional accreditors provide a peer-review process that sparks investigation into the key challenges institutions face, looking beyond symptoms to root causes. They require all providers of postsecondary education to examine closely every aspect of performance that is crucial to strengthening institutional excellence, improvement, and innovation. If you want to know how well a university is really performing, a graduation rate will only tell you so much.

But the peer-review process conducted by accrediting bodies provides a view into the vital systems of the institution: the quality of instruction, the availability and effectiveness of student support, how the institution is led and governed, its financial management, and how it uses data.

Moreover, as part of the peer-review process, accrediting bodies mobilize teams of expert volunteers to study governance and performance measures that encourage institutions to make significant changes. No government agency can replace this work, can provide the same level of careful review, or has the resources to mobilize such an expert group of volunteers. In fact, the federal government has long recognized its own limitations and, since 1952, has used accreditation by a federally recognized accrediting agency as a baseline for institutional eligibility for Title IV financial-aid programs.

Attacked at times by policy makers as an irrelevant anachronism and by institutions as a series of bureaucratic hoops through which they must jump, the regional accreditors’ approach to quality control has instead become increasingly cost-effective, transparent, and data- and outcomes-oriented.

Higher education accreditors work collaboratively with institutions to develop mutually agreed-upon common standards for quality in programs, degrees, and majors. In fact, in the Southern region, accreditation has addressed public and policy maker interests in gauging what students gain from their academic experience by requiring, since the 1980s, the assessment of student learning outcomes in colleges. Accreditation agencies also have established effective approaches to ensure that students who attend institutions achieve desired outcomes for all academic programs, not just a particular major.

While the federal government has the authority to take actions against institutions that have proven deficient, it has not used this authority regularly or consistently. A letter to Congress from the American Council on Education and 39 other organizations underscored the inability of the U.S. Department of Education to act with dispatch, noting that last year the Department announced “it would levy fines on institutions for alleged violations that occurred in 1995 -- nearly two decades prior.”

By contrast, consider that in the past decade, the Southern Association of Colleges and Schools Commission on Colleges stripped nine institutions of their accreditation status and applied hundreds of sanctions to all types of institutions (from online providers to flagship campuses) in its region alone. But when accreditors have acted boldly in recent times, they have been criticized by politicians for going too far, giving accreditors the sense that we’re “damned if we do, damned if we don’t.”

The Problem With Simple Scores

Our concern about using rating systems and scorecards for accountability is based on several factors. Beyond tilting the system toward the lowest common denominator of quality, rating approaches can create new opportunities for institutions to game the system (as with U.S. News & World Report ratings and rankings) and introduce unintended consequences as we have seen occur in K-12 education.

Over the past decade, the focus on a few narrow measures for the nation’s public schools has not led to significant achievement gains or closing achievement gaps. Instead, it has narrowed the curriculum and spurred the current public backlash against overtesting. Sadly, the data generated from this effort have provided little actionable information to help schools and states improve, but have actually masked -- not illuminated -- the root causes of problems within K-12 institutions.

Accreditors recognize that the complex nature of higher education requires that neither accreditors nor the government dictate how individual institutions meet desired outcomes. No single bright-line measure of accountability is appropriate for the vast diversity of institutions in the field, each with its own mission. The fact that students often enter and leave the system and increasingly earn credits from multiple institutions further complicates measures of accountability.

Moreover, setting minimal standards will not push institutions that think they are high performing to get better. All institutions – even those considered “elite” – need to work continually to achieve better outcomes and should have a role in identifying key outcomes and strategies for improvement that meet their specific challenges.

Accreditors also have demonstrated they are capable of addressing new challenges without strong government action. With the explosion of online providers, accreditors found a solution to address the challenges of quality control for these programs. Accrediting groups partnered with state agencies, institutions, national higher education organizations, and other stakeholders to form the State Authorization Reciprocity Agreements, which use existing regional higher education compacts to allow for participating states and institutions to operate under common, nationwide standards and procedures for regulating postsecondary distance education. This approach provides a more uniform and less costly regulatory environment for institutions, more focused oversight responsibilities for states, and better resolution of complaints without heavy-handed federal involvement.

Along with taking strong stands to sanction higher education institutions that do not meet high standards, regional accreditors are better equipped than any centralized governmental body at the state or national level to respond to the changing ecology of higher education and the explosion of online providers.

We argue for serious -- not checklist -- approaches to accountability that support improving institutional performance over time and hold institutions of all stripes to a broad array of criteria that make them better, not simply more compliant.

Belle S. Wheelan is president of the Southern Association of Colleges and Schools Commission on Colleges, the regional accrediting body for 11 states and Latin America. Mark A. Elgart is founding president and chief executive officer for AdvancED, the world’s largest accrediting body and parent organization for three regional K-12 accreditors.

The media should cast a more skeptical eye on higher ed reforms (essay)

It’s September, and therefore time once again to clear out this year’s collection of task force, blue-ribbon panel, and conference reports to make room for the new harvest. Sad. Every one of these efforts was once graced by a newspaper article, often with a breathless headline, reporting on another well-intentioned group’s solution to one or another of higher education’s problems.

By now we know that much of this work will have little positive impact on higher education, and realize that some of it might have been harmful. The question in either case is, where was the press?

Where were the challenges, however delicately phrased, asking about evidence, methodology, experimentation or concrete results? Why were press releases taken at face value, and why was there no follow-up to explore whether the various studies had any relevance or import in the real world?

The journalists I know are certainly equal to the task: bright, invested, interesting. But along with the excellent writing, where is the healthy skepticism and the questioning attitude of the scholar and the journalist?

This absence of a critical attitude has consequences. A myth, given voice, can cause untold harm. In one extreme example, the canard that accreditors trooped through schools “counting books” enabled a mindless focus on irrelevant measured learning outcomes, bright lines, metrics, rubrics and the like. This helped erode one of the most effective characteristics of accreditation and gave rise to a host of alternatives, once again unexamined, unreviewed, and unchallenged -- but with enough press space to enable them to take root.

Many of us do apply a healthy dose of constructive skepticism to the new, the untested, and the unverified. But it’s only reporters and journalists who have the ability to voice such concerns in the press.

No doubt it’s more pleasant to write about promising new developments than to express concern and caution. But don’t we have a right to expect this as well? Surely de Tocqueville’s press, whose "eye is always open" and which "forces public men to appear before the tribunal of public opinion" has bequeathed a sense of responsibility to probe and to scrutinize proposals and plans as well as people.

Consider, for example, the attitude of the press to MOOCs. First came the thrilling stories of millions of people studying quantum electrodynamics, as well as the heartwarming tale of the little girl high in the Alps learning Esperanto from a MOOC while guarding the family’s sheep. Or something.

The MOOC ardor has cooled, but it’s not because of a mature, responsible examination by the press.

The mob calling for disruption hasn’t dispersed; it’s just that the watchword is now "innovation." Any proposal that claims to teach students more effectively, at a lower cost and a quicker pace, is granted a place in the sun, while faculty and institutions are labeled obstructionists trying to save their jobs.

That responsible voices don’t get heard often enough might be partially our fault. Even though every journalist went to college, this personal experience was necessarily limited. Higher education is maddeningly diverse, and writers should be invited to observe or participate in a variety of classes, at different levels and in all kinds of schools.

Accrediting agencies should invite more reporters to join site visits. Reality is a powerful teacher and bright journalists would make excellent students.

Reporters who understand higher education would also be more effective in examining proposed legislation. We need a questioning eye placed on unworkable or unrealistic initiatives to ensure that higher education not be harmed – as has been the case so often in the past.

Senator Tom Harkin’s recent Higher Education Act bill contains language that would make accreditation totally ineffective. One hopes that language will be removed in future iterations of the legislation.

But wouldn’t we be better off if searching questions came from an independent, informed, and insistent press?

 

Bernard Fryshman is a professor of physics and former accreditor.
