State policy

Cross-State Cooperation

Reciprocity agreements would make the U.S. government's "state authorization" rule less burdensome for institutions and state agencies, but agreeing on standards could be hard.

On the Brink

West Virginia must come up with $100 million to ensure the future of one of its public colleges, a study finds.

Closing Time in California

With the state about to shutter its higher education coordinating board, education experts analyze a higher ed system that may be taking a nosedive.

Texas-Size Compromise

U. of Texas chancellor's plan to improve accountability and productivity receives praise from several directions, a rarity in the state these days.

Reign of the Politician-Chancellor

Former lawmakers in public-university leadership roles show Gov. Rick Perry's influence in shaping Texas' higher education institutions.

Freedom at a Price

Ohio governor's plan to deregulate state universities could spark another round of divisive debates.

Does Performance Funding Work?

Study finds little impact of state formulas that tie colleges' funds to outcomes -- but asserts that expanded programs might work.

Budgets Half Empty, Glass Half Full

While many public colleges will see state appropriations cut upwards of 10 percent, officials say the cuts aren't as bad as they could have been given the pressures facing lawmakers.

Accountability, Improvement and Money

Unfortunately, some of us are old enough to have passed through various incarnations of the accountability movement in higher education. Periodically, university people or their critics rediscover accountability, as if being accountable to students, parents, legislators, donors, federal agencies, and other institutional constituencies were something new and unrecognized by our colleagues. We appear to have entered another cycle, signaled by the publication last month of a call to action by the State Higher Education Executive Officers (SHEEO) association, with support from the Ford Foundation, called "Accountability for Better Results."

The SHEEO report has the virtue of recognizing many of the reasons why state-level accountability systems fail, and it focuses its attention primarily on the issue of access and graduation rates. While this is a currently popular and important topic, the report also illustrates why the notion of "accountability" by itself has little meaning. Universities and colleges have many constituencies, consumers, funding groups, interested parties, and friends. Each group expects the university to do things in ways that satisfy its goals and objectives, and seeks "accountability" from the institution to ensure that its priorities drive the university's performance. While each of these widely differentiated accountability goals may be appropriate for the group that holds it, their sum does not approach anything like "institutional accountability."

Accountability has special meaning in public universities where it usually signifies a response to the concerns of state legislators and other public constituencies that a campus is actually producing what the state wants with the money the state provides. This is the most common form of accountability, and often leads to accountability systems or projects that attempt to put all institutions of higher education into a common framework to ensure the wise expenditure of state money on the delivery of higher education products to the people.

In this form, accountability is usually a great time sink with no particular value, although it does keep everyone occupied generating volumes of dubious data in complex ways that exhaust the participants before producing any useful impact. The SHEEO report is particularly clear on this point.

This form of accountability has almost no practical utility because state agencies cannot accurately distinguish one institution of higher education from another for the purposes of providing differential funding. If the state accountability system does not provide differential funding for differential performance, then the exercise is more an intense conversation about what good things the higher education system should be doing than a process for creating a system that could actually hold institutions accountable for their performance.

Public agencies rarely hold institutions accountable because to do so requires that they punish the poor performers or at least reward the good performers. No institution wants a designation as a poor performer. An institution with problematic performance characteristics as measured by some system will mobilize every political agent at its disposal (local legislators, powerful alumni and friends, student advocates, parents) to modify the accountability criteria to include sufficient indicators on which it can perform well.

In response to this political pressure, and to accommodate the many different types and characteristics of institutions, the accountability system usually ends up with 20, 30 or more accountability measures. No institution will do well on all of them, and every institution will do well on many of them, so in the end, all institutions will qualify as reasonably effective to very effective, and all will remain funded more or less as before.

The life cycle of this process is quite long and provides considerable opportunity for impassioned rhetoric about how well individual institutions serve their students and communities, how effectively their research programs enhance economic development, how much their public service activities benefit the state, and so on. At the end, when most participants have exhausted their energy and rhetoric, and when the accountability system has achieved stasis, everyone will declare victory and the accountability impulse will go dormant for several years until it is rediscovered.

Often, state accountability systems offer systematic data reporting schemes with goals and targets defined in terms of improvement, but without incentives or sanctions. These systems assume that measurement alone will motivate institutions to improve in order to avoid being marked as ineffective. This kind of system has value in identifying the state's goals and objectives for its institutions, but it often reduces accountability to the reporting of data rather than the allocation of money, where it could make a significant difference.

If an institution, state, or other entity wants to insist on improved performance from universities, it must specify the performance it seeks and then adjust appropriations to reward those who meet or exceed the established standard. Reductions in state budgets for institutions that fail to perform are rare for obvious political reasons, but the least effective system is one that allocates funds to poorly performing institutions in the expectation that rewarding poor performance will motivate improvement. One key to effective performance improvement, reinforced in the SHEEO report, is strictly limiting the number of key indicators used to measure improvement. If the number of indicators exceeds 10, the exercise is likely to find every institution performing well on some indicator and therefore all deserving of continued support.

Differing Directions

Often the skepticism that surrounds state accountability systems stems from a mismatch between the goals of the state (with an investment of perhaps 30 percent or less of the institutional budget) and those of the institutions. Campuses may seek nationally competitive performance in research, teaching, outreach, and other activities. States may seek improvement in access and student graduation rates as the primary determinants of accountability. Institutions may see the state’s efforts as detracting from the institution’s drive toward national reputation and success. Such mismatches in goals and objectives often weaken the effectiveness of state accountability programs. 

Universities are very complex and serve many constituencies with many different expectations about the institutions’ activities. Improvement comes from focusing carefully on particular aspects of an institution’s performance, identifying reliable and preferably nationally referenced indicators, and then investing in success. While the selection of improvement goals and the development of good measures are essential, the most important element in all improvement programs is the ability to move money to reward success.

If an accountability system only measures improvement and celebrates success, it will produce a warm glow of short duration. Performance improvement is hard work and takes time, while campus budgets change every year. Effective measurement is often time consuming and sometimes difficult, and campus units will not participate effectively unless there is a reward. The reward that all higher education institutions and their constituent units understand is money. This is not necessarily money reflected in salary increases, although that is surely effective in some contexts.

Primarily what motivates university improvement, however, is the opportunity to enhance the capacity of a campus. If a campus teaches more students, and as a result earns the opportunity to recruit additional faculty members, this financial reward is of major significance and will motivate continued improvement. At the same time, the campus that seeks improvement cannot reward failure. If enrollment declines, the campus should not receive compensatory funding in hopes of future improvement. Instead, a poorly performing campus should work harder to get better so it too can earn additional support.

In public institutions, the small proportion of state funding within the total budget limits the ability of state systems to influence campus behavior by reallocating funding. In particular, in many states, most of the public money pays for salaries, and reallocating funds proves difficult. Nonetheless, most public systems and legislatures can identify some funds to allocate as a reward for improved performance. Even relatively small budget increases represent a significant reward for campus achievements.

Accountability, as the SHEEO report highlights, is a word with no meaning until we define the measures and the purpose. If we mean accountability to satisfy public expectations for multiple institutions on many variables, we can expect that the exercise will be time consuming and of little practical impact. If we mean accountability to improve the institution’s performance in specific ways, then we know we need to develop a few key measures and move at least some money to reward improvement. 

John V. Lombardi

John V. Lombardi, chancellor and professor of history at the University of Massachusetts Amherst, writes Reality Check every two weeks.

Grade Inflation and Abdication

Over the last generation, most colleges and universities have experienced considerable grade inflation. Much lamented by traditionalists and explained away or minimized by more permissive faculty, the phenomenon presents itself both as an increase in students' grade point averages at graduation and as an increase in high grades and a decrease in low grades recorded for individual courses. Grade inflation is more prevalent in humanities and social science courses than in science and math, and in elite private institutions than in public ones, and discussion about it generates a great deal of heat, if not always as much light.

While the debate on the moral virtues of any particular form of grade distribution fascinates as a cultural artifact, the variability of grading standards has a more practical consequence. As grades increasingly reflect idiosyncratic and locally defined performance levels, their value for outside consumers of university products declines. Who knows what an "A" in American History means? Is the A student one of the top 10 percent in the class or one of the top 50 percent?

Fuzziness in grading reflects a general fuzziness in defining clearly what we teach our students and what we expect of them. When external observers -- parents, employers, graduate schools, or professional schools -- ask us to defend our grading practices, our answers tend toward a vague if earnest exposition on the complexity of learning, the motivational differences in evaluation techniques, and the pedagogical value of learning over grading. All of this may well be true in some abstract sense, but our consumers find our explanations unpersuasive and on occasion misleading.

They turn, then, to various forms of standardized testing. When the grades of an undergraduate bear an unpredictable relationship to any standard measure of performance, and when high-quality institutions that should set the performance standard routinely give large proportions of their students "A" grades, others must look elsewhere for some reliable reference. A 3.95 GPA should reflect the same level of preparation for students from different institutions.

Because they do not, we turn to the GMAT, LSAT, GRE, or MCAT, to take four famous examples. These tests normalize the results from the standards-free zone of American higher education. The students who aspire to law or medical school all have good grades, especially in history or organic chemistry. In some cases, a student’s college grades may prove little more than his or her ability to fulfill requirements and mean considerably less than the results of a standardized test that attempts to identify precisely what the student knows that is relevant to the next level of academic activity.

Although many of us worry that these tests may be biased against various subpopulations, emphasize the wrong kind of knowledge, and encourage students to waste time and money on test prep courses, they have one virtue our grading system does not provide: The tests offer a standardized measure of a specific and clearly defined subset of knowledge deemed useful by those who require them for admission to graduate or professional study.

Measuring State Investment

If the confusion over the value of grades and test scores were not enough, we discover that at least for public institutions, our state accountability systems focus heavily on an attempt to determine whether student performance reflects a reasonable value for taxpayer investment in colleges and universities. This accountability process engages a wide range of measures -- time to degree, graduation rate, student satisfaction, employment, graduate and professional admission, and other indicators of undergraduate performance -- but even with the serious defects in most of these systems, they respond to the same problems as do standardized tests.

Our friends and supporters have little confidence in the self-generated mechanisms we use to specify the achievement of our students. If the legislature believed that students graduating with a 3.0 GPA were all good performers measured against a rigorous national standard applied to reasonably comparable curricula, they would not worry much about accountability. They would just observe whether our students learned enough to earn a nationally normed 3.0 GPA. 

Of course, we have no such mechanism to validate the performance of our students. We do not know whether our graduates leave better or worse prepared than the students from other institutions. We too, in recognition of the abdication of our own academic authority as undergraduate institutions, rely on the GRE, MCAT, LSAT, and GMAT to tell us whether the students who apply (including our own graduates) can meet the challenges of advanced study at our own universities.

Partly this follows from another peculiarity of the competitive nature of the American higher education industry. Those institutions we deem most selective enroll students with high SATs on average (recognizing that a high school record is valuable only when validated in some fashion by a standardized test). Moreover, because selective institutions admit smart students who have the ability to perform well, and because these institutions have gone to such trouble to recruit them, elite colleges often feel compelled to fulfill the prophecy of the students' potential by ensuring that most graduate with GPAs in the A range. After all, they may say, average does not apply to our students because they are all, by definition, above average.

When reliable standards of performance weaken in any significant and highly competitive industry, consumers seek alternative external means of validating the quality of the services provided. The reluctance of colleges and universities, especially the best among us, to define what they expect from their students in any rigorous and comparable way brings accreditation agencies, athletic organizations, standardized test providers, and state accountability commissions into the conversation, measuring the value of the institution's results against various nationally consistent expectations of performance.

We academics dislike these intrusions into our academic space because they coerce us to teach to the tests or the accountability systems, but the real enemy is our own unwillingness to adopt rigorous national standards of our own.

John V. Lombardi

