If we are focused on improvement, we have to design a strategy for improvement. Although it is possible to have grand ideas and elaborate discussions about the importance of this or that within the university's portfolio of activities, the critical issue is to know what defines competitive achievement and what drives success in this competition.
We will have a big fight about the definitions, but after we have had it, if we ask our colleagues inside and outside the university, "tell me the five universities of our type that you admire most," we will not necessarily pick the same five. Yet the five each of us picks will share these characteristics: they will have high quality students and high quality research faculty. The quality of their students will be defined by their SAT/ACT scores and high school grade point averages on admission to the university, and the quality of their faculty will be defined by the publication and research records they have achieved in competition against the best in the nation. The institutions may have other characteristics as well: championship sports, elegant facilities, elaborate programs for student engagement, powerful outreach programs, and so on. But the defining characteristics are always competitive students and competitive faculty.
When we want to improve our performance, we have to focus on getting the best students and the best faculty. We have to focus on doing what it takes to enhance the students' academic and personal experience, not because it is the right thing to do (although it is) but because a very good academic and personal experience is what attracts the best students, and the more of the best students we get, the more additional best students we can attract. Similarly, we have to focus on what it takes to recruit the best faculty, keep the best faculty, and create the support for their research that allows them to be increasingly productive at high levels of competitive quality.
How do we get there? First, we have to have the measurements that focus our attention on the achievement of these things. Second, we have to understand who is responsible for achieving improvement. These two decisions are by far the most important in the process of managing improvement: What we watch, and who is responsible for making what we watch get better.
Universities, for many reasons, have a great enthusiasm for distributing responsibility widely and thinly. So in the university, everyone is responsible for everything. The faculty think they are responsible for making decisions about the budget, the physical plant, parking, student life, fundraising, and in general about everything. They are happy to share this responsibility with many other groups: administrators, students, alumni, political actors, donors, and anyone else with an opinion. We confuse the right to have an opinion with the responsibility for doing something and accepting the consequences. In universities we tend to give everyone the authority to speak, be heard, and be accommodated, but we are not entirely sure who has the responsibility to see that what needs to be done gets done. We are even less clear about how we will connect this authority and responsibility to some form of consequence. (Accountability is the buzzword for this, but its meaning is so diluted by the political controversies surrounding the use of accountability in coercive state and national regulations that I avoid it in most cases when talking about real issues.)
The mechanisms that produce improvement involve three things, two of which we have discussed in many previous posts to our course list. The first is specifying what matters: teaching and research, students and faculty. The second is deciding how to measure these elements of performance: competitive student quality and competitive faculty research quality. The third is constructing a mechanism for connecting what matters, and our measurement of what matters, to the budget on an annual basis. When we construct a system that addresses these three elements, we will have achieved the core components of the improving university, because we will have recognized that Money Matters, Performance Counts, and Time Is the Enemy.
The structure for measuring university performance requires a simple-to-describe but difficult-to-implement system with two components: a mechanism for measuring a unit's improvement relative to its past performance, and a mechanism for measuring a unit's improvement relative to its national competitive marketplace. The first component is easy because it only requires us to collect the appropriate measurements of performance from our own data, to which we have easy access. We can do this every year. The second component is more difficult because we have to collect the appropriate measurements of performance from other institutions, some of which may be readily available, but most of which are not. In this case we may well have to reference our external marketplace on a longer cycle, say once every three years or so.
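The two components described above can be sketched in a few lines of code. This is a minimal illustration, not an actual institutional system; the function names, the metric (publications per faculty member), and all figures are assumptions invented for the example.

```python
# Hypothetical sketch of the two-component measurement system:
# (1) improvement against our own past, computed annually from our own data;
# (2) position against external comparators, collected on a longer cycle.
# All names and numbers here are illustrative assumptions.

def internal_improvement(history: dict[int, float], year: int) -> float:
    """Change in a unit's own metric versus the prior year (annual cycle)."""
    return history[year] - history[year - 1]

def external_position(our_value: float, peer_values: list[float]) -> float:
    """Our metric minus the mean of external comparators
    (data gathered, say, once every three years)."""
    peer_mean = sum(peer_values) / len(peer_values)
    return our_value - peer_mean

# Example: a department's peer-reviewed publications per faculty member.
pubs_per_faculty = {2021: 2.1, 2022: 2.4}
print(internal_improvement(pubs_per_faculty, 2022))  # year-over-year gain
print(external_position(2.4, [2.9, 3.1, 2.7]))       # gap to the peer mean
```

The point of the sketch is the asymmetry the text describes: the first function needs only our own records and can run every year, while the second depends on data we must gather from other institutions.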
The measurement system also requires us to decide where the authority and responsibility for delivering performance lie. In a university this is almost always the department or program. This is the unit that owns quality control, because it owns the content of its discipline. If a chemistry department is going to be good and get better, it is the chemistry department that must know what good means and can ensure that improvement takes place. So the primary responsibility and authority for delivering improvement rest with the department. If the department hires good faculty and motivates them to perform at high levels of research and teaching, then the department will improve. If the department hires and tenures ordinary faculty who only perform at acceptable levels, then the department will not improve. While the dean can use the measurement system to see whether the department is actually improving, the dean is almost never able to actually improve the department; only the department itself can do this. As a result, the structure for improvement has to focus its measurement system on the department.
However, from the perspective of the institution, the unit responsible for seeing that departments improve is the college or school. The dean is the university official who must operate the system that tracks the performance of the departments to ensure that they are improving. In theory the president/chancellor/provost could run this system directly on behalf of the departments, but in practice this span of control is far too great, and in the university, colleges and schools exist to manage the departments effectively.
Given this structure, what do we measure? Whatever the measures, the university seeks improvement in productivity and quality. One super professor in a department of fifty is not equivalent to fifty highly competitive and productive professors who may not all be at the super-professor level. The goal is to do a lot and to do it well. We measure research productivity and quality, and we measure teaching productivity and quality. We may find that our metrics for measuring some things are better than for others. We will surely do better measuring teaching productivity than we will measuring teaching quality, but that does not relieve us of the obligation to produce whatever evidence we have on teaching quality. We will do well measuring both the quality and productivity of research because research is nationally referenced in most cases.
The departments must design the measures of quality and productivity in their respective fields, but the measures they pick must be nationally referenced and approved by the Dean and the Provost to ensure that we create reference points that speak to the national competition. For research we may count publications in high quality journals, peer reviewed grants and contracts, exhibits in nationally significant galleries, performances in nationally significant venues, books published by reputable refereed presses, and so on. These will be different by field, but every department or field in the university will have its nationally established reference points for productivity and quality of research or creative activity.
In addition, we have another element that we always include in these metrics: money. Money matters. Some money is not within a department's ability to manage, but much money is. So for each department we count annual giving, we count grant and contract revenue spent, we count indirect cost collected, and we count other sources (which may be from distance education, executive programs, or sales of goods and services). Similarly, we count credit hours as if they were money. In all universities, credit hours are simply a proxy for the funding that comes from many sources to support teaching and students. Every university pays for credit hours with money derived from the state, tuition and fees, and perhaps other sources such as endowments. Credit hours are also the accounting mechanism we use to describe student work and the work of teaching students. Credit hours are another measure of performance, and those units (departments/programs) that teach many students require more support than those that teach fewer, because students generate revenue and incur expenses.
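The idea of counting credit hours as if they were money can be made concrete with a small sketch. Everything here is a labeled assumption: the blended rate per credit hour, the revenue categories, and the department's figures are invented for illustration, not drawn from any real budget model.

```python
# Illustrative sketch: credit hours valued at an assumed blended funding rate
# (state appropriation + tuition + other sources), added to the money the
# department counts directly. All figures are hypothetical.

def department_revenue(credit_hours: int, rate_per_hour: float,
                       grants: float, gifts: float, indirect: float,
                       other: float = 0.0) -> float:
    """Total countable revenue: credit hours treated as money at a
    per-hour rate, plus grants spent, annual giving, indirect cost
    collected, and other sources."""
    return credit_hours * rate_per_hour + grants + gifts + indirect + other

# Example: a department teaching 12,000 credit hours at an assumed
# blended rate of $300 per credit hour.
total = department_revenue(12_000, 300.0, grants=1_500_000,
                           gifts=250_000, indirect=400_000)
print(f"${total:,.0f}")
```

The design choice matches the text: once credit hours are converted to dollars, teaching revenue and research revenue sit in one comparable ledger for each department.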
Once we have our metrics established, a process that takes a great deal of time and conversation and often generates controversy, we can track our performance from where we were to where we are and see how much we have improved. We can then go outside the university to benchmark our internal data against external data on performance. We may not be able to match everything, but we will almost always find that we can match much of our data against some relatively small group of significant external reference units.
Identifying these external references to departmental counterparts is a critical process. The department proposes its counterparts to the dean, who approves their suitability and passes them on to the Provost for review to ensure that the reference points for one set of departments in a college match in significance those from other departments and colleges. There should be no fewer than three counterparts. They should be from institutions that are better than we are and that are major competitors in our field. Our goal is not to show that there are institutions like ours or institutions less effective than ours. The goal is to measure our improvement against the institutions that compete within the top category of institutions like us. We have to get better not only measured against our prior performance but against the improved performance of our competitors. Much discussion is required to settle on these comparators.
Once we have the data, we have to connect them to the university's budget process so we can adjust funding accordingly, establishing the institutional investment that follows successful improvement.
This last element is perhaps the most important of all in developing a system for institutional improvement. The only way to get consistent, sustained, and relatively rapid improvement is to ensure that the budget follows the improvement criteria. If we fund units and departments that do not improve, whether because we feel sorry for them, because they have powerful political supporters, or because they are traditionally protected within the university organization, we will create a set of incentives unrelated to improvement. In all universities, and perhaps elsewhere in society, behavior follows the money. If we fund those units that do well, improve, and increase their teaching and research productivity and quality relative to past performance and to outside comparisons, then we will see everyone focused on this kind of improvement. If we fund political positions, personal influence, external threats and coercion, or careerism by faculty and administrators, then we will see everyone focused on developing good political networks, excellent personal relationships, external pressure groups, and careerist back-scratching among ambitious faculty, staff, or administrators. By linking the budget to performance, by systematically, openly, and visibly investing in improvement, and by doing this consistently over some reasonable period of years (usually five to eight), the institution will get better fast, it will learn how to focus, and the system will likely become institutionalized and sustain itself for some time.
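The budget-follows-improvement principle can be sketched as a simple allocation rule: a pool of incremental funds divided in proportion to positive improvement scores, with non-improving units receiving nothing from the pool. The pool size, the department names, and the scores are all invented for illustration; a real process would involve far more judgment than a formula.

```python
# Illustrative sketch of "the budget follows the improvement criteria":
# incremental funds are split in proportion to positive improvement scores,
# and units that did not improve receive no share of the pool.
# Departments, scores, and the pool amount are hypothetical.

def allocate_pool(pool: float, scores: dict[str, float]) -> dict[str, float]:
    """Divide an incremental budget pool across units in proportion to
    their positive improvement scores; non-improving units get zero."""
    positive = {unit: s for unit, s in scores.items() if s > 0}
    total = sum(positive.values())
    if total == 0:
        return {unit: 0.0 for unit in scores}
    return {unit: pool * positive.get(unit, 0.0) / total for unit in scores}

scores = {"chemistry": 0.30, "history": 0.10, "physics": -0.05}
print(allocate_pool(1_000_000, scores))
```

The sketch makes the incentive visible: the only way for a unit to claim a share of new money is to post improvement, exactly the behavior the text argues the budget should reward.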
This process works, but it is anything but easy.