Ernest Boyer’s classic 1990 book, Scholarship Reconsidered, raised the academy’s consciousness concerning many types of scholarship: the scholarship of discovery, of integration, of application, and of teaching. All of these types of scholarship continue to receive our attention. But a fifth type, the scholarship of administration -- the use of a scholarly approach in performing higher education administration -- should be added to that list.

Never has the need been greater in higher education to apply our skills as scholars to the work of operating colleges and universities. Anecdotal evidence of individual institutions’ success stories may be inspirational and persuasive, but such evidence fails to provide data-informed guidance to colleges and universities trying to determine how best to use their resources to enhance student success and faculty scholarship. By turning the scholarly lens on administration and using the same careful investigation, design, assessment, and communication strategies employed in traditional research, colleges and universities can better ensure that their efforts have the greatest positive effect. Too often, and in direct contrast with how they would conduct their own research, some administrators embark on academic initiatives without first investigating what others have done, without designing their initiatives so that the results can be assessed, and without broadly communicating those results. Applying basic research principles to administration can help prevent colleges and universities from pursuing unsuccessful paths.

Finding a successful path in these times of increased financial pressure and competition may be not just advantageous but a matter of survival. As a result of the recent economic downturn, colleges and universities are caught between declining revenues of all kinds and continuously rising energy, technology, and health care costs. They need to know which strategies will provide the most benefit to their students and faculty members for the resources expended.

For example, consider student retention -- critically important to an institution both as an indicator of its success with students and as a source of revenue. Suppose the president of East Coast University has announced that $1 million is available for a special retention initiative. The provost is convinced that more full-time faculty members are needed, the vice president for administration is convinced that the solution lies in cleaning the bathrooms and classrooms more often, and the vice president for student affairs is convinced that a new wing on the sports center is definitely the answer. Each of these administrators can tell many stories supporting their point of view, recounting the concerns and complaints of students and their parents. Chances are that the administrator who tells the most persuasive stories, or who has built up the most influence with the president, or whose idea is backed by a potential donor, will get the lion’s share of the $1 million.

Now consider another way of attacking this problem. Suppose a representative sample of current students and of recently departed students is chosen and then $100,000 of the $1 million is spent to assess their satisfaction concerning all major aspects of their experiences at East Coast University. Suppose further that the results reveal that by far the most common cause of student dissatisfaction and departure is the quality of teaching. The remaining $900,000 could then be spent, quite effectively, on conducting workshops for both full-time and part-time faculty to enhance their teaching (using methods previously proven successful in the research literature, of course). Simultaneously, at no cost except time, the provost could focus on working with the department chairs and deans to provide clear expectations to the faculty with regard to their teaching performance, and to provide clear and definitive consequences should expectations not be met.

Many colleges certainly have institutional research offices, and some administrators use studies from these offices to examine the track records of various programs or key trends. However, the kind of scholarship of administration that is needed goes much further -- to actually constructing experiments, complete with control groups.

For an example that requires such an approach, first note that some 40 percent of traditional college students in the United States currently take at least one developmental course. These students, although they have graduated from high school, have been deemed not to possess the reading, writing, and/or quantitative skills necessary for college-credit courses. However, at least for students just below the cutoffs for qualifying for college-credit courses, there is no definitive evidence that taking, and even passing, developmental courses will increase the probability of these students ever graduating from college. Simply being placed directly into college-credit courses with greater support services, or some other form of support, may be just as effective. For students performing significantly below the cutoffs, we have even less information. Perhaps an intensive boot-camp-type experience would be better than separate developmental courses -- we just do not know. Some educators have assumed that developmental courses are the best way to help such students, but that assumption may be wrong. Multiple studies have tried to use existing data to determine definitively whether developmental courses help students just below the cutoffs, but all such studies are ultimately imperfect because none have had a true experimental design -- none have randomly placed students into these courses or not and then compared the two groups.

Therefore, many United States institutions of higher education will continue to spend huge proportions of their budgets each year on courses that may not be the best way to help their students. Millions of students will be placed in these courses each year, consuming hundreds of hours of each student’s time -- time that could be devoted to other activities, such as earning money or taking credit-bearing courses -- and delaying their college graduations. Why are all administrators not clamoring for data to support the existence of these types of courses? It has been argued that definitive evidence will never exist because it would not be ethical to conduct a controlled experiment on whether or not to give a student developmental courses. However, is it ethical to continue to require millions of students to take these courses when we do not even know that this is the best way to help them? What would be ethical is to conduct a controlled experiment -- one that randomly assigns students to developmental courses, other forms of assistance, or no assistance, and then assesses the results.
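To make that design concrete, here is a minimal sketch in Python of how a three-arm randomized experiment of the kind described above might be set up and summarized. Everything in it -- the arm names, student identifiers, sample size, and outcomes -- is hypothetical and purely illustrative, not data or methodology from any actual institution or study.

```python
# Minimal sketch of a three-arm randomized experiment (hypothetical data).
import random
from collections import defaultdict

ARMS = ["developmental_course", "direct_placement_with_support", "no_extra_assistance"]

def assign_arms(student_ids, seed=42):
    """Shuffle the eligible students and assign them to arms in rotation,
    so each arm receives a roughly equal number of students."""
    rng = random.Random(seed)
    shuffled = list(student_ids)
    rng.shuffle(shuffled)
    return {sid: ARMS[i % len(ARMS)] for i, sid in enumerate(shuffled)}

def graduation_rate_by_arm(assignments, graduated):
    """Compare graduation rates across arms once outcomes are known.
    `graduated` maps student id -> True/False, collected years later."""
    totals, grads = defaultdict(int), defaultdict(int)
    for sid, arm in assignments.items():
        totals[arm] += 1
        grads[arm] += int(graduated.get(sid, False))
    return {arm: grads[arm] / totals[arm] for arm in totals}

# Illustrative usage with made-up outcomes:
students = [f"S{i:03d}" for i in range(300)]
assignments = assign_arms(students)
fake_outcomes = {sid: random.random() < 0.4 for sid in students}  # placeholder only
print(graduation_rate_by_arm(assignments, fake_outcomes))
```

The point of the sketch is not the code itself but the design it encodes: random assignment up front is what makes the later comparison of graduation rates interpretable as evidence of cause and effect rather than mere correlation.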

That it is possible to take a successful scholarly approach to administration is illustrated by a third example: the MDRC study of first-year learning communities at Kingsborough Community College of the City University of New York. In this experiment, the researchers first identified new first-year students who wished to participate in a learning community. The researchers then randomly selected half of these students to participate in learning communities; the rest were not allowed to participate. Thus the resulting two groups of students were similar not only in their high school preparation and intended career goals but also in their motivation to be in a learning community, a possibly critical variable. As it turned out, the students who participated in the learning communities accumulated more credits and performed better on English skills assessment tests. It can therefore be concluded that, at least at Kingsborough Community College, learning communities do increase student success.

There are other widespread administrative practices for which no evidence of effectiveness exists. For example, consider the practice at many colleges and universities of giving faculty members reassigned time for research. Is it definitely the case that some reassigned time results in more scholarly productivity, and that more reassigned time results in even more? Perhaps a reduction in teaching of one course has very little effect, a two-course reduction has a large effect, and more than two courses has no greater effect than two.

It would be easy, however, for someone using a scholarly approach to administration to become paralyzed and never act. Controlled, experimental research is rare in higher education, so administrators cannot act only when such research supports their actions. Frequently, the only evidence available is correlational -- event A tends to occur when event B occurs -- and, as many scholars know, such a correlation does not prove that A causes B, or vice versa. Students graduate in the spring when there are many flowers, but flowers do not cause graduation. When only correlational evidence is available, administrators must use their judgment to interpret whether such a finding reflects a causal relationship, and thus whether they should act on it.

In summary, administrators need to guide their actions using the same skills that they use in conducting their research. Knowledge of what constitutes good evidence, how to obtain it, how to use it, and how to tell others about it should not get checked at the door when someone takes up residence in the chair’s, dean’s, vice president’s, or president’s office. Our institutions of higher education deserve nothing less.
