Colleges Spent Millions on Bowl Travel, But Made More

Colleges in the National Collegiate Athletic Association’s Football Bowl Subdivision spent $90.3 million traveling to and from 35 bowl games last year, but they still came out ahead thanks to the $300.8 million that was returned to the conferences and, in turn, their member campuses, according to an NCAA audit. In the Southeastern Conference, where colleges both made and spent the most, campuses received $52,278,677 in bowl payouts and incurred $14,762,565 in expenses. However, this was the first year the NCAA did not count the bowl bonuses that institutions awarded to coaches as expenses.

Arkansas Baptist Faculty Unpaid Since November 1

Arkansas Baptist College faculty members have not been paid since Nov. 1, KTHV News reported. The Faculty Senate also released a letter calling for the removal of President Fitz Hill, questioning his financial decisions and saying that he was not supporting the principles of shared governance. The college responded with a statement saying that the faculty accusations were inaccurate.

Essay defends study questioning merits of performance funding

Policy making is difficult and complex; evaluating the effects of policy can also be quite difficult. Nevertheless, it is important that researchers and policy analysts undertake the hard work of asking difficult questions and doing their best to answer those questions.

This is what we attempted to do when we undertook a yearlong effort to evaluate the effects of performance funding on degree completions. This effort has culminated in two peer-reviewed papers and one policy brief, which summarizes the results of those papers. Our policy brief was widely distributed, and its results were discussed in a recent Inside Higher Ed article.

Recently, Nancy Shulock (of California State University at Sacramento) and Martha Snyder (of HCM Strategists, a consulting firm) responded to our policy brief with some sharp criticism in these pages. As academics, we are no strangers to criticism; in fact, we welcome it. While they rightly noted the need for stronger evidence to guide the performance funding debate, they also argued that we produced “a flawed piece of research,” that our work was “simplistic,” and that it merely “compares outcomes of states where the policy was in force to those where it was not.”

This is not only an inaccurate representation of our study; it also reveals an unfortunate misunderstanding of the latest innovations in social science research. We see this as an opportunity to share some insights into the analytical technique about which Shulock and Snyder are skeptical.

The most reliable method of determining whether a policy intervention had an impact on an outcome is an experimental design. In this instance, it would require that we randomly assign some states to adopt performance funding while others retain the traditional financing model. Because this is impossible, “quasi-experimental” research designs can be used to simulate experiments. The U.S. Department of Education sees experimental and quasi-experimental research as “the most rigorous methods to address the question of project effectiveness,” and the American Educational Research Association actively encourages scholars to use these techniques when experiments are not possible to undertake.

We chose the quasi-experimental design called “differences-in-differences,” in which we compared performance-funding states with non-performance funding states (one difference) in the years before and after the policy intervention (the other difference). The difference in these differences told us much more about the policy’s impact than traditional regression analysis or descriptive statistics could. Unfortunately, most of the quantitative research on performance funding is just that -- traditional regression or descriptive analysis -- and neither strategy can provide rigorous or convincing evidence of the policy’s impacts. For an introduction to the method, see here and here.
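
To make the logic concrete, here is a stylized illustration of a two-period differences-in-differences estimate. The states, numbers, and model below are hypothetical and are not the data or specification from our study; they simply show how the comparison group nets out changes that would have occurred anyway.

```python
# Stylized two-period differences-in-differences example.
# The states, numbers, and specification are hypothetical; they are NOT
# the data or model used in our study.
import pandas as pd
import statsmodels.formula.api as smf

# One state adopts performance funding between the two periods ("treated"),
# the other never does; "post" marks the years after the policy change.
data = pd.DataFrame({
    "state":       ["A", "A", "B", "B"],
    "treated":     [1, 1, 0, 0],
    "post":        [0, 1, 0, 1],
    "completions": [10000, 10800, 9000, 9700],
})

# The coefficient on the interaction term is the difference-in-differences:
# (change in the adopting state) minus (change in the comparison state).
model = smf.ols("completions ~ treated + post + treated:post", data=data).fit()
print(model.params["treated:post"])  # (10800 - 10000) - (9700 - 9000) = 100
```

The comparison states absorb whatever would have happened to completions in the absence of the policy, which is precisely what a simple before-and-after comparison of adopting states cannot do.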

Every study has its limitations, and ours is no different. On page 3 of the brief (and in more detail in our full papers) we explain some of these issues and the steps we took to test the robustness of our findings. These steps include controlling for multiple factors (e.g., state population, economic conditions, tuition, and enrollment patterns) that might have affected degree completions in both the performance funding states and the non-performance funding states. Further, Shulock and Snyder claim that we “failed to differentiate among states in terms of when performance funding was implemented,” when in fact we do control for this, as explained in our full papers.
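
Again, purely as an illustration -- a sketch with made-up data and hypothetical variable names, not our actual specification -- a generalized differences-in-differences model with state and year fixed effects lets the policy indicator switch on in each state's own adoption year and includes such covariates as controls:

```python
# Stylized generalized (staggered-adoption) differences-in-differences sketch.
# The panel, adoption years, and variable names below are hypothetical; this
# is NOT the specification or data from our study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
adoption_year = {"A": 2000, "B": 2005, "C": None, "D": None}  # C and D never adopt

rows = []
for state, adopted in adoption_year.items():
    for year in range(1995, 2011):
        rows.append({
            "state": state,
            "year": year,
            # The policy indicator turns on in each state's own adoption year.
            "perf_funding": int(adopted is not None and year >= adopted),
            # Stand-ins for time-varying state characteristics (controls).
            "population": rng.normal(5.0, 0.1),
            "unemployment": rng.normal(6.0, 1.0),
            "tuition": rng.normal(8.0, 0.5),
            "completions": rng.normal(10000, 300),
        })
panel = pd.DataFrame(rows)

# State fixed effects absorb stable differences between states; year fixed
# effects absorb shocks common to all states; the coefficient on perf_funding
# is the policy estimate. In practice, standard errors would be clustered
# by state.
model = smf.ols(
    "completions ~ perf_funding + population + unemployment + tuition"
    " + C(state) + C(year)",
    data=panel,
).fit()
print(model.params["perf_funding"])
```

The point of the sketch is simply that the timing of implementation enters the model state by state rather than as a single national switch date.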

We do not believe that introducing empirical evidence into the debates about performance funding is dangerous. Rather, we believe such evidence is sorely missing. We also understand that performance funding is a political issue and one that is hotly debated. Because of this, it can be dangerous to promote expensive policies without strong empirical evidence of positive impacts. We wish this debate were conducted with more transparency about these politics, as well as with a better understanding of the latest developments in social science research design.

The authors raise a second point that requires a response: their argument that we selected the wrong performance funding states. We disagree. The process of identifying these states required painstaking attention to detail and member checks with experts in the field, especially when examining a 20-year period (1990-2010). In our full studies, we provide additional information beyond what is included in our brief (see endnote 8) about how we selected our states.

The authors suggested that we misclassified Texas and Washington. With Texas, our documents show that in 2009, SB 1 approved “Performance Incentive” funding for the biennium. Perhaps something changed after that year that we missed -- and that would be a valid critique -- but we have no evidence of it. The authors rightly noticed that our map incorrectly coded Washington state as having performance funding for both four-year and two-year colleges when in fact the policy applies only to two-year colleges. We classified Washington correctly in our analysis, and it is displayed correctly in the brief (see Table 2).

All of these details are important, and we welcome critiques from our colleagues. After all, no single study can fully explain a phenomenon; it is only through the accumulation of knowledge from multiple sources that we come to see the full picture. Policy briefs are smaller fragments of this picture than full studies are, so we encourage readers to look at both the brief and the full studies to form their opinions about this research.

We agree with the authors that there is much our brief does not tell us and that there are any number of other outcomes one could use to evaluate performance funding. Clearly, performance funding policies deserve more attention, and we intend to conduct more studies in the years to come. So far, all we can say with much confidence is that, on average and in the vast majority of cases, performance funding either had no effect on degree completions or had a negative effect.

We feel that this is an important finding and that it does “serve as a cautionary tale.” Policy makers would be wise to weigh our findings alongside other information and considerations when deciding whether to implement performance funding in their states and, if so, what form it might take.

Designing and implementing performance funding is a costly endeavor. It is costly in terms of the political capital expended by state lawmakers; the time devoted by lawmakers, state agency staff, and institutional leaders; and the amount of money devoted to these programs. Therefore, bringing rigorous empirical analysis into the discussion and debate is important and worthwhile.

But just as the authors say performance funding “should not be dismissed in one fell swoop,” it should not be embraced in one fell swoop either. This is especially true given the mounting evidence (for example here, here, here, and here) that these efforts may not actually work in the same way the authors believe they should.

Claiming that there is “indisputable evidence that incentives matter in higher education” is a bold proposition in light of these studies and others. Only time will tell as more studies come out. Until then, we readily agree with some of the authors’ points and critiques, and we would not have decided to draft this reply had they provided an accurate representation of our study’s methods.

David Tandberg is assistant professor of higher education at Florida State University. Nicholas Hillman is an assistant professor of educational leadership and policy analysis at the University of Wisconsin at Madison.

Temple University Will Drop 7 Sports Teams

Temple University announced Friday that it will drop seven intercollegiate athletic teams, leaving it with 17. Five men's teams will be eliminated -- baseball, crew, gymnastics, outdoor track and field, and indoor track and field. Two women's teams -- softball and rowing -- will be eliminated. A statement from Kevin Clark, the director of athletics, said that the university needed to focus athletics spending on other programs. "Temple does not have the resources to equip, staff, and provide a positive competitive experience for 24 varsity sports. Continuing this model does a disservice to our student-athletes," said Clark. "We need to have the right-sized program to create a sustainable model."

Robert Morris U. Will Eliminate 7 Teams

Robert Morris University this week announced plans to eliminate 7 of its 23 athletic teams. The Pennsylvania-based university said that savings will be used to finance improvements in the remaining athletic programs. The men's teams being eliminated are track (indoor and outdoor), cross country, and tennis. The women’s sports are field hockey, golf and tennis.


Performance funding isn't perfect, but a recent study shortchanges it (essay)

A recent research paper published by the Wisconsin Center for the Advancement of Postsecondary Education and reported on by Inside Higher Ed criticized states' efforts to fund higher education based in part on outcomes, in addition to enrollment. The authors, David Tandberg and Nicholas Hillman, hoped to provide a "cautionary tale" for those looking to performance funding as a "quick fix."

While we agree that performance-based funding is not the only mechanism for driving change, what we certainly do not need are impulsive conclusions that ignore positive results and financial context. With serious problems plaguing American higher education, accompanied by equally serious efforts across the country to address them, it is disheartening to see a flawed piece of research mischaracterize the work on finance reform and potentially set back one important effort, among many, to improve student success in postsecondary education.

As two individuals who have studied performance funding in depth, we know that performance funding is a piece of the puzzle that can provide an intuitive, effective incentive for adopting best practices for student success and encourage others to do so. Our perspective is based on the logical belief that tying some funding dollars to results will provide an incentive to pursue those results. This approach should not be dismissed in one fell swoop. 

We are dismayed that the authors were willing to assert an authoritative conclusion from such simplistic research. The study compares outcomes of states "where the policy was in force" to those where it was not -- as if "performance funding" is a monolithic policy everywhere it has been adopted.

The authors failed to differentiate among states in terms of when performance funding was implemented, how much money is at stake, whether performance funds are "add ins" or part of base funding formulas, the metrics used to define and measure "performance," and the extent to which "stop loss" provisions have limited actual change in allocations. These are critical design issues that vary widely and that have evolved dramatically over the 20-year period the authors used to decide if "the policy was in force" or not.

Treating this diverse array of unique approaches as one policy ignores the thoughtful work that educators and policy makers are currently engaged in to learn from past mistakes and to improve the design of performance funding systems. Even a well-designed study would probably fail to reveal positive impacts yet, as states are only now trying out new and better approaches -- certainly not the "rush" to adopting a "quick fix" that the authors assert. It could just as easily be argued that more traditional funding models actually harm institutions trying to make difficult and necessary changes in the best interest of students and their success (see here and here).

The simplistic approach is exacerbated by two other design problems. First, we find errors in the map indicating the status of performance funding. Texas, for example, has only recently implemented (passed in spring 2013) a performance funding model for its community colleges; it has yet to affect any budget allocations. The recommended four-year model was not passed. Washington has a small performance funding program for its two-year colleges but none for its universities. Yet the map shows both states with performance funding operational for both two-year and four-year sectors.

Second, the only outcome examined by the authors was degree completions, as it "is the only measure that is common among all states currently using performance funding." While that may be convenient for running a regression analysis, it ignores current thinking about appropriate metrics that honor different institutional missions and provide useful information to drive institutional improvement. The authors make passing reference to different measures at the end of the article but make no effort to incorporate that realism or complexity into their statistical model.

On an apparent mission to discredit performance funding, the authors showed a surprising lack of curiosity about their own findings. They found eight states where performance funding had a positive, significant effect on degree production, but rather than examine why that might be, they took apparent comfort in the finding that there were "far more examples" of performance funding failing the significance tests.

"While it may be worthwhile to examine the program features of those states where performance funding had a positive impact on degree completions," they write, "the overall story of our state results serves as a cautionary tale." Mission accomplished.

In their conclusion they assert that performance funding lacks "a compelling theory of action" to explain how and why it might change institutional behaviors.

We strongly disagree. The theory of action behind performance funding is simple: financial incentives shape behaviors. Anyone doubting the conceptual soundness of performance funding is, in effect, doubting that people respond to fiscal incentives. The indisputable evidence that incentives matter in higher education is the overwhelming priority and attention that postsecondary faculty and staff have placed, over the years, on increasing enrollments and meeting enrollment targets, with enrollment-driven budgets.

The logic of performance funding is simply that adding incentives for specified outcomes would encourage individuals to redirect a portion of that priority and attention to achieving those outcomes. Accepting this logic is to affirm the potential of performance funding to change institutional behaviors and student outcomes. It is not to defend any and all versions of performance funding that have been implemented, many of which have been poorly done. And it is not to criticize the daily efforts of faculty and staff, who are committed to student success but cannot be faulted for doing what matters to maintain budgets.

Surely there are other means -- and more powerful means -- to achieve state and national goals of improving student success, as the authors assert. But just as surely it makes sense to align state investments with the student success outcomes that we all seek.

Nancy Shulock is executive director of the Institute for Higher Education Leadership & Policy at California State University at Sacramento, and Martha Snyder is senior associate at HCM Strategists.

University Research Spending Flat in 2012

Research and development spending by colleges and universities in 2012 fell for the first time since 1974 when adjusted for inflation, the National Science Foundation said last week.

Expenditures on R&D rose slightly in current dollars, to $65.8 billion from $65.3 billion in 2011; federal, state and local spending actually declined, but institutions' own research spending rose slightly, as seen in the table below.

When adjusted for inflation (in 2005 dollars), though, all research expenditures declined, driven down by a steady drop in funds from the federal stimulus legislation of 2009. The figures in the table below are in millions of current dollars.

Fiscal Year    All R&D Spending    Federal Govt.    State and Local Govt.    Institution Funds    Business    Other
2010           $61,257             $37,477          $3,853                   $11,941              $3,198      $4,088
2011           $65,274             $40,771          $3,831                   $12,601              $3,181      $4,890
2012           $65,775             $40,130          $3,704                   $13,674              $3,282      $4,984


Report Reviews Challenges Facing Higher Ed in California

California is falling behind in its ability to provide higher education to its citizens, particularly those who enroll outside the state's elite public and private universities, according to a report released Tuesday. "Boosting California's Postsecondary Education Performance," from the Committee for Economic Development, reviews the financial, economic and demographic challenges facing the state's colleges and universities and finds that much of the stress falls on the access institutions that most students attend. Given limited chances for significant infusions of new funds, the report suggests that new ways of providing education will be key. "Without quantum increases in educational access, productivity, and effectiveness of the state’s postsecondary institutions, particularly those with broad-access missions, there is little likelihood that California will have the human capital to compete successfully in the global economy or assure its citizens access to economic prosperity and a middle-class life," the report says.


Employee Stole $5 Million From Medical School Group

A former administrative employee admitted in federal court Monday that she stole more than $5 million from the Association of American Medical Colleges, The Washington Post reported. The woman was fired when the theft was discovered. Authorities said that she created bank accounts with names similar to those of groups with which the AAMC does business, then created fake invoices for those entities and paid the funds into the accounts, to which she had access.


Essay on the impact of applying corporate values to higher education

America's public research universities face a challenging economic environment characterized by rising operating costs and dwindling state resources. In response, institutions across the country have looked toward the corporate sector for cost-cutting models. The hope is that implementing these “real-world” strategies will centralize redundant tasks (allowing some to be eliminated), stimulate greater efficiency, and ensure long-term fiscal solvency.

Recent events at the University of Michigan suggest that faculty should be proactive in the face of such “corporatization” schemes, which typically are packaged and presented as necessary and consistent with a commitment to continued excellence. The wholesale application of such strategies can upend the core academic values of transparency and shared governance, and strike at the heart of workplace equity.

Early this month our university administration rolled out the “Workforce Transition” phase of its “Administrative Services Transformation” (AST) plan. From on high, with virtually no input from faculty leadership, 50 to 100 staff members in College of Literature, Science, and the Arts (LS&A) departments were informed that their positions in human resources and finance (out of an anticipated total of 325) would be eliminated by early 2014. Outside consultants, none of whom actually visited individual departments for any serious length of time, reduced these positions to what they imagined as their “basic” functions: transactional accounting and personnel paperwork.

It became clear that many of those affected fit a specific demographic: women, generally over 40 years of age, many of whom have served for multiple decades in low- to mid-level jobs without moving up the ranks. A university previously committed to gender equity placed the burden of job cuts on the backs of loyal and proven female employees.

These laid-off employees found little comfort in learning that they would be free to apply for one of 275 new positions in human resources or finance housed at an off-campus “shared services” center disconnected from intellectually vital campus life.

The resulting plan reveals no awareness of how departments function on an everyday basis. Such “shared services” models start with the presumption that every staff member is interchangeable and every department’s needs are the same. They frame departments as “customers” of centralized services, perpetuating the illusion that the university can and should function like a market. This premise devalues the local knowledge and organic interactions that make our units thrive. Indeed, it dismisses any attribute that cannot be quantitatively measured or “benchmarked.” Faculty members who reject these models quickly become characterized as “change resisters”: backward, tradition-bound, and incapable of comprehending budgetary complexities.

The absence of consultation with regard to the plan is particularly galling given that academic departments previously have worked well with the administration to keep the university in the black. Faculty members are keenly aware of our institution’s fiscal challenges and accordingly have put in place cost-cutting and consolidating measures at the micro level for the greater good.

Worries about departmental discontent with AST and shared services resulted in increasing secrecy around the planned layoffs. In an unprecedented move, department chairs and administrators were sworn to silence by "gag orders" prohibiting them from discussing the shared services plan even with each other. Perturbed, close to 20 department chairs wrote a joint letter to top university executives expressing their dismay. As one department chair put it, "The staff don't know if they can trust the faculty, the faculty don't know if they trust the administration."

Within a few days, at least five LS&A departments had written collective letters of protest, signed by hundreds of faculty members and graduate students.  Over the past few weeks, that chorus of opposition has only intensified as faculty members from all corners of our campus have challenged AST. Some have called for a one- to two-year moratorium and others for an outright suspension of the program.

The outcry against the planned transition reflects the growing rift between departmental units and the central administration at the University of Michigan. Championed as an astute financial fix by a cadre hidden away in the upper-level bureaucracy, the shared-services model is the brainchild of Accenture, an outside consulting firm that the university has also hired for a multimillion-dollar IT rationalization project.

Caught off-guard by the strong pushback, the administration has issued several messages admitting that its communication strategies around these changes were inadequate, stating that layoffs will be avoided for now, and assuring us that there will be greater consultation and transparency going forward.

While these definitely are hopeful signs, important questions about institutional priorities and accountability have arisen.

Initially, the university’s consultants claimed that AST would yield savings of $17 million. Over time that figure shrank to $5 million, and by some accounts it is now reputed to be as low as $2 million. Yet the university has already reportedly spent at least $3 million on this effort, with even more spending on the horizon.

Where are the cost savings? How much more will the university spend on Accenture and other outside consultants? How will replacing or shifting valued employees, even at lower numbers and salaries, from their departmental homes to what essentially is a glorified offsite “call center” actually enhance efficiency? How can a university ostensibly committed to gender equity justify making long-serving and superb female employees pay the price of AST? What credible proof is there that centralized management will provide any budgetary or administrative benefits to the specialized needs of individual departments?

The implications of these questions are thrown into starker relief by the fact that, almost to the day the layoffs were announced, the university launched its most ambitious capital campaign, “Victors for Michigan,” with festivities costing more than $750,000 and a goal of raising $4 billion.

Whether the collective protest initiated by a critical mass of faculty will result in change or reversal remains to be seen. Nevertheless, the past few weeks have been a wake-up call. Faculty must educate themselves about the basic fiscal operations of the institution in these changing times and reassert their leadership. Gardens, after all, require frequent tending.

Otherwise, we remain vulnerable to opportunistic management consultants seeking to use fiscal crisis as a source of profit. Public institutions that remain under the spell of misleading corporate promises will ultimately save little and lose a great deal.


Anthony Mora is associate professor of American culture and history at the University of Michigan. Alexandra Minna Stern is professor of American culture and history and of obstetrics and gynecology at the University of Michigan.
