
Enrollments Dip for 2nd Year; For-Profits Drop Sharply

Overall enrollment in higher education fell by 1.5 percent in fall 2013, marking the second consecutive year of decline, the National Student Clearinghouse estimated in a report released Thursday. Enrollments edged up over fall 2012 at four-year private and public institutions, by 1.3 and 0.3 percent, respectively, but dropped by 3.1 percent at community colleges and 9.7 percent at four-year for-profit institutions, the clearinghouse said. No region gained students year over year; the Midwest suffered the largest decline (-2.6 percent), while the other regions dipped by less than a percentage point.

The report also provides data on enrollments by state and gender, among other counts.

Yeshiva U. Plans Deep Budget Cuts

Yeshiva University this week announced plans for deep budget cuts, the continuation of cuts to faculty retirement accounts and the sale of some buildings owned by the university, The Jewish Daily Forward reported. Like many universities, Yeshiva lost a considerable share of its endowment after the economic downturn started in 2008. But Yeshiva also lost about $100 million due to investments with Bernard Madoff. And the university has been sued for millions by men who say that, as boys, they were abused at the university's high school -- and that officials ignored the problem. In October, Moody's Investors Service downgraded the university's bond rating to Baa2 from Baa1, citing "the university's weak liquidity with a full draw on operating lines of credit, expected covenant breach on lines of credit, deep operating deficits driving negative cash flow, and uncertainty regarding the outcome of litigation."

Colleges Spent Millions on Bowl Travel, But Made More

Colleges in the National Collegiate Athletic Association's Football Bowl Subdivision spent $90.3 million traveling to and from 35 bowl games last year, but they still came out ahead thanks to the $300.8 million returned to their conferences and, in turn, member campuses, according to an NCAA audit. In the Southeastern Conference, where colleges both earned and spent the most, campuses received $52,278,677 in bowl payouts and incurred $14,762,565 in expenses. However, this was the first year the NCAA did not count bowl bonuses that institutions awarded to coaches as expenses.


Arkansas Baptist Faculty Unpaid Since November 1

Arkansas Baptist College faculty members have not been paid since Nov. 1, KTHV News reported. The Faculty Senate also released a letter calling for the removal of President Fitz Hill, questioning his financial decisions and saying that he was not supporting the principles of shared governance. The college responded with a statement saying that the faculty accusations were inaccurate.

Essay defends study questioning merits of performance funding

Policy making is difficult and complex; evaluating the effects of policy can also be quite difficult. Nevertheless, it is important that researchers and policy analysts undertake the hard work of asking difficult questions and doing their best to answer those questions.

This is what we attempted to do when we undertook a yearlong effort to evaluate the effects of performance funding on degree completions. This effort has culminated in two peer-reviewed papers and one policy brief which summarizes the results of those papers. Our policy brief was widely distributed and the results were discussed in a recent Inside Higher Ed article.

Recently, Nancy Shulock (of California State University at Sacramento) and Martha Snyder (of HCM Strategists, a consulting firm) responded to our policy brief with some sharp criticism in these pages. As academics, we are no strangers to criticism; in fact, we welcome it. While they rightly noted the need for stronger evidence to guide the performance funding debate, they also argued that we produced “a flawed piece of research,” that our work was “simplistic,” and that it merely “compares outcomes of states where the policy was in force to those where it was not.”

This is not only an inaccurate representation of our study, but it shows an unfortunate misunderstanding of the latest innovations in social science research. We see this as an opportunity to share some insights into the analytical technique Shulock and Snyder are skeptical of.

The surest way to determine whether a policy intervention had an impact on an outcome is an experimental design. In this instance, it would require that we randomly assign some states to adopt performance funding while others retain the traditional financing model. But because this is impossible, "quasi-experimental" research designs can be used to simulate experiments. The U.S. Department of Education regards experimental and quasi-experimental research as "the most rigorous methods to address the question of project effectiveness," and the American Educational Research Association actively encourages scholars to use these techniques when experiments are not possible to undertake.

We chose the quasi-experimental design called "differences-in-differences," in which we compared performance-funding states with non-performance-funding states (one difference) in the years before and after the policy intervention (the other difference). The difference in these differences told us much more about the policy's impact than traditional regression analysis or descriptive statistics could. Unfortunately, most of the quantitative research on performance funding is just that – traditional regression or descriptive analysis – and neither strategy can provide rigorous or convincing evidence of the policy's impacts. For an introduction to the method, see here and here.
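
For readers unfamiliar with the technique, here is a minimal sketch of the canonical two-group, two-period specification. The notation is illustrative rather than the exact model in our papers, which add control variables and account for states adopting the policy in different years:

Y_st = β0 + β1(PF_s) + β2(Post_t) + β3(PF_s × Post_t) + ε_st

Here Y_st is degree completions in state s and year t, PF_s indicates whether the state adopted performance funding, Post_t indicates the post-adoption years, and β3, the coefficient on the interaction term, is the differences-in-differences estimate: the change in completions among adopting states beyond the change observed among non-adopting states over the same period.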

Every study has its limitations and ours is no different. On page 3 of the brief (and in more detail in our full papers) we explain some of these issues and the steps we took to test the robustness of our findings. This includes controlling for multiple factors (e.g., state population, economic conditions, tuition, enrollment patterns, etc.) that might have affected degree completions in both the performance funding states and the non-performance funding states. Further, Shulock and Snyder claim that we “failed to differentiate among states in terms of when performance funding was implemented,” when in fact we do control for this as explained in our full papers.

We do not believe that introducing empirical evidence into the debates about performance funding is dangerous. Rather, we believe it is sorely missing. We also understand that performance funding is a political issue and one that is hotly debated. Because of this, it can be dangerous to promote expensive policies without strong empirical evidence of positive impacts. We wish this debate were conducted with more transparency about these politics, as well as with a better understanding of the latest developments in social science research design.

The authors take issue with a second point that requires a response – their argument that we selected the wrong performance funding states. We disagree. The process of identifying these states required painstaking attention to detail and member-checks from experts in the field, especially when examining a 20-year period (1990-2010). In our full studies, we provide additional information beyond what is included in our brief (see endnote 8) about how we selected our states.

The authors suggested that we misclassified Texas and Washington. With Texas, our documents show that in 2009, SB 1 approved "Performance Incentive" funding for the biennium. Perhaps something changed after that year that we missed, and this would be a valid critique, but we have no evidence of that. The authors rightly noticed that our map incorrectly coded Washington state as having performance funding for both four-year and two-year colleges when in fact the policy applies only to two-year colleges. We classified Washington correctly in our analysis, and it is displayed correctly in the brief (see Table 2).

All of these details are important, and we welcome critiques from our colleagues. After all, no single study can fully explain a phenomenon; only the accumulation of knowledge from multiple sources allows us to see the full picture. Policy briefs are smaller fragments of this picture than are full studies, so we encourage readers to look at both the brief and the full studies to form their opinions about this research.

We agree with the authors that there is much that our brief does not tell us and that there are any number of other outcomes one could use to evaluate performance funding. Clearly, performance funding policies deserve more attention, and we intend to conduct more studies in the years to come. So far, all we can say with much confidence is that, on average and in the vast majority of cases, performance funding either had no effect on degree completions or had a negative effect.

We feel that this is an important finding and that it does "serve as a cautionary tale." Policy makers would be wise to weigh our findings alongside other information and considerations when deciding whether to implement performance funding in their states and, if so, what form it might take.

Designing and implementing performance funding is a costly endeavor. It is costly in terms of the political capital expended by state lawmakers; the time devoted by lawmakers, state agency staff, and institutional leaders; and the money devoted to these programs. Therefore, bringing rigorous empirical analysis into the discussion and debate is important and worthwhile.

But just as the authors say performance funding “should not be dismissed in one fell swoop,” it should not be embraced in one fell swoop either. This is especially true given the mounting evidence (for example here, here, here, and here) that these efforts may not actually work in the same way the authors believe they should.

Claiming that there is "indisputable evidence that incentives matter in higher education" is a bold proposition to make in light of these studies and others. Only time will tell as more studies come out. Until then, we readily agree with some of the authors' points and critiques and would not have decided to draft this reply had they provided an accurate representation of our study's methods.

David Tandberg is an assistant professor of higher education at Florida State University. Nicholas Hillman is an assistant professor of educational leadership and policy analysis at the University of Wisconsin at Madison.


Temple University Will Drop 7 Sports Teams

Temple University announced Friday that it will drop seven intercollegiate athletic teams, leaving it with 17. Five men's teams will be eliminated: baseball, crew, gymnastics, indoor track and field, and outdoor track and field. Two women's teams, softball and rowing, will also be eliminated. A statement from Kevin Clark, the director of athletics, said that the university needed to focus athletics spending on other programs. "Temple does not have the resources to equip, staff, and provide a positive competitive experience for 24 varsity sports. Continuing this model does a disservice to our student-athletes," said Clark. "We need to have the right-sized program to create a sustainable model."


Robert Morris U. Will Eliminate 7 Teams

Robert Morris University this week announced plans to eliminate seven of its 23 athletic teams. The Pennsylvania-based university said the savings will be used to finance improvements in the remaining athletic programs. The men's teams being eliminated are track (indoor and outdoor), cross country, and tennis. The women's sports are field hockey, golf, and tennis.

 


Performance funding isn't perfect, but a recent study shortchanges it (essay)

A recent research paper published by the Wisconsin Center for the Advancement of Postsecondary Education and reported on by Inside Higher Ed criticized states' efforts to fund higher education based in part on outcomes, in addition to enrollment. The authors, David Tandberg and Nicholas Hillman, hoped to provide a "cautionary tale" for those looking to performance funding as a "quick fix."

While we agree that performance-based funding is not the only mechanism for driving change, what we certainly do not need are impulsive conclusions that ignore positive results and financial context. With serious problems plaguing American higher education, accompanied by equally serious efforts across the country to address them, it is disheartening to see a flawed piece of research mischaracterize the work on finance reform and potentially set back one important effort, among many, to improve student success in postsecondary education.

As two individuals who have studied performance funding in depth, we know that performance funding is a piece of the puzzle that can provide an intuitive, effective incentive for adopting best practices for student success and encourage others to do so. Our perspective is based on the logical belief that tying some funding dollars to results will provide an incentive to pursue those results. This approach should not be dismissed in one fell swoop. 

We are dismayed that the authors were willing to assert an authoritative conclusion from such simplistic research. The study compares outcomes of states "where the policy was in force" to those where it was not -- as if "performance funding" is a monolithic policy everywhere it has been adopted.

The authors failed to differentiate among states in terms of when performance funding was implemented, how much money is at stake, whether performance funds are "add ins" or part of base funding formulas, the metrics used to define and measure "performance," and the extent to which "stop loss" provisions have limited actual change in allocations. These are critical design issues that vary widely and that have evolved dramatically over the 20-year period the authors used to decide if "the policy was in force" or not.

Treating this diverse array of unique approaches as one policy ignores the thoughtful work that educators and policy makers are currently engaged in to learn from past mistakes and to improve the design of performance funding systems. Even a well-designed study would probably fail to reveal positive impacts yet, as states are only now trying out new and better approaches -- certainly not the "rush" to adopting a "quick fix" that the authors assert. It could just as easily be argued that more traditional funding models actually harm institutions trying to make difficult and necessary changes in the best interest of students and their success (see here and here).

The simplistic approach is exacerbated by two other design problems. First, we find errors in the map indicating the status of performance funding. Texas, for example, has only recently implemented (passed in spring 2013) a performance funding model for its community colleges; it has yet to affect any budget allocations. The recommended four-year model was not passed. Washington has a small performance funding program for its two-year colleges but none for its universities. Yet the map shows both states with performance funding operational for both two-year and four-year sectors.

Second, the only outcome examined by the authors was degree completions as it "is the only measure that is common among all states currently using performance funding." While that may be convenient for running a regression analysis, it ignores current thinking about appropriate metrics that honor different institutional missions and provide useful information to drive institutional improvement. The authors make passing reference to different measures at the end of the article but made no effort to incorporate any realism or complexities into their statistical model.

On an apparent mission to discredit performance funding, the authors showed a surprising lack of curiosity about their own findings. They found eight states where performance funding had a positive, significant effect on degree production, but rather than examine why that might be, they found apparent comfort in the finding that there were "far more examples" of performance funding failing the significance tests.

"While it may be worthwhile to examine the program features of those states where performance funding had a positive impact on degree completions," they write, "the overall story of our state results serves as a cautionary tale." Mission accomplished.

In their conclusion they assert that performance funding lacks "a compelling theory of action" to explain how and why it might change institutional behaviors.

We strongly disagree. The theory of action behind performance funding is simple: financial incentives shape behaviors. Anyone doubting the conceptual soundness of performance funding is, in effect, doubting that people respond to fiscal incentives. The indisputable evidence that incentives matter in higher education is the overwhelming priority and attention that postsecondary faculty and staff have placed, over the years, on increasing enrollments and meeting enrollment targets, with enrollment-driven budgets.

The logic of performance funding is simply that adding incentives for specified outcomes would encourage individuals to redirect a portion of that priority and attention to achieving those outcomes. Accepting this logic is to affirm the potential of performance funding to change institutional behaviors and student outcomes. It is not to defend any and all versions of performance funding that have been implemented, many of which have been poorly done. And it is not to criticize the daily efforts of faculty and staff, who are committed to student success but cannot be faulted for doing what matters to maintain budgets.

Surely there are other means -- and more powerful means -- to achieve state and national goals of improving student success, as the authors assert. But just as surely it makes sense to align state investments with the student success outcomes that we all seek.
 

Nancy Shulock is executive director of the Institute for Higher Education Leadership & Policy at California State University at Sacramento, and Martha Snyder is senior associate at HCM Strategists.


University Research Spending Flat in 2012

Research and development spending by colleges and universities in 2012 fell for the first time since 1974 when adjusted for inflation, the National Science Foundation said last week.

Expenditures on R&D rose slightly in current dollars, to $65.8 billion from $65.3 billion in 2011; federal, state and local government funding actually declined, but institutions' own research spending grew, as seen in the table below.

When adjusted for inflation, though, in constant 2005 dollars, overall research expenditures declined, driven down by a steady drop in funds from the 2009 federal stimulus legislation. The figures in the table below are in millions of current dollars.

Fiscal Year | All R&D Spending | Federal Govt. | State and Local Govt. | Institution Funds | Business | Other
2010 | $61,257 | $37,477 | $3,853 | $11,941 | $3,198 | $4,088
2011 | $65,274 | $40,771 | $3,831 | $12,601 | $3,181 | $4,890
2012 | $65,775 | $40,130 | $3,704 | $13,674 | $3,282 | $4,984
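
As a rough illustration of why a nominal increase became a real decline (assuming inflation of roughly 2 percent between fiscal 2011 and 2012; the NSF applies its own deflators, so the precise figures differ): total spending grew by about 0.8 percent in current dollars, from $65,274 million to $65,775 million, which is less than the assumed rise in prices, leaving spending lower in constant dollars.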

 


Report Reviews Challenges Facing Higher Ed in California

California is falling behind in its ability to provide higher education to its citizens, particularly those who enroll outside the state's elite public and private universities, according to a report released Tuesday. "Boosting California's Postsecondary Education Performance," from the Committee for Economic Development, reviews the financial, economic and demographic challenges facing the state's colleges and universities and finds that much of the stress falls on the broad-access institutions that most students attend. Given limited chances for significant infusions of new funds, the report suggests that new ways of providing education will be key. "Without quantum increases in educational access, productivity, and effectiveness of the state’s postsecondary institutions, particularly those with broad-access missions, there is little likelihood that California will have the human capital to compete successfully in the global economy or assure its citizens access to economic prosperity and a middle-class life."

 
