Marketing / public relations / government relations

UC system weighs shift in tuition payments to after graduation

Proposal being weighed by University of California to shift student payments to after graduation and tie them to income would be a dramatic change in how education is financed.

Delayed state payments cause headaches for Illinois public universities

For three years, Illinois has delayed payments to public colleges, presenting a different sort of budgeting problem for administrators.

Higher education proposals included in State of the Union

In State of the Union speech, Obama calls for keeping student loan interest rates low and warns colleges to stop raising tuition -- or risk losing federal support.

Gingrich puts forward higher ed ideas in 2012 campaign

Republican front-runner Newt Gingrich is unique in the field for his academic past -- and for some of the ideas he has put forward.

Students need the right kind of college ratings system (essay)

The more expensive a purchase, the more important it is to be a smart consumer. Many Americans value labeling and rankings from food (nutrition labels) to appliances (energy ratings) to vehicles (gas mileage and crash safety) to health plans (Obamacare’s bronze, silver, gold, and platinum). Yet for one of the most expensive purchases a person will ever make – a college education – there is a dearth of reliable and meaningful comparable information.

In August, President Obama directed the U.S. Department of Education to develop a federal college ratings system with two goals: (1) to serve as a college search tool for students and (2) to function as an accountability measure for institutions of higher education.

Under the president’s proposal, ratings would be available for consumer use in 2015 and, by 2018, would be tied to colleges’ receipt of federal student aid. Many colleges and universities have been protesting ever since, especially about the accountability goal.

But improving the information imbalance about higher education outcomes is a key step toward improving graduation rates and slowing the rise in student loan debt. Although accountability mechanisms are a complex issue that may well take until after 2018 to develop, student advocates agree on the following: We must move forward now with the multifactor rating information that higher education consumers desperately need. Furthermore, the administration’s rating system should provide comparable data on several factors relevant to college choice so that students can choose which are most important to them, rather than imposing the government’s judgment about which handful of factors should be combined into a single institutional rating.

As we evaluate the case for federal consumer ratings, let’s first set aside the 15 percent of college students who attend the most selective institutions and enjoy generally very high graduation rates. They may feel well-served by rankings like Barron’s and U.S. News, which emphasize reputation, financial resources, and admissions selectivity.

But for the 85 percent of students who attend non- or less-selective institutions, the institution they choose has far greater consequences. For these “post-traditional” students, college choice could mean the difference between dropping out with an unmanageable debt load and graduating with a degree and moving on to a satisfying career.

To share a real example, consider three Philadelphia universities: a suburban private, a Catholic private, and an urban state. These institutions are all within 30 miles, enroll students with similar academic characteristics, and serve similar percentages of Pell-eligible students. If you are a local, low-income student of color who wants to attend college close to home, how should you decide where to go?

What if you knew that the suburban private school’s graduation rate for underrepresented minority students (31 percent) is much lower than the Catholic private’s (54 percent) and the urban state school’s (61 percent)? Or that the urban state and private Catholic schools have lower net prices for low-income students? Would that affect your choice? (Thanks to Education Trust’s College Results Online for these great data.)

A rating system with multiple measures (rather than a single one) could greatly help this student. Armed with facts about comparable graduation rates, admissions criteria, and net prices, she can investigate her options further, ask informed questions, and ultimately make a stronger decision about which institution is the best fit for her.

A ratings system designed for the 85 percent of students going to less-selective institutions will help students get the information most important to them. Many consumer rating schemes include multiple measures. Car buyers can compare fuel efficiency, price, and safety ratings, as well as more subjective ratings of comfort or “driver experience,” from a variety of sources. Some buy Honda Civics for gas mileage and safety, while others choose more expensive options for luxury features or handling.

Similarly, prospective college students need to know not just about accessibility/selectivity (average GPA, SAT/ACT scores), but also about affordability (net price by income tier, average student loan debt, ability to repay loans) and accountability (graduation rates by race and by income). The information should be sortable by location (to aid place-bound students) and by institution type (two-year, four-year, public, private) for students to compare side by side. 
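To make the idea of a sortable, multifactor comparison concrete, here is a minimal sketch in Python. The institution names, field names, and net-price and debt figures are hypothetical illustrations (the graduation rates loosely echo the Philadelphia example above); this is a sketch of how such a tool might filter and sort, not a description of any actual ratings system.

    # Minimal sketch of a multifactor college comparison tool.
    # All institutions and dollar figures below are hypothetical illustrations.

    colleges = [
        {"name": "Suburban Private U", "type": "4-year private", "miles": 22,
         "net_price_low_income": 24000, "grad_rate_minority": 0.31, "avg_debt": 31000},
        {"name": "Catholic Private U", "type": "4-year private", "miles": 8,
         "net_price_low_income": 17000, "grad_rate_minority": 0.54, "avg_debt": 27000},
        {"name": "Urban State U", "type": "4-year public", "miles": 5,
         "net_price_low_income": 13000, "grad_rate_minority": 0.61, "avg_debt": 22000},
    ]

    # Filter for a place-bound student: keep institutions within 30 miles.
    nearby = [c for c in colleges if c["miles"] <= 30]

    # Let the student choose which factor matters most, e.g. graduation rate...
    by_grad_rate = sorted(nearby, key=lambda c: c["grad_rate_minority"], reverse=True)
    # ...or net price for low-income families.
    by_net_price = sorted(nearby, key=lambda c: c["net_price_low_income"])

    for c in by_grad_rate:
        print(f'{c["name"]}: grad rate {c["grad_rate_minority"]:.0%}, '
              f'net price ${c["net_price_low_income"]:,}')

The point is simply that the same underlying records can be re-sorted by whichever factor matters most to a given student, rather than collapsed into one government-chosen score.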

The data to fuel the rating system are for the most part already available, although some are in need of improvement. As is now widely acknowledged, we must change the federal calculation of graduation rates as soon as possible to account for part-time and transfer students, and we must collect and report institutional Pell Grant recipient graduation rates as part of the federal data system (IPEDS). Over the long term, we should also find a valid way to assess work force outcomes for students.

But let’s not delay a ratings system that will serve students any further. Once the system is up and running, we can turn to the more complex and politically difficult question of how to use federal financial aid dollars to incentivize better institutional outcomes.

Carrie Warick is director of partnerships and policy at the National College Access Network, which advocates on behalf of low-income and underrepresented students.

Essay defends study questioning merits of performance funding

Policy making is difficult and complex; evaluating the effects of policy can be equally challenging. Nevertheless, it is important that researchers and policy analysts undertake the hard work of asking difficult questions and doing their best to answer those questions.

This is what we attempted to do when we undertook a yearlong effort to evaluate the effects of performance funding on degree completions. This effort has culminated in two peer-reviewed papers and one policy brief which summarizes the results of those papers. Our policy brief was widely distributed and the results were discussed in a recent Inside Higher Ed article.

Recently, Nancy Shulock (of California State University at Sacramento) and Martha Snyder (of HCM Strategists, a consulting firm) responded to our policy brief with some sharp criticism in these pages. As academics, we are no strangers to criticism; in fact, we welcome it. While they rightly noted the need for stronger evidence to guide the performance funding debate, they also argued that we produced “a flawed piece of research,” that our work was “simplistic,” and that it merely “compares outcomes of states where the policy was in force to those where it was not.”

This is not only an inaccurate representation of our study; it also reflects an unfortunate misunderstanding of the latest innovations in social science research. We see this as an opportunity to share some insights into the analytical technique Shulock and Snyder are skeptical of.

The most rigorous method of determining whether a policy intervention had an impact on an outcome is an experimental design. In this instance, it would require that we randomly assign some states to adopt performance funding while others retain the traditional financing model. But because this is impossible, “quasi-experimental” research designs can be used to simulate experiments. The U.S. Department of Education sees experimental and quasi-experimental research as “the most rigorous methods to address the question of project effectiveness,” and the American Educational Research Association actively encourages scholars to use these techniques when experiments are not possible to undertake.

We chose the quasi-experimental design called “difference-in-differences,” in which we compared performance funding states with non-performance-funding states (one difference) in the years before and after the policy intervention (the other difference). The difference in these differences told us much more about the policy’s impact than traditional regression analysis or descriptive statistics could. Unfortunately, most of the quantitative research on performance funding is just that – traditional regression or descriptive analysis – and neither strategy can provide rigorous or convincing evidence of the policy’s impacts. For an introduction to the method, see here and here.
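For readers unfamiliar with the technique, here is a minimal sketch of the difference-in-differences logic in Python. The completion counts are invented purely for illustration; they are not our data, and our actual models include the controls and timing adjustments described below.

    # Minimal sketch of the difference-in-differences (DiD) logic.
    # The completion figures below are invented for illustration only.

    # Average degree completions per institution, before and after the policy year.
    treated_before, treated_after = 1000, 1040   # states that adopted performance funding
    control_before, control_after = 1000, 1060   # states that did not

    # First difference: change over time within each group.
    change_treated = treated_after - treated_before   # 40
    change_control = control_after - control_before   # 60

    # Second difference: the gap between those changes is the DiD estimate
    # of the policy's effect, net of trends common to both groups.
    did_estimate = change_treated - change_control    # -20
    print(f"Estimated effect of performance funding: {did_estimate} completions")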

Every study has its limitations and ours is no different. On page 3 of the brief (and in more detail in our full papers) we explain some of these issues and the steps we took to test the robustness of our findings. This includes controlling for multiple factors (e.g., state population, economic conditions, tuition, and enrollment patterns) that might have affected degree completions in both the performance funding states and the non-performance-funding states. Further, Shulock and Snyder claim that we “failed to differentiate among states in terms of when performance funding was implemented,” when in fact we do control for this, as explained in our full papers.
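In practice, such controls and differential implementation timing are typically handled in a regression framework with state and year fixed effects. The sketch below, using a synthetic panel and hypothetical variable names, illustrates that generic approach; it is not our actual specification or data.

    # Generic sketch of a DiD regression with a covariate, staggered adoption,
    # and two-way (state and year) fixed effects. Synthetic data for illustration.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    states = [f"S{i}" for i in range(10)]
    # The first five states adopt performance funding in different years;
    # the rest never adopt it. This captures staggered implementation timing.
    adopt_year = {s: 2000 + 2 * i if i < 5 else None for i, s in enumerate(states)}

    rows = []
    for s in states:
        for year in range(1995, 2011):
            pf_active = int(adopt_year[s] is not None and year >= adopt_year[s])
            rows.append({
                "state": s,
                "year": year,
                "pf_active": pf_active,                      # 1 only after adoption
                "unemployment": rng.normal(6, 1),            # stand-in covariate
                "completions": 1000 + 5 * (year - 1995) + rng.normal(0, 20),
            })
    df = pd.DataFrame(rows)

    # Estimate the policy effect with state and year fixed effects and
    # standard errors clustered by state.
    model = smf.ols(
        "completions ~ pf_active + unemployment + C(state) + C(year)",
        data=df,
    ).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})
    print(model.params["pf_active"])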

We do not believe that introducing empirical evidence into the debates about performance funding is dangerous. Rather, we believe it is sorely missing. We also understand that performance funding is a political issue and one that is hotly debated. Because of this, it can be dangerous to promote expensive policies without strong empirical evidence of positive impacts. We wish this debate were conducted with more transparency about these politics, as well as with a better understanding of the latest developments in social science research design.

The authors raise a second point that requires a response – their argument that we selected the wrong performance funding states. We disagree. The process of identifying these states required painstaking attention to detail and member checks with experts in the field, especially when examining a 20-year period (1990-2010). In our full studies, we provide additional information beyond what is included in our brief (see endnote 8) about how we selected our states.

The authors suggested that we misclassified Texas and Washington. With Texas, our documents show that in 2009, SB 1 approved “Performance Incentive” funding for the biennium. Perhaps something changed after that year that we missed, and this would be a valid critique, but we have no evidence of that. The authors rightly noticed that our map incorrectly coded Washington state as having performance funding for four-year and two-year colleges when in fact it is only for two-year colleges. We correctly identified Washington in our analysis and this is displayed correctly in the brief (see Table 2).

All of these details are important, and we welcome critiques from our colleagues. After all, no single study can fully explain a phenomenon; only the accumulation of knowledge from multiple sources allows us to see the full picture. Policy briefs are smaller fragments of this picture than are full studies, so we encourage readers to look at both the brief and the full studies to form their opinions about this research.

We agree with the authors that there is much that our brief does not tell us and that there are any number of other outcomes one could use to evaluate performance funding. Clearly, performance funding policies deserve more attention, and we intend to conduct more studies in the years to come. So far, all we can say with much confidence is that, on average and in the vast majority of cases, performance funding either had no effect on degree completions or had a negative effect.

We feel that this is an important finding and that it does “serve as a cautionary tale.” Policy makers would be wise to weigh our findings alongside other information and considerations when deciding whether to implement performance funding in their states and, if so, what form it might take.

Designing and implementing performance funding is a costly endeavor. It is costly in terms of the political capital expended by state lawmakers; the time devoted by lawmakers, state agency staff, and institutional leaders; and the amount of money devoted to these programs. Therefore, inserting rigorous empirical analysis into the discussion and debate is important and worthwhile.

But just as the authors say performance funding “should not be dismissed in one fell swoop,” it should not be embraced in one fell swoop either. This is especially true given the mounting evidence (for example here, here, here, and here) that these efforts may not actually work in the same way the authors believe they should.

Claiming that there is “indisputable evidence that incentives matter in higher education” is a bold proposition to make in light of these studies and others. Only time will tell as more studies come out. Until then, we readily agree with some of the authors’ points and critiques and would not have decided to draft this reply had they provided an accurate representation of our study’s methods.

David Tandberg is assistant professor of higher education at Florida State University. Nicholas Hillman is an assistant professor of educational leadership and policy analysis at the University of Wisconsin at Madison.
