
With the renewal of the Higher Education Act in the offing and the second act of a president who focused heavily on higher education in his first term now under way, the hunt for big or new (or at least helpful) ideas about student aid policy is in full swing -- and the stakes are high. Everybody and their uncle -- from the Bill & Melinda Gates Foundation to the College Board -- is soliciting or producing recommendations about reimagining federal financial aid, especially the bedrock Pell Grant Program.

And given the likelihood that, for the foreseeable future, federal financial aid funding will be a zero-sum game -- with overall funding levels flat or falling, creating the prospect of winners and losers -- much rides on how experts assess what works and what doesn't.

On Monday, two well-regarded scholars of federal financial aid policy took a look backward for ideas about how to move ahead. In a paper published by the National Bureau of Economic Research (abstract here), Susan Dynarski of the University of Michigan and Judith Scott-Clayton of Columbia University's Teachers College scan 50 years of financial aid practice and research to "review what is known and what is not known about how well various student aid programs work," they write.

After outlining the history of financial aid and explaining the difficulty of evaluating which policies are effective and which aren't -- a difficulty stemming largely from the fact that it is hard to undertake, or even mimic, the sort of randomized experiments that are the gold standard in most kinds of research -- they nonetheless offer four "major lessons that can be taken from the research on financial aid effectiveness, drawing primarily on experimental and quasi-experimental analyses."

They are:

  • Money matters for college access. "[W]hen students know that they will receive a discount, enrollment rates increase," they say, citing numerous pieces of research. And while this is a newer line of inquiry, early indications are that money "can improve persistence and completion," too, they write (though with one important caveat, to be discussed later).
  • Program complexity undermines aid effectiveness. "Programs ... that have clearly demonstrated impacts on college enrollment tend to have simple, easy-to-understand eligibility rules and application procedures," Dynarski and Scott-Clayton write. The authors, who are strong proponents of simplifying the Pell Grant Program, argue that Pell's application and eligibility rules are not nearly simple enough, limitations that "may be obscuring its benefits and dampening its impact among the individuals who need it most -- those who are on the fence about college for financial reasons."
  • Evidence on the effect of loans is limited but suggests that design is important. The authors note that there has been relatively little rigorous research to answer the question of whether loans affect college enrollment, performance or completion, or to compare the impact of loans to that of grants (which need not be repaid).  "A more interesting question than whether [loans] increase college enrollment or completion at all is whether some types of loans are more effective than others," they write.
  • Academic incentives appear to augment aid effectiveness, particularly after enrollment. The authors cite several studies to suggest that scholarships tied to students' academic performance "can bolster the impact of financial aid on college performance and completion," and that "dollars with strings attached produce larger effects than dollars alone."

Sara Goldrick-Rab, another scholar who studies financial aid, largely agrees with Dynarski's and Scott-Clayton's conclusion that "as state and federal budgets face increasing pressures and politicians look for ways to control spending, financial aid programs will be vulnerable to cutbacks if evidence is lacking on their effectiveness, and even those programs with documented positive effects may be asked to do more with less."

And that's precisely why she pushed back hard against the last of their assertions above in a post on her blog. It's not, she writes, that it's unreasonable to believe that grants with academic incentives boost college outcomes more than money alone -- only that the hypothesis "has just as little empirical support today -- or perhaps even less -- than it did a few years ago when the debate over this issue was especially hot."

Specifically, Goldrick-Rab notes that two of the major studies that Dynarski and Scott-Clayton cite to show the relative effectiveness of performance-based grants and the comparative ineffectiveness of grant funds alone have subsequently been updated in ways (here and here, respectively) that undercut the NBER authors' conclusions.

The truth, she writes, "is that the experimental work needed to test the hypothesis that academic incentives tied to grant aid outperform grant aid without strings attached hasn't been conducted....  We need to set up a horse race between aid and aid+incentives for a sample of students much like those whom we'd hope to reform aid for -- Pell Grant recipients, most likely.  Only then will we know if academic incentives really add value. And even then, we won't know why -- without rigorous mixed methods research.

"For now, the jury is out, and policy makers who pair academic incentives with need-based aid are flying blind," Goldrick-Rab adds. "They may have other rationales for doing wanting to do this -- some people feel better about distributing money when it comes with strings -- but they shouldn't pretend it's an evidence-based decision."

Dynarski responded: "Any review is a snapshot of research at a given time, and the Wisconsin working paper was not yet released when we wrote our article this fall. I am sure it will get the attention it merits in future reviews of the literature."
