Over the last four decades, federal and state policymakers have wrestled with how to design student aid programs to make them fair, efficient, and effective – and how to evaluate and improve those programs, once in place.

Early on it was discovered that competing interests could easily overtake and dominate the policy formulation process. Unsupported claims that programs were inefficient, poorly targeted, or unfairly favored one type of student or institution over another were not uncommon. Even proposals that appeared to alter the intent of the program, disenfranchise a whole class of students, or undermine a particular type of institution were offered with no accompanying data analysis. Often developed behind closed doors, such proposals gave little consideration to the impact of the proposed change on the enrollment, persistence, and completion behavior of affected students.

Over two decades ago, in an attempt to improve the policymaking process, a group of analysts in Washington put in place a nonpartisan analytical framework to ensure that policymakers could understand the exact nature and likely impact of alternative proposals. The framework involved an agreement to use a standard computer model with known assumptions, populated with the best and most recent data. The model produced standard output when alternative program specifications were entered, such as changes in the maximum award, the tuition sensitivity of the award, the expected family contribution, and other program algorithms.
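At its core, such a model simply applies a deterministic award rule to a student-level file. A minimal sketch in Python of what that award rule might look like, assuming a simplified, Pell-like formula (the parameter values and the blended tuition-sensitivity term here are illustrative, not the actual statutory rules):

```python
from dataclasses import dataclass

@dataclass
class ProgramRules:
    """Illustrative program parameters a proposal might change."""
    max_award: float = 6000.0          # hypothetical maximum award
    min_award: float = 500.0           # awards below this floor are zeroed out
    tuition_sensitivity: float = 0.0   # 0 = ignore tuition; 1 = fully capped by tuition

def compute_award(efc: float, tuition: float, cost_of_attendance: float,
                  rules: ProgramRules) -> float:
    """Simplified need-based award: maximum award minus expected family
    contribution, optionally limited by the tuition actually charged."""
    need_based = max(rules.max_award - efc, 0.0)
    # Blend a pure need-based award with a tuition-capped one.
    capped = min(need_based, tuition)
    award = ((1 - rules.tuition_sensitivity) * need_based
             + rules.tuition_sensitivity * capped)
    award = min(award, cost_of_attendance)  # never exceed cost of attendance
    return award if award >= rules.min_award else 0.0
```

Changing a single field of ProgramRules and rerunning the file over the same student records is what made alternative proposals directly comparable.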

The output was a standard table that displayed the resulting changes in cells. A simplified version looked something like this:

Impact of Proposal on Students and Institutions
(columns: Type and Control of College)

Family Income | 2-Year Public | 4-Year Public | 4-Year Private | All Other Postsecondary | Total
--------------|---------------|---------------|----------------|-------------------------|------
Low           |       A       |               |                |                         |
Middle        |               |       B       |                |                         |
High          |               |               |       C        |                         |
Total         |               |               |                |            D            |  E

Data Arrayed in Each Cell

  • Number of Recipients
  • Level of Program Funds
  • Share of Program Funds
  • Average Award of Dependent and Independent Students

The rows of the table (displayed on the left) represented levels of family income; the columns denoted institutions of different type, control, and cost of attendance. For example, cell A included the lowest-income recipients attending 2-year public colleges, cell B included their middle-income peers who attended 4-year public colleges, and cell C included their high-income peers who attended 4-year private colleges.

The bottom row contained program funds received, by type and control of institution. For example, cell D showed total program funds going to all other postsecondary institutions, and cell E showed total program costs. The remaining cells showed other combinations.

Within each cell (displayed on the right), the computer output would array the following data: number of recipients; level of program funds; share of program funds; and average award for dependent and independent students. Once this table was produced for the current programs, proposed changes could be entered into the model to produce a new table, for purposes of comparison to the benchmark table – the status quo.
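Producing the benchmark table is, in modern terms, a grouped aggregation over a student-level recipient file. A sketch of what that might look like, assuming hypothetical record fields (income_band, sector, award, dependent):

```python
from collections import defaultdict

INCOME_BANDS = ["Low", "Middle", "High"]
SECTORS = ["2-Year Public", "4-Year Public", "4-Year Private",
           "All Other Postsecondary"]

def build_table(recipients):
    """Aggregate recipient records into income-by-sector cells.
    Each record is a dict with income_band, sector, award, and dependent."""
    cells = defaultdict(lambda: {"n": 0, "funds": 0.0,
                                 "dep_total": 0.0, "dep_n": 0,
                                 "ind_total": 0.0, "ind_n": 0})
    total_funds = 0.0
    for r in recipients:
        if r["award"] <= 0:
            continue
        c = cells[(r["income_band"], r["sector"])]
        c["n"] += 1
        c["funds"] += r["award"]
        key = "dep" if r["dependent"] else "ind"
        c[key + "_n"] += 1
        c[key + "_total"] += r["award"]
        total_funds += r["award"]
    # Derive shares and average awards once the grand total is known.
    for c in cells.values():
        c["share"] = c["funds"] / total_funds if total_funds else 0.0
        c["avg_dep"] = c["dep_total"] / c["dep_n"] if c["dep_n"] else 0.0
        c["avg_ind"] = c["ind_total"] / c["ind_n"] if c["ind_n"] else 0.0
    return dict(cells), total_funds
```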

Proposals that did not significantly change the existing distribution of program funds, by family income and type of institution, as measured by the shares in the cells, were deemed neutral. Proposals that redistributed program funds toward the northwest portion of the table, that is, toward cell A, were deemed relatively consistent with program intent by most observers, while those that moved funds generally toward the southeast portion of the table, toward cell C, not so much. Even the most challenged participants got the hang of the exercise quickly.
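In code, the neutrality test reduces to comparing cell shares between the benchmark table and the proposal table. A rough sketch, reusing the hypothetical cells produced by build_table() above (the tolerance and the compass heuristic are illustrative choices, not part of the original framework):

```python
def classify_proposal(benchmark_cells, proposal_cells, tolerance=0.01):
    """Compare share-of-funds cell by cell and flag where money moves.

    Both arguments map (income_band, sector) -> cell dicts carrying a
    "share" entry, as produced by build_table() above."""
    shifts = {}
    for key in set(benchmark_cells) | set(proposal_cells):
        before = benchmark_cells.get(key, {}).get("share", 0.0)
        after = proposal_cells.get(key, {}).get("share", 0.0)
        if abs(after - before) > tolerance:
            shifts[key] = after - before
    if not shifts:
        return "neutral", shifts
    # Crude reading of the compass: low-income gains point "northwest,"
    # high-income gains point "southeast."
    low_gain = sum(d for (band, _), d in shifts.items() if band == "Low")
    high_gain = sum(d for (band, _), d in shifts.items() if band == "High")
    if low_gain > 0 and low_gain >= high_gain:
        return "consistent with program intent", shifts
    if high_gain > 0:
        return "counter to program intent", shifts
    return "mixed", shifts
```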

The benefits of obtaining unanimous agreement to use this framework in the policy formulation process were profound. For each alternative proposal, policymakers had at their disposal: any and all changes made to the underlying demographic assumptions of the model; the complete set of all proposed program changes; and the impact on students, institutions, and taxpayers of implementing the changes. One major benefit of using the framework was minimizing, if not wholly excluding, obviously self-serving proposals that ran counter to any reasonable interpretation of program intent. Occasionally, however, such a proposal would slip through, to the great amusement of n-1 participants. (Wow, you really hate community college students, don’t you?)

Use of the framework had another important advantage: advocacy (nothing wrong with that!) could be quickly distinguished from analysis. Advocates, analysts, and the all-too-familiar hybrids who wear multiple hats all had the same information. The playing field was level, with everyone's cards face up on the table. When used to identify and compare equal-cost options that held total funding constant and redistributed different shares to participants, a sometimes unsettling zero-sum game unfolded in which losses had to finance gains. Lively discussions ensued. Some had to be taken outside.
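One simple way a model can enforce the equal-cost constraint is to rescale proposed awards so that total spending matches the benchmark; every dollar gained in one cell must then come from another. A sketch, reusing the hypothetical compute_award() rule above and assuming records carry efc, tuition, and coa fields (proportional rescaling is just one of several ways a real model might hold cost constant):

```python
def hold_cost_constant(recipients, rules, baseline_total):
    """Rescale proposed awards so total program cost equals the baseline,
    making the comparison explicitly zero-sum."""
    awards = [compute_award(r["efc"], r["tuition"], r["coa"], rules)
              for r in recipients]
    proposed_total = sum(awards)
    if proposed_total == 0:
        return awards
    scale = baseline_total / proposed_total
    # Note: rescaling can push some awards below the minimum-award floor;
    # a real model would iterate or adjust a rule parameter instead.
    return [a * scale for a in awards]
```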

It is important to note that the framework did not provide estimates of the likely impact on student outcomes, that is, actual changes in enrollment and persistence behavior. At the time, there were no reliable data to build into the model that predicted student behavior – particularly any induced positive or negative enrollment effects of the proposal.

But this early effort to standardize at least the analytical portion of the policy process was a resounding success: without these first-order estimates, winners and losers under proposed changes could not even be identified, much less educated guesses made about how students might actually behave in response.

As another round of Higher Education Act reauthorization approaches, the higher education policy community, more than ever, needs to develop a similar analytical framework, underpinned by a more sophisticated computer model, driven by far richer data, containing more grant programs – federal, state, and institutional. Creating such a framework would not be all that difficult, the returns would again be enormous, and the data are available.

The table would display students, by family income and dependency, and all institutions, by type, control, and cost of attendance. Separate tables could be created at the program, institutional, state, and national level.

The effort should start with simple questions: What information should be displayed in the cells? Certainly it should include at least those in the simple table above. Should dependent and independent recipients be treated separately? Yes. Should merit-based grants be included? Probably. How about nontraditional students? Of course. You get the idea.

Given today’s budget battles, momentous zero-sum decisions that hold program funding constant will be made at the federal and state level – decisions that will dramatically affect the enrollment and persistence decisions of low- and middle-income students, and institutions as well. Without an agreed-upon framework with which to compare alternative proposals, at least as to who gains and who loses, policy discussions will proceed unproductively as if policymakers were starting from scratch, when, in fact, they are not. Without such a framework, discussions will fail to take properly into account the sobering reality that there are already programs in place that students, parents, and institutions count on, and that changes in existing programs will not only add to complexity and confusion but also have important tradeoffs and consequences.

Building the analytical framework should start now with the Pell Grant program. Given its central importance to millions of students and thousands of institutions, all legislative proposals to modify or alter the program should be specified and evaluated using an up-to-date version of a standard computer model that all stakeholders, including students, can use – a model that includes a common set of inputs and outputs. This is particularly important in the case of proposed changes that would condition the Pell award on the basis of data not currently collected and used in the calculation of the award, the expected family contribution, or student and institutional eligibility.

Examples include making the Pell award conditional on measures of merit or academic progress. In such cases, the source of the data must be specified, a new parameter created, and the impact of making the award conditional on that parameter estimated using the model. Gains must be balanced against losses, and educated guesses must at least be considered about what will likely happen to students affected by the proposed change – particularly those who would lose much-needed grant aid if the change were incorporated into the program.
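In model terms, conditioning the award means adding one parameter to the award rule, rerunning the recipient file, and comparing the resulting tables. A sketch, again reusing the hypothetical compute_award() above and assuming a hypothetical satisfactory_progress flag on each record:

```python
def compute_conditional_award(record, rules, require_progress=True):
    """Wrap the base award rule with a hypothetical merit/progress condition.
    Records lacking the new data element cannot be scored at all, which is
    itself a finding: the source of the conditioning data must be specified."""
    if require_progress and not record.get("satisfactory_progress", False):
        return 0.0
    return compute_award(record["efc"], record["tuition"], record["coa"], rules)

def who_loses(recipients, rules):
    """List recipients who would lose grant aid under the conditioned rule,
    with the size of each loss."""
    losers = []
    for r in recipients:
        before = compute_award(r["efc"], r["tuition"], r["coa"], rules)
        after = compute_conditional_award(r, rules)
        if after < before:
            losers.append((r, before - after))
    return losers
```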

Perhaps most important, proposals whose cost and distributional analyses appear acceptable should be subjected to rigorous case-controlled testing with additional funds – holding students harmless – before implementation. Congress, the Administration, and state legislatures will certainly need this information to make decisions because redistributing a fixed amount of scarce need-based grant aid to meet national and state access and completion goals, while minimizing unintended harm to students and institutions, will be challenging.

Without the light that good data and analysis can shed on the effort, policymakers will again be dancing in the dark.
