Imagine you’ve just been accepted to the college of your dreams. At first you feel elation, but then anxiety sets in — will you and your family be able to afford it? Since financial aid packages often blur the line between grants and loans, it might be hard to tell. A $40,000 "award" at one school might seem like a much better deal than a $20,000 package at another — unless you realize the larger "award" consists mostly of loans. With borrowing and loan default rates on the rise, aid packages have huge consequences for students’ educational and financial lives.
In response, the U.S. Department of Education recently unveiled a "Shopping Sheet" that standardizes the way financial aid packages are presented to students. This allows students and parents to easily compare the true cost of one college to another. But institutions don’t have to use the Shopping Sheet, and the National Association of Student Financial Aid Administrators (NASFAA), a powerful industry trade group, is trying to make sure they never do.
NASFAA should be an influential advocate for the Shopping Sheet. Part of its mission is to support policies that increase student access and success. But when the Department of Education unveiled the sheet, NASFAA’s president, Justin Draeger, issued the following statement:
"We remain concerned with the inflexible standardization of the Shopping Sheet, and more broadly, with the multitude of consumer disclosure initiatives that have been introduced in recent months. Institutions need flexibility to design a financial aid award letter that best meets the needs of their unique student populations."
The Shopping Sheet might need to be altered in some circumstances, whether it be something as simple as how to classify the federal TEACH grant, or something much more complicated like how to accurately reflect cost of attendance and net price for part-time students. Financial aid administrators, however, are unlikely to experiment with it and provide invaluable feedback since their own professional organization signals that they shouldn’t. And a system in which every institution creates its own award letter ends up serving no students well.
As an example, here’s a real financial aid award letter, followed by a version of the Shopping Sheet containing the same information. I’ve indicated in red some key differences between the two versions to show how the Shopping Sheet would help students and families make better decisions:
The first letter combines work-study and loans into the “total award” the student will receive for the academic year, with no reference to the student’s cost of attendance. It goes on to say that “You have been awarded” several federal loans. “Award” is a generous term here since almost the entire cost of attendance will be financed by student loan debt. This amount will only grow larger as interest accrues over time. Even more worrisome, the package includes over $30,000 in a Parent PLUS loan. Of federal loan options, PLUS loans have the highest interest rate, and are not a guarantee — parents have to apply for one.
The Shopping Sheet, by contrast, makes this harsh reality perfectly clear by using the institution’s estimated full cost of attendance and displaying the student’s net price, after accounting for grant and scholarship aid. Federal loans are kept separate from grants and scholarships. Parent PLUS and private loans are only mentioned as a financing option that may be available depending on the student’s situation. The sheet also standardizes common terminology so that loans and grants aren’t cloaked in financial aid jargon — such as labeling a Perkins Loan as “Perkins” or “Perkins L.”
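The arithmetic behind that contrast is worth spelling out. The sketch below is illustrative, with invented numbers echoing the opening $40,000-versus-$20,000 example; it shows how summing loans into a "total award" and subtracting only gift aid from cost can give opposite impressions of the same two packages:

```python
def award_total(grants, loans, work_study=0):
    # How many award letters present the package: gift aid, loans,
    # and work-study all summed into one headline number.
    return grants + loans + work_study

def net_price(cost_of_attendance, grants):
    # How the Shopping Sheet presents it: cost minus gift aid only.
    # Loans and work-study are financing options, not a discount.
    return cost_of_attendance - grants

# Hypothetical packages: the bigger "award" is mostly loans
# and actually leaves the family owing more.
school_a = {"coa": 55_000, "grants": 10_000, "loans": 30_000}
school_b = {"coa": 35_000, "grants": 18_000, "loans": 2_000}
```

On these numbers, School A's "award" of $40,000 dwarfs School B's $20,000, yet School A's net price is $45,000 against School B's $17,000.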
The Shopping Sheet makes this university’s award package look a lot less rosy. And that’s probably one of the main reasons why many institutions and NASFAA are so against it. With skyrocketing costs and persistent state disinvestment, revenue-hungry institutions will try anything to get accepted students in the door. This includes adding more loans to the bottom line of the aid package, and adding PLUS loans where alternative private loans used to be. Packaging of financial aid is becoming increasingly strategic, and is often done with the institution’s goals, not a student’s needs, in mind.
If NASFAA continues to put entrenched institutional interests above students’ financial welfare, it’s unlikely that the Shopping Sheet will be voluntarily adopted at a large scale. The best bet for getting clear, comparable, useful information into students’ hands is federal legislation. Senator Al Franken has introduced a bill with bipartisan support that would require the use of a model aid letter, similar to the Shopping Sheet. “Students today have enough obstacles keeping them from a quality education; deciphering the paperwork shouldn’t be one of them,” remarked Senator Ben Cardin, one of the bill’s co-sponsors. “We need to make it easier to understand the options for financial aid and exactly what the full cost will be.”
That’s true, but legislation takes time. Students and their families need help understanding college costs now. If NASFAA is serious about institutional flexibility that actually helps students, it should encourage institutions to adopt and experiment with the Shopping Sheet now. Otherwise, that flexibility will be legislated away.
Rachel Fishman is a policy analyst for the Education Policy Program at the New America Foundation. Before joining New America, she worked as a policy analyst at Education Sector.
This month, Vice President Joe Biden led a round-table discussion with a group of college and university presidents from some of our nation’s largest institutions of higher education. The outcome of that meeting was an agreement by the leaders of 10 institutions or higher education systems to include a standardized “shopping sheet” in the financial aid packets sent to incoming students, beginning in the fall of 2013. A sample of the “shopping sheet,” which is designed to provide information relating to college costs, student indebtedness, and likelihood of degree completion, can be found here.
Though I recognize the alarming increase in college costs that has occurred during the last 15 years, and I applaud any honest effort to address this problem, I fear the “shopping sheet” fails to break new ground.
Transparency is a good thing, and students/parents should know what to expect when they select a college. The problems with the “shopping sheet,” however, are threefold.
First, this seems to be an attempt to repackage something that many colleges and universities are already doing. The College Portrait’s Voluntary System of Accountability (VSA) provides a more detailed and nuanced collection of pertinent information for those considering their college options. It includes costs related to tuition and fees, a personalized estimation of financial aid and loans, as well as details and data concerning admissions, campus life, student experiences/outcomes, and much more. The VSA is easy to navigate and also allows for comparison of institutions. Hundreds of colleges and universities are already participating in the VSA, and expansion of that number would be a positive step. Given the existence of the VSA, introduction of the “shopping sheet” seems a bit redundant and doesn’t offer any solution to the cost issue.
Second, the “shopping sheet” fails to address one of the hidden issues in the college-cost discussion -- time to degree. As I have discussed in the past, graduating on time dramatically reduces the total cost of college and increases one’s lifetime earning potential. Though the “shopping sheet” provides a snapshot of institutional and average 4-year graduation rates as well as student retention rates, this information is not sufficient for understanding the total cost/value proposition of attending a college. The College of New Jersey, where I serve as president, is one of only six public colleges and universities nationally that maintain 4-year graduation rates greater than 70 percent.
The reality is that most college students now take longer than 4 years to complete their degrees, or do not graduate at all. That makes 6-year graduation rates, which are included in the VSA but omitted from the “shopping sheet,” an important statistic for consideration. Other vital outcomes, such as post-graduate employment information, graduate school admission rates, and professional license or certification exam passage rates, are published on TCNJ’s admissions web site and in other locations. These data points can be very informative during the college-selection process but are currently overlooked by both the “shopping sheet” and the VSA. Inclusion of that information would be a strong enhancement.
Third, doing this sort of reporting through the “shopping sheet” or VSA or some other government-imposed mechanism, whether state or federal, forces colleges and universities to expend resources. The information provided in these reports can be very useful, but it does not get aggregated or analyzed unless you hire staff to do that work. That’s appropriate, if the expenditures improve educational quality or help increase effectiveness. Unfortunately, though collecting data and issuing reports may illustrate the cost problem, those actions will not solve the problem.
In order to actually address the college-cost issue, institutions must operate strategically and efficiently. They must manage course offerings in ways that optimize the deployment of faculty and staff, facilitate the attainment of learning outcomes, and provide students with access to the courses they need for timely degree completion. Institutions also must offer support services that undergird the academic experience, eliminate roadblocks, and enhance the prospects of students graduating on time. Therefore, neither institutions nor their students can afford unnecessary redundancy in the name of political one-upmanship.
I think we can all agree that colleges and universities should be open and honest with prospective students about the actual cost of attaining a degree, not just enrolling for a year. Providing information that allows for simple, accurate comparison of institutions is a worthwhile goal, but I believe adding a few data points to the VSA would be a better strategy than implementing the “shopping sheet.” It’s important to remember, though, that talking about and reporting on our affordability problem is not enough; we need to find ways to solve it.
R. Barbara Gitenstein is president of the College of New Jersey.
Over the last four decades, federal and state policy makers have wrestled with how to design student aid programs to make them fair, efficient, and effective – and how to evaluate and improve those programs, once in place.
Early on it was discovered that competing interests could easily overtake and dominate the policy formulation process. Unsupported claims that programs were inefficient, poorly targeted, or unfairly favored one type of student or institution over another were not uncommon. Even proposals that appeared to alter the intent of the program, disenfranchise a whole class of students, or undermine a particular type of institution were offered with no accompanying data analysis. Often developed behind closed doors, such proposals gave little consideration to the impact of the proposed change on the enrollment, persistence, and completion behavior of affected students.
Over two decades ago, in an attempt to improve the policymaking process, a group of analysts in Washington put in place a nonpartisan, analytical framework to ensure that policymakers could understand the exact nature and likely impact of alternative proposals. The framework involved an agreement to use a standard computer model with known assumptions and populated with the best and most recent data. The model produced standard output when alternative program specifications were entered, such as changes in the maximum award, level of tuition sensitivity of the award, expected family contribution, and other program algorithms.
The output was a standard table that displayed the resulting changes in cells. A simplified version looked something like this:
Impact of Proposal on Students and Institutions
Columns: type and control of college
Data arrayed in each cell:
- Number of recipients
- Level of program funds
- Share of program funds
- Average award of dependent and independent students
The rows of the table (displayed on the left) represented levels of family income; the columns denoted institutions of different type, control, and cost of attendance. For example, cell A included the lowest-income recipients attending 2-year public colleges, cell B included their middle-income peers who attended 4-year public colleges, and cell C included their high-income peers who attended 4-year private colleges.
The bottom row contained program funds received, by type and control of institution. For example, cell D showed total program funds going to all other postsecondary institutions, and cell E showed total program costs. The remaining cells showed other combinations.
Within each cell (displayed on the right), the computer output would array the following data: number of recipients; level of program funds; share of program funds; and average award for dependent and independent students. Once this table was produced for the current programs, proposed changes could be entered into the model to produce a new table, for purposes of comparison to the benchmark table -- the status quo.
Proposals that did not significantly change the existing distribution of program funds, by family income and type of institution, as measured by the shares in the cells, were deemed neutral. Proposals that redistributed program funds toward the northwest portion of the table, that is, toward cell A, were deemed relatively consistent with program intent by most observers; while those that moved funds generally to the southeast portion of the table, toward Cell C, not so much. Even the most challenged participants got the hang of the exercise quickly.
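The mechanics of the benchmark-versus-proposal comparison can be sketched in code. This is a minimal illustration, not the original model; the income bands, sectors, and award records are invented for the example.

```python
from collections import defaultdict

def tabulate(recipients):
    """Cross-tabulate award records into the standard output table.

    Each record is (income_band, sector, award). Each cell reports the
    number of recipients, total program funds, the cell's share of all
    program funds, and the average award.
    """
    cells = defaultdict(lambda: {"n": 0, "funds": 0.0})
    total = 0.0
    for income, sector, award in recipients:
        cell = cells[(income, sector)]
        cell["n"] += 1
        cell["funds"] += award
        total += award
    for cell in cells.values():
        cell["share"] = cell["funds"] / total if total else 0.0
        cell["avg"] = cell["funds"] / cell["n"]
    return dict(cells)

def compare(baseline, proposal):
    """Change in each cell's share of program funds, proposal minus baseline."""
    keys = set(baseline) | set(proposal)
    return {k: proposal.get(k, {}).get("share", 0.0)
               - baseline.get(k, {}).get("share", 0.0)
            for k in keys}
```

Running `tabulate` once on the status quo and once on the proposed program, then calling `compare`, shows at a glance whether shares are shifting toward the low-income, low-cost cells or away from them.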
The benefits of obtaining unanimous agreement to use this framework in the policy formulation process were profound. For each alternative proposal, policymakers had at their disposal: any and all changes made to the underlying demographic assumptions of the model; the complete set of all proposed program changes; and the impact on students, institutions, and taxpayers of implementing the changes. One major benefit of using the framework was minimizing, if not wholly excluding, obviously self-serving proposals that ran counter to any reasonable interpretation of program intent. Occasionally, however, such a proposal would slip through, to the great amusement of n-1 participants. (Wow, you really hate community college students, don’t you?)
Use of the framework had another important advantage: advocacy (nothing wrong with that!) could be quickly distinguished from analysis. Advocates, analysts, and the all-too-familiar hybrids who wear multiple hats had the same information. The playing field was level, with everyone’s cards face up on the table. When used to identify and compare equal-cost options that held total funding constant and redistributed different shares to participants, a sometimes unsettling zero-sum game unfolded in which losses had to finance gains.
Lively discussions ensued. Some had to be taken outside.
It is important to note that the framework did not provide estimates of the likely impact on student outcomes, that is, actual changes in enrollment and persistence behavior. At the time, there were no reliable data to build into the model that predicted student behavior – particularly any induced positive or negative enrollment effects of the proposal.
But this early effort to standardize at least the analytical portion of the policy process was a resounding success: without these first-order estimates, winners and losers under proposed changes could not be identified, much less educated guesses made about how students might actually behave in response.
As another round of Higher Education Act reauthorization approaches, the higher education policy community, more than ever, needs to develop a similar analytical framework, underpinned by a more sophisticated computer model, driven by far richer data, containing more grant programs – federal, state, and institutional. Creating such a framework would not be all that difficult, the returns would again be enormous, and the data are available.
The table would display students, by family income and dependency, and all institutions, by type, control, and cost of attendance. Separate tables could be created at the program, institutional, state, and national level.
The effort should start with simple questions: What information should be displayed in the cells? Certainly it should include at least those in the simple table above. Should dependent and independent recipients be treated separately? Yes. Should merit-based grants be included? Probably. How about nontraditional students? Of course. You get the idea.
Given today’s budget battles, momentous zero-sum decisions that hold program funding constant will be made at the federal and state level – decisions that will dramatically affect the enrollment and persistence decisions of low- and middle-income students, and institutions as well. Without an agreed-upon framework with which to compare alternative proposals, at least as to who gains and who loses, policy discussions will proceed unproductively as if policymakers were starting from scratch, when, in fact, they are not. Without such a framework, discussions will fail to take properly into account the sobering reality that there are already programs in place that students, parents, and institutions count on, and that changes in existing programs will not only add to complexity and confusion but also have important tradeoffs and consequences.
Building the analytical framework should start now with the Pell Grant program. Given its central importance to millions of students and thousands of institutions, all legislative proposals to modify or alter the program should be specified and evaluated using an up-to-date version of a standard computer model that all stakeholders, including students, can use – a model that includes a common set of inputs and outputs. This is particularly important in the case of proposed changes that would condition the Pell award on the basis of data not currently collected and used in the calculation of award, expected family contribution, or student and institutional eligibility.
Examples include making the Pell award conditional on measures of merit or progress. In such cases, the source of the data must be specified, a new parameter created, and the impact of making the award conditional on that parameter estimated using the model. Winnings must be balanced with losses, and educated guesses must at least be considered about what will likely happen to students affected by the proposed change – particularly those who would lose much needed grant aid if the change were incorporated in the program.
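The shape of such an estimate can be sketched as follows. The progress threshold, the penalty, and the sample students are all invented for illustration; an actual analysis would draw the rule from the legislative proposal and the records from program data.

```python
def conditional_award(base_award, progress, threshold=0.67, penalty=1.0):
    """Hypothetical rule: a recipient below the progress threshold loses
    a fraction (`penalty`) of the base award. All parameters here are
    invented; the real rule would come from the proposal being modeled."""
    if progress >= threshold:
        return base_award
    return base_award * (1.0 - penalty)

def estimate_impact(students, rule):
    """Total winnings and losses relative to each student's current award.

    `students` is a list of (base_award, progress) pairs; `rule` is a
    function mapping (base_award, progress) to the new award.
    """
    gains = losses = 0.0
    for base, progress in students:
        delta = rule(base, progress) - base
        if delta >= 0:
            gains += delta
        else:
            losses += -delta
    return gains, losses
```

The point of the exercise is the final pair of numbers: under a zero-sum constraint, the gains column must be financed by the losses column, and the students in the losses column are the ones whose enrollment behavior most needs an educated guess.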
Perhaps most important, proposals whose cost and distributional analyses appear acceptable should be subjected to rigorous case-controlled testing with additional funds – holding students harmless – before implementation. Congress, the Administration, and state legislatures will certainly need this information to make decisions because redistributing a fixed amount of scarce need-based grant aid to meet national and state access and completion goals, while minimizing unintended harm to students and institutions, will be challenging.
Without the light that good data and analysis can shed on the effort, policymakers will again be dancing in the dark.
Bill Goggin is executive director of the Advisory Committee on Student Financial Assistance, an independent committee created by Congress in the Education Amendments of 1986 to provide technical, nonpartisan advice on student aid policy.