This month, Vice President Joe Biden led a round-table discussion with a group of college and university presidents from some of our nation’s largest institutions of higher education. The outcome of that meeting was an agreement by the leaders of 10 institutions or higher education systems to include a standardized “shopping sheet” in the financial aid packets sent to incoming students, beginning in the fall of 2013. A sample of the “shopping sheet,” which is designed to provide information relating to college costs, student indebtedness, and likelihood of degree completion, can be found here.
Though I recognize the alarming increase in college costs that has occurred during the last 15 years, and I applaud any honest effort to address this problem, I fear the “shopping sheet” fails to break new ground.
Transparency is a good thing, and students and parents should know what to expect when they select a college. The problems with the “shopping sheet,” however, are threefold.
First, this seems to be an attempt to repackage something that many colleges and universities are already doing. The College Portrait’s Voluntary System of Accountability (VSA) provides a more detailed and nuanced collection of pertinent information for those considering their college options. It includes costs related to tuition and fees, a personalized estimation of financial aid and loans, as well as details and data concerning admissions, campus life, student experiences/outcomes, and much more. The VSA is easy to navigate and also allows for comparison of institutions. Hundreds of colleges and universities are already participating in the VSA, and expansion of that number would be a positive step. Given the existence of the VSA, introduction of the “shopping sheet” seems a bit redundant and doesn’t offer any solution to the cost issue.
Second, the “shopping sheet” fails to address one of the hidden issues in the college-cost discussion -- time to degree. As I have discussed in the past, graduating on time dramatically reduces the total cost of college and increases one’s lifetime earning potential. Though the “shopping sheet” provides a snapshot of institutional and average 4-year graduation rates as well as student retention rates, this information is not sufficient for understanding the total cost/value proposition of attending a college. The College of New Jersey, where I serve as president, is one of only six public colleges and universities nationally that maintain 4-year graduation rates greater than 70 percent.
The reality is that most college students now take longer than 4 years to complete their degrees, or do not graduate at all. That makes 6-year graduation rates, which are included in the VSA but omitted from the “shopping sheet,” an important statistic for consideration. Other vital outcomes, such as post-graduate employment information, graduate school admission rates, and professional license or certification exam passage rates, are published on TCNJ’s admissions web site and in other locations. These data points can be very informative during the college-selection process but are currently overlooked by both the “shopping sheet” and the VSA. Inclusion of that information would be a strong enhancement.
Third, doing this sort of reporting through the “shopping sheet” or VSA or some other government-imposed mechanism, whether state or federal, forces colleges and universities to expend resources. The information provided in these reports can be very useful, but it does not get aggregated or analyzed unless institutions hire staff to do that work. Such spending is appropriate if it improves educational quality or increases effectiveness. Unfortunately, though collecting data and issuing reports may illustrate the cost problem, those actions will not solve it.
In order to actually address the college-cost issue, institutions must operate strategically and efficiently. They must manage course offerings in ways that optimize the deployment of faculty and staff, facilitate the attainment of learning outcomes, and provide students with access to the courses they need for timely degree completion. Institutions also must offer support services that undergird the academic experience, eliminate roadblocks, and enhance the prospects of students graduating on time. Therefore, neither institutions nor their students can afford unnecessary redundancy in the name of political one-upmanship.
I think we can all agree that colleges and universities should be open and honest with prospective students about the actual cost of attaining a degree, not just enrolling for a year. Providing information that allows for simple, accurate comparison of institutions is a worthwhile goal, but I believe adding a few data points to the VSA would be a better strategy than implementing the “shopping sheet.” It’s important to remember, though, that talking about and reporting on our affordability problem is not enough; we need to find ways to solve it.
R. Barbara Gitenstein is president of the College of New Jersey.
Over the last four decades, federal and state policy makers have wrestled with how to design student aid programs to make them fair, efficient, and effective – and how to evaluate and improve those programs, once in place.
Early on it was discovered that competing interests could easily overtake and dominate the policy formulation process. Unsupported claims that programs were inefficient, poorly targeted, or unfairly favored one type of student or institution over another were not uncommon. Even proposals that appeared to alter the intent of the program, disenfranchise a whole class of students, or undermine a particular type of institution were offered with no accompanying data analysis. Often developed behind closed doors, such proposals gave little consideration to the impact of the proposed change on the enrollment, persistence, and completion behavior of affected students.
Over two decades ago, in an attempt to improve the policymaking process, a group of analysts in Washington put in place a nonpartisan, analytical framework to ensure that policymakers could understand the exact nature and likely impact of alternative proposals. The framework involved an agreement to use a standard computer model with known assumptions and populated with the best and most recent data. The model produced standard output when alternative program specifications were entered, such as changes in the maximum award, level of tuition sensitivity of the award, expected family contribution, and other program algorithms.
The output was a standard table that displayed the resulting changes in cells. A simplified version looked something like this:
Impact of Proposal on Students and Institutions

Columns: Type and Control of College

Data arrayed in each cell:
- Number of recipients
- Level of program funds
- Share of program funds
- Average award of dependent and independent students
The rows of the table (displayed on the left) represented levels of family income; the columns denoted institutions of different type, control, and cost of attendance. For example, cell A included the lowest-income recipients attending 2-year public colleges, cell B included their middle-income peers who attended 4-year public colleges, and cell C included their high-income peers who attended 4-year private colleges.
The bottom row contained program funds received, by type and control of institution. For example, cell D showed total program funds going to all other postsecondary institutions, and cell E showed total program costs. The remaining cells showed other combinations.
Within each cell (displayed on the right), the computer output would array the following data: number of recipients; level of program funds; share of program funds; and average award for dependent and independent students. Once this table was produced for the current programs, proposed changes could be entered into the model to produce a new table, for purposes of comparison to the benchmark table -- the status quo.
Proposals that did not significantly change the existing distribution of program funds, by family income and type of institution, as measured by the shares in the cells, were deemed neutral. Proposals that redistributed program funds toward the northwest portion of the table, that is, toward cell A, were deemed relatively consistent with program intent by most observers; while those that moved funds generally to the southeast portion of the table, toward Cell C, not so much. Even the most challenged participants got the hang of the exercise quickly.
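The benchmark-versus-proposal comparison described above can be sketched in a few lines of code. The following is a toy illustration only, not the actual model: the income rows, institution columns, dollar figures, and the one-percentage-point neutrality threshold are all hypothetical, chosen simply to show how shares in the cells could be compared against the status quo and a proposal classified as neutral, intent-consistent (funds moving toward cell A), or intent-inconsistent (funds moving toward cell C).

```python
# Toy sketch of the distributional comparison described in the article.
# All names, numbers, and thresholds are hypothetical illustrations.

INCOME_ROWS = ["low", "middle", "high"]                      # table rows
COLLEGE_COLS = ["2yr-public", "4yr-public", "4yr-private"]   # table columns

def shares(funds_by_cell):
    """Convert program dollars per cell into each cell's share of the total."""
    total = sum(funds_by_cell.values())
    return {cell: amount / total for cell, amount in funds_by_cell.items()}

def classify(benchmark, proposal, threshold=0.01):
    """Compare a proposal's fund shares against the status-quo benchmark.

    A gain in the 'northwest' cell (low income, 2-year public) reads as
    consistent with program intent; a gain in the 'southeast' cell
    (high income, 4-year private) reads as inconsistent.
    """
    b, p = shares(benchmark), shares(proposal)
    nw = ("low", "2yr-public")    # cell A in the article's example
    se = ("high", "4yr-private")  # cell C
    if all(abs(p[c] - b[c]) < threshold for c in b):
        return "neutral"
    if p[nw] - b[nw] >= threshold:
        return "consistent with program intent"
    if p[se] - b[se] >= threshold:
        return "inconsistent with program intent"
    return "mixed"

# Hypothetical program dollars (millions) per (income, institution) cell.
cells = [(r, c) for r in INCOME_ROWS for c in COLLEGE_COLS]
benchmark = dict(zip(cells, [30, 25, 10, 15, 10, 5, 2, 2, 1]))

# An equal-cost proposal shifting $10M from cell A toward cell C.
proposal = dict(benchmark)
proposal[("low", "2yr-public")] -= 10
proposal[("high", "4yr-private")] += 10

print(classify(benchmark, benchmark))  # neutral
print(classify(benchmark, proposal))   # inconsistent with program intent
```

Holding total funding constant, as in the equal-cost options the article mentions, makes the comparison a zero-sum exercise: any share gained in one cell must be financed by a loss elsewhere, which is exactly what the share table makes visible.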
The benefits of obtaining unanimous agreement to use this framework in the policy formulation process were profound. For each alternative proposal, policymakers had at their disposal: any and all changes made to the underlying demographic assumptions of the model; the complete set of all proposed program changes; and the impact on students, institutions, and taxpayers of implementing the changes. One major benefit of using the framework was minimizing, if not wholly excluding, obviously self-serving proposals that ran counter to any reasonable interpretation of program intent. Occasionally, however, such a proposal would slip through, to the great amusement of n-1 participants. (Wow, you really hate community college students, don’t you?)
Use of the framework had another really important advantage. Advocacy (nothing wrong with that!) could be quickly distinguished from analysis. Advocates, analysts, and the all-too-familiar hybrids, who wear multiple hats, had the same information. There was an even playing field with everyone’s cards in full sight on the table. When used to identify and compare equal cost options that held total funding constant and redistributed different shares to participants, a sometimes unsettling zero-sum game unfolded in which losses had to finance gains.
Lively discussions ensued. Some had to be taken outside.
It is important to note that the framework did not provide estimates of the likely impact on student outcomes, that is, actual changes in enrollment and persistence behavior. At the time, there were no reliable data to build into the model that predicted student behavior – particularly any induced positive or negative enrollment effects of the proposal.
But this early effort to standardize at least the analytical portion of the policy process was a resounding success, because without these first-order estimates, winners and losers under proposed changes could not have been identified, much less educated guesses made about how students might actually behave in response.
As another round of Higher Education Act reauthorization approaches, the higher education policy community, more than ever, needs to develop a similar analytical framework, underpinned by a more sophisticated computer model, driven by far richer data, containing more grant programs – federal, state, and institutional. Creating such a framework would not be all that difficult, the returns would again be enormous, and the data are available.
The table would display students, by family income and dependency, and all institutions, by type, control, and cost of attendance. Separate tables could be created at the program, institutional, state, and national level.
The effort should start with simple questions: What information should be displayed in the cells? Certainly it should include at least those in the simple table above. Should dependent and independent recipients be treated separately? Yes. Should merit-based grants be included? Probably. How about nontraditional students? Of course. You get the idea.
Given today’s budget battles, momentous zero-sum decisions that hold program funding constant will be made at the federal and state level – decisions that will dramatically affect the enrollment and persistence decisions of low- and middle-income students, and institutions as well. Without an agreed-upon framework with which to compare alternative proposals, at least as to who gains and who loses, policy discussions will proceed unproductively as if policymakers were starting from scratch, when, in fact, they are not. Without such a framework, discussions will fail to take properly into account the sobering reality that there are already programs in place that students, parents, and institutions count on, and that changes in existing programs will not only add to complexity and confusion but also have important tradeoffs and consequences.
Building the analytical framework should start now with the Pell Grant program. Given its central importance to millions of students and thousands of institutions, all legislative proposals to modify or alter the program should be specified and evaluated using an up-to-date version of a standard computer model that all stakeholders, including students, can use – a model that includes a common set of inputs and outputs. This is particularly important in the case of proposed changes that would condition the Pell award on the basis of data not currently collected and used in the calculation of award, expected family contribution, or student and institutional eligibility.
Examples include making the Pell award conditional on measures of merit or progress. In such cases, the source of the data must be specified, a new parameter created, and the impact of making the award conditional on that parameter estimated using the model. Gains must be balanced against losses, and educated guesses must at least be made about what would likely happen to students affected by the proposed change -- particularly those who would lose much-needed grant aid if the change were incorporated into the program.
Perhaps most important, proposals whose cost and distributional analyses appear acceptable should be subjected to rigorous case-controlled testing with additional funds – holding students harmless – before implementation. Congress, the Administration, and state legislatures will certainly need this information to make decisions because redistributing a fixed amount of scarce need-based grant aid to meet national and state access and completion goals, while minimizing unintended harm to students and institutions, will be challenging.
Without the light that good data and analysis can shed on the effort, policymakers will again be dancing in the dark.
Bill Goggin is executive director of the Advisory Committee on Student Financial Assistance, an independent committee created by Congress in the Education Amendments of 1986 to provide technical, nonpartisan advice on student aid policy.
Submitted by Tim Bishop on March 15, 2012 - 3:00am
America’s leadership in the global economy depends on a highly skilled, highly educated workforce. That’s why taxpayers support aid for higher education. But taxpayers rightly demand that their dollars be spent only on bona fide educational programs, and students deserve the opportunity to be educated, not just enrolled.
The Department of Education has a fundamental responsibility to taxpayers and students to make sure aid dollars are spent appropriately. Therefore, I and many of my colleagues have deep concerns about legislation passed last month in the House of Representatives that would limit effective oversight of the nearly $200 billion in student financial aid granted or guaranteed each academic year by the federal government.
Known as the “Protecting Academic Freedom in Higher Education Act” (H.R. 2117), the bill takes aim at the federal minimum standard for a “credit hour,” the basic unit for evaluating instructional programs. The bill would not only repeal the current regulations on what constitutes a credit hour, which are eminently reasonable, but also prohibit the Secretary of Education from ever promulgating such a regulation in the future.
I am also concerned by the bill’s repeal of the Department of Education’s state authorization regulations, including vital consumer protections. The federal government has always required that colleges and universities be authorized by their states in order to receive federal aid funds, and current regulations stipulate that states must have a process in place to evaluate student complaints. Both of these common-sense requirements would be repealed by the bill.
Under the bill, for example, my home state of New York would have no recourse if a university based in a state with less stringent quality and curriculum requirements began operating a distance learning program that enrolled New York students. In short, the bill would make it impossible for states to guarantee the quality of programs operating inside their borders.
Federal regulations define an academic year as consisting of 24 to 36 credit hours and mandate that a student must carry at least 6 credit hours to be minimally eligible for financial aid. So if this bill became law, the government would determine eligibility for financial aid using a unit that is completely and permanently undefined. That situation is not only nonsensical, it also represents a threat to the government’s ability to police institutional fraud in the higher education industry.
Two years ago, the Department of Education's inspector general found that some colleges were awarding students more credits than they had actually earned, which allowed the institutions to collect more financial aid than they deserved. In response, the Department of Education formulated a reasonable minimum standard for the credit hour based on the so-called “Carnegie Definition” of instructional units, which has been widely used for decades. The federal regulation is also virtually identical to a regulation in place in New York State since 1976.
The regulation defines a credit hour as an amount of work represented in intended learning outcomes and verified by evidence of student achievement -- an institutionally established equivalency reasonably approximating not less than one hour of classroom instruction per week for 15 weeks for each credit hour awarded. The regulation’s use of the phrase “institutionally established equivalency” places the responsibility for determining what a credit hour is, within the context of a broad federal framework, where it belongs -- with the faculty and with the accreditor of that particular institution.
I am very familiar with New York’s regulation, as I administered a college on Long Island for many years before I was elected to Congress.
Our cost of compliance with the credit-hour regulation was exactly zero, and we were able to create innovative programs including a semester at sea, cooperative education, internships, and courses that met in compacted time formats for 4 and 5 weeks -- all because we established an institutional equivalency that was sanctioned by our faculty, our accreditors, and -- for that matter -- the regulators in the State of New York.
The contention that this regulation stifles academic freedom and innovation is disproved by the record of New York’s internationally prominent colleges and universities over the past 35 years. The argument that it adds to the length of time students must spend in their degree program is simply not true.
What the regulation actually does is protect students and taxpayers from bad actors in the higher education industry who seek to profit from federal student aid funding while providing a substandard education to students. Furthermore, the permanent prohibition on regulating credit hours is effectively an invitation to future waste, fraud and abuse in aid programs.
The explosive growth in recent years of for-profit colleges, distance and online learning programs, and other nontraditional means of providing educational services demands stronger oversight, not weaker. I am hopeful the Senate will reject this bill.