The University of California System is turning to celebrities for a new crowdfunding approach to raising money for financial aid, The Los Angeles Times reported. Celebrities are pledging access and performances if their supporters raise set amounts of money. The actor Jamie Foxx, for example, will "rap a song like Bill Clinton, President Obama and Mo'Nique from the movie 'Precious'" if his fans raise $20,000. The idea is to attract young alumni and others who are not interested in traditional fund-raising appeals, officials said.
Adrian College has announced that it will repay all or part of the student loans of new graduates who fail to get jobs paying at least $37,000. Under the plan, the college will make all loan repayments due for graduates who don't have a job paying at least $20,000, and a portion of the repayments for those with salaries between $20,000 and $37,000. The idea behind the program, called Adrian Plus, is to reassure students and families that they can attend a private liberal arts college without fear of debt they can't manage upon graduation. Adrian officials stressed that, based on past patterns, the vast majority of students won't need to participate in the program.
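The sliding-scale structure of Adrian Plus can be sketched as a short function. Note one assumption: the article does not say how the college calculates the partial share between the two salary thresholds, so this sketch phases it out linearly for illustration.

```python
def adrian_plus_share(salary: float,
                      full_cover_cap: float = 20_000,
                      phase_out_cap: float = 37_000) -> float:
    """Fraction of a graduate's loan payments the college would cover.

    Below $20,000 the college covers everything; at $37,000 and above it
    covers nothing. Between the two, this sketch assumes the share phases
    out linearly (the exact formula is unstated in the article).
    """
    if salary <= full_cover_cap:
        return 1.0
    if salary >= phase_out_cap:
        return 0.0
    return (phase_out_cap - salary) / (phase_out_cap - full_cover_cap)

print(adrian_plus_share(15_000))   # 1.0  (full coverage)
print(adrian_plus_share(28_500))   # 0.5  (midpoint of the phase-out band)
print(adrian_plus_share(40_000))   # 0.0  (no coverage)
```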
Submitted by Paul Fain on September 11, 2013 - 3:00am
An official with the U.S. Department of Education on Tuesday suggested that a panel of negotiators consider including a program-level cohort default rate as part of proposed gainful employment regulations, which would measure the employment outcomes of vocational programs at for-profit institutions and community colleges. That metric would be a new addition to an annual debt-to-income ratio and a discretionary income ratio.
John Kolotos, the official, who is a negotiator for the rule-making session that began this week, said the department had not vetted the details of how a loan default rate would work. But the department already has an institution-level rate in place, and he said the feds consider a three-year program-level rate of 30 percent (and a one-year rate of 40 percent) to be a "viable addition" to gainful employment. It would be a stand-alone measure, he said, meaning academic programs would lose eligibility for federal aid programs if they crossed the threshold, regardless of how they perform on other measures.
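A stand-alone test of the kind Kolotos described could be checked per program roughly as follows. This is a sketch, not the department's actual rule: the thresholds come from the figures quoted above, and since the article leaves unstated how the three-year and one-year tests combine, the sketch assumes crossing either one triggers the loss of eligibility.

```python
def loses_eligibility(three_year_rate: float, one_year_rate: float,
                      three_year_limit: float = 0.30,
                      one_year_limit: float = 0.40) -> bool:
    """Return True if a program crosses either default-rate threshold.

    Stand-alone means crossing a threshold costs the program federal
    aid eligibility regardless of its debt-to-income performance.
    The either/or combination here is an assumption for illustration.
    """
    return (three_year_rate >= three_year_limit
            or one_year_rate >= one_year_limit)

print(loses_eligibility(0.31, 0.10))  # True: three-year rate over 30%
print(loses_eligibility(0.25, 0.35))  # False: under both thresholds
```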
Investment in 529 college savings and prepaid tuition plans reached a record level in the first six months of 2013, according to a midyear report released Tuesday by the College Savings Plan Network.
Total investment in 529 plans reached $205 billion and the number of open 529 accounts increased to 11.43 million as of June 30, 2013, up from 10.74 million in December 2011.
“As families reach a crossroad about the value of higher education, our mid-year report finds that they have continued their commitment to save for and invest in college education,” Hon. Michael L. Fitzgerald, Chair of the College Savings Plans Network and State Treasurer of Iowa, said in the report. “The steady increase of total assets, account size and contributions in 529 plans are positive signs that Americans recognize saving for college as a long-term commitment and investment. Fostering this mid-year success, if the government undertakes the new initiative to slow tuition increases, then families’ hard earned 529 savings will go further to cover more college costs.”
529 plans are offered in 49 states and the District of Columbia.
WASHINGTON -- The U.S. Education Department’s attempts to regulate colleges and universities over the past several years provide good protections for students and taxpayers, the department’s independent investigatory arm has concluded.
The report by the department’s inspector general was released on the second day of a negotiated rule-making hearing aimed at rewriting the department’s controversial gainful employment regulations. It finds that some type of gainful employment metrics are needed to hold colleges accountable and to protect taxpayer money. The report also applauds the department’s efforts to define a credit hour and require institutions of higher education to be authorized by the state in which they operate.
The inspector general’s office relied on its previous audits and investigations to produce the analysis. It did not appear to evaluate the impact of the regulations or weigh alternative rule proposals.
Representative George Miller, the ranking Democrat on the House education committee, sought the study from the Education Department’s inspector general in response to legislation being pushed by House Republicans to repeal those regulations and prohibit the Obama administration from enacting new ones. The proposal cleared the Republican-led House education committee in July on a mostly party-line vote, with one Democrat supporting the measure.
Submitted by Ben Miller on September 3, 2013 - 3:00am
After a month of speculation, President Obama unveiled his plan to “shake up” higher education last week. As promised, the proposal contained some highly controversial elements, none greater than an announcement that the U.S. Department of Education will begin to rate colleges and universities in 2015 and tie financial aid to those ratings three years later. The announcement prompted the typical clichéd Beltway commentary from the higher education industry about “the devil is in the details” and the need to avoid “unintended consequences,” which should rightly be translated as, “We are not going to object outright now, while everyone’s watching, but will instead nitpick the plan to death later.”
But the ratings threat is more substantive than past announcements to put colleges “on notice,” if for no other reason than it is something the department can do without Congressional approval. Though it cannot actually tie aid received directly to these ratings without lawmakers (and the threat to do so would occur after Obama leaves office), the department can send a powerful message both to the higher education community and consumers nationwide by publishing these ratings.
Ratings systems, however, are no easy matter and require lots of choices in their methodologies. With that in mind, here are a few recommendations for how the ratings should work.
Ratings aren’t rankings.
Colleges have actually rated themselves in various forms for well over a hundred years. The Association of American Universities is an exclusive club of the top research universities that formed in 1900. The more in-depth Carnegie classifications, which group institutions based upon their focus and types of credentials awarded, have been around since the early 1970s. Though they may not be identified as such by most people, they are forms of ratings — recognitions of the distinctions between universities by mission and other factors.
A federal rating system should be constructed similarly. There’s no reason to bother with ordinal rankings like those of U.S. News & World Report, because distinguishing among a few top colleges is less important than sorting out those that really are worse than others. Groupings that are narrow enough to recognize differences but broad enough to represent a meaningful sample are the way to go. The Department could even consider letting colleges choose their initial groupings, as some already do for the data feedback reports the Department produces through the Integrated Postsecondary Education Data System (IPEDS).
It’s easier to find the bottom tail of the distribution than the middle or top.
There are around 7,000 colleges in this country. Some are fantastic world leaders. Others are unmitigated disasters that should probably be shut down. But the vast majority fall somewhere in between. Sorting out the middle part is probably the hardest element of a ratings system — how do you discern within averageness?
We probably shouldn’t. A ratings system should sort out the worst of the worst by setting minimum performance standards on a few clear measures. It would clearly demonstrate that there is some degree of results so bad that it merits being rated poorly. This standard could be excessively, laughably low, like a 10 percent graduation rate. Identifying the worst of the worst would be a huge step forward from what we do now. An ambitious ratings system could do the same thing on the top end using different indicators, setting very high bars that only a tiny handful of colleges would reach, but that’s much harder to get right.
Don’t let calls for the “right” data be an obstructionist tactic.
Hours after the President’s speech, representatives of the higher education lobby stated the administration’s ratings “have an obligation to perfect data.” It’s a reasonable requirement that a rating system not be based only on flawed measures, like holding colleges accountable just for the completion of first-time, full-time students. But the call for perfect data is a smokescreen for intransigence by setting a nearly unobtainable bar. Even worse, the very people calling for this standard are the same ones representing the institutions that will be the biggest roadblock to obtaining information fulfilling this requirement. Having data demands come from those keeping it hostage creates a perfect opportunity for future vetoes in the name of making perfect be the enemy of the good. It’s also a tried and true tactic from One Dupont Circle. Look at graduation rates, where the higher education lobby is happy to put out reports critiquing their accuracy after getting Congress to enact provisions that banned the creation of better numbers during the last Higher Education Act reauthorization.
To be sure, the Obama administration has an obligation to engage in an open dialogue with willing partners to make a good faith effort at getting the best data possible for its ratings. Some of this will happen anyway thanks to improvements to the department’s IPEDS database. But if colleges are not serious about being partners in the ratings and refuse to contribute the data needed, they should not then turn around and complain about the results.
Stick with real numbers that reflect policy goals.
Input-adjusted metrics are a wonk’s dream. Controlling for factors and running regressions get us all excited. But they’re also useless from a policy implementation standpoint. Complex figures that account for every last difference in institutions will contextualize away all meaningful information until all that remains is a homogenous jumble where everyone looks the same. Controlling for socioeconomic conditions also runs the risk of just inculcating low expectations for students based upon their existing results. Not to mention any modeling choices in an input-adjusted system will add another dimension of criticism to the firestorm that will already surround the measures chosen.
That does not mean context should be ignored. There are just better ways to handle it. First and foremost is basing ratings on performance relative to peers. Well-crafted peer comparisons can accomplish largely the same thing as input adjustment, since institutions would be facing similar circumstances, while still relying on straightforward figures. Second, unintended consequences should be addressed by measuring them with additional metrics and clear goals. For example, afraid that focusing on a college's completion rate will discourage enrolling low-income students or unfairly penalize institutions that serve large numbers of them? The ratings should give institutions credit for the socioeconomic diversity of their student body, require a minimum percentage of Pell students, and break out the completion rate by family income. Doing so not only provides a backstop against gaming, it also lays out clearer expectations to guide colleges' behavior. The U.S. News rankings experience has shown that colleges know how to respond to such expectations, even less useful ones like alumni giving (sorry, Brown, for holding you back on that one).
Mix factors a college can directly control with ones it cannot.
Institutions have an incentive to improve on measures included in a rating system. But some subset of colleges will also try to evade or “game” the measure. This is particularly true if it’s something under their control — look at the use of forbearances or deferments to avoid sanctions under the cohort default rate. No system will ever be able to fully root out gaming and loopholes, but one way to adjust for them is by complementing measures under a college’s control with ones that are not. For example, concerns about sacrificing academic quality to increase graduation rates could be partially offset by adding a focus on graduates’ earnings or some other post-completion behavior that is not under the college’s control. Institutions will certainly object to being held accountable for things they cannot directly influence. But basing the uncontrollable elements on relative instead of absolute performance should further ameliorate this concern.
Focus on outputs but don’t forget inputs.
Results matter. An institution that cannot graduate its students or avoid saddling them with large loan debts they cannot repay upon completion is not succeeding. But a sole focus on outputs could encourage an institution to avoid serving the neediest students as a way of improving its metrics and undermine the access goals that are an important part of federal education policy.
To account for this, a ratings system should include a few targeted input metrics that reflect larger policy goals like socioeconomic diversity or first-generation college students. Giving colleges “credit” in the ratings for serving the students we care most about will provide at least some check against potential gaming. Even better, some metrics should have a threshold a school has to reach to avoid automatic classification into the lowest rating.
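The combination of output floors and an input (access) floor described above amounts to a simple classification rule. A minimal sketch, with entirely hypothetical metric names and threshold values except the 10 percent graduation-rate floor mentioned earlier:

```python
def lowest_rating(metrics: dict) -> bool:
    """Return True if a college falls into the lowest rating band.

    Fail any floor and the institution is automatically classified at
    the bottom. Only the graduation-rate floor comes from the text
    ("excessively, laughably low"); the others are hypothetical.
    """
    floors = {
        "graduation_rate": 0.10,  # the deliberately low floor from the text
        "repayment_rate": 0.20,   # hypothetical student-loan success floor
        "pell_share": 0.05,       # hypothetical minimum Pell enrollment
    }
    return any(metrics.get(name, 0.0) < floor
               for name, floor in floors.items())

print(lowest_rating(
    {"graduation_rate": 0.08, "repayment_rate": 0.50, "pell_share": 0.20}
))  # True: graduation rate below the floor
print(lowest_rating(
    {"graduation_rate": 0.45, "repayment_rate": 0.50, "pell_share": 0.20}
))  # False: clears every floor
```

The point of structuring it this way, rather than as a weighted score, is the one the piece makes: floors identify the worst of the worst without forcing fine-grained distinctions in the middle.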
Put it together.
A good ratings system is both consistent and iterative. It keeps the core pieces the same year to year but isn’t too arrogant to include new items and tweak ones that aren’t working. These recommendations offer a place to start. Group the schools sensibly — maybe even rely on existing classifications like those done by Carnegie. The ratings should establish minimum performance thresholds on the metrics we think are most indicative of an unsuccessful institution — things like completion rates, success with student loans, time to degree, etc. They should consist of outcomes metrics that reflect institutions’ missions — such as transfer success for two-year schools, licensure and placement for vocational offerings, and earnings, completion and employment for four-year colleges and universities. But they should also have separate metrics to acknowledge policy challenges we care about — success in serving Pell students, the ability to get remedial students college-ready, socioeconomic diversity, etc. — to discourage creaming. The result should be something that reflects values and policy challenges, anticipates attempts to find workarounds, and refrains from dissolving into wonkiness and theoretical considerations that are divorced from reality.
Ben Miller is a senior policy analyst in the New America Foundation's education policy program, where he provides research and analysis on policies related to postsecondary education. Previously, Miller was a senior policy advisor in the Office of Planning, Evaluation, and Policy Development in the U.S. Department of Education.
A student at St. Louis Community College was arrested Wednesday for a "violent" threat against the financial aid office, authorities said, The St. Louis Post-Dispatch reported. The Twitter message said that she was so frustrated with the financial aid office that she wanted to kill someone. The tweet didn't name an individual. College officials discovered the post through regular monitoring of social media about the college.
A federal program that provides student veterans with on-campus educational and career counseling will nearly triple its footprint across the country this fall, the Department of Veterans Affairs announced Thursday. Under a program called VetSuccess on Campus, the V.A. plans to provide 62 more campuses with counselors, on top of the existing 32 institutions already participating in the program.
The counselors help veterans navigate their educational and medical benefits. The institutions selected for expansion include about a dozen large public universities, some community colleges and several private institutions.
In unveiling his ambitious higher education plan last week, President Obama once again framed his desire to make college more affordable as a personal mission, reminding the audience at the State University of New York at Buffalo of his own experience with a hefty load of student loan debt.
Obama took out $42,753 in loans to pay his tuition at Harvard Law School, the Chicago Sun-Times reported. First Lady Michelle Obama went $40,762 in debt to finance her Harvard Law education. It was not until after Obama signed a $1.9 million book deal in 2004 -- the year he was elected to the U.S. Senate -- that the couple paid off all of their student loans, according to the Sun-Times. The Obamas’ law school debt came on top of their existing undergraduate loans (he from Occidental College and Columbia University and she from Princeton University) and pushed their combined outstanding balance at graduation above $120,000, Obama has previously said.
Both the president and first lady also attended law school for three years -- an amount of time that Obama last week urged law schools to consider shortening to two years to reduce the cost for students.