Submitted by Paul Fain on September 11, 2013 - 3:00am
An official with the U.S. Department of Education on Tuesday suggested that a panel of negotiators consider including a program-level cohort default rate as part of proposed gainful employment regulations, which would measure the employment outcomes of vocational programs at for-profit institutions and community colleges. That metric would be a new addition to an annual debt-to-income ratio and a discretionary income ratio.
John Kolotos, the official, who is a negotiator for the rule-making session that began this week, said the department had not vetted the details on how a loan default rate would work. But the department already has an institution-level rate in place, and he said the feds consider a three-year program-level rate of 30 percent (and one year at 40 percent) to be a "viable addition" to gainful employment. It would be a stand-alone measure, he said, meaning academic programs would lose eligibility for federal aid programs if they crossed the threshold, regardless of how they perform on other measures.
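As a rough illustration only (the department said it has not vetted the details, and the function name and logic below are assumptions modeled on the existing institution-level rule), a stand-alone test along these lines could be sketched as:

```python
def loses_eligibility(annual_cdrs, multi_year_limit=0.30, single_year_limit=0.40):
    """Return True if a program would fail a stand-alone cohort
    default rate test: any single cohort year above 40 percent,
    or three consecutive years at or above 30 percent.
    annual_cdrs is a chronological list of yearly default rates."""
    if any(rate > single_year_limit for rate in annual_cdrs):
        return True
    # Check every consecutive three-year window.
    for i in range(len(annual_cdrs) - 2):
        if all(rate >= multi_year_limit for rate in annual_cdrs[i:i + 3]):
            return True
    return False
```

Under this sketch, a program at 32, 31 and 30 percent over three consecutive years would lose aid eligibility regardless of how it performed on the debt-to-income measures; a program that spiked to 35 percent for a single year would not.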
Investment in 529 college savings and prepaid tuition plans reached a record level in the first six months of 2013, according to a midyear report released Tuesday by the College Savings Plan Network.
Total investment in 529 plans reached $205 billion and the number of open 529 accounts increased to 11.43 million as of June 30, 2013, up from 10.74 million in December 2011.
“As families reach a crossroad about the value of higher education, our mid-year report finds that they have continued their commitment to save for and invest in college education,” Hon. Michael L. Fitzgerald, Chair of the College Savings Plans Network and State Treasurer of Iowa, said in the report. “The steady increase of total assets, account size and contributions in 529 plans are positive signs that Americans recognize saving for college as a long-term commitment and investment. Fostering this mid-year success, if the government undertakes the new initiative to slow tuition increases, then families’ hard earned 529 savings will go further to cover more college costs.”
529 plans are offered in 49 states and the District of Columbia.
WASHINGTON -- The U.S. Education Department’s attempts to regulate colleges and universities over the past several years provide good protections for students and taxpayers, the department’s independent investigatory arm has concluded.
The report by the department’s inspector general was released on the second day of a negotiated rule-making hearing aimed at rewriting the department’s controversial gainful employment regulations. It finds that some type of gainful employment metrics are needed to hold colleges accountable and to protect taxpayer money. The report also applauds the department’s efforts to define a credit hour and require institutions of higher education to be authorized by the state in which they operate.
The inspector general’s office relied on its previous audits and investigations to produce the analysis. It did not appear to evaluate the impact of the regulations or weigh alternative rule proposals.
Representative George Miller, the ranking Democrat on the House education committee, sought the study from the Education Department’s inspector general in response to legislation being pushed by House Republicans to repeal those regulations and prohibit the Obama administration from enacting new ones. The proposal cleared the Republican-led House education committee in July on a mostly party-line vote, with one Democrat supporting the measure.
Submitted by Ben Miller on September 3, 2013 - 3:00am
After a month of speculation, President Obama unveiled his plan to “shake up” higher education last week. As promised, the proposal contained some highly controversial elements, none greater than an announcement that the U.S. Department of Education will begin to rate colleges and universities in 2015 and tie financial aid to those results three years later. The announcement prompted typical clichéd Beltway commentary from the higher education industry about “the devil is in the details” and the need to avoid “unintended consequences,” which should rightfully be translated as, “We are not going to outright object now when everyone’s watching but instead will nitpick to death later.”
But the ratings threat is more substantive than past announcements to put colleges “on notice,” if for no other reason than it is something the department can do without Congressional approval. Though it cannot actually tie aid received directly to these ratings without lawmakers (and the threat to do so would occur after Obama leaves office), the department can send a powerful message both to the higher education community and consumers nationwide by publishing these ratings.
Ratings systems, however, are no easy matter and require lots of choices in their methodologies. With that in mind, here are a few recommendations for how the ratings should work.
Ratings aren’t rankings.
Colleges have actually rated themselves in various forms for well over a hundred years. The Association of American Universities is an exclusive club of the top research universities that formed in 1900. The more in-depth Carnegie classifications, which group institutions based upon their focus and types of credentials awarded, have been around since the early 1970s. Though they may not be identified as such by most people, they are forms of ratings — recognitions of the distinctions between universities by mission and other factors.
A federal rating system should be constructed similarly. There’s no reason to bother with ordinal rankings like those of U.S. News and World Report, because distinguishing among a few top colleges is less important than sorting out those that really are worse than others. Groupings that are narrow enough to recognize differences but broad enough to represent a meaningful sample are the way to go. The Department could even consider letting colleges choose their initial groupings, as some already do for the data feedback reports the Department produces through the Integrated Postsecondary Education Data System (IPEDS).
It’s easier to find the bottom tail of the distribution than the middle or top.
There are around 7,000 colleges in this country. Some are fantastic world leaders. Others are unmitigated disasters that should probably be shut down. But the vast majority fall somewhere in between. Sorting out the middle part is probably the hardest element of a ratings system — how do you discern within averageness?
We probably shouldn’t. A ratings system should sort out the worst of the worst by setting minimum performance standards on a few clear measures. It would clearly demonstrate that some results are so bad that they merit a poor rating. This standard could be excessively, laughably low, like a 10 percent graduation rate. Identifying the worst of the worst would be a huge step forward from what we do now. An ambitious ratings system could do the same thing on the top end using different indicators, setting very high bars that only a tiny handful of colleges would reach, but that’s much harder to get right.
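A minimal sketch of such minimum performance standards, assuming hypothetical floor values (the 10 percent graduation rate mentioned above, plus an invented loan-repayment floor for illustration):

```python
# Hypothetical floors; the argument above is that deliberately low
# bars are enough to identify the worst of the worst.
FLOORS = {"graduation_rate": 0.10, "loan_repayment_rate": 0.20}

def bottom_tier(college):
    """Place a college in the lowest rating if it misses any floor.
    college is a dict of metric name to value; a missing metric
    is treated as zero, i.e. as an automatic failure."""
    return any(college.get(metric, 0.0) < floor
               for metric, floor in FLOORS.items())
```

Everything that clears every floor simply stays unsorted in the middle, which is the point: the system flags the bottom tail without pretending it can rank averageness.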
Don’t let calls for the “right” data be an obstructionist tactic.
Hours after the President’s speech, representatives of the higher education lobby stated the administration’s ratings “have an obligation to perfect data.” It’s a reasonable requirement that a rating system not be based only on flawed measures, like holding colleges accountable just for the completion of first-time, full-time students. But the call for perfect data is a smokescreen for intransigence that sets a nearly unobtainable bar. Even worse, the very people calling for this standard represent the institutions that will be the biggest roadblock to obtaining information that fulfills it. Having data demands come from those holding the data hostage creates a perfect opportunity for future vetoes in the name of making the perfect the enemy of the good. It’s also a tried and true tactic from One Dupont Circle. Look at graduation rates, where the higher education lobby is happy to put out reports critiquing their accuracy after getting Congress to ban the creation of better numbers during the last Higher Education Act reauthorization.
To be sure, the Obama administration has an obligation to engage in an open dialogue with willing partners to make a good faith effort at getting the best data possible for its ratings. Some of this will happen anyway thanks to improvements to the department’s IPEDS database. But if colleges are not serious about being partners in the ratings and refuse to contribute the data needed, they should not then turn around and complain about the results.
Stick with real numbers that reflect policy goals.
Input-adjusted metrics are a wonk’s dream. Controlling for factors and running regressions get us all excited. But they’re also useless from a policy implementation standpoint. Complex figures that account for every last difference in institutions will contextualize away all meaningful information until all that remains is a homogenous jumble where everyone looks the same. Controlling for socioeconomic conditions also runs the risk of just inculcating low expectations for students based upon their existing results. Not to mention any modeling choices in an input-adjusted system will add another dimension of criticism to the firestorm that will already surround the measures chosen.
That does not mean context should be ignored. There are just better ways to handle it. First and foremost is basing ratings on performance relative to peers. Well-crafted peer comparisons can accomplish largely the same thing as input adjustment, since institutions would be facing similar circumstances, while still relying on straightforward figures. Second, unintended consequences should be addressed by measuring them with additional metrics and clear goals. For example, afraid that focusing on a college's completion rate will discourage enrolling low-income students or unfairly penalize colleges that serve large numbers of them? The ratings should give institutions credit for the socioeconomic diversity of their student body, require a minimum percentage of Pell students, and break out the completion rate by family income. Doing so not only provides a backstop against gaming; it also lays out clearer expectations to guide colleges' behavior — something the U.S. News rankings experience shows colleges clearly know how to do, even with less useful measures like alumni giving (sorry, Brown, for holding you back on that one).
Mix factors a college can directly control with ones it cannot.
Institutions have an incentive to improve on measures included in a rating system. But some subset of colleges will also try to evade or “game” the measure. This is particularly true if it’s something under their control — look at the use of forbearances or deferments to avoid sanctions under the cohort default rate. No system will ever be able to fully root out gaming and loopholes, but one way to adjust for them is by complementing measures under a college’s control with ones that are not. For example, concerns about sacrificing academic quality to increase graduation rates could be partially offset by adding a focus on graduates’ earnings or some other post-completion behavior that is not under the college’s control. Institutions will certainly object to being held accountable for things they cannot directly influence. But basing the uncontrollable elements on relative instead of absolute performance should further ameliorate this concern.
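One simple way to express relative rather than absolute performance on an uncontrollable metric, such as graduate earnings, is the share of peers a college outperforms. This sketch is illustrative only; the function name and approach are assumptions, not anything proposed in the plan:

```python
def relative_performance(value, peer_values):
    """Fraction of peers in a college's comparison group that it
    outperforms on a given metric (e.g., median graduate earnings).
    Returns None when there are no peers to compare against."""
    if not peer_values:
        return None
    return sum(v < value for v in peer_values) / len(peer_values)
```

Rating a college on where it falls within its peer group, rather than against a fixed dollar threshold, blunts the objection that earnings depend on regional labor markets the institution cannot control.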
Focus on outputs but don’t forget inputs.
Results matter. An institution that cannot graduate its students or avoid saddling them with large loan debts they cannot repay upon completion is not succeeding. But a sole focus on outputs could encourage an institution to avoid serving the neediest students as a way of improving its metrics and undermine the access goals that are an important part of federal education policy.
To account for this, a ratings system should include a few targeted input metrics that reflect larger policy goals like socioeconomic diversity or first-generation college students. Giving colleges “credit” in the ratings for serving the students we care most about will provide at least some check against potential gaming. Even better, some metrics should have a threshold a school has to reach to avoid automatic classification into the lowest rating.
Put it together.
A good ratings system is both consistent and iterative. It keeps the core pieces the same year to year but isn’t too arrogant to add new items and tweak ones that aren’t working. These recommendations offer a place to start. Group the schools sensibly — perhaps even by relying on existing classifications like Carnegie’s. The ratings should establish minimum performance thresholds on the metrics most indicative of an unsuccessful institution — things like completion rates, success with student loans, and time to degree. They should consist of outcome metrics that reflect institutional missions — such as transfer success for two-year schools; licensure and placement for vocational offerings; and earnings, completion and employment for four-year colleges and universities. But they should also have separate metrics to acknowledge the policy challenges we care about — success in serving Pell students, the ability to get remedial students college-ready, socioeconomic diversity — to discourage creaming. The result should be something that reflects values and policy challenges, anticipates attempts to find workarounds, and refrains from dissolving into wonkiness and theoretical considerations divorced from reality.
Ben Miller is a senior policy analyst in the New America Foundation's education policy program, where he provides research and analysis on policies related to postsecondary education. Previously, Miller was a senior policy advisor in the Office of Planning, Evaluation, and Policy Development in the U.S. Department of Education.
A student at St. Louis Community College was arrested Wednesday for a "violent" threat against the financial aid office, authorities said, The St. Louis Post-Dispatch reported. The Twitter message said that she was so frustrated with the financial aid office that she wanted to kill someone. The tweet didn't name an individual. College officials discovered the post through regular monitoring of social media about the college.
A federal program that provides student veterans with on-campus educational and career counseling will nearly triple its footprint across the country this fall, the Department of Veterans Affairs announced Thursday. Under a program called VetSuccess on Campus, the V.A. plans to provide 62 more campuses with counselors, on top of the 32 institutions already participating in the program.
The counselors help veterans navigate their educational and medical benefits. The institutions selected for expansion include about a dozen large public universities, some community colleges and several private institutions.
In unveiling his ambitious higher education plan last week, President Obama once again framed his desire to make college more affordable as a personal mission, reminding the audience at the State University of New York at Buffalo of his own experience with a hefty load of student loan debt.
Obama took out $42,753 in loans to pay his tuition at Harvard Law School, the Chicago Sun-Times reported. First Lady Michelle Obama went $40,762 in debt to finance her Harvard Law education. It was not until after Obama signed a $1.9 million book deal in 2004 -- the year he was elected to the U.S. Senate -- that the couple paid off all of their student loans, according to the Sun-Times. The Obamas’ law school debt came on top of their existing undergraduate loans (he from Occidental College and Columbia University and she from Princeton University) and pushed their combined outstanding balance at graduation above $120,000, Obama has previously said.
Both the president and first lady also attended law school for three years -- an amount of time that Obama last week urged law schools to consider shortening to two years to reduce the cost for students.
President Obama has put forth a comprehensive plan to increase higher education value, holding colleges and universities accountable via a rating system based on the “outcomes” of access, graduation rates, graduate earnings and affordability.
It is hard to argue with the President’s intentions, or with the shove-rather-than-nudge strategy he employs, given higher education’s decades-long failure to rein in its costs or improve the success rate of its students. The plan affirms higher education’s crucial role in fostering economic and social progress, puts colleges and universities on notice that the time for systemic change is now, not tomorrow, and creates rewards and punishments for institutions and students alike.
The president’s plan largely fails, however, to appropriately tackle the more fundamental value issue -- far too little student learning. Myriad studies over the past several decades document that too little “higher” learning is taking place; college students do not make significant gains in critical thinking, problem solving, analytical reasoning, written communication skills, and ethical and moral development.
These are among the outcomes most observers claim to be the bedrock of higher learning necessary for work and careers in a 21st-century world economy. Indeed, a January 2013 Hart Research Associates survey of employers conducted for the Association of American Colleges and Universities found that 93 percent of corporate and business leaders believed “that a candidate’s demonstrated capacity to think critically, communicate clearly, and solve complex problems is more important than their undergraduate major.” Knowledge in specific fields, ethical judgment and integrity, intercultural skills, and the capacity for continued new learning ranked almost as high. Employers complain, however, that far too few college graduates exhibit such learning.
By mostly leaving out incentives for this kind of higher learning, the president’s plan guarantees that colleges and universities will focus on myopic metrics of success. I say “myopic” not because better graduation rates, decent post-graduation salaries, and lowered costs are unworthy goals but because they are not substitutes for higher education’s essential purpose.
Institutions follow rewards. The plan unwittingly will steer colleges and universities further away from higher learning to meet only easily measured goals, just as No Child Left Behind reduced school outcomes to narrow, short-answer test responses and the U.S. News and World Report rankings powerfully shaped academe’s cowardly obeisance to its learning-irrelevant criteria.
A college education that fails to ensure enduring higher learning is not worth the cost at any price. Only the cost, not the learning, is higher, thereby yielding low value. And lowering cost by installing a more technologically enabled educational assembly line simply makes low value more affordable.
The academy is responsible for so little learning because its normative model of teaching, learning and assessment is ineffective. It increasingly has left teaching to contingent or adjunct faculty, lowered expectations and standards, allowed minimum student effort to be rewarded with inflated grades, and constructed a laissez-faire smorgasbord curriculum feeding incoherent learning.
Using disconnected course credit compilation and degree attainment as surrogates for serious learning, higher education has ignored its own scholarly research showing that core outcomes (e.g., critical thinking, effective written and oral communication, applying knowledge to solve problems, ethical integrity) are by nature cumulative, not attainable in any one or two required courses or random out-of-classroom learning experiences.
One or two freshman writing seminars are not sufficient to produce competent writers. A required general education course in critical thinking does not by itself teach someone how to evaluate the credibility of information and solve problems. A course in ethics does not make a moral person. The national average of 10-13 hours of homework per week is not adequate for deep and abiding learning.
Learning becomes cumulative when the faculty (a collective noun) acts collectively to ensure that all coursework and majors share, reinforce and appropriately assess higher learning, a process that intentionally progresses each year in complexity, adequacy, and sophistication.
Only a small minority of colleges and universities perform at this level currently because academic cultural barriers like allegiance to department and discipline rather than institution, privileging research over teaching, and abhorrence of the additional faculty effort required to teach for and assess deep and complex learning effectively inhibit necessary collective action. In this sense, higher education is its own worst enemy, and lowering costs, for example, while necessary, will not fix the problem.
The president has opened up an important national debate about higher education value and he has asked for responses. But the value he demands is too little. What is most needed is institutional culture change and neither federal nor state mandates will get us there.
Colleges and universities have the knowledge and talent to drive the needed changes. The president, and the nation, would be well-served if we heard a credible, collective and sustained response from higher education offering solutions to the issues of cost, access and graduation rates and, more crucially, systemic ways to improve higher learning.
Such a response would be supported enthusiastically by the philanthropic and corporate sectors. Anything less will create a vacuum of educational leadership that federal and state political leaders -- all of them our college graduates -- will quickly and inexorably fill. Need I say more?
Richard Hersh is a senior consultant with Keeling & Associates and co-author of We’re Losing Our Minds: Rethinking American Higher Education (Palgrave Macmillan, 2012).