There is a growing belief in higher education that if colleges don’t figure out how to measure the quality and value of their product, lawmakers will do it for them. Eighteen institutions are trying to get ahead of that accountability push with the release today of a new set of performance measures.
The Voluntary Institutional Metrics Project was more than two years in the making. It seeks to give a “holistic” view of the performance of private nonprofits, for-profits, community colleges, online institutions and one research university that is a member of the Association of American Universities (see box).
While the 18 institutions cover every sector, most are community colleges or institutions steeped in distance education or other nontraditional forms of instruction.
The metrics include ways to access and analyze data in five areas: repayment and default rates on student loans, student progression and completion, institutional cost per degree, employment of graduates and student learning.
As is often the case with higher education data, the final product includes several gaps. Learning outcomes and employment data proved particularly problematic.
Participating colleges had hoped to cap the project by publicly releasing institutional “dashboards” based on the new metrics, according to several participants in the project. But the holes in the data were too large to take that last step.
Participants in the Voluntary Institutional Metrics Project
- Alamo Colleges
- Anne Arundel Community College
- Capella University
- Charter Oak State College
- DeVry University
- Excelsior College
- Ivy Tech Community College
- Johnson County Community College
- Kentucky Community and Technical College System
- Louisiana Community and Technical College System
- Regis University
- Rio Salado College
- Southern New Hampshire University
- University of Maryland University College
- University of Missouri at Columbia
- Walden University
- Western Governors University
- Western Kentucky University
“We couldn’t get there,” said Ed Klonoski, president of Charter Oak State College, one of the participating institutions. Klonoski and others said they were disappointed that the group couldn’t get past those hurdles to release finalized dashboards. But several colleges plan to do so soon.
“Nobody’s dashboard is perfect,” Klonoski said. “This is scary stuff for higher education.”
A group of five or so institutions first got the ball rolling by approaching the Bill & Melinda Gates Foundation a few years ago with the idea of creating relatively comprehensive, fair metrics without adding new administrative burdens for colleges.
Charter Oak was part of that initial group, which eventually grew to 18. HCM Strategists, a public policy and advocacy firm, helped run the Gates-funded project. HCM today released a report describing the metrics.
The goal of the project was to build on existing databases and other voluntary accountability systems, including those created by the National Association of Independent Colleges and Universities, the Association of Public and Land-grant Universities and the American Association of Community Colleges. This latest set is broader, participants said.
In addition, many of those previous data-driven efforts are aimed at students and their families. This set of common metrics, however, was designed with lawmakers as a primary audience. The reason, according to the report, is the often unwieldy nature of data collection that gets handed down from Washington or state capitals.
“Policy makers often seek data on too many variables, resulting in data overload and lack of focus,” the report said. “This sometimes leads to decisions based on anecdotal information.”
The project’s first step was to select five main areas to measure. The group then tapped top subject-matter experts to prepare papers on how to collect the data and present the metrics.
Several participants said one of the best solutions to emerge from the process was to create measures that take into account the relative advantages and academic preparation of the students colleges serve. This approach – measuring inputs as well as outputs – is often said to be missing from currently available measuring sticks for colleges.
By that argument, it is misleading to compare colleges that enroll a small number of students who are at risk of not being able to repay loans – Ivy League institutions, for example – with colleges that serve many low-income or disadvantaged students.
To level the playing field, the metric used here includes a predicted range where an institution’s loan repayment and default rates should fall. Variables that influence that range include the proportion of students eligible for Pell Grants and the percentage who receive federal loans, as well as other publicly available data points.
Each institution is rated against its predicted range on the dashboards, which is a more nuanced and informed way of holding them accountable, according to the report. Federal gainful employment regulations, for example, proposed uniform thresholds for all institutions (although many critics said those thresholds were too low).
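The report does not publish the underlying model, but the predicted-range idea amounts to regressing an outcome on institutional characteristics and then checking whether each college’s actual rate falls inside the band its inputs predict. A minimal sketch of that logic, with invented figures (none of these numbers or variable names come from the project):

```python
import numpy as np

# Hypothetical institutional data: share of Pell-eligible students,
# share borrowing federal loans, and observed loan repayment rate.
pell_share = np.array([0.20, 0.55, 0.70, 0.35, 0.60, 0.45, 0.25, 0.65])
loan_share = np.array([0.30, 0.60, 0.75, 0.40, 0.70, 0.50, 0.35, 0.72])
repay_rate = np.array([0.92, 0.71, 0.58, 0.85, 0.66, 0.78, 0.90, 0.62])

# Ordinary least squares: predict each college's repayment rate
# from its student-body characteristics.
X = np.column_stack([np.ones_like(pell_share), pell_share, loan_share])
coef, *_ = np.linalg.lstsq(X, repay_rate, rcond=None)

predicted = X @ coef
resid_sd = np.std(repay_rate - predicted, ddof=X.shape[1])

# A college is "within range" if its observed rate falls within
# roughly two residual standard deviations of its own prediction.
low, high = predicted - 2 * resid_sd, predicted + 2 * resid_sd
within_range = (repay_rate >= low) & (repay_rate <= high)
```

Under this kind of scheme, a policy maker compares each institution against its own predicted band rather than against a single uniform threshold.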
To measure efficiency, the dashboards include a cost-per-degree metric. Unlike other data sets, this one included operating costs but stripped out capital expenses, which can cloud the picture of what colleges spend to educate students.
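The report’s exact accounting definitions aren’t spelled out in this article, but the metric as described reduces to simple arithmetic: operating spending, minus capital outlays, divided by degrees awarded. All figures below are invented for illustration:

```python
# Hypothetical figures for one institution, in dollars.
operating_costs = 180_000_000   # total operating spending
capital_expenses = 25_000_000   # excluded: buildings, land, equipment
degrees_awarded = 4_200

# Cost-per-degree metric as described: operating costs only,
# with capital spending stripped out.
cost_per_degree = (operating_costs - capital_expenses) / degrees_awarded
print(round(cost_per_degree))  # → 36905
```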
The project also sought to use more comprehensive measures of college completion. It used metrics created by Complete College America and the National Governors Association as a starting point.
The final result includes part-time as well as full-time students. Student transfers also count, as do total credits attempted and time to a credential, which is similar to a measure the Association of Public and Land-grant Universities recently designed in coordination with other higher-education associations.
However, the burden of collecting the souped-up completion information is substantial, the report said. For example, eight-year graduation rates for bachelor's-degree programs are labor-intensive to track, particularly for large institutions.
Employment data is even trickier.
The approach was developed in consultation with Tony Carnevale, director of Georgetown University’s Center on Education and the Workforce. It connects higher education data with unemployment insurance information, analyzing wages and employment status one and five years after graduation. Whether students went on to graduate school after completing is also factored in.
However, only a few states and colleges currently connect those sources of data, according to the report. And there is no standardized approach to reporting employment outcomes.
The project also hit a brick wall with its attempt to set a standard for measuring student learning. It attempted to develop metrics for both core skills and major-specific -- or upper-division course equivalent -- learning. But the group was unable to adequately match program-level assessments with the current array of available testing.
Higher education should push hard for common standards and measurement tools that can track both employment and student learning, according to the report. In the meantime, the group plans to roll the new dashboards out to policy makers and the academy.
“These metrics, considered collectively, present a coherent picture about cost, quality, efficiency, effectiveness, student ability to finance college and student success in employment,” the report said. “They help determine how resources, including public investments, are used and whether credentials offer sufficient value to justify cost.”