Better measures of college performance
State lawmakers increasingly want to tie public funding of higher education to colleges' performance. But measuring sticks that reflect the differences between institutions and who they serve are hard to find.
HCM Strategists and the Bill & Melinda Gates Foundation are trying to fill that gap with a series of new research papers and issue briefs. The campaign, dubbed “Context for Success,” attempts to give policymakers and colleges tools to better judge what works in higher education.
For example, graduation rates are a common way of sizing up colleges. But missing from this and other popular “accountability” measures is detailed information about incoming students – such as their academic preparation and risk factors.
“Based on raw numbers, a college with a graduation rate of 80 percent might seem much better than one with 50 percent, and one whose graduates earn $40,000 a year better than one whose graduates earn $25,000,” according to the new report. “But the comparison will not be ‘apples to apples’ if the college with better results started with better-prepared entering students. Unfiltered comparisons are misleading and can lead to bad policy decisions, misguided student choices and counterproductive incentives.”
Of course, even the seven research papers fail to fully account for the enormous number of variables at play in student success rates at a college. But these methods get closer than much of what policymakers are currently using, according to the researchers. They enable comparisons between Red Delicious and Fuji apple varietals, one said, rather than between apples and oranges.
The papers center on three primary outcome measures: student progression and completion, labor market results and the direct assessment of student learning. Four of the seven papers look at graduation rates, and two are focused on community colleges. For-profit institutions were not included in the research, and the methods for comparison appear to be most helpful for public colleges.
The researchers said nuanced ways of measuring college performance are important as policy interest in higher education grows, thanks to deep concerns about student debt and workforce development. Without good data, policies can do as much harm as good.
The Gates Foundation has been one of the strongest voices for accountability measures in higher education, having championed the “completion agenda” and argued against access to college without attention to student success. This two-year project appears to be an attempt to make sure that the policies the foundation helps influence are thoughtful.
HCM, with funding from Gates, brought together a group of prominent higher education scholars and policymakers to discuss practical ways to bulk up how college performance is measured. Participants included Sandy Baum, an expert on college finance, Thomas Bailey, director of the Community College Research Center at Columbia University’s Teachers College, and Nate Johnson, a consultant to HCM on higher education policy, funding and student success.
The research arrives as many states have already embarked on performance-based funding for colleges.
“In terms of practical implications for actual policies, it is no longer a question of whether data will be used to measure outcomes. This is being done now,” wrote Charles T. Clotfelter, a professor of public policy, economics and law at Duke University, in a paper summarizing the HCM project. “The pressing question is whether adjustments should be made for differences in student inputs, and how that ought to be done.”
Furthermore, Clotfelter’s paper said the philosophy behind No Child Left Behind has inevitably leaked into higher education policy discussions. And the college rankings from U.S. News & World Report are helping fill a void for consumers, he wrote, and not always in helpful ways.
Johnson said the bottom line is that higher education policy needs more “input-adjusted metrics,” particularly in the development of performance-based funding models. “It’s very difficult to overweight those variables.”
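To make the idea of an “input-adjusted metric” concrete, here is a minimal sketch (not drawn from the HCM papers; all colleges and numbers are hypothetical): predict each college's graduation rate from the preparation of its entering class with a simple least-squares fit, then compare colleges on the residual – how far each performs above or below what its student inputs would predict – rather than on the raw rate.

```python
# Toy illustration of an input-adjusted comparison (hypothetical data).
# Fit a one-variable least-squares line predicting graduation rate from
# the share of entering students who were college-ready, then rank
# colleges by the residual (actual minus predicted) instead of raw rate.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x with a single predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept a, slope b

# (college, share of entrants college-ready, graduation rate) -- made up
colleges = [
    ("A", 0.90, 0.80),  # well-prepared intake, 80 percent graduation
    ("B", 0.40, 0.50),  # less-prepared intake, 50 percent graduation
    ("C", 0.70, 0.55),
]

a, b = fit_line([c[1] for c in colleges], [c[2] for c in colleges])

# Value-added = actual rate minus the rate predicted from student inputs.
for name, prep, grad in colleges:
    print(name, round(grad - (a + b * prep), 3))
```

On these invented numbers, college B's 50 percent raw rate looks worse than C's 55 percent, but B sits above the line its entering class predicts while C sits below it – the “apples to apples” adjustment the report describes.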
Following are short descriptions of the seven research papers, copies of which are available on HCM’s website:
“Can ‘Value-Added’ Methods Improve the Measurement of College Performance? Empirical Analyses and Policy Implications,” by Robert Kelchen and Douglas Harris: Going beyond popular college rankings with graduation data from 1,200 institutions, adjusted for differences in student backgrounds.
“College Participation, Persistence, Graduation, and Labor Market Outcomes: An Input-Adjusted Framework for Assessing the Effectiveness of Tennessee’s Higher Education Institutions,” by David Wright, Grant Thrall, Celeste K. Carruthers, Matthew N. Murray and William F. Fox: Tracking the effectiveness of Tennessee’s public institutions while controlling for the characteristics of entering students.
“Using CIRP Student-Level Data to Study Input-Adjusted Degree Attainment,” by John Pryor and Sylvia Hurtado: The importance of input factors on a student cohort entering in 2004, drawing data from the CIRP Freshman Survey and the National Student Clearinghouse.
“Developing Input-Adjusted Metrics of Community College Performance,” by Thomas Bailey: Exploring possible strategies for developing ways to measure community college performance while adjusting for student inputs.
“Measuring Value-Added in Higher Education,” by Jesse Cunha and Trey Miller: A practical guide for policymakers interested in developing institutional measures for higher education that adjust, at least partially, for pre-existing differences among students.
“Using Student Learning as a Measure of Quality in Higher Education,” by Stephen Porter: A review of existing measures of student learning, which explores their strengths and weaknesses as a quality metric for higher education.
“Classifying Community Colleges Based on Students’ Patterns of Use,” by Peter Bahr: Examination of college-level variation among students’ course-taking and enrollment behavior at 105 community colleges in California.