While political support in Washington builds slowly for a federal student record database, Indiana and the University of Texas System get creative with their own data on how students fare after college.
Submitted by Paul Fain on October 7, 2016 - 3:00am
B Lab is a nonprofit group that issues a seal of approval to companies across 120 industries that adhere to voluntary standards based on social and environmental performance, accountability and transparency. After two years of work, the group on Friday released a new benchmarking tool for colleges. The voluntary standards are designed to enable comparisons of both nonprofit and for-profit institutions.
"B Lab recognizes that the cost and outcomes of higher education, particularly regarding for-profit institutions, have become increasingly controversial, but regardless of structure institutions should put their students’ needs first," Dan Osusky, standards development manager at B Lab, said in a written statement. "We see our role as the promoter of robust standards of industry-specific performance that can be used by for-profits and nonprofits alike to create the greatest possible positive impact and serve the public interest, ultimately by improving the lives of their students."
A committee of experts, working with HCM Strategists and with funding from the Lumina Foundation, devised the standards. Laureate Education, a global for-profit chain, already uses the assessment tool.
More than 30 states now provide performance funding (PF) for higher education, with several more states seriously considering it. Under PF, state funding for higher education is not based on enrollments and prior-year funding levels. Rather, it is tied directly to institutional performance on such metrics as student retention, credit accrual, degree completion and job placement. The amount of state funding tied to performance indicators ranges from less than 1 percent in Illinois to as much as 80 to 90 percent in Ohio and Tennessee.
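The arithmetic behind such formulas can be illustrated with a small sketch. The appropriation figures, metric scores and weights below are invented for illustration and do not represent any state's actual formula; they simply show how a fixed share of an appropriation can be scaled by weighted attainment on the kinds of metrics described above.

```python
def pf_allocation(base_appropriation, performance_share, metric_scores, metric_weights):
    """Split an appropriation into a guaranteed base portion and a
    performance portion scaled by weighted metric attainment (0-1 scores)."""
    if abs(sum(metric_weights.values()) - 1.0) > 1e-9:
        raise ValueError("metric weights must sum to 1")
    attainment = sum(metric_scores[m] * w for m, w in metric_weights.items())
    base = base_appropriation * (1 - performance_share)
    performance = base_appropriation * performance_share * attainment
    return base + performance

# Hypothetical example: 80 percent of funding tied to performance,
# as at the high end of the range described above.
funding = pf_allocation(
    base_appropriation=10_000_000,
    performance_share=0.80,
    metric_scores={"retention": 0.9, "completion": 0.7, "job_placement": 0.8},
    metric_weights={"retention": 0.4, "completion": 0.4, "job_placement": 0.2},
)
# Roughly 8.4 million: 2 million guaranteed plus 8 million scaled by 0.80 attainment.
```

Under a formula like this, an institution whose metric attainment slips bears the loss almost entirely on the performance portion, which is exactly the pressure the detractors quoted later in this piece worry about.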
Performance funding has received strong endorsements from federal and state elected officials and influential public policy groups and educational foundations. The U.S. Department of Education has urged states to “embrace performance-based funding of higher education based on progress toward completion and other quality goals.” And a report by the National Governors Association declared, “Currently, the prevailing approach for funding public colleges and universities … gives colleges and universities little incentive to focus on retaining and graduating students or meeting state needs …. Performance funding instead provides financial incentives for graduating students and meeting state needs.”
But with all this state activity and national support, does performance funding actually work? As we report in a book being published this week, Performance Funding for Higher Education (Johns Hopkins University Press), the answer is both yes and no.
Based on extensive research we conducted in three states with much-discussed performance funding programs -- Indiana, Ohio, and Tennessee -- we find evidence for the claims of both those who champion performance funding and those who reject it. In keeping with the arguments of PF champions, we find that performance funding has resulted in institutions making changes to their policies and programs to improve student outcomes -- whether by revamping developmental education or altering advising and counseling services.
Underpinning those changes have been increased institutional efforts to gather data on their performance and to change their institutional practices in response.
But we often cannot clearly determine to what degree performance funding is driving those changes. Many of the colleges we studied stated they were already committed to improving student outcomes before the advent of performance funding. Moreover, in addition to PF, the states often are simultaneously pursuing other policies -- such as initiatives to improve developmental education or establish better student pathways into and through higher education -- that push institutions in the same direction as their PF programs. As a result, it is nearly impossible to determine the distinct contribution of PF to many of those institutional changes.
Meanwhile, supporting the arguments of the PF detractors, we have not found conclusive evidence that performance funding results in significant improvements in student outcomes -- and, in fact, we’ve discovered that it produces substantial negative side effects. In reviewing the research literature on PF impacts, we find that careful multivariate studies -- which compare states with and without performance funding and control for a host of factors besides PF that influence student outcomes -- largely fail to find a significant positive impact of performance funding on student retention and degree attainment. Those studies do find some evidence of effects on four-year college graduation and community college certificates and associate degrees in some states and some years. However, those results are too scattered to allow anyone to conclude that performance funding is having a substantial impact on student outcomes.
Various organizational obstacles may help explain that lack of effect. Many institutions enroll numerous students who are not well prepared for college. In addition, state performance metrics often do not align well with the missions of broad-access institutions such as community colleges, and states do not adequately support institutional efforts to better understand where they are failing and how best to respond.
Even if performance funding ultimately proves to significantly improve student outcomes, the fact remains that it has serious unintended impacts that need to be addressed. Faced both by state financial pressures to improve student outcomes and substantial obstacles to doing so easily, institutions are tempted to game the system. By reducing academic demands and restricting the enrollment of less-prepared students, broad-access colleges can retain and graduate more students, but only at the expense of an essential part of their social mission of helping disadvantaged students attain high-quality college degrees. Policy makers should address such negative side effects, or they could well vitiate any apparent success that performance funding achieves in improving student outcomes.
In the end, performance funding, like so many policies, is complicated and even contradictory. To the question of whether it works, our answer has to be both yes and no. It does prod institutions to better attend to student outcomes and to substantially change their academic and student-service policies and programs. However, performance funding has not yet conclusively produced the student outcomes desired, and it has engendered serious negative side effects. The question is whether, with further research and careful policy making, it is possible for performance funding to emerge as a policy that significantly improves student retention, graduation and job placement without paying a stiff price in reduced academic quality and restricted admission of disadvantaged students. Time will tell.
Kevin Dougherty is a senior research associate at the Community College Research Center, Teachers College, Columbia University and an associate professor at Teachers College. Sosanya M. Jones is an assistant professor at Southern Illinois University. Hana Lahr is a research associate, Rebecca S. Natow is a senior research associate, Lara Pheatt is a former research associate and Vikash Reddy is a postdoctoral research associate, all with CCRC.
Submitted by Paul Fain on October 6, 2016 - 3:00am
The Center for American Progress today released a report that proposes a "complementary competitor" to the current system of accreditation.
The report describes three primary components for an outcomes-focused, alternative system, which, like current accreditors, would serve as a gatekeeper to federal financial aid.
High standards for student outcomes and financial health;
Standards set by private third parties;
Data definition, collection and verification, as well as enforcement of standards by the federal government.
"If implemented, this new system would provide a pathway to address America’s completion and quality challenges through desperately needed innovation," the report said. "And it would do so while establishing strong requirements to ensure that students and taxpayers get their money’s worth."
The Wall Street Journal and Times Higher Education have partnered to produce yet another college ranking. Should we applaud, groan, ignore or something else? I choose applause -- with suggestions.
This new project represents a positive step. For starters, any ranking that further challenges the hegemony of what I have termed the “wealth, reputation and rejection” rankings from U.S. News & World Report is welcome. Frank Bruni said much the same thing in his recent New York Times column, “Why College Rankings Are a Joke.”
I traveled the country for two years for the U.S. Department of Education -- I called myself the “listener in chief” -- to hear what students and colleges wanted or worried about in the federal College Scorecard. I explained that one of the most important reasons to develop the College Scorecard was to help shift the focus in evaluating higher education institutions to better questions and to the results and differences that should really matter to students choosing colleges and taxpayers who underwrite student aid. Just this week, the White House put it this way:
By shining light on the value that institutions provide to their students, the College Scorecard aligns incentives for institutions with the goals of their students and community. Although college rankings have traditionally rewarded schools for rejecting students and amassing wealth instead of giving every student a fair chance to succeed in college, more are incorporating information on whether students graduate, find good-paying jobs and repay their loans.
Some ratings have already blazed new trails by giving weight to compelling dimensions. Washington Monthly, for example, improved the discourse when it added public service to its criteria. The New York Times ranking high-performing institutions by their enrollment rates for Pell-eligible students was an enormous contribution to rethinking what matters most. The Scorecard in turn contributed by adding some (but admittedly not all) of the central reasons we as a nation underwrite postsecondary education opportunity: completion, affordability, meaningful employment and loan repayment.
The WSJ/THE entrant also offers positive approaches to appreciate. The project shares some sensible choices with the Scorecard, starting with not considering selectivity. That’s good: counting how many students a college or university rejects often tells us more about name recognition and gamesmanship than learning.
The WSJ/THE rankings also incorporate loan repayment, a useful window onto contributing factors such as an institution's sensitivity to affordability, including net price and time to degree. Repayment rates also reflect whether students are well counseled and realistic about debt and projected income -- and whether they use flexible repayment options, like income-contingent choices, so they can handle living costs along with their loan obligations, even if they choose work that pays only moderately.
And it was a very smart choice to use value-added outcome measures, drawing on work by the Brookings Institution melding Scorecard information and student characteristics. That approach is designed “to isolate the contribution of the college to student outcomes.” It is also important because value-added metrics help respect the accomplishments of colleges that are taking, or want to increase enrollment of, populations of students that might not graduate or achieve other targets as easily as others.
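The logic of a value-added measure can be shown in miniature. The sketch below is an illustration, not the Brookings model itself (which draws on many predictors): it fits an outcome against a single student-characteristic predictor and treats each college's residual as its value added. All college names and numbers are invented.

```python
def simple_linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a + b*x with one predictor."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

def value_added(colleges, predictor, outcome):
    """Value added = actual outcome minus the outcome predicted from
    student characteristics, so colleges serving less-prepared students
    are not penalized for their starting point."""
    xs = [c[predictor] for c in colleges]
    ys = [c[outcome] for c in colleges]
    a, b = simple_linear_fit(xs, ys)
    return {c["name"]: c[outcome] - (a + b * c[predictor]) for c in colleges}

# Hypothetical data: B and C serve identical student populations,
# but B graduates far more of them.
colleges = [
    {"name": "A", "avg_test_score": 1400, "grad_rate": 0.90},
    {"name": "B", "avg_test_score": 1100, "grad_rate": 0.70},
    {"name": "C", "avg_test_score": 1100, "grad_rate": 0.55},
    {"name": "D", "avg_test_score": 900,  "grad_rate": 0.45},
]
va = value_added(colleges, "avg_test_score", "grad_rate")
```

On raw graduation rate, college A outranks B; on value added, B's performance relative to its student population comes into view, which is precisely the respect for broad-access institutions the approach is meant to provide.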
That said, the WSJ/THE transition to outcomes-based measures is incomplete. A fresh new ranking is a chance to recognize colleges and universities that do a remarkable job in achieving strong results for students at affordable prices. Including a metric for per-student expenditure is an unfortunate relic from old-fashioned input-based rankings. It’s a problem not just because it mixes inputs and outcomes. More significantly, it clouds the focus on results, giving an advantage at the starting gate to incumbents that simply have a lot of money, even if other institutions are achieving better results more economically. That’s counterproductive when the goal should be to identify and reward institutional efficiency and affordability measures that generate good results as quickly as possible.
Even Better Questions
But it’s not too late. The tool already allows sorting by the component pillars, and Times Higher Education plans to work with the data to explore relationships and additional questions. WSJ/THE could rerun their analysis without the wealth measure and its conservative influence to see whether and how that alters the rankings. It will be interesting to see whether publics rise if resources are factored out. I’d like to think that a true results-based version would surface colleges and universities, perhaps a bit lesser known, that outperform their spending and bank accounts. Whether institutions achieve that through innovation, culture, dedication or some other advantages or efficiencies, it’s well worth a look.
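Rerunning a composite ranking without one pillar is mechanically simple. The sketch below uses invented institution scores and weights that only loosely echo the published pillar structure; it drops the resources component, renormalizes the remaining weights, and checks whether the ordering flips.

```python
def composite_score(scores, weights, drop=None):
    """Weighted composite; optionally drop one component and
    renormalize the remaining weights so they still sum to one."""
    kept = {k: v for k, v in weights.items() if k != drop}
    total = sum(kept.values())
    return sum(scores[k] * v / total for k, v in kept.items())

# Hypothetical pillar weights and scores (0-100), not the published data.
weights = {"outcomes": 0.40, "resources": 0.30, "engagement": 0.20, "environment": 0.10}
rich_u = {"outcomes": 70, "resources": 95, "engagement": 60, "environment": 80}
lean_u = {"outcomes": 85, "resources": 40, "engagement": 75, "environment": 70}

with_wealth = {"Rich U": composite_score(rich_u, weights),
               "Lean U": composite_score(lean_u, weights)}
without_wealth = {"Rich U": composite_score(rich_u, weights, drop="resources"),
                  "Lean U": composite_score(lean_u, weights, drop="resources")}
```

In this toy example the wealthy incumbent leads the composite, but the leaner, better-performing institution pulls ahead once per-student expenditure is factored out -- exactly the kind of reshuffling a results-based rerun might surface.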
These rankings include a new dimension, using a new 100,000-student survey that asks students about the opportunities their colleges and universities provide to engage in high-impact education practices. The survey also asks about students’ satisfaction and how enthusiastically they would recommend their institutions. We definitely need fresh ways to understand such education outcomes as well-being, lifelong learning, competence and satisfaction. How effectively does this survey, or the Gallup-Purdue survey, answer that need?
My first question is whether this really fits the sponsors’ desire to focus on outcome measures. Is it just an input or process measure, albeit an interesting one? I’d also like to know more about the relationship of opportunities to actual engagement in those practices, and I wonder how seriously or consistently students answered the questions. What does it mean to different students to have opportunities to “meet people from other countries,” for example, and does simply meeting them make for a more successful education?
This is also my chance to ask about the strength of the causal links between the opportunity to participate in a particular educational practice (whether an internship or speaking in class) and the outcomes, from learning to job getting and performance, with which they may be associated. This concern applies to questions in the WSJ/THE “Did you have an opportunity to …” format, and it is most vivid for the Gallup-Purdue study. Maybe the people whose personalities and preferences incline them to choose projects that give them close, long-term connections with faculty members -- or whom faculty members choose for those projects based on characteristics including charm, social capital or prior experiences -- are engaging, optimistic and, yes, privileged in ways that would also make them engaged and healthy later in life. Was the involvement with those practices in college really causal?
To evaluate the WSJ/THE question about recommending the college or university to others, it would be valuable to know more about the basis for the students’ responses: Were they thinking about academic or quality-of-life considerations? Past experience or projections for how their educations would serve them later? I wonder if their replies were colored by an intuition that their answers could affect their institution’s standing, thus kicking in both their loyalty and self-serving desire to promote it. In short, it’s hard to know how much these new measures tell us. But it’s a worthy effort to build new tools beyond the few crude metrics we have now for understanding differences among institutions.
Serious Work to Do
On a broader level, none of the ratings and rankings have made much progress in expressing learning outcomes, although we urgently need more powerful and sensitive ways to articulate them. Whether or not students have developed the knowledge and skills, the capacities and problem-solving abilities, appropriate to their program should be at the heart of any assessment of educational results.
It’s also time to move beyond Washington Monthly’s good but simple additions to capture intangible and societal outcomes more successfully. By that failure of imagination, not only we ratings designers, but also all of us in higher education, have allowed income to take center stage among outcomes -- and played into the damaging transformation in perception of higher education from a public to a private good.
I said many times in the Scorecard conversation that it’s sensible and understandable for families to want to know if an educational experience would typically generate what they would consider a decent living, including the ability to handle the loans assumed to pay for it, and how employment or income results compare across programs or institutions. But, I went on, that does not mean that income can stand alone as though that’s all that matters. If an affordable housing advocate or journalist is satisfied in her preparation to do that work, or a dancer or teacher got exactly the education he needs to pursue his goals and be a good citizen, how can we measure and reflect that? This is not a news flash but an echo: we need better ways to communicate with families and students about how higher education makes a difference to both the student and society far beyond just economic returns.
There’s other work to be done in the college information enterprise. This month, as the Scorecard celebrated its first anniversary, the Education Department marked the occasion with a data refresh and also wove together new partnerships to expand the Scorecard’s value. One of the big challenges for any useful college information source is to make sure it reaches the students who need it the most: the ones who aren’t sure if, or may already doubt that, they can afford college, who know least about the options available, who are uncertain about the outcomes they can expect from college or at different schools. They’re the ones who suffer from the severe “guidance gap” at far too many high schools and among many adult populations. This intensified collaboration among government, counselor organizations and higher education institutions at every level is a wise strategy.
Ultimately, however, the most transformative role of the well-conceived rankings and scorecards will turn out to lie not in whether every student reads them but in their value in supporting institutional improvement. They are a gold mine for benchmarking and can help institutions choose which outcomes really matter and then work across functions to improve them.
The fact is that for decades colleges have invested far too much energy striving -- or replaying games -- to get better on measures that don’t really matter. Asking smarter questions about genuinely significant priorities can help us graduate more low-income students into rewarding work and find affordable paths to solid learning outcomes for citizens and workers. Better ratings, continuously improved and building on each other’s contributions, give us a chance to put higher education’s intense competitive energies into worthy races.
Jamienne S. Studley, former deputy under secretary of the U.S. Department of Education and president of Skidmore College, is national policy adviser with Beyond 12 and consultant to the Aspen Institute, colleges and nonprofits.
U.S. Senators Elizabeth Warren, Dick Durbin and Brian Schatz on Thursday introduced legislation that would increase accountability for accreditors and require new standards for student outcomes they use to evaluate colleges and universities. Warren and Durbin have frequently called for greater accountability in higher education, particularly in the for-profit college sector. But the bill suggests a new focus by consumer advocates on the role of accrediting agencies in providing oversight of colleges.
Earlier this week, the umbrella organization of seven regional accrediting bodies announced that its members would conduct a joint review of institutions with extremely low graduation rates -- a move that followed heavy scrutiny of accreditors’ performance in both the media and policy circles.
The Accreditation Reform and Enhanced Accountability Act would direct the Education Department to establish clear student outcome data, require accreditors to respond quickly to both state and federal investigations, add more transparency to accreditation decisions, address conflict-of-interest issues involving accreditors and the colleges they oversee, and give the department more power to punish or terminate failing accreditors, among other measures.
“Accrediting agencies are supposed to make sure students get a good education and ensure colleges aren’t cheating students while sucking down taxpayer money. But right now the accreditation system is broken,” Warren said. “This bill gives the Education Department more tools to hold accreditors accountable, increases accreditors’ focus on student outcomes and affordability, and requires accreditors to respond when there is evidence of colleges committing fraud.”
The Council of Regional Accrediting Commissions this week highlighted issues involving discrepancies in federal data on student outcomes, specifically involving graduation rates. The group of regional accreditors cited a need for better data from the Department of Education and said its members would consider additional information in their review of institutions with low graduation rates.
Submitted by Paul Fain on September 22, 2016 - 3:00am
The California State University System's governing board this week voted to set a goal of increasing its graduation rates by 2025, an effort that will cost an estimated $400 million or more. The system said it would seek to hit a 70 percent rate (meaning the six-year rate for freshmen), which would be a 13 percentage point increase from the current rate of 57 percent. The graduation-rate push also will include efforts to close achievement gaps for underrepresented and low-income students.
Around this time every year, as colleges and universities begin to spring back to life, I am reminded of my years working within central administration and the excitement in watching the sea of people full of promise come spilling back onto the campus. I remember the familiar faces of returning students, beaming with the fresh potential of a new year, who dropped by just to declare themselves back again or share goals for the year hatched over the summer.
But I also remember just as clearly the faces of the students who didn’t return. Those we lost somewhere along the way to graduation.
Many of those students still haunt me today. I remember one freshman I met when I was working as vice chancellor and chief of staff at UNC Greensboro. She came into my office at the end of the spring semester in tears. A straight-A student through high school, she arrived on our campus full of confidence. But that confidence was shattered when her professors told her that she was a terrible writer. She struggled through the year in silence, determined to improve. But she never got the help she needed. The tears rolled down that young woman’s face as she learned that she’d been placed on academic probation and would lose her scholarship. It was too late. We were too late.
There are thousands more stories like this young woman’s -- of students from low-income families who could have made it farther than their parents did but whom we somehow failed along the way.
We used to blame our students: their poverty, their underpreparation, the extra burdens they carry. It turns out, though, that it’s a lot about us. Yes, poverty and preparation matter. But the choices we make matter, too. Some institutions are simply doing a much better job of graduating their students than other institutions serving exactly the same kinds of students.
As we begin a new academic year, this can be a moment for improvement-minded institutional leaders to engage campus communities in honest, data-driven conversations about what we might do better. How can we more fully understand the journeys our students take on the way to the degree, noting where those journeys are speeded and guided, and where they derail? How can we renew our collective commitment to expand what's working and to confront -- and address -- what’s not?
To assist institutional leaders in their reflection and planning, The Education Trust has sought to identify and broadly share the high-impact practices of institutional leaders who have driven impressive improvement in completion rates, particularly for students who have historically been underrepresented -- and underserved -- on our campuses: low-income and first-generation students and students of color. Most recently we’ve examined practices at Florida State University, San Diego State University, the University of Wisconsin-Eau Claire and Georgia State University.
While each of these institutions is distinct in its mission, and each set of leaders distinct in style, at the core of their improvement efforts are common practices and qualities -- many of them steeped in honest analysis of data. Those practices and qualities are:
Courage. When then-San Diego State President Stephen Weber addressed his Faculty Senate, applauding the many ways in which the faculty had worked toward -- and attained -- excellence over the years, he went on to issue a challenge that would spark a decade-long improvement effort: “But a great university doesn’t lose almost two-thirds of its Latino freshmen along the road toward graduation.” Like Weber, all of the leaders at the campuses we’ve been learning from are clear-eyed, intentional and dogged in their approaches to institutional improvement. They roll up their sleeves alongside staff and faculty and ask hard questions of the data on student matriculation and success. They zero in on areas of strength and weakness to draw out promising practices and needed interventions.
Shared commitment. These leaders are keenly aware that, while they have a strong role to play in leading change, staff and faculty members operating closest to their students are the ones who enact that change. Using data, leaders at the University of Wisconsin-Eau Claire engaged departments as partners and problem solvers. Said one senior leader on campus, “We give them the data … we’re not telling them where the problem is; they identify the problem and we encourage them to solve the problem.”
In examining their data, they found that, while their six-year graduation rate was relatively high, the four-year graduation rate was extremely low at just 18 percent. To address that pattern, faculty and staff members identified course bottlenecks and acted to remove them.
At each of the institutions we’ve studied, leaders draw together partners at every level -- senior administrators, department heads, faculty members, student-affairs professionals -- to engage in data analysis and problem solving. And they arrive not with answers, but with questions, trusting that those assembled in the room have much to contribute to improvement efforts.
Timely data for targeted interventions. These leaders understand that their students struggle in real time -- and that those working closest to them need information to intervene in real time. Further, they know from disaggregating data that all students don’t struggle at the same time with the same obstacles or need the same supports. They take time to parse data to understand the needs of all their students -- first generation, transfer, black, Latino, immigrant and many others. They identify benchmarks and warning indicators to ensure that no student is left to languish and disappear at any point in their educational journey without real supports to turn the situation around.
For example, practitioners at Georgia State University noted, “Four or five years ago, we had nothing consistent in our system that would help us track students.” Today, an impressive online data repository gives faculty and staff members immediate access to 130 screens of the most requested data on student progression and success. Through their Graduation and Progression Success advising system, which tracks more than 700 markers of student success, nightly feeds generate lists of which students have missed which markers. That information enables advisers to reach out immediately with targeted support for students who stumble.
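At its core, the nightly-list idea reduces to a filter over student records. The sketch below is a hypothetical simplification of a system like GSU's GPS advising tool, with invented student records and two markers standing in for the more than 700 the real system tracks.

```python
# Hypothetical records: each success marker is flagged hit or missed.
students = [
    {"name": "Ana",  "markers": {"math_101_passed": True,  "major_declared": True}},
    {"name": "Ben",  "markers": {"math_101_passed": False, "major_declared": True}},
    {"name": "Cara", "markers": {"math_101_passed": True,  "major_declared": False}},
]

def nightly_alerts(students):
    """Return, for each student, the list of success markers they have
    missed, so advisers can reach out with targeted support the next day."""
    alerts = {}
    for student in students:
        missed = [m for m, hit in student["markers"].items() if not hit]
        if missed:
            alerts[student["name"]] = missed
    return alerts

alerts = nightly_alerts(students)
```

Here only Ben and Cara appear on the morning list, each with the specific marker they missed -- the "which students have missed which markers" output the GSU practitioners describe, generated from whatever feed refreshes the records overnight.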
Continuing evaluation of the data. Leaders at these institutions always come back to the data. A longtime campus leader at Florida State University described the cultural change ushered in by former provost Lawrence G. Abele: “When he came in, there was a huge shift in culture. It was no longer OK to just do things you thought were right; you needed data to support new ideas and also to assess, evaluate and improve current programs.”
For instance, when campus leaders analyzed their dropout patterns, they found that while white students were most at risk of dropping out in their first year, black male students were more likely to leave after the second, third or even fifth year. They realized that their retention efforts needed to stretch beyond freshman year to guide students through the entire undergraduate trajectory. Like Abele, leaders at these fast-improving institutions convene their teams regularly to monitor and review the data and to make midcourse corrections to ensure that their efforts, energies and resources are directed where they are most needed.
The lessons these leaders offer provide real insight from within successful college and university change efforts. They remind all of us in higher education that “success for some” is no great institution’s epitaph -- that institutional success will be measured not by how well some students are served but by how well all groups of students are served. If institutional leaders and those of us working alongside them don’t have the courage to confront the reality of what’s happening on our campuses in the narratives of all students, whether on commencement lists or dropout rolls, we are merely comforting ourselves with a half-true story that plays on repeat each year.
Bonita J. Brown is director of higher education practice at The Education Trust. She most recently served as vice chancellor and chief of staff at the University of North Carolina at Greensboro.