Graduation rates

Colleges should focus less on student failure and more on success (essay)

In their effort to improve outcomes, colleges and universities are becoming more sophisticated in how they analyze student data – a promising development. But too often they focus their analytics muscle on predicting which students will fail, and then allocate all of their support resources to those students.

That’s a mistake. Colleges should instead broaden their approach to determine which support services will work best with particular groups of students. In other words, they should go beyond predicting failure to predicting which actions are most likely to lead to success. 

Higher education institutions are awash in the resources needed for sophisticated analysis of student success issues. They have talented research professionals, mountains of data and robust methodologies and tools. Unfortunately, most resource-constrained institutional research (IR) departments are focused on supporting accreditation and external reporting requirements.

Some institutions have started turning their analytics resources inward to address operational and student performance issues, but the question remains: Are they asking the right questions?

Colleges spend hundreds of millions of dollars on services designed to enhance student success. When making allocation decisions, the typical approach is to identify the 20 to 30 percent of students who are most “at risk” of dropping out and throw as many support resources at them as possible. This approach involves a number of troubling assumptions:

  1. The most “at risk” students are the most likely to be affected by a particular form of support.
  2. Every form of support has a positive impact on every “at risk” student.
  3. Students outside this group do not require or deserve support.

What we have found over 14 years working with students and institutions across the country is that:

  1. There are students whose success you can positively affect at every point along the risk distribution.
  2. Different forms of support impact different students in different ways.
  3. The ideal allocation of support resources varies by institution (or more to the point, by the students and situations within the institution).

Another problem with a risk-focused approach is that when students are labeled “at risk” and support resources are directed to them on that basis, asking for or accepting help comes to be seen as a sign of weakness. When tailored support is provided to all students, even the most disadvantaged are better off. The difference is a mindset of “success creation” versus “failure prevention.” Colleges must provide support without stigma.

To better understand impact analysis, consider Eric Siegel’s book Predictive Analytics. In it, he talks about the Obama 2012 campaign’s use of microtargeting to cost-effectively identify groups of swing voters who could be moved to vote for Obama by a specific outreach technique (or intervention), such as a piece of direct mail or a knock on their door -- the “persuadable” voters. The approach involved assessing, for a particular group (e.g., high-income suburban moms with certain behavioral characteristics), what proportion of people would:

  • vote for Obama if they received the intervention (positive impact subgroup)
  • vote for Obama or Romney irrespective of the intervention (no impact subgroup)
  • vote for Romney if they received the intervention (negative impact subgroup)

The campaign then leveraged this analysis to focus that particular intervention on the first subgroup.

This same technique can be applied in higher education by identifying which students are most likely to respond favorably to a particular form of support, which will be unmoved by it and which will be negatively impacted and drop out.
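
To make the idea concrete, here is a minimal sketch of the two-model approach to impact (or “uplift”) modeling described above. It assumes a hypothetical dataset in which some students received a given support intervention and others did not, and in which retention was recorded for both groups; the file name, column names and choice of classifier are illustrative assumptions, not a description of any particular institution’s data or systems.

    # Minimal two-model impact ("uplift") sketch. The data file, column names
    # and classifier are hypothetical assumptions for illustration only.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier

    df = pd.read_csv("student_outcomes.csv")   # hypothetical historical data
    features = ["hs_gpa", "credits_attempted", "first_gen", "pell_eligible"]

    treated = df[df["treated"] == 1]   # received the support intervention
    control = df[df["treated"] == 0]   # did not receive it

    # Fit one model on students who got the intervention and one on those who
    # did not; each predicts the probability of being retained.
    model_t = GradientBoostingClassifier().fit(treated[features], treated["retained"])
    model_c = GradientBoostingClassifier().fit(control[features], control["retained"])

    # Estimated impact = P(retained | support) - P(retained | no support).
    df["estimated_impact"] = (
        model_t.predict_proba(df[features])[:, 1]
        - model_c.predict_proba(df[features])[:, 1]
    )
    priority_list = df.sort_values("estimated_impact", ascending=False)

Students at the top of the resulting ranking are the higher education analogue of the campaign’s “persuadable” voters; those near zero correspond to the no-impact subgroup, and negative values flag students for whom that particular intervention may do more harm than good.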

Of course, impact modeling is much more difficult than risk modeling. Nonetheless, if our goal is to get more students to graduate, it’s where we need to focus analytics efforts.

The biggest challenge with this analysis is that it requires large, controlled studies involving multiple forms of intervention. That requirement is one of the key reasons institutional researchers focus on risk modeling instead: it is easy to track which students completed their programs and which did not, so as long as the characteristics of incoming students aren’t changing much, risk modeling is relatively simple.
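
By way of contrast, the sketch below shows how little is needed for a basic risk model of the kind most IR offices already build: a single classifier trained on historical completion records. Again, the file and column names are hypothetical, and any binary classifier could stand in for the logistic regression.

    # Minimal sketch of a conventional risk model: predict, from incoming-student
    # characteristics, who is unlikely to complete. Data file and column names
    # are hypothetical assumptions.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("incoming_cohorts.csv")   # hypothetical historical data
    features = ["hs_gpa", "placement_score", "credits_first_term", "pell_eligible"]

    X_train, X_test, y_train, y_test = train_test_split(
        df[features], df["completed"], test_size=0.25, random_state=0
    )
    risk_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Probability of NOT completing; the usual practice is simply to take the
    # top 20 to 30 percent of this ranking and direct all support to them.
    dropout_risk = 1 - risk_model.predict_proba(X_test)[:, 1]

Ranking students this way answers the question of who is at risk, but it says nothing about which intervention, if any, would change the outcome; that is the gap impact modeling is meant to fill.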

However, once you’ve assessed a student’s risk, you’re still left trying to answer the question, “Now what do I do about it?” This is why impact modeling is so essential. It gives researchers and institutions guidance on allocating the resources that are appropriate for each student.

There is tremendous analytical capacity in higher education, but we are currently directing it toward the wrong goal. While it’s wonderful to know which students are most likely to struggle in college, it is more important to know what we can do to help more students succeed.

Dave Jarrat is a member of the leadership team at InsideTrack, where he directs marketing, research and industry relations activities.

Essay calls for comprehensive completion reforms instead of focus on undermatching

Last month the White House hosted a higher education summit to draw attention to the problem of college attainment among low-income students. The summit focused in particular on “undermatching,” in which high-achieving, low-income students fail to apply to highly selective colleges, and instead attend less competitive institutions.

Without question, all students deserve a chance to attend a college that will give them the best shot in life, and I applaud efforts to better inform students about their choices. However, while we are rightly concerned about directing more underserved students to selective colleges, we should also recognize that sending more students to these colleges will not improve the overall quality of our higher education system.

The reality is that even in a perfectly matched world, millions of low-income, minority, first-generation, and immigrant students will continue to enroll in community colleges. If we want to improve educational outcomes among these groups of students, then we need to improve the colleges so many of them will attend.

Community colleges have been extremely successful at opening the doors to college for disadvantaged students, but thus far, they have had less success in helping them graduate. Less than 40 percent of students who start in community colleges complete a credential in six years. The success rates are worse for low-income and minority students.

So how can community colleges deliver better quality for their students? It will not be easy. Over the last 15 years, faculty and administrators have worked tirelessly to implement reforms in teaching and support services. These efforts have failed to raise completion rates.

A critical reason for this disappointing outcome is that reform initiatives have focused too narrowly on one aspect of the student experience, such as entry, remedial education or the first semester. While many initiatives have led to some success for targeted students, these improvements have been too small and too short-lived to affect overall college performance.

Research conducted by the Community College Research Center (CCRC) at Columbia University’s Teachers College and others makes abundantly clear that improving services like developmental education is necessary but not sufficient: the entire community college student experience must be strengthened.

Some community colleges are beginning to recognize this imperative, and are entering a new phase of far more comprehensive and transformative reform. In particular, some are at the forefront of implementing what CCRC terms the guided pathways model.

That approach responds to the fact that most community college students need far more structure and guidance; it attends to all aspects of the student experience, from preparation and intake to completion. The model includes robust services to help students choose career goals and majors. It features the integration of developmental education into college-level courses and the organization of the curriculum around a limited number of broad subject areas, which allows for coherent programs of study. And, importantly, it stresses strong, ongoing collaboration among faculty, advisers and staff.

Initiatives such as the Gates-funded Completion by Design and Lumina's Finish Faster are advancing such comprehensive reforms by helping colleges and college systems create clear course pathways within programs of study that lead to degrees, transfer and careers.

The new Guttman Community College at the City University of New York (CUNY) -- perhaps the most ambitious example of a comprehensive approach to the community college student experience -- incorporates many elements of the guided pathways model. And CUNY’s ASAP program, which like Guttman takes a holistic approach to student success, has significantly improved associate degree completion rates.

Ambitious and comprehensive reforms are rare for good reason -- they are risky and difficult to implement. But they also offer the possibility of transformative improvement. The disappointing progress of reform in community colleges is not because skilled and dedicated people have not tried; rather, it is because the reforms themselves have been self-limiting.

President Obama has rightly asked the nation to attend with renewed urgency to the problem of college attainment among low-income students. But the focus on undermatching is driven partly by a perception that the distribution of quality among colleges and universities is and will remain fixed.

This need not be so. Bold, large-scale reforms can improve institutions across the higher education system so that no matter where our neediest students enroll, they are ensured the best possible chance of success.

Thomas Bailey is director of the Community College Research Center at Teachers College, Columbia University.

Essay looks at how early warning systems can better boost retention

The news that Purdue University likely overstated the impact of its early warning system, Course Signals, has cast doubt on the efficacy of a host of technology products intended to improve student retention and completion. In a commentary published in Inside Higher Ed, Mark Milliron responded by arguing that “next-generation” early warning systems use more robust analytics and will be likely to get better results.

We contend that even with extremely robust and appropriate analytics, programs like Course Signals may still fall short if their adoption ignores the most pressing piece of electronic advising systems — their use on the front end, by advisers, faculty and students. Until more attention is paid to the messy, human side of educational technology, Course Signals — and other programs like it — will continue to show anemic impacts on student retention and graduation.

Over the past year, we have worked with colleges in the process of implementing Integrated Planning and Advising Systems (which include early warning systems like Course Signals). The adoption of early warning systems requires advisers, faculty and students to approach college success differently and should, in theory, refocus attention on how they engage with advising and support services. In practice, however, we have found that colleges consistently underestimate the challenge of ensuring that such systems are adopted effectively by end-users.

The concept of an early alert is far from new. In interviews, instructors and advisers have consistently reminded us that for years, students have received “early alert” feedback in the form of grades and midterm reports. Early warning systems may streamline this process, and provide the reports in a new format (a red light instead of a warning note, for example), but the warning itself isn’t terribly different.

What is potentially different about products like Course Signals is their ability to connect these course-level warnings to the broader student support services offered by the college. If early warning signals are shared across college personnel, and if those warnings serve to trigger new behaviors on their part, then we are likely to see changed student behavior and success. In other words, sending up a red light isn’t likely to influence retention. But if that red light leads to advisers or tutors reaching out to students and providing targeted support, we might see bigger impacts on student outcomes.

Milliron says, for example, that with predictive analytics, “student[s] might be advised away from a combination of courses that could be toxic for him or her.” But such advising doesn’t happen spontaneously: it requires advisers to be more proactive in preparing for and conducting each advising session. They must examine a student’s early warning profile, program plan and case file prior to the session; they must reframe how they present course choices to students; and they have to rethink what the best course combinations are for students with varying educational and career goals, as well as learning styles and abilities. Finally, they may have to link students to additional resources on campus — such as tutoring — and colleges need to ensure these services exist and are of high quality.

For this process to occur, advisers need to be well-versed in how to use the analytics, and be encouraged to move past registering students for the most common set of courses to courses that make sense for the individual. But because most colleges remain uncertain about the process changes that should occur when they adopt early warning systems, they are unable to provide the training that would help faculty and advisers make potentially transformative adjustments in their practice.

Even if colleges do adequately prepare faculty and advisers for this transition, there is much we still don’t know about how students will perceive and use the data and messages they receive from early warning systems. These unknowns may influence the extent to which the systems impact student outcomes.

For example, if students perceive early warnings as a reprimand rather than an opportunity to get help, they may ignore the signals or avoid efforts of college personnel to contact them. To anticipate and mitigate these kinds of potentially negative responses, it is important to understand how all students, not just those who use and enjoy early alert systems, experience and react to such signals. As Milliron notes, we need to figure out how to send the right message to the right people in the right way.

Early warning systems are only tools, and colleges will have to pay closer attention to changing end-user culture in order to maximize their effectiveness. Currently, colleges are skipping this step. At the end of the day, even the best system and the best data depend on people to translate them into actions and behaviors that can influence student retention and completion.

Melinda Mechur Karp is a senior research associate at the Community College Research Center at Columbia University's Teachers College. Also contributing to the essay were Jeff Fletcher, a senior research assistant, Hoori Santikian Kalamkarian, a research associate, and Serena Klempin, a research associate.

Metrics of college performance don't reach adult students

Adult students aren't using College Scorecard and other consumer websites as they consider college, and they aren't interested in performance metrics like graduation rates and debt levels.

Complete College America report tracks state approaches to performance-based funding

Complete College America talks up performance-based funding at its annual meeting, releasing a report that rates the 16 states that have tried it so far.

Students, faculty sign pledge for college completion

Students are asking faculty members to pledge to create a culture of completion.

Incoming student characteristics determine graduation rates, studies find

Colleges that serve fewer disadvantaged students have higher graduation rates, new studies find, a fact policy makers should heed.

New GAO report on spending patterns of veterans' tuition benefits

New federal report tracks where 1 million student veterans are going to college, and where the $11 billion in education benefits they receive is going.

Voluntary performance measures from Gates-backed group

Diverse group of 18 institutions, with Gates's backing, releases new set of metrics to measure colleges' performance and return on investment.

Associate degree program requirements typically top 60 credits

Community colleges often require more than 60 credits for associate degrees, which could be a barrier to graduation for some students.
