In their effort to improve outcomes, colleges and universities are becoming more sophisticated in how they analyze student data – a promising development. But too often they focus their analytics muscle on predicting which students will fail, and then allocate all of their support resources to those students.
That’s a mistake. Colleges should instead broaden their approach to determine which support services will work best with particular groups of students. In other words, they should go beyond predicting failure to predicting which actions are most likely to lead to success.
Higher education institutions are awash in the resources needed for sophisticated analysis of student success issues. They have talented research professionals, mountains of data and robust methodologies and tools. Unfortunately, most resource-constrained institutional research (IR) departments are focused on supporting accreditation and external reporting requirements.
Some institutions have started turning their analytics resources inward to address operational and student performance issues, but the question remains: Are they asking the right questions?
Colleges spend hundreds of millions of dollars on services designed to enhance student success. When making allocation decisions, the typical approach is to identify the 20 to 30 percent of students who are most “at risk” of dropping out and throw as many support resources at them as possible. This approach involves a number of troubling assumptions:
The most “at risk” students are the most likely to be affected by a particular form of support.
Every form of support has a positive impact on every “at risk” student.
Students outside this group do not require or deserve support.
What we have found over 14 years working with students and institutions across the country is that:
There are students whose success you can positively affect at every point along the risk distribution.
Different forms of support impact different students in different ways.
The ideal allocation of support resources varies by institution (or more to the point, by the students and situations within the institution).
Another problem with a risk-focused approach is that when students are labeled “at risk” and support resources are directed to them on that basis, asking for or accepting help comes to be seen as a sign of weakness. When tailored support is provided to all students, even the most disadvantaged are better off. The difference is a mindset of “success creation” versus “failure prevention.” Colleges must provide support without stigma.
To better understand impact analysis, consider Eric Siegel’s book Predictive Analytics. In it, he talks about the Obama 2012 campaign’s use of microtargeting to cost-effectively identify groups of swing voters who could be moved to vote for Obama by a specific outreach technique (or intervention), such as a piece of direct mail or a knock on the door -- the “persuadable” voters. The approach involved assessing which people in a particular group (e.g., high-income suburban moms with certain behavioral characteristics) were most likely to:
vote for Obama if they received the intervention (positive impact subgroup)
vote for Obama or Romney irrespective of the intervention (no impact subgroup)
vote for Romney if they received the intervention (negative impact subgroup)
The campaign then leveraged this analysis to focus that particular intervention on the first subgroup.
This same technique can be applied in higher education by identifying which students are most likely to respond favorably to a particular form of support, which will be unmoved by it and which will be negatively impacted and drop out.
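The persuadable-voter approach is what analysts call uplift (or impact) modeling: estimating, for each group, the difference an intervention makes rather than the baseline risk. As a minimal sketch, with made-up student segments and a hypothetical coaching intervention, the core calculation from a controlled study might look like this:

```python
from collections import defaultdict

def uplift_by_segment(records):
    """Estimate per-segment impact of an intervention.

    records: (segment, got_intervention, succeeded) tuples from a
    controlled study. Returns {segment: uplift}, where uplift is
    P(success | intervention) - P(success | no intervention).
    """
    # [treated_n, treated_ok, control_n, control_ok] per segment
    counts = defaultdict(lambda: [0, 0, 0, 0])
    for segment, treated, ok in records:
        c = counts[segment]
        if treated:
            c[0] += 1
            c[1] += ok
        else:
            c[2] += 1
            c[3] += ok
    return {s: c[1] / c[0] - c[3] / c[2] for s, c in counts.items()}

# Hypothetical study: coaching moves "mid-risk" students (70% vs. 50%
# success) but makes no difference for "low-risk" students (90% either way).
study = (
    [("mid-risk", True, 1)] * 70 + [("mid-risk", True, 0)] * 30 +
    [("mid-risk", False, 1)] * 50 + [("mid-risk", False, 0)] * 50 +
    [("low-risk", True, 1)] * 90 + [("low-risk", True, 0)] * 10 +
    [("low-risk", False, 1)] * 90 + [("low-risk", False, 0)] * 10
)
impact = uplift_by_segment(study)
# impact["mid-risk"] is ~0.20 (positive impact subgroup)
# impact["low-risk"] is ~0.00 (no impact subgroup)
```

Under this view, the positive-impact segments -- not simply the highest-risk ones -- are where the next dollar of support does the most good.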
Of course, impact modeling is much more difficult than risk modeling. Nonetheless, if our goal is to get more students to graduate, it’s where we need to focus analytics efforts.
The biggest challenge with this analysis is that it requires large, controlled studies involving multiple forms of intervention. The need for large controlled studies is one of the key reasons why institutional researchers focus on risk modeling. It is easy to track which students completed their programs and which did not. So, as long as the characteristics of incoming students aren’t changing much, risk modeling is rather simple.
However, once you’ve assessed a student’s risk, you’re still left trying to answer the question, “Now what do I do about it?” This is why impact modeling is so essential. It gives researchers and institutions guidance on allocating the resources that are appropriate for each student.
There is tremendous analytical capacity in higher education, but we are currently directing it toward the wrong goal. While it’s wonderful to know which students are most likely to struggle in college, it is more important to know what we can do to help more students succeed.
Dave Jarrat is a member of the leadership team at InsideTrack, where he directs marketing, research and industry relations activities.
Congratulations on your MVP award at the NBA Celebrity All-Star game: 20 points, 8 boards, 3 assists and a steal -- you really filled up that stat sheet. Even the NBA guys were amazed at your ability to play at such a high level -- still. Those hours on the White House court are paying off!
Like you, I spent some time playing overseas after college and have long been a consumer of basketball box scores -- they tell you so much about a game. I especially like the fact that the typical box score counts assists, rebounds and steals — not just points. I have spent many hours happily devouring box scores, mostly in an effort to defend my favorite players (who were rarely the top scorers).
As a coach of young players, my wife Michele and I (she is the real player in the family) expanded the typical box score — we counted everything in the regular box score, then added “good passes,” “defensive stops,” “loose ball dives” and anything else we could figure out a way to measure. This was all part of an effort to describe for our young charges the “right way” to play the game. I think you will agree that “points scored” rarely tells the full story of a player’s worth to the team.
Mr. Secretary, I think the basketball metaphor is instructive when we “measure” higher education, which is a task that has taken up a lot of your time lately. If you look at all the higher education “success” measures as a basketball box score instead of a golf-type scorecard, it helps clarify two central flaws.
First, exclusivity. Almost every single higher education scorecard fails to account for the efforts of more than half of the students actually engaged in “higher” education.
At Mount Aloysius College, we love our Division III brand of Mountie basketball, but we don’t have any illusions about what would happen if we went up against those five freshman phenoms from Division I Kentucky (or UConn/Notre Dame on the women’s side) -- especially if someone decided that half our points wouldn’t even get counted in the box score.
You see, the databases for all the current higher education scorecards focus exclusively on what the evaluators call “first-time four-year bachelor’s-degree-seeking students.” Nothing wrong with these FTFYBDs, Mr. Secretary, except that they represent less than half of all students in college, yet are the only students the scorecards actually “count.”
None of the following “players” show up in the box score when graduation rates are tabulated:
Players who are non-starters (that is, they aren’t FTFYBDs) — even if they play every minute of the last three quarters, score the most points and graduate on time. These are students who transfer (usually to save money, sometimes to take care of family), spring enrollees (increasingly popular), part-time students and mature students (who usually work full-time while going to school).
Any player on the team, even a starter, who has transferred in from another school. If you didn’t start at the school from which you graduated, then you don’t “count,” even if you graduate first in your class!
Any player, even if she is the best player on the team, who switches positions during the game: Think two-year degree students who switch to a four-year program, or four-year degree students who instead complete a two-year degree (usually because they have to start working).
Any player who is going to play for only two years. This is every single student in a community college and also graduates who get a registered-nurse degree in two years and go right to work at a hospital (even if they later complete a four-year bachelor’s degree, they still don’t count).
Any scoring by any player that occurs in overtime: Think mature and second-career students who never intended to graduate on the typical schedule because they are working full time and raising a family.
The message sent by today’s flawed college scorecards is unavoidable: These hard-working students don’t count.
Mr. Secretary, I know that you understand how essential two-year degrees are to our economy; that students who need to transfer for family, health or economic reasons are just as valuable as FTFYBDs, and that nontraditional students are now the rule, not the exception. But current evaluation methods lag far behind readily available data and are out of sync with the real lives of many students who simply don’t have the economic luxury of a fully financed four-year college degree. All five types of students listed above just don’t show up anywhere in the box score.
“Scorecards” should look more like box scores and include total graduation rates for both two- and four-year graduates (the current IPEDS overall grad rate), all transfer-in students (it looks like IPEDS may begin to track these), as well as transfer-out students who complete degrees (current National Student Clearinghouse numbers). These changes would give a more accurate picture of the student success rate at every institution.
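The box-score arithmetic is straightforward. A sketch, using entirely made-up cohort numbers, of how counting everyone changes the result:

```python
def box_score_rate(ftfybd_cohort, ftfybd_grads,
                   transfer_in_cohort, transfer_in_grads,
                   transfer_out_completers):
    """'Box score' completion rate: credit everyone the institution
    enrolled with a degree completed here or, for transfer-outs,
    completed elsewhere (as tracked by the National Student
    Clearinghouse)."""
    students = ftfybd_cohort + transfer_in_cohort
    completers = ftfybd_grads + transfer_in_grads + transfer_out_completers
    return completers / students

# Hypothetical institution: 600 first-time students (300 graduate),
# 400 transfer-ins (280 graduate), and 60 transfer-outs who finish
# their degrees elsewhere.
ftfybd_only = 300 / 600                             # 0.50 under the current scorecard
box_score = box_score_rate(600, 300, 400, 280, 60)  # 0.64 when everyone counts
```

The same institution looks dramatically different depending on whether the 460 non-FTFYBD completers appear in the numerator at all.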
Another relatively easy fix would be to break out cohort comparisons that would allow Scorecard users to see how institutions perform when compared to others with a similar profile (as in the Carnegie Classifications).
The second issue is fairness.
Current measurement systems make no effort to account for the difference between (in basketball terms) Division I and Division III, between “highly selective schools” that “select” from the top echelons of college “recruits” and those schools that work best with students who are the first in their families to go to college, or low-income, or simply less prepared (“You can’t coach height,” we used to say).
As much as you might love the way Wisconsin-Whitewater won this year’s Division III national championship (last-second shot), I don’t think even the most fervent Warhawks fan has any doubt about how they would fare against Coach Bo Ryan’s Division-I Wisconsin Badgers. The Badgers are just taller, faster, stronger — and that’s why they’re in Division I and why they made it to the Final Four.
The bottom line on fairness is that graduation rates track closely with family income, parental education, Pell Grant eligibility and other obvious socioeconomic indicators. These data are consistent over time and truly incontrovertible.
Mr. Secretary, I know that you understand in a personal way how essential it is that any measuring system be fair. And I know you already are working on this problem, on a “degree of difficulty” measure, very like the hospital “acuity index” in use in the health care industry.
The classification system that your team is working on right now could assign a coefficient that weighs these measurable mitigating factors when posting outcomes. Such a coefficient would produce fairer scores and help identify the institutions doing the best job of serving the students with the fewest advantages.
In the health care industry, patients are assigned “acuity levels” (based on a risk-adjustment methodology), numbers that reflect a patient’s condition upon admission to a facility. The intent of this classification system is to consider all mitigating factors when measuring outcomes and thus to provide consumers accurate information when comparing providers. A similar model could be adopted for measuring higher education outcomes.
This would allow consideration of factors like (1) Pell eligibility rates, (2) income relative to poverty rates, (3) percentage that are first-generation-to-college, (4) SAT scores, etc. A coefficient that factors in these “challenges” could best measure higher education outcomes. Such “degree of difficulty” factors, like “acuity levels,” would provide consumers accurate information for purposes of comparison.
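To make the idea concrete, here is a minimal sketch of such a “degree of difficulty” adjustment. The coefficients and factor names are purely illustrative; a real model would estimate them from national data, for instance by regressing graduation rates on cohort characteristics:

```python
def adjusted_grad_rate(actual_rate, pell_share, first_gen_share, low_income_share):
    """Hypothetical acuity-style adjustment: compare a school's actual
    graduation rate to the rate expected for a cohort with its
    challenge profile. All coefficients below are made up for
    illustration."""
    # Expected rate for this cohort profile (illustrative penalties).
    expected_rate = (0.60
                     - 0.20 * pell_share
                     - 0.10 * first_gen_share
                     - 0.10 * low_income_share)
    # Ratio > 1 means the school outperforms peers serving similar students.
    return actual_rate / expected_rate

# A school graduating 45% of a high-need cohort can outscore one
# graduating 60% of a low-need cohort:
open_access = adjusted_grad_rate(0.45, pell_share=0.6,
                                 first_gen_share=0.5, low_income_share=0.5)
selective = adjusted_grad_rate(0.60, pell_share=0.1,
                               first_gen_share=0.1, low_income_share=0.1)
```

On this adjusted scale the open-access school scores higher than the selective one, even though its raw graduation rate is 15 points lower -- exactly the kind of signal an acuity index surfaces.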
Absent such a calculation, colleges will continue to have every incentive to “cream” their admissions, and every disincentive against serving the students you have said are central to our economic future, including two-year, low-income and minority students. That’s the “court” that schools like Mount Aloysius and 16 other Mercy colleges play on. We love our FTFYBDs, but we work just as hard on behalf of the more than 50 percent of our students whose circumstances require a less traditional but no less worthy route to graduation. We think they count, too.
Thanks for listening.
Thomas P. Foley
President Mount Aloysius College
Thomas P. Foley is president of Mount Aloysius College.
Recently The Atlantic predicted that one of the top five trends impacting higher education will be a push toward credit given for experience, proficiency and documented “competency.” The recent results of Inside Higher Ed’s survey of chief academic officers also show openness to competency-based outcomes.
For many, myself included, this simply sounds like a series of placement tests and seems like a pretty shallow approach to a college education and degree. However, as vice president of enrollment and chief marketing officer for a residential college, I can’t ignore the appeal of the “validation” of learning this trend suggests.
In fact, I find myself thinking more and more about how residential colleges, with their distinct missions, might respond to the potential threat this trend represents. I find myself hoping we can prove the residential environment results in valuable learning and life experiences beyond getting along with a roommate, asking someone on a date, learning how to tap a keg and configuring a renegade wireless network.
We can do more. Perhaps the idea of competency-based education should inspire us to think differently about how the learning environment of the residential experience is superior. Perhaps there are competencies associated with a residential college we’ve not done an adequate job of documenting?
This will not be easy for most of us. Our natural instinct to “wait and see how good our students turn out” to justify why students should live and learn on campus won’t work this time, as we face a skeptical public and witness more and more college presidents, administrators and boards reconsidering the value of online education. With some intentionality, we can do a much better job of proving why learning in a residential setting is superior.
We need to ask ourselves: Why is the residential campus experience of utmost importance to a contemporary undergraduate education? We must identify the sorts of learning that can only occur in such a setting, and validate, or better identify, the learning competencies that occur outside the classroom on a residential campus.
This will be difficult in an environment defined by shrinking resources, when many resort to thinking about eliminating activities considered not central to the core mission. The instinct is to cut, de-emphasize or keep separate and second. We see this time and time again in any setting that faces difficult choices about resources. But investment, integration and intentionality create a better path forward.
Can liberal arts colleges resist the urge to cut, and rethink how activities in the residential environment are central to the core mission? Can these colleges develop meaningful ways of measuring the value and impact of such activities and how they result in competencies that add value and worth? Can residential liberal arts colleges develop a “currency” that demonstrates they value out-of-classroom learning comparably to in-classroom learning? I hope so.
While many colleges would benefit from integrating out-of-classroom learning, residential liberal arts colleges must do so because of the infrastructure around which our colleges have been built -- residence and dining halls, student activity centers, athletic venues and performance halls. We need to prove these are not just modern amenities, but central to superior learning.
To validate this learning experience, residential liberal arts colleges will need to rethink historic barriers. Learning that occurs outside the classroom can no longer be viewed as “separate and second.”
Extracurricular and co-curricular transcripts that document the competencies and outcomes essential to success beyond college must evolve to be fully integrated with the academic program, and valued both internally and externally.
First, residential liberal arts colleges must clearly define the learning outcomes and expectations. This is frequently a faculty-driven exercise. Understanding the knowledge gained from an activity provides a framework around which out-of-classroom learning can be developed. This framework will allow for alignment of purpose and some measure of control about how central an out-of-classroom activity is to the core mission and which competencies are satisfied as a result.
Georgetown University was recently recognized for its excellent programming in preparing student-athletes for leadership. Recognition of activities that successfully align with and even expand learning is critical for the public to be convinced that such activities are core to a high-quality education.
Next, residential liberal arts colleges must create a “currency” that meaningfully recognizes those activities that advance a student’s education, e.g., elective academic credit, a credit-bearing on-campus internship, or certificate for activities that demonstrate substantive interest and professional and personal development.
Student activities might be reorganized into mission-focused areas that provide students with experience not always fully represented in the academic program, but with relevance to a successful application for employment or graduate school.
Some examples might be: Leadership, Teamwork, Civic Engagement, Social Justice, Service Learning, Entrepreneurship and Business Development, Intercultural Understanding, Interfaith and Spiritual Development, Public Relations and Event Planning, and Sustainability. This approach is similar to competency-based certification, but broader than proof that a student can read a balance sheet or do a 10-minute presentation.
Finally, liberal arts colleges should engage in a broader conversation about why they are residential, without saying it’s because they’ve been that way for 150 years. Too many colleges assume students already understand. Such a learning environment can positively shape a student’s character and skillset, and result in sweeter success, but a residential community does not always acknowledge or articulate this success.
With competency-based education in the spotlight, residential colleges have an opportunity to renew a focus on the benefits to students who not only eat and sleep, but also meet colleagues, connect with mentors, challenge themselves in new ways, and develop 21st-century skills and competencies on campus.
If we do not champion and clearly identify the benefits to our students, we are vulnerable to the advocates of no-frills bachelor’s degrees, willy-nilly life experience for credit, online learning, and the commodifiers among us who believe the value of the college experience is test- and content-driven, rather than experiential and residential in nature.
W. Kent Barnds is executive vice president and vice president of enrollment, communication and planning at Augustana College, in Rock Island, Ill.
While there is heated debate over how best to fix America’s higher education system, everyone agrees on the need for meaningful reform. It’s difficult to argue against reform in the face of college attainment rates that are stalled at just under 40 percent and the growing number of graduates left wondering whether they will ever find careers that allow them to pay off their mounting debts.
Any policy debate should start with a clear picture of how the dollars are being spent and whether that money is achieving the desired outcomes. Unfortunately, a lack of accurate data makes it impossible to answer many of the most basic questions for students, families and policy makers who are investing significant time and money in higher education.
During the recent State of the Union address, President Obama talked about shaking up the system of higher education to give parents more information, and colleges more incentives to offer better value. Though he provided little detail, this most certainly referred to the broad vision for higher education reform he outlined over the summer, centered on a new rating system for colleges and universities that would eventually be used to influence spending decisions on federal student financial aid.
However, the President’s proposal rests on a data system that is imperfect, at best. As former U.S. Secretary of Education Margaret Spellings said of the President’s plan, “we need to start with a rich and credible data system before we leap into some sort of artificial ranking system that, frankly, would have all kinds of unintended consequences.”
The American Council on Education, which represents the presidents of more than 1,800 accredited, degree-granting institutions, including two- and four-year colleges, private and public universities, and nonprofit and for-profit entities, agrees on the need for better data as well.
A senior staff member at ACE has been quoted as saying that “if the federal government develops a high-stakes ratings system, they have an obligation to have very accurate data,” and that he was “surprised that anyone would think it controversial that having such data is a prerequisite.”
In order to bridge the data gap, we introduced the Student Right to Know Before You Go Act, which would make the complete range of comparative data on colleges and universities easily accessible to the public online and free of charge by linking student-level academic data with employment and earnings data.
For the first time, students, and policy makers, would be able to accurately compare -- down to the institution and specific program of study -- graduation and transfer rates, frequency with which graduates go on to pursue higher levels of education, student debt and post-graduation earnings and employment outcomes. Such a linkage is the best feasible way to create this data-rich environment.
None of these metrics is currently available to those seeking to evaluate a school or program, though plenty of misleading data are out there.
For example, Marylhurst University, a small liberal arts school in Oregon, was assessed with a 0 percent graduation rate by the U.S. Department of Education. This is because the department's current metrics account only for first-time, full-time students, and Marylhurst serves nontraditional students who are part time or have returned to school later in life. Schools like this that serve nontraditional students -- who now make up the majority of all students -- don’t get credit for their success, at least not according to current federal evaluations.
With so many in the higher education community bemoaning the lack of quality data, and clear paths forward for attaining better data, why hasn’t it happened?
A major part of the answer: institutional self-interest. Performance outcomes at every school in the country vary widely from category to category, and many college presidents are in no hurry to make their less-than-appealing outcome data available for public scrutiny.
There’s a fear that students and families will vote with their pocketbooks and choose different schools that better meet their needs. The abundance of inaccurate and incomplete data provides institutional leaders with a line of defense: so long as such data are the norm upon which they are ranked and rated, they can defend themselves on the basis of flawed methodology.
Not all schools fear the implications of better quality data; in fact, many schools crave these data and want them made public. They know they’ll stack up well against their competition.
Moreover, many schools realize that getting better data is critical to helping identify what’s working and what’s not for their students in order to build stronger programs. Nevertheless, some of the “Big Six” higher education associations still cling to the status quo and represent a key challenge to realizing these commonsense reforms.
It is long past time for these important actors to look away from their self-interest and toward what’s in America’s collective interest -- a future where higher education produces better outcomes for students and the economy -- by supporting the Know Before You Go Act.
U.S. Sen. Ron Wyden is an Oregon Democrat, and U.S. Sen. Marco Rubio is a Florida Republican.
It’s that time of decade again, when randomly selected departments at U of All People are faced with assessment. The administration brings in a posse of NAAAAAA experts with credentials bought from the people who sell fake IDs, and has the faculty entertain them for three days while they poke their noses into everything, including Professor Winkle’s Dryden seminar, which no one has disturbed in years. Here’s how the process works, at least in the English department:
Three months before the assessors arrive, the department is galvanized into action by the chair, acting on directives from the dean, obeying the orders of the provost, who bows to the president. “The assessors are coming, the assessors are coming!” shouts the chair from the comparative safety of the rostrum at the semester’s first departmental faculty meeting while everyone else dives for cover. After this warning shot comes the collective indignation of the faculty -- How dare they judge us? We’re in the humanities! -- as the professors go through the Kübler-Ross stages of denial, anger, bargaining, depression, and acceptance.
When everyone has settled down (except for Professor Winkle, who’s settled in for a nap), the chair starts planning the arduous task of self-judgment. The task consists of recruiting three faculty members who blinked at the wrong time, including Professor Winkle, who opened his eyes after his nap. The disgruntled three are assigned to gauge how much the students aren’t learning from the department’s courses.
What are the standards, criteria, methods? The Renaissance contingent proposes noble goals, such as achieving wisdom and learning to appreciate a Shakespearean sonnet, but no one wants to set the bar too high, or the assessment will be that this department needs to pull up its socks.
The faculty debate setting the bar absurdly low: for instance, that students should learn to read, but there’s no guarantee of students passing that bar, either. After several more meetings and the formation of a committee to oversee the assessment committee, the proposal is that each student should be familiar with the terms literature and irony; must know how to put together an argumentative essay proving that Shakespeare was a great writer; and should have enough literary history to realize that 1800 came after 1564, and that both are before 1922. These arbitrary criteria, once insisted upon, achieve a solidity as satisfying as trompe l’oeil papier-mâché walls.
The methods for data collection are decided by the assessment committee, eager to pass on responsibility to other, unwilling faculty. The methods involve snatching away student essays for disappointed analysis: counting how many times the words in my personal opinion and irregardless appear in the essays, seeing whether the arguments hold water (Professor Winkle performs that job over the sink in the fourth floor men’s restroom), and checking for spelling and grammar, assuming that the faculty are up to it.
As an extra concession, the department tracks alumni/ae to see whether anyone actually used the English major to wangle a job; and contemplates giving an exit exam to department seniors, though the offer of free pizza to anyone who’ll sit for the exam gets only three takers. The sample questions include references to periods, movements, literary terms, authors and works, and seven questions on Dryden. The sample size of all the data varies from a dozen to one faked reply by Professor Winkle.
Other creative assessment methods involve tossing the student essays downstairs to see which go farthest, and throwing the I Ching. To tabulate the results: charts with percentages look good, as do bulleted lists, though the superimposition of one over the other is probably (too late) a poor decision.
Tension mounts till the assessors arrive, at least one in a rumpled brown business suit, all looking as if they haven’t slept since the start of the fall semester. The assessors ask a lot of questions, visit classes, and interview people whom no one ever thought to talk to previously, including Clarice, the custodial supervisor for the liberal arts building. Eventually, they write up a report that recommends a 15 percent reduction in adjunct labor, greater funding for core courses, less departmental internecine warfare, and more attention paid to Dryden.
The report is circulated down the ranks until, months later, it reaches the English department faculty. Since the administration has ignored the implications of the report, the department restricts discussion to only 17 hours, spread out among four faculty meetings.
What rides on all this? Not much till next decade’s visit, when the department scrambles to recall what it did the last time.
David Galef directs the creative writing program at Montclair State University. His latest book is the short story collection My Date With Neanderthal Woman (Dzanc Books).