I approach the topic of the appropriate reach of government regulation into higher education very much of two minds. On the one hand, I am the president of an independent-minded private college that has been in continuous operation for 139 years and delivers strong outcomes in terms of access, persistence, graduation, employment and post-graduation debt. Federal regulation isn't likely to impose higher performance thresholds than we have already established for ourselves (and consistently achieved), or to improve our performance, but added regulations will very likely impose new compliance costs on us, in addition to being just plain irritating.
On the other hand, I serve on the Board of Trustees of the Higher Learning Commission, and that service has opened my eyes both to the broad variety of institutions that the Commission serves and, very frankly, to instances of institutions that have gone awry, that are not serving their students well, that are not good stewards of the federal dollars that flow through their budgets, and that are either unwilling to admit their shortcomings or unable to address them.
The investment that government -- both federal and state -- makes in financial aid to students, who then pay that money to us so that we can use it to deliver our programs, is certainly considerable, and we need to be good stewards of it, so that students are well-served and taxpayers' dollars well-spent. If those ends are to be achieved, some regulation will be necessary.
So, how much is just right? Here’s an answer: the minimum amount necessary to ensure that students are well-served and that tax dollars are well-spent.
As the reaction from the higher education community to the Department of Education's talk about a federal rating system for colleges and universities demonstrates, those seemingly simple goals aren't simple at all once you get into any level of detail about what it means to be "well-served" or "well-spent."
Does "well-served," for example, tie out to a minimally acceptable four- or six-year graduation rate? What about open-access institutions whose mission is to prepare underserved students to succeed at a different kind of institution? What about institutions where graduation may not be the most important goal?
"Well-spent" raises similar questions. If you are an institution with a graduation rate in the 90 percents, but the percent of Pell-eligible students in your student body doesn’t reach the number of Pell-eligible students that somebody in an office in Washington decided was minimally acceptable, does that mean the federal dollars that flowed to your budget through student tuition payments weren't well-spent because they weren't supporting certain policy goals, despite evidence that your program is effective?
These problems aren't new. Every regulated industry faces them, and perhaps as we think about proposed increases in the regulation of higher education a wise thing to do would be to study those industries -- if any -- where the right balance between the actors in the industry and government regulation has been struck.
In the meantime, here are a few thoughts about how much government regulation is just right:
It's too much if it imposes compliance costs and burdens on institutions that plainly are serving students well and being good stewards of tax dollars.
It's not enough if there is demonstrable evidence that significant numbers of institutions with clearly articulated and appropriate missions are failing to deliver on those missions while nevertheless consuming substantial resources.
It's not enough if there is clear and demonstrable evidence that self-regulation, and by that I mean accreditation, is ineffective.
It's too much if regulation requires an institution that is otherwise flourishing to change its mission in response to the policy goals of whoever happens to be running the U.S. Department of Education at the moment.
It's too much if the net effect is to narrow the diversity of types of higher education institutions in America, the diversity of their missions, of their entry points, and so forth.
It's too much if a compliance industry grows up around regulation.
It's too much if it can't be demonstrated that the net effect of the regulations, after the costs and burdens they impose, has been to make institutions serve students better and steward tax dollars more responsibly.
Many institutions of higher education in America don't need more regulation to help or force them to do their jobs. Some do. Regulation that starts from that simple fact is most likely to be good for students, good for higher education, and good for the country.
David R. Anderson is president of St. Olaf College, in Minnesota. This column is adapted from remarks made at the panel on “How Much Government Regulation of Higher Education is Just Right?” at the 2014 Annual Conference of the Higher Learning Commission.
The academic preparation of incoming college students has a strong impact on dropout rates, according to a newly released report from ACT, the nonprofit testing organization. The findings show that students who earn lower scores on college readiness assessments are at the greatest risk of dropping out, particularly students with less-educated parents.
Inside Higher Ed is today releasing a free compilation of articles -- in print-on-demand format -- about the drive to increase the number of Americans with college credentials. The articles reflect challenges faced by colleges, and some of the key strategies they are adopting. Download the booklet here.
On Monday, April 28, at 2 p.m. Eastern, Inside Higher Ed editors Scott Jaschik and Doug Lederman will conduct a free webinar to talk about the issues raised in the booklet's articles. To register for the webinar, please click here.
A newly formed coalition of 20 states is trying to create joint data standards and data sharing agreements for non-degree credentials, like industry certifications. While demand is high for these credentials, data is scarce on whether students are able to meet industry-specified competencies. The Workforce Credentials Coalition, which held its first meeting at the New America Foundation on Monday, wants to change that by developing a unified data framework between colleges and employers. Community college systems in California and North Carolina are leading the work.
Also this week, the Workforce Data Quality Campaign released a new report that describes states and schools that have worked to broker data-sharing agreements with certification bodies and licensing agencies. The goal of those efforts is to improve non-degree programs and to reduce confusion about the different types of credentials.
Congratulations on your MVP award at the NBA Celebrity All-Star game: 20 points, 8 boards, 3 assists and a steal -- you really filled up that stat sheet. Even the NBA guys were amazed at your ability to play at such a high level -- still. Those hours on the White House court are paying off!
Like you, I spent some time playing overseas after college and have long been a consumer of basketball box scores -- they tell you so much about a game. I especially like the fact that the typical box score counts assists, rebounds and steals -- not just points. I have spent many hours happily devouring box scores, mostly in an effort to defend my favorite players (who were rarely the top scorers).
As a coach of young players, my wife Michele and I (she is the real player in the family) expanded the typical box score -- we counted everything in the regular box score, then added “good passes,” “defensive stops,” “loose ball dives” and anything else we could figure out a way to measure. This was all part of an effort to describe for our young charges the “right way” to play the game. I think you will agree that “points scored” rarely tells the full story of a player’s worth to the team.
Mr. Secretary, I think the basketball metaphor is instructive when we “measure” higher education, which is a task that has taken up a lot of your time lately. If you look at all the higher education “success” measures as a basketball box score instead of a golf-type scorecard, it helps clarify two central flaws.
First, exclusivity. Almost every single higher education scorecard fails to account for the efforts of more than half of the students actually engaged in “higher” education.
At Mount Aloysius College, we love our Division III brand of Mountie basketball, but we don’t have any illusions about what would happen if we went up against those five freshman phenoms from Division I Kentucky (or UConn/Notre Dame on the women’s side) -- especially if someone decided that half our points wouldn’t even get counted in the box score.
You see, the databases for all the current higher education scorecards focus exclusively on what the evaluators call “first-time four-year bachelor’s-degree-seeking students.” Nothing wrong with these FTFYBDs, Mr. Secretary, except that they represent less than half of all students in college, yet are the only students the scorecards actually “count.”
None of the following “players” show up in the box score when graduation rates are tabulated:
Players who are non-starters (that is, they aren’t FTFYBDs) — even if they play every minute of the last three quarters, score the most points and graduate on time. These are students who transfer (usually to save money, sometimes to take care of family), spring enrollees (increasingly popular), part-time students and mature students (who usually work full-time while going to school).
Any player on the team, even a starter, who has transferred in from another school. If you didn’t start at the school from which you graduated, then you don’t “count,” even if you graduate first in your class!
Any player, even if she is the best player on the team, who switches positions during the game: Think two-year degree students who switch to a four-year program, or four-year degree students who instead complete a two-year degree (usually because they have to start working).
Any player who is going to play for only two years. This includes every student at a community college, as well as graduates who earn a registered-nurse degree in two years and go straight to work at a hospital (even if they later complete a four-year bachelor’s degree, they still don’t count).
Any scoring by any player that occurs in overtime: Think mature and second-career students who never intended to graduate on the typical schedule because they are working full time and raising a family.
The message sent by today’s flawed college scorecards is unavoidable: These hard-working students don’t count.
Mr. Secretary, I know that you understand how essential two-year degrees are to our economy; that students who need to transfer for family, health or economic reasons are just as valuable as FTFYBDs; and that nontraditional students are now the rule, not the exception. But current evaluation methods lag behind readily available data and are out of sync with the real lives of many students who simply don't have the economic luxury of a fully financed four-year college degree. All five types of students listed above just don't show up anywhere in the box score.
“Scorecards” should look more like box scores and include total graduation rates for both two- and four-year graduates (the current IPEDS overall grad rate), all transfer-in students (it looks like IPEDS may begin to track these), as well as transfer-out students who complete degrees (current National Student Clearinghouse numbers). These changes would provide a more accurate picture of student success at all institutions.
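To make the arithmetic concrete, here is a minimal sketch of how a "box score" completion rate might differ from a scorecard rate that counts only first-time, full-time bachelor's-degree seekers. All cohort counts and category names below are invented for illustration; they are not real IPEDS or Clearinghouse figures.

```python
# Hypothetical "box score" completion rate: count every finisher,
# not just first-time four-year bachelor's-degree seekers (FTFYBDs).
# Every number and field name here is an illustrative assumption.

def box_score_rate(cohort):
    finishers = (
        cohort["ftfybd_grads"]          # traditional grads (all the current scorecard counts)
        + cohort["two_year_grads"]      # associate-degree completers
        + cohort["transfer_in_grads"]   # graduated here after transferring in
        + cohort["transfer_out_grads"]  # transferred out and finished elsewhere
    )
    return finishers / cohort["entering_students"]

cohort = {
    "entering_students": 1000,
    "ftfybd_grads": 380,
    "two_year_grads": 120,
    "transfer_in_grads": 60,
    "transfer_out_grads": 90,
}

scorecard_rate = cohort["ftfybd_grads"] / cohort["entering_students"]
print(f"Scorecard-style rate: {scorecard_rate:.0%}")
print(f"Box-score rate:       {box_score_rate(cohort):.0%}")
```

With these made-up numbers, the same institution shows a 38 percent rate under the current method and a 65 percent rate once all completers are counted; the point is not the specific figures but how much of the "stat sheet" the narrower measure leaves blank.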
Another relatively easy fix would be to break out cohort comparisons that would allow Scorecard users to see how institutions perform when compared to others with a similar profile (as in the Carnegie Classifications).
The second issue is fairness.
Current measurement systems make no effort to account for the difference between (in basketball terms) Division I and Division III, between “highly selective schools” that “select” from the top echelons of college “recruits” and those schools that work best with students who are the first in their families to go to college, or low-income, or simply less prepared (“You can’t coach height,” we used to say).
As much as you might love the way Wisconsin-Whitewater won this year’s Division III national championship (last-second shot), I don’t think even the most fervent Warhawks fan has any doubt about how they would fare against Coach Bo Ryan’s Division-I Wisconsin Badgers. The Badgers are just taller, faster, stronger — and that’s why they’re in Division I and why they made it to the Final Four.
The bottom line on fairness is that graduation rates track closely with family income, parental education, Pell Grant eligibility and other obvious socioeconomic indicators. These data are consistent over time and truly incontrovertible.
Mr. Secretary, I know that you understand in a personal way how essential it is that any measuring system be fair. And I know you already are working on this problem, on a “degree of difficulty” measure, much like the hospital “acuity index” in use in the health care industry.
The classification system that your team is working on right now could assign a coefficient that weighs these measurable mitigating factors when posting outcomes, scoring schools more fairly and helping to identify the institutions that are doing the best job of serving the students with the fewest advantages.
In the health care industry, patients are assigned “acuity levels” (based on a risk-adjustment methodology), numbers that reflect a patient’s condition upon admission to a facility. The intent of this classification system is to consider all mitigating factors when measuring outcomes and thus to provide consumers accurate information when comparing providers. A similar model could be adopted for measuring higher education outcomes.
This would allow consideration of factors like (1) Pell eligibility rates, (2) income relative to poverty rates, (3) percentage that are first-generation-to-college, (4) SAT scores, etc. A coefficient that factors in these “challenges” could best measure higher education outcomes. Such “degree of difficulty” factors, like “acuity levels,” would provide consumers accurate information for purposes of comparison.
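As a minimal sketch of how such a "degree of difficulty" coefficient might work: the weights, inputs and institutions below are invented for illustration, and a real index would have to be calibrated against national data rather than set by hand.

```python
# Hypothetical "degree of difficulty" adjustment, loosely modeled on
# hospital acuity indexes. All weights and inputs are invented
# assumptions for illustration, not a proposed official formula.

def difficulty_index(pell_rate, first_gen_rate, median_sat):
    # Higher Pell and first-generation shares raise the difficulty;
    # a higher median SAT (normalized around 1000) lowers it.
    sat_factor = (1000 - median_sat) / 1000  # negative for selective schools
    return 1.0 + 0.5 * pell_rate + 0.3 * first_gen_rate + 0.5 * sat_factor

def adjusted_grad_rate(raw_rate, index):
    # Scale the raw graduation rate by the difficulty index so that
    # institutions serving less-advantaged students compare fairly.
    return raw_rate * index

# A selective school: high raw grad rate, low-difficulty student body.
selective = adjusted_grad_rate(0.90, difficulty_index(0.12, 0.10, 1350))
# An access-oriented school: lower raw rate, much harder "court."
access = adjusted_grad_rate(0.55, difficulty_index(0.55, 0.45, 950))

print(f"Selective school, adjusted: {selective:.2f}")
print(f"Access school, adjusted:    {access:.2f}")
```

Under these made-up weights, the gap between the two schools' adjusted scores is far narrower than their raw graduation rates suggest, which is exactly what an acuity-style adjustment is meant to reveal.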
Absent such a calculation, colleges will continue to have every incentive to “cream” their admissions, and every disincentive against serving the students you have said are central to our economic future, including two-year, low-income and minority students. That’s the “court” that schools like Mount Aloysius and 16 other Mercy colleges play on. We love our FTFYBDs, but we work just as hard on behalf of the more than 50 percent of our students whose circumstances require a less traditional but no less worthy route to graduation. We think they count, too.
Thanks for listening.
Thomas P. Foley is president of Mount Aloysius College.
Institutional research offices at public colleges and universities that are part of state systems focus more heavily on data collection and report writing than on analysis and communication, and spend far more of their time examining student retention and graduation than issues related to campuses' use of money, people and facilities, the National Association of System Heads says in a new report. The study, based on surveys of campus and system IR officials and interviews with campus leaders, says that IR officials themselves are more confident than their bosses about whether institutional research offices can adapt to the increased demands on their institutions to use data to improve performance.
"IR offices are running hard and yet many are still falling behind, deluged by demands for data collection and report writing that blot out time and attention for deeper research, analysis and communication," the report states. Institutional leaders "often expressed the need for some ‘outside’ help in this area, drawing from expertise from other complex organizations such as hospitals, where there is a sense that more is being done to use data to drive both accountability and change."