Assessment

Wake Forest U. tries to measure well-being


Wake Forest U. looks to measure the lives of its students and alumni.

We need a new student data system -- but the right kind of one (essay)

The New America Foundation’s recent report on the Student Unit Record System (SURS) is fascinating reading.  It is hard to argue with the writers’ contention that our current systems of data collection are broken, do not serve the public or policy makers very well, and are no better at protecting student privacy than their proposed SURS might be. 

It also lifts the veil on One Dupont Circle and behind-the-scenes Washington lobbying and politics in a way that is delicious and also troubling, if not exactly "House of Cards" dramatic. Indeed, it is good wonkish history and analysis and sets the stage for a better informed debate about any national unit record system.

As president of a nonprofit private institution and paid-up member of NAICU, the industry sector and its representative organization in D.C. that respectively stand as SURS roadblocks in the report’s telling, I find myself both in support of a student unit record system and worried about the things it wants to record. Privacy, the principal argument mounted against such a system, is not my worry, and I tend to agree with the report’s arguments that it is the canard that masks the real reason for opposition: institutional fear of accountability.

Our industry is a troubled one, after all, that loses too many students (Would we accept a 50 percent success rate among surgeons and bridge builders?) and often saddles them with too much debt, and whose outputs are increasingly questioned by employers.

The lack of a student record system hinders our ability to understand our industry, as New America’s Clare McCann and Amy Laitinen point out, and understanding the higher education landscape grows ever more challenging for consumers. A well-designed SURS would certainly help with the former and might eventually help with the latter problem, though college choices have so much irrationality built into them that consumer education is only one part of the issue. But what does “well-designed” mean here? This is where I, like everyone, get worried.

For me, three design principles must be in place for an effective SURS:

Hold us accountable for what we can control. This is a cornerstone principle of accountability and data collection. As an institution, we should be held accountable for what students learn, their readiness for their chosen careers, and giving them all the tools they need to go out there and begin their job search. Fair enough. But don’t hold me accountable for what I can’t control:

  • The labor market. I can’t create jobs where they don’t exist, and the struggles of undeniably well-prepared students to find good-paying, meaningful jobs say more about the economy, the ways in which technology is replacing human labor, and the choices that corporations make than about my institution’s effectiveness. If the government wants to hold us accountable for earnings post-graduation, can we hold it accountable for making sure that good-paying jobs are out there?
  • Graduate motivation and grit. My institution can do everything in its power to encourage students to start their job search early, to do internships and network, and to be polished and ready for that first interview. But if a student chooses to take that first year to travel, to be a ski bum, or simply to stay in their home area when jobs in their discipline might be in Los Angeles or Washington or Omaha, there is little I can do. Yet those choices heavily influence any measure of earnings just after graduation.
  • Irrational passion. We should arm prospective students with good information about their majors: job prospects, average salaries, geographic demand, how recent graduates have fared.  However, if a student is convinced that being a poet or an art historian is his or her calling, to recall President Obama’s recent comment, how accountable is my individual institution if that student graduates and then struggles to find work? 

We wrestle with these questions internally. We talk about capping majors that seem to have diminished demand, putting in place differential tuition rates, and more. How should we think about our debt-to-earnings ratio? None of this is an argument against a unit record system, but a plea that it measure things that are more fully in our institutional control. For example, does it make more sense to measure earnings three or five years out, which at least gets us past the transitional period into the labor market and allows for some evening out of the flux that often attends those first years after graduation?

Contextualize the findings. As has been pointed out many times, a 98 percent graduation rate at a place like Harvard is less a testimony to its institutional quality than evidence of its remarkably talented incoming classes of students.  Not only would a 40 percent graduation rate at some institutions be a smashing success, but Harvard would almost certainly fail those very same students. As McCann and Laitinen point out, so much of what we measure and report on is not about students, so let’s make sure that an eventual SURS provides consumer information that makes sense for the individual consumer and institutional sector. 

If the consumer dimension of a student unit record system is to help people make wise choices, it can’t treat all institutions the same and it should be consumer-focused.  For example, can it be “smart” enough to solicit the kind of consumer information that then allows us to answer not only the question the authors pose, “What kinds of students are graduating from specific institutions?” but “What kinds of students like you are graduating from what set of similar institutions and how does my institution perform in that context?”

This idea extends to other items we might and should measure. For example, is a $30,000 salary for an elementary school teacher in a given region below, at, or above the average for a newly minted teacher three years after graduation? How then are my teaching graduates doing compared to those from my sector? Merely reporting the number without context is not very useful.

What we measure will matter. This is obvious, and it speaks both to the power of measuring and to the specter of unintended consequences. A cardiologist friend commented to me that his unit’s performance is measured in various ways and that the simplest way for him to improve its mortality metric is to take fewer very sick heart patients. He of course worries that such a decision contradicts his mission and the reason he practices medicine. It continues to bother me that proposed student records systems don’t measure learning, the thing that matters most to my institution. More precisely, that they don’t measure how much we have moved the dial for any given student, how impactful we have been.

Internally, we have honed our predictive analytics based on student profile data and can measure impact pretty precisely.  Similarly, if we used student profile data as part of the SURS consumer function, we might be able to address more effectively both my first and second design principles. 

Imagine a system that was smart enough to say “Based on your student profile, here is the segment of colleges similar students most commonly attend, what the average performance band is for that segment, and how a particular institution performs within that band across these factors.…”  We would address the thing for which we should be held most accountable, student impact, and we’d provide context. And what matters most -- our ability to move students along to a better education -- would start to matter most to everyone and we’d see dramatic shifts in behaviors in many institutions.

This is the hard one, of course, and I’m not saying that we ought to hold up a SURS until we work it out. We can do a lot of what I’m calling for and find ways to at least let institutions supplement their reports with the claims they make for learning and how they know.  In many disciplines, schools already report passage rates on boards, C.P.A. exams, and more.  Competency-based models are also moving us forward in this regard. 

These suggestions are not insurmountable hurdles to a national student unit record system. New America makes a persuasive case for putting in place such a system and I and many of my colleagues in the private, nonprofit sector would support one. 

But we need something better than a blunt instrument that replaces one kind of informational fog with another. That is their goal too, of course, and we should now step back from looking at what kinds of data we can collect to also look at our broader design principles, what kinds of things we should collect, and how we can best make sense of that data for students and their families.

Their report gives us a lot of the answer and smart guidance on how a system might work.  It should also be our call to action to further refine the design model to take into account the kinds of challenges outlined above.

Paul LeBlanc is president of Southern New Hampshire University.

UT System creates database to track graduates' earnings, debt


University of Texas System creates web tool to track graduates' earnings and debt five years after leaving college, among other outcomes.

Conference Connoisseurs visit the City of Brotherly Love (and cheesesteaks)

Our conference-going gourmands check out the culinary treats of the City of Brotherly Love.


The risks of assessing only what students know and can do (essay)

A central tenet of the student learning outcomes "movement" is that higher education institutions must articulate a specific set of skills, traits and/or dispositions that all of their students will learn before graduation. Then, through legitimate means of measurement, institutions must assess and publicize the degree to which their students make gains on each of these outcomes.

Although many institutions have yet to implement this concept fully (especially regarding the thorough assessment of institutional outcomes), this idea is more than just a suggestion. Each of the regional accrediting bodies now requires institutions to identify specific learning outcomes and demonstrate evidence of outcomes assessment as a standard of practice.

This approach to educational design seems at the very least reasonable. All students, regardless of major, need a certain set of skills and aptitudes (things like critical thinking, collaborative leadership, intercultural competence) to succeed in life as they take on additional professional responsibilities, embark (by choice or by circumstance) on a new career, or address a daunting civic or personal challenge. In light of the educational mission our institutions espouse, committing ourselves to a set of learning outcomes for all students seems like what we should have been doing all along.

Yet too often the outcomes that institutions select to represent the full scope of their educational mission, and the way that those institutions choose to assess gains on those outcomes, unwittingly limit their ability to fulfill the mission they espouse. For when institutions narrow their educational vision to a discrete set of skills and dispositions that can be presented, performed or produced at the end of an undergraduate assembly line, they often do so at the expense of their own broader vision that would cultivate in students a self-sustaining approach to learning. What we measure dictates the focus of our efforts to improve.

As such, it’s easy to imagine a scenario in which the educational structure that currently produces majors and minors in content areas is simply replaced by one that produces majors and minors in some newly chosen learning outcomes. Instead of redesigning the college learning experience to alter the lifetime trajectory of an individual, we allow the whole to be nothing more than the sum of the parts -- because all we have done is swap one collection of parts for another. Although there may be value in establishing and implementing a threshold of competence for a bachelor’s degree (for which a major serves a legitimate purpose), limiting ourselves to this framework fails to account for the deeply held belief that a college experience should approach learning as a process -- one that is cumulative, iterative, multidimensional and, most importantly, self-sustaining long beyond graduation.

The disconnect between our conception of a college education as a process and our tendency to track learning as a finite set of productions (outcomes) is particularly apparent in the way that we assess our students’ development as lifelong learners. Typically, we measure this construct with a pre-test and a post-test that track learning gains between the ages of 18 and 22 -- hardly a lifetime (the fact that a few institutions gather data from alumni 5 and 10 years after graduation doesn’t invalidate the larger point).

Under these conditions, trying to claim empirically that (1) an individual has developed and maintained a perpetual interest in learning throughout their life, and that (2) this lifelong approach is directly attributable to one’s undergraduate education probably borders on the delusional. The complexity of life even under the most mundane of circumstances makes such a hypothesis deeply suspect. Yet we all know of students who experienced college as a process through which they found a direction that excited them and a momentum that carried them down a purposeful path that extended far beyond commencement.

I am by no means suggesting that institutions should abandon assessing learning gains on a given set of outcomes. On the contrary, we should expect no less of ourselves than substantial growth in all of our students as a result of our efforts. Designed appropriately, a well-organized sequence of outcomes assessment snapshots can provide information vital to tracking student learning over time and potentially increasing institutional effectiveness. However, because the very act of learning occurs (as the seminal developmental psychologist Lev Vygotsky would describe it) in a state of perpetual social interaction, taking stock of the degree to which we foster a robust learning process is at least as important as taking snapshots of learning outcomes if we hope to gather information that helps us improve.

If you think that assessing learning outcomes effectively is difficult, then assessing the quality of the learning process ought to send chills down even the most skilled assessment coordinator’s spine. Defining and measuring the nature of process requires a very different conception of assessment -- and for that matter a substantially more complex understanding of learning outcomes.

Instead of merely measuring what is already in the rearview mirror (i.e., whatever has already been acquired), assessing the college experience as a process requires a look at the road ahead, emphasizing the connection between what has already occurred and what is yet to come. In other words, assessment of the learning that results from a given experience would include the degree to which a student is prepared or “primed” to make the most of a future learning experience (either one that is intentionally designed to follow immediately, or one that is likely to occur somewhere down the road). Ultimately, this approach would substantially improve our ability to determine the degree to which we are preparing students to approach life in a way that is thoughtful, proactively adaptable, and even nimble in the face of both unforeseen opportunity and sudden disappointment.

Of course, this idea runs counter to the way that we typically organize our students’ postsecondary educational experience. For if we are going to track the degree to which a given experience “primes” students for subsequent experiences -- especially subsequent experiences that occur during college -- then the educational experience can’t be so loosely constructed that the number of potential variations in the order of a student’s experiences virtually equals the number of students enrolled at our institution.

This doesn’t mean that we return to the days in which every student took the same courses at the same time in the same order, but it does require an increased level of collective commitment to the intentional design of the student experience, a commitment to student-centered learning that will likely come at the expense of an individual instructor’s or administrator’s preference for which courses they teach or programs they lead and when they might be offered.

The other serious challenge is the act of operationalizing a concept of assessment that attempts to directly measure an individual’s preparation to make the most of a subsequent educational experience. But if we want to demonstrate the degree to which a college experience is more than just a collection of gains on disparate outcomes -- whether these outcomes are somehow connected or entirely independent of each other -- then we have to expand our approach to include process as well as product.

Only then can we actually demonstrate that the whole is greater than the sum of the parts, that in fact the educational process is the glue that fuses those disparate parts into a greater -- and qualitatively distinct -- whole.

Mark Salisbury is director of institutional research and assessment at Augustana College, in Illinois. He blogs at Delicious Ambiguity.


Students, faculty sign pledge for college completion


Students are asking faculty members to pledge to create a culture of completion.

Seven state coalition pushes for more information about military credit recommendations


Seven states partner up to ensure that student veterans earn college credit for service, while also calling for help from ACE and the Pentagon.

Colleges should end outdated policies that don't put students first (essay)

When institutions and organizations begin to identify with processes instead of intended outcomes, they become vulnerable. They lose sight of their real missions and, when faced with challenges or disruptive innovation, often struggle to survive. 

Eastman Kodak, once the dominant brand in photography, identified too closely with the chemical processes it used and failed to recognize that its overarching mission was photography rather than film and film processing. Swiss watch manufacturers likewise identified too closely with the mechanical workings of their watches and lost market share to companies that understood that the real mission was the production of reliable and wearable instruments to tell time. If railroads had viewed their mission as transportation of people and goods rather than moving trains on tracks, we might have some different brand names on airplanes and vehicles today.  

In retrospect, it seems that the decisions made by these industries defied common sense. Although the leaders were experienced and capable, they were blinded by tradition, and they confused established processes with the real mission of their enterprises.

Higher education today identifies closely with its processes. In open-access public institutions, we recruit, admit and enroll students; assess them for college readiness; place or advise those who are not adequately prepared into remedial classes; give others access to a bewildering variety of course options, often without adequate orientation and advising; provide instruction, often in a passive lecture format; offer services to those who seek and find their way to them; grade students on how well they can navigate our systems and how well they perform on assignments and tests; and issue degrees and certificates based upon the number of credits the students accumulate in required and elective courses. 

We need to fund our institutions, so we concentrate on enrollment targets and make sure classroom seats are filled in accordance with regulations that specify when we count our students for revenue purposes.

At the same time that American higher education is so focused on and protective of its processes, it is also facing both significant challenges and potentially disruptive innovation. The challenges include:

  • responding to calls from federal and state policy makers for higher education to increase completion rates and to keep costs down;
  • finding more effective ways to help students who are unprepared for college to become successful students;
  • making college information more accessible and processes more transparent for prospective students and their parents;
  • explaining new college rating systems and public score cards;
  • coordinating across institutional boundaries to help an increasingly mobile student population transfer more seamlessly and successfully from one institution to another and graduate;
  • dealing with the threat of a shift from peer-based institutional accreditation to a federal system of quality assurance; and
  • responding to new funding systems that are based upon institutional performance.

Potentially disruptive innovations include the increasing use of social media such as YouTube and other open educational resources (OER) for learning, the advent of massive open online courses (MOOCs), the quick access to information made possible by advances in technology, and the potential for a shift from the Carnegie unit to documented competencies as the primary way to measure student progression.

One of today’s most significant challenges to higher education is the increased focus on student success. In response to calls and sometimes financial incentives from policy makers -- and with the assistance provided by major foundations -- colleges and universities are shifting their focus from student access and opportunity to student access and success. Higher education associations have committed themselves to helping institutions improve college completion rates. The terminology used is that we are shifting from an “access agenda” to a “success agenda” or a “completion agenda.” 

This identification with outcomes is positive, but it raises concerns about both loss of access to higher education for those students who are less likely to succeed, and the potential for decreased academic rigor. The real mission of higher education is student learning; degrees and certificates must be the institution’s certification of identified student learning outcomes rather than just accumulated credits.

Faculty and academic administrators, perhaps working with appropriate representatives from business and industry, need to identify the learner competencies that should be developed by the curriculum. The curriculum should be designed or modified to ensure that those competencies are appropriately addressed. Students should be challenged to rise to the high expectations required to master the identified competencies and should be provided the support they need to become successful. Finally, learners should be assessed in order to ensure that a degree or certificate is a certification of acquired competencies. 

What would we do differently if, rather than identifying with our processes, we identified with our overarching mission -- student learning? When viewed through the lens of student learning, many of the processes that we currently rely upon and the decisions we make (or fail to make) seem to defy common sense. The institution itself controls some of these policies and practices; others are policies (or the lack of policies) between and among educational institutions; and some are the result of state or federal legislation.

A prime example of a detrimental institutional process is late registration, the practice of allowing students to register after orientation activities -- and often after classes have begun. Can we really expect students to be successful if they enter a class after it is under way? Research consistently shows that students who register late are at a significant disadvantage and, most often, fail or drop out.

Yet many institutions continue this practice, perhaps in the belief that they are providing opportunity -- but it is opportunity that most often leads to discouragement and failure. Some institutional leaders may worry about the potential negative impact on budgets of not having seats filled. However, the enrollment consequences of eliminating late registration have almost always been temporary or negligible.

Sometimes institutional policies are developed in isolation and create unintended roadblocks for students. When I assumed the presidency of Palomar College, the college had a policy that students could not repeat a course in which they received a passing grade (C or above). But another policy prohibited students who had not received a grade of B or higher in the highest-level developmental writing class from progressing to freshman composition. Students who passed the developmental class with a grade of C were out of luck and had to transfer to another institution if they were to proceed with their education. The English faculty likely wanted only the best-performing students from developmental writing in their freshman composition classes, but this same objective could be accomplished by raising the standards for a C grade in the developmental writing class.

Higher education institutions rely on their faculty and staff to accomplish their missions, so it is important for everyone to understand that mission in the same way. A faculty member I once met told me that he was proud of the high rate of failure in his classes. He believed that it demonstrated both the rigor of his classes and his excellence as a teacher. That makes as much sense as measuring the excellence of medical doctors by the percentage of their patients who die. Everyone at the institution has a role in promoting student learning, and everyone needs to understand that the job is to inspire students and help them to be successful rather than to sort out those who face challenges.


It is important for faculty and staff to enjoy their work, to feel valued by trustees, administrators, peers, and students -- and for them to feel free to innovate and secure in their employment. As important as our people are to accomplishing our mission, their special interests are not the mission. Periodic discussions about revising general education requirements are often influenced by faculty biases about the importance of their disciplines or even by concerns about job security rather than what students need to learn as part of a degree or certificate program. Before these discussions begin, ground rules should be established so that the determinations are based upon desired skills and knowledge of graduates.

Too often, students leave high school unprepared for college, and they almost always face barriers when transferring from one higher education institution to another. The only solution to these problems is for educators to agree on expectations and learning outcome standards. However, institutional autonomy and sometimes prejudice act as barriers to faculty dialogue across institutional boundaries. It is rare for community college faculty and administrators to interact with their colleagues in high schools -- and interaction between university and community college faculty is just as rare. 

Why should we be surprised when students leaving high school are often not ready to succeed in college or when the transition between community college and university is not as seamless as it should be for students? If we are serious about increasing the rates of success for students, educators will need to come together to begin important discussions about standards for curriculums and expectations for students.

Despite the best intentions of legislators, government policies often force the focus of institutions away from the mission of student learning. In California, legislation requires community colleges to spend at least 50 percent of their revenue on classroom faculty. Librarians, counselors, student advisers, and financial aid officers are “on the other side of the Fifty Percent Law.” The ratio of students to advisers or counselors is most often greater than a thousand to one. Research clearly demonstrates that investments in student guidance pay off in increased student learning and success. Despite the fact that community college students are the most financially disadvantaged students in higher education, they are less likely to receive the financial aid they deserve. Yet the Fifty Percent Law severely limits what local college faculty and academic administrators can do on their campuses to meet the needs of students in these areas. Clearly, this law is a barrier to increasing student learning and success. Perhaps state legislators and the faculty unions that lobby them do not trust local trustees and administrators to spend resources appropriately, but this law, in its current form, defies common sense if our mission is student learning.

At the federal level, systems of accountability that track only students who are first-time, full-time freshmen to an institution do not make sense in an era when college students are more mobile than ever and in an environment in which most community college students attend part-time.  A few years ago, I met with a group of presidents of historically black universities and encouraged them to work with community colleges to increase the number of students who transfer to their institutions.  The presidents told me that doing so could lower their measured student success rates because transfers are not first-time freshmen, and the presidents were not willing to take that risk. Fortunately, officials in the U.S. Department of Education are aware of this issue and are working to correct data systems. 

There are many other examples of policies and procedures that seem senseless when viewed through the lens of student learning rather than cherished processes and tradition, just as it seems silly that Eastman Kodak did not recognize that its business was photography or that the Swiss watch manufacturers did not understand that their business was to manufacture accurate and affordable wristwatches. 

American higher education today is increasingly criticized for increasing costs and low completion rates. Higher education costs have risen at an even faster rate than those of health care; student indebtedness has skyrocketed to nearly $1 trillion; and college completion rates in the United States have fallen to 16th in the world. In addition, new technologies and innovations may soon threaten established practices.

Challenging the status quo and confronting those with special interests that are not aligned with the mission of higher education can be risky for both elected officials and educational leaders. But given the challenges that we face today, “muddling through” brings even greater risks. Every decision that is made and every policy that is proposed must be data-informed, and policy makers and leaders need the courage to ask how the changes will affect student learning, student success, and college costs. Existing policies and practices should be examined with the same questions in mind. Faculty and staff need to be free of restraining practices so they can experiment with strategies to engage students and to help them to learn.

Colleges and universities are too important for educators to deny the challenges and demands of today and too important for policy makers to pass laws because of pressure from special interests or based on their recollection of what college used to be. Decisions cannot be based on past practices when the world is changing so rapidly. The mission of higher education is student learning, and all of our policies, procedures and practices must be aligned with that mission if our institutions are to remain relevant.  

George R. Boggs is the president and CEO emeritus of the American Association of Community Colleges. He is a clinical professor for the Roueche Graduate Center at National American University.

Essay on how President Obama's rating system should work

After a month of speculation, President Obama unveiled his plan to “shake up” higher education last week. As promised, the proposal contained some highly controversial elements, none greater than an announcement that the U.S. Department of Education will begin to rate colleges and universities in 2015 and tie financial aid to those results three years later. The announcement prompted the typical clichéd Beltway commentary from the higher education industry about “the devil is in the details” and the need to avoid “unintended consequences,” which should rightly be translated as: “We are not going to object outright now, while everyone’s watching, but will instead nitpick it to death later.”

But the ratings threat is more substantive than past announcements to put colleges “on notice,” if for no other reason than it is something the department can do without Congressional approval. Though it cannot actually tie aid received directly to these ratings without lawmakers (and the threat to do so would occur after Obama leaves office), the department can send a powerful message both to the higher education community and consumers nationwide by publishing these ratings.

Ratings systems, however, are no easy matter and require lots of choices in their methodologies. With that in mind, here are a few recommendations for how the ratings should work. 

Ratings aren’t rankings.

Colleges have actually rated themselves in various forms for well over a hundred years. The Association of American Universities is an exclusive club of the top research universities that formed in 1900. The more in-depth Carnegie classifications, which group institutions based upon their focus and types of credentials awarded, have been around since the early 1970s. Though they may not be identified as such by most people, they are forms of ratings — recognitions of the distinctions between universities by mission and other factors.

A federal rating system should be constructed similarly. There’s no reason to bother with ordinal rankings like those of U.S. News & World Report, because distinguishing among a few top colleges is less important than sorting out those that really are worse than others. Groupings that are narrow enough to recognize differences but sufficiently broad to represent a meaningful sample are the way to go. The Department could even consider letting colleges choose their initial groupings, as some already do for the data feedback reports the Department produces through the Integrated Postsecondary Education Data System (IPEDS).

It’s easier to find the bottom tail of the distribution than the middle or top.

There are around 7,000 colleges in this country. Some are fantastic world leaders. Others are unmitigated disasters that should probably be shut down. But the vast majority fall somewhere in between. Sorting out the middle part is probably the hardest element of a ratings system — how do you discern within averageness?

We probably shouldn’t. A ratings system should sort out the worst of the worst by setting minimum performance standards on a few clear measures. It would clearly demonstrate that there is some degree of results so bad that it merits a poor rating. This standard could be excessively, laughably low, like a 10 percent graduation rate. Identifying the worst of the worst would be a huge step forward from what we do now. An ambitious ratings system could do the same thing on the top end using different indicators, setting very high bars that only a tiny handful of colleges would reach, but that’s much harder to get right.
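The minimum-floor idea can be made concrete with a small sketch. This is purely illustrative: the 10 percent graduation-rate bar comes from the essay, but the data structure and field names are hypothetical, not anything the Department has proposed.

```python
# Illustrative sketch: flag institutions that miss a minimum performance
# floor. The 10 percent graduation-rate bar is the essay's example of a
# deliberately low standard; the data format here is hypothetical.

FLOORS = {"grad_rate": 0.10}  # metric -> minimum acceptable value

def falls_below_floor(college: dict) -> bool:
    """Return True if the college misses any minimum standard."""
    return any(college.get(metric, 0.0) < floor
               for metric, floor in FLOORS.items())

colleges = [
    {"name": "College A", "grad_rate": 0.55},
    {"name": "College B", "grad_rate": 0.08},
]
worst = [c["name"] for c in colleges if falls_below_floor(c)]
# worst contains only "College B"
```

The point of the sketch is that a floor-based rating needs only a handful of clear measures and thresholds, rather than a full ordinal ranking of all 7,000 institutions.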

Don’t let calls for the “right” data be an obstructionist tactic.

Hours after the President’s speech, representatives of the higher education lobby stated the administration’s ratings “have an obligation to perfect data.” It’s a reasonable requirement that a rating system not be based only on flawed measures, like holding colleges accountable just for the completion of first-time, full-time students. But the call for perfect data is a smokescreen for intransigence, setting a nearly unobtainable bar. Even worse, the very people calling for this standard represent the institutions that will be the biggest roadblock to obtaining information that fulfills it. Having data demands come from those holding the data hostage creates a perfect opportunity for future vetoes in the name of making the perfect the enemy of the good. It’s also a tried and true tactic from One Dupont Circle. Look at graduation rates, where the higher education lobby is happy to put out reports critiquing their accuracy after getting Congress to enact provisions that banned the creation of better numbers during the last Higher Education Act reauthorization.

To be sure, the Obama administration has an obligation to engage in an open dialogue with willing partners to make a good faith effort at getting the best data possible for its ratings. Some of this will happen anyway thanks to improvements to the department’s IPEDS database. But if colleges are not serious about being partners in the ratings and refuse to contribute the data needed, they should not then turn around and complain about the results.

Stick with real numbers that reflect policy goals.

Input-adjusted metrics are a wonk’s dream. Controlling for factors and running regressions get us all excited. But they’re also useless from a policy implementation standpoint. Complex figures that account for every last difference among institutions will contextualize away all meaningful information until all that remains is a homogenous jumble where everyone looks the same. Controlling for socioeconomic conditions also runs the risk of simply inculcating low expectations for students based upon their existing results. Not to mention that any modeling choices in an input-adjusted system will add another dimension of criticism to the firestorm that will already surround the measures chosen.

That does not mean context should be ignored. There are just better ways to handle it. First and foremost is basing ratings on performance relative to peers. Well-crafted peer comparisons can accomplish largely the same thing as input adjustment, since institutions would be facing similar circumstances, while still relying on straightforward figures. Second, unintended consequences should be addressed by measuring them with additional metrics and clear goals. For example, afraid that focusing on a college's completion rate will discourage enrolling low-income students or unfairly penalize colleges that serve large numbers of such students? The ratings should give institutions credit for the socioeconomic diversity of their student body, require a minimum percentage of Pell students, and break out the completion rate by family income. Doing so not only provides a backstop against gaming, it also lays out clearer expectations to guide colleges' behavior; the U.S. News rankings experience has shown that colleges know how to respond to expectations, even less useful ones like alumni giving (sorry, Brown, for holding you back on that one).

Mix factors a college can directly control with ones it cannot.

Institutions have an incentive to improve on measures included in a rating system. But some subset of colleges will also try to evade or “game” the measure. This is particularly true if it’s something under their control — look at the use of forbearances or deferments to avoid sanctions under the cohort default rate. No system will ever be able to fully root out gaming and loopholes, but one way to adjust for them is by complementing measures under a college’s control with ones that are not. For example, concerns about sacrificing academic quality to increase graduation rates could be partially offset by adding a focus on graduates’ earnings or some other post-completion behavior that is not under the college’s control. Institutions will certainly object to being held accountable for things they cannot directly influence. But basing the uncontrollable elements on relative instead of absolute performance should further ameliorate this concern.

Focus on outputs but don’t forget inputs.

Results matter. An institution that cannot graduate its students or avoid saddling them with large loan debts they cannot repay upon completion is not succeeding. But a sole focus on outputs could encourage an institution to avoid serving the neediest students as a way of improving its metrics and undermine the access goals that are an important part of federal education policy.

To account for this, a ratings system should include a few targeted input metrics that reflect larger policy goals like socioeconomic diversity or first-generation college students. Giving colleges “credit” in the ratings for serving the students we care most about will provide at least some check against potential gaming. Even better, some metrics should have a threshold a school has to reach to avoid automatic classification into the lowest rating.

Put it together.

A good ratings system is both consistent and iterative. It keeps the core pieces the same year to year but isn’t too arrogant to include new items and tweak ones that aren’t working. These recommendations offer a place to start. Group the schools sensibly — perhaps even relying on existing classifications like Carnegie’s. The ratings should establish minimum performance thresholds on the metrics we think are most indicative of an unsuccessful institution — things like completion rates, success with student loans, time to degree, etc. They should consist of outcomes metrics that reflect institutions’ missions — such as transfer success for two-year schools, licensure and job placement for vocational programs, and earnings, completion, and employment for four-year colleges and universities. But they should also have separate metrics to acknowledge policy challenges we care about — success in serving Pell students, the ability to get remedial students college-ready, socioeconomic diversity, etc. — to discourage creaming. The result should be something that reflects values and policy challenges, anticipates attempts to find workarounds, and refrains from dissolving into wonkiness and theoretical considerations that are divorced from reality.
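The overall approach — group institutions by mission, then rate each relative to its peer group rather than on a single national ranking — can be sketched in a few lines. Everything below is a hypothetical illustration (the group labels, the one metric, and the 10 percent bands are invented for the example), not a proposed federal methodology.

```python
# Hedged sketch of peer-relative ratings: group institutions by a
# mission classification (e.g., a Carnegie-style category), then rate
# each against its own group's average. All names, metrics, and the
# +/-10 percent bands are illustrative assumptions.
from statistics import mean

def rate_within_peers(colleges, metric="completion"):
    """Rate each college 'above', 'near', or 'below' its peer-group mean."""
    groups = {}
    for c in colleges:
        groups.setdefault(c["group"], []).append(c)
    ratings = {}
    for members in groups.values():
        avg = mean(m[metric] for m in members)
        for m in members:
            if m[metric] >= avg * 1.1:
                ratings[m["name"]] = "above peers"
            elif m[metric] <= avg * 0.9:
                ratings[m["name"]] = "below peers"
            else:
                ratings[m["name"]] = "near peers"
    return ratings

colleges = [
    {"name": "A", "group": "four-year", "completion": 0.70},
    {"name": "B", "group": "four-year", "completion": 0.50},
    {"name": "C", "group": "two-year", "completion": 0.30},
]
ratings = rate_within_peers(colleges)
```

Note that the two-year college is never compared against the four-year ones, which is the essay’s core point: broad categorical ratings within sensible peer groups, not one ordinal list.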

Ben Miller (millerb@newamerica.net)

Ben Miller is a senior policy analyst in the New America Foundation's education policy program, where he provides research and analysis on policies related to postsecondary education. Previously, Miller was a senior policy advisor in the Office of Planning, Evaluation, and Policy Development in the U.S. Department of Education.

