While there is heated debate over how best to fix America’s higher education system, everyone agrees on the need for meaningful reform. It’s difficult to argue against reform in the face of college attainment rates that are stalled at just under 40 percent and the growing number of graduates left wondering whether they will ever find careers that allow them to pay off their mounting debts.
Any policy debate should start with a clear picture of how the dollars are being spent and whether that money is achieving the desired outcomes. Unfortunately, a lack of accurate data makes it impossible to answer many of the most basic questions for students, families and policy makers who are investing significant time and money in higher education.
During the recent State of the Union address, President Obama talked about shaking up the system of higher education to give parents more information and colleges more incentives to offer better value. Though he provided little detail, he was almost certainly referring to the broad vision for higher education reform he outlined over the summer, centered on a new rating system for colleges and universities that would eventually be used to influence spending decisions on federal student financial aid.
However, the President’s proposal rests on a data system that is imperfect, at best. As former U.S. Secretary of Education Margaret Spellings said of the President’s plan, “we need to start with a rich and credible data system before we leap into some sort of artificial ranking system that, frankly, would have all kinds of unintended consequences.”
The American Council on Education, which represents the presidents of more than 1,800 accredited, degree-granting institutions, including two- and four-year colleges, private and public universities, and nonprofit and for-profit entities, agrees on the need for better data as well.
A senior staff member at ACE has been quoted as saying that “if the federal government develops a high-stakes ratings system, they have an obligation to have very accurate data,” and that he was “surprised that anyone would think it controversial that having such data is a prerequisite.”
In order to bridge the data gap, we introduced the Student Right to Know Before You Go Act, which would make the complete range of comparative data on colleges and universities easily accessible to the public online and free of charge by linking student-level academic data with employment and earnings data.
For the first time, students and policy makers would be able to accurately compare -- down to the institution and specific program of study -- graduation and transfer rates, the frequency with which graduates go on to pursue higher levels of education, student debt, and post-graduation earnings and employment outcomes. Such a linkage is the best feasible way to create this data-rich environment.
None of these metrics is currently available to those seeking to evaluate a school or program, though plenty of misleading data are out there.
For example, Marylhurst University, a small liberal arts school in Oregon, was assessed with a 0 percent graduation rate by the U.S. Department of Education. This is because the department's current metrics account only for first-time, full-time students, and Marylhurst serves nontraditional students who are part time or have returned to school later in life. Schools like this that serve nontraditional students -- who now make up the majority of all students -- don’t get credit for their success, at least not according to current federal evaluations.
With so many in the higher education community bemoaning the lack of quality data, and clear paths forward for attaining better data, why hasn’t it happened?
A major part of the answer: institutional self-interest. Every school in the country shows widely disparate performance outcomes depending on the category, and many college presidents are in no hurry to open their less-than-appealing outcome data to public scrutiny.
There’s a fear that students and families will vote with their pocketbooks and choose different schools that better meet their needs. The abundance of inaccurate and incomplete data provides institutional leaders with a line of defense: so long as such data are the norm upon which they are ranked and rated, they can defend themselves on the basis of flawed methodology.
Not all schools fear the implications of better quality data; in fact, many schools crave these data and want them made public. They know they’ll stack up well against their competition.
Moreover, many schools realize that getting better data is critical to helping identify what’s working and what’s not for their students in order to build stronger programs. Nevertheless, some of the “Big Six” higher education associations still cling to the status quo and represent a key challenge to realizing these commonsense reforms.
It is long past time for these important actors to look away from their self-interest and toward what’s in America’s collective interest -- a future where higher education produces better outcomes for students and the economy -- by supporting the Know Before You Go Act.
U.S. Sen. Ron Wyden is an Oregon Democrat, and U.S. Sen. Marco Rubio is a Florida Republican.
It’s that time of decade again, when randomly selected departments at U of All People are faced with assessment. The administration brings in a posse of NAAAAAA experts with credentials bought from the people who sell fake IDs, and has the faculty entertain them for three days while they poke their noses into everything, including Professor Winkle’s Dryden seminar, which no one has disturbed in years. Here’s how the process works, at least in the English department:
Three months before the assessors arrive, the department is galvanized into action by the chair, acting on directives from the dean, obeying the orders of the provost, who bows to the president. “The assessors are coming, the assessors are coming!” shouts the chair from the comparative safety of the rostrum at the semester’s first departmental faculty meeting while everyone else dives for cover. After this warning shot comes the collective indignation of the faculty -- How dare they judge us? We’re in the humanities! -- as the professors go through the Kübler-Ross stages of denial, anger, bargaining, depression, and acceptance.
When everyone has settled down (except for Professor Winkle, who’s settled in for a nap), the chair starts planning the arduous task of self-judgment. The task consists of recruiting three faculty members who blinked at the wrong time, including Professor Winkle, who opened his eyes after his nap. The disgruntled three are assigned to gauge how much the students aren’t learning from the department’s courses.
What are the standards, criteria, methods? The Renaissance contingent proposes noble goals, such as achieving wisdom and learning to appreciate a Shakespearean sonnet, but no one wants to set the bar too high, or the assessment will be that this department needs to pull up its socks.
The faculty debate setting the bar absurdly low: for instance, that students should learn to read, but there’s no guarantee of students passing that bar, either. After several more meetings and the formation of a committee to oversee the assessment committee, the proposal is that each student should be familiar with the terms literature and irony; must know how to put together an argumentative essay proving that Shakespeare was a great writer; and should have enough literary history to realize that 1800 came after 1564, and that both are before 1922. These arbitrary criteria, once insisted upon, achieve a solidity as satisfying as trompe l’oeil papier-mâché walls.
The methods for data collection are decided by the assessment committee, eager to pass on responsibility to other, unwilling faculty. The methods involve snatching away student essays for disappointed analysis: counting how many times the words in my personal opinion and irregardless appear in the essays, seeing whether the arguments hold water (Professor Winkle performs that job over the sink in the fourth floor men’s restroom), and checking for spelling and grammar, assuming that the faculty are up to it.
As an extra concession, the department tracks alumni/ae to see whether anyone actually used the English major to wangle a job; and contemplates giving an exit exam to department seniors, though the offer of free pizza to anyone who’ll sit for the exam gets only three takers. The sample questions include references to periods, movements, literary terms, authors and works, and seven questions on Dryden. The sample size of all the data varies from a dozen to one faked reply by Professor Winkle.
Other creative assessment methods involve tossing the student essays downstairs to see which go farthest, and throwing the I Ching. To tabulate the results: charts with percentages look good, as do bulleted lists, though the superimposition of one over the other is probably (too late) a poor decision.
Tension mounts till the assessors arrive, at least one in a rumpled brown business suit, all looking as if they haven’t slept since the start of the fall semester. The assessors ask a lot of questions, visit classes, and interview people whom no one ever thought to talk to previously, including Clarice, the custodial supervisor for the liberal arts building. Eventually, they write up a report that recommends a 15 percent reduction in adjunct labor, greater funding for core courses, less departmental internecine warfare, and more attention paid to Dryden.
The report is circulated down the ranks until, months later, it reaches the English department faculty. Since the administration has ignored the implications of the report, the department restricts discussion to only 17 hours, spread out among four faculty meetings.
What rides on all this? Not much till next decade’s visit, when the department scrambles to recall what it did the last time.
David Galef directs the creative writing program at Montclair State University. His latest book is the short story collection My Date With Neanderthal Woman (Dzanc Books).
A central tenet of the student learning outcomes "movement" is that higher education institutions must articulate a specific set of skills, traits and/or dispositions that all of their students will learn before graduation. Then, through legitimate means of measurement, institutions must assess and publicize the degree to which their students make gains on each of these outcomes.
Although many institutions have yet to implement this concept fully (especially regarding the thorough assessment of institutional outcomes), this idea is more than just a suggestion. Each of the regional accrediting bodies now requires institutions to identify specific learning outcomes and demonstrate evidence of outcomes assessment as a standard of practice.
This approach to educational design seems at the very least reasonable. All students, regardless of major, need a certain set of skills and aptitudes (things like critical thinking, collaborative leadership, intercultural competence) to succeed in life as they take on additional professional responsibilities, embark (by choice or by circumstance) on a new career, or address a daunting civic or personal challenge. In light of the educational mission our institutions espouse, committing ourselves to a set of learning outcomes for all students seems like what we should have been doing all along.
Yet too often the outcomes that institutions select to represent the full scope of their educational mission, and the way that those institutions choose to assess gains on those outcomes, unwittingly limit their ability to fulfill the mission they espouse. For when institutions narrow their educational vision to a discrete set of skills and dispositions that can be presented, performed or produced at the end of an undergraduate assembly line, they often do so at the expense of their own broader vision that would cultivate in students a self-sustaining approach to learning. What we measure dictates the focus of our efforts to improve.
As such, it’s easy to imagine a scenario in which the educational structure that currently produces majors and minors in content areas is simply replaced by one that produces majors and minors in some newly chosen learning outcomes. Instead of redesigning the college learning experience to alter the lifetime trajectory of an individual, we allow the whole to be nothing more than the sum of the parts -- because all we have done is swap one collection of parts for another. Although there may be value in establishing and implementing a threshold of competence for a bachelor’s degree (for which a major serves a legitimate purpose), limiting ourselves to this framework fails to account for the deeply held belief that a college experience should approach learning as a process -- one that is cumulative, iterative, multidimensional and, most importantly, self-sustaining long beyond graduation.
The disconnect between our conception of a college education as a process and our tendency to track learning as a finite set of productions (outcomes) is particularly apparent in the way that we assess our students’ development as lifelong learners. Typically, we measure this construct with a pre-test and a post-test that tracks learning gains between the ages of 18 and 22 -- hardly a lifetime (the fact that a few institutions gather data from alumni 5 and 10 years after graduation doesn’t invalidate the larger point).
Under these conditions, trying to claim empirically that (1) an individual has developed and maintained a perpetual interest in learning throughout their life, and that (2) this lifelong approach is directly attributable to one’s undergraduate education probably borders on the delusional. The complexity of life even under the most mundane of circumstances makes such a hypothesis deeply suspect. Yet we all know of students who experienced college as a process through which they found a direction that excited them and a momentum that carried them down a purposeful path that extended far beyond commencement.
I am by no means suggesting that institutions should abandon assessing learning gains on a given set of outcomes. On the contrary, we should expect no less of ourselves than substantial growth in all of our students as a result of our efforts. Designed appropriately, a well-organized sequence of outcomes assessment snapshots can provide information vital to tracking student learning over time and potentially increasing institutional effectiveness. However, because the very act of learning occurs (as the seminal developmental psychologist Lev Vygotsky would describe it) in a state of perpetual social interaction, taking stock of the degree to which we foster a robust learning process is at least as important as taking snapshots of learning outcomes if we hope to gather information that helps us improve.
If you think that assessing learning outcomes effectively is difficult, then assessing the quality of the learning process ought to send chills down even the most skilled assessment coordinator’s spine. Defining and measuring the nature of process requires a very different conception of assessment -- and for that matter a substantially more complex understanding of learning outcomes.
Instead of merely measuring what is already in the rearview mirror (i.e., whatever has already been acquired), assessing the college experience as a process requires a look at the road ahead, emphasizing the connection between what has already occurred and what is yet to come. In other words, assessment of the learning that results from a given experience would include the degree to which a student is prepared or “primed” to make the most of a future learning experience (either one that is intentionally designed to follow immediately, or one that is likely to occur somewhere down the road). Ultimately, this approach would substantially improve our ability to determine the degree to which we are preparing students to approach life in a way that is thoughtful, proactively adaptable, and even nimble in the face of both unforeseen opportunity and sudden disappointment.
Of course, this idea runs counter to the way that we typically organize our students’ postsecondary educational experience. For if we are going to track the degree to which a given experience “primes” students for subsequent experiences -- especially subsequent experiences that occur during college -- then the educational experience can’t be so loosely constructed that the number of potential variations in the order of a student’s experiences virtually equals the number of students enrolled at our institution.
This doesn’t mean that we return to the days in which every student took the same courses at the same time in the same order, but it does require an increased level of collective commitment to the intentional design of the student experience, a commitment to student-centered learning that will likely come at the expense of an individual instructor’s or administrator’s preference for which courses they teach or programs they lead and when they might be offered.
The other serious challenge is operationalizing a concept of assessment that attempts to directly measure an individual’s preparation to make the most of a subsequent educational experience. But if we want to demonstrate the degree to which a college experience is more than just a collection of gains on disparate outcomes -- whether these outcomes are somehow connected or entirely independent of each other -- then we have to expand our approach to include process as well as product.
Only then can we actually demonstrate that the whole is greater than the sum of the parts, that in fact the educational process is the glue that fuses those disparate parts into a greater -- and qualitatively distinct -- whole.
Mark Salisbury is director of institutional research and assessment at Augustana College, in Illinois. He blogs at Delicious Ambiguity.
A recent research paper published by the Wisconsin Center for the Advancement of Postsecondary Education and reported on by Inside Higher Ed criticized states' efforts to fund higher education based in part on outcomes, in addition to enrollment. The authors, David Tandberg and Nicholas Hillman, hoped to provide a "cautionary tale" for those looking to performance funding as a "quick fix."
While we agree that performance-based funding is not the only mechanism for driving change, what we certainly do not need are impulsive conclusions that ignore positive results and financial context. With serious problems plaguing American higher education, accompanied by equally serious efforts across the country to address them, it is disheartening to see a flawed piece of research mischaracterize the work on finance reform and potentially set back one important effort, among many, to improve student success in postsecondary education.
As two individuals who have studied performance funding in depth, we know that performance funding is a piece of the puzzle that can provide an intuitive, effective incentive for adopting best practices for student success and encourage others to do so. Our perspective is based on the logical belief that tying some funding dollars to results will provide an incentive to pursue those results. This approach should not be dismissed in one fell swoop.
We are dismayed that the authors were willing to assert an authoritative conclusion from such simplistic research. The study compares outcomes of states "where the policy was in force" to those where it was not -- as if "performance funding" is a monolithic policy everywhere it has been adopted.
The authors failed to differentiate among states in terms of when performance funding was implemented, how much money is at stake, whether performance funds are "add ins" or part of base funding formulas, the metrics used to define and measure "performance," and the extent to which "stop loss" provisions have limited actual change in allocations. These are critical design issues that vary widely and that have evolved dramatically over the 20-year period the authors used to decide if "the policy was in force" or not.
Treating this diverse array of unique approaches as one policy ignores the thoughtful work that educators and policy makers are currently engaged in to learn from past mistakes and to improve the design of performance funding systems. Even a well-designed study would probably fail to reveal positive impacts yet, as states are only now trying out new and better approaches -- certainly not the "rush" to adopting a "quick fix" that the authors assert. It could just as easily be argued that more traditional funding models actually harm institutions trying to make difficult and necessary changes in the best interest of students and their success (see here and here).
The simplistic approach is exacerbated by two other design problems. First, we find errors in the map indicating the status of performance funding. Texas, for example, has only recently implemented (passed in spring 2013) a performance funding model for its community colleges; it has yet to affect any budget allocations. The recommended four-year model was not passed. Washington has a small performance funding program for its two-year colleges but none for its universities. Yet the map shows both states with performance funding operational for both two-year and four-year sectors.
Second, the only outcome examined by the authors was degree completions as it "is the only measure that is common among all states currently using performance funding." While that may be convenient for running a regression analysis, it ignores current thinking about appropriate metrics that honor different institutional missions and provide useful information to drive institutional improvement. The authors make passing reference to different measures at the end of the article but made no effort to incorporate any realism or complexities into their statistical model.
On an apparent mission to discredit performance funding, the authors showed a surprising lack of curiosity about their own findings. They found eight states where performance funding had a positive, significant effect on degree production but rather than examine why that might be, they found apparent comfort in the finding that there were "far more examples" of performance funding failing the significance tests.
"While it may be worthwhile to examine the program features of those states where performance funding had a positive impact on degree completions," they write, "the overall story of our state results serves as a cautionary tale." Mission accomplished.
In their conclusion they assert that performance funding lacks "a compelling theory of action" to explain how and why it might change institutional behaviors.
We strongly disagree. The theory of action behind performance funding is simple: financial incentives shape behaviors. Anyone doubting the conceptual soundness of performance funding is, in effect, doubting that people respond to fiscal incentives. The indisputable evidence that incentives matter in higher education is the overwhelming priority and attention that postsecondary faculty and staff have placed, over the years, on increasing enrollments and meeting enrollment targets under enrollment-driven budgets.
The logic of performance funding is simply that adding incentives for specified outcomes would encourage individuals to redirect a portion of that priority and attention to achieving those outcomes. Accepting this logic is to affirm the potential of performance funding to change institutional behaviors and student outcomes. It is not to defend any and all versions of performance funding that have been implemented, many of which have been poorly done. And it is not to criticize the daily efforts of faculty and staff, who are committed to student success but cannot be faulted for doing what matters to maintain budgets.
Surely there are other means -- and more powerful means -- to achieve state and national goals of improving student success, as the authors assert. But just as surely it makes sense to align state investments with the student success outcomes that we all seek.
Nancy Shulock is executive director of the Institute for Higher Education Leadership & Policy at California State University at Sacramento, and Martha Snyder is senior associate at HCM Strategists.
In an effort to better understand differences among student subgroups, the institutional leadership requested an analysis of engagement levels among Zombie students.
Analysis of institutional data indicates that students who self-report as Zombies also report statistically significant lower levels of engagement across a wide range of important student experiences. Many of these lower levels of engagement on specific student experience items are also negative predictors of Zombie student satisfaction.
Zombie students report lower levels of participation in class discussion despite higher satisfaction with faculty feedback. Further investigation found that these students often find it difficult to raise their hands above their heads in response to the instructor’s questions.
Zombie students also report that their co-curricular experiences had less impact on their understanding of how they relate to others. Additional analysis of focus group transcripts suggests a broad lack of self-awareness.
Zombie students indicate that they have fewer serious conversations with students who differ by race, ethnicity, socioeconomic status, or social values. Instead, Zombie students seem to congregate and rarely extend themselves out of their comfort zone.
Interestingly, our first- to second-year retention rate of Zombie students is 100 percent, despite high reports of tardiness and absences. Yet our six-year graduation rate is 0 percent. While some have expressed concern over these conflicting data points, the Commencement Committee has suggested that the graduation ceremony is long enough already without having Zombie students shuffling aimlessly across the stage.
Finally, Zombie students report an increased level of one-on-one student/faculty interaction outside of class. However, we found no correlation between the substantial drop in the number of evening faculty from last year (108) to this year (52) and the number of Zombie students enrolled in night courses. Strangely, the Zombie students in these courses did indicate an unusually high level of satisfaction with the institution’s meal plan.
Mark Salisbury is director of institutional research and assessment at Augustana College, in Illinois. He blogs at Delicious Ambiguity, where a version of this essay first appeared.