LAS VEGAS -- Like many scholarly meetings, the annual conference of the Association for the Study of Higher Education is a mish-mash of styles and subject matter -- symposiums framed around theories ("Latent Class Analysis in Higher Education"), policy-oriented sessions designed to produce practical solutions ("Studying Study Abroad: Policy Relevant Research for Improved Participation, Programs and Outcomes"), and many discussions focused on subgroups of students ("Access, Experience, and Persistence of Military Veterans at Four-Year Institutions") or other groups ("The Experiences and Challenges of Black Female Faculty at Predominantly White Institutions").

Evident amid the panoply of presentations at this year's meeting here -- some highlights of which are described below -- was a rising tide of questioning about the quality and utility of much of the work being shared at the conference.

Younger scholars new to the association and some longtime ASHE members expressed the view (mostly privately but in a few cases in direct exhortations to their peers to be more relevant and rigorous) that higher education researchers are stuck in old ways and on old issues (“How many more papers do we need on retention?” one asked), and are years behind peers in other disciplines in adopting and applying certain kinds of statistical analyses (more on that below).

Despite this year’s record attendance, more than one ASHE member questioned how many more years scholars will flock to meetings like this one to hear presentations of greatly varying quality on a diffuse set of topics. While these issues are far from unique for the higher education scholars’ group, they may be more acute because of ASHE’s history (which many applaud) of strongly encouraging graduate students to attend and present at its conference.

Below are synopses and snippets of some of the more compelling sessions this reporter attended.

Ripped From the Headlines

If journalism is the first draft of history, and traditional historical scholarship undertaken years after the fact is, well, history, you'd need another way to describe the analyses that some of ASHE’s more thoughtful and provocative voices engaged in here Thursday on the opening day of the conference.

One by one, the scholars rose, with varying degrees of seriousness and playfulness, to offer their scholarly and cultural takes on some of the more disturbing higher education news of the last 18 months. Robert Birnbaum of the University of Maryland got things off to a rousing (but slightly unnerving) start with a discussion of campus shootings and other violence that featured him using a toy gun to “shoot” members of the audience (later on he threw plastic grenades into the crowd).

Brian Pusser of the University of Virginia analyzed – you guessed it – his institution’s governance crisis last summer, which he said offered significant lessons to scholars and policy makers concerned about the future of public higher education. Pusser characterized the governing board’s attempt to oust U.Va. President Teresa Sullivan as an example of the ascendance of “winner-take-all governance” in higher education, in which board members – many of them business leaders who believe that “if you’re not No. 1 you’re in peril” – viewed Sullivan as not moving aggressively or quickly enough to keep pace with other elite universities on technology and other issues.

“The board’s view appeared to be that the days of incremental decision making in higher education are over, or should be,” and instead favored a rush to a “high-risk strategy,” Pusser said. “I disagree. This seems to me to be a really important moment to be patient, and to think very carefully about risk and reward.”

Pusser said scholars of higher education should be working to help institutional leaders redefine, in an era of declining state support, what it means for public universities to be state institutions pursuing the public good, with “new prestige hierarchies” that differentiate their missions and goals from those of the private research universities that they increasingly seem to be mimicking.

William G. Tierney of the University of Southern California (and formerly of Pennsylvania State University) described the disaster at Penn State as a failure of personal responsibility on the part of many individuals, and of the shared responsibility that everyone at a college or university has for the institution’s well-being.

“Collective responsibility means we need to speak out and speak up,” Tierney, director of the Pullias Center for Higher Education and University Professor and Wilbur-Kieffer Professor of Higher Education, told the graduate students and professors and student affairs professionals in the audience. “It can’t be, wait until you’re an assistant professor and get tenure, or until you’re an associate or full professor, or until you get an endowed chair, or until hell freezes over.

“Collective responsibility begins with me,” Tierney said. “It’s thinking about … what I will do to make my institution a stronger place, a better place.”

Sara Goldrick-Rab, an associate professor of educational policy studies and sociology at the University of Wisconsin at Madison, discussed the role that she and other scholars who study federal financial aid must play in analyzing what’s working (and, importantly, admitting what’s not) in a higher ed financing system that has student debt soaring.

In response to a question about whether these and other stories are creating a “metanarrative of failing institutions” that can be used to undermine public support for higher education, Goldrick-Rab said they perhaps were – but that that did not mean that ASHE’s members should shy away from pointing out problems when they see them.

“This is not the higher ed lobby,” she said. “We know there are critiques to be levied – just ask our students who are accumulating debt and struggling in other ways. It’s very hard to look around and believe that higher education is doing really well right now. Does that mean it should be destroyed,” as some critics suggest? she asked. “No. But we do ourselves a disservice if we shy away from critiques.”

Let's Get Random

Goldrick-Rab was hardly alone in encouraging ASHE members to engage in more rigorous analysis of whether programs and policies designed to help students actually do so. In a session on Saturday, James T. Minor of the Southern Education Foundation noted the pressure on institutions to prove their efficacy in educating low-income students (with the prospect of federal funds being tied to that success), and the resulting need for better understanding of what works and what doesn't.

The best technique for assessing how effective programs are, most researchers agree, is the randomized controlled trial, in which one group is assigned to receive a set of treatments (medicine or therapy, for instance, in the case of physical or mental health) and one is not.

Many educators have rebuffed the prospect of using such an approach in their world, recoiling at the idea of withholding support (financial aid, say, or some form of academic help) from some students and giving it to others, said Heather D. Wathington, an assistant professor of education at the University of Virginia. But "I don't think we have a more powerful approach" for deciding which programs should ultimately be offered than randomization, and it can be done in ways that don't "reduce services or take away something that somebody has," she said. (Wathington said she had taken to describing those in the control group as engaging in "business as usual," with those in the treatment group getting something extra.)
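
As a rough illustration of what such a design boils down to -- and emphatically not a reproduction of Wathington's studies -- the Python sketch below randomly splits a hypothetical cohort, leaves the control group with "business as usual," gives the treatment group something extra, and compares average outcomes. Every name and number in it is invented.

```python
# A hedged, hypothetical sketch of a randomized design: the control group
# continues with "business as usual," the treatment group gets something
# extra, and the two are compared. All names and numbers are invented.
import random
import statistics

random.seed(42)

students = [f"student_{i}" for i in range(200)]
random.shuffle(students)
treatment = set(students[:100])   # offered the extra support (e.g., a bridge program)
control = set(students[100:])     # business as usual

def credits_earned(student):
    """Hypothetical outcome: credits earned the following year."""
    base = random.gauss(24, 6)                         # assumed baseline
    return base + (3 if student in treatment else 0)   # assumed treatment effect

outcomes = {s: credits_earned(s) for s in students}
treated_mean = statistics.mean(outcomes[s] for s in treatment)
control_mean = statistics.mean(outcomes[s] for s in control)
print(f"Estimated effect: {treated_mean - control_mean:.1f} credits")
```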

Through her work with the National Center for Postsecondary Research, Wathington has studied the effect of learning communities and summer bridge programs on low-income students and those in developmental education.

Ethical considerations aren't the only barrier to the use of randomized trials in education; such trials are also very expensive. But Stephen R. Porter, a professor of higher education at North Carolina State, argued that the benefits of random assignment can be largely achieved by looking at the relative performance of research subjects just above and below a cutoff score for participation in a certain program.

Consider a researcher trying to assess the impact of a remedial education program into which students are placed based on their score on a placement test. Students who are otherwise very similar might rise or fall just enough in their performance on such a test based on small things, such as sleeping through an alarm and missing the beginning of the test, skipping breakfast or having a headache. So comparing the educational outcomes of students who scored just low enough to be required to participate in the program with those who just narrowly averted going into it is likely to come much closer to mirroring the results of a random assignment than is any standard regression analysis, Porter said.
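
To make that logic concrete, here is a minimal Python sketch of the kind of cutoff comparison Porter describes; the placement scores, the cutoff, the bandwidth and the later-GPA outcomes are all simulated for illustration and come from none of the studies presented at the meeting.

```python
# A hedged sketch of the cutoff comparison Porter describes: students just
# below a placement-test cutoff are assigned to remediation, and their later
# outcomes are compared with those of students just above it. The scores,
# cutoff, bandwidth and GPA model are all simulated for illustration.
import random
import statistics

random.seed(0)
CUTOFF = 60      # score below which remediation is required (hypothetical)
BANDWIDTH = 5    # compare only students within 5 points of the cutoff

students = []
for _ in range(2000):
    score = random.uniform(30, 90)
    remediated = score < CUTOFF
    # Simulated later GPA: depends weakly on the test score, plus an assumed
    # 0.15-point boost from remediation, plus noise.
    gpa = 1.5 + 0.02 * score + (0.15 if remediated else 0.0) + random.gauss(0, 0.3)
    students.append((score, gpa))

just_below = [g for s, g in students if CUTOFF - BANDWIDTH <= s < CUTOFF]
just_above = [g for s, g in students if CUTOFF <= s < CUTOFF + BANDWIDTH]

# Near the cutoff, which side a student lands on is close to random, so the
# gap in average outcomes approximates the program's effect.
effect = statistics.mean(just_below) - statistics.mean(just_above)
print(f"Estimated remediation effect near the cutoff: {effect:.2f} GPA points")
```

A fuller analysis would also fit regression lines on either side of the cutoff to adjust for the underlying relationship between test score and outcome; the raw gap above is simply the plainest version of the comparison Porter describes.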

Porter and a North Carolina State colleague, Paul D. Umbach, presented a paper at the ASHE meeting (using this style of research, known as regression discontinuity) that studied the impact of loan indebtedness on students' academic performance, by examining those who just qualified (and those who just missed qualifying) for a loan-forgiveness program based on family income.

"With there being so many programs and policies, and our needing to know whether they work, I think we're going to see a lot more field experiments using these experimental and quasi-experimental designs," Porter said.
