Asking for Input -- Deliberately
Among the 50 or so participants, some clearly came into Carnegie Mellon University's deliberative poll on same-sex marriage Wednesday with a “gut feeling -- a perspective that they weren’t able to articulate a clear rationale for,” says Michael Bridges, associate director of educational support at the Eberly Center for Teaching Excellence and an adjunct professor of psychology at Carnegie Mellon.
Ostensibly, at least, they were there to find reasons to support that feeling -- or to challenge it.
“This is one of the first models where you have to know what you’re talking about,” says Ashley Birt, a 2007 alumnus and intern who was involved with preparing for Wednesday’s event. “You have to listen to what’s being said; you can’t just go off on a tangent and rant. You can’t just stick your head in the sand.”
The “Campus Conversation” on same-sex marriage was the fifth “deliberative poll” conducted at Carnegie Mellon to gauge the (informed) sentiments of those connected to the college on a host of specific issues -- including faculty course evaluations, file sharing, public art policy, and a student bill of rights. Based on a protocol developed by Stanford University’s James S. Fishkin that Robert Cavalier, a teaching professor of philosophy at Carnegie Mellon, calls “both very elegant in its design but challenging in its implementation,” the deliberative polling method is premised on random sampling of a population (in this case, students, faculty, staff and alumni) and the development of balanced background materials offering various perspectives on an issue in a condensed, but still comprehensive, format.
Although the polls are difficult to pull off, Cavalier and others at Carnegie Mellon describe campus-level deliberative polling as a rich opportunity to solicit deeper, more meaningful and representative input on a variety of topics pertaining to university policy -- and, when transferred to the community level (as Carnegie Mellon is hoping to do), on matters of politics too.
“If you just accept the premise that it’s worth listening to what the public would think if they were just more informed and more engaged, then deliberative polling provides a method,” Fishkin says.
A time-intensive one, though. In addition to Cavalier’s efforts leading the initiative, Carnegie Mellon’s "Campus Conversations" program is made possible by a part-time project manager who helps coordinate the polls (which happen once a semester), the efforts of a psychology professor (Bridges) who does a lot of the social science groundwork in terms of random sampling and survey analyses, a set of interns, and the willingness of professors to share their time and expertise when it’s relevant to particular polls. “It’s our experience that it takes about two months to develop background material, to do it in such a way that we’ve done justice to the various positions and created material in a language that the public can understand,” says Cavalier, who is affiliated with Carnegie Mellon’s Center for the Advancement of Applied Ethics and Political Philosophy.
But the effort, he says, is paying off. “We’re beginning to get a sense that this kind of process does actually have intrinsic value for the campus as well as instrumental value for particular policy decisions.”
Reforming the Faculty Evaluation: Deliberative Polling in Action
To understand the process, consider a deliberative poll held one year ago at Carnegie Mellon on a subject dear to many a college professor: the reform of faculty course evaluations. In preparing for the event, its planners developed a 17-page background document that outlined the history of faculty course evaluations (FCEs) at Carnegie Mellon and various perspectives on the evaluations. A sample subhead from the document, for instance, reads, “The FCEs are a useful/problematic instrument when used to evaluate instructors and courses because..." The subsequent section includes both the argument that students use “flawed criteria” in completing evaluations and the finding that about half the faculty respondents to a university survey believe the evaluations provide valuable feedback -- and that 80 percent believe they should be administered.
In addition to the background document, the event planners developed a list of library resources participants could also consult (including a 2005 Slate article about RateMyProfessors.com, “The Hottest Professors on Campus”). Students and faculty were randomly selected and invited to participate, and a “convenience sample” -- kept separate from the random sample during the actual deliberation -- was also recruited through posters and advertisements. Students completed a demographic survey upon registration, and were also asked to complete a pre-survey regarding their positions on the topic. They were able to access the background document two weeks before the deliberative poll.
On the day of the deliberation, Cavalier explains, the crowd was divided into discussion groups of six to eight participants, each led by a trained moderator. After a structured discussion, the groups reconvened as a whole to ask questions of an expert panel. Following that and another half hour of group discussion, participants filled out a post-survey. The results were made available online.
“As a result, what you get at the end of the day is what, in our case, the faculty and students, members of the Carnegie Mellon community, think about the issue, once they’ve gotten a chance to become informed about it and discussed it amongst themselves and with experts,” Cavalier says. “That’s the protocol that’s simple but challenging.”
In the case of the poll on faculty evaluations, only 43 of the students and faculty randomly invited showed up (a 3.6 percent response rate), raising questions about whether their views were truly representative of the campus -- a representative sample being the obvious goal of the random sampling technique. Tom Sullivan, an associate teaching professor of electrical and computer engineering and former chair of the Faculty Senate Ad Hoc Committee on Faculty Course Evaluations, says that the committee’s recommendation on reforming the evaluations -- accepted by the administration and implemented this fall -- did not end up being based on the results of the deliberative poll. But Sullivan, who fielded questions as an expert panelist during the poll, says the discontent with the old system voiced during the deliberations backed up some of the committee’s ultimate recommendations. (The recommendations primarily involved shortening the evaluations to two specific questions and leaving ample space for comments.)
Challenges and Strengths -- and Expansion
The deliberative model has its challenges, chief among them the development of "balanced" background materials -- demanding not only in terms of time but also in terms of what "balanced" means. Bradley A. Porter, a senior at Carnegie Mellon and an intern involved with the deliberative polling process, conducted research for the background document on same-sex marriage used Wednesday. He describes the struggle of presenting various perspectives in a way that still serves the truth.
"There's the Fox News idea of 'fair and balanced' where you just invite the craziest Democrat and craziest Republican and give them equal time," Porter says. But while that method may sometimes be effective when it comes to presenting pure opinions, it often doesn't work when it comes to presenting research findings and facts, he says -- especially when they aren't necessarily comparable.
"The arguments in favor of allowing same-sex marriage and the arguments against it tend to be like two ships passing in the night...You can find ample material on both sides but they weren't always speaking to each other; they were speaking to their various constituencies and audiences," Porter says.
Cavalier acknowledges some complaints that the panelists at Wednesday's event leaned too far toward the "gay rights" side of the spectrum, even though he says they were successful in answering questions as scholars rather than pundits -- one of their challenges. "Deliberations are always going to be challenged by the nature of the background material, the people that do show up and the survey materials," he says. "We're trying to do the best we can under real-world circumstances."
And there is growing interest in the model. Representatives from three other Pennsylvania colleges came to Carnegie Mellon Wednesday to prepare for deliberative polls on same-sex marriage they will hold, based on Carnegie Mellon materials, at their campuses in the spring. The ultimate goal is that, funding permitting, they'll then host citizen polls too, the participants drawn randomly from voter rolls in the counties surrounding the colleges. Carnegie Mellon has also created a handbook for other colleges interested in adopting the deliberative poll model.
Other institutions have used the model on a community level -- including eight that hosted deliberative polls that will be featured on a PBS special in January -- though Carnegie Mellon was the first to use it on a campus-specific basis, says Fishkin, director of Stanford's Center for Deliberative Democracy. As part of the American Association of State Colleges and Universities’ American Democracy Project initiative, representatives from 17 colleges participated in training on the model at Stanford in September, Fishkin says.
“There’s an ever-present need to consult the public, and then you have to ask, 'What public and under what conditions?'” Fishkin says. "This is a practical way to get representative and informed public input on some issue."
"Universities have issues, communities have issues, states have issues, nations have issues -- this combines academic expertise that is fairly widespread about public policy, about public opinion and polling, and random sampling, and it satisfies a need. So why shouldn’t it spread to campuses?”