Will Anyone Listen?

The experience of past blue ribbon panels suggests reasons the Spellings panel may or may not have lasting impact.
September 27, 2006

On January 21, 1998, the National Commission on the Cost of Higher Education issued its final report -- calling on colleges to get more serious about minimizing tuition increases, but rejecting the idea of federal price controls. That same day, Monica Lewinsky became a household name when newspapers first reported on her relationship with President Clinton, setting off a scandal that led to his impeachment.

"I had been scheduled to be on CNN's morning show the next day, but I got bumped to a Saturday night business show," recalls William E. Troutt, president of Rhodes College, who was chair of the panel, which was created by Congress.

Troutt's experience reflects a reality that members of the Commission on the Future of Higher Education would be well advised to remember: While many factors make some commissions more influential than others, not all of them are within anyone's control. Troutt and others involved in the college cost panel are particularly proud of the way the group started a conversation about the differences between cost, price and net price -- and they believe that the distinctions between those terms are becoming better understood. But did they have a chance for attention in 1998 against an intern's dress? As Troutt puts it, "too many distractions" can hurt a report's chances of getting noticed.

Would "A Nation at Risk" -- the 1983 study widely seen as the most effective education commission report of the last 25 years -- have been less effective if a sex scandal had broken the April day it was released? No one knows, but successful reports tend to have multiple things going for them. "You had a report that was written by a very respectable group of people -- people who couldn't be dismissed as partisan. You had strong language in the report. It was quotable and was quoted by the media. Reagan held a press conference. And the story got picked up nationwide," says Kevin R. Kosar, author of Failing Grades: The Federal Politics of Education Standards. "Pull out one of those factors and 'Nation at Risk' could have come and gone with little notice."

In judging a blue ribbon commission, there is some debate about how to measure success. Plenty of commissions are created to make issues go away, or to give the appearance of doing something rather than actually to do something. Others say a panel can be judged a success only if it produces real change.

"A commission is successful if you can track things that it recommended to real changes based on the recommendations," says Clifford Adelman, a researcher at the U.S. Department of Education whose studies were used in "Nation at Risk" and numerous other reports over the years. With "Nation at Risk," he says, you very quickly had legislators in states saying that they wanted specific reforms, who cited "Nation at Risk," and who won changes.

The success or failure of the education secretary's new report, Adelman says, will depend in part on how effective the department is in "using propaganda," in the sense of persuading people in a democratic society that something matters. He asks whether there will be a sustained effort, whether people in Education Department offices around the country will try to capture attention for the report, whether the department is working to get the report on the Web sites of education groups, who will win the battle for defining what the report says. That, in turn, will depend on a variety of factors, according to Adelman and other experts.

The Bipartisan Badge. Experts on commissions (on education and other topics) say that they have a much better chance of bringing about change if they are bipartisan, with members who weren't picked because they have particular axes to grind. Chris Simmons, who was a policy analyst for the college cost commission, says that one of the things that impressed people who worked with that panel was that ideas were not set in stone. "It was very much a data-driven process, with more than 100 pages of data that we released," he says. "We had people on the commission who came in with some ideas and who as they learned more from the data we collected, they changed their minds," says Simmons, associate vice president for federal relations at Duke University. That kind of attitude gives panelists credibility, he adds.

Frederick M. Hess, a resident scholar and director of education policy studies at the American Enterprise Institute, agrees. "It's a political science truism that commissions that are bipartisan always do much better" at accomplishing their goals. However, Hess adds that what matters isn't just the breakdown of committee members, but a sense of whether everyone had a say in developing the report. "A lot depends on how organizations and people react to the report," he says. "It will be interesting to see if this report is perceived as bipartisan."

It's Having a Single Message, Stupid. Many experts on blue ribbon panels say James Carville would know how to pull one off -- with focus. "Nation at Risk" worked, they say, because its message was ultimately simple: the American education system was tanking and time was running out. The kinds of measures legislators and governors proposed after getting excited about "Nation at Risk" weren't hard to understand: Tougher graduation requirements for high schools, raising college admissions standards, recruiting better school teachers -- the approach may have varied but there were a series of relatively straightforward steps that could be taken.

At the higher education level too, reports with specific agendas tend to lead to change more quickly -- especially given how decentralized American colleges are. In the wake of "Nation at Risk," the Education Department produced a new report, "Involvement in Learning," and the National Governors Association produced a report called "Time for Results." Both contained calls for states to do a better job of assessing what students learn, and both reports led a number of states to start new assessment projects, says Peter Ewell, vice president of the National Center for Higher Education Management Systems. While the reports sought significant change, Ewell says, they each had "a crisp message" and were produced by "people who had done their homework and who weren't just looking for adversarial situations."

People who believe that a single message and clear steps to follow are essential aren't convinced that the education secretary's new panel is well positioned.

William G. Tierney, director of the Center for Higher Education Policy Analysis at the University of Southern California, says that the latest report fits a pattern seen in every major report on higher education over the last 20 years. The reports make a point of saying that American higher education is superb, attracting the best students and researchers from all over the world. "And then they say that if we don't change immediately, everything is going to be terrible." The message has come to be seen as a "Chicken Little" situation. It just doesn't make sense for panels to talk simultaneously about how wonderful everything is and how messed up everything is, Tierney says. The message is lost.

Adelman agrees that messages need to be clear, but he isn't as skeptical about the education secretary's ability to sell her commission's report -- or at least parts of it. He thinks her calls for restraint on prices and for simplified financial aid forms meet the kinds of simplicity tests that experts talk about. "These are very appealing issues to people," he says. He's less certain that there will be progress on measuring student learning -- because there is so much variation on what's going on already, and it's not a simple area to reform.

Relevance Matters. When prominent public officials give major talks, press coverage follows. But there's a difference between being fodder for some op-eds, and seeing ideas carried out on campus. Looking at past federal reports, experts say that a relevance test explains a lot -- even if reports initially captured a lot of attention.

When he ran the National Endowment for the Humanities in the Reagan administration, William J. Bennett issued a report called "To Reclaim a Legacy: A Report on the Humanities in Higher Education," based on the work of a panel of scholars he assembled. The report blasted colleges for not teaching students enough of the classics, and for what Bennett saw as a lack of rigor in humanities education. During President George H.W. Bush's administration, the NEH was led by Lynne V. Cheney and she returned to those themes in several reports, such as "50 Hours: A Core Curriculum for College Students."

All of these reports were subject to widespread debate at the time, with culture warriors praising or bashing them, but rather limited action followed. Adelman says that's because people with the power to make changes are more likely to address an issue that affects a lot of people -- and that education reports that focus on too narrow a subset don't go forward. "These reports did nothing," says Adelman. "They had nothing to do with what was going on in higher education. Lynne Cheney thinks that people were going to college to study philosophy. They're not. They are studying nursing and accounting. If I'm a dean, I've got to deal with enrollment management and retention and state finances and students who want degrees in accounting."

To the extent that reports may succeed if they reflect campus realities, some think that there is such a drive already on campuses about assessment that themes of the new report will get real attention. Patrick J. McGuinn, author of No Child Left Behind and the Transformation of Federal Education Policy, 1965-2005, is an assistant professor of political science at Drew University. He says that he's struck that everyone at his university (even people who don't study education policy) talks about assessment.

"The assessment mantra has pervaded higher education," he says, and people try to make "data-driven decisions" on whether to change a program in one way or another. McGuinn says that this is evidence that policy discussions can have an impact even before (or without) legislation. He notes that in 1989, the first President Bush held an "education summit" with the nation's governors. The meeting itself didn't result in immediate legislation or lasting change, but it set an agenda (one embraced in part by the then governor of Arkansas when he became president).

When educational institutions sense change is coming, they may act quickly, because a relevant question is always: Who decides how to make reforms? "I think there's a huge difference between whether you do it yourself or have it imposed on you," he says.

Money. Ewell of NCHEMS says that there is another clear pattern with regard to whether commissions are effective or not. "Real money" tends to be involved, he says. The states that started assessment reforms after "Nation at Risk" -- some of them states looked on favorably by those pushing for more assessment today -- did so at a time that they were making healthy increases in spending on higher ed as a whole. "Putting funding behind this does make a difference," he says.

Ewell and others agree, however, that it can be very hard to tell whether a commission will succeed or fail at bringing about long-term change.

Hess, of the American Enterprise Institute, says that when "Nation at Risk" was produced, "it was very much a toss-up whether it would have a real impact or be one more in the file cabinet."

Simmons, the Duke official who worked on the college cost commission, says that even though the study debuted alongside Lewinskygate, he has been pleased that the panel's ideas are still being cited. Not a month goes by, he says, without someone calling or writing him to request a copy of the report.
