"Effectiveness" and "efficiency" are dirty words to some people in academe, often promoted by government technocrats or by those who believe higher education can be reduced to measurable outcomes that show up on the bottom line.
Steven Brint and Charles Clotfelter don't fit into either category. But as editors of a new volume of the Russell Sage Foundation Journal of the Social Sciences entitled "Higher Education Effectiveness," the scholars, from the University of California at Riverside and Duke University, respectively, accept the idea that "effectiveness" -- defined as how well an organization is meeting a set of agreed-upon objectives -- is a perfectly reasonable thing to try to assess within higher education.
But that depends, in part, on how broadly one defines the objectives of higher education, they say.
“Most discussions of higher education effectiveness today focus on graduating as many students as possible at a low cost while trying to ensure that these students are prepared for the labor market,” Brint said via email. “Our book looks at effectiveness in terms of the historically important objectives of higher education. We think of human capital development more broadly than the term ‘prepared for the labor market’ suggests, and we are interested in the distribution of opportunities for high-quality educational experiences across racial-ethnic and socioeconomic groups. We also emphasize the quality of research produced by the faculty, a key to the continuing strength of U.S. higher education. We are sympathetic to current thinking on cost efficiency, but we look at efficiency in the context of these historically important objectives.”
The papers in the volume make clear that the editors embrace a definition of effectiveness that goes well beyond degree production or faculty productivity. Two studies examine the quality of teaching and level of student achievement in science, mathematics, engineering and technology courses. Another examines historical data to make the case that the University of Wisconsin at Madison and some other flagship universities have over several decades expanded access for undergraduate students from the top income quartile at the expense of those in the middle two quartiles. Yet another finds that "students in states with particularly large increases in public four-year tuition costs were substantially more likely to enroll in less-selective public four-year and two-year institutions in the state."
Two other studies in the volume are particularly distinctive, in part because they challenge some of the memes favored by many of the usual champions of higher ed effectiveness.
Research by three Northwestern University scholars and a professor at SUNY Downstate Medical Center takes aim at the "college for all" movement, or at least the version of it that suggests many more Americans should get bachelor's degrees.
The authors, led by James E. Rosenbaum, a professor of sociology, education and social policy at Northwestern, do not take issue with the reams of data showing that bachelor's degree graduates earn much more and have other career advantages over those with lesser credentials or only a high school diploma. "College for all" advocates, the researchers say, cite those data to "encourage students to seek bachelor's degrees" but "mostly" ignore subbaccalaureate credentials such as associate degrees or certificates.
But they analyze a federal database of 2004 high school graduates to make several key points:
- Nearly half of students who enter college with the worst academic preparation do not earn any credential at all.
- Those who get some college education but no credential have little edge in the employment market over students with just a high school degree. Students from low-income backgrounds and those with low test scores are disproportionately among those who strive for but fail to complete a bachelor's degree.
- Students who complete a postsecondary certificate have academic qualifications that are similar to (if not lesser than) those with "some college" education -- but have better job market outcomes than those students.
The researchers marshal those data not to argue that academically underprepared students from low-income backgrounds should settle for an associate degree or certificate rather than strive for a bachelor's degree. But neither, the authors suggest, should such students be discouraged -- as they often are -- from pursuing a shorter-term credential than the B.A.
"Our society gives youth a too narrow vision of college options, careers and the academic requirements for attaining them. In particular, while most students pursue B.A. degrees that may have low odds of success for the most disadvantaged among them, they often ignore valuable sub-B.A. credentials," they write. "We do youth a disservice by avoiding mention of sub-B.A.s and their desirable features. Advocates of the universal B.A. pursuits should reconsider blindly advising all students into a singular goal that prevents them from seeking sub-B.A. credentials that offer fewer academic and financial obstacles, better odds, desirable outcomes, and the potential to pursue B.A. plans later."
Another study in the volume does not question whether it's appropriate to try to measure how colleges perform, but argues that doing so in flawed ways can create serious problems. Not subtly, the researchers point their finger at approaches like the Obama administration's now-abandoned plan to rate colleges.
The analysis by Michal Kurlaender and Scott Carrell of the University of California at Davis and Jacob Jackson of the Public Policy Institute of California dissects data on student transfer, persistence and completion from California's 108 community colleges and finds, as one would expect, that there is great variation in how they fare.
And being at a better institution matters: "Going from the 10th to the 90th percentile of campus quality is associated with a [37.3 percent] increase in student transfer, a [20.8 percent] increase in the probability of persisting, a [42.2 percent] increase in the probability of transferring to a four-year college, and a [26.6 percent] increase in the probability of completion," they write.
But they further examine the data by adjusting for the characteristics of the colleges' students, and find that the rank ordering of the institutions changes enormously when the makeup of their student bodies is factored in.
"The average campus changed plus or minus 30 ranks, the largest positive change being 75 and the largest drop, negative 49," they write. "Our results suggest that policy makers wishing to rank schools based on quality should adjust rankings for differences across campuses in student-level inputs."