Teaching and Learning

New study links student motivations for going to college to their success

New study suggests that the reasons students seek a higher education can have a big impact on their grades and likelihood of staying enrolled.

Colleges start new academic programs

Essay on how professors can deal with assessment

My first encounter with assessment came in the form of a joke. The seminary where I did my Ph.D. was preparing for a visit from the Association of Theological Schools, and the dean remarked that he was looking forward to developing ways to quantify all the students' spiritual growth. By the time I sat down for my first meeting on assessment as a full-time faculty member in the humanities at a small liberal arts college, I had stopped laughing. Even if we were not setting out to grade someone’s closeness to God on a scale from 1 to 10, the detailed list of "learning outcomes" made it seem like we were expected to do something close. Could education in the liberal arts — and particularly in the humanities — really be reduced to a series of measurable outputs?

Since that initial reaction of shock, I have come to hold a different view of assessment. I am suspicious of the broader education reform movement of which it forms a part, but at a certain point I asked myself what my response would be if I had never heard of No Child Left Behind or Arne Duncan. Would I really object if someone suggested that my institution might want to clarify its goals, gather information about how it’s doing in meeting those goals, and change its practices if they are not working? I doubt that I would: in a certain sense it’s what every institution should be doing. Doing so systematically does bear significant costs in terms of time and energy — but then so does plugging away at something that’s not working. Investing a certain number of hours up front in data collection seems like a reasonable hedge against wasting time on efforts or approaches that don’t contribute to our mission. By the same token, getting into the habit of explaining why we’re doing what we’re doing can help us avoid making decisions based on institutional inertia.

My deeper concerns come from the pressure to adopt numerical measurements. I share the skepticism of many of my colleagues that numbers can really capture what we do as educators in the humanities and at liberal arts colleges. I would note, however, that there is much less skepticism that numerical assessment can capture what our students are achieving — at least when that numerical assessment is translated into the alphabetical form of grades. In fact, some have argued that grades are already outcome assessment, rendering further measures redundant.

I believe the argument for viewing grades as a form of outcome assessment is flawed in two ways. First, I simply do not think it’s true that student grades factor significantly in professors’ self-assessment of how their courses are working. Professors who give systematically lower grades often believe that they are holding students to a higher standard, while professors who grade on a curve are simply ranking students relative to one another. Further, I imagine that no one would be comfortable with the assumption that the department that awarded the best grades was providing the best education — many of us would likely suspect just the opposite.

Second, it is widely acknowledged that faculty as a whole have wavered in their dedication to strict grading, due in large part to the increasingly disproportionate real-world consequences grades can have on their students’ lives. The "grade inflation" trend seems to have begun because professors were unwilling to condemn a student to die in Vietnam because his term paper was too short, and the financial consequences of grades in the era of ballooning student loan debt likely play a similar role today. Hence it makes sense to come up with a parallel internal system of measurement so that we can be more objective.

Another frequently raised concern about outcome assessment is that the pressure to use measures that can easily be compared across institutions could lead to homogenization. This suspicion is amplified by the fact that many (including myself) view the assessment movement as part of the broader neoliberal project of creating “markets” for public goods rather than directly providing them. A key example here is Obamacare: instead of directly providing health insurance to all citizens (as nearly all other developed nations do), the goal was to create a more competitive market in an area where market forces have not previously been effective in controlling costs.

There is much that is troubling about viewing higher education as a competitive market. I for one believe it should be regarded as a public good and funded directly by the state. The reality, however, is that higher education is already a competitive market. Even leaving aside the declining public support for state institutions, private colleges and universities have always played an important role in American higher education. Further, this competitive market is already based on a measure that can easily be compared across institutions: price.

Education is currently a perverse market where everyone is in a competition to charge more, because that is the only way to signal quality in the absence of any other reliable measure of quality. There are other, more detailed measures such as those collected by the widely derided U.S. News & World Report ranking system — but those standards have no direct connection to pedagogical effectiveness and are in any case extremely easy to game.

The attempt to create a competitive market based on pedagogical effectiveness may prove unsuccessful, but in principle, it seems preferable to the current tuition arms race. Further, while there are variations among accrediting bodies, most are encouraging their member institutions to create assessment programs that reflect their own unique goals and institutional ethos. In other words, for now the question is not whether we’re measuring up to some arbitrary standard, but whether institutions can make the case that they are delivering on what they promise.

Hence it seems possible to come up with an assessment system that would actually be helpful for figuring out how to be faithful to each school or department’s own goals. I have to admit that part of my sanguine attitude stems from the fact that Shimer’s pedagogy embodies what independent researchers have already demonstrated to be “best practices” in terms of discussion-centered, small classes — and so if we take the trouble to come up with a plausible way to measure what the program is doing for our students, I’m confident the results will be very strong. Despite that overall optimism, however, I’m also sure that some things we’re doing aren’t working as well as they could, and currently we have no real way of knowing which. We all have limited energy and time, and so anything that can help us make sure we’re devoting our energy to things that are actually beneficial seems all to the good.

Further, it seems to me that strong faculty involvement in assessment can help to protect us from the whims of administrators who, in their passion for running schools "like a business," make arbitrary decisions based on their own perception of what is most effective or useful. I have faith that the humanities programs that are normally targeted in such efforts can easily make the case for their pedagogical value, just as I am confident that small liberal arts schools like Shimer can make a persuasive argument for the value of their approach. For all our justified suspicions of the agenda behind the assessment movement, none of us in the humanities or at liberal arts colleges can afford to unilaterally disarm and insist that everyone recognize our self-evident worth. If we believe in what we’re doing, we should welcome the opportunity to present our case.

Adam Kotsko is assistant professor of humanities at Shimer College.

Essay on students who are engaged

Books abound about student disengagement. We read about students’ apathy and indifference to the world around them. Data, sadly, support these claims. Youth voting rates are low, especially when President Obama isn’t on the ballot, and while some students do take part in community activities, critics have noted that some of this engagement is the product of high schools "mandating" volunteerism as a graduation requirement.

My experiences – both as a political scientist and as a dean of the school of liberal arts at the Savannah College of Art and Design – suggest that we administrators and professors doth protest too much. Give our students a compelling text and topic, and they will engage.

I recently visited a philosophy class in which Plato’s Republic was assigned. The students were tackling Book Six, where questions spill off the pages about who should rule, and what qualities make for a viable ruler. Can a "rational" person, removed from impulses and passions, command and lead? How can, or should, one remove oneself from temptation and emotion? Can the rational and emotive be separated? Do citizens trust those who are like them? How much of leading and governing is about the rational, and how much is about appearances and images?

As the professor and I raised these questions, I noticed immediately that the students had done the reading. We administrators read about how today’s students do not read. But these students – all of whom were non-liberal arts majors – had immersed themselves in the text. They were quoting passages and displaying keen interest, both in the text itself and in the questions that were being raised. It is not surprising that Plato enlivened the classroom. But these future artists and designers recognized the power of the text. They appreciated that the words had meaning and that the questions were worth exploring.

This experience, and others like it, gave me pause. We administrators may need to tweak our conceptions of our students. Sure, Academically Adrift is an important book, and yes, the data show that reading comprehension has declined. But we should not misconstrue those data as tantamount to disengagement, nor should we assign fewer readings simply because there are data showing that many students do not complete reading assignments. This recommendation – of assigning less reading and teaching it in greater depth – was one of the suggestions made by José Antonio Bowen, author of Teaching Naked, in his dynamic and imaginative keynote address at this year’s annual meeting of the Association of American Colleges and Universities.

The point here is not to debate Bowen’s recommendation – that is for another time and place. Similarly, I am well aware that this experience in Philosophy 101 may be unique and may not generalize. (I should add that I have encountered students excited about discussing big ideas in other classrooms I have visited as well -- in photography and art history, for example.)

This enthusiasm is not a recipe for assigning Plato in every class, although that is an idea that most definitely would generate discussion. That written, I believe that we should reconsider how we administrators and educators think about student engagement. It is more than knowledge about civics and current events. It is bigger and deeper than service learning, or a passion to work in one’s community.

Provide students with a compelling text and a professor who knows how to raise thought-provoking questions, and students will ponder, debate and imagine the world in new and different ways. They will learn how to think critically and creatively. Cultivating that form of student engagement is no easy task, but it begins by exposing students to great texts and great ideas. Engagement is more than a form of political participation. It is the core of the liberal arts.

Robert M. Eisinger is dean of the School of Liberal Arts at the Savannah College of Art and Design.

Interview with co-editors of new book on future of business education

Editors of new volume of essays discuss the importance of including arts and sciences disciplines in undergraduate business programs in meaningful ways.

Measures of college efficiency too often ignore full chain of production

Economists are often criticized for treating colleges as if they were factories: using models that evaluate college efficiency in creating outputs (student completions) for a given input (cost).

In fact, in many ways a college education is like the factory production process: students start at the beginning and then, after a sequence of “inputs” in the form of courses and support services, some graduate successfully at the end.

Unfortunately, economic analyses of college efficiency typically do not look at college as a process. Economic models have traditionally tried to understand college efficiency through a simple input-per-output equation. For example, they may look at a graduation rate in 2012 and compare that to the resources available in the college in 2012.

This approach might be reasonable if college took only one year to complete. It might be reasonable if the college experience were a steady dosage, with the freshman year being the same as the sophomore year. It might be reasonable if there were as many freshmen as sophomores. Needless to say, college is not one year. First-year and second-year requirements are not the same and have different costs. And at community colleges the freshman class is typically more than twice the size of the sophomore class.
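To see the distortion concretely, consider a minimal sketch in Python. The enrollment figures, per-student spending, and graduate count below are purely hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical snapshot calculation: one year's spending divided by one
# year's graduates. All numbers are illustrative, not real college data.
freshmen, sophomores = 1000, 400   # freshman class ~2x sophomore class
cost_per_student = 6000            # dollars spent per enrolled student
graduates = 300                    # degrees awarded this year

snapshot_spending = cost_per_student * (freshmen + sophomores)
print(f"Snapshot: ${snapshot_spending / graduates:,.0f} per completion")
# => Snapshot: $28,000 per completion

# But this year's graduates entered in an earlier, larger cohort, and the
# resources they consumed were spread over several years: the snapshot
# ratio divides spending on one group by degrees earned by another.
```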

The truth is, contemporary factory managers have a much better understanding of their factory's production process than economists do of how colleges operate. Factory managers understand that it matters what happens along the entire chain of production. They know that getting more output at the front end means that the whole production chain must work better. Improvements in one area won't help if they create bottlenecks later on. They also know that efficiency does not come from sacrificing quality.

The same understanding should be applied to the college experience. Improving the quality of instruction in introductory courses won't help if students can't access high-demand majors, such as nursing. Pouring resources into one early intervention won’t help if other programs lose resources and decline in quality as a result. And increasing retention rates won't improve efficiency if it leads students to drop out in their second year instead of their first. In fact, improved retention requires more upper-level courses (which tend to cost more) and makes colleges look less efficient if graduation rates remain unchanged.

In sum, looking at snapshots is not likely to help make colleges more efficient. Instead, it would be more helpful to investigate the process of college and understand what resources are available to a cohort of students as they progress through their college years. We have begun this investigation by using detailed transcript and cost data from one college and simulating different student progression rates.
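A minimal sketch of what such a cohort-based calculation might look like appears below. The progression rates, costs, and function are illustrative assumptions for exposition, not the authors' actual model or data:

```python
# Hypothetical cohort-progression sketch: follow one entering class through
# two years and compute the resources consumed per eventual completer.
# All rates, costs, and the function itself are illustrative assumptions.

def cost_per_completion(entrants, persistence, completers,
                        cost_year1, cost_year2):
    """Total spending on one cohort divided by its eventual completers."""
    second_years = entrants * persistence          # students who return
    total_cost = entrants * cost_year1 + second_years * cost_year2
    return total_cost / completers

# Baseline: 1,000 entrants, half persist, 300 eventually complete.
baseline = cost_per_completion(1000, 0.50, 300, 5000, 7000)

# Retention rises but completions do not: the extra persisters take the
# costlier second-year courses and then drop out anyway, so measured
# cost per completion rises -- exactly the retention caveat noted above.
retained = cost_per_completion(1000, 0.65, 300, 5000, 7000)

print(f"baseline: ${baseline:,.0f} per completion")            # ~$28,333
print(f"higher retention, same completions: ${retained:,.0f}") # ~$31,833
```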

As well as providing a better understanding of what resources are needed to get a student through to completion, this model enables us to evaluate different reform strategies. We find that increasing first-year math pass rates will increase completions and make the college more efficient. But an equivalent improvement in preparing students to be college-ready has a much greater effect on efficiency.

By contrast, improving persistence rates helps improve completion rates but does not make the college that much more efficient: many students simply drop out having taken more classes. Finally, getting “lingerers” -- students who have persisted for years and accrued large numbers of credits -- to complete their awards will significantly boost efficiency, as will ensuring that more students who transfer to a four-year institution earn an associate degree before they transfer.

Much more work needs to be done in this area. But to better understand the economics of college completion we need to more accurately model the resources that are required as students progress through college.

Clive Belfield is an associate professor of economics at Queens College, City University of New York. Davis Jenkins is a senior research associate at the Community College Research Center at Columbia University's Teachers College.

Interview with the authors of new book on STEM teaching

Smart Title: 

Authors of new book discuss ways to improve teaching and learning in the STEM fields.

Controversy Over McGill Med School Reforms

Many medical faculty members at McGill University are protesting plans to shift the medical school curriculum from a research orientation to a focus on family medicine, The Montreal Gazette reported. The government of Quebec is strongly encouraging the shift, and supporters of the plan said that it will produce physicians who are needed by various communities. But professors say that McGill has traditionally played a key role in producing the physicians who also conduct high-level research, and that this mission is being gutted.

Video of instructor at USC sets off controversy, but is context missing?

Video of instructor at U. of Southern California bashing Republicans in class goes viral and is cited as evidence of liberal indoctrination. But critics don't mention that he was hired as an adjunct for a program that seeks partisans -- liberals and conservatives alike.

Call to Improve Federal STEM Education Efforts

Federal programs to promote science and technology education need better coordination and better analysis of their effectiveness, says a new report by the U.S. Government Accountability Office. There are 209 programs in all, the GAO found, and the number of programs within an individual federal agency ranges from 3 to 46.

"Agencies' limited use of performance measures and evaluations may hamper their ability to assess the effectiveness of their individual programs as well as the overall STEM education effort," the report said. "Specifically, program officials varied in their ability to provide reliable output measures--for example, the number of students, teachers, or institutions directly served by their program. Further, most agencies did not use outcomes measures in a way that is clearly reflected in their performance planning documents."
