Research

HHS nominee Tom Price opposes embryonic stem cell research

Views held by Tom Price, Trump’s pick for health secretary, would put him at odds with many scientists, but academic groups say he appreciates medical education and the value of research.

How assessment falls significantly short of valid research (essay)

In a rare moment of inattention a couple of years ago, I let myself get talked into becoming the chair of my campus’s Institutional Review Board. Being IRB chair may not be the best way to endear oneself to one’s colleagues, but it does offer an interesting window into how different disciplines conceive of research and the many different ways that scholarly work can be used to produce useful knowledge.

It has also brought home to me how utterly different research and assessment are. I have come to question why anyone with any knowledge of research methods would place any value on the results of typical learning outcomes assessment.

IRB approval is required for any work that involves both research and human subjects. If both conditions are met, the IRB must review it; if only one is present, the IRB can claim no authority. In general, it’s pretty easy to tell when a project involves human subjects, but distinguishing nonresearch from research, as it is defined by the U.S. Department of Health and Human Services, is more complicated. It depends in large part on whether the project will result in generalizable knowledge.

Determining what is research and what is not is interesting from an IRB perspective, but it has also forced me to think more about the differences between research and assessment. Learning outcomes assessment looks superficially like human subjects research, but there are some critical differences. Among other things, assessors routinely ignore practices that are considered essential safeguards for research subjects as well as standard research design principles.

A basic tenet of ethical human subjects research is that subjects should consent to participate. That is why obtaining informed consent is a routine part of human subjects research. In contrast, students whose courses are being assessed are typically not asked whether they are willing to participate in those assessments. They are simply told that they will be participating. Often there is what an IRB would see as coercion. Whether it’s 20 points of extra credit for doing the posttest or embedding an essay that will be used for assessment in the final exam, assessors go out of their way to compel participation in the study.

Given that assessment involves little physical or psychological risk, the coercion of assessment subjects is not that big of a deal. What is more interesting to me is how assessment plans ignore most of the standard practices of good research. In a typical assessment effort, the assessor first decides what the desired outcomes in his course or program are. Sometimes the next step is to determine what level of knowledge or skill students bring with them when they start the course or program, although that is not always done. The final step is to have some sort of posttest or “artifact” -- assessmentspeak for a student-produced product like a paper rather than, say, a potsherd -- which can be examined (invariably with a rubric) to determine if the course or program outcomes have been met.

On some levels, this looks like research. The pretest gives you a baseline measurement, and then, if students do X percent better on the posttest, you appear to have evidence that they made progress. Even if you don’t establish a baseline, you might still be able to look at a capstone project and say that your students met the declared program-level outcome of being able to write a cogent research paper or design and execute a psychology experiment.

From an IRB perspective, however, this is not research. It does not produce generalizable knowledge, in that the success or, more rarely, failure to meet a particular course or program outcome does not allow us to make inferences about other courses or programs. So what appears to have worked for my students, in my World History course, at my institution, may not provide any guidance about what will work at your institution, with your students, with your approach to teaching.

If assessment does not offer generalizable knowledge, does assessment produce meaningful knowledge about particular courses or programs? I would argue that it does not. Leaving aside arguments about whether the blunt instrument of learning outcomes can capture the complexity of student learning or whether the purpose of an entire degree program can be easily summed up in ways that lend themselves to documentation and measurement, it is hard to see how assessment is giving us meaningful information, even concerning specific courses or programs.

First, the people who devise and administer the assessment have a stake in the outcome. When I assess my own course or program, I have an interest in the outcome of that assessment. If I create the assessment instrument, administer it and assess it, my conscious or even unconscious belief in the awesomeness of my own course or program is certain to influence the results. After all, if my approach did not already seem to be the best possible way of doing things, as a conscientious instructor, I would have changed it long ago.

Even if I were the rare human who is entirely without bias, my assessment results would still be meaningless, because I have no way of knowing what caused any of the changes I have observed. I have never seen a control group used in an assessment plan. We give all the students in the class or program the same course or courses. Then we look at what they can or cannot do at the end and assume that the course work is the cause of any change we have observed. Now, maybe this is a valid assumption in a few instances, but if my history students are better writers at the end of the semester than they were at the beginning of the semester, how do I know that my course caused the change?

It could be that they were all in a good composition class at the same time as they took my class, or it could even be the case, especially in a program-level assessment, that they are just older and their brains have matured over the last four years. Without some group that has not been subjected to my course or program to compare them to, there is no compelling reason to assume it’s my course or program that’s causing the changes that are being observed.

If I developed a drug and then tested it myself without a control group, you might be a bit suspicious about my claims that everyone who took it recovered from his head cold after two weeks and thus that my drug is a success. But these are precisely the sorts of claims that we find in assessment.
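
To make the inference problem concrete, here is a minimal simulation sketch in Python. Everything in it is hypothetical -- the scores, the five-point maturation effect and the group sizes are all made up for illustration -- but it shows how a healthy pretest/posttest gain can appear even when the course itself contributes nothing, and how only a comparison group exposes that fact.

import random

random.seed(0)

def simulate_scores(n, course_effect, maturation=5.0):
    """Return (pre, post) score lists; everyone 'matures', only course_effect differs."""
    pre = [random.gauss(70, 8) for _ in range(n)]
    post = [p + maturation + course_effect + random.gauss(0, 3) for p in pre]
    return pre, post

# Students who took the course -- which, in this toy example, adds nothing.
pre_course, post_course = simulate_scores(100, course_effect=0.0)
# A control group that never took the course shows a similar gain anyway.
pre_control, post_control = simulate_scores(100, course_effect=0.0)

def mean(xs):
    return sum(xs) / len(xs)

gain_course = mean(post_course) - mean(pre_course)
gain_control = mean(post_control) - mean(pre_control)

print(f"gain with the course:    {gain_course:.1f} points")
print(f"gain without the course: {gain_control:.1f} points")
print(f"difference attributable to the course: {gain_course - gain_control:.1f} points")

Run it and both groups improve by roughly the same five points; the difference between the two gains, the only number that speaks to the course’s effect, hovers near zero.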

I suspect that most academics are consciously or at least unconsciously aware of these shortcomings and thus uneasy about the way assessment is done. That no one says anything reflects the sort of empty ritual that assessment is. Faculty members just want to keep the assessment office off their backs, the assessment office wants to keep the accreditors at bay and the accreditors need to appease lawmakers, who in turn want to be able to claim that they are holding higher education accountable.

IRBs are not supposed to critique research design unless it affects the safety of human subjects. However, they are supposed to weigh the balance between the risks posed by the study and the benefits of the research. Above all, you should not waste the time or risk the health of human subjects with research that is so poorly designed that it cannot produce meaningful results.

So, granting that assessment is not research and is not governed by IRB rules, something silly and wasteful still seems to be going on here. Why is it acceptable that we spend more and more time and money -- time and money that have real opportunity costs and could be devoted to our students -- on assessment that is so poorly designed that it does not tell us anything meaningful about our courses or students? Whose interests are really served by this? Not students. Not faculty members.

It’s time to stop this charade. If some people want to do real research on what works in the classroom, more power to them. But making every program and every faculty member engage in nonresearch that yields nothing of value is a colossal, frivolous waste of time and money.

Erik Gilbert is a professor of history at Arkansas State University.

Essay on campaign 2016 and scholarship on Clinton

Two months ago I started keeping a notebook about the presidential election -- in part to jot down my musings and fulminations in a real-time chronicle of the most terrifying length of track on this year’s roller-coaster ride, and in part to wean myself from the habit of snarling profanities at the cable television news. (It was scaring the cats.)

A nickname for the project suggested itself -- The Trump Dump. For it really has been just the one candidate -- his moods and his impulses, far more than his policies, insofar as they could ever be determined -- who set the terms and the pace of the entire contest. Making sense of 2016 meant making sense of Donald Trump, or, rather, of how he ever emerged as a serious political force. “He is impervious to every bullet he shoots into his own feet,” reads one of my notes from before the first debate. “It’s hard to keep thinking about this, but impossible to stop.”

Hillary Clinton, by contrast, is all ineluctability and no enigma. She became the de facto presumptive Democratic candidate for 2016 no later than 2009. Even the scandals linked to her name seem perennial. As a tough-minded and successful professional woman in her 60s, Clinton embodies a misogynist’s worst nightmare, but that just means that the psychodrama of recent months has all been on the part of the candidate with the Tic Tacs.

The Clinton campaign’s greatest advantage was never her aura of inevitability, of course, but rather the widespread suspicion that a Trump presidency would prove to be, like a game of Russian roulette, altogether too exciting for everyone involved. HRC would have guaranteed us the comforts of familiar crises: annual displays of government-shutdown brinksmanship for one, along with a shrinking Supreme Court as the justices die off, with confirmation hearings postponed until after the latest presidential impeachment attempt.

In reading a selection of the master’s theses and doctoral dissertations on Hillary Clinton that academics completed between 1994 and May of this year, I’ve had much the same feeling: most of the scholarly attention to her has come from two or three disciplines and focused on a small range of topics.

I made two collections of abstracts from an online repository of theses and dissertations -- 30 in all, although one item appeared in both sets, bringing the total down to 29. The degrees sought were about evenly divided between the M.A. and the Ph.D., along with one Ed.D. and two M.S. degrees. A plurality of the work -- 10 out of all the theses or dissertations -- was identified as conducted in communications departments, with three more in rhetoric. Departments of political science, sociology, education and leadership studies hosted one study each, while two were listed as done in liberal studies programs. Of the five theses or dissertations for which no disciplinary affiliation was given, at least two or three showed an affinity to the study of rhetoric and communications -- historically, closely associated fields.

In short, more than half of the work on Clinton was performed by students working in rhetoric/communications. In a rough analysis of the topics, I found that 14 were clearly marked as focusing on gender (an implicit emphasis in a number of others). Ten each were identified as studies of rhetoric and media; three specified a focus on communication in general and three on online communication specifically. Seven concentrated on Clinton as first lady and nine on her 2008 campaign. All that said, I should make clear that a single thesis or doctoral dissertation might fall under up to three of these topical headings.

Over all, the emphasis of the studies was overwhelmingly on Clinton either as a user of some form of communication media or as an object of media representation. To give two master’s theses as examples, respectively: Christina Young Guest’s M.A. thesis, “Political Feminine Style and the Feminist Implications of the Respective Convention Speeches: Hillary Rodham Clinton and Sarah Palin” (University of Central Missouri, 2010), and Heidi Johnson’s “Clinton as Matron, Palin as MILF in 2008 Political Cartoons: Transformation in the Caricature of Female Authority?” (Hawaii Pacific University, 2009). As these titles suggest, questions regarding communications and gender issues were interrelated: every dissertation or thesis specifically focused on gender also addressed some aspect of rhetoric, media or communication.

Much less common were studies focusing on Clinton and policy. By my count, only five did. To risk an overgeneralization, researchers have tended to be more interested in how Clinton challenged or was constrained by traditional female roles or implicit assumptions about the proper relationship between public and private identity than in her activity as a senator or secretary of state.

The most recent of the studies -- accepted for the master of arts in liberal studies at Wake Forest University in May of this year -- concerned a matter that proved especially persistent throughout this year’s campaign: Whitney Jessica Threatt’s “A Transparent Hillary Clinton Through the Lens of Apologia Discourse,” wherein Clinton’s email server and its vexed status are addressed with respect to the Obama presidency’s policy on transparency and open government.

Drawing on a specialist literature about apologia (discursive mitigation when accused of injury and/or failure to live up to a certain standard), Threatt considers how various routine responses (denial, corrective action, shifting of blame, etc.) can serve to improve or worsen the accused’s situation vis-à-vis an audience. Complicating apologia for a very public figure such as Clinton is the double problem of media repetition (asking the same question over and over “suggests that the charges brought are true”) and widespread “alienation from politicians as well as the political process.”

Between Whitewater, the Lewinsky affair and so forth, Clinton has spent much of the past quarter century negotiating the terms of “image repair.” (Let’s nod at the existence of an additional set of specialist typologies here and just continue.) Meanwhile, opposing political operatives have built entire careers around raising the earlier circumstances for discussion again at every opportunity. In the case of the private email server, the researcher finds Clinton using certain forms of apologia employed in earlier controversies, but relying on one mode in particular. Examining speeches and interviews with Clinton, Threatt notes that she “consistently attempts to demonstrate that she, herself, has been transparent about not only the investigation but throughout her time as secretary of state.” She assures listeners “that she is doing everything in her power to display transparency by providing the public with the actual emails …. The fact that she is doing more than what has been asked of her insinuates that she is being a leader.”

The upshot here is that Clinton has had an arsenal of rhetorical strategies at her disposal and considerable practice in using them -- with repetition and consistency as primary guiding principles, in part because the same questions and accusations return time and time again. On Tuesday, those strategies failed her.

How academics can use Twitter most effectively (essay)

Patrick Iber provides guidance on how to make Twitter work for academic purposes.

The challenges of publishing research on marginalized communities (essay)

J. Sumerau provides advice to researchers who find themselves studying groups or phenomena missing -- mostly or entirely -- from the current scientific literature.

Essay on Edgar Cayce, sociology of religion, terahertz waves and 'Repo Man'

Around this time 20 years ago, I met an elderly gentleman who’d had what sounded like an exceptionally interesting and unusual dissertation-writing experience. A couple of recent coincidences brought the encounter to mind and inspired this little causerie.

His name was Harmon Bro, and he was in his late 70s when we met. He’d spent the better part of 50 years as an ordained minister and Jungian psychotherapist. If anyone ever looked the part of a Jungian archetype, it was Harmon, who personified the Wise Old Man. In 1955, the University of Chicago Divinity School awarded him a Ph.D. after accepting a doctoral thesis called “The Charisma of the Seer: A Study in the Phenomenology of Religious Leadership.”

It was based in part on work Harmon did in his early 20s as an assistant to Edgar Cayce, “the sleeping prophet.” Despite minimal education, Cayce, it is said, could give long, extemporaneous discourses in response to questions posed to him while he was in a trance state. Among these “readings” were medically sophisticated diagnoses of people miles or continents away, as well as detailed accounts of ancient history and predictions of the future.

Cayce died in 1945, but he left a vast mass of transcripts of his “readings.” By the 1960s, publishers were mining them to produce a seemingly endless series of paperback books extolling Cayce’s powers. Insofar as the New Age can be said to have founding figures, he was one of them.

Harmon was clearly a believer in Cayce’s miraculous powers. I was not (and am not), but I have always enjoyed the legends by and about him. As a schoolboy, for example, Cayce would put a textbook under his pillow and absorb its contents while asleep. He graduated (so to speak) to the Akashic Records -- an ethereal library documenting life on Atlantis and in ancient Egypt, and much else besides. He could also see into the future, but the track record is not impressive: China did not convert to Christianity in 1968, nor did Armageddon arrive in 1999. Cayce also predicted that an earthquake in the 1960s would cause California to sink into the Pacific Ocean. It remains attached to the continental United States as of this writing.

Harmon didn’t take skepticism as a threat or an insult, and anyway I preferred listening to arguing. He stressed how very improbable Cayce had been as a subject for serious scholarly attention in the 1950s -- at the University of Chicago, no less. It took three or four tries to get his topic approved; by the time the dissertation was finished and accepted, it felt like every faculty member concerned with the history and psychology of religion had weighed in on it. He happily lent me a copy (when anyone expresses interest in a decades-old dissertation, its author will usually have one of two responses: pleasure or horror), and from reading it, I could see that the scrutiny had been all for the best. It obliged him to practice a kind of methodological agnosticism about Cayce’s powers, and he demonstrated a solid grounding in the social-scientific literature on religion -- in particular, Max Weber’s work on prophetic charisma.

But by 1996, Harmon Bro was not at all happy with the institutions routinizing that charisma. The man he’d known and studied had an ethical message -- “love thy neighbor as thyself,” more or less. The New Age ethos amounted to “love thyself and improve thy karma.” You didn’t have to share his worldview to see his point.

The timing was fortunate: we grew acquainted during what proved to be the final year of Harmon Bro’s life. His obituary in the Chicago Tribune in 1997 made no reference to Cayce, but looking it up just now leaves me with a definite feeling of synchronicity: Harmon died on Sept. 13, which is also the date I’m finishing this piece. A message from Harmon, via the cosmic unconscious?

Probably not, although it was another and even more far-flung coincidence that reminded me of him in the first place. On Friday, the journal Nature Communications published a paper called “Terahertz time-gated spectral imaging for content extraction through layered structures,” which the science-news website EurekAlert kindly translates into laymanese as “Researchers prototype system for reading closed books.” Not by putting them under a pillow and sleeping on them, alas, but it’s impressive even so.

Researchers at the Massachusetts Institute of Technology and the Georgia Institute of Technology collaborated in developing a system that uses bursts of terahertz radiation (“the band of electromagnetic radiation between microwaves and infrared light,” says EurekAlert) to create images of the surfaces of individual pieces of paper in a stack. Ink in a printed letter absorbs the radiation differently from the blank page around it; the contrast between the reflected signals is fed into an algorithm that identifies the letter on the page. The prototype can “read” the surfaces of up to nine pages in a pile; with more work, reading at greater depths seems possible. The story quotes one of the researchers as saying, “The Metropolitan Museum in New York showed a lot of interest in this, because they want to, for example, look into some antique books that they don’t even want to touch.” The signal-sorting algorithm may yet enable spambots to defeat captchas. (Which arguably represents grounds for halting research right away, though that is unlikely.)

The train of association between breaking technological news from last week and the memory of one of the more generous and unusual people to cross my path is admittedly twisty and random. On the other hand, reading by terahertz radiation seems like another example of Clarke’s Third Law: “Any sufficiently advanced technology is indistinguishable from magic.”

How your dissertation can influence your teaching (essay)

My dissertation influences my teaching not so much through its content as through the lessons I learned writing it, says Peter Wayne Moe.

Legislature-mandated environmental policy center ruffles feathers at UNC

North Carolina legislators raised eyebrows when they funded the creation of a new entity at Chapel Hill to explore environmental policy and research.

In academe, who is really objective? (essay)

Failing to value and respect the types of data that minoritized scholars are collecting -- and the ways we are collecting them -- is a form of silencing us, writes Jackson Wright Shultz.

Most research by cisgender scholars can be defined as me-search (essay)

If me-search refers to people who study populations they are a part of, then most survey work done by cisgender scholars fits that definition, argues J. Sumerau.
