For social scientists starting their careers, creating research models that work is crucial. A new book suggests that they may be unaware of problems they face in part because scholars don't share stories of what didn't work on their projects, and how to deal with particular challenges. Research Confidential: Solutions to Problems Most Social Scientists Pretend They Never Have has just been published by the University of Michigan Press. The essays in the collection are all by younger scholars, including the volume's editor, Eszter Hargittai, an associate professor of communication studies at Northwestern University, a fellow at the Berkman Center for Internet & Society at Harvard University and a career advice columnist for Inside Higher Ed. Hargittai responded to questions about the book.
Q: I was struck by the part of your subtitle where you say "pretend they never have." Why do you think social scientists don't recognize or hide from problems with their research methods?
A: This title refers less to what social scientists recognize and more to what shows up in the final write-up of their projects. When one reads journal articles, the methodological sections tend to make the projects sound rather straightforward. In books, details about methods are usually relegated to an appendix, at best, and do not tell the reader the reality of data collection. Instead, they are pretty, cleaned-up versions of what happened. For example, they will include the number of final interviews the researcher conducted, but they won't include details about how many attempts it took to get a person to agree to an interview.
It is certainly the case that such detailed descriptions may be out of place in some such write-ups, but the problem is that readers then do not realize the true complexities involved in the process. For example, students will not understand how much effort went into securing all of the interviews and how much frustration was associated with last-minute cancellations and other hurdles that may have come up. Similarly, journal articles don't tend to explain that it took the IRB three times as long as expected to approve a project and that everything was thus delayed. Again, that information may not be useful for the final write-up of results, but without seeing such details, it is hard for new scholars to recognize that they are indeed the reality of actual research and must be accounted for in planning new projects. This probably contributes to why so many people -- both students and faculty -- underestimate the length of time any project will take.
In another vein, I also think some social scientists encounter fewer problems because they compromise the quality of their research. Sure, some data collection methods are easier than others (e.g., sending out a survey to one's friends and colleagues may be easier than finding a suitable sample that does not consist of people in one's network), but depending on the questions one is asking, compromising on methodological rigor may significantly limit what conclusions, if any, one can then derive from the data.
Q: You approached younger scholars, rather than senior scholars in the disciplines, for the chapters. Why did you go that route?
A: It is certainly the case that more senior scholars will often have more relevant experiences upon which to draw for helpful guidance, and this probably explains why they tend to be the authors of many methods books. Nonetheless, I think it is important for junior scholars to get examples from their peers for several reasons. For one, the resources available to junior scholars are often more limited than those available to more senior colleagues, so hearing about the experiences of those at a similar stage may be especially helpful. That is, in undertaking a project, it may be more helpful to hear how others handled a situation while dealing with all the details on their own, as opposed to having had the opportunity to hire staff and outsource many of the logistics and worries of a project.
Additionally, status in the academic hierarchy may influence various aspects of a project. For example, the kind of reactions one receives from administrators and potential research participants may depend on whether one is a student or an established faculty member. Such status differences may also influence the extent to which the researcher can get feedback about the project. Also, hearing from those with relatively little experience going into a project may help highlight the types of hurdles that more senior scholars no longer think about and take for granted.
Finally, some of the methods featured in the volume concern new challenges raised by digital media. Much of the cutting-edge research in this realm is being conducted by more junior researchers, so it was important to include them for that reason.
Q: As you mentioned, several chapters relate to digital research (online surveys, data collection through text messaging, etc.). How much do you think the digital era has changed the challenges facing the social science researcher? Are digital techniques being used to full advantage?
A: Digital media offer tremendous opportunities in some areas, but -- especially in the academic realm -- many are just beginning to take advantage of these, if at all. Of course, as the chapters in the book that deal with such topics explain, what may seem like a convenient opportunity comes with its own set of unique challenges, such as skeptical reactions from Institutional Review Boards or the difficulty of making sure that tools work as expected. Additionally, part of what these chapters emphasize is that some of the hurdles faced with more traditional methods still remain: it is still important to have a carefully thought-out research design and to plan and pretest each step of the process as carefully as possible.
But regardless of these various challenges, there are some exciting opportunities that we should be exploring. I am now working on a follow-up volume to this one that will focus specifically on the increasing number of new opportunities offered by digital tools to make sure that social scientists are taking advantage of these, but also ensuring that they are doing so in an informed manner.
Q: You mentioned Institutional Review Boards and indeed the topic comes up in the book several times. I hear a lot of criticism of them at social science meetings -- do you think the board members' focus reflects the real problems in research methods today?
A: It is extremely important for members of Institutional Review Boards to keep up with new tools and procedures so that they understand novel methods and can ensure the protection of human subjects without unduly compromising -- or in some cases making entirely impossible -- important cutting-edge research. IRBs sometimes seem to lose sight of what should be their priority: making sure that there is no harm to respondents as a result of a research project. Instead, boards sometimes focus on logistical details that ultimately have no bearing on protecting research subjects.
It is interesting to compare approaches across institutions and see how differently some matters are addressed. In some cases, studies are approved at one institution much more quickly than at another based on very similar criteria. It is unfortunate that institutional affiliation, rather than clearly identified criteria applied across the board, would drive this.
It is also important to note here that many of our peers in other countries do not face the same types of IRB hurdles that we do, and thus researchers at American institutions can be at a disadvantage, prevented from conducting cutting-edge research in a timely manner. To be sure, it is extremely important to protect human subjects from harmful research. However, it is also important that the process that purports to do this indeed does so, rather than simply creating a bottleneck in research.
Q: Which problem in the book was the biggest surprise to you?
A: Because I think about matters of this sort a lot, I don't think any problem caught me off guard. Nonetheless, I was struck by some of the hurdles some of my colleagues have had to face, and I considered myself fortunate that up until now I have not had to -- and *knocks on wood* will never have to -- experience them firsthand. For example, I cannot imagine the frustration that comes from a third party losing (more precisely, throwing out) a good chunk of my data. I once had something similar happen, but we "only" lost about 5 percent of our sample. It was definitely very unfortunate, but since we had a large sample and since the error was random, it did not compromise our entire study.
What was truly gratifying in putting together this volume was the level of honesty that many of the authors brought to the descriptions of their projects. It is not pleasant to rehash serious obstacles, and in my opinion it also takes courage to lift the curtain and let people take a peek into your research process. These difficulties may get discussed informally in hallways, but they are rarely put into print. However, without publishing them, they will not be accessible to the many scholars -- junior and senior -- who could surely benefit from better understanding the true realities of empirical social science research. That is what this volume set out to do, and thanks to the generous contributions of the authors, initial feedback from readers suggests that it is succeeding in meeting this need.