Social scientists "have long suffered from an academic inferiority complex," argue Kevin A. Clarke and David M. Primo, in a New York Times op-ed published earlier this month. "They often feel that their disciplines should be on a par with the 'real' sciences and self-consciously model their work on them."
In their recently published book A Model Discipline: Political Science and the Logic of Representations (Oxford University Press), Clarke and Primo delve into the ramifications of this "physics envy" for political science. In their quest to emulate the hard sciences, Clarke and Primo write, political scientists have placed far too much emphasis on model testing, resulting in the widespread view "that theoretical models must be tested to be of value and that the ultimate goal of empirical analysis is theory testing."
A Model Discipline argues that the logic behind this stance is hopelessly flawed, while its impacts have been detrimental to political science in a variety of ways.
Clarke, who is associate professor of political science, and Primo, who is associate professor of political science and business administration -- both at the University of Rochester -- answered Inside Higher Ed's e-mailed questions about their book's themes and implications, as well as the changes they'd like to see in their field.
Q: A Model Discipline is an unusual sort of political science book. What prompted you to write it?
Clarke: Our book is actually not that unusual. A number of prominent political scientists have written books asserting, with little or no evidence, that there is only one way to do Science: write down a deductive model, derive a hypothesis from it, and test the hypothesis. If the hypothesis survives the test, the model is confirmed. If the hypothesis does not survive, the model is disconfirmed. (Elster's More Nuts and Bolts for the Social Sciences and Morton's Methods and Models: A Guide to the Empirical Analysis of Formal Models in Political Science are but two such books.) These works, in our estimation, misunderstand the nature of social scientific models, but their views have become pervasive at the upper reaches of the discipline. We see our book as a necessary corrective.
The actual research was prompted by a student who asked, "Why test deductive models?" The essence of a deductive model is that if the assumptions of the model are true, then the conclusions must be true. If the assumptions are false, then the conclusions may be true or false, and the logical connection to the model is broken. The point is that social scientists work with assumptions that are known to be false. Thus, whether a model's conclusions are true or not has nothing to do with the model itself, and "testing" cannot tell us anything that we did not already know.
Q: What do you mean when you write that "science is not what we think it is"?
Clarke: Most social scientists, indeed most people, believe that there exists something called The Scientific Method. This belief is usually the product of half-remembered sidebars in high school science textbooks, which promote a highly sanitized and organized version of science free of subtlety and complexity. This version of science is based on 19th-century physics and has little relevance for modern physics and even less for the modern social sciences. Scholars who study what scientists do consistently report that scientists do nothing consistently. The Scientific Method that many political scientists are chasing is nothing but a chimera.
Primo: Political scientists, economists, and other social scientists have longed for decades to have the same credibility as their peers in disciplines like physics. In reality, though, science is a messy process, and there is vigorous disagreement about how to study the natural world. The popular image of science is of researchers discovering absolute truths using an agreed-upon method -- the scientific method. Yet we need only read about the latest medical study or the ongoing debate over global warming data to see that there are few absolute truths in the scientific world, and many approaches to science.
Q: A central point of your book involves the flaws in a research procedure called hypothetico-deductivism, or H-D. Can you offer a brief definition of H-D? Why is it problematic?
Primo: Economist Wade Hands has written that one variant of H-D, falsificationism, makes “good 3x5-card philosophy of science.” Simplifying somewhat, H-D is a three-step, “propose-derive-test” approach to science. Step 1 is to propose a theory. Step 2 is to derive a prediction from that theory. Step 3 involves subjecting the prediction to a statistical test. If the prediction passes the test, the theory gains support. If the prediction fails the test, the theory is falsified. (Falsificationists argue that a theory can be falsified but never verified, so this strain of H-D focuses only on predictive failures, not successes.)
Clarke: There are two major problems with H-D as practiced in the social sciences. First, as alluded to in question 1, deductions are truth-preserving. True conclusions follow from true premises. Nothing is known about conclusions that follow from false premises; the conclusions may be true or false. In the social sciences, we generally know that our premises (the theoretical model) are false. There is nothing to be learned, then, from testing the model.
Second, H-D requires determining the truth status of the prediction that is derived from the model. In the social sciences, that determination requires a complicated statistical model that rests on often questionable assumptions. Thus, theoretical models are never tested with actual data; they are tested with models of data, which are notoriously fragile. The real confrontation is not between theory and data, but between two models, each of which is partial and flawed. H-D has nothing to say about this situation.
Q: You return several times to the idea that models are like maps. What is the importance of this analogy -- and how does it differ from the standard way of understanding models?
Clarke: We argue that to see models as true or false is a category mistake. Models are neither and should be viewed as objects. The map analogy, which draws on the work of philosopher Ronald Giere, is nearly perfect. A map is an object, and a map is a model (it is a two-dimensional representation of a three-dimensional world). No one would think to ask whether a map is true. Consider, for example, the distortions of a subway map. A subway rider wants to know whether the map is useful for navigating the subway, but the very same map is generally useless for navigating city streets. Maps are useful for some purposes and not for others.
If we adopt the analogy between models and maps, it no longer makes sense to "test" a model in the H-D sense. Instead, scholars need to assess whether particular models are useful for particular purposes. Theoretical models, for instance, can be used as exploratory devices for investigating putative causal mechanisms, and empirical models can be useful in measuring difficult-to-operationalize concepts. In neither case is a "true" model required.
Q: What does it mean to say that "political scientists privilege the empirical model ... over the theoretical model," and why is this a mistake?
Primo: Too often we let statistical models tell us whether our theoretical models are worth anything, with very little attention paid to the quality of the underlying statistical model. Statistical models, after all, make loads of assumptions and are in many respects more fragile than theoretical models. Yet, if a statistical model produces results in accord with a theoretical model, that’s where the scrutiny stops.
Clarke: Many political scientists believe that theoretical models that are not tested are useless and amount to mere "mathematical masturbation" (the actual phrase many critics use). As discussed above, theoretical models are not tested with data, but with models of data. Determining that a theoretical model is "false" based on the results of an empirical model implies privileging the empirical model over the theoretical model. The fact is that both theoretical models and empirical models are partial, display limited accuracy, and are purpose-relative. There is no reason to treat either kind of model as closer to some truth.
Q: Why should models "be judged not by how well they predict ... but how useful they are"? What difference would this make, in practical terms?
Primo: In the book we highlight the different uses for theoretical and empirical models. A model can have many other uses besides prediction, and a narrow focus on prediction rules out the construction of important types of models. For instance, suppose that we want to understand why wars start, given that they are so costly. Modelers have shown that one reason for war initiation is the different information available to each country’s leaders. These informational models don’t produce “predictions” in any real sense, but they do offer us a deeper understanding about the causes of war.
These kinds of models are the exception rather than the rule today, however.
Q: What does your book mean for the average political scientist? What do you hope its broader impact will be?
Clarke: It is unlikely that entrenched political scientists will change their research habits based on our arguments. Our impact, we hope, will be in the long run. If we can encourage scholars to think differently about models, the purposes of models, and the interactions between models, then change will occur over time. Scholars will begin to specify the purposes of the models in their work, and reviewers will drop demands that all theoretical models be tested. Slowly but surely change will take place.
Primo: I would like this book to start a long-overdue dialogue in the discipline. We’ve had debates in recent years about the merits of rational choice theory and about whether the field has become too technical. There has been very little debate, however, about whether H-D is the appropriate standard for the discipline. In fact, the debate over the merits of rational choice theory was largely fought on H-D grounds. I hope that A Model Discipline will lead political scientists, whether they agree or disagree with the arguments in the book, to think more deeply about the foundations of our enterprise.
In the short run, I don’t expect practice to change much. But in the long run, my hope is that the book will lead to changes in how we teach graduate students, the incentives junior faculty members face, and what’s considered “good work” in the field. I view our book as continuing in a long tradition at Rochester, begun by William Riker decades ago, of daring to be different.