Last week The New York Times published a reproduction of a poison-pen letter that Martin Luther King Jr. received 50 years ago this month, a few weeks before he accepted the Nobel Peace Prize. A couple of passages in the screed suggest it was accompanied by an audiotape of MLK in a hotel room, indulging in a round of extramarital recreation. King and his circle assumed that J. Edgar Hoover was behind the whole thing -- a reasonable guess, since bugging a hotel room counted as a sophisticated surveillance operation in 1964.
Portions of the letter have been quoted by King’s biographers for years, and Hoover’s animus against King and the rest of the civil rights movement was obvious enough. But in her essay for the Times, Beverly Gage -- the history professor from Yale who found the original draft in the National Archives -- underscores something that only shows through with the whole document in front of you. It might be called an element of psychosexual frenzy.
The note -- prepared by one of Hoover’s agents, but reflecting his own preoccupations regarding King -- purported to be from a disillusioned African-American supporter. MLK’s “alleged lovers get the worst of it,” writes Gage. “They are described as ‘filthy dirty evil companions’ and ‘evil playmates,’ all engaged in ‘dirt, filth, evil and moronic talk.’ The effect is at once grotesque and hypnotic, an obsessive’s account of carnal rage and personal betrayal…. Near the end, it circles back to its initial target, denouncing him as an ‘evil, abnormal beast.’ ”
All in a day’s work at J. Edgar’s FBI. The only thing surprising about the note is the lack of any charge that King was a Communist Party stooge. Hoover’s practice of collecting information on the sex lives of prominent individuals served the perfectly straightforward function of bolstering his personal authority, of course. And it worked: he served as the bureau’s director for almost 50 years, in part because he had the goods in his hands to derail any effort to replace him. But there is also a hint of voyeurism to the director’s “Official/Confidential File.” Blackmail is power -- and power, as someone once said, is the ultimate aphrodisiac.
The director comes onstage only about halfway through Jessica R. Pliley’s Policing Sexuality: The Mann Act and the Making of the FBI (Harvard University Press). It would be excessive to call Hoover a minor figure in the book, but Pliley certainly displaces him from his familiar status as prime mover in the bureau’s history.
Pliley, an assistant professor of women’s history at Texas State University, begins a generation or two before the creation of the Bureau of Investigation in 1908 (the name was changed in 1935), with the stresses and strains of American society in the late 19th century that ultimately gave rise to one of the laws the bureau tried to enforce: the Mann Act, which made it a felony to transport “any woman or girl” across state lines “for the purpose of prostitution or debauchery, or for any other immoral purpose.”
The law, passed in 1910, now seems almost idiomatically peculiar: As with the decision to make alcohol, tobacco, and firearms the purview of a single law-enforcement agency, most people would have a hard time explaining the logic behind it. Pliley traces its roots to a series of moral panics in the United States over the changes induced by the country’s rapid expansion and urbanization. A growing national economy brought with it an expanded market for prostitution -- the horrors of which were summed up by 19th-century reformers as “white slavery.”
That phrase expressed the moral fervor of the abolitionist spirit finding a new cause, while also carrying its share of racial overtones, especially in sensational accounts of blue-eyed girls servicing the lusts of nonwhite customers. The influx of immigrants was another concern. Women finding their way in a new country were especially vulnerable. But there was also the need to protect America's precious bodily fluids from the contaminating influence of foreign cultures, with their deplorably lax moral standards and unwholesomely exotic bedroom practices. (Despite the xenophobia, there was something to the last point. By the 1920s, any bordello trying to keep its clientele had to offer “the French,” i.e., fellatio.)
Urbanization and the automobile multiplied the temptations for other sins of the flesh, as well as the venues for committing them. The danger of a young woman being seduced and abandoned after false promises of marriage became more intense when parents knew that the cad might impregnate her in the rumble seat, then drive off to who knows where.
Pliley devotes most of the first third of her book to building up, layer by layer, a picture of the trends and anxieties of the period -- some of them overblown, but with enough examples from the legal record of women raped and then forced into sexual labor to show that it wasn’t all a matter of yellow journalism.
Pliley also discusses various laws and social campaigns that emerged in response -- efforts to shore up the norms by which sexual activity would be restricted to monogamous, legally married straight couples of the same race, who, while not necessarily born in the U.S., otherwise tried to make themselves as inconspicuous as possible. But measures to reassert control over the American libido were always one or two steps behind the social changes -- and enforcement could never be much more than episodic.
When Representative James R. Mann proposed the White Slave Traffic Act (soon to be known by his name) to Congress in 1910, its odd mandate reflected the effort to patch over some of the existing gaps in terms just broad enough to cover problems that ever-faster means of transportation were bound to create.
It met a little opposition. One Congressman expressed concern that “immoral purposes” was so vague that it might apply to horse racing and chicken fighting. Southern politicians were initially troubled that the law might infringe on states’ rights, but found themselves charged with a lack of concern for the protection of white womanhood, which settled the matter soon enough. President Taft signed the bill into law the day Congress sent it to his desk.
The burden of enforcing the Mann Act soon fell to the Justice Department’s recently formed Bureau of Investigation, which had a small staff and not much precedent for how to proceed. An early investigation seemed like a promising way to crack the organized traffic in prostitution between bordellos in Connecticut, Louisiana, and other states. But it turned out the hookers operated as free agents who traveled from bordello to bordello in a circuit. Customers, madams, and sex workers alike seem to have found it a reasonably satisfactory arrangement.
Pliley points out that the whole “white slavery” discourse rested on the idea that women wanted, more or less by instinct, to establish a monogamous relationship and start a family, and would enter or remain in prostitution only under threat of violence. But the interstate pimp ring that turned out not to exist suggested otherwise. The author shows that a great deal of the caseload for agents in the early decades of the bureau pertained to cases of adultery in which the lovers had fled the state. The aggrieved spouse could charge the adulterous man with violating the Mann Act, despite his paramour being perfectly happy with the situation. She had been taken “across state lines for immoral purposes,” though the investigation usually ended once she had agreed to return to her husband.
Thanks to the Great Depression, the bureau was able to enter the headlines for cases involving bank robbers and gangsters -- and, a bit later, political radicals, as well as professional spies. But Pliley notes that in the late 1930s, Hoover (who joined the bureau in 1919 and became director five years later) reasserted the original understanding of the Mann Act as a measure against prostitution.
“The Bureau investigated only when the right person invited it,” she writes, “a father, a husband, or a male local law enforcement official. When the Bureau considered aggravated cases of sexual exploitation, it almost always conceived of prosecuting these crimes as defending the family (and concomitantly upholding men’s rights to control the sexuality of their dependents) rather than upholding an idea of female sexual sovereignty.”
It seems almost superfluous to mention the other implicit requirement: the man in question had to be white. The author names a few cases in which the complainant was of another color, but it seems that the most agents ever did was to fill out some paperwork, presumably to humor him.
None of this can really be attributed to Hoover, though. He executed the law, and enforced its biases, but they were established well before he joined the Bureau.
Policing Sexuality takes the story up to roughly America’s entry into World War II, but I think the surveillance of MLK and the vicious letter from 50 years ago take on a new aspect in light of Pliley’s research. She directs our attention away from the director and toward the matrix in which the Bureau took shape. That challenges the habit of regarding the FBI as an institution shaped, and distorted, by his personality -- parts of which are expressed in the letter to King, written by a subordinate who knew what his boss wanted.
But the letter also echoes the concerns that Pliley finds in the Mann Act well before Hoover took power. Besides hostility to African-American advancement (one undercurrent of the "white slavery" theme), it expresses a fervent, one might even say deranged, aversion to sex outside of marriage. That Hoover shared these attitudes made him a perfect fit for the job. He thrived in it, and was good at it, although “good” isn't really how it looks from here.
In 2009, the Cornell Law Review published an article called “The Anti-Corruption Principle” by Zephyr Teachout, then a visiting assistant professor of law at Duke University. In it she maintained that the framers of the U.S. Constitution were “obsessed” (that was Teachout’s word) with the dangers of political corruption -- bribery, cronyism, patronage, the making of laws designed to benefit a few at the expense of public well-being, and so on.
Such practices, and the attitudes going with them, had eaten away, termite-like, at the ethos of the ancient Roman republic and done untold damage to the spirit of liberty in Britain as well. The one collapsed; the other spawned “rulers who neither see, nor feel, nor know / but leech-like to their fainting country cling,” as Shelley wrote some years later in a poem about George III’s reign. But in Teachout’s reading, the framers were obsessed with corruption without being fatalistic about it. The best way to reduce the chances of corruption was to reduce the opportunities for temptation -- for example, by preventing any “Person holding any Office of Profit or Trust” from “accepting any present, Emolument, Office, or Title, of any kind whatever, from any King, Prince, or foreign State” without explicit permission from Congress. Likewise, the separation of powers among the executive, legislative, and judicial branches was, in part, an expression of the anti-corruption principle.
Teachout indicated in a footnote that her argument would be expanded in a forthcoming book, called The Meaning of Corruption, due out the following year. It was delayed. For one thing, Teachout moved to Fordham University, where she is now an associate professor of law. And for another, her law-review article gained the unusual eminence of being cited by two Supreme Court Justices, Antonin Scalia and John Paul Stevens, in their opinions in the landmark Citizens United v. Federal Election Commission decision.
Now Teachout’s book has appeared as Corruption in America: From Benjamin Franklin’s Snuff Box to Citizens United, from Harvard University Press – an appreciably livelier title, increasing the likelihood (now pretty much a certainty) that it will inform the thinking of many rank-and-file Democratic Party supporters and activists.
Whether it will resonate with their leaders beyond the level of campaign rhetoric is another matter. Each of the two parties has a revolving door between elected office and the lobbying sector. While discussing the book here last week, I mentioned that suspicion and hostility toward lobbying were conspicuous in American political attitudes until fairly recently. They still are, of course, but with nothing like the intensity exhibited when the state of Georgia adopted a constitution outlawing the practice in 1877: “Lobbying is declared to be a crime, and the General Assembly shall enforce this provision by suitable penalties,” including a prison sentence of up to five years. Other efforts to curtail lobbying were less severe, though nonetheless sharper than today’s statutes requiring lobbyists to register and disclose their sources of funding.
“[T]he practice of paying someone else to make one’s arguments to people in authority,” writes Teachout, “threatened to undermine the moral fabric of civil society…. In a lobbyist-client relationship, the lobbyist, by virtue of being a citizen, has a distinct relationship to what he himself might believe. He is selling his own citizenship, or one of the obligations of his own citizenship, for a fee.”
The lobbyist’s activity is “more akin to selling the personal right to vote than selling legal skills,” as a lawyer does. Nor is that the only damage lobbying does to the delicate ecology of mutual confidence between state and citizen. It “legitimates a kind of routine sophistry and a casual approach towards public argument. It leads people to distrust the sincerity of public arguments and weakens their own sense of obligation to the public good” – thereby creating “the danger of a cynical political culture.” (So that’s how we got here.)
Clearly something went wrong. The anti-corruption principle, as Teachout formulates it, entails more than the prevention of certain kinds of acts – say, bribery. It’s also supposed to strengthen the individual citizen’s faith in and respect for authority while also promoting the general welfare. But private interest has a way of seeing itself as public interest, as exemplified in a railroad lobbyist’s remarks to Congress during the Gilded Age: If someone “won’t do right unless he’s bribed to do it,” he said, “…I think it’s a man’s duty to go up and bribe him.”
Teachout refers to an erosion of the anti-corruption principle over time, but much of her narrative documents a recurring failure to give anti-corruption laws teeth. “Criminal anticorruption laws were particularly hard to prosecute” during the 19th century, she writes, because “the wrongdoers – the briber and the bribed – had no incentive to complain,” while “the defrauded public was dispersed, with no identifiable victim who would drive the charge.” The concept of corruption has dwindled to bribery defined as quid pro quo in the narrowest possible terms: “openly asking for a deal in exchange for a specific government action.”
In a colloquy appearing in the Northwestern University Law Review, Seth Barrett Tillman, a lecturer in law at the National University of Ireland Maynooth, suggests that a core problem with Teachout’s argument is that it overstates how single-mindedly anti-corruption the framers of the U.S. Constitution actually were. The Articles of Confederation made broader anti-corruption provisions on some points, for example.
And “if the Framers believed that corruption posed the chief danger to the new Republic,” he writes, “one wonders why corrupt Senate-convicted and disqualified former federal officials were still eligible to hold state offices—offices which could indirectly affect significant operations of the new national government—and were also (arguably) eligible to hold congressional seats, thereby injecting corrupt officials directly into national policy-making.”
Concerned about corruption? Definitely. “Obsessed” with it? Not so much. There is much to like about Teachout’s book, but treating the framers of the Constitution as possessing the keys to resolving 21st-century problems seems extremely idealistic, and not in a good way.
It's taken a while, but we’ve made a little progress on the mathesis universalis that Leibniz envisioned 300 or so years ago – a mathematical language describing the world so perfectly that any question could be answered by performing the appropriate calculations.
Aware that the computations would be demanding, Leibniz also had in mind a machine to do them rapidly. On that score things are very much farther along than he could ever have imagined. And while the mathesis universalis itself seems destined to remain only the most beautiful dream of rationalist philosophy, there’s no question that Leibniz would appreciate the incredible power to store and retrieve information that we’ve come to take for granted. (Besides being a polymathic genius, he was a librarian.)
Johanna Drucker’s Graphesis: Visual Forms of Knowledge Production, published by Harvard University Press, focuses in part on the capacity of maps, charts, diagrams, and other modes of display to encode and organize information. But only in part: while Drucker’s claims for the power of visual language are less extravagantly ambitious than Leibniz’s for mathematical symbols, it is a matter of degree and not of kind. (The author is professor of bibliographical studies at the Graduate School of Education and Information Studies of the University of California at Los Angeles.)
“The complexity of visual means of knowledge production,” she writes, “is matched by the sophistication of our cognitive processing. Visual knowledge is as dependent on lived, embodied, specific knowledge as any other field of human endeavor, and integrates other sense data as part of cognition. Not only do we process complex representations, but we are imbued with cultural training that allows us to understand them as knowledge, communicated and consensual, in spite of the fact that we have no ‘language’ of graphics or rules governing their use.”
Forget the old saw about a picture being worth a thousand words. Drucker’s claim is not about pictorial imagery, as such. A drawing or painting may communicate information about how a person or place looks, but the forms she has in mind (bar graphs, for example, or Venn diagrams) perform a more complex operation. They convert information into something visually apprehended.
We learn to understand and use these visual forms so readily that they seem almost self-evident. Some people know how to read a map better than others -- but all of us can at least recognize one when we see it. Likewise with tables, graphs, calendars, and family trees. In each case we intuitively understand how the data are organized, if not what they mean.
But the pages of Graphesis teem with color reproductions of 5,000 years’ worth of various modes of visually rendered knowledge -- showing how they have emerged and developed over time, growing familiar but also defining or reinforcing ways to apprehend information.
A good example is the mode of plotting information on a grid. Drucker reproduces a chart of planetary movements in that form from a 10th-century edition of Macrobius. But the idea didn’t catch on: “The idea of graphical plotting either did not occur, or required too much of an abstraction to conceptualize.” The necessary leap came only in the early 17th century, when Descartes reinvented the grid in developing analytical geometry. His mathematical tool “combined with intensifying interest in empirical measurements,” writes Drucker, “but they were only slowly brought together into graphic form. Instruments adequate for gathering ‘data’ in repeatable metrics came into play … but the intellectual means for putting such information into statistical graphs only appeared in fits and starts.”
And in the 1780s, a political economist invented a variation on the form by depicting the quantity of various exports and imports of Scotland as bars on a graph -- an arresting presentation, in that it shows one product being almost twice as heavily traded as any other. (The print is too small for me to determine what it was.) The advantages of the bar graph in rendering information to striking effect seem obvious, but it, too, was slow to enter common use.
“We can easily overlook the leap necessary to abstract data and then give form to its complexities,” writes Drucker. And once the leap is made, it becomes almost impossible to conceive such data without the familiar visual tools.
If the author ever defines her title term, I failed to mark the passage, but graphesis would presumably entail a comprehensive understanding of the available and potential means to record and synthesize knowledge, of whatever kind, in visual form. Drucker’s method is in large measure inductive: She examines a range of methods of presenting information to the eye and determines how the elements embed logical concepts into images.
While art history and film studies (especially work on editing and montage) are relevant to some degree, Drucker’s project is very much one of exploration and invention. Leibniz’s mathesis was totalizing and deductive; once established, his mathematical language would give final and definitive answers. By contrast, graphesis would entail the regular creation of new visual tools in keeping with the appearance of new kinds of knowledge, and new media for transmitting it.
“The ability to think in and with the tools of computational and digital environments,” the author warns, “will only evolve as quickly as our ability to articulate the metalanguages of our engagement.”
That passage, which is typical, is some indication of why Graphesis will cull its audience pretty quickly. Some readers will want to join her effort; many more will have some difficulty in imagining quite what it is. Deepening the project's fascination, for those drawn to it, is Drucker's recognition of an issue so new that it still requires a name: What happens to the structuring of knowledge when maps, charts, etc. appear not just on a screen, but one responsive to touch? The difficulties that Graphesis presents are only incidentally matters of diction; the issues themselves are difficult. I suspect Graphesis may prove to be an important book, for reasons we'll fully understand only somewhere down the line.
The dominion of open educational resources is apparently looming large, to judge by a blog thread touched off by a panel discussion at a recent Knewton event. David Wiley, participating in the panel, made the bold claim that “in the near future, 80 percent of textbooks would be replaced by OER content.” Jose Ferreira responded critically to that view a few days later with a blog post, to which Wiley replied. Michael Feldstein then weighed in with a dissenting perspective of his own.
It’s a spirited and fruitful discussion, well worth a read. Their comments, though, didn’t tackle what I’ve come to see as the core issue for the OER movement, a foundational assumption that has crimped its progress. The assumption holds that because open-source educational content is like open-source software -- in that it’s free content that you can chop up, remix, and share with anyone -- its application and uses should follow in a similar way.
The short history of the two movements makes clear that this is not the case. As David Wiley points out, the first openly licensed educational materials were published more than 15 years ago, around the time that Linux led the movement of open-source software (OSS) into the mainstream. So why did one open-source movement take off as the other tarried on the margins, championed only by the most stalwart advocates?
While Linux has long been part of standard practice, and our daily computing lives would be unthinkable without open-source software, more than 90 percent of faculty textbook adoptions in the U.S. are still locked-down, expensive commercial materials. Most observers (including most publishers) don’t doubt the unsustainability of the present course, but it’s also plain to see that the OER movement has not yet offered a truly satisfying alternative. The failure of OER to become mainstream at this point is only underscored by the myriad forces working in its favor: economic pressures, greater administrative accountability, government oversight and budget cuts, and a truly broken publisher model.
A clear reason for the different trajectories is the commercial support that OSS has enjoyed, and that OER has not. Contrary to the common view that OSS has advanced largely through loosely organized communities of volunteers, it is actually often strongly supported by private enterprise. More than 80 percent of the contributions to Linux, for example, now come from companies like Google and Samsung. But the success of OSS isn’t simply a matter of commercial appropriation. Companies were able to support OSS because they were building on an already-present foundation of voluntarism in the hacker community. While a volunteer community of course exists in OER, it does not have the depth and breadth of its OSS counterpart. The voluntarism of the hacker community does not, in other words, map well onto the community of academic instructors. This situation isn’t an accident of history but reflects a fundamental difference in the roles and self-understanding of each group.
With OSS, the hacker is often an end user but more centrally the creator and modifier of code. And to the extent that hackers form a community, it is a community of problem-solvers addressing issues that concern their work directly. In his seminal book on hacker open-source culture, The Cathedral and the Bazaar, Eric Raymond suggests that “Every good work of software starts by scratching a developer’s personal itch.” Contrast this with the relationship faculty have to the educational content they use: for most, it’s a tool for teaching a class, a means of supporting an activity that is largely extrinsic to the tasks of creating and modifying pedagogical content. Most instructors are not editors, let alone creators of their classroom content; they are simply end users.
If there’s a personal itch to scratch at all, it’s usually in the area of original scholarship and research, not teaching materials (let’s recall that the Internet was born to share research, not lesson plans). For most instructors, the textbook is a convenient package, without which the task of managing a class would be that much more laborious. Commercial publishers have long recognized what the OER movement has not: that often-overworked and underpaid instructors look to content and course technology to make their lives easier, not to take on the additional responsibility of managing their own content without financial recognition for that labor. Unlike the open-source hacker, instructors find little thrill in belonging to a community of content problem-solvers. To truncate an otherwise large topic: instructors are not hackers, and that changes everything. Or it should have, for the OER movement.
The recent gains of, and the growing prospects for, OER are, in fact, a tacit acknowledgement of this difference. No doubt the single biggest success to date for the movement is the OpenStax project, but this success breaks any illusion that the practice of OER is analogous to that of open software. Connexions, the OpenStax predecessor project at Rice, languished for years as an open-source content platform until Rice hired Joel Thierstein as associate provost to turn the project around. What did he do? Thierstein, who previously worked in the private sector developing content for the telecommunications industry, had a simple and very powerful idea: raise grant money to hire the same companies that ghostwrite textbooks for the traditional publishers, and then release the texts into the public domain under the most open license available.
As commercial textbook equivalents, their use required no behavioral changes from faculty. They would not be “learning objects” or fragments that required additional faculty work. Faculty could use them as teaching tools, just as they would conventional content, except that, in this case, they’re free. Like the commercial publishers, Thierstein rightly understood that faculty want an easy and straightforward way to adopt high-quality and appropriate content. Thierstein’s success enabled Rice to go forward with additional fund-raising and the rebranding of Connexions as OpenStax. A simple idea has had a significant impact.
And yet for all the success of OpenStax, it’s also clear that a free version of a commercial text will never alone be sufficient for OER to reach the mainstream, nor should it be. Some learning technologies, either already in use or emerging, have the capacity to improve student success significantly. The OER movement’s almost singular focus on cost can obscure the larger objective -- actually getting more students through to graduation while ensuring that they’ve learned (and enjoyed learning) something along the way.
The risk for the OER movement is that it unwittingly reinforces the kind of resource disparities we see everywhere else in our society: a situation in which the well-off enjoy content with the latest technologies and practices, and the not-so-well-off manage without them. To be sure, OpenStax partnerships with third-party technology partners are a recognition of this need, but these relations are still established within the traditional publisher/tech partner binary model, with the difference that the core content is low-cost or free. As important as that project is, it doesn’t yet realize the promise of OER as disaggregated high-quality content created and modified from anywhere.
A better way forward is to compensate the stakeholders -- faculty, copyright holders, and technologists, principally -- for their contributions to the OER ecosystem. This can be done by charging students a nominal fee for the OER courses they take, or by assessing a modest institutional materials fee. When there are no longer meaningful costs associated with the underlying content, it becomes possible to compensate faculty for the extra work while radically reducing costs to students. While I launched a new venture to do this, what’s needed are lots of entities -- for-profit and nonprofit -- experimenting with funding models. It’s all achievable, and there will likely be no single way to accomplish it.
From this will emerge a new breed of courseware, one that preserves the low cost and flexibility of open content while embracing learning technologies that support faculty and student success. Certainly such a model involves costs, though not so much for the content as for the tools that improve its use and for the people on the ground who are actually doing the work of curating and adapting materials. Align the incentives in the right way, and this model of openness can empower faculty members and institutions in unprecedented ways. It will encourage local innovation so that, over time, the courseware, now unlocked and financially supported, becomes an expression of the teaching itself.
Openness, then, lends itself to a new order of distributed content development that includes outstanding learning technologies; I think all the bloggers mentioned above recognize this. But precisely because instructors are not hackers and belong to an entirely different community of practice, a system for distributed content development also needs to be accompanied by a system of distributed financial incentives. When this all comes together -- and it will -- then courseware will escape commodification and become a creative and low-cost force in education. Only then should we begin to count the percentages.
Like a t-shirt that used to say something you can’t quite read anymore, a piece of terminology will sometimes grow so faded, or be worn so thin, that retiring it seems long overdue. The threadbare expression “socially constructed” is a case in point. It’s amazing the thing hasn’t disintegrated already.
In its prototypical form -- as formulated in the late 1920s, in the aphorism known as the Thomas theorem -- the idea was bright and shapely enough: “If men define situations as real, they are real in their consequences.” In a culture that regards the ghosts of dead ancestors as full members of the family, it’s necessary to take appropriate actions not to offend them; they will have a place at the table. Arguments about the socially constructed nature of reality generalize the Thomas theorem: we have access to the world only through the beliefs, concepts, categories, and patterns of behavior established by the society in which we live.
The idea lends itself to caricature, of course, particularly when it comes to discussion of the socially constructed nature of something brute and immune to argumentation like, say, the force of gravity. “Social constructivists think it’s just an idea in your head,” say the wits. “Maybe they should prove it by stepping off a tall building!”
Fortunately the experiment is not often performed. The counterargument from gravity is hardly so airtight as its makers like to think, however. The Thomas theorem holds that imaginary causes can have real effects, but that hardly implies that reality is just a product of the imagination.
And as for gravity -- yes, of course it is “constructed.” The observation that things fall to the ground is several orders of abstraction less than a scientific concept. Newton’s development of the inverse square law of attraction, its confirmation by experiment, and the idea’s diffusion among the non-scientific public -- these all involved institutions and processes that are ultimately social in nature.
Isn’t that obvious? So it seems to me. But it also means that everything counts as socially constructed, if seen from a certain angle, which may not count as a contribution to knowledge.
A new book from Temple University Press, Darin Weinberg’s Contemporary Social Constructionism: Key Themes, struggles valiantly to defend the idea from its sillier manifestations and its more inane caricatures. The author is a reader in sociology and fellow at King’s College, University of Cambridge. “While it is certainly true that a handful of the more extravagant and intellectually careless writers associated with constructionism have abandoned the idea of using empirical evidence to resolve debates,” he writes, not naming any names but manifestly glaring at people over in the humanities, “they are a small and shrinking minority.”
Good social constructionist work, he insists, “is best understood as a variety of empirically grounded social scientific research,” which by “turn[ing] from putatively universal standards to the systematic scrutiny of the local standards undergirding specific research agendas” enables the forging of “the tools necessary for discerning and fostering epistemic progress.”
The due epistemic diligence of the social scientists renders them utterly distinct from the postmodernists and deconstructionists, who, by Weinberg's reckoning, have done great damage to social constructionism’s credit rating. “While they may encourage more historically and politically sensitive intuitions regarding the production of literature,” he allows, “they are considerably less helpful when it comes to designing, implementing, and debating the merits of empirically grounded social scientific research projects.”
And that is being nice about it. A few pages later, Weinberg pronounces anathema upon the non-social scientific social-constructionists. They are “at best pseudo-empirical and, at worst, overtly opposed to the notion that empirical evidence might be used to improve our understanding of the world or resolve disputes about worldly events.”
Such hearty enthusiasm for throwing his humanistic colleagues under the bus is difficult to gainsay, even when one doubts that a theoretical approach to art or literature also needs to be “helpful when it comes to designing, implementing, and debating the merits of empirically grounded social scientific research projects.” But such criticisms are not meant as a definitive judgment of Weinberg’s project. A sentence like “Derrida sought to use ‘deconstruction’ to demonstrate how specific readings of texts require specific contextualizations of them” is evidence chiefly of the author’s willingness to hazard a guess.
The book’s central concern, rather, is to defend what Weinberg calls “the social constructionist ethos” as the truest and most forthright contemporary manifestation of sociology’s confidence in its own disciplinary status. As such, it stresses “the crucially important emphases” that Weinberg sees as implicit in the concept of the social – emphases “on shared human endeavor, on relation over isolation, on process over stasis, and on collective over individual, as well as the monumental epistemic value of showing just how deeply influenced we are by the various sociohistorical contexts in which we live and are sustained.”
But this positive program is less in evidence than Weinberg’s effort to close off “the social” as something that must not and cannot be determined by anything outside itself -- the biological, psychological, economic, or ecological domains, for example. “The social” becomes a kind of demiurge: constituting the world, then somehow transcending its manifestations.
It left this reader with the sense of witnessing a disciplinary turf war, extended to almost cosmological dimensions. The idea of social construction is a big one, for sure. But even an XXL can only be stretched just so far before it turns baggy and formless -- and stays that way for good.