The cover of Roger Owen’s The Rise and Fall of Arab Presidents for Life (Harvard University Press) shows Muammar Qaddafi and Bashar Assad in happier days. The genial and beloved Libyan, so modest that he claimed no higher position than colonel, stands with fist in the air, militant and feisty as ever. The Syrian technocrat wears what can only be called a big goofy grin. They look vigorous, confident, secure.
Does Assad ever think back on that era now, in the quiet moments between massacres of his own people? The recent fortunes of his peer group must inspire some nostalgia, as well as rage. The current situation of Hosni Mubarak (no longer a pharaoh, not yet a mummy) is bad enough. It proves that compromise is a slippery slope; holding on to power demands a willingness to fight to the death. As the example of Libya shows, even that may not be enough.
But the real horror of the situation, for Assad anyway – a far greater concern than any report of his armed forces “killing and sexually abusing children and using them as human shields” – is that his people might not just fight him to the death, but deliver it to him in person, and record themselves doing so with their cell phones, for all the world to watch: lèse majesté, then, with a vengeance.
It’s impossible to read Owen’s book without divided attention -- one eye on the page, the other on the news. In that respect, the book is timely. But it is also untimely, and not just because Owen, a professor of Middle East history at Harvard University, completed it a year ago. The endnotes cite one article dated as late as August 2011; otherwise, the references suggest he finished it last May.
In fact most of it was done at the end of 2010. It was conceived and written, that is, just before Mohamed Bouazizi’s suicide by self-immolation (his final protest against the Tunisian authorities who had made it impossible for him to earn a living) set the whole region ablaze. Even with a final chapter on “The Sudden Fall” of the old order, Owen’s book is very much a pre-Arab Spring text. A description from the Harvard University Press website says the book “exposes for the first time the origins and dynamics of a governmental system that largely defined the Arab Middle East in the twentieth century.” This is, to be blunt, misleading. The Rise and Fall of Arab Presidents for Life is very much in the mainstream of recent U.S. scholarship on the region. Analysts have been considering the various flavors of political authoritarianism there for some time now. Owen’s concerns are their concerns. The orientation of this work is more or less epitomized by the title of a well-known journal article: “Why Are There No Arab Democracies?”
Owen’s “presidents for life” ruled countries that others have identified as cases of “dynastic republicanism” or “monarchial presidency.” His list includes Algeria, Egypt, Iraq, Lebanon, Libya, Sudan, Syria, Tunisia, and Yemen. Lebanon is an outlier here, as is post-Saddam Iraq. Owen describes them as having “constrained presidencies”: the office is relatively weak -- dominated by outside forces (Syria in the case of Lebanon, the US with Iraq) and obliged to tread carefully given sectarian divisions within the country. In Iraq’s case, a presidency-for-life once existed, but Lebanese presidents have left office voluntarily, except, of course, when assassinated.
The other regimes, by contrast, have been exceedingly stable. That stability might be explained by the efforts of any given state’s repressive apparatus, of course; but then one must explain why the repressive apparatus itself proved so trustworthy and loyal. Junior military officers can be ambitious, after all. But once the likes of Saddam Hussein and Colonel Qaddafi assumed command, they kept it -- at least until outside military forces broke their grip.
From specialized work on countries in the region, Owen extracts and synthesizes enough shared elements to produce a generalized model of the arrangement that proved so durable for so long. The origins can be traced to what he calls “the authoritarian presidential regimes established soon after independence,” usually in the wake of the Second World War. A few readers will wonder if that’s going far back enough. The experience of colonization is not the schooling in pluralism and rule-of-law it is sometimes made out to be.
In any event, the incentives for a postcolonial concentration of authority are clear enough. Establishing national sovereignty is an obvious one, and in the early days it meant bringing much of the economy under state control as necessary to direct production for local needs. All the better if oil was the chief commodity. Besides creating a middle class of engineers and other professionals to run industry, national revenues could be directed towards building infrastructure, meaning employment for a wide range of skill grades.
State control of the economy assured plenty of money to fund the military, thereby consolidating another vested interest in stability, while at the same time building up a separate internal security forces to keep an eye on the military as well as the civilian population. Any paranoia on the part of the presidents-for-life was completely justified. Owen notes that by the early 1970s, most of them had come into office from the military and could appreciate the need to build “coup-proof regimes.”
Putting family members into key positions throughout the system gave the presidents-for-life another layer of oversight and control. In time, some regimes could even allow a bit of parliamentary politics as a valve to let off steam. And even when their economies underwent varying degrees of privatization, things remained well in hand. Previously nationalized industries were sold off to cronies, and only trusted people permitted to deal with foreign companies.
Enough people and institutions had enough of an investment in this arrangement to make continuity of leadership worth their while. In Syria, Assad succeeded his father. In Egypt, the younger Mubarak’s inauguration was a matter of time. This was tolerable for the people who benefited from the arrangement, and it gave them an incentive to ignore those who didn’t.
At a certain intensity, corruption no longer counts as corruption; it’s just how things get done. And the men who served as godfather to each national syndicate enjoyed the benefit of watching one another do their jobs. They were a cohort. Owen calls it the “demonstration effect” – the diffusion of authoritarian techniques by example.
It clearly worked, as that photo of Qaddafi and Assad shows – at least until it didn’t. Four of the nine presidents-for-life in power on the first day of 2011 have left office, and another has agreed to step down when his term ends. As for the other four, well, it ain’t over ’til it’s over. And nobody saw the reversal coming, certainly not on the scale it reached.
In an essay titled “The Middle East Academic Community and the ‘Winter of Arab Discontent’: Why Did We Miss It?” (published last year), F. Gregory Gause, a professor of political science at the University of Vermont, answers that he and his colleagues were “focused (and in many ways rightly so) on explaining the anomalous regime stability that characterized the Arab world in the 40 years leading up to these events.”
It was never, he says, a matter of assuming that people were happy, but rather of focusing on the efficacy and robustness of authoritarian institutions. That sounds like a good description of the topic of The Rise and Fall of Arab Presidents for Life.
Gause’s self-critical remarks seem worth quoting at length. A single-minded concern with the regimes’ strength “led us to discount the possibility of mass political mobilization, largely because we had seen previous efforts in this direction fail. It led us to make assumptions about the relationship between regimes and their militaries that turned out, in some cases, not to be true. It led us to overestimate the regime-strengthening effects of neo-liberal economic reform. It led us to discount the regime-threatening effects of demographic change and new social media, not because we did not recognize the fact of demographic change and new social media, but rather because we thought the regimes were strong enough to absorb the pressures generated by them.”
Owen’s last chapter takes up those undetected factors in the fragility of the monarchial presidential regimes, and concludes that the Arab Spring was another instance of the “demonstration effect” at work in the region – people learning from and using one another’s experience, as their leaders had. Fair enough, I guess. But the most important books on 2011 will begin at that point, rather than end there.
Thomas Hobbes said that if he had read as much as others he would be as ignorant as they. Today most university faculty lack Hobbes's aplomb, and everyone complains that there's simply too much to read. The flood of books, articles, and blog posts never stops. (And here's one more!) Academic norms require that scholars "engage the literature," but the potentially relevant literature is enormous, especially for those who aspire to some kind of interdisciplinary approach. And at many universities, declining budgets and increasing administrative duties threaten the little time left for reading.
To make matters worse, academic culture seems carefully designed to maximize worries that one hasn't read enough. The convention of obsequious citation ensures that everyone thinks others have read more than they have. And now some journals are trying to raise their impact factor by pressuring authors to pad their articles with superfluous references — pressure experienced by one in five academics, according to a recent study.
How many times have you heard someone publicly admit to not having read a key book in their field? Never. Perhaps you know the game "humiliation" from the David Lodge novel? If not, just nod and smile in feigned recognition, then secretly go look it up. Of course, those with more cultural and professional power may be able to afford admitting they haven't read something — "You know, believe it or not, I've actually never read Hamlet" — but by breaking the norm, they reinforce both their status and the norm itself.
There are no simple fixes, but here are two basic approaches to managing the overload of "must read" publications: demarcate and associate.
The first approach separates necessary from unnecessary reading, good from bad. Some handy demarcation criteria appear in the philosopher Harry Frankfurt's charming book On Bullshit. (It was all the rage during the Bush administration, but it's still worth reading, especially since it's extremely short.) Frankfurt says that bullshit is not the same as lying. Bullshit is speech or action that reveals an utter lack of concern with truth (presumably in areas where some kind of truth matters and can be discerned through established criteria, which is more problematic than Frankfurt admits). Frankfurt thinks that mass democracies are especially prone to bullshit, because they encourage every citizen to say something about every subject. Another source of bullshit is our confessional culture of personal authenticity and sincerity, based on the mistaken assumption that it's easier to understand yourself than the world. And although Frankfurt doesn't discuss it, one of the most fecund sources of bullshit is the doctrine of publish-or-perish, which fosters concern with professional status rather than saying something true and important. Other things being equal, cutting the bullshit from your reading list probably entails avoiding publications that are so obsessed with their own narrow disciplinary concerns (we've all been there) that they never get around to addressing other people or things.
You might object that you need to read at least some of a book to know that you don't need to read more, which leads to the second approach: association. Rather than focus on separating good from bad, try to see how good, bad, and everything between fits together. Learn how in Pierre Bayard's How to Talk about Books You Haven't Read. The title sounds like a guide for bullshitters, but Bayard challenges common assumptions about what it means to read a book in the first place. He notes that everyone interprets books differently, and the moment you finish reading a book you start forgetting it. What's most important about a book is not the details of its content, but its place in a cultural discourse. So a person who's recently heard about a book, maybe read a review and skimmed a few pages, could have more to say about it than a person who read it cover-to-cover a few years ago. Bayard uses a refreshingly humble citation system: UB: unknown book; SB: skimmed book; HB: heard about book; FB: forgotten book. And yes, to repeat every reviewer's joke: I actually read Bayard's book, whatever that means.
Maybe I could have skipped to the last chapter, where Bayard argues that, for the critic, books should fulfill the same function as nature for the writer or painter: "not to serve as the object of his work, but to stimulate him to write." He says that "what is essential is to speak about ourselves and not about books, or to speak about ourselves by way of books." Maybe so, in part. But then Bayard writes, "In the end, we need not fear lying about the text, but only lying about ourselves." Even for a literary critic, that sounds like the sort of narcissism that would drive Frankfurt nuts.
So, no surprises here: the answer must lie in both identifying what's worth reading and learning how it fits together with everything else. That may help one find an appropriate balance between reading and writing, between understanding the world and expressing oneself.
Enough said. Now I have some reading to do.
Mark B. Brown is associate professor of government at California State University at Sacramento.
Student presentations are a common feature of many courses, but presentation quality varies dramatically. Nearly every student has endured text-heavy PowerPoints read verbatim and doubted the credibility of a presentation’s content. Yet, student presentations are pedagogically important; they provide students with an opportunity to take ownership of an issue and improve their public speaking ability – a valuable, employment-related skill.
Faculty members often urge students to meet for assistance with their presentations, but only the outliers show up. Detailed instructions for producing quality presentations sometimes go unnoticed or ignored. Even dedicated, high-achieving students can miss the mark come presentation day. The end result is a waste of valuable instruction time: fifteen minutes of ineffective student-to-student instruction, multiplied by 25 student presentations, equals more than six person-hours of lost learning.
Who is at fault? An episode of “The Apprentice,” which aired fall 2010, provides a possible answer. Donald Trump assigned two teams the same task. One team failed miserably. In the boardroom, Trump showed no mercy to Gene, who had done a poor job presenting, or Wade, the project manager who had selected Gene, but who had failed to verify Gene’s ability to perform this important task.
Each week Trump fires one person. Should Trump fire Gene, the unprepared presenter, or Wade, the project manager who failed to put quality-control procedures in place? In what was described as a shocking move, Trump fired both men. However, his decision was sound; Gene performed poorly, and Wade, who as project manager was ultimately responsible for the quality of the work, failed to do his job.
What if a student performs like Gene? What should happen if a student provides erroneous, irrelevant, and unimportant information, fails to provide credible references, and is unable to provide answers to basic questions? Who should be "fired" – the student who delivered an unacceptable presentation, the professor who had no advance knowledge of the presentation’s content and allowed it to proceed during class, or both?
From my experience, requiring students to meet with the professor at least one week prior to their presentations in order to obtain permission to present is an effective method that dramatically improves student presentations and ensures more effective use of instructional time. It can be framed as a business meeting in which the vice president (professor) requests a meeting to review the work of the lead presenter (student) prior to presenting to an important client (the class). This meeting might even be graded. Certainly, a VP would not wait until the big presentation to evaluate the work of the lead presenter.
The purpose of these meetings is not simply to evaluate and approve student work. The meetings provide an opportunity to assist the student inside the "zone of proximal development"; I see what the student is able to do without assistance and what he or she can achieve with assistance.
At the start of each individual meeting I address an e-mail to the student and then add notes, links to videos and articles, and electronic documents archived in desktop folders. Although most undergraduates have grown up in the information age, many of these so-called “digital natives” cannot sift through data and identify what is important. Although I mention in class that the founder of Wikipedia discourages academic use of his community-generated encyclopedia, it still appears on slides. Fortunately, each Wikipedia citation is left on the cutting-room floor.
Determining the credibility of other websites involves asking students, “What do you know about this organization? What is their mission? Who is responsible for the content?” We explore the site to find the answers. Once a source is found to be credible, deciding what information to include is guided by the question, “Knowing that memory is imperfect, what will students retain from your presentation one year later?”
For presentations in my class, students must carefully select one or more videos and show clips that total five minutes. We discuss the credibility of the video and determine whether it repeats what the student will discuss. Viewing the video is essential. Prior to the adoption of my current policy, a student began to show an inappropriate video during his presentation. The video included profanity, and lacked any apparent educational value. I asked him to pause the video and explain why he chose the video and what we could expect to see. He replied, "I don’t know – I haven’t seen it." Now, before the video’s debut in class, I say, “Tell me about the video. Why did you choose this video and not another?”
During the meeting I ask students to answer the discussion questions they plan to use. Often the questions are duds and their answers brief. With Bloom’s Taxonomy and higher-order learning outcomes in mind, we refine the questions and generate ones more likely to inspire passionate debate.
After three semesters of observational data, the improvement has been unmistakable, and the early results of an Institutional Review Board-approved study indicate that 84 percent of students agree or strongly agree that the meeting was beneficial and 77 percent agree or strongly agree that the meeting helped them to avoid procrastination. One student who had completed over 70 credit hours wrote, "This was the first required faculty-student meeting I have encountered in my college career. It was highly beneficial…. If there was no meeting, my presentation would have been a major disaster." A graduate wrote, "By setting an earlier 'due date' I avoided throwing together a presentation the night before I actually had to present it." The highest compliment came from a student who blurted out in class, "These are better than many professors’ presentations."
I have found a number of benefits to required meetings with students beyond the improved quality of the presentations themselves. These face-to-face meetings typically leave me with a greater sense of a personal relationship with the student, and I would venture to say the feeling is mutual. Taking the time to meet outside of normal class hours clearly indicates to students that the professor cares. It also gives them a better idea of the rigor that underlies the peer-review process – how their professors’ scholarship thrives on the constructive criticism of others – and how this can ultimately elevate the quality of their own work. Finally, it might be considered a “high-impact practice” that opens minds and improves retention. Although there may be no panacea for subpar student presentations, the lesson I learned from "The Apprentice" – that I am at least partially accountable for the quality of students’ presentations – has improved the classes I teach and the quality of my relationships with students.
Christopher A. Hirschler is an assistant professor of health studies at Monmouth University.