Chegg takes to social media after receiving cease-and-desist order from Southern Connecticut State U.

Southern Connecticut State tells Chegg that university's contract with Barnes & Noble bans anyone else from marketing textbook rentals to students.

 

New book explores achievements and challenges of China's 'rising research universities'

New book explores achievements and challenges of Chinese research universities as they continue their quest to achieve "world-class" status.

Review of John Urry, "Offshoring"

Searching JSTOR for the term “globalization” yields well over 89,000 results, which hardly comes as a surprise. But its earliest appearance is odd: it's in an article from the September 1947 issue of The Journal of Educational Research called “A Rational Technique of Handwriting” by H. Callewaert, identified by the editors as a Belgian medical doctor.

The article is the only thing he wrote that I have been able to locate, but it makes clear the man's deep commitment to good penmanship. To establish its teaching on a firm, scientific basis, Callewaert felt compelled to criticize “the notion of globalization” – which, in the first half of the 20th century at least, derived from the work of Jean-Ovide Decroly, a pioneering figure in educational psychology, also from Belgium. (Searching JSTOR for the variant spelling “globalisation” turns up a citation of Decroly in The Philosophical Review as early as 1928.) An interesting paper on Decroly available from the UNESCO website explains that globalization in his sense referred to something that happens around the ages of 6 and 7, as children's experiences of play, curiosity, exercise, formal instruction, and so on develop their “motor, sensory, perceptual, affective, intellectual and expressive capacities” which form “the basis of all their future learning.” In globalization, these activities all come together to form a world.

Callewaert’s complaint was that teaching kids to write in block letters at that age and trusting that “globalization” will enable them to develop the motor skills needed for readable cursive is an error -- whereas teaching them his system of “rational writing,” with its “harmonious coordination of the movements of our fingers,” will do the trick. (There are diagrams.)

Two things sent me off to explore the early history of the term, which is also used in higher mathematics. One of them was a realization that many friends and colleagues -- most of them, probably -- cannot remember a time when references to globalization were not ubiquitous and taken for granted. They’ve heard about it ever since they were busy with their own globalization, à la Jean-Ovide Decroly.

Glancing back at JSTOR, we find that papers in economics and international relations begin mentioning globalization in the early 1950s, though not very often. Less than one percent of the items using it (in any of its senses) appeared before 1990. After that, the avalanche: more than 77 percent of the references have been in papers published since the turn of the millennium.

A new book by John Urry called Offshoring (Polity) includes a thumbnail intellectual history of the quarter-century or so since the term became inescapable. “At least a hundred studies each year documented the nature and impact of many global processes,” he writes, surely by way of extreme understatement. “Overall, it seemed that economies, finance, media, migration, tourism, politics, family life, friendship, the environment, the internet, and so on, were becoming less structured within nation-states and increasingly organized across the globe.” (Urry, a professor of sociology at Lancaster University, is the author of a number of such studies.)

Globalization was, so to speak, a social order with the World Wide Web as its template: “characterized by mostly seamless jumps from link to link, person to person, company to company, without regard to conventional, national boundaries through which information was historically located, stored, and curated.”

There were worriers, since sinister and even fatal things could also catch a ride in the flow: terrorism, organized crime, ferocious microorganisms, etc. But the prevailing wisdom seemed to be that globalization was an irresistible force, and an irreversible one; that we were getting more cosmopolitan all the time; and that the cure for the ailments of globalization was more globalization (as John Dewey said about democracy).

Such was the rosy color that the concept took on in the 1990s, which now looks like a period of rather decadent optimism. Offshoring is Urry’s look at what we could call the actually existing globalization of today. Its title refers to what Urry considers the central dynamic of the present-day world: a continuous “moving [of] resources, practices, peoples, and monies from one territory to others, often along routes hidden from view.” It is not a new term, and in common usage it calls to mind at least three well-known activities. One is the transfer of manufacturing from a highly industrialized country to one where the costs of production, in particular wages, are much lower. (In its first appearance in JSTOR, from 1987, “offshoring” is used in this sense.) 

Another kind of offshoring is the concealment of assets through banks or business entities set up in countries where financial regulations and taxes are minimal, if they even exist. A company with its headquarters in the Cayman Islands, for example, is unlikely to have an office or personnel there; its affairs can usually be handled through a post-office box. And finally, there is offshore drilling for oil. 

Distinct as these activities are, Urry understands them as converging aspects of a process that defines the dark underside of globalization. Forget the happy vision of goods, services, culture, and information being exchanged through channels that cut across and negate boundaries between nation-states. The reality is one of an increasingly symbiotic relationship between the economies of prosperous societies and the governments of various countries that serve as tax havens:

“[M]ore than half of world trade passes through these havens, almost all High Net Worth Individuals possess offshore accounts enabling tax ‘planning,’ [and] ninety-nine of Europe’s largest companies use offshore subsidiaries…. Overall, one quarter to one third of all wealth is held ‘offshore.’ The scale of this offshored money makes the world much more unequal than previous researchers ever imagined. Fewer than 100 million people own the astonishing $21 trillion offshore fortune. This is equivalent to the combined GDPs of the U.S. and Japan, the world’s first and third largest economies.”

With enormous masses of wealth thus safely secured off in the distance -- far from the clutches of the nation-state, which might insist on diverting some of it to schools, infrastructure, etc. -- postindustrial societies must adapt to “offshored” manufacturing and energy resources. (The author has in mind dependence on fuel coming into a country from any source, not just the rigs pumping oil from beneath the seafloor.) At the same time, another sort of offshoring is under way: the ocean itself is occupied by shipping platforms so huge that they cannot dock in any harbor, and there are “arcane ownership patterns at sea which make it almost impossible to pin down and ensure that ships are properly built, maintained, and kept seaworthy.”

Urry’s earlier work has explored the connections between social organization and the experience of space. Here, he seems to take aim at the old claims for globalization as a force for mobility, links across cultures, and even the emergence of a sense of planetary citizenship. The spaces created by offshoring are very different – characterized by concealment, restricted access, and distances between social strata that look like bottomless chasms. Urry's proposed remedy, in a nutshell, is for the nation-state to reimpose taxation on all that extraterritorial wealth. He must have felt obliged to suggest something, but it's like recommending that you escape from a threatening situation by learning to fly. One would appreciate a lesson in how it is to be done.

 

Review of David R. Shumway, "Rock Star: The Making of Musical Icons from Elvis to Springsteen"

Most readers’ first response to David Shumway’s Rock Star: The Making of Musical Icons from Elvis to Springsteen (Johns Hopkins University Press) will be to scan its table of contents and index with perplexity at the performers left out, or barely mentioned. Speaking on behalf of (among others) Lou Reed, Joe Strummer, and Sly and the Family Stone fans everywhere, let me say: There will be unhappiness.

For that matter, just listing the featured artists may do the trick. Besides the names given in the subtitle, we find James Brown, Bob Dylan, the Rolling Stones, the Grateful Dead, and Joni Mitchell – something like the lineup for an hour of programming at a classic rock station. Shumway, a professor of English at Carnegie Mellon University, makes no claim to be writing the history of rock, much less formulating a canon. The choice of artists is expressly a matter of his own tastes, although he avoids the sort of critical impressionism (see: Lester Bangs) that often prevails in rock writing. The author is a fan, meaning he has a history with the music. But his attention extends wider and deeper than that, and it moves in directions that should be of interest to any reader who can get past “Why isn’t _____ here?”

More than a set of commentaries on individuals and groups, Rock Star is a critical study of a cultural category -- and a reflection on its conditions of existence. Conditions which are now, arguably, far along the way to disappearing.

The name of the first rock song or performer is a matter for debate, but not the identity of the first rock star. Elvis had not only the hits but the pervasive, multimedia presence that Shumway regards as definitive. Concurring with scholars who have traced the metamorphoses of fame across the ages (from the glory of heroic warriors to the nuisance of inexplicable celebrities), Shumway regards the movie industry as the birthplace of “the star” as a 20th-century phenomenon: a performer whose talent, personality, and erotic appeal might be cultivated and projected in a very profitable way for everyone involved.

The audience enjoyed what the star did on screen, of course, but was also fascinated by the “real” person behind those characters. The scare quotes are necessary given that the background and private life presented to the public were often somewhat fictionalized and stage-managed. Fans were not always oblivious to the workings of the fame machine. But that only heightened the desire for an authentic knowledge of the star.

Elvis could never have set out to be a rock star, of course – and by the time Hollywood came around to cast him in dozens of films, he was already an icon thanks to recordings and television appearances. But his fame was of a newer and more symbolically charged kind than that of earlier teen idols.

Elvis was performing African-American musical styles and dance steps on network television just a few years after Brown v. Board of Education – but that wasn’t all. “The terms in which Elvis’s performance was discussed,” Shumway writes, “are ones usually applied to striptease: for example, ‘bumping and grinding.’ ” He dressed like a juvenile delinquent (the object of great public concern at the time) while being attentive to his appearance, in particular his hair, to a degree that newspaper writers considered feminine.

The indignation Elvis generated rolled up a number of moral panics into one, and the fans loved him for it. That he was committing all these outrages while being a soft-spoken, polite young man – one willing to wear a coat and tails to sing “Hound Dog” to a basset hound on "The Steve Allen Show" (and later to put on Army fatigues, when Uncle Sam insisted) – only made the star power more intense: those not outraged by him could imagine him as a friend.

Elvis was the prototype, but he wasn’t a template. Shumway’s other examples of the rock star share a penchant for capturing and expressing social issues and cultural conflicts in both their songs and their self-presentation, onstage and off. But they do this in very different ways – in the cases of James Brown and Bob Dylan, changing across the length of their careers, gaining and losing sections of their audience with each new phase. These public self-reinventions were sometimes overtly political (James Brown's support for Richard Nixon being one example) but also registered in changes of style and sound. In their day, such changes were sometimes not just reactions to the news but part of it, and part of the conversations people had about the world.

Besides the size of the audience, what distinguishes the rock star from other performers is the length of the career, or so goes Shumway’s interpretation of the phenomenon. But rewarding as the book can be – it put songs or albums I’ve heard a thousand times into an interesting new context – some of the omissions are odd. In particular (and keeping within the timespan Shumway covers) the absence of Jimi Hendrix, Janis Joplin, and Jim Morrison seems problematic. I say that not as a fan disappointed not to find them, but simply on the grounds that each one played an enormous role in constituting what people mean by the term “rock star.” (That includes other rock stars. Patti Smith elevated Morrison to mythological status in her own work, while the fact that all three died at 27 was on Kurt Cobain’s mind when he killed himself at the same age.)

I wrote to Shumway to ask about that. (Also to express relief that he left out Alice Cooper, my own rock-history obsession. Publishers offering six-figure advances for a work of cultural criticism should make their bids by email.)

“My choices are to some extent arbitrary,” he wrote back. “One bias that shaped them is my preference for less theatrical performers as opposed to people such as David Bowie (who I have written about, but chose not to include here) or Alice Cooper.” But leaving out the three who died at 27 “was more than a product of bias. Since I wanted to explore rock stars’ personas, I believed that it was more interesting to write about people who didn’t seem to be playing characters on stage or record. I agree with you about the great influence of Jim Morrison, Janis Joplin, and Jimi Hendrix, but I don’t think their personas have the complexity of the ones I did write about. And, they didn’t figure politically to the degree that my seven did. The main point, however, is that there is lots of work to be done here, and I hope that other critics will examine the personas [of] the many other rock stars I did not include.”

The other thing that struck me while reading Rock Star was the sense that it portrayed a world now lost, or at least fading into memory. Rock is so splintered now, and the "technology of celebrity" so pervasive, that the kind of public presence Shumway describes might no longer be possible.

“The cause is less the prevalence of celebrity,” he replied, “than the decline of the mass media. Stars are never made by just one medium, but by the interaction of several. Earlier stars depended on newspapers and magazines to keep them alive in their fans’ hearts and minds between performances. Radio and TV intensified these effects. And of course, movie studios and record companies had a great deal of control over what the public got to see and hear. The result was that very many people saw and heard the same performances and read the same gossip or interviews. With the fragmentation of the media into increasingly smaller niches, that is no longer the case. The role of the internet in music distribution has had an especially devastating effect on rock stardom by reducing record companies’ income and the listeners’ need for albums. The companies aren’t investing as much in making stars and listeners are buying songs they like regardless of who sings them.”

That's not a bad thing, as such, but it makes for a more compartmentalized culture, while the beautiful thing about rock 'n' roll is when it blows the doors off their hinges.

 

 

Essay on study of ebook publishing

A technological visionary created a little stir in the late ’00s by declaring that the era of the paper-and-ink book as dominant cultural form was winding down rapidly as the ebook took its place. As I recall, the switch-off was supposed to be complete by the year 2015 -- though not by a particular date, making it impossible to mark your day planner accordingly.

Cultural dominance is hard to measure. And while we do have sales figures, even they leave room for interpretation. In the June issue of Information Research, the peer-reviewed journal’s founder T.D. Wilson takes a look at variations in the numbers across national borders and language differences in a paper called “The E-Book Phenomenon: A Disruptive Technology.” Wilson is a senior professor at the Swedish School of Library and Information Science, University of Borås, and his paper is in part a report on research on the impact of e-publishing in Sweden.

He notes that the Book Industry Study Group, a publishing-industry research and policy organization, reported last year that ebook sales in the United States grew by 45 percent between 2011 and 2012 – although the total of 457 million ebooks that readers purchased in 2012 still lagged 100 million copies behind the number of hardbacks sold the same year. And while sales in Britain also surged by 89 percent over the same period, the rate of growth for non-Anglophone ebooks has been far more modest.

Often it’s simply a matter of the size of the potential audience. “Sweden is a country of only 9.5 million people,” Wilson writes, “so the local market is small compared with, say, the UK with 60 million, or the United States with 314 million.” And someone who knows Swedish is far more likely to be able to read English than vice versa. The consequences are particularly noticeable in the market for scholarly publications. Swedish research libraries “already spend more on e-resources than on print materials,” Wilson writes, “and university librarians expect the proportion to grow. The greater proportion of e-books in university libraries are in the English language, especially in science, technology and medicine, since this is the language of international scholarship in these fields.”

Whether or not status as a world language is a necessary condition for robust ebook sales, it is clearly not a sufficient one. Some 200 million people around the world use French as a primary or secondary language. But the pace of Francophone ebook publishing has been, pardon the expression, snail-like -- growing just 3 percent per year, with “66 percent of French people saying that they had never read an ebook and did not intend to do so,” according to a study Wilson cites. And Japanese readers, too, seem to have retained their loyalty to the printed word: “there are more bookshops in Japan (almost 15,000 in 2012) than there are in the entire U.S.A. (just over 12,000 in 2012).”

Meanwhile, a report issued not long after Wilson’s paper appeared shows that the steady forward march of the ebook in the U.S. has lately taken a turn sideways. The remarkable acceleration in sales between 2008 and 2012 hit a wall in 2013. Ebooks brought in as much that year ($3 billion) as the year before. A number of factors were involved, no doubt, from economic conditions to an inexhaustible demand for Fifty Shades of Grey sequels. But it’s also worth noting that even with their sales plateauing, ebooks did a little better than trade publishing as a whole, where revenues contracted by about $300 million.

And perhaps more importantly, Wilson points to a number of developments suggesting that the ebook format is on the way to becoming its own, full-fledged disruptive technology. Not in the way that, say, the mobile phone is disruptive (such that you cannot count on reading in the stacks of a library without hearing an undergraduate’s full-throated exchange of pleasantries with someone only ever addressed as “dude”) but rather in the sense identified by Clayton Christensen, a professor of business administration at the Harvard Business School.

Disruption, in Christensen’s usage, refers, as his website explains it, to “a process by which a product or service takes root initially in simple applications at the bottom of a market and then relentlessly moves up market, eventually displacing established competitors.” An example he gives in an article for Foreign Affairs is, not surprisingly, the personal computer, which was initially sold to hobbyists -- something far less powerful as a device, and far less profitable as a commodity, than “real” computers of the day.

The company producing a high-end, state-of-the-art technology becomes a victim of its own success at meeting the demands of clientele who can appreciate (and afford) its product. By contrast, the “disruptive” innovation is much less effective and appealing to such users. It leaves so much room for improvement that its quality can only get better over time, as those manufacturing and using it explore and refine its potentials – without the help of better-established companies, but also without their blinkers. By the time its potential is being realized, the disruptive technology has developed its own infrastructure for manufacture and maintenance, with a distinct customer base.

How closely the ebook may resemble the disruptive-technology model is something Wilson doesn’t assess in his paper. And in some ways, I think, it’s a bad fit. The author himself points out that when the first commercial e-readers went on the market in 1998, it was with the backing of major publishing companies (empires, really) such as Random House and Barnes & Noble. And it’s not even as if the ebook and codex formats were destined to reach different, much less mutually exclusive, audiences. The number of ebook readers who have abandoned print entirely is quite small – in the U.S., about 5 percent.

But Wilson does identify a number of developments that could prove disruptive, in Christensen’s sense. Self-published authors can and do reach large readerships through online retailers. The software needed to convert a manuscript into various ebook formats has become more readily available, and people dedicated to developing the skills could well bring out better-designed ebooks than well-established publishers do now. (Alas, the bar is not high.)

Likewise, I wonder if the commercial barriers to ebook publishing in what Wilson calls “small-language countries” might not be surmounted in a single bound if the right author wrote the right book at a decisive moment. Unlike that Silicon Valley visionary who prophesied the irreversible decline of the printed book, I don’t see it as a matter of technology determining what counts as a major cultural medium. That’s up to writers, ultimately, and to readers as well.

How rumors spread via sloppy citation practices

New article points out that through lazy or fraudulent citations, scholars spread rumors -- at times creating "academic urban legends." The story of spinach and an allegedly misplaced decimal point shows how.

The Chegg-Ingram Partnership

Chegg's new relationship with the Ingram Content Group could be key to Chegg expanding in digital textbook markets, The New York Times reported. Chegg was founded as a textbook-rental business and of late has been pushing to grow in e-texts. The Times noted that a major challenge for Chegg has been buying and distributing print textbooks, and that the deal with Ingram -- which will provide the physical texts -- frees Chegg to expand elsewhere.

 

Book argues that mentoring programs should try to unveil colleges' "hidden curriculum"

Are students evaluated on their academic work, or on how well they navigate the college environment? Both, a recent book argues -- which is why mentoring programs should aim to unmask the "hidden curriculum" for at-risk students.

New book argues that education schools aren't adequately preparing teachers

New book argues that education schools too often neglect teacher training -- leaving it up to teachers to figure things out on their own.

Review of Anna M. Young, "Prophets, Gurus, and Pundits: Rhetorical Styles and Public Engagement"

Many a thick academic tome turns out to be a journal article wearing a fat suit. So all due credit to Anna M. Young, whose Prophets, Gurus, and Pundits: Rhetorical Styles and Public Engagement was published by Southern Illinois University Press this year. Her premise is sound; her line of argument looks promising; and she gets right to work without the rigmarole associated with what someone once described as the scholarly “Yes, I read that one too” tic.

Indeed, several quite good papers could be written exploring the implicit or underdeveloped aspects of her approach to the role and the rhetoric of the public intellectual. Young is an associate professor of communication at Pacific Lutheran University, in Tacoma, Washington. Much of the book is extremely contemporary in emphasis (to a fault, really, just to get my complaint about it out front here). But the issue it explores goes back at least to ancient Rome -- quite a while before C. Wright Mills got around to coining the expression “public intellectual” in 1958, in any case.

The matter in question emerges in Cicero’s dialogue De Oratore, where Young finds a discussion of a basic problem in public life, then and now. Cicero, or his stand-in character anyway, states that for someone who wants to contribute to the public discussion of important matters, “knowledge of a vast number of things is necessary, without which volubility of words is empty and ridiculous.”

On the other hand -- as Cicero has a different character point out -- mere possession of learning, however deep and wide, is no guarantee of being able to communicate that learning to others. (The point will not be lost on those of you surreptitiously reading this column on your mobile phones at a conference.)

Nobody “can be eloquent on a subject that he does not understand,” says Cicero. Yet even “if he understands a subject ever so well, but is ignorant of how to form and polish his speech, he cannot express himself eloquently about what he does understand.”

And so what is required is the supplementary form of knowledge called rhetoric. The field had its detractors well before Cicero came along. But rhetoric as defined by Aristotle referred not to elegant and flowery bullshit but rather to the art of making cogent and persuasive arguments.

Rhetoric taught how to convey information, ideas, and attitudes by selecting the right words, in the right order, to deliver in a manner appropriate to a particular audience -- thereby convincing it of an argument, generally as a step toward moving it to take a given action or come to a certain judgment or decision. The ancient treatises contain not a little of what would later count as psychology and sociology, and modern rhetorical theory extends its interdisciplinary mandate beyond the study of speech, into all other forms of media. But in its applied form, rhetoric continues to be a skill of skills – the art of using and coordinating a number of registers of communication at the same time: determining the vocabulary, gestures, tone and volume of voice, and so on best-suited to message and audience.  

When the expression “public intellectual” was revived by Russell Jacoby in the late 1980s, it served in large part to express unhappiness with the rhetorical obtuseness of academics, particularly in the humanities and social sciences. The frustration was not usually expressed quite that way. It instead took the form of a complaint that intellectuals were selling their birthright as engaged social and cultural critics in exchange for the mess of pottage known as tenure. It left them stuck in niches of hyperspecialized expertise. There they cultivated insular concerns and leaden prose styles, as well as inexplicable delusions of political relevance.

The public intellectual was a negation of all of this. He or she was a free-range generalist who wrote accessibly, and could sometimes be heard on National Public Radio. In select cases the public intellectual was known to Charlie Rose by first name.

I use the past tense here but would prefer to give the term a subscript: The public intellectual model ca. 1990 was understood to operate largely or even entirely outside academe, but that changed over the following decade, as the most prominent examples of the public intellectual tended to be full-time professors, such as Cornel West and Martha Nussbaum, or at least to teach occasionally, like Judge Richard Posner, a senior lecturer in law at the University of Chicago.

And while the category continues to be defined to some degree by contrast with certain tried-and-true caricatures of academic sensibility, the 2014 model of the public intellectual can hardly be said to have resisted the blandishments of academe. The danger of succumbing to the desire for tenure is hardly the issue it once might have seemed. 

Professor Young’s guiding insight is that public intellectuals might well reward study through rhetorical analysis -- with particular emphasis on aspects that would tend to be missed otherwise. They come together under the heading “style.” She does not mean the diction and syntax of their sentences, whether written or spoken, but rather style of demeanor, comportment, and personality (or what’s publicly visible of it).

Style in Young’s account includes what might be called discursive tact. Among other things it includes the gift of knowing how and when to stop talking, and of listening to another person’s questions attentively enough to clarify them -- and even to answer them. The author also discusses the “physiological style” of various public intellectuals – an unfortunate coinage (my first guess was that it had something to do with metabolism) that refers mostly to how they dress.

A public intellectual, then, has mastered the elements of style that the “traditional intellectual” (meaning, for the most part, the professorial sort) typically does not. The public perceives the academic “to be a failure of rhetorical style in reaching the public. He is dressed inappropriately. She carries herself strangely. He describes ideas in ways we cannot understand. She holds the floor too long and seems to find herself very self-important.” (That last sentence is problematic in that a besetting vice of the self-important is that they do not find themselves self-important; if they did, they’d probably dial it down a bit.)

Now, generations of satirical novels about university life have made clear that the very things Young regards as lapses of style are, in fact, perfectly sensible and effective rhetorical moves on their own terms. (The professor who wears the same argyle sweater year-round has at least persuaded you that he would rather think about the possible influence of the Scottish Enlightenment on The Federalist Papers than about the sweater’s admittedly large holes.)

But she longs for a more inclusive and democratic mode of engagement of scholarship with the public – and of the public with ideas and information it needs. To that end, Young identifies a number of public-intellectual character types that seem to her exemplary and effective. “At different times,” she writes, “and in different cultural milieus, different rhetorical styles emerge as particularly relevant, powerful, and persuasive.” And by Young’s count, six of them prevail in America at present: Prophet, Guru, Sustainer, Pundit, Narrator, and Scientist.

“The Prophet is called by a higher power at a time of crisis to judge sinners in the community and outline a path of redemption. The Guru is the teacher who gains a following of disciples and leads them to enlightenment. The Sustainer innovates products and processes that sustain natural, social, and political environments. The Pundit is a subject expert who discusses the issues of the day in a more superficial way via the mass media. The Narrator weaves experiences with context, creating relationships between event and communities and offering a form of evidence that flies below the radar in order to provide access to information.” Finally, the Scientist “rhetorically constructs his or her project as one that answers questions that have plagued humankind since the beginnings….”

The list is presumably not meant to be exhaustive, but Young finds examples of people working successfully in each mode. Next week we'll take a look at what the schema implies -- and at the grounds for thinking of each style as successful.

 
