The time has come to ask the question: When will we see the complete digital transformation of higher education in the United States?
The need for the shift to digital is painfully clear: Grades are lagging, students aren’t graduating, and those who do earn a degree often don’t have the skills that employers want. While digital learning won’t solve all these problems, we need to find ways to drive students’ performance to help them recoup their college investment, and I believe that digital represents the fastest and best option.
With these needs in mind, I’m willing to put my stake in the ground.
As I see it, the publishing industry needs to do all it can to ensure that within 36 months, higher education in the U.S. will be completely digital. I’m not talking about a slight or even gradual increase in e-book adoptions or the use of adaptive learning. I’m talking about a total transition from a reliance on print textbooks to a full embrace of digital content and learning systems. Aside from the college library, you hopefully won’t be able to find a printed textbook on a college campus in three years. And if you are, we should all be disappointed.
To date, the rate of adoption of digital course materials has been slower than most would have expected. Only around 3 percent of students today purchase e-books over print, and less than half of my company’s customers come to us for digital.
There are a few reasons why I think we haven’t seen greater uptake. For one, education is a high-stakes endeavor for students, with important outcomes riding on it. While students may be willing to switch to digital in some aspects of their lives, when it comes to studying, they often want to stick with what they know. There’s also the fact that until recently, the user experience offered by e-books and other digital technology just hasn’t been very good. A glorified PDF of a printed page is not compelling to students. Finally, and I think most importantly, the value proposition of digital to students and institutions hasn’t been clear. Many students and colleges are unaware of how digital can enhance the learning experience beyond making it more portable and affordable – and provide real results.
For such a big transition — a leap forward, really — three years may seem like a short period of time. In today’s technology landscape, it’s an eon. Thirty-six months ago the iPad didn’t exist. Now, 65 million units later, it has changed the way we consume, create and share information. If that number isn’t big enough for you, try this one: 760 million — that is how many tablets Forrester expects will be in use by 2016. The adoption of these devices is happening at lightning speed, and the inevitability of falling prices will make them even more accessible to students.
Student attitudes toward digital in the classroom are also evolving. Studies show that after using technology in an education setting for only a short time, students are realizing that they can’t live without it. As the design of digital education materials and technology continues to improve, students’ affinity for it will only grow.
It’s one thing for digital content and learning systems to offer a nice user experience and some interactive features. It’s another to help make meaningful gains in student performance.
Today’s digital technology already meets this challenge. Super-adaptive systems such as McGraw-Hill’s LearnSmart, a digital homework tutor that adapts to each student’s individual knowledge levels and creates custom study paths, are making a dramatic impact on student outcomes by scaling a personalized learning experience. An effectiveness study of LearnSmart showed that students using the program have seen significant improvements in pass rates, retention rates and increases in their overall academic performance. Results like these – whether they come from McGraw-Hill or other leading companies in our field – are something we just can’t afford to ignore, especially in light of the rising costs for higher education and falling student achievement.
If you want to get a sense of how confident we are in the effectiveness of this technology, take a look at a recent pay-for-performance partnership McGraw-Hill Education formed with Western Governors University. This partnership ties the fees we receive for learning materials to the grades of the students using those materials in class.
For professors – the foundation of our higher education system – digital provides an important collateral benefit. Working with students who come to class prepared and have an active interest in what they're learning allows them to spend less class time reviewing the basics and more time exploring advanced concepts. This is the type of teaching that leads to higher-order learning, and it’s the type of teaching that professors love doing the most.
When we talk about innovation, it’s usually in the context of technology. But where innovation is really shining through in education is in the models that learning companies are developing with colleges and universities to provide digital technology to students more affordably.
Colleges such as Indiana University and the University of Minnesota are partnering with learning companies to ensure that all students have access to the learning materials for their courses at a price that’s substantially lower than what they’re used to paying – as much as 60 percent less than a print textbook. At a price that’s comparable with a used print book, students receive all of the benefits of going digital: portability, instant access to course material on the first day of class, and seamless integration with adaptive learning systems that provide personalized instruction.
While the transition to an all-digital learning materials experience may not always be comfortable, it’s one that is a necessary part of the solution. Technology isn’t just about improving access or engagement, it’s about achieving what should be the main goal of our higher education system today: improving student performance.
If my 36-month timeline sounds ambitious, that’s because it is. We have the tools to help solve one of the greatest challenges of our times – we just have to put them to use.
Brian Kibby is president of McGraw-Hill Higher Education.
Students estimate they spend $655 annually on required course materials, down from $667 two years ago and $702 four years ago, according to a study released by the National Association of College Stores. Officials attributed the decline to the wider availability of options like renting textbooks.
Of the many strange things in Gulliver’s Travels that make it hard to believe anyone ever considered it a children’s book, the most disturbing must be the Struldbruggs, living in the far eastern kingdom of Luggnagg, not covered by Google Maps at the present time.
Gulliver’s hosts among the Luggnaggian aristocracy tell him that a baby is born among them, every so often, with a red dot on the forehead -- the sign that he or she is a Struldbrugg, meaning an immortal. Our narrator is suitably amazed. The Struldbruggs, he thinks, have won the cosmic lottery. Being “born exempt from that universal Calamity of human Nature,” they “have their Minds free and disengaged, without the Weight and Depression of Spirits caused by the continual Apprehension of Death.”
The traveler has no trouble imagining the life he might lead as an immortal, given the chance. First of all, Gulliver tells his audience at dinner, he would spend a couple of hundred years accumulating the largest fortune in the land. He’d also be sure to master all of the arts and sciences, presumably in his spare time. And then, with all of that out of the way, Gulliver could lead the life of a philanthropic sage, dispensing riches and wisdom to generation after generation. (A psychoanalytic writer somewhere uses the expression “fantasies of the empowered self,” which just about covers it.)
But then the Luggnaggians bring him back to reality by explaining that eternal life is not the same thing as eternal youth. The Struldbruggs “commonly acted like Mortals, till about thirty Years old,” one of Gulliver’s hosts explains, “after which by degrees they grew melancholy and dejected, increasing in both till they came to four-score.” The expression “midlife crisis” is not quite the one we want here, but close enough.
From the age of eighty on, “they had not only all the Follies and Infirmities of other old Men, but many more which arose from the dreadful Prospect of never dying.” Forget the mellow ripening of wisdom: Struldbruggs “were not only Opinionative, Peevish, Covetous, Morose, Vain, Talkative, but incapable of Friendship, and dead to all natural Affection.”
It gets worse. Their hair and teeth fall out. “The Diseases they were subject to still continuing without increasing or diminishing,” Gulliver tells us. “In talking they forgot the common Appellation of Things, and the Names of Persons, even of those who are their nearest Friends and Relations. For the same Reason they never can amuse themselves with reading, because their Memory will not serve to carry them from the beginning of a Sentence to the end; and by this Defect they are deprived of the only entertainment whereof they might otherwise be capable.”
It is a vision of hell. Either that, or a prophecy of things to come, assuming the trends of the last few decades continue. Between 1900 and 2000, the average life expectancy in the United States rose from 49 to 77 years; between 1997 and 2007, it grew by 1.4 years. This is not immortality, but it beats dying before you reach 50. The span of active life has extended as well. The boundary markers of what counts as old age keep moving out.
From a naïve, Gulliverian perspective, it is all to the good. But there’s no way to quantify changes in the quality of life. We live longer, but it's taking longer to die as well. Two-thirds of deaths among people over the age of 65 in the United States are caused by three chronic conditions: heart disease, cancer, and stroke. The “life” of someone in a persistent vegetative state (in which damage to the cerebral cortex is so severe and irreversible that cognitive functions are gone for good) can be prolonged indefinitely, if not forever.
More horrific to imagine is the twilight state of being almost vegetative, but not quite, with some spark of consciousness flickering in and out -- a condition of Struldbruggian helplessness and decay. “I grew heartily ashamed of the pleasing Visions I had formed,” says Gulliver, “and thought no Tyrant could invent a Death into which I would not run with Pleasure from such a Life.”
Howard Ball’s book At Liberty to Die: The Battle for Death With Dignity in America (New York University Press) is a work of advocacy, as the subtitle indicates. The reader will find not the slightest trace of Swiftian irony in it. Ball, a professor emeritus of political science at the University of Vermont, is very straightforward about expressing bitterness -- directing it at forces that would deny “strong-willed, competent, and dying adults who want to die with dignity when faced with a terminal illness” their right to do so.
The forces in question fall under three broad headings. One is the religious right, which Ball sees as being led, on this issue at least, by the Roman Catholic Church. Another is the Republican Party leadership, particularly in Congress, which he treats as consciously “politicizing the right-to-die issue” in a cynical manner, as exemplified by the memo of a G.O.P. operative on “the political advantage to Republicans [of] intervening in the case of Terri Schiavo.” (For anyone lucky enough to have forgotten: In 1998, after Terri Schiavo had been in a persistent vegetative state for eight years, her husband sought to have her feeding tube removed, setting off numerous rounds of litigation, as well as several pieces of legislation that included bills in the US Congress. The feeding tube was taken out and then reattached twice before being finally removed in 2005, after which Schiavo died. The website of the University of Miami's ethics program has a detailed timeline of the Schiavo case.)
The third force Ball identifies is that proverbial 800-pound gorilla known as the Supreme Court of the United States. Its rulings in Washington v. Glucksberg and Vacco v. Quill in 1997 denied the existence of anything like a constitutionally protected right to physician-assisted death (PAD). States can outlaw PAD -- or permit it, as Montana, Oregon, and Washington do at present. In the epigraph to his final chapter, Ball quotes a Colorado activist named Barbara Coombs Lee: “We think the citizens of all fifty states deserve death with dignity.” But the Supreme Court of the United States will not be making that a priority any time soon.
“The central thesis of the book,” states Ball, “is that the liberty found in the U.S. Constitution’s Fifth and Fourteenth ‘Due Process’ Amendments extends... [to] the terminally ill person's right to choose to die with dignity -- with the passive assistance of a physician -- rather than live in great pain or live a quality-less life.” The typical mode of “passive assistance” would be “to give painkilling medications to a terminally ill patient, with the possibility that the treatment will indirectly hasten the patient’s death.”
Ball notes that a Pew Research Center Survey from 2005 showed that an impressive 84 percent level of respondents “approved of patients being able to decide whether or not to be kept alive through medical treatment or choosing to die with dignity.”
Now, for whatever it’s worth, that solid majority of 84 percent includes this columnist. If the time for it ever comes, I’d want my doctor to supersize me on the morphine drip without breaking any laws. Throughout Ball's narrative of the successes and setbacks of the death-with-dignity cause, I cheered at each step forward, and felt appalled all over again while reading the chapter he calls “Terri Schiavo’s Tragic Odyssey,” although it did seem like the more suitable adjective would be “grotesque.” Tragedy implies at least some level of dignity.
The author also introduced me to a blackly humorous expression, “death tourist,” which refers to a person "visiting" a state to take advantage of physician-assisted suicide being legal there.
As a member of the choir, I liked Ball's preaching, but it felt like the sermon was missing an index card or two. As mentioned earlier, the book’s “central thesis” is supposed to be that the due-process guarantees in the Constitution extend to the right to death with dignity. And so the reader has every reason to expect a sustained and careful argument for why that legal standard applies. None is forthcoming. The due-process clauses did come up when the Supreme Court heard oral arguments in 1997, but it rejected them as inapplicable. This would seem to be the point in the story where that central thesis would come out swinging. The author would show, clearly and sharply, why the Court was wrong to do so. He doesn't. It is puzzling.
Again, it sounds very categorical when Ball cites that Pew survey from 2005 showing 84 percent agreement that individuals had a right to choose an exit strategy if medical care were not giving them a life they felt worth living. But the same survey results show that when asked whether they believed it should be legal for doctors to "assist terminally ill patients in committing suicide," only 44 percent favored it, while 48 percent were opposed. With the matter phrased differently -- surveyors asking if it should be legal for doctors to "give terminally ill patients the means to end their lives" -- support went up to 51 percent, while 40 percent remained opposed. This reveals considerably more ambivalence than the 84 percent figure would suggest.
The notion that a slippery slope will lead from death with dignity to mass programs of euthanasia clearly exasperates Ball, and he can hardly be faulted on that score. A portion of the adult population is prepared to believe that any given social change will cause the second coming of the Third Reich, this time on American soil. (Those who do not forget the History Channel are condemned to repeat fairly dumb analogies.) But the slippery-slope argument will more likely be refuted in practice than through argument. Whether or not the law recognizes it, the right to make decisions about one’s own mortality or quality of life will exist any time someone claims it. One of the medical profession’s worst-kept secrets for some time now is that plenty of physicians will oblige a suffering patient with the means to end their struggle. (As Ball notes, this came up in the Supreme Court discussions 15 years ago.)
And the demand is bound to grow, as more and more of us live long enough to see -- like Gulliver -- that there are worse fates than death. Brilliant legal minds should apply themselves to figuring out how to make an ironclad case for the right to a decent departure from this mortal coil. At Liberty to Die is useful as a survey of some obstacles standing in the way. But in the meantime, people will find ways around those obstacles, even if it means taking a one-way, cross-continental trip to the Pacific Northwest. There are worse ways to go.
Call it philosophical synesthesia: the work of certain thinkers comes with a soundtrack. With Leibniz, it’s something baroque played on a harpsichord -- the monads somehow both crisply distinct and perfectly harmonizing. Despite Nietzsche’s tortured personal relationship with Wagner, the mood music for his work is actually by Richard Strauss. In the case of Jean-Paul Sartre’s writings, or at least some of them, it’s jazz: bebop in particular, and usually Charlie Parker, although it was Dizzy Gillespie who wore what became known as “existentialist” eyeglasses. And medieval scholastic philosophy resonates with Gregorian chant. Having never managed to read Thomas Aquinas without getting a headache, I find that it’s the Monty Python version.
Such linkages are, of course, all in my head -- the product of historical context and chains of association, to say nothing of personal eccentricity. But sometimes the connection between philosophy and music is much closer than that. It exists not just in the mind’s ear but in the thinker’s fingers as well, in ways that François Noudelmann explores with great finesse in The Philosopher’s Touch: Sartre, Nietzsche, and Barthes at the Piano (Columbia University Press).
The disciplinary guard dogs may snarl at Noudelmann for listing Barthes, a literary critic and semiologist, as a philosopher. The Philosopher’s Touch also ignores the principle best summed up by Martin Heidegger (“Horst Wessel Lied”): “Regarding the personality of a philosopher, our only interest is that he was born at a certain time, that he worked, and that he died.” Biography, by this reasoning, is a distraction from serious thought, or, worse, a contaminant.
But then Noudelmann (a professor of philosophy at l’Université Paris VIII who has also taught at Johns Hopkins and New York Universities) has published a number of studies of Sartre, who violated the distinction between philosophy and biography constantly. Following Sartre’s example on that score is a dicey enterprise -- always in danger of reducing ideas to historical circumstances, or of overinterpreting personal trivia.
The Philosopher’s Touch runs that risk three times, taking as its starting point the one habit its protagonists had in common: Each played the piano almost every day of his adult life. Sartre gave it up only as a septuagenarian, when his health and eyesight failed. But even Nietzsche’s descent into madness couldn’t stop him from playing (and, it seems, playing well).
All of them wrote about music, and each published at least one book that was explicitly autobiographical. But they seldom mentioned their own musicianship in public and never made it the focus of a book or an essay. Barthes happily accepted the offer to appear on a radio program where the guest host got to spin his favorite recordings. But the tapes he made at home of his own performances were never for public consumption. He was an unabashed amateur, and recording himself was just a way to get better.
Early on, a conductor rejected one of Nietzsche’s compositions in brutally humiliating terms, asking if he meant it as a joke. But he went on playing and composing anyway, leaving behind about 70 works, including, strange to say, a mass.
As for Sartre, he admitted to daydreams of becoming a jazz pianist. “We might be even more surprised by this secret ambition,” Noudelmann says, “when we realize that Sartre did not play jazz! Perhaps this was due to a certain difficulty of rhythm encountered in jazz, which is so difficult for classical players to grasp. Sight-reading a score does not suffice.” It don’t mean a thing if it ain’t got that swing.
These seemingly minor or incidental details about the thinkers’ private devotion to the keyboard give Noudelmann an entrée to a set of otherwise readily overlooked problems concerning both art -- particularly the high-modernist sort -- and time.
In their critical writings, Sartre and Barthes always seemed especially interested in the more challenging sorts of experimentation (Beckett, serialism, Calder, the nouveau roman, etc.) while Nietzsche was, at first anyway, the philosophical herald of Wagner’s genius as the future of art. But seated at their own keyboards, they made choices seemingly at odds with the sensibility to be found in their published work. Sartre played Chopin. A lot. So did Nietzsche. (Surprising, because Chopin puts into sound what unrequited love feels like, while it seems like Nietzsche and Sartre are made of sterner stuff.) Nietzsche also loved Bizet’s Carmen. His copy of the score “is covered with annotations, testifying to his intense appropriation of the opera to the piano.” Barthes liked Chopin but found him too hard to play, and shifted his loyalties to Schumann – becoming the sort of devotee who feels he has a uniquely intense connection with an artist. “Although he claims that Schumann’s music is, through some intrinsic quality, made for being played rather than listened to,” writes Noudelmann, “his arguments can be reduced to saying that this music involves the body that plays it.”
Such ardor is at the other extreme from the modernist perspective for which music is the ideal model of “pure art, removed from meaning and feeling,” creating, Noudelmann writes, “a perfect form and a perfect time, which follow only their own laws.... Such supposed purity requires an exclusive relation between the music and a listener who is removed from the conditions of the music’s performance.”
But Barthes’s passion for Schumann (or Sartre’s for Chopin, or Nietzsche’s for Bizet) involves more than relief at escaping severe music for something more Romantic and melodious. The familiarity of certain compositions; the fact that they fall within the limits of the player’s ability, or give it enough of a challenge to be stimulating; the way a passage inspires particular moods or echoes them -- all of this is part of the reality that playing music “is entirely different from listening to it or commenting on it.” That sounds obvious but it is something even a bad performer sometimes understands better than a good critic.
“Leaving behind the discourse of knowledge and mastery,” Noudelmann writes, “they maintained, without relent and throughout the whole of their existence, a tacit relation to music. Their playing was full of habits they had cultivated since childhood and discoveries they had made in the evolution of their tastes and passions.” More is involved than sound.
The skills required to play music are stored, quite literally, in the body. It’s appropriate that Nietzsche, Sartre, and Barthes all wrote, at some length, about both the body and memory. Noudelmann could have belabored that point at terrific length and high volume, like a La Monte Young performance in which musicians play two or three notes continuously for several days. Instead, he improvises with skill in essays that pique the reader's interest, rather than bludgeoning it. And on that note, I must now go do terrible things to a Gibson electric guitar.
The University of Missouri Press will live -- in name, at least, although it's too early to know whether its form will live up to the name. The university on Monday unveiled some details about the operations of the reimagined press, nearly two months after it announced plans to phase out the press because of financial constraints, setting off complaints from many critics in academe. Missouri officials described the new iteration of the press as more focused on digital publishing and designed to provide more teaching and training to students. The news release does not say how it would do the latter, but an article in The Columbia Tribune said Missouri faculty members would be peer reviewers, and graduate students and interns from relevant campus programs would help edit its publications.
Speer Morgan, editor of The Missouri Review, a literary journal, and a professor of English, will direct the new press.
Christopher Hayes probably finished writing most of Twilight of the Elites: America After Meritocracy (Crown) before assuming his current duties as talk show host on MSNBC. Television is not a medium that fosters the kind of argument that requires parentheses -- much less the application to American social and political problems of ideas originally developed by any dead European thinker not named Alexis de Tocqueville. (Even then, you shouldn’t overdo it.)
Hayes uses the term “elite” as a social scientist might -- as a label for the leading stratum of an organized group -- rather than in its usual capacity, as a snarl word. His criticisms of the American status quo are clear enough, and harsh enough. Given that status quo, however, the book’s measured, non-hyperventilating approach may be a problem. There’s big money in professional ranting, and the audience for cable TV punditry usually wants catharsis, not concepts.
Twilight takes as its starting point the unmistakable collapse of public confidence in most American institutions, political and otherwise, that has been going on for decades now. Even before the financial heart attack of 2008, a Gallup poll showed that trust in 12 out of 16 institutions had reached an all-time low – with the ones that lost the most being “also the most central to the nation’s functioning: banks, major companies, the press, and, perhaps most troublingly, Congress.” Hayes points out that the approval ratings for Congress were lower than those for either Paris Hilton or the prospect of the country going communist.
Other studies show that trust in the presidency surged for a year with Obama’s election, but by 2010 “had plummeted back down toward Bush levels in the immediate aftermath of Hurricane Katrina.”
The least trusting cohort, the author notes, “are those who came of age in the aughts.” In 2010, the Harvard Institute of Politics surveyed 3,000 people between the ages of 18 and 29 “about whether they thought various institutions did the right thing all of the time, most of the time, some of the time, or never. Of the military, the Supreme Court, the President, the United Nations, the federal government, Congress, traditional media, cable news, and Wall Street executives, only one – the U.S. military – was believed to do the right thing all or most of the time by a majority of respondents.”
As striking as the decline of public confidence in major institutions is the collapse (or at least severe dysfunction) of institutions themselves. We have the examples of Enron, Lehman Brothers, public schools, and the athletics program at Penn State, as well as… actually, that’s enough for now. To extend the list just seems morbid.
So a lack of confidence in authority figures and leading institutions is not groundless. But it can be generalized (and pandered to) in irrational ways. “From 1970 to 2000” notes Hayes, “the number of annual reported whooping cough cases in the United States hovered at around 5,000 a year. In 2010, it spiked to 27,500 cases.” The change reflected the rise of a movement premised on the idea that vaccinations cause autism, even if those know-it-all scientists say otherwise.
Here, again, examples could be multiplied. Every time it snows, for example, the same old sarcastic remarks about global warming get trotted out. Whether or not ignorance is bliss, it sure does know self-defense.
“When our most central institutions are no longer trusted,” Hayes writes, “we each take refuge in smaller, balkanized epistemic encampments, aided by the unprecedented information technology at our disposal…. And this is happening at just the moment when we face the threat of catastrophic climate change, what is likely to be the largest governing challenge that human beings have ever faced in the history of life on this planet.”
So, it’s bad. The word “twilight” in Hayes’s title is Götterdämmerung-y for good reason, although the book itself is much less gloomy than it certainly could be. From time to time, many of us, I assume, try to extrapolate from current trends to how things might develop over the next 10 or 20 years -- only to have our brains freeze up and shut down, probably to avoid picturing it too vividly.
Instead of futurology, Hayes digs into what he calls “the crisis of authority” with ideas borrowed from Robert Michels and Vilfredo Pareto – two of the three classic sociological theorists of elitism (the third being Gaetano Mosca). In this context, “elitism” refers, not to a set of attitudes or behaviors usually judged as more or less obnoxious, but to the general principle that any given society has a layer of people with disproportionate power over others. The thought that there will always be such a layer may be exceptionally disagreeable, although Pareto took it as more or less given.
For Michels, it came as an unwelcome realization that “a gulf which divides the leaders from the masses” emerges in even the most democratic of organizations, because they end up with knowledge and the tools giving them advantages over the rank-and-file. He called it “the Iron Law of Oligarchy.” The best-case scenario involves keeping that governing layer as accountable as possible.
The distinguishing feature of the contemporary American social pyramid, in Hayes’s account, is that efforts to check the power of earlier elites (insert cartoon of Ivy League WASPs making deals at the country club here) have boomeranged. The meritocratic principle is that knowledge and skill, rather than inherited advantage, should determine which personnel should be in positions of authority.
It sounds like a reasonable accommodation with the Iron Law of Oligarchy -- as anti-elitist an arrangement as possible, given the complexity of 21st-century problems. But what it’s actually created is a set of “interlocking institutions that purport to select the brightest, most industrious, and most ambitious members of the society and cultivate them into leaders” who have “a disposition to trust [their] fellow meritocrats and to listen to those who occupy the inner circle of winners.”
Since career mobility is one of the perks of meritocratic life, the camaraderie has a dark side, exemplified by a bit of in-house lingo from the hedge-fund world: IBGYBG, which stands for “I’ll be gone, you’ll be gone.” And we’ve seen how that works out. It is the nature of this elite, Hayes writes, "that it can't help but produce failure" because "it is too socially distant to properly manage the institutions with which it has been entrusted."
There's much else in Twilight of the Elites that I've scanted here in the interest of finishing this column, but one final point seems important to make: The book's title is a nod to Christopher Lasch's posthumous collection of essays The Revolt of the Elites and the Betrayal of Democracy (1994), while Lasch's title in turn alluded to that of José Ortega y Gasset's classic The Revolt of the Masses (1930). One way to look at it is that all three reflect a set of ongoing, unsolved problems about authority and legitimacy -- always crisis, never a resolution. And here a cynical voice chimes in to say, "So why even bother thinking about it? Even if things do get worse, IBGYBG. Lol." I don't know whether the Devil exists or not, but if he did, that's what he'd sound like.