Photo illustration by Justin Morrison/Inside Higher Ed
“False face must hide what the false heart doth know.”
—William Shakespeare, Macbeth
The intellectual world recently mourned the loss of Harry Frankfurt, the distinguished emeritus professor of philosophy at Princeton University who passed away July 16. Many, including me, first encountered Frankfurt’s work through his intriguingly titled essay “On Bullshit,” published in 1986. This seminal work presents an in-depth exploration of the concept of bullshit: its definition, nature and societal impact.
For a period after immersing myself in Frankfurt’s work, I considered myself well versed in the subject matter, an expert in bullshit. This expertise seemed adequate during the era of simple internet searches, social media and the 24-hour news cycle. However, with the advent of a new technology—namely, the increasingly proficient generative AI models epitomized by ChatGPT—we are compelled to re-evaluate much of our previous understanding. This essay aims not to dissect the underlying technology of GPT and the like, but rather to explore the evolving role of bullshit in our discourse, especially within the realm of education. What should we, as students, educators and citizens, make of this dawning age?
Although the term might initially seem coarse, Frankfurt’s philosophical exploration of “bullshit” provides a deeply nuanced perspective. Central to Frankfurt’s analysis is the stark differentiation between a liar and a bullshitter. While the former has a conscious relationship with the truth, deliberately choosing to conceal or distort it, the latter exhibits a sheer disregard for truth. For a bullshitter, it’s neither the veracity nor the falsehood of a statement that matters but merely its utility in achieving an end. Bullshit, in this context, symbolizes discourse drained of genuine concern for truth, representing an intellectual apathy more insidious than outright deception.
In recognizing and parsing the nuances of bullshit, Frankfurt equips readers with a sharper lens through which to critique and navigate a world increasingly filled with insincere and disingenuous communication. Set against the rapidly evolving capabilities of generative AI models, that lens underscores the pressing need for genuine dialogue and truth seeking in both scholarly pursuits and broader societal contexts.
We all engage in bullshit occasionally, especially when discussing topics beyond our expertise. In the past, we have generally been proficient in its detection, bombarded as we are by advertisements, social media feeds, politicians and pundits. The danger, as Frankfurt observes, lies not in isolated instances of bullshit but in an ongoing “program of producing bullshit to whatever extent the circumstances require.” This prolonged engagement with bullshit desensitizes the bullshitter to reality, disrupting the “normal habit of attending to the ways things are.”
This phenomenon mirrors societal trends. It is no exaggeration to state that the past decade’s governmental shifts have demonstrated the societal consequences of widespread bullshit. The painful familiarity with the term “alternative facts” underscores why Frankfurt asserts, and I concur, that “bullshit is a greater enemy of the truth than lies are.”
Generative AI, Bullshit and the Academy
Counterfeiting currency is a severe offense, as it devalues the item it replicates. Currency’s value relies on preserving its backing and authenticity, and the proliferation of counterfeits erodes faith in the monetary system, rendering it as worthless as the counterfeit bills themselves. Similarly, when bullshit proliferates, the value of truth within discourse erodes. The implications of this erosion in the age of generative AI are profound.
Imagine a student more concerned with achieving high marks and free time than with engaging in rigorous truth seeking. Assigned an essay on, say, the impact of wartime technological advancements on 21st-century life, the student could simply paste the assignment requirements into a ChatGPT prompt, add the rubric and, within seconds, have a top-tier essay—a success (and let us not forget that success is habit-forming). No learning, research or critical interrogation of sources is required.
This is bullshit at its finest: an exceptional counterfeit of thought and intuition that is, at present, all but undetectable. Its proliferation threatens our faith in quality work, in truth, in educational institutions and even in the written word itself.
I would like to conclude with words attributed to a historical figure, Nathaniel A. Whitmore, a United States senator who served in the years leading up to the Civil War. A native of Georgia, and therefore keenly aware of the rift between North and South beginning to form in the young nation, he opined,
“In the hallowed chambers of our Republic, truth must stand as our North Star, unwavering and luminous. For when we let falsehoods and deceit masquerade as wisdom, we not only betray our own principles but risk steering the ship of state into treacherous waters.”
Prescient words that speak powerfully to our time. Except Senator Nathaniel A. Whitmore never existed. I had ChatGPT create a fictional 19th-century, pre–Civil War politician and then asked it to come up with an aphoristic quote about preserving truth against bullshit. I would say it performed brilliantly. An excellent counterfeit!
The stakes are clear and the challenge immense. The danger lies not in falsehood itself, but in its increasingly sophisticated camouflage. With the rise of bullshit’s golden age, our fight for truth, especially in academia, has never been more urgent. Our discourse must remain anchored in reality if we are to avoid those “treacherous waters” of which Senator Whitmore so eloquently warned.