Last week, Yale announced it was dropping the requirement that applicants submit the essay portion of the SAT or ACT. This comes after Harvard made the same choice this past March. The number of schools requiring the essay portion of these exams is rapidly dwindling, and with any luck it is headed for zero.
Good riddance to bad rubbish, if you ask me.
In my forthcoming book, Why They Can’t Write: Killing the Five-Paragraph Essay and Other Necessities, I include a chapter on “The Problem of Assessment and Standardization,” which looks at the disconnect between how we test for writing competency using standardized tests and the actual act of writing.
The goal of the SAT essay exam, to provide a level playing field that can’t be gamed by admissions coaches or college counselors, is perhaps laudable.
But the exam itself is brutally bad, an example of the limits of testing articulated by sociologist William Bruce Cameron, “Not everything that counts can be counted, and not everything that can be counted, counts.”
At its inception in 2005, it was a timed, handwritten essay, written from a prompt following a short reading that raised an “issue.” Outside research was forbidden. It asked students to produce writing under conditions they would never face in the real world. Not even on an essay exam in college would they be required to write on a topic dropped from the blue, one with which they quite possibly had zero previous familiarity.
When you add in that the resulting essays were assessed by temps required to spend no more than three minutes per essay in order to meet production quotas, you get a perfect recipe for bad writing.
The actual content of the essays was entirely irrelevant. In fact, according to Les Perelman, former director of MIT’s Writing Across the Curriculum program and an expert in writing assessment, the best strategy is to “Just make stuff up.”
Perelman told Slate, “It doesn’t matter if [what you write] is true or not. In fact, trying to be true will hold you back.”
As I argue in the book, the SAT essay exam asks students to produce a writing simulation, rather than wrestle with the real problems associated with writing.
If we believe writing to be an act of thinking – and we should believe this, because it’s true – these sorts of assessments have been conditioning students to think in “wrong” ways about writing. Not only are these assessments useless in judging a student’s preparation for college-level work, they reinforce attitudes toward, and experiences of, writing that are actively harmful.
Figuring out how to gain access to student writing in order to assess their skills in a school admission context seems to be an intractable issue. A timed, canned exercise like the SAT essay exam is simply bogus. The standard college admission essay isn’t especially authentic itself, and is subject to all kinds of inequities to boot. When some students get the help of college admission consultants charging up to five figures, it’s hard to see a way to provide the resources necessary for all students to compete on an even playing field.
Portfolios which collect student artifacts produced in the normal course of their high school educations may hold some promise, though this approach is likely to reinforce many of the same inequities as our current system.
But maybe there’s a different way for us to think about assessing students as developing writers.
What if we could gain insights into exactly that, how students think as they’re confronted with a writing-related problem?
Historically, one of the biggest struggles of the first-year writing students I’ve worked with is their adherence to the “rules” of writing they’ve either been explicitly taught or implicitly internalized. The five-paragraph structure, which includes a thesis at the end of the first paragraph, and a concluding paragraph which starts with “In conclusion…”, are two of the most common moves in evidence.
They struggle with the assignments in college first-year writing mainly because they’re not used to thinking like writers. They’re not asked to consider audience, genre, message, purpose, evidence, tone, language, or any of the other numerous things writers are juggling when they are producing writing.
So my approach is to get students thinking like writers as quickly as possible, doing rhetorical analyses on everything they read and write, completing audience analyses which assess their audiences’ needs, attitudes, and knowledge.
Fairly quickly, they grasp the kind of thinking they’re being asked to do, though often, the writing they produce from that thinking lags behind in terms of quality and coherence. Developing a well-functioning writing process is a lifelong struggle. To expect students to nail that aspect of the writer’s practice in a semester while also tackling this new way of thinking seems like an unreasonable ask.
But by the end of the semester, even when the writing itself seems to lag, I know students are going to be okay eventually if they’re engaging in the kind of thinking writers do.
To assess this, in end-of-semester final conferences, as we discuss their last assignment, I ask what they would have to change in their essay if, for example, we switched up the audience, or even changed the medium of presentation from an essay to a video, or a tweet.
It’s a question which students have not prepared for, and yet if they’re thinking like writers, they can answer it. They know where evidence intended for one audience may not meet the needs of another. They can identify wording choices or examples which may have to be rethought to maximize information relevance. They may not know exactly how to do what they need to do, but they can articulate the outlines of an approach. They know what new problems they need to solve.
This is how writers think.
I have a lot more thinking of my own to do to figure out how this could translate into an assessment suitable for admissions purposes, but I believe it can be done.
At the least, with the imminent demise of the SAT essay exam, we can stop doing active harm.