What can we conclude when undergraduates bemoan, "How did anyone ever come up with this stuff?" Although the students might feel confused or bedazzled, one thing is certain: the instructor skipped over the requisite missteps that originally led to the discovery at hand. This type of intellectual revisionism often depicts weighty concepts and conclusions as slick and sanitized, and, as a result, foreign and intangible.
In reality, every idea from every discipline is a human idea that comes from a natural, thoughtful, and (ideally) unending journey in which thinkers deeply understand the current state of knowledge, take a tiny step in a new direction, almost immediately hit a dead end, learn from that misstep, and, through iteration, inevitably move forward. That recipe for success is not just the secret formula for original scholarly discovery, but also for wise, everyday thinking for the entire population. Hence, it is important to explicitly highlight how essential those dead ends and mistakes are — that is, to teach students the power of failure and how to fail effectively.
Individuals need to embrace the realization that taking risks and failing are often the essential moves necessary to bring clarity, understanding, and innovation. By making a mistake, we are led to the pivotal question: "Why was that wrong?" By answering this question, we are intentionally placing ourselves in a position to develop a new insight and to eventually succeed. But how do we foster such a critical habit of mind in our students — students who are hardwired to avoid failure at all costs? Answer: Just assess it.
For the last decade or so, I’ve put my students’ grades where my mouth is. Instead of just touting the importance of failing, I now tell students that if they want to earn an A, they must fail regularly throughout the course of the semester — because 5 percent of their final grade is based on their "quality of failure." Would such a scheme provoke a change in attitude? Absolutely — with this grading practice in place, students gleefully take more risks and energetically engage in discussions.
And when a student (say, Aaron) makes a mistake in class, he exclaims, "Oh well, my quality of failure grade today is really high." The class laughs and then quickly moves to the serious next step — answering: Why was that wrong? It’s not enough to console an incorrect response with a nurturing, "Oh, Aaron, that’s not quite right, but we still think you’re the best! Now, does anyone else have another guess?" Instead, a mistake elicits either the enthusiastic yet honest response, "Congratulations, Aaron — that’s wrong! Now what lesson or insight is Aaron offering us?" or the class question, "What do you think? Is Aaron correct?" Either way, the students have to actively listen and then react, while Aaron sees his comment as an important element that allows the discussion to move forward.
I refer again and again to someone’s previous mistake to celebrate just how significant it was. If we foster an environment in our classrooms in which failing is a natural and necessary component in making progress, then we allow our students to release their own genius and share their authentic ideas — even if (or especially when) those ideas aren’t quite polished or perfectly formed.
After returning a graded assignment and reviewing the more challenging questions, I ask students to share their errors — and the class immediately comes to life: everyone wants to show off their mistakes as they now know they are offering valuable learning moments. What’s more, in this receptive atmosphere, it’s actually fun to reveal those promising gems of an idea that turned out to be counterfeit.
More recently, I’ve asked my students to intentionally fail — in the spirit of an industrial stress test. I now require my students to write a first draft of an essay very quickly and poorly — long before its due date — and then have the students use that lousy draft as a starting point for the (hopefully lengthy) iterative process of revising and editing. When the work is due, they must submit not only their final version, but also append their penultimate draft all marked up with their own red ink. This strategy ensures that they will produce at least one intermediate draft before the final version. Not surprisingly, the quality of their work has improved dramatically.
When I consult with or lead workshops for faculty and administrators, they are drawn to this principle of intentionally promoting failure, which inevitably leads to the question: How do you assess it? The first time I tried my 5 percent "quality of failure," I had no idea how to grade it. But I practiced what I preached — taking a risk and being willing to fail in the noble cause of teaching students to think more effectively. I passionately believe that assessment concerns should never squelch any creative pedagogical experiment. Try it today, and figure out how to measure it tomorrow.
In the case of assessing "quality of failure," at the end of the semester I ask my students to write a one-page reflective essay describing their productive failure in the course and how they have grown from those episodes (which might have occurred outside of class — including false starts and fruitful iterations). They conclude their essay by providing their own grade on how they have evolved through failure and mistakes (from 0 – meaning "I never failed" or "I learned nothing from failing" to 10 – meaning "I created and understood in profound, new ways from my failed attempts"). I read their narratives, reflect on their class participation and willingness to take risks, and then usually award them the surprisingly honest and restrained grades they gave themselves. To date, I’ve never had a student complain about their "quality of failure" grade.
To my skeptical colleagues who wonder if this grading scheme can be exploited as a loophole to reward unprepared students, I remind them that we should not create policies in the academy that police students; instead, we should create policies that add pedagogical value and create educational opportunity. And with respect to my practice of grading failure, I have found no such abuse at the three institutions at which I have employed it (Williams College, the University of Colorado at Boulder, and Baylor University). On the contrary, if implemented correctly, you will see your students more engaged, more prepared, and more thoughtful in class discussions and in life.
Beyond the subject matter contained in the 32 to 48 courses that typical undergraduates fleetingly encounter, our students’ education centers on the most important creative feat of their lives — the creation of themselves: Creating a mind enlivened by curiosity and the intellectual audacity to take risks and create new ideas, a mind that sees a world of unlimited possibilities. So we as educators and scholars should constantly be asking ourselves: Have I taught my students how to successfully fail? And if not, then: What am I waiting for?
Edward Burger is the Francis Christopher Oakley Third Century Professor of Mathematics at Williams College, and is an educational and business consultant. Other practical ways to fail and inspire students to make productive mistakes can be found in his latest book (co-authored with Michael Starbird), The 5 Elements of Effective Thinking (Princeton University Press).
Teacher education has been under siege in the last few years, the first line of attack in the growing criticism and more aggressive regulation of higher education.
Most recently, the U.S. Department of Education proposed — in a highly contentious negotiated rule-making exercise — to use the test scores of graduates’ own students to evaluate schools of education, despite the warnings of leading researchers that such scores are unstable and invalid for this purpose. Furthermore, in an unprecedented move, the department would limit eligibility for federal TEACH grants to prospective teachers from highly rated programs, denying aid to many deserving candidates while penalizing programs that prepare teachers for the most challenging teaching assignments.
This was only the most recent example of how education reformers have made teachers and teacher education a punching bag, painting those in the entire field as having low standards and being unwilling to accept responsibility for the quality of their work.
However, teacher educators from across the country are stepping up to create new, more valid accountability tools. An important part of this effort is the spread of the edTPA, a new performance assessment process that examines — through candidates’ plans, videotapes of instruction, evidence of student work and learning, and commentary — whether prospective teachers are really ready to teach. As highlighted recently in The New York Times, the assessment focuses on whether teachers can organize instruction to promote learning for all students, including new English learners and students with disabilities, and how they analyze learning outcomes to create greater student success.
This new assessment was developed by a team of researchers and teacher educators at Stanford University, of which I have been privileged to be a part, working with teachers and teacher educators across the country. The American Association of Colleges for Teacher Education (AACTE) helped to coordinate higher education involvement. Ultimately, teacher educators and state agencies in 24 states and the District of Columbia formed a Teacher Performance Assessment Consortium (TPAC) to develop and test the assessment. Today, about 160 colleges of education are field-testing the assessment, with the goal of transforming initial licensure, improving teacher education, and informing accreditation.
This may be the first time that the teacher education community has come together to hold itself accountable for the quality of teachers who are being prepared and to develop tools its members believe are truly valid measures of teaching knowledge and skill. Unlike other professionals, teachers have historically had little control over the tests by which they are evaluated. This rigorous, authentic measure represents a healthy and responsible professionalization of teacher preparation.
The edTPA is built on the portfolio-based model teachers developed two decades ago through the National Board for Professional Teaching Standards, and on additional work by California educators since 2002, coordinated by staff at Stanford. Teacher educators from more than 30 traditional and alternative programs helped develop the Performance Assessment for California Teachers (PACT) as the basis for an initial license. The PACT is scored in a consistent fashion by faculty members, instructors, supervisors, cooperating teachers, and principals in partnership schools. It provides vivid evidence of what beginning teachers can do, as well as useful information for guiding their learning and that of the programs themselves.
The assessment puts aside the tired arguments about which pathways to teaching are better and, instead, evaluates candidates on whether they can meet a common standard of effective practice. Unlike most current teacher tests, PACT scores have been shown to predict candidates’ capacity to foster student achievement as beginning teachers.
California programs have found the assessment so helpful in guiding and improving their practice — and that of their candidates — that they have continued the work on their own dime, even when promised state funds disappeared. One California teacher educator put it this way: "This experience has forced me to revisit the question of what really matters in the assessment of teachers, which in turn means revisiting the question of what really matters in the preparation of teachers."
As a teacher educator in California who uses the PACT, I agree with this evaluation. It has focused our candidates and program on what it means to teach effectively and it has improved our collective work. We now rely on it as a central part of our ongoing program improvement efforts.
A national version of the assessment process was started as interest spread across the country. First, a teacher educator from the University of California at Santa Barbara moved to the University of Washington and took the PACT with him. Faculty at the University of Washington liked the assessment so much they adopted it and talked about it to others in the state, who also became involved. Ultimately, the state of Washington proposed building a similar model to use for beginning licensure. California educators also got jobs in other states and took the idea with them. Teacher educators from other states asked to be part of the project and urged the National Council for Accreditation of Teacher Education as well as their own state agencies to look at edTPA because they believe it measures their work more accurately than many other approaches currently on the books.
Meanwhile, AACTE coordinated information sessions and conversations. Ultimately, a group of teacher educators from across the country decided to create a national version and recruited Pearson as an operational partner to manage the large number of participants. By the time the assessment was field-tested, participation had grown to 22 states, 160 institutions of higher education, and more than 7,000 teaching candidates.
Demand for edTPA grew so rapidly that support was needed to deliver it to campuses and states that asked for it. Stanford chose Evaluation Systems, a long-time developer of state teacher assessments that is now part of Pearson, to provide support for administering the assessment. As the administrative partner for the National Board’s portfolio assessment as well, Pearson brought the experience, capacity, and infrastructure to deploy the edTPA to scale quickly, so that the field would not have to wait to see the benefits in the classroom.
During the field test, an instructor at a Massachusetts college made national news when she challenged the assessment as corporatization of the teacher education process that replaces the relationship between instructor and students. Nothing could be further from the truth. Instructors and supervisors continue to teach, observe, support, and evaluate candidates, as they always have. The assessment – which allows teachers to be evaluated authentically in their own student teaching or internship classrooms, teaching curricula and lessons they have designed – focuses attention on the kinds of things all beginning teachers need to learn: how to plan around learning goals and student needs; how to engage in purposeful instruction and reflect on the results; and how to evaluate student learning and plan next steps for individual students and the class as a whole.
Like assessments in other professions, such as the bar exam or the medical boards, the edTPA is a peer-developed process that evaluates how well candidates have mastered a body of knowledge and skills, and a tool that teacher educators and institutions of higher learning can use to develop their programs. It does not restrict or replace the judgment of professionals in designing their courses and supervising their candidates, as they always have. It adds information about the candidate's performance to supplement those judgments. The edTPA scorers are themselves experienced teacher educators and accomplished teachers in the same fields as the candidates being evaluated, many of them from the programs participating in the assessment.
In fact, the field test has engendered considerable excitement at most universities, where conversations about how to prepare teachers have deepened. Amee Adkins, a teacher educator at Illinois State University, says, "[edTPA] provides something long overdue in teacher education: a clear, concise, and precise definition of the core of effective beginning teaching. It takes us a step further than other professional licensure exams because it goes beyond knowledge and judgment and examines actual candidate performance."
Vanderbilt University’s Marcy Singer-Gabella notes that faculty at the eight Tennessee universities piloting the assessment say that working with edTPA has led to more productive conversations about teaching practices and how to develop them. She adds: "At Vanderbilt, where we have used [edTPA] data to make changes, our candidates are better prepared and more skilled, according to school principals and teachers."
And the candidates themselves report that the TPA has helped them develop the habits and routines for planning, assessing, and adjusting instruction that allow them to succeed and keep learning as they teach. By comparison, as one put it, the teacher evaluation systems in their districts are “a piece of cake.”
In the context of the current debates about teacher education quality, it has been inspiring to see educators step up and accept the challenge to create something better, rather than merely complaining about narrow measures that do not reflect our highest aspirations. The best hope for significantly improving education at all levels of the system is for educators to take charge of accountability and make it useful for learning and improvement.
Linda Darling-Hammond is the Charles E. Ducommun Professor of Teaching and Teacher Education at Stanford University.
There is a lot of pressure on academic institutions to be innovative these days, and faculty members are often characterized as roadblocks to change. Given the entrenched and highly structured rewards system within which we operate, it should come as no surprise that many faculty colleagues are risk-averse when it comes to exploring new trends in scholarship or pedagogy. Rather than bemoaning the hidebound, Luddite, or traditionalist nature of faculty, anyone who wants to encourage innovative teaching and scholarship at their college or university should instead ask: What are the institutional barriers to experimentation here, and how might they be lowered?
When it comes to teaching, a highly personal and performative activity, fear of failure can take many forms. But for many faculty members it is crystallized in a single, dreaded object: the student course evaluation. The possibility of receiving negative student evaluations can be a powerful deterrent to colleagues who may be interested in incorporating new technology, radically altering course design, or exploring new areas of expertise. In an effort to counter that fear, we have just instituted a new policy at Middlebury College that allows faculty to designate new courses as exempt from official course evaluations -- a system that quickly became known as the "pass/fail option for faculty."
As provost, I had appointed some 40 colleagues last fall to task forces on curricular innovation that were charged with developing proposals to promote pedagogical, technological, and interdisciplinary experimentation. Though group discussions were energetic and forward-looking, it quickly became apparent that the perceived risks of experimentation could stand in the way of individual faculty adoption of many of the best ideas that emerged. In the hope of creating a receptive atmosphere for the task force recommendations, I proposed that we consider adjusting the institutional expectations embodied in our three major processes of faculty evaluation — course response forms, annual salary forms, and reviews for reappointment and promotion — in ways that might encourage innovation in both teaching and scholarship. After consultation with relevant faculty committees, we have already made the recommended adjustments in two of these areas. (As someone who has been involved in discussions about the challenges of evaluating digital scholarship, I was not surprised that the third area, review language, was less of a slam dunk).
The new course evaluation policy is simple: all faculty members who have completed two full years of teaching will have the option of designating one course every two years as "course response form-optional." In such courses, the standard evaluations will be distributed to students and collected, but returned only to the instructor (who may then decide whether the evaluations are included in his or her central administrative file). Like the student who chooses to take a particular course pass/fail in order to mitigate his or her fear of exploring unknown territory, an instructor who is trying something new now has the option of teaching an "ungraded" course.
While we take course evaluations very seriously at Middlebury, they are only one data point in a peer review process that includes multiple classroom visits by a candidate’s department chair, some senior colleagues, and all three members of the collegewide tenure and promotions committee. We are confident that omitting two or three sets of evaluations from a colleague’s multiyear file will not preclude a rigorous assessment of their teaching. In my experience, having sat with T&P committees for a number of years, and read many thousands of course evaluations, sound judgments about teaching effectiveness rest on discerning patterns over time and across course types, and not on judging the success of any one or two courses.
This policy represents a small change, but I believe it has symbolic value: it says that we do not equate teaching excellence with perfection, but instead expect all teachers to be lifelong learners, even at the art of teaching. It says that we trust faculty colleagues to do their best, even when no one is looking over their shoulders. And it says that an institution that demands innovation has to support innovation. Much of the current commentary about higher education emphasizes the imminent dangers of disruption from without; the best way to cushion that disruption, in my view, is to welcome change from within.
Alison Byerly holds an interdisciplinary appointment as college professor at Middlebury College, where she also served until recently as provost and executive vice president. While on leave in 2012-13, she is a visiting scholar in literature at the Massachusetts Institute of Technology. Her book Are We There Yet? Virtual Travel and Victorian Realism is forthcoming this fall from the University of Michigan Press.