
Almost three decades after the publication of Ernest Boyer’s Scholarship Reconsidered, hundreds of institutions have revised their promotion and tenure policies to include a broader definition of scholarship, including “engaged scholarship.” I have studied academic reward systems for almost 20 of those years, advised many colleges and universities as they revised their promotion and tenure policies, and studied the careers of engaged scholars. I’ve come to recognize that if we want to change academe to better reward engaged scholarship, we must update and script the evaluation process.

Here I define engaged scholarship as work in which faculty combine their expertise with that of community stakeholders to co-create knowledge that serves the public good. Examples of such projects might include new training for health professionals on the role of implicit bias in medical diagnosis, a newly designed playground for a school serving children with physical disabilities or an oral history project exploring the effect of DACA on college students.

Hundreds of campuses revised their promotion and tenure guidelines as part of a movement to become more “engaged institutions,” places where students are involved in service learning and civic engagement, and faculty connect their teaching and research to public problems and grand challenges.

Armed with the knowledge that their campuses have made such reforms, an increasing number of engaged scholars enter the tenure track hopeful of success. Yet too often they find that colleagues do not understand their work, and they have few mentors to guide them. Nor do they have any guarantee their work will be reviewed by someone who is involved in engaged scholarship. The purpose of their work may be to influence policy reform, school curriculum or health-care practices, but they are told one number -- the h-index -- will be used to assess impact.

Perhaps more important, their work is being valued or devalued in a whole system of what researchers Cecilia L. Ridgeway and Shelley J. Correll might call “unscripted” interactions, with little oversight from administrative structures and procedures. Promotion and tenure committees receive few directives as to how to evaluate the work. Absent a charge, rubric or other guidance, evaluation committees have to improvise to determine whether the candidate has met a largely undefined standard of excellence.

The flawed evaluation process I describe here is ubiquitous. It is apparent in institutions that have established broader definitions of scholarship in their guidelines, as well as in institutions with nationally recognized commitments to community engagement.

To be sure, some academics will downplay the importance of this issue. Most faculty members who formally stand for tenure are successful, and our campuses have made significant strides in terms of building infrastructure to support community engagement -- developing partnerships with communities and enhancing faculty development programs. Many campuses have explicit statements valuing community engagement in their policies.

But, borrowing from a scene in the movie A Few Good Men, process matters. When a Marine in that film is asked where in the manual it says where and when mealtime is, he answers, "Nowhere." He is asked, "Then how do the Marines know when and where to eat?" He replies, "We follow the crowd at chow time." Likewise, we need to look beyond what our policies say to what we actually do in assessing engaged scholarship.

Where do we start? We need to add clarity and transparency, for which we have precedents. For example, in an effort to reduce implicit bias in the hiring process, institutions often ask faculty search committees to identify concrete criteria and apply them to the evaluation of each candidate. Such decision-making tools or rubrics reduce biases we bring into decision making. Clear rubrics for evaluating engaged scholarship would likewise help to mitigate biases and add clarity to evaluation by fostering clear and shared understanding. For example, a rubric evaluating engaged scholarship might include such areas as impact, significance, rigor of theoretical approach and rigor of methods. The committee would then examine the portfolio and scholarly products for evidence these criteria had been met (keeping in mind the project aims and relevant contexts).

Likewise, before a search committee meets to deliberate on a set of candidates, they often receive a diversity charge -- one often given to promotion and tenure committees, as well. The charge typically underscores the importance of equity, diversity and inclusion in hiring and promotion and tenure decisions, and it creates parameters for what can and cannot be part of a decision.

For example, committees are charged to apply department tenure criteria to assess candidate performance and to not allow issues unrelated to the criteria (such as race, gender, age, sexual orientation, use of parental leave, politics or personal characteristics) to shape decisions. Colleges and universities that value engaged scholarship could likewise incorporate guidance on evaluating engaged scholarship into their promotion and tenure charges, allowing committee members to ask questions and get answers before they meet to evaluate candidates. Such committee charges would increase the transparency, fairness and legitimacy of the evaluation for everyone.

We also need to add context to the process of peer review of engaged scholarship. The entire peer-review system actually works on the presumption of context and expertise. I am not asked to review articles or teach classes on ethnomusicology, because I do not study or have experience in that field. In many joint appointments, however, the second department can provide feedback and context to the primary department on the candidate's work in the other field before the committee votes. Why wouldn't we want to add the same context to the evaluation of engaged scholarship? We should encourage departments to include among external reviewers some scholars who are knowledgeable about engaged scholarship, as well as to invite an on-campus engaged scholar to sit in on deliberations and provide context if that expertise does not already reside on the committee.

Relatedly, the evaluation process needs to take the purpose of the engaged scholarship into account when assessing impact. If, for example, scholars work with a hospital to develop workshops for emergency room personnel on the impact of social biases on how patients present pain, with the intention of mitigating biases and improving health outcomes, we should look at health outcomes and increased awareness among personnel -- not the number of citations of research articles. Scholars should be encouraged to write impact statements contextualized to the nature and aims of their work.

Until people become more familiar with the process of assessing engaged scholarship, we need to structure or "script" the evaluation pathway for engaged scholars. We have done this before in other areas. For example, colleges and universities create "modified criteria" or MOUs for faculty members hired under special circumstances (e.g., needing specific lab equipment, holding administrative roles or bringing in several years toward tenure). Such modified criteria recognize those distinct contexts as approved divergences from typical department criteria and are shared in every evaluation through tenure and beyond. Such entrance MOUs, signed off on by senior faculty and administrators, can lay out any alternative publication venues, scholarly products, forms of peer review and/or measures of impact that differ from the norms or criteria used in departments.

If we don’t do a better job of “scripting” the evaluation process, we should not be surprised when it fails engaged scholars. By adding clarity, context and structure to that process, we will strengthen the legitimacy of our academic reward systems and bring engaged scholarship, and scholars, closer to full participation.
