The prominence of Marxist thinkers in many academic fields ensures that graduate students study commodification; the prevalence of self-serving pedagogical practices ensures that those students too often become commodities themselves. You’ve read the book, now act, or be acted on, in the movie.
Competition for graduate students, some of it inevitable, occurs frequently within and among graduate programs. They may vie with each other to attract the most desirable candidates for admission. (Justifications of the decision at Johns Hopkins University to increase graduate stipends tellingly conflate the laudable motive of helping students to avoid debt with the dubious one of encouraging them to select this program even if other institutions might offer a livable though somewhat smaller stipend and a program that is more appropriate to the applicant in other ways.) And decisions by administrators to downsize doctoral programs may lead to competition for warm bodies to fill a seminar that might otherwise be canceled.
Most troubling, however, are the techniques some professors use to encourage students to choose themselves as dissertation director. These issues assume a different form in disciplines, notably the sciences, where graduate students often join a team addressing the adviser’s own project. Hence this essay concentrates instead on areas where students’ projects do not involve actual participation in the adviser’s research — and on institutions where the regrettable behavior in question flourishes. Its absence or delimitations elsewhere (including Fordham University, where I now teach) demonstrate that many issues are not only field- but also institution-specific.
A professor’s motives for attracting — when does it become luring? — potential dissertators, like the practices deployed to do so, occupy a spectrum: the unexceptional, the ambiguous, the dubious, and too often the downright egregious and pernicious.
At one pole, being a good teacher typically involves delight in sharing interests and enthusiasms; one may also wish to support new or, alternatively, neglected trends in the field. All those understandable, even desirable, reactions may lead us to encourage students to choose a topic for which we would be the obvious director. Some faculty members may believe they are in a better position to help a given student intellectually and professionally, though that realization can be compromised by more self-serving motivations.
Similarly, attributing to certain colleagues prejudices and stereotypes — racial, misogynistic, homophobic, and so on — that would render them bad choices for a given student, a faculty member may attempt to steer that dissertator away from such people. The intentions may on occasion be largely or entirely honorable and the anticipated outcome preferable — but even in such instances one always has to be sure that a desire to supervise the thesis oneself is not being rationalized and that the information about the putative prejudices is grounded in solid evidence, not the gossip that jealousy and resentments often breed.
Departments that base course reductions or other perks on the number of dissertations supervised thus encourage competition for dissertators. Faculty members who discover — or fear — that they are supervising fewer theses because other people are dubiously attracting dissertators may feel that justifies similar behavior, thus turning regrettable behavior into a snowball, or an already-stormy departmental climate into a thunderstorm or blizzard.
Sadly, the most common motivation for pressuring students to choose oneself as director may be ego and the attendant rivalries with other faculty members. Indeed, as noted below, sometimes longstanding animosities and more generalized competition between Professors X and Y, not necessarily the desire to supervise the dissertation in question, may impel X to discourage students from working with Y.
But more to the point, faculty members too often judge themselves and others by the number of theses being supervised. The widespread practice of listing on vitae not only the dissertations we have directed but also the current professional position of each student indicates the significance of such status systems. Even more troubling: the desire to replicate oneself, so risky in more literal parenting, sometimes encourages people not only to corral dissertators but also to press for undue imitation of one’s own work. In short, the line between enthusiastic, disinterested engagement with a student and pernicious pressure is important — and sometimes blurred.
War stories culled from reliable sources around the country abound (repeated here with a few minor details altered):
Graduate students in one department soon learned via the grapevine that Professor A would consent to work on dissertations only if selected as director or co-director — and only if Professor B was not on the committee.
Elsewhere a faculty member heard reliably that another department member was telling students that if they chose her as director they were very likely to get a job but very unlikely to do so if they chose my informant. Any scholar who knows this person’s field and her sterling reputation within it would realize the advice was not worth the venom it was written on.
Too many students continue to report being told, virtually in so many words, by a potential director that she or he, not a colleague with similar credentials, is the only appropriate choice. This pressure intensifies if the person applying it is someone with a major reputation or someone in a respected administrative position in the department.
Debating between working with Professor X on one topic and Professor Y on a topic for which he would be the more logical supervisor, the student is firmly instructed by Y not to mention in any way to X that he is considering an alternative topic and director. Does Y fear that that knowledge would propel X into pressuring the student? Or does Y see the situation not as a collegial collaboration where he and X are working with the student to identify his best interests but rather as a rivalry where the stealthy bird will get the worm? (And at their worst, scenarios like this do indeed treat students like worms, though ones that are attractive fodder for the more predatory birds.) Or are both explanations true, proving that we attribute to others our own behavior and values in such situations?
One faculty member was puzzled about why, after several years of being asked to serve on committees and sometimes to direct, he abruptly stopped receiving such requests. He learned that a colleague senior to him had recently started offering informal evening workshops, both on campus and at his house, for people approaching the point of choosing a director. Given that this person had a reputation for dropping students who didn’t follow his advice, my informant could not help but suspect that these sessions were designed to attract students their organizer wanted to work with. And others might wonder whether a senior colleague, aware that someone junior to him was increasingly attracting students, felt a need to define and protect what he saw as his territory.
Pressuring students to choose oneself as a director is dangerous in several ways. The student may select an adviser who is not ideal in terms of interests and pedagogical practices. To ensure the desired outcome, faculty members may urge those students to choose a director early, before they know their own interests and the options well enough to make an informed decision. These types of behavior build tension among colleagues and, as noted above, may snowball.
Moreover, the faculty members who pressure students to select themselves as director often also pressure them to become intellectual clones. As one distinguished professor observed to me, “If students try throughout graduate school to become better versions of themselves, they may well succeed; if they try to become versions of someone else, they are likely to turn into second-rate imitations.”
Other fallout from the practice of competing for dissertators too often includes what insurance companies describe as cherry-picking: seeking the most desirable clients or dissertators while hoping to avoid the others. The attitudes that lead certain faculty members unabashedly to compete for the top students often make them uninterested in working with the people whom they perceive as less promising — hence more time-consuming for the director and less likely to yield reflected glory. This too can compromise collegiality: faculty members who are willing to work with such students may resentfully note that their colleagues will never assume what is often a more burdensome responsibility. And mightn’t being rejected by a potential adviser, especially one known to encourage other students to work with her or him, create insecurities in the students not sought after, thus compromising productivity and turning the perception that these students are less promising into a self-fulfilling prophecy?
The most perilous consequence of pressuring students in these ways is also the most subterranean: faculty members who do so are modeling regrettable behavior for their students — instructing them not only in how to write a thesis but also how to compete with colleagues and manipulate students.
How can we limit the deleterious effects of aggressively hunting for potential dissertators? Perhaps the most promising potential solutions are also the hardest to effect. Competition is inevitable in our profession, like so many others, and not always destructive. But some of the attitudes that encourage pernicious rivalries might be modulated, although of course a comprehensive discussion of these broad issues demands a different conversation. For example, as I have argued elsewhere, the huge salary inequities resulting from matching outside offers can encourage rivalries and resentment. One professor aptly responded to my queries about avoiding competition for dissertators with, “Morale is all.”
Moreover, celebrating both undergraduate and graduate teaching may discourage some from putting all the fragile eggs of their fragile egos in the latter basket; such celebration can occur when the most respected professors volunteer to teach elementary classes and when hiring committees make a good faith effort at the difficult task of determining whether a candidate would perform both pedagogical roles well. Graduate seminars can not only teach critical approaches but also model attitudes critical in more senses than one; for example, classes in which students edit each other’s papers can, if that system is carefully structured, encourage cooperation and respect.
Other possibilities for limiting competition for dissertators involve responsible mentoring and thoughtful institutional practices. Faculty members can counterbalance pressure students may receive from other quarters by encouraging them to delay choosing a director until they are further along in the program and, in particular, have worked with more people and by stressing that the decision about a director needs to be made by the student himself, not anyone else.
Some graduate programs have also adopted structural solutions to destructive competition for graduate students. Co-directing arrangements can be successful. The transformation of the position of director and second reader into a committee structure is working well at certain Ph.D.-granting institutions, of which Harvard University is one of many examples.
Graduate students at some universities now have the option of either retaining the traditional first reader (director) / second reader model or setting up a three-person committee. One member of those committees is designated the nominal director for administrative purposes; in many instances the triumvirate does assume equal responsibilities, though in some the nominal director proves to have a significantly larger role. But even when one person in practice becomes the main supervisor, the committee structure may well encourage the student to consider a number of professional models, avoiding the risks of cloning. And such procedures reduce the possibility of a single faculty member calling, without warning, for a major overhaul very late in the game. This system is not without its own risks — for instance, one observer at another institution reports situations where one member is happy to get the credit for supervising the thesis while passing the lion’s share of the hard work onto other committee members. But the committee structure is proving a fruitful option in many instances.
In contrast, the fruit of the poisoned trees of coercion, which thrive in all too many academic orchards, is the knowledge of commodified goods and professional evils.
Heather Dubrow is the John D. Boyd SJ Chair in the Poetic Imagination at Fordham University and taught previously at several other institutions. Among her publications are six single-authored monographs, a co-edited collection of essays, an edition of As You Like It, and a volume of her own poetry.
In their effort to improve outcomes, colleges and universities are becoming more sophisticated in how they analyze student data – a promising development. But too often they focus their analytics muscle on predicting which students will fail, and then allocate all of their support resources to those students.
That’s a mistake. Colleges should instead broaden their approach to determine which support services will work best with particular groups of students. In other words, they should go beyond predicting failure to predicting which actions are most likely to lead to success.
Higher education institutions are awash in the resources needed for sophisticated analysis of student success issues. They have talented research professionals, mountains of data and robust methodologies and tools. Unfortunately, most resource-constrained institutional research (IR) departments are focused on supporting accreditation and external reporting requirements.
Some institutions have started turning their analytics resources inward to address operational and student performance issues, but the question remains: Are they asking the right questions?
Colleges spend hundreds of millions of dollars on services designed to enhance student success. When making allocation decisions, the typical approach is to identify the 20 to 30 percent of students who are most “at risk” of dropping out and throw as many support resources at them as possible. This approach involves a number of troubling assumptions:
The most “at risk” students are the most likely to be affected by a particular form of support.
Every form of support has a positive impact on every “at risk” student.
Students outside this group do not require or deserve support.
What we have found over 14 years working with students and institutions across the country is that:
There are students whose success you can positively affect at every point along the risk distribution.
Different forms of support impact different students in different ways.
The ideal allocation of support resources varies by institution (or more to the point, by the students and situations within the institution).
Another problem with a risk-focused approach is that when students are labeled “at risk” and support resources are directed to them on that basis, asking for or accepting help becomes seen as a sign of weakness. When tailored support is provided to all students, even the most disadvantaged are better off. The difference is a mindset of “success creation” versus “failure prevention.” Colleges must provide support without stigma.
To better understand impact analysis, consider Eric Siegel’s book Predictive Analytics. In it, he talks about the Obama 2012 campaign’s use of microtargeting to cost-effectively identify groups of swing voters who could be moved to vote for Obama by a specific outreach technique (or intervention), such as a piece of direct mail or a knock on their door -- the “persuadable” voters. The approach involved assessing what proportion of people in a particular group (e.g., high-income suburban moms with certain behavioral characteristics) was most likely to:
vote for Obama if they received the intervention (positive impact subgroup)
vote for Obama or Romney irrespective of the intervention (no impact subgroup)
vote for Romney if they received the intervention (negative impact subgroup)
The campaign then leveraged this analysis to focus that particular intervention on the first subgroup.
This same technique can be applied in higher education by identifying which students are most likely to respond favorably to a particular form of support, which will be unmoved by it and which will be negatively impacted and drop out.
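The subgroup logic described above can be sketched as a minimal “impact” (uplift) model: within each student segment of a randomized pilot, compare completion rates between students who received a given form of support and those who did not, then target the support where the estimated lift is largest. Every segment name and count below is invented purely for illustration, not drawn from any real study.

```python
from collections import defaultdict

# Hypothetical pilot records: (segment, got_support, completed).
# Segments and tallies are illustrative assumptions only.
records = [
    # First-generation students: support appears to help substantially.
    *[("first_gen", True, True)] * 60, *[("first_gen", True, False)] * 40,
    *[("first_gen", False, True)] * 35, *[("first_gen", False, False)] * 65,
    # High-GPA students: they complete regardless (the "no impact" subgroup).
    *[("high_gpa", True, True)] * 90, *[("high_gpa", True, False)] * 10,
    *[("high_gpa", False, True)] * 88, *[("high_gpa", False, False)] * 12,
    # Overloaded part-timers: outreach backfires slightly (negative impact).
    *[("overloaded", True, True)] * 30, *[("overloaded", True, False)] * 70,
    *[("overloaded", False, True)] * 38, *[("overloaded", False, False)] * 62,
]

def uplift_by_segment(records):
    """Estimated uplift per segment: P(complete | support) - P(complete | none)."""
    counts = defaultdict(lambda: {True: [0, 0], False: [0, 0]})  # [completions, total]
    for segment, supported, completed in records:
        counts[segment][supported][0] += completed
        counts[segment][supported][1] += 1
    uplift = {}
    for segment, arms in counts.items():
        p_supported = arms[True][0] / arms[True][1]
        p_control = arms[False][0] / arms[False][1]
        uplift[segment] = p_supported - p_control
    return uplift

uplift = uplift_by_segment(records)
# Direct the intervention only where the estimated uplift is positive,
# largest first -- regardless of where raw dropout risk is highest.
targets = [seg for seg, u in sorted(uplift.items(), key=lambda kv: -kv[1]) if u > 0]
```

Note that the highest-risk segment in this toy data (the overloaded part-timers) is exactly the one the model declines to target, which is the essay’s point: risk and impact are different questions.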
Of course, impact modeling is much more difficult than risk modeling. Nonetheless, if our goal is to get more students to graduate, it’s where we need to focus analytics efforts.
The biggest challenge with this analysis is that it requires large, controlled studies involving multiple forms of intervention. The need for large controlled studies is one of the key reasons why institutional researchers focus on risk modeling. It is easy to track which students completed their programs and which did not. So, as long as the characteristics of incoming students aren’t changing much, risk modeling is rather simple.
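By contrast, the “rather simple” risk modeling this paragraph describes can be as little as a historical completion-rate lookup: the dropout risk for a new student is the share of past students with the same observable profile who failed to complete. The profiles and records below are hypothetical stand-ins, chosen only to make the mechanics concrete.

```python
# Hypothetical historical records: (profile, completed).
# Field values are illustrative assumptions, not real data.
history = [
    (("part_time", "first_gen"), False),
    (("part_time", "first_gen"), False),
    (("part_time", "first_gen"), True),
    (("full_time", "first_gen"), True),
    (("full_time", "first_gen"), False),
    (("full_time", "continuing_gen"), True),
    (("full_time", "continuing_gen"), True),
]

def dropout_risk(profile, history):
    """Fraction of past students with this profile who did not complete."""
    outcomes = [completed for p, completed in history if p == profile]
    if not outcomes:
        return None  # no comparable students on record
    return 1 - sum(outcomes) / len(outcomes)

risk = dropout_risk(("part_time", "first_gen"), history)
```

The limitation is exactly the one the next paragraph raises: a risk score like this says nothing about which intervention, if any, would change the outcome.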
However, once you’ve assessed a student’s risk, you’re still left trying to answer the question, “Now what do I do about it?” This is why impact modeling is so essential. It gives researchers and institutions guidance on allocating the resources that are appropriate for each student.
There is tremendous analytical capacity in higher education, but we are currently directing it toward the wrong goal. While it’s wonderful to know which students are most likely to struggle in college, it is more important to know what we can do to help more students succeed.
Dave Jarrat is a member of the leadership team at InsideTrack, where he directs marketing, research and industry relations activities.
Last month the White House hosted a higher education summit to draw attention to the problem of college attainment among low-income students. The summit focused in particular on “undermatching,” in which high-achieving, low-income students fail to apply to highly selective colleges, and instead attend less competitive institutions.
It is without question that all students deserve a chance to attend a college that will give them the best shot in life, and I applaud efforts to better inform students about their choices. However, while we are rightly concerned about directing more underserved students to selective colleges, we should also recognize that sending more students to these colleges will not improve the overall quality of our higher education system.
The reality is that even in a perfectly matched world, millions of low-income, minority, first-generation, and immigrant students will continue to enroll in community colleges. If we want to improve educational outcomes among these groups of students, then we need to improve the colleges so many of them will attend.
Community colleges have been extremely successful at opening the doors to college for disadvantaged students, but thus far, they have had less success in helping them graduate. Less than 40 percent of students who start in community colleges complete a credential in six years. The success rates are worse for low-income and minority students.
So how can community colleges deliver better quality for their students? It will not be easy. Over the last 15 years, faculty and administrators have worked tirelessly to implement reforms in teaching and support services. These efforts have failed to raise completion rates.
A critical reason for this disappointing outcome is that reform initiatives have focused too narrowly on one aspect of the student experience, such as entry, remedial education or the first semester. While many initiatives have led to some success for targeted students, these improvements have been too small and too short-lived to affect overall college performance.
Research conducted by the Community College Research Center (CCRC) at Columbia University’s Teachers College and others makes abundantly clear that improving services like developmental education is necessary but not sufficient: the entire student community college experience must be strengthened.
Some community colleges are beginning to recognize this imperative, and are entering a new phase of far more comprehensive and transformative reform. In particular, some are at the forefront of implementing what CCRC terms the guided pathways model.
That approach responds to the fact that most community college students need far more structure and guidance; it attends to all aspects of the student experience, from preparation and intake to completion. The model includes robust services to help students choose career goals and majors. It features the integration of developmental education into college-level courses and the organization of the curriculum around a limited number of broad subject areas that allows for coherent programs of study. And, importantly, it stresses the strong, ongoing collaboration between faculty, advisers and staff.
Initiatives such as the Gates-funded Completion by Design and Lumina's Finish Faster are advancing such comprehensive reforms by helping colleges and college systems create clear course pathways within programs of study that lead to degrees, transfer and careers.
The new Guttman Community College at the City University of New York (CUNY) -- perhaps the most ambitious example of a comprehensive approach to the community college student experience -- incorporates many elements of the guided pathways model. And CUNY’s ASAP program, which like Guttman takes a holistic approach to student success, has significantly improved associate degree completion rates.
Ambitious and comprehensive reforms are rare for good reason -- they are risky and difficult to implement. But they also offer the possibility of transformative improvement. Our frustration with the progress of reform in community colleges is not because skilled and dedicated people have not tried; rather, the reforms themselves have been self-limiting.
President Obama has rightly asked the nation to attend with renewed urgency to the problem of college attainment among low-income students. But the focus on undermatching is driven partly by a perception that the distribution of quality among colleges and universities is and will remain fixed.
This need not be so. Bold, large-scale reforms can improve institutions across the higher education system so that no matter where our neediest students enroll, they are ensured the best possible chance of success.
Thomas Bailey is director of the Community College Research Center at Teachers College, Columbia University.
The news that Purdue University likely overstated the impact of its early warning system, Course Signals, has cast doubt on the efficacy of a host of technology products intended to improve student retention and completion. In a commentary published in Inside Higher Ed, Mark Milliron responded by arguing that “next-generation” early warning systems use more robust analytics and will be likely to get better results.
We contend that even with extremely robust and appropriate analytics, programs like Course Signals may still fall short if their adoption ignores the most pressing aspect of electronic advising systems — their use on the front end, by advisers, faculty and students. Until more attention is paid to the messy, human side of educational technology, Course Signals — and other programs like it — will continue to show anemic impacts on student retention and graduation.
Over the past year, we have worked with colleges in the process of implementing Integrated Planning and Advising Systems (which include early warning systems like Course Signals). The adoption of early warning systems requires advisers, faculty and students to approach college success differently and should, in theory, refocus attention on how they engage with advising and support services. In practice, however, we have found that colleges consistently underestimate the challenge of ensuring that such systems are adopted effectively by end-users.
The concept of an early alert is far from new. In interviews, instructors and advisers have consistently reminded us that for years, students have received “early alert” feedback in the form of grades and midterm reports. Early warning systems may streamline this process, and provide the reports in a new format (a red light instead of a warning note, for example), but the warning itself isn’t terribly different.
What is potentially different about products like Course Signals is their ability to connect these course-level warnings to the broader student support services offered by the college. If early warning signals are shared across college personnel, and if those warnings serve to trigger new behaviors on their part, then we are likely to see changed student behavior and success. In other words, sending up a red light isn’t likely to influence retention. But if that red light leads to advisers or tutors reaching out to students and providing targeted support, we might see bigger impacts on student outcomes.
Milliron says, for example, that with predictive analytics, “student[s] might be advised away from a combination of courses that could be toxic for him or her.” But such advising doesn’t happen spontaneously: it requires advisers to be more proactive in preparing for and conducting each advising session. They must examine a student’s early warning profile, program plan and case file prior to the session; they must reframe how they present course choices to students; and they have to rethink what the best course combinations are for students with varying educational and career goals, as well as learning styles and abilities. Finally, they may have to link students to additional resources on campus — such as tutoring — and colleges need to ensure these services exist and are of high quality.
For this process to occur, advisers need to be well-versed in how to use the analytics, and be encouraged to move past registering students for the most common set of courses to courses that make sense for the individual. But because most colleges remain uncertain about the process changes that should occur when they adopt early warning systems, they are unable to provide the training that would help faculty and advisers make potentially transformative adjustments in their practice.
Even if colleges do adequately prepare faculty and advisers for this transition, there is much we still don’t know about how students will perceive and use the data and messages they receive from early warning systems. These unknowns may influence the extent to which the systems impact student outcomes.
For example, if students perceive early warnings as a reprimand rather than an opportunity to get help, they may ignore the signals or avoid efforts of college personnel to contact them. To anticipate and mitigate these kinds of potentially negative responses, it is important to understand how all students, not just those who use and enjoy early alert systems, experience and react to such signals. As Milliron notes, we need to figure out how to send the right message to the right people in the right way.
Early warning systems are only tools, and colleges will have to pay closer attention to changing end-user culture in order to maximize their effectiveness. Currently, colleges are skipping this step. At the end of the day, even the best system and the best data depend on people to translate them into actions and behaviors that can influence student retention and completion.
Melinda Mechur Karp is a senior research associate at the Community College Research Center at Columbia University's Teachers College. Also contributing to the essay were Jeff Fletcher, a senior research assistant, Hoori Santikian Kalamkarian, a research associate, and Serena Klempin, a research associate.