In a rare moment of inattention a couple of years ago, I let myself get talked into becoming the chair of my campus’s Institutional Review Board. Being IRB chair may not be the best way to endear oneself to one’s colleagues, but it does offer an interesting window into how different disciplines conceive of research and the many different ways that scholarly work can be used to produce useful knowledge.
It has also brought home to me how utterly different research and assessment are. I have come to question why anyone with any knowledge of research methods would place any value on the results of typical learning outcomes assessment.
IRB approval is required for any work that involves both research and human subjects. If both conditions are met, the IRB must review it; if only one is present, the IRB can claim no authority. In general, it’s pretty easy to tell when a project involves human subjects, but distinguishing nonresearch from research, as it is defined by the U.S. Department of Health and Human Services, is more complicated. It depends in large part on whether the project will result in generalizable knowledge.
Determining what is research and what is not is interesting from an IRB perspective, but it has also forced me to think more about the differences between research and assessment. Learning outcomes assessment looks superficially like human subjects research, but there are some critical differences. Among other things, assessors routinely ignore practices that are considered essential safeguards for research subjects as well as standard research design principles.
A basic tenet of ethical human subjects research is that the research subjects should consent to participate. That is why obtaining informed consent is a routine part of human subjects research. In contrast, students whose courses are being assessed are typically not asked whether they are willing to participate in those assessments. They are simply told that they will be participating. Often there is what an IRB would see as coercion. Whether it’s 20 points of extra credit for doing the posttest or embedding an essay that will be used for assessment in the final exam, assessors go out of their way to compel participation in the study.
Given that assessment involves little physical or psychological risk, the coercion of assessment subjects is not that big of a deal. What is more interesting to me is how assessment plans ignore most of the standard practices of good research. In a typical assessment effort, the assessor first decides what the desired outcomes in his course or program are. Sometimes the next step is to determine what level of knowledge or skill students bring with them when they start the course or program, although that is not always done. The final step is to have some sort of posttest or “artifact” -- assessmentspeak for a student-produced product like a paper rather than, say, a potsherd -- which can be examined (invariably with a rubric) to determine if the course or program outcomes have been met.
On some levels, this looks like research. The pretest gives you a baseline measurement, and then, if students do X percent better on the posttest, you appear to have evidence that they made progress. Even if you don’t establish a baseline, you might still be able to look at a capstone project and say that your students met the declared program-level outcome of being able to write a cogent research paper or design and execute a psychology experiment.
From an IRB perspective, however, this is not research. It does not produce generalizable knowledge, in that the success or, more rarely, failure to meet a particular course or program outcome does not allow us to make inferences about other courses or programs. So what appears to have worked for my students, in my World History course, at my institution, may not provide any guidance about what will work at your institution, with your students, with your approach to teaching.
If assessment does not offer generalizable knowledge, does assessment produce meaningful knowledge about particular courses or programs? I would argue that it does not. Leaving aside arguments about whether the blunt instrument of learning outcomes can capture the complexity of student learning or whether the purpose of an entire degree program can be easily summed up in ways that lend themselves to documentation and measurement, it is hard to see how assessment is giving us meaningful information, even concerning specific courses or programs.
First, the people who devise and administer the assessment have a stake in the outcome. When I assess my own course or program, I have an interest in the outcome of that assessment. If I create the assessment instrument, administer it and assess it, my conscious or even unconscious belief in the awesomeness of my own course or program is certain to influence the results. After all, if my approach did not already seem to be the best possible way of doing things, as a conscientious instructor, I would have changed it long ago.
Even if I were the rare human who is entirely without bias, my assessment results would still be meaningless, because I have no way of knowing what caused any of the changes I have observed. I have never seen a control group used in an assessment plan. We give all the students in the class or program the same course or courses. Then we look at what they can or cannot do at the end and assume that the course work is the cause of any change we have observed. Now, maybe this is a valid assumption in a few instances, but if my history students are better writers at the end of the semester than they were at the beginning of the semester, how do I know that my course caused the change?
It could be that they were all in a good composition class at the same time as they took my class, or it could even be the case, especially in a program-level assessment, that they are just older and their brains have matured over the last four years. Without some group that has not been subjected to my course or program to compare them to, there is no compelling reason to assume it’s my course or program that’s causing the changes that are being observed.
If I developed a drug and then tested it myself without a control group, you might be a bit suspicious about my claims that everyone who took it recovered from his head cold after two weeks and thus that my drug is a success. But these are precisely the sorts of claims that we find in assessment.
I suspect that most academics are consciously or at least unconsciously aware of these shortcomings and thus uneasy about the way assessment is done. That no one says anything reflects the sort of empty ritual that assessment is. Faculty members just want to keep the assessment office off their backs, the assessment office wants to keep the accreditors at bay and the accreditors need to appease lawmakers, who in turn want to be able to claim that they are holding higher education accountable.
IRBs are not supposed to critique research design unless it affects the safety of human subjects. However, they are supposed to weigh the balance between the risks posed by the study and the benefits of the research. Above all, you should not waste the time or risk the health of human subjects with research that is so poorly designed that it cannot produce meaningful results.
So, acknowledging that assessment is not research and not governed by IRB rules, it still seems that something silly and wasteful is going on here. Why is it acceptable that we spend more and more time and money -- time and money that have real opportunity costs and could be devoted to our students -- on assessment that is so poorly designed that it does not tell us anything meaningful about our courses or students? Whose interests are really served by this? Not students. Not faculty members.
It’s time to stop this charade. If some people want to do real research on what works in the classroom, more power to them. But making every program and every faculty member engage in nonresearch that yields nothing of value is a colossal, frivolous waste of time and money.
Erik Gilbert is a professor of history at Arkansas State University.
So you’ve published a paper on monetary theory, snagged that fellowship for research in Rome or received an award for best teacher of the year. Or maybe you’ve just served on seven committees this past semester, from tenure review to curriculum reform, and colleagues ought to appreciate that.
But they don’t, at least not to your face. And the pathetic “Faculty News” page of your department website just doesn’t cut it. (Take a look at the listing for Professor Dale’s publication in Southwest Annandale Historical Society Notes: doubly out of date, since both the professor and the journal are extinct.)
You want to be known as the foremost expert on forensic linguistics or the one who got a National Endowment for the Humanities grant to study Rilke -- back in 2011, but still. What to do? Think of Berkeley’s famous proposition, applied to academics: “If a paper is published in a journal and no one knows about it, does it make a sound?” Is it OK to toot your own horn? In this era of Facebook, are you kidding?
Consider the humblebrag, a seemingly modest utterance that’s actually a boast. The British have excelled in this charming self-deprecation for centuries: “Oh, I don’t suppose many people were in the running this year,” for instance, to explain why you won the London marathon. Only this is higher education in 2016, with access to Twitter.
Think brassier, think of that academic review coming up in 2017, and think within a 140-character limit:
Gosh, if I don’t send in that manuscript to Oxford by this fall, they’re gonna kill me!
I don’t see how I’m going to get any work done during my fellowship in Belize.
Darned if I know why the Fulbright committee chose my proposal over so many deserving others.
You know, if it weren’t for all the grateful letters that I’ve gotten from students over the years, I’d’ve given up teaching a long time ago.
Never mind all my publications. The Smoot Teaching Award I got this year makes me realize what really matters in life.
I keep thinking there must be some mistake: Why would the Guggenheim committee even consider my work on medieval stairways?
You know, I never set out to write a best seller. Everyone knows what people in academe think of that.
Promotion to full professor isn’t much, I guess, but I try to see it as an affirmation of all I’ve done here.
I don’t anticipate the deanship will give me much power, but I do intend to take the responsibility seriously.
It’s not fashionable to talk about service, I know, which is why I don’t discuss all the behind-the-scenes work I do for the college.
All that work for such a simple title: provost.
I’m sure plenty of people could have delivered the keynote address at this conference, but I’m the one who got suckered into it.
They said I’m the youngest program director they’ve ever had -- must be their code word for inexperienced.
The students in my econ class all say that I’m their favorite teacher, but you know what that means.
As an adjunct, I could just phone in my performance, but I always have to put in 200 percent. Sigh. That’s just me.
That’s what I told Mike -- I mean, the chancellor. No idea why he listens to me. Hey, I’m just custodial staff.
David Galef directs the creative writing program at Montclair State University. His latest book, Brevity: A Flash Fiction Handbook, has just come out from Columbia University Press.
In his autobiography, Benjamin Franklin describes how, as a striving young man in Philadelphia, he practiced a quite literal variety of moral bookkeeping. Having determined 13 virtues he ought to cultivate (temperance, frugality, chastity, etc.), he listed them on a table or grid, with the seven days of the week as its horizontal element. At night, before bed, he would make a mark for each time he had succumbed to a vice that day, in the row for the virtue so compromised.
A dot in the ledger was a blot on his character. Franklin explicitly states that his goal was moral perfection; the 13th virtue on his list was humility, almost as an afterthought. But without claiming to have achieved perfection, Franklin reports that his self-monitoring began to show results. Seeing fewer markings on the page from week to week provided a form of positive reinforcement that made Franklin, as he put it in his late 70s, “a better and a happier man than I otherwise should have been if I had not attempted it.”
Franklin’s feedback system was a prototype of the 21st-century phenomenon analyzed by Deborah Lupton in The Quantified Self (Polity), a study of how digital self-tracking is insinuating itself into every nook and cranny of human experience. (The author is a research professor in communication at the University of Canberra in Australia.) A device or application is available now for just about any activity or biological function you can think of (if not, just wait), generating a continuous flow of data. It’s possible to keep track of not only what you eat but where you eat it, at what time and how much ground was covered in walking to and from the restaurant, assuming you did.
In principle, the particulars of your digestive and excretory processes could also be monitored and stored: Lupton mentions “ingestible digital tablets that send wireless signals from inside the body to a patch worn on the arm.” She does not elaborate, but a little follow-up shows that their potential medical value is to provide “an objective measure of medication adherence and physiologic response.” Wearable devices can keep track of alcohol consumption (as revealed by sweat), as well as every exertion and benefit from a fitness routine. Sensor-equipped beds can monitor your sleep patterns and body temperature, not to mention “sounds and thrusting motions” possibly occurring there.
Self-tracking in the digital mode yields data about the individual characterized by harder-edged objectivity than even the most brutally honest self-assessment might allow. For Franklin, the path to self-improvement involved translating the moral evaluation of his own behavior into an externalized, graphic record; it was an experiment with the possibility of increasing personal discipline through enhanced self-awareness. The tools and practices that Lupton discusses -- the examples cited above are just a small selection -- expand upon Franklin’s sense of the self as something to be quantified, controlled and optimized. The important difference lies in how comprehensive and automated the contemporary methods are (many of the apps and devices can run in the background of everyday life, unnoticed most of the time), as well as how much more strongly they imply a technocratic sense of the world.
“The body is represented as a machine,” writes Lupton, “that generates data requiring scientific modes of analysis and contains imperceptible flows and ebbs of data that need to be identified, captured and harnessed so that they may be made visible to the observer.” But not only the body: other forms of self-tracking are available to monitor (and potentially to control) productivity, mood and social interaction. One device, “worn like a brooch … listens to conversations with and around the wearer and lights up when the conversation refers to topics that the user has listed in the associated app.”
Along with the ability to monitor and control various dimensions of an individual’s existence, there is likely to come the expectation or obligation to do so. On this point, Lupton’s use of the idea of self-reflexivity (as developed by the social theorists Zygmunt Bauman, Ulrich Beck and Anthony Giddens) proves more compelling than her somewhat perfunctory and obligatory references to Michel Foucault on “technologies of self” or Christopher Lasch on “the culture of narcissism.” The digitally enhanced, self-monitoring 21st-century citizen must meet the challenge of continuously “seeking information and making choices about one’s life in a context in which traditional patterns and frameworks that once structured the life course have largely dissolved … Because [people] must do so, their life courses have become much more open, but also much more subject to threats and uncertainties,” especially “in a political context of the developed world -- that of neoliberalism -- that champions self-responsibility, the market economy and competition and where the state is increasingly withdrawing from offering economic support to citizens.”
In such a context, high-tech self-tracking can provide access to exact, objective self-knowledge about health, productivity, status (there are apps that keep track of your standing in the world of social media) and so on. Know thyself -- and control thy destiny! Or so it would seem, if not for a host of issues around who has ownership, use or control of the digital clouds that shadow us. Lupton points to a recent case in which lawyers won damages in a personal-injury suit using data from a physical fitness monitor: the victim’s numbers from before and after the accident were concrete testimony to its effect. Conversely, it is not difficult to imagine such data being subpoenaed and used against someone.
The unintended consequences may also take the form of changed social mores: “Illness, emotional distress, lack of happiness or lack of productivity in the workplace come to be represented primarily as failures of self-control or efficiency on the part of individuals, and therefore as requiring greater or more effective individual efforts -- including perhaps self-tracking regimens of increased intensity -- to produce a ‘better self.’” Advanced technology may offer innovative ways to dig ourselves out of the hole, with the usual level of success.
Lupton is not opposed to self-tracking any more than she is a celebrant of it, in the manner of a loopy technovisionary prophet who announces, “Data will become integral with our sensory, biological self. And as we get more and more connected, our feeling of being tied into one body will also fade, as we become data creatures, bodiless, angelized.” (I will avoid naming the source of that quotation and simply express hope that it was meant to be a parody of Timothy Leary.) Instead, The Quantified Self is a careful, evenhanded survey of a trend that is on the cusp of seeming so ubiquitous that we’ll soon forget how utterly specific the problems associated with this aspect of our sci-fi future are to the wealthy countries, and how incomprehensible they must seem to the rest of the planet.
In Georgia, where I teach, all of our campuses thankfully remain gun-free. While Texas legislators passed and its governor happily signed a law allowing concealed weapons on campus, my governor, Nathan Deal, vetoed a bill that would have done the same here in the Peach State. Faculty members in the rest of the country who will face similar bills as their legislatures meet again can learn important lessons from both states.
I was one of many faculty members who publicly fought the Georgia bill. We did that through op-eds, rallies and letters to elected officials. A few celebrities even joined our cause. As a professor of rhetoric and one of those who took an active role in stopping campus carry, I feel I am in a distinct position to offer lessons to others. They include:
The Higher Education Exception. I have already mentioned one reason Georgia does not have campus carry: the veto of Nathan Deal, a second-term Republican. His veto message offers faculty members some counterclaims to the ones gun supporters usually make.
Deal centered his veto on the oft-cited 2008 Supreme Court ruling in District of Columbia v. Heller that the late Justice Antonin Scalia wrote. This ruling is a favorite of the National Rifle Association and other similar groups who back campus carry bills. Deal noted that Scalia, in fact, supported bans on weapons in “sensitive places” like schools. Deal argued that the history of higher education in America and in our state supports this label.
A common refrain from campus carry advocates is the “contradiction” in laws about guns in states like Georgia: on one side of the street, at the strip mall, a person can carry a gun, but across the street, at the university, one can’t. But that isn’t, in fact, a contradiction. It is on purpose. Many of our other laws distinguish colleges and universities from other public institutions, even other educational ones. For example, FERPA, the Family Educational Rights and Privacy Act, means that parents lose rights over their child’s educational records when the child turns 18, the age at which many students enter higher education.
A good rhetorical move by faculty members in both Texas and Georgia was to point to the strong definitions of higher education that had proceeded from faculty senates. Some of the language of those statements showed up in Governor Deal’s veto. He wrote that “from the early days of our nation and state, colleges have been treated as sanctuaries of learning.” If anything, this clear and precise vision of education should serve faculty well as they address campus carry.
Past Is Prologue. Another reason for a victory in Georgia was particular to our state context, but it could still be applicable to similar states. In his veto message, Deal pointed out that, in 2014, Georgia passed a law allowing concealed weapons (with a permit) in many public places -- including bars and churches that authorize them, government buildings that don’t have security screenings, and K-12 schools. It was dubbed the “guns everywhere” law. Everywhere except colleges and universities. The very same Legislature that passed what the National Rifle Association called “the most comprehensive pro-gun bill in state history” in 2014 did not seem to think campus carry important then. Yet not two years later, it seemed to think otherwise.
Deal -- who signed the 2014 bill -- didn’t let that hypocrisy pass. Activists in other states should take note: use the Legislature’s actions as precedent. Why do they want campus carry now? Why didn’t they pursue it previously?
Faculty and Students United. Another reason Georgia has no guns on campus is the large amount of organized faculty activism. This also happened in Texas, and faculty members there should be applauded for their efforts, especially those who recently sued over the allowance of guns on the campus. Every state is different, and perhaps Texas’ rich history of gun ownership was too big an obstacle. There are many reasons why sound arguments don’t persuade.
In any case, to win against the pro-gun activists, faculty members must join ranks with students. At a rally at our capitol where I spoke against campus carry, students also spoke. Students created Facebook groups and held their own rallies. It was not merely the “liberal professors” who were against guns but the very group that the legislators wanted to keep safe. Students told those lawmakers thanks but no thanks. They counteracted the students whom the gun groups say are prompting their push for campus carry. And if it turns into a numbers game, the biggest group has some rhetorical power.
The most important lesson to be learned may be how to handle fear. Both sides have used it: fear of crime or fear of students. I understand both. But if we learned anything from the other side in this debate, fear-based arguments, while somewhat effective and energizing at times, usually put off the people whom we most need for support.
If we are intent on convincing legislators, especially those who support gun ownership, a better argument from faculty is the distinctiveness of higher education. This was Scalia’s argument. And if we intend to convince those students (or faculty) who support gun ownership, how can we reach them through fear of them? Our commonality is a better line of argument.
We must also not let the arguments we make sabotage our credibility to make them. One example that seemed to undermine faculty members was the publicizing of a University of Houston faculty presentation about campus carry that seemed to generalize students as volatile. The university quickly distanced itself from the presentation, saying it was a draft and wouldn’t make it into the final policy. But if we are painted as fearing students, the other side calls out for more protection. In other words, it leads quickly into “this is why faculty need a gun.” Fear divides us quickly.
Faculty members also at times have linked guns to academic freedom. This argument hasn’t worked because the public doesn’t fear loss of academic freedom, mainly because it only seems to be an individual benefit to professors. In other words, faculty members haven’t done a good job arguing to the public both nationally and locally how academic freedom and its sister, tenure, are part of the public good, not an individual benefit.
Campus Carry Lite. A compromise that Georgia debated at the same time allowed Tasers and stun guns on campuses. Mainly because gun-rights supporters saw it as a compromise, Governor Deal signed it into law. If your legislators are interested in compromise, that might be a good route. It has its own drawbacks, however, as many critics have noted -- one of which is that many states don’t require permits or training for such devices. Some states have made stun guns illegal. Finally, stun guns are fatal at times, making them as dangerous as firearms.
Celebrity Support. Georgia residents, such as the Indigo Girls, former R.E.M. front man Michael Stipe and actor and University of Georgia alumnus Tituss Burgess, came out against campus carry. This type of media attention seemed more respectable to the general public than the “cocks not Glocks” protest in Texas.
Finally, it is important for faculty to organize well before the first bill is filed. I recommend working with a group like Everytown for Gun Safety, founded in 2014 to advocate for gun control and against gun violence, which has been through the fight in Texas and Georgia. It encourages faculty to join its Educators for Gun Sense. Full disclosure: I have donated to and worked with this group.
Good organization can aid on the back end, too. While Texas had a year to think through its enactment, if our bill in Georgia had passed, we would have had just a few weeks. If a bill is to pass in your state, try to get some delay in its implementation. But if not, a well-organized faculty doesn’t have to wait for its administration to come up with a plan.
Campus carry bills are not going away. With a new governor in two years, Georgia will face this again. Perhaps even sooner. Faculty members across the nation must strategize now about the upcoming legislative session.
Matthew Boedy is an assistant professor of rhetoric and composition at the University of North Georgia in Gainesville.
Current events have highlighted systemic racism in America yet again, and social media feeds continue to be inundated with posts about racism and police brutality. Often, these online conversations enter the classroom, lecture hall or other communal spaces within the university. This can leave administrators, faculty members and students to fend for themselves during conversations that are, by their very nature, heated and laden with emotional content.
To address this, many people have turned to the language of privilege to structure conversations and unpack racism for those who may be predisposed to deny its very existence. Regardless of how popular the term “privilege” has become, I have never found it particularly useful in discussions, because it is too generic and abstract.
In fact, I believe that “privilege” is a sterile word that does not grapple with the core of the problem. If you are white, you do not have “white” privilege. If you are male, you do not have “male” privilege. If you are straight, you do not have “straight” privilege. What you have is advantage. The language of advantage, I propose, is a much cleaner and more precise way to frame discussions about racism (or sexism, or most systems of oppression).
Any and all advantages one can have are based -- in part, or in whole -- on a system of oppression designed to elevate certain innocuous expressions of humanity over others (skin color, sexual preference and so on). Thus, the language of advantage begins by first enumerating one’s advantages and understanding their origins.
For example, I am advantaged as a male. That advantage affords me a higher salary on average when compared to women, regardless of talent, which in turn affords the further advantage of enabling me to build wealth. If I were white, my advantages would grow. In the academy, I am also, perplexingly, better equipped to take advantage of paternity leave. Being male also enables me to express my opinions as though they were fact -- my opinions in certain spaces are generally not questioned, or if they are, it is not assumed that I am wrong.
Those are simple examples, but they illustrate the point. Advantages can be summed up in a way that can generate a net advantage or disadvantage in certain spaces. This exercise is similar to a “privilege walk.” But it is different in that any advantages will not just net me a meaningless step forward in comparison to my peers. Thinking in this way forces me to understand what my advantages can, in fact, buy.
The distinction between “privilege” and “advantage” is important because “privilege” is not a particularly useful phrase to incite change in the minds or actions of others. No one wants to give up privileges. The entire idea of a privilege is based on possessing a special status that is somehow deserved. Privileges feel good.
Think about all of your privileges. Do you want to give them up? Does giving them up make you feel like you have somehow done someone a favor? (“Here you go … make sure you use this well.”) Or does giving up a “privilege” seem incoherent? It might, because generally privileges are given and taken by someone else. They are earned, and are seldom bad things to have.
Now try shifting your language to that of advantages. Ask yourself, “What advantages do I have over that person over there?” That question is much easier to answer and yields more nuanced responses. If I answer for myself, I can readily see that not all advantages are inherently problematic on their face. As a tall person I am advantaged in some spaces (e.g., reaching up to grab something from the high shelf in a supermarket), and disadvantaged in others (e.g., sitting in a cramped seat on an airplane). Yet if one looks under the surface, one can see that in both circumstances my (dis)advantage is predicated on design choices that are outside of my control. They are systemic. (It is also silly to say that I am tall privileged.)
What about a wealthy high school student who scored well on their SAT? They could unpack their success by understanding their advantage, for example: “Yes, my SAT scores are higher than someone else’s, but that may be because I have advantages in schooling that are predicated on the wealth of my community and/or parents. My schools are better, and I had access to tutoring. Moreover, some of that wealth is a result of oppressing people of color by historically denying them the ability to buy property in nicer areas, thus limiting their capacity to build and transmit wealth to their children. Those advantages are unearned, yet I still benefit from them. So, no, I won’t get bent out of shape if someone else with lower SAT scores is admitted into this fancy college and I’m not.”
The above example is more complex than my innocuous example about my height, but both have the same structure. They both require situating an advantage in a larger sociocultural context. While this is possible using the language of privilege, doing so can get clunky very quickly, and it can shut down conversations before they become meaningful.
Unpacking systematically unfair systems through the language of advantage affords nuance. The poor white farmer lacks economic advantage but still possesses white advantage, and he can thus interact with law enforcement without fear. The wealthy black businessperson lacks racial advantage but can mitigate some of the negative effects of that through the strategic use of wealth. The difference? The white farmer will always be white. The black businessperson may not have always been wealthy, may lose that wealth, and that wealth can be ignored by a more powerful government.
The language of advantage also implies intersectionality, which allows for a better understanding of one’s net advantage. For example, I am a Mexican-American man. I do not have “male privilege.” I am a man, and that affords certain unjust advantages when it comes to the salary I can earn and where I can work. However, for a person of color that salary may come with expectations for additional service that, for all its merit, can be distracting and lead to less productivity.
All this leads to a certain uncomfortable truth: we are not -- and have never been -- equal when it comes to the advantages we possess. All lives do not matter equally in practice (although they should). It is time we adopt language around racism, sexism, etc., that helps move the conversation forward. Only then can we begin to measure and understand the mechanisms of inequality that lead to needless suffering.
When we shift the language to that of advantages and disadvantages, it foregrounds how unjust and arbitrary some of those advantages are -- while also allowing us to quantify relative (dis)advantage better. The language of privilege, on the other hand, obfuscates the systems of oppression it is meant to highlight. It is time we move on from using it.
Stephen J. Aguilar is a provost postdoctoral scholar for faculty diversity in informatics and digital knowledge at the Rossier School of Education at the University of Southern California. You can follow him on Twitter @stephenaguilar.
Submitted by Sarah Bray on November 15, 2016 - 3:00am
Is English 101 really just English 101? What about that first lab? Is a B or C in either of those lower-division courses a bellwether of a student’s likelihood to graduate? Until recently, we didn’t think so, but more and more, the data are telling us yes. In fact, insights from our advanced analytics have helped us identify a new segment of at-risk students hiding in plain sight.
It wasn’t until recently that the University of Arizona discovered this problem. As we combed through volumes of academic data and metrics with our partner, Civitas Learning, it became evident that students who seemed poised to graduate were actually leaving at higher rates than we could have foreseen. Why were good students -- students with solid grades in their lower-division foundational courses -- leaving after their first, second or even third year? And what could we do to help them stay and graduate from UA?
There’s a reason it’s hard to identify which students fall into this group: they simply don’t exhibit the traditional warning signs as defined by the retention experts. These students persist into the higher years but never graduate, even though they are strong students. They persist past their first two years, and over 40 percent have GPAs above 3.0 -- so how does one diagnose them as at risk when every metric indicates that they’re succeeding? Now we’re taking a deeper look at data from the entire curriculum to find clues about what these students really need and even to redefine our notion of what “at risk” really means.
Lower-division foundational courses are a natural starting point for us. These are the courses where basic mastery -- of a skill like writing or the scientific process -- begins, and mastery of these basics becomes ever more necessary over the years. Writing, for instance, becomes more, not less, important over students’ academic careers. A 2015 National Survey of Student Engagement at UA indicated that freshmen are assigned 55 pages of writing in the academic year, compared to 76 pages for seniors. As a freshman or sophomore, falling even slightly behind can hurt you later on.
To wit, when a freshman gets a C in English 101, it doesn’t seem like a big deal -- why would it? She’s not at risk; she still has a 3.0, after all. But this student has unintentionally stepped into an institutional blind spot, because she’s a strong student by all measures. Our data analysis now shows that this student may persist until she hits a wall, usually in her major and upper-division courses, that is often difficult to overcome.
Let’s fast-forward two years, then, when that same freshman is a junior enrolled in demanding upper-level classes. Her problem, a lack of writing command, has compounded into a series of C’s or D’s on research papers. A seemingly strong student is now at risk of not persisting, and her academic path becomes much less clear. We all thought she was on track to graduate, but now what? From that point, she may change her major, transfer to another institution or even exit college altogether. In the past, we would never have considered wraparound support services for students who earned a C in an intro writing course or a B in an intro lab course, but today we understand that we have to be ready to provide a deeper level of academic support across the entire life cycle of an undergrad.
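In pseudocode terms, the blind spot described above can be captured with a simple filter: students whose overall GPA looks healthy but who earned a C or lower in a foundational course. The sketch below is purely illustrative -- the course codes, grade scale, roster data and the 3.0 threshold are assumptions of ours, not the University of Arizona’s actual Civitas Learning models, which draw on far richer data.

```python
# Hedged sketch: flag "hidden at-risk" students -- strong overall GPA,
# but a C or lower in a foundational lower-division course.
# All names, codes and thresholds here are hypothetical.

GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "E": 0.0}
FOUNDATIONAL = {"ENGL101", "CHEM151"}  # assumed foundational course codes

def hidden_at_risk(students):
    """Return names of students with a GPA of 3.0 or higher who
    nonetheless earned a C or lower in any foundational course."""
    flagged = []
    for s in students:
        grades = s["grades"]
        gpa = sum(GRADE_POINTS[g] for g in grades.values()) / len(grades)
        weak_foundation = any(
            GRADE_POINTS[grades[course]] <= 2.0
            for course in FOUNDATIONAL & grades.keys()
        )
        if gpa >= 3.0 and weak_foundation:
            flagged.append(s["name"])
    return flagged

roster = [
    {"name": "Ana", "grades": {"ENGL101": "C", "MATH112": "A", "HIST101": "A"}},
    {"name": "Ben", "grades": {"ENGL101": "A", "MATH112": "B", "HIST101": "B"}},
]
print(hidden_at_risk(roster))  # Ana: 3.33 GPA, yet a C in ENGL101
```

Ana would pass every traditional GPA-based screen, which is exactly why a rule like this, however crude, surfaces students the usual dashboards miss.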
Nationally, institutions like ours have developed many approaches to addressing the classic challenges of student success, building an infrastructure of broad institutional interventions like centralized tutoring, highly specialized support staff, supplemental classes and more. Likewise, professors and advisers have become more attuned to responding to the one-on-one needs of students who may find themselves in trouble. There’s no doubt that this high/low approach has made an impact, and our students have measurably benefited from it. But to assist students caught in the middle -- those who by all measures are already “succeeding” -- we have to develop a more comprehensive institutional approach that works at the intersections of curricular innovation and wider student support.
Today, we at UA are adding a new layer to the institutional and one-to-one approaches already in place. In our courses, we are pushing to ensure that mastery matters more than a final grade by developing metrics and models that are vital to student learning. This, we believe, will lead to increases in graduation rates. We are working hand in hand with college faculty members, administrators and curriculum committees, arming those partners with the data necessary to develop revisions and supplementary support for the courses identified as critical to graduation rather than term-over-term persistence. We are modeling new classroom practices through the expansion of student-centered active classrooms and adaptive learning to better meet the diverse needs of our students.
When mastery is what matters most, the customary objections to at-risk student intervention matter less. Grade inflation by the instructor and performing for the grade by the student become irrelevant. A foundational course surrounded by the kind of support a student often finds in lower-division courses is not an additional burden to the student but an essential experience. Although the approach adds pressure on faculty and staff, it must be leavened with the resources that help both instructors and students succeed.
This is a true universitywide partnership to help a population of students who have found themselves unintentionally stuck in the middle. We must be data informed, not data driven, in supporting our students, because when our data are mapped with a human touch, we can help students unlock their potential in ways even they couldn’t have imagined.
Angela Baldasare is assistant provost for institutional research. Melissa Vito is senior vice president for student affairs and enrollment management and senior vice provost for academic initiatives and student success. Vincent J. Del Casino Jr. is provost of digital learning and student engagement and associate vice president of student affairs and enrollment management at the University of Arizona.