As the CEO of a tech start-up and a former professor, here’s what keeps me awake at night: half of college students pursuing degrees in science, technology, engineering and math end up leaving those fields and switching to another major. That is disturbing, not only because I am personally passionate about STEM innovators’ potential to improve lives, but also because it is no secret that we are in dire need of a STEM-proficient work force. If we continue at this rate of attrition, in the next decade, America will need approximately a million more STEM professionals than the field will produce. While we’re pumping much-needed investments into ensuring more K-12 students have access to worthwhile math and computer science education, these investments will mean very little if students abandon STEM once they get to college.
If these skills are so critical, why are students failing to complete STEM degrees? And what can we do to reverse the trend?
In recent years, we’ve gained a better understanding of why students drop STEM majors. Many leave the field early -- even during the first courses they take as undergraduates -- because they struggle to earn good grades compared with their performance in non-STEM courses. Some students who struggle the most are discouraged to the point of dropping out of college altogether, which is a devastating outcome for students who once hoped to be computer programmers, doctors and engineers.
The other largest driver of STEM attrition is a lack of engagement with the material. There is a mismatch between today’s students, who understand and interact with the world through technology, and the outdated, two-dimensional delivery of information found in too many STEM courses. This is a shame, given that STEM subjects are inherently engaging, interactive and rooted in exploration.
In classrooms around the world, instructors are tapping into the potential of new technologies to address this learning deficit, and interactive learning models are proving most effective at increasing student engagement and boosting student performance. In STEM programs in particular, these new technologies have been grafted onto the established curriculum as one way to improve student retention rates, and the results are promising. Studies show improved student performance in these courses -- more A’s and B’s, fewer D’s and F’s -- with particularly significant gains for the lowest-performing students.
Interactive learning tools using web-based technology, such as digital textbooks and homework assignments, present endless possibilities to improve student engagement and achievement. And we’re not talking about digital copies of static text but rather materials that are alive with animation, graphics and instant-feedback question sets that emphasize learning through action. Such tools work because they disrupt the classic passive learning model and invite the student to become the doer.
Students taking these courses demonstrate not only improved results but also a greater desire to learn. In fact, most report a preference for interactive learning tools and choose to spend twice as much time with interactive textbooks as with traditional textbooks, even though there is less text. Students are staying on track and moving on with a deeper understanding of the content.
When I taught at the University of California, Davis, many of my colleagues faced the same issue: traditional textbooks and teaching resources are simply not as effective as we need them to be, leaving even the most talented instructors equipped with inadequate tools. Embracing web-based resources allows us to show movement, cause and effect, and coding outcomes much better than a PowerPoint, chalkboard or old-fashioned textbook ever could. And without the costs of printing and physical distribution, web-based interactive tools address yet another barrier to student retention -- the burden of soaring textbook prices -- head-on.
This is a pivotal moment in developing the STEM work force. We are witnessing a generation of students with inherent talent and capacity give up before they’ve even begun. If we don’t focus our efforts on supporting greater numbers of students to succeed in STEM degrees, we may find ourselves navigating a STEM shortage more stark than the gap we see today. Fortunately, instructors are keenly aware of the challenge and are cultivating the necessary ingenuity to steer this generation back to STEM and to success.
Smita Bakshi is the co-founder and CEO of zyBooks digital interactive textbooks and a former electrical and computer engineering professor at the University of California, Davis.
When the Supreme Court handed down its decision in the case Fisher v. University of Texas in July, university admissions officers cheered the affirmation of including race and ethnicity as admissions criteria when narrowly tailored to the institution’s mission. Despite the positive decision for affirmative action, however, university leaders are facing another challenge: making sure they have the right diversity practices in place to support the students they admit. Colleges and universities still have plenty of work to do to encourage students to pursue high-needs fields, like STEM and the biomedical sciences, where diversity is urgently needed.
In addition, universities continue to struggle with faculty diversity, which studies have shown is important not just for excellence in teaching and research but also for the overall campus climate. All the more reason, then, for us to redouble our efforts in researching and sharing effective practices for improving campus diversity -- and identifying ineffective practices that we should stop.
We’ve got a great base to start from. Take the many initiatives designed to ensure the success of underrepresented students -- programs designed precisely to ensure that we don’t lose them on their way to graduate school and the biomedical research work force. These efforts develop student talent along the educational and career continuum in biomedical and STEM fields, and ensure student persistence and success. Most important, some of these programs have developed successful models and gathered evaluative research to understand their success.
For example, the Meyerhoff Scholars Program at the University of Maryland, Baltimore County has been widely recognized for its successful development of many underrepresented students in the sciences. An evaluation of the program found that the key levers of success were financial support, identity formation as a member of the community of Meyerhoff Scholars, summer research activities and professional network development.
Another example is the Fisk-Vanderbilt Master’s-to-Ph.D. Bridge Program, which aims to address the barriers facing underrepresented students in matriculating to doctoral programs. The program has produced a number of high-profile graduates, including Fabienne Bastien, the first African-American woman to be published in Nature and the first African-American recipient of the NASA Hubble Fellowship. Half of the program’s Ph.D. graduates are female, and 83 percent are from underrepresented minority groups.
What would yet more research on these and other programs tell us about how to support the success of all students? We need more empirical evidence to close gaps in the existing research. We also need to bring exemplary practices to scale more quickly at many more institutions. For example, based on gaps in existing research we need to:
Identify effective interventions that universities can implement to reduce stereotype threat, a phenomenon that occurs when members of a disadvantaged group perform poorly when made aware of negative stereotypes about their group;
Learn more about how underrepresented students in STEM are accessing high-impact practices, such as internships and undergraduate research, and develop strategies for increasing participation; and
Identify effective teaching and learning methods that will boost underrepresented undergraduate student performance in required gateway courses.
These three areas, ripe for action, also demonstrate the gaps in the evidence. For example, high-impact practices are supported by a robust body of research, but less is known about how well underrepresented students are accessing these experiences. This is because most high-impact practices occur beyond the classroom, and it is difficult to track students’ participation and tie their experiences to academic outcomes.
In other cases, different interventions have been tested at the institutional level but have not been evaluated across institutions or in different contexts, such as adapting undergraduate interventions for graduate students. It’s a complex problem, and the research needs to get at that complexity.
Working together, the Association of Public and Land-grant Universities, its Coalition of Urban-Serving Universities, and the Association of American Medical Colleges have gathered the existing evidence in a recent report that also identifies what’s missing and where we need to go next.
To address these gaps in research, we will need more partners in government, industry, philanthropy and academe to take action -- testing the available models, researching new options, reporting on their results and revising approaches based on the evidence in hand.
Improving evidence for pilot interventions will help leaders build a case for adoption of those shown to be effective at many institutions. Learning more about potential barriers to access will help university leaders improve pathways into these experiences and track student outcomes more effectively.
And at a more basic level, probing more deeply into what works and what doesn’t in our efforts to support diversity will help us with a much more fundamental problem: we’ll get a clearer picture of the “systemic unfairness” that our blind spots prevent us from seeing, as Lisa Burrell pointed out in her Harvard Business Review article “We Just Can’t Handle Diversity.” More precise research will help us avoid such phenomena as hindsight bias, which, as Burrell describes, “causes us to believe that random events are predictable and to manufacture explanations for the inevitability of our achievements.”
In its decision in the Fisher case, the Supreme Court justices called on universities to “engage in constant deliberation and continued reflection” about how diversity is achieved. We go one step further: higher education institutions and their partners need to research as well as reflect, demonstrate as well as deliberate and put a fine point on existing findings to close the gaps in the research. Only then can we counter the challenges to our efforts to diversify the biomedical research work force and ensure that we’re doing everything we can to support the success of all students.
Jennifer C. Danek is the senior director for Urban Universities for HEALTH, a collaborative effort of the Association of Public and Land-grant Universities/Coalition of Urban-Serving Universities and the Association of American Medical Colleges. Marc Nivet is the former chief diversity officer for the Association of American Medical Colleges.
In a rare moment of inattention a couple of years ago, I let myself get talked into becoming the chair of my campus’s Institutional Review Board. Being IRB chair may not be the best way to endear oneself to one’s colleagues, but it does offer an interesting window into how different disciplines conceive of research and the many different ways that scholarly work can be used to produce useful knowledge.
It has also brought home to me how utterly different research and assessment are. I have come to question why anyone with any knowledge of research methods would place any value on the results of typical learning outcomes assessment.
IRB approval is required for any work that involves both research and human subjects. If both conditions are met, the IRB must review it; if only one is present, the IRB can claim no authority. In general, it’s pretty easy to tell when a project involves human subjects, but distinguishing nonresearch from research, as it is defined by the U.S. Department of Health and Human Services, is more complicated. It depends in large part on whether the project will result in generalizable knowledge.
Determining what is research and what is not is interesting from an IRB perspective, but it has also forced me to think more about the differences between research and assessment. Learning outcomes assessment looks superficially like human subjects research, but there are some critical differences. Among other things, assessors routinely ignore practices that are considered essential safeguards for research subjects as well as standard research design principles.
A basic tenet of ethical human subjects research is that the research subjects should consent to participate. That is why obtaining informed consent is a routine part of human subjects research. In contrast, students whose courses are being assessed are typically not asked whether they are willing to participate in those assessments. They are simply told that they will be participating. Often there is what an IRB would see as coercion. Whether it’s 20 points of extra credit for doing the posttest or embedding an essay that will be used for assessment in the final exam, assessors go out of their way to compel participation in the study.
Given that assessment involves little physical or psychological risk, the coercion of assessment subjects is not that big of a deal. What is more interesting to me is how assessment plans ignore most of the standard practices of good research. In a typical assessment effort, the assessor first decides what the desired outcomes in his course or program are. Sometimes the next step is to determine what level of knowledge or skill students bring with them when they start the course or program, although that is not always done. The final step is to have some sort of posttest or “artifact” -- assessmentspeak for a student-produced product like a paper rather than, say, a potsherd -- which can be examined (invariably with a rubric) to determine if the course or program outcomes have been met.
On some levels, this looks like research. The pretest gives you a baseline measurement, and then, if students do X percent better on the posttest, you appear to have evidence that they made progress. Even if you don’t establish a baseline, you might still be able to look at a capstone project and say that your students met the declared program-level outcome of being able to write a cogent research paper or design and execute a psychology experiment.
From an IRB perspective, however, this is not research. It does not produce generalizable knowledge, in that the success or, more rarely, failure to meet a particular course or program outcome does not allow us to make inferences about other courses or programs. So what appears to have worked for my students, in my World History course, at my institution, may not provide any guidance about what will work at your institution, with your students, with your approach to teaching.
If assessment does not offer generalizable knowledge, does assessment produce meaningful knowledge about particular courses or programs? I would argue that it does not. Leaving aside arguments about whether the blunt instrument of learning outcomes can capture the complexity of student learning or whether the purpose of an entire degree program can be easily summed up in ways that lend themselves to documentation and measurement, it is hard to see how assessment is giving us meaningful information, even concerning specific courses or programs.
First, the people who devise and administer the assessment have a stake in the outcome. When I assess my own course or program, I have an interest in the outcome of that assessment. If I create the assessment instrument, administer it and assess it, my conscious or even unconscious belief in the awesomeness of my own course or program is certain to influence the results. After all, if my approach did not already seem to be the best possible way of doing things, as a conscientious instructor, I would have changed it long ago.
Even if I were the rare human who is entirely without bias, my assessment results would still be meaningless, because I have no way of knowing what caused any of the changes I have observed. I have never seen a control group used in an assessment plan. We give all the students in the class or program the same course or courses. Then we look at what they can or cannot do at the end and assume that the course work is the cause of any change we have observed. Now, maybe this is a valid assumption in a few instances, but if my history students are better writers at the end of the semester than they were at the beginning of the semester, how do I know that my course caused the change?
It could be that they were all in a good composition class at the same time as they took my class, or it could even be the case, especially in a program-level assessment, that they are just older and their brains have matured over the last four years. Without some group that has not been subjected to my course or program to compare them to, there is no compelling reason to assume it’s my course or program that’s causing the changes that are being observed.
If I developed a drug and then tested it myself without a control group, you might be a bit suspicious about my claims that everyone who took it recovered from his head cold after two weeks and thus that my drug is a success. But these are precisely the sorts of claims that we find in assessment.
I suspect that most academics are consciously, or at least unconsciously, aware of these shortcomings and thus uneasy about the way assessment is done. That no one says anything reflects the sort of empty ritual that assessment is. Faculty members just want to keep the assessment office off their backs, the assessment office wants to keep the accreditors at bay and the accreditors need to appease lawmakers, who in turn want to be able to claim that they are holding higher education accountable.
IRBs are not supposed to critique research design unless it affects the safety of human subjects. However, they are supposed to weigh the balance between the risks posed by the study and the benefits of the research. Above all, you should not waste the time or risk the health of human subjects with research that is so poorly designed that it cannot produce meaningful results.
So, acknowledging that assessment is not research and not governed by IRB rules, it still seems that something silly and wasteful is going on here. Why is it acceptable that we spend more and more time and money -- time and money that have real opportunity costs and could be devoted to our students -- on assessment that is so poorly designed that it does not tell us anything meaningful about our courses or students? Whose interests are really served by this? Not students. Not faculty members.
It’s time to stop this charade. If some people want to do real research on what works in the classroom, more power to them. But making every program and every faculty member engage in nonresearch that yields nothing of value is a colossal, frivolous waste of time and money.
Erik Gilbert is a professor of history at Arkansas State University.
So you’ve published a paper on monetary theory, snagged that fellowship for research in Rome or received an award for best teacher of the year. Or maybe you’ve just served on seven committees this past semester, from tenure review to curriculum reform, and colleagues ought to appreciate that.
But they don’t, at least not to your face. And the pathetic “Faculty News” page of your department website just doesn’t cut it. (Take a look at the listing for Professor Dale’s publication in Southwest Annandale Historical Society Notes: doubly out of date, since both the professor and the journal are extinct.)
You want to be known as the foremost expert on forensic linguistics or the one who got a National Endowment for the Humanities grant to study Rilke -- back in 2011, but still. What to do? Think of Berkeley’s famous proposition, applied to academics: “If a paper is published in a journal and no one knows about it, does it make a sound?” Is it OK to toot your own horn? In this era of Facebook, are you kidding?
Consider the humblebrag, a seemingly modest utterance that’s actually a boast. The British have excelled in this charming self-deprecation for centuries: “Oh, I don’t suppose many people were in the running this year,” for instance, to explain why you won the London marathon. Only this is higher education in 2016, with access to Twitter.
Think brassier, think of that academic review coming up in 2017, and think within a 140-character limit:
Gosh, if I don’t send in that manuscript to Oxford by this fall, they’re gonna kill me!
I don’t see how I’m going to get any work done during my fellowship in Belize.
Darned if I know why the Fulbright committee chose my proposal over so many deserving others.
You know, if it weren’t for all the grateful letters that I’ve gotten from students over the years, I’d’ve given up teaching a long time ago.
Never mind all my publications. The Smoot Teaching Award I got this year makes me realize what really matters in life.
I keep thinking there must be some mistake: Why would the Guggenheim committee even consider my work on medieval stairways?
You know, I never set out to write a best seller. Everyone knows what people in academe think of that.
Promotion to full professor isn’t much, I guess, but I try to see it as an affirmation of all I’ve done here.
I don’t anticipate the deanship will give me much power, but I do intend to take the responsibility seriously.
It’s not fashionable to talk about service, I know, which is why I don’t discuss all the behind-the-scenes work I do for the college.
All that work for such a simple title: provost.
I’m sure plenty of people could have delivered the keynote address at this conference, but I’m the one who got suckered into it.
They said I’m the youngest program director they’ve ever had -- must be their code word for inexperienced.
The students in my econ class all say that I’m their favorite teacher, but you know what that means.
As an adjunct, I could just phone in my performance, but I always have to put in 200 percent. Sigh. That’s just me.
That’s what I told Mike -- I mean, the chancellor. No idea why he listens to me. Hey, I’m just custodial staff.
David Galef directs the creative writing program at Montclair State University. His latest book, Brevity: A Flash Fiction Handbook, has just come out from Columbia University Press.
In his autobiography, Benjamin Franklin describes how, as a striving young man in Philadelphia, he practiced a quite literal variety of moral bookkeeping. Having determined 13 virtues he ought to cultivate (temperance, frugality, chastity, etc.), he listed them on a table or grid, with the seven days of the week as its horizontal element. At night, before bed, he would make a mark for each time he had succumbed to a vice that day, in the row for the virtue so compromised.
A dot in the ledger was a blot on his character. Franklin explicitly states that his goal was moral perfection; the 13th virtue on his list was humility, almost as an afterthought. But without claiming to have achieved perfection, Franklin reports that his self-monitoring began to show results. Seeing fewer markings on the page from week to week provided a form of positive reinforcement that made Franklin, as he put it in his late 70s, “a better and a happier man than I otherwise should have been had I not attempted it.”
Franklin’s feedback system was a prototype of the 21st-century phenomenon analyzed by Deborah Lupton in The Quantified Self (Polity), a study of how digital self-tracking is insinuating itself into every nook and cranny of human experience. (The author is a research professor in communication at the University of Canberra in Australia.) A device or application is available now for just about any activity or biological function you can think of (if not, just wait), generating a continuous flow of data. It’s possible to keep track of not only what you eat but where you eat it, at what time and how much ground was covered in walking to and from the restaurant, assuming you did.
In principle, the particulars of your digestive and excretory processes could also be monitored and stored: Lupton mentions “ingestible digital tablets that send wireless signals from inside the body to a patch worn on the arm.” She does not elaborate, but a little follow-up shows that their potential medical value is to provide “an objective measure of medication adherence and physiologic response.” Wearable devices can keep track of alcohol consumption (as revealed by sweat), as well as every exertion and benefit from a fitness routine. Sensor-equipped beds can monitor your sleep patterns and body temperature, not to mention “sounds and thrusting motions” possibly occurring there.
Self-tracking in the digital mode yields data about the individual characterized by harder-edged objectivity than even the most brutally honest self-assessment might allow. For Franklin, the path to self-improvement involved translating the moral evaluation of his own behavior into an externalized, graphic record; it was an experiment with the possibility of increasing personal discipline through enhanced self-awareness. The tools and practices that Lupton discusses -- the examples cited above are just a small selection -- expand upon Franklin’s sense of the self as something to be quantified, controlled and optimized. The important difference lies in how comprehensive and automated the contemporary methods are (many of the apps and devices can run in the background of everyday life, unnoticed most of the time), as well as how much more strongly they imply a technocratic sense of the world.
“The body is represented as a machine,” writes Lupton, “that generates data requiring scientific modes of analysis and contains imperceptible flows and ebbs of data that need to be identified, captured and harnessed so that they may be made visible to the observer.” But not only the body: other forms of self-tracking are available to monitor (and potentially to control) productivity, mood and social interaction. One device, “worn like a brooch … listens to conversations with and around the wearer and lights up when the conversation refers to topics that the user has listed in the associated app.”
Along with the ability to monitor and control various dimensions of an individual’s existence, there is likely to come the expectation or obligation to do so. On this point, Lupton’s use of the idea of self-reflexivity (as developed by the social theorists Zygmunt Bauman, Ulrich Beck and Anthony Giddens) proves more compelling than her somewhat perfunctory and obligatory references to Michel Foucault on “technologies of self” or Christopher Lasch on “the culture of narcissism.” The digitally enhanced, self-monitoring 21st-century citizen must meet the challenge of continuously “seeking information and making choices about one’s life in a context in which traditional patterns and frameworks that once structured the life course have largely dissolved … Because [people] must do so, their life courses have become much more open, but also much more subject to threats and uncertainties,” especially “in a political context of the developed world -- that of neoliberalism -- that champions self-responsibility, the market economy and competition and where the state is increasingly withdrawing from offering economic support to citizens.”
In such a context, high-tech self-tracking can provide access to exact, objective self-knowledge about health, productivity, status (there are apps that keep track of your standing in the world of social media) and so on. Know thyself -- and control thy destiny! Or so it would seem, if not for a host of issues around who has ownership, use or control of the digital clouds that shadow us. Lupton points to a recent case in which lawyers won damages in a personal-injury suit using data from a physical fitness monitor: the victim’s numbers from before and after the accident were concrete testimony to its effect. Conversely, it is not difficult to imagine such data being subpoenaed and used against someone.
The unintended consequences may also take the form of changed social mores: “Illness, emotional distress, lack of happiness or lack of productivity in the workplace come to be represented primarily as failures of self-control or efficiency on the part of individuals, and therefore as requiring greater or more effective individual efforts -- including perhaps self-tracking regimens of increased intensity -- to produce a ‘better self.’” Advanced technology may offer innovative ways to dig ourselves out of the hole, with the usual level of success.
Lupton is not opposed to self-tracking any more than she is a celebrant of it, in the manner of a loopy technovisionary prophet who announces, “Data will become integral with our sensory, biological self. And as we get more and more connected, our feeling of being tied into one body will also fade, as we become data creatures, bodiless, angelized.” (I will avoid naming the source of that quotation and simply express hope that it was meant to be a parody of Timothy Leary.) Instead, The Quantified Self is a careful, evenhanded survey of a trend that is on the cusp of seeming so ubiquitous that we’ll soon forget how utterly specific the problems associated with this aspect of our sci-fi future are to the wealthy countries, and how incomprehensible they must seem to the rest of the planet.