The recent case of scientific fraud by Dutch social psychologist Diederik Stapel recalls the 2010 case at Harvard University against Marc Hauser, a well-respected researcher in human and animal cognition. In both cases, the focus was on access to and irregularities in the handling of data. Stapel retained full control of the raw data, never allowing his students or colleagues access to data files. In Hauser's case, the scientific misconduct investigation found missing data files and unsupported scientific inference at the center of the accusations against him. Outright data fraud by Stapel and sloppy data management and inappropriate data use by Hauser underscore the critical role data transparency plays in preventing scientific misconduct.
Recent developments at the National Science Foundation (and earlier this decade at the National Institutes of Health) suggest a solution — data-sharing requirements for all grant-funded projects and by all scientific journals. Such a requirement could prevent this type of fraud by quickly opening up research data to scrutiny by a wider community of scientists.
Stapel’s case is an extreme example, and one made more likely in disciplines with weak imperatives for data sharing and secondary data use. The research traditions of psychology suggest that collecting your own data is the only sound scientific practice. This tradition, less widely shared in other social sciences, encourages researchers to protect data from outsiders. The potential for abuse is clear.
According to published reports about Hauser, there were three instances in which the original data used in published articles could not be found. While Hauser repeated two of those experiments and produced data that supported his papers, his poor handling of data cast a significant shadow of uncertainty and suspicion over his work.
Hauser’s behavior is rare, but not unheard of. In 2008, the latest year for which data are available, the Office of Research Integrity at the U.S. Department of Health and Human Services reported 17 closed institutional cases that included data falsification or fabrication. These cases all involved federally funded research and centered on the manipulation or misrepresentation of research data rather than other breaches of scientific ethics or institutional oversight.
In both Hauser's and Stapel's cases, graduate students were the first to alert authorities to irregularities. Rather than relying on other members of a researcher’s lab to come forward (an action that requires a great deal of personal and professional courage), the new data sharing requirements at NSF and NIH have the potential to introduce long-term cultural changes in the conduct of science that may reduce the likelihood of misconduct based on data fabrication or falsification. The requirements were given teeth at NSF by the inclusion of new data management plans in the scored portion of the grant application.
Since 2003, NIH has required all projects requesting more than $500,000 per year to include a data-sharing plan, and in January 2011 the NSF announced that it would require all grant proposals to include data management plans. The NSF has an opportunity to reshape scientists' behavior by ensuring that the data-management plans are part of the peer review process and are evaluated for scientific merit. Peer review is essential for data-management plans for two reasons. First and foremost, it creates an incentive for scientists to actually share data. The NIH initiatives offered the carrot for data sharing; the NSF provides the stick. The second reason is that the plans will reflect the traditions, rules, and constraints of the relevant scientific fields.
Past attempts to force scientists to share data have met with substantial resistance because the legislation did not acknowledge the substantial differences in the structure, use, and nature of data across the social, behavioral and natural sciences, or the costs of preparing data. Data sharing legislation has often been code for "We don’t like your results," or political cover on highly controversial issues such as global warming or the health effects of secondhand smoke. The peer review process, on the other hand, forces consistent standards for data sharing, which are now largely absent, and allows scientists to build and judge those standards. "Witch hunts" disguised as data sharing would disappear.
The intent of the data sharing initiatives at NIH and, more recently, at NSF has very little to do with controlling or policing scientific misconduct. These initiatives are meant both to advance science more rapidly and to make the funding of science more efficient. Nevertheless, there is a very real side benefit of explicit data sharing requirements: reducing the incidence of true fraud and the likelihood that data errors would be misinterpreted as fraud.
The requirement to make one’s data available in a timely and accessible manner will change incentives and behavior. First, of course, if the data sets are made available in a timely manner to researchers outside the immediate research team, other scientists can begin to scrutinize and replicate findings immediately. A community of scientists is the best police force one can possibly imagine. Second, those who contemplate fraud will be faced with the prospect of having to create and share fraudulent data as well as fraudulent findings.
As scientists, it is often easier for us to imagine where we want to go than how to get there. Proponents of data sharing are often viewed as naïve scientific idealists, yet data sharing seems an efficient and elegant solution to the many ongoing struggles to maintain the scientific infrastructure and the public’s trust in federally funded research. Every case of scientific fraud, particularly on controversial issues such as the biological source of morality (part of Hauser’s research) or the sources of racial prejudice (in the case of Stapel), allows those suspicious of science and of governments’ commitment to funding science to build a case in the public arena. Advances in technology have given the scientific community the opportunity to share data in a broad and scientifically valid manner, and in a way that would effectively counter those critics.
NIH and NSF have led the way toward more open access to scientific data. It is now imperative that other grant funding agencies and scientific journals redouble their own efforts to force data, the raw materials of science, into the light of day well before problems arise.
Felicia B. LeClere is a principal research scientist in the Public Health Department of NORC at the University of Chicago, where she works as research coordinator on multiple projects, including the National Immunization Survey and the National Children's Study.
As the CEO of a tech start-up and a former professor, here’s what keeps me awake at night: half of college students pursuing degrees in science, technology, engineering and math end up dropping those courses and switching to another major. That is disturbing, not only because I am personally passionate about STEM innovators’ potential to improve lives, but also because it is no secret that we are in dire need of a STEM-proficient work force. If we continue at this rate of attrition, in the next decade, America will need approximately a million more STEM professionals than the field will produce. While we’re pumping much-needed investments into ensuring more K-12 students have access to worthwhile math and computer science education, these investments will mean very little if students abandon STEM once they get to college.
If these skills are so critical, why are students failing to complete STEM degrees? And what can we do to reverse the trend?
In recent years, we’ve gained a better understanding of why students drop STEM majors. Many leave the field early -- even during the first courses they take as undergraduates -- because they’re struggling to get good grades in comparison to their performance in non-STEM courses. Some students who struggle the most are discouraged to the point of dropping out of college altogether, which is a devastating outcome for students who once hoped to be computer programmers, doctors and engineers.
The other largest driver of STEM attrition is a lack of engagement with the material. There is a mismatch between today’s students, who understand and interact with the world through technology, and the outdated, two-dimensional delivery of information found in too many STEM courses. This is a shame, given that STEM subjects are inherently engaging, interactive and rooted in exploration.
In classrooms around the world, instructors are tapping into the potential of new technologies to address this learning deficit, and interactive learning models are proving most effective at increasing student engagement and boosting student performance. In STEM programs in particular, these new technologies have been grafted to the established curriculum as one way to improve student retention rates, and the results are promising. Studies show improved student performance in these courses -- more A’s and B’s, fewer D’s and F’s -- with particularly significant gains for the lowest-performing students.
Interactive learning tools using web-based technology, such as digital textbooks and homework assignments, present endless possibilities to improve student engagement and achievement. And we’re not talking about digital copies of static text but rather materials that are alive with animation, graphics and instant-feedback question sets that emphasize learning through action. Such tools work because they disrupt the classic passive learning model and invite the student to become the doer.
Students taking these courses demonstrate not only improved results but also a greater desire to learn. In fact, most report a preference for interactive learning tools and choose to spend twice as much time with interactive textbooks as with traditional textbooks, even though there is less text. Students are staying on track and moving on with a deeper understanding of the content.
When I taught at the University of California, Davis, many of my colleagues faced the same issue: traditional textbooks and teaching resources are simply not as effective as we need them to be, leaving even the most talented instructors equipped with inadequate tools. Embracing web-based resources allows us to show movement, cause and effect, and coding outcomes much better than a PowerPoint, chalkboard or old-fashioned textbook ever could. And without the costs of printing and physical distribution, web-based interactive tools address yet another barrier to student retention -- the burden of soaring textbook prices -- head-on.
This is a pivotal moment in developing the STEM work force. We are witnessing a generation of students with inherent talent and capacity give up before they’ve even begun. If we don’t focus our efforts on supporting greater numbers of students to succeed in STEM degrees, we may find ourselves navigating a STEM shortage more stark than the gap we see today. Fortunately, instructors are keenly aware of the challenge and are cultivating the necessary ingenuity to steer this generation back to STEM and to success.
Smita Bakshi is the co-founder and CEO of zyBooks digital interactive textbooks and a former electrical and computer engineering professor at the University of California, Davis.
When the Supreme Court handed down its decision in Fisher v. University of Texas in July, university admissions officers cheered the affirmation of including race and ethnicity as admissions criteria when narrowly tailored to the institution’s mission. Despite the positive decision for affirmative action, however, university leaders are facing another challenge: making sure they have the right diversity practices in place to support the students they admit. Colleges and universities still have plenty of work to do to encourage students to pursue high-needs fields, like STEM and the biomedical sciences, where diversity is urgently needed.
In addition, universities continue to struggle with faculty diversity, which studies have shown is important not just for excellence in teaching and research but also for the overall campus climate. All the more reason, then, for us to redouble our efforts in researching and sharing effective practices for improving campus diversity -- and identifying ineffective practices that we should stop.
We’ve got a great base to start from. Take the many initiatives designed to ensure the success of underrepresented students -- programs intended precisely to keep us from losing them on their way to graduate school and the biomedical research work force. These efforts develop student talent along the educational and career continuum in biomedical and STEM fields and support student persistence and success. Most important, some of these programs have developed successful models and gathered evaluative research to understand their success.
For example, the Meyerhoff Scholars Program at the University of Maryland Baltimore County has been widely recognized for its successful development of many underrepresented students in the sciences. An evaluation of the program found that the key levers of success were financial support, identity formation as a member of the community of Meyerhoff Scholars, summer research activities and professional network development.
Another example is the Fisk-Vanderbilt Master’s-to-Ph.D. Bridge Program, which aims to address the barriers underrepresented students face in matriculating to doctoral programs. The program has produced a number of high-profile graduates, including Fabienne Bastien, the first African-American woman to be published in Nature and the first African-American recipient of the NASA Hubble Fellowship. Half of the program’s Ph.D. graduates are female, and 83 percent are from underrepresented minority groups.
What would yet more research on these and other programs tell us about how to support the success of all students? We need more empirical evidence to close gaps in the existing research. We also need to bring exemplary practices to scale more quickly at many more institutions. For example, based on gaps in existing research we need to:
Identify effective interventions that universities can implement to reduce stereotype threat, a phenomenon that occurs when members of a disadvantaged group perform poorly when made aware of negative stereotypes about their group;
Learn more about how underrepresented students in STEM are accessing high-impact practices, such as internships and undergraduate research, and develop strategies for increasing participation; and
Identify effective teaching and learning methods that will boost underrepresented undergraduate student performance in required gateway courses.
These three areas, ripe for action, also demonstrate the gaps in the evidence. For example, high-impact practices are supported by a robust body of research, but less is known about how well underrepresented students are accessing these experiences. This is because most high-impact practices occur beyond the classroom, and it is difficult to track students’ participation and tie their experiences to academic outcomes.
In other cases, different interventions have been tested at the institutional level but have not been evaluated across institutions or in different contexts, such as adapting undergraduate interventions for graduate students. It’s a complex problem, and the research needs to get at that complexity.
Working together, the Association of Public and Land-grant Universities, its Coalition of Urban-Serving Universities, and the Association of American Medical Colleges have gathered the existing evidence in a recent report that also identifies what’s missing and where we need to go next.
To address these gaps in research, we will need more partners in government, industry, philanthropy and academe to take action -- testing the available models, researching new options, reporting on their results and revising approaches based on the evidence in hand.
Improving evidence for pilot interventions will help leaders build a case for adoption of those shown to be effective at many institutions. Learning more about potential barriers to access will help university leaders improve pathways into these experiences and track student outcomes more effectively.
And at a more basic level, probing more deeply into what works and what doesn’t in our efforts to support diversity will help us with a much more fundamental problem: we’ll get a clearer picture of the “systemic unfairness” that our blind spots prevent us from seeing, as Lisa Burrell pointed out in her Harvard Business Review article “We Just Can’t Handle Diversity.” More precise research will help us avoid such phenomena as hindsight bias, which, as Burrell describes, “causes us to believe that random events are predictable and to manufacture explanations for the inevitability of our achievements.”
In its decision in the Fisher case, the Supreme Court justices called on universities to “engage in constant deliberation and continued reflection” about how diversity is achieved. We go one step further: higher education institutions and their partners need to research as well as reflect, demonstrate as well as deliberate, and put a fine point on existing findings to close the gaps in the research. Only then can we counter the challenges to our efforts to diversify the biomedical research work force and ensure that we’re doing everything we can to support the success of all students.
Jennifer C. Danek is the senior director for Urban Universities for HEALTH, a collaborative effort of the Association of Public and Land-grant Universities/Coalition of Urban Serving Universities and the Association of American Medical Colleges. Marc Nivet is the former chief diversity officer for the Association of American Medical Colleges.