California embraces the completion agenda while foundations play a bigger role

California’s public colleges are partnering more with foundations to achieve completion goals, and while resistance among faculty members remains, the previously rocky relationship appears to have improved.

Interdisciplinary social sciences lab at Northeastern U challenges prevailing norms of lab work


Who says lab work is just for natural scientists? Interdisciplinary social sciences lab at Northeastern U challenges prevailing norms.

Several countries launch campaigns to recruit research talent from U.S. and elsewhere

Britain, Canada, France and Germany all launch funding programs to recruit foreign researchers. Will they succeed in capitalizing on perceptions of the U.S. as a less attractive place for research?

Proposal on indirect costs would put research universities in an impossible situation (essay)

Over the past 20 years, technologies based on university research have launched entire new industries, cured fatal diseases and even put new foods on your grocery store shelves. Since 1996, these technologies have contributed an estimated $1.3 trillion and 4.2 million jobs to the American economy. In 2015, Florida’s state universities spun out 48 start-ups and achieved a multitude of scientific breakthroughs in health, engineering, agriculture and basic sciences.

The partnership between America’s research universities, industry and the federal government is the envy of the world. But a proposal by the federal Office of Management and Budget to severely cut the reimbursement government agencies make to universities for shared research costs threatens to destroy it.

University research expenses are typically divided into two buckets of money. Money in the first bucket pays for the direct costs of the project: salaries for researchers and stipends for graduate assistants, lab equipment and supplies, and travel.

The money in the second bucket funds all the other things researchers need to do their work, like the building itself; electricity; heating, air-conditioning and other utilities; janitorial services; building security; laboratory safety equipment; and information technology. It also pays the salaries of the support staff members who help scientists develop and submit highly technical research proposals, manage millions of dollars in public funds, and comply with a myriad of federal rules and regulations. These highly trained professionals enable the scientists to focus on what they do best, whether it’s finding a cure for diabetes or protecting computer systems from ransomware.

In the business world, these are called overhead costs.

For most universities, such overhead costs amount to about $1 for every $2 spent directly on the research. They are determined not by the universities but by the federal government through a rigorous review process.

But the federal Office of Management and Budget proposes to pay as little as 20 cents for every $2 of research, grossly shortchanging universities. You don’t need an M.B.A. to understand that any business forced to sell its services for less than its costs is doomed. For the universities within Florida’s State University System, the resulting tab would be more than $100 million per year, a cost that would have to be covered with other university revenues. The proposal would place the universities of Florida, and of every other state, in an impossible situation -- either subsidize federally funded research with other university money or quit doing research on next-generation technologies and medical treatments.
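The arithmetic behind that shortfall is easy to check. Here is a minimal sketch using the essay’s 50 percent ($1 per $2) and 10 percent (20 cents per $2) rates; the direct-cost figure below is a hypothetical input chosen for illustration, not a reported number:

```python
def indirect_shortfall(direct_costs, negotiated_rate=0.50, proposed_rate=0.10):
    """Annual reimbursement gap: indirect costs recovered at the
    negotiated rate minus those recovered at the proposed capped rate."""
    return direct_costs * (negotiated_rate - proposed_rate)

# Hypothetical: a university system with $250 million per year in direct
# research costs would be left covering a $100 million gap.
gap = indirect_shortfall(250_000_000)
```

At these rates, every dollar of direct research spending leaves 40 cents of real overhead unreimbursed.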

As the chief research officers for Florida’s 12 state universities, we are committed to recovering the costs for services provided to the federal government at no loss to the institution. We urge our representatives in Washington to make every effort to stave off this action by OMB and the agencies. Otherwise, the research efforts at our universities, a shining beacon of American know-how that has been decades in the making, will be crippled. Faculty members at each of our 12 institutions are working each day to generate new discoveries for the benefit of current and future generations in our state and nation. We hope to continue these efforts for many years to come.

Daniel Flynn, vice president for research, Florida Atlantic University

Andrés Gil, vice president for research and economic development, Florida International University

John Kantner, assistant vice president for research, University of North Florida

Elizabeth Klonoff, vice president for research, University of Central Florida

Timothy Moore, vice president for research, Florida Agricultural & Mechanical University

Pam Northrup, vice president for research and strategic innovation, University of West Florida

David Norton, vice president for research, University of Florida

Gary Ostrander, vice president for research, Florida State University

Lee Ann Rodríguez, director of the office of research, New College of Florida

Paul Sanberg, senior vice president for research, University of South Florida

Jeanne Viviani, director of sponsored programs, Florida Polytechnic University

Tachung Yih, associate vice president for research, Florida Gulf Coast University


Excess credit hour policies increase student debt


A new research paper finds that excess credit hour policies don’t lead to completion, just more student debt.

Introducing a new series on reproducibility of scientific research (essay)

How do we know which scientific results to trust? Research published in peer-reviewed academic journals has typically been considered the gold standard, having been subjected to in-depth scrutiny -- or so we once thought. In recent years, our faith in peer-reviewed research has been shaken by the revelation that many published findings don’t hold up when scholars try to reproduce them. The question of which science to trust no longer seems straightforward.

Concerns about scientific validity and reproducibility have been on the rise since John Ioannidis, a professor at Stanford School of Medicine, published his 2005 article “Why Most Published Research Findings Are False.” Ioannidis pointed to several sources of bias in research, including the pressure to publish positive findings, small sample sizes and selective reporting of results.

In the years since, a wave of scholars has dug deeper into these issues across a number of disciplines. Brian Nosek at the Center for Open Science and Elizabeth Iorns of Science Exchange spearheaded attempts to repeat past studies in their respective fields, psychology and cancer biology, with discouraging results.[1] Economists encountered trouble merely repeating the analyses reported in papers, even when using the original data and code.


By 2016, when Nature surveyed 1,500 scholars, over half expressed the view that there is a significant “reproducibility crisis” in science. This crisis comes at an uncomfortable time, when some skeptical voices question even well-grounded scientific claims such as the effectiveness of vaccines and humans’ role in climate change.

Given this hostility, there’s a concern that reproducibility issues may undermine public confidence in science or lead to diminished funding for research.[2] What is clear is that we need a more nuanced message than “science works” or “science fails.” Scientific progress is real, but it can be hindered by shortcomings that diminish our confidence in some results and that need to be addressed.

There has been plenty of coverage about the reproducibility crisis and its implications (including debate over whether to call it a crisis) in both scientific publications and mainstream outlets like The New York Times, The Atlantic, Slate and FiveThirtyEight. But somewhat less attention has been paid to the question of how to move forward. To help chip away at this question, we’re publishing a series of articles from researchers leading initiatives to improve how academics are trained, how data are shared and reviewed, and how universities shape incentives for better research. After this essay, the rest of the series will be appearing on the “Rethinking Research” blog on Inside Higher Ed.

Why the Shaky Foundation?

The reproducibility problem is an epistemological one, in which reasons for doubt undermine the foundations of knowledge. One source of doubt is the lack of visibility into the nuts and bolts of the research process. The metaphor of “front stage” and “back stage” (borrowed from the sociologist Erving Goffman, who used it in a different context) may be helpful here.

If the front stage is the paper summarizing the results, the back stage holds the details of the methodology, data and statistical code used to calculate those results. All too often, the back stage is known only to the researchers, and other scholars cannot peer behind the curtain to see how the published findings were produced.

Another big issue is the flexibility scholars have in choosing how to understand and analyze their research. It’s often possible to draw many different conclusions from the same data, and the current system rewards novel, positive results. When combined with a lack of transparency, it can be difficult for others to know which results to trust, even if the vast majority of researchers are doing their work in good faith.

As Joseph Simmons, Leif D. Nelson and Uri Simonsohn write in their article on researcher degrees of freedom, “It is common (and accepted practice) for researchers to explore various analytic alternatives, to search for a combination that yields statistical significance, and to then report only what worked … This exploratory behavior is not the by-product of malicious intent, but rather the result of two factors: (a) ambiguity in how best to make these decisions and (b) the researcher’s desire to find a statistically significant result.”
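The inflation Simmons, Nelson and Simonsohn describe is easy to demonstrate. The simulation below is a sketch, not taken from their paper: the group sizes, the ten analytic alternatives and the normal-approximation z-test are all assumptions chosen for illustration. It draws pure noise and shows that trying many analyses per study pushes the false-positive rate far above the nominal 5 percent:

```python
import math
import random
import statistics

random.seed(0)

def noise_pvalue(n=30):
    """Two-sided p-value from a z-test comparing two groups of pure noise."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

trials = 1000
analyses_per_study = 10  # ten "researcher degrees of freedom"

# A study "finds an effect" if ANY of its ten analyses reaches p < .05.
false_positive_rate = sum(
    any(noise_pvalue() < 0.05 for _ in range(analyses_per_study))
    for _ in range(trials)
) / trials
```

With ten independent looks, the chance of at least one p &lt; .05 is roughly 1 - 0.95**10, about 40 percent, even though no real effect exists anywhere in the data.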

Given the potential for biased or flawed research, how can we encourage greater transparency and put the right incentives in place to promote reliable, reproducible research? Three big questions we’ll be looking at in this series are: How are researchers trained? What resources and support do they receive? How do institutions respond to and reward their work?

Training the Next Generation of Researchers

Lack of proper training in research methods and data-management skills can contribute to reproducibility problems. Graduate students are sometimes left on their own in learning how to manage data and statistical code. As they merge data sets, clean data and run analyses, they may not know how to do this work in an organized, reproducible fashion. The back stage can become extremely messy, making it hard to share their materials with others or even double-check their own findings. As the students advance in their professions, they may not have the time (or the right incentives) to develop these skills.
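In practice, “organized and reproducible” can be as simple as scripting every cleaning step rather than doing it by hand, and recording enough metadata that others can verify they have the same inputs. A minimal sketch (the field names and the checksum approach are illustrative, not a standard):

```python
import hashlib
import json

def clean(rows):
    """Scripted cleaning step: drop incomplete records, coerce types."""
    return [{**r, "score": float(r["score"])}
            for r in rows if r.get("score") is not None]

raw = [{"id": 1, "score": "3.5"}, {"id": 2, "score": None},
       {"id": 3, "score": "4.0"}]
data = clean(raw)

# Store a checksum of the cleaned data alongside the results, so others
# can confirm they are analyzing exactly the same inputs.
result = {
    "n": len(data),
    "mean_score": sum(r["score"] for r in data) / len(data),
    "inputs_sha256": hashlib.sha256(
        json.dumps(data, sort_keys=True).encode()).hexdigest(),
}
```

Because every transformation lives in code, anyone holding the raw records can rerun the script and reproduce both the cleaned data and the summary exactly.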

In the 2016 survey conducted by Nature, researchers identified an improved understanding of statistics and better mentoring and supervision as the two most promising strategies for making research more reproducible.

A number of organizations are tackling this issue by offering workshops for graduate students and early-career researchers in how to conduct reproducible research, manage data and code, and track research workflow. Among them are trainings offered by Software Carpentry and Data Carpentry, the Center for Open Science, and the Berkeley Initiative for Transparency in the Social Sciences. There are even courses available online from institutions such as Johns Hopkins.

Resources and Support for Better Science

While proper training is essential, researchers also need resources that support reproducibility and transparency. One critical piece of infrastructure is data repositories, online platforms that make it easier for scholars to organize research materials and make them publicly available in a sustainable, consistent way.

Repositories like Dataverse, Figshare, ICPSR and Open Science Framework provide a place for researchers to share data and code, allowing others to evaluate and reproduce their work. There are also repositories tailored to qualitative research, such as the Qualitative Data Repository.

Universities are also enhancing their services and support for reproducible research practices. For example, the Moore-Sloan Data Science Environments initiative offers resources to support data-driven research at three universities, including software tools and training programs. Dozens of universities also have statistical consulting centers that offer advice to students and researchers on research design and statistical analysis. Some disciplinary associations are also convening groups to develop guidelines and standards for reproducible research.

Creating Incentives for Reproducible Research

Researchers often face career and institutional incentives that do little to encourage reproducibility and transparency, and can even work against those goals at times. Academic achievements like winning grants and earning tenure are linked primarily to publishing numerous papers in highly ranked journals. There’s little professional reward for the time-consuming work of sharing data, investigating and replicating the work of others, or even ensuring one’s own research is reproducible.

Institutions are beginning to shift these incentives through policies and funding that encourage reproducible research and transparency, while reducing some of the flexibility that can allow biases to creep in. Funders such as the Arnold Foundation[3] and the government of the Netherlands have set aside money for scientists to conduct replications of important studies. Some have offered incentives for scientists to pre-register their studies, meaning they commit to a hypothesis, methodology and data analysis plan ahead of data collection.

Increasingly, funding agencies and academic journals are adopting transparency policies that require data sharing, and many journals have endorsed Transparency and Openness Promotion guidelines that serve as standards for improving research reliability.

In another interesting development, some journals have shifted to a model of registered reports, in which an article is accepted based on the research question and method, rather than the results. Recently, Cancer Research UK formed a partnership with the journal Nicotine and Tobacco Research to both fund and publish research based on the registered-reports approach.

All of these initiatives are important, but the path to academic career advancement also needs to shift to reward research activities other than just publishing in prestigious journals. While change on this front has been slow, a few institutions like the University Medical Center Utrecht in the Netherlands have started to expand the criteria used in their tenure and promotion review process.

From Vision to Reality

The driving vision of these initiatives is a system that trains, supports and rewards scientists for research that is transparent and reproducible, resulting in reliable scientific results. To learn how this vision is being put into practice, we’ve partnered with contributors on a series of articles about how they are working to improve research reliability in their fields.

None of these solutions is a silver bullet. Improving research reliability will depend on changes made across many parts of the academic ecosystem and by many actors -- researchers, funders, universities, journals, media and the public. Taking the next steps will require openness to new ways of doing things and an attempt to discern what’s effective for improving research.

In many ways, we’re still in the early stages of realizing that there’s a problem and taking steps to improve. The good news is that there’s an ever-growing segment of the research community, and of the public, that is aware of the need for change and willing to take steps forward.

[1] The hundreds of psychologists who collaborated on the Reproducibility Project in Psychology were only able to reproduce about half the studies they analyzed. The Reproducibility Project in Cancer Biology has encountered similar difficulties reproducing results, though a more recent release of two replications showed a higher rate of success (experiments are ongoing).

[2] This concern was raised by several speakers at a National Academy of Sciences’ conference on reproducibility in March 2017.

[3] This series was supported with a grant from the Arnold Foundation.

Stephanie Wykstra is a freelance writer and consultant with a focus on data sharing and reproducibility. Her writing has appeared in Slate and The Washington Post. Wykstra helped coordinate this article series in partnership with Footnote, an online media company that amplifies the impact of academic research and ideas by showcasing them to a broader audience. The series was supported with a grant from the Arnold Foundation.


The undervaluation of the role of undergraduate research in the advancement of scientific knowledge (essay)

When people discuss undergraduate research, they generally focus entirely on the benefits for students. These experiences are widely recognized to build critical-thinking skills, foster a foundation for the scientific process and create hands-on classroom experiences.

Although those benefits are real, this mind-set undervalues undergraduate research as a catalyst for the advancement of scientific knowledge.

Some in the scientific community have a skeptical view of undergraduate research. They may not doubt the benefits it offers students, but they believe true scientific innovation is best left to the flagships.

Such biases could not be more misguided. For example, a recent study by Michelle Kovarik, an assistant professor at Trinity College, documented 52 articles from primarily undergraduate institutions between 2009 and 2015 that made advances across analytical chemistry, in areas such as spectroscopy, microfluidics and electrochemistry.

And a special issue of Polyhedron last August, edited by Robert LaDuca, Jared Paul and George Christou, presented over 60 articles that were based on undergraduate research and that reported scientific advances throughout inorganic chemistry.

Last year at Bucknell University, we surveyed the h-index, which measures the citations and influence of a scholar’s publications, of chemistry faculty from 22 highly selective undergraduate institutions to determine the impact of their research. We saw that assistant professors commonly had values between five and 15, rising to the high teens and even the 20s for associate and full professors, with a few faculty members higher still. Moreover, those are conservative values, since we based them on a core collection (ISI Web of Science) to ensure reliability. This limited sampling of significant research impacts points to much broader accomplishments by undergraduate institutions.
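For readers unfamiliar with the metric: a scholar’s h-index is the largest h such that h of their papers each have at least h citations. A minimal sketch (the citation counts below are made up for illustration):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

h_index([52, 31, 18, 10, 8, 5, 4, 3])  # 5 for these made-up counts
```

Sorting citation counts in descending order means the index is simply the last rank at which the count still meets or exceeds the rank.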

One of us, George Shields, published a paper on hydrogen bonding in the Journal of Computational Chemistry in 1993 with then Lake Forest College undergraduate Marcus Jurema, which has been cited 174 times to date. His articles in the Journal of the American Chemical Society with Matt Liptak in 2001 and 2002 have been cited hundreds of times each. Both Jurema and Liptak were first authors on these three papers, and Jurema is a practicing physician, while Liptak is a faculty member in chemistry at the University of Vermont. Shields was awarded the American Chemical Society’s Award for Research With Undergraduates in 2015.

Some examples of impactful research that emerged from our study of h-indexes include the following investigators (all with strong, double-digit h-index values):

  • Furman University's Lon Knight was the fourth recipient of the same award from the American Chemical Society, in 1989. He has authored more than 100 peer-reviewed research papers in leading national and international journals, primarily with undergraduates as co-authors.
  • Elizabeth Stemmler at Bowdoin College is an expert in analytical chemistry, and has an active research program investigating negative ions and atmospheric pollutants.
  • Michelle Francl from Bryn Mawr College has an active undergraduate research program in physical, theoretical and computational chemistry.
  • Liliya Yatsunyk at Swarthmore College has a fascinating research area in structural biochemistry, where her undergraduate group is exploring the three-dimensional space occupied by quadruplex DNA and the interaction of G-quadruplex DNA with porphyrins.
  • And Cynthia Selassie leads undergraduates at Pomona College in a medicinal chemistry research program that includes leveraging her expertise in quantitative structure activity relationships.

Perhaps most impressive is that these impacts have been made despite the distinct challenges undergraduate institutions face. In speaking with colleagues over the years, we have heard several common concerns.

For example, some researchers believed a manuscript was more likely to be declined without review by high-profile journals because of their institution. Though speculative, their perception was that the same manuscript, submitted under the name of their former Ph.D. laboratory, would more likely have proceeded to peer review.

Also, researchers at undergraduate institutions have experienced negative feedback on grant applications and manuscripts that was not based on the findings or the data, but rather on the involvement of undergraduates. “This work can’t be done by undergraduates” is heard all too often.

We’ve heard it ourselves. For instance, David Rovnyak has received these comments from a reviewer on a grant: “This proposal provides no indication that participation in research significantly benefited these students.” The reviewer also said that undergraduates weren’t capable of doing the work specified. George Shields once had a reviewer write the shortest review possible -- “Typical undergraduate drivel” -- on a paper that was eventually accepted in the Journal of Physical Chemistry. One of his grant proposals was denied funding; about a year later, he saw the project it described carried out in full and published by a graduate student of one of the grant’s suggested reviewers. We would add that this sort of prejudice appears to be more common for junior faculty at primarily undergraduate institutions; as researchers in the field come to recognize the quality of an established faculty member’s work at such an institution, these problems diminish or disappear.

Valuable Strengths

Nevertheless, undergraduate institutions boast several strengths that others lack. The research model at primarily undergraduate institutions does not include user fees on instruments, since such fees would only limit student use of, and training on, research equipment. Similarly, network and other utilities are provided at those institutions without charge as part of the commitment to training students through involvement in research. And while research universities commit significant internal support to graduate research, at undergraduate institutions all the internal research money goes to the benefit of undergraduates.

Perhaps the most valuable strength is a low-stakes environment for publishing research. While many undergraduate institutions do include research expectations in faculty evaluation, teaching loads and service commitments take priority. Because faculty members conduct research because they want to rather than because they have to, undergraduate institutions tend to value fewer, higher-quality papers over a high rate of publishing.

Meanwhile, high-stakes publishing is coming under increasing scrutiny. Editors of the major medical journals The Lancet and The New England Journal of Medicine have expressed concern that high-stakes research has led to a culture in which a surprising percentage of medical studies are not trustworthy. Further, retraction rates can be higher in leading journals than in lower-tier journals.

Undergraduate institutions also offer the ability to focus on fundamental research at a time when grants and funding are offered with the expectation of specific and highly applied returns on investment. Recently, a Massachusetts Institute of Technology report noted that public funding for basic inquiry has dropped significantly at research institutions, a decline the report’s authors argue has greatly damaged innovation in the United States.

In contrast, undergraduate institutions feel less pressure to create intellectual property, spawn start-up companies or partner with the private sector. Instead, these valued outcomes can and do happen organically, rather than out of obligation.

A great strength of American higher education is the diversity of its institutional types. Undergraduate colleges and universities should not try to replicate all the kinds of research being done in other research settings. As research contributions from undergraduate institutions continue to grow, we notice, in our own research and in observing many of our colleagues, an increasing trend of strong cooperation between researchers at undergraduate institutions and those at Ph.D.-granting institutions and national laboratories, which leverages the respective strengths of these environments.

For instance, Shields and Brooks H. Pate, at the University of Virginia, have established a strong collaboration that has resulted in four publications on small water clusters since 2012, including two in Science. This collaboration, instigated by Pate, is built on a strong foundation of mutual respect, and was only possible because of the previous highly cited work of Shields and his undergraduate researchers.

Similarly, since 2012, Rovnyak has published four papers in close collaboration with researchers at Ph.D.-granting institutions and one with colleagues at a research hospital, where all but one featured contributions of undergraduate researchers. Together, this diversity of institutions is needed to drive scientific discovery.

Interestingly, an additional trend is that a number of faculty members who recently joined primarily undergraduate colleges and universities shared with us that they perceived more freedom to pursue their research endeavors than at other types of research institutions.

It is discouraging, however, that a less-than-friendly competition still exists, marked by biases that tangibly hinder the development of talented faculty members who are building impactful research programs at primarily undergraduate institutions. It is also worth noting that the same environments that value incorporating undergraduates into research propel those students into graduate programs at a higher per capita rate than research universities do, producing the next generation of top scientists in the country.

Instead of a competition, we should value the distinct contributions of both predominantly undergraduate and research institutions to advancing discovery, and how they collectively come together as a cohesive whole in the American higher education system. Undergraduates are more than the scientists of tomorrow. They’re also making an impact today.

David S. Rovnyak is a professor of chemistry at Bucknell University. George C. Shields is provost and a professor of chemistry at Furman University.


Microbiology society cuts back on small conferences


Decision by the American Society for Microbiology to scale back number of small conferences highlights pressures on the economics of scholarly gatherings.

How to write an effective journal article and get it published (essay)

Victoria Reyes breaks down the structure of a well-conceived scholarly piece and provides tips to help you get your research published.


Key principles of open labs (essay)

At their best, our public institutions of higher education have always been public laboratories: sandbox-like spaces that support failure, learning and discovery. As the ideas and ethos of the maker movement become more mainstream, communities and institutions are investing in physical maker spaces. Older-style computer clusters in colleges and universities are being updated into “open labs,” combining the functionality of a basic computer lab with newer high-tech tools and toys. These new physical spaces are infused with the sandbox ethos: they promise to transform students into makers, explorers, risk takers and innovators.

But what makes an open lab open? As public colleges and universities invest precious tuition dollars in these spaces, we wonder if the case for open labs as hubs of innovation has been overstated. While they may be effective marketing tools, helpful on a campus tour, open labs should also let us enhance the quality of education for students, especially at the undergraduate level. If public institutions are going to divert key resources into building and equipping these spaces, they should be guided by their public mission and use these physical spaces to open collaborative and mutually enriching connections among students, the university and the various publics that our universities serve.

As advocates for Open Education, we believe deeply that working open can have important benefits for learners and for the wider community outside of the academy, and we don’t want this movement to be devalued by using the word “open” carelessly in higher education contexts. We want to focus the lens of the open education movement to help leverage administrative support for a vision of open labs that truly enriches the learning landscape. Six key principles could frame the open ethos of an open lab, adapted from the kinds of definitions and philosophies that underscore open education.

Open as in open-ended: generating learner-driven outcomes that evolve with the work. In most university courses, learning outcomes are generated -- often by departmental committee -- well in advance of the course’s start date. What this means is that at least symbolically -- and in many cases practically -- all acceptable end points are prescribed before a single student has entered the room. This has the negative consequence of excluding from the curriculum the value that students and faculty members generate. It fails to engage students as collaborators and contributors of knowledge and to take advantage of resources that could emerge and be made part of the course to provide broader context, fresh analysis and new goals.

In any open lab environment, we should encourage all participants to have a hand in crafting the expected and desired outcomes, and then allow the contributions of the participants to shift and revise those outcomes as the work develops. In many cases, open lab experiences offer opportunities for alternative credit-generating experiences for students, and we, along with our students, should co-develop flexible, open-ended outcomes for our open labs.

Open as in open to the public: using the principles of connected learning to put the academy in conversation with a wider community. Connected learning takes as its starting point the idea that education is a dialogic process, enhanced by networked communication. The flow here is in multiple directions across networks: students contribute work to the knowledge commons; participants in the commons, whether scholars or students from other institutions or stakeholders from outside the academy, can revise and critique that work. It also supports the more traditional flow where scholars and the public can offer ideas that our students can absorb, critique, remix and the like.

An open lab should integrate the critical digital literacy skills students need to participate in these networked communities. Students should build personal learning networks, publish their work to the open web and learn about digital citizenship and about the rewards and challenges of working in public as they undertake open lab projects.

Open as in open access: using open licenses to share data, research, products and processes with the world. Traditionally, the university has been a proprietary knowledge-creation zone focused -- often for good reason -- on protecting its intellectual property. But as researchers and teachers, we have an obligation to share our work. Sharing can benefit students who are being gouged on textbook prices. Sharing also benefits college libraries by allowing them to recover funds spent on the skyrocketing costs of databases and subscriptions as more journals convert to open access. And it benefits a public that is often required to support university research with tax dollars yet must buy back access to the results because they are published in closed, paywalled journals. Some closed journals seek to further monopolize research and publishing for their own enrichment through actions like patenting the online peer-review process.

Open labs should make open licensing a priority and focus on being active advocates for the open ecosystem, including the use and support of open-source software, open data, open educational resources and open-access publishing models. For example, an open lab project team might openly license and publish on GitHub the source code and documentation for their software research project.

Open as in open 24-7: rethinking delivery systems for education. No lab -- open or otherwise -- needs to be physically open all the time in order to thrive. But seat-time measures and credit counting have limited many traditional universities’ ability to offer different kinds of learning experiences. Requirements that faculty members teach a certain number of credits, that students sit in chairs for a certain number of hours, and reductive either/or characterizations of courses as online versus face-to-face end up creating structures into which all learning must fit.

The open lab should offer structural flexibility to faculty members and students who have ideas about how to learn and work that may not conform to the traditional structures the institution currently enables. That may include non-credit and alternative-credit experiences, inventive workarounds for block schedules, and more hybridized schedules driven by the needs of the participants and projects. An open lab has a distinct opportunity to support the tenets of project-based learning by providing a physical third space and tool set with which to build learning experiences not bound by seat time or semesters.

Open as in open for business: building a sustainable economic system for education. Open labs can provide a point of partnership and collaboration among universities, their students and faculty members, and corporations and industry. As economic pressures on universities mount, such partnerships can provide additional revenue for the institution and opportunities for students.

But for public universities in particular, it is imperative that corporate interest not define the shape of higher education at the expense of students or scholarship. As colleges struggle to market themselves as relevant to families who desperately need a well-paying job to follow years of expensive tuition bills, we have seen some universities set competencies in response to immediate workplace needs -- which, in turn, can help students secure jobs upon graduation for which they have effectively been trained. That can appear to be a win-win, except that it doesn’t necessarily prepare students to help shape the economic system they are entering, nor does it encourage a curriculum that would let them evolve as the needs of the company evolve. It also guarantees companies a perpetual supply of entry-level employees, which weakens their incentive to retain workers over time.

In other words, as we use open labs to partner with businesses and private donors, we should think about long-term economic sustainability from the perspective of students -- not just that of the institution or partner companies. For public universities, that means thinking about funding and revenues in the context of public support for higher education -- not just in terms of patents and products. It also means thinking about partnerships in the context of the long-range sustainability of public universities and their graduates -- not just short-term job placement. And it means considering how open labs work for the public -- not just how they can plug crisis-level funding gaps for universities or manufacture custom-trained graduates for entry-level jobs. Identifying the benefits of working partnerships between universities and external stakeholders based on the power of the relationship rather than the monetary value of the product will help institutions make the case for continuing, consistent public support for higher education.

Open as in open arms: thinking critically about our own terms, their limits and the challenges of working openly and inclusively. Each of these principles is fraught with promises that open can’t keep. Open labs are typically walled off inside the very institutional structures that ironically profess to free them. But this tension is part of what animates open. In our opinion, open provisionally agrees to work within the oppressive structures of institutions in order to refigure those structures into an architecture for the public commons.

That being said, we must work to open a space that is at its core critical of its own promises. We must be willing to do the work of identifying how exclusion, gatekeeping, prejudice and violence close down even the most well-intentioned open spaces. In the face of racism, sexism, xenophobia and homophobia, the democratizing rhetoric of open spaces risks alienating those people who experience such sites of “freedom” as essentially fraudulent. And while working open can drive down some costs for students, 3-D printers and fancy glass-walled rooms with rolling furniture contribute to rising bottom-line tuition costs that further disenfranchise the large number of students who struggle to afford higher education.

Above all else, open labs should work to be honest about how power and privilege operate in institutions of learning and how they are replicated, challenged and sometimes exacerbated by universities’ efforts to innovate. For open labs to be truly welcoming, they need to be open about the limits of their promises and the realities that vulnerable learners in the academy -- and in society -- face today.

Who are the stakeholders who govern decision making around open labs? That is a question fraught with the tensions that surround much higher education “innovation” right now. But to preserve the pedagogical possibility of the word “open,” we should encourage earnest conversations around the mission and methods of these emerging spaces, and integrate those conversations into whatever protocols exist for defining and branding them.

Here are some guiding questions for the collaborative group of faculty members, administrators, students and community advocates or users who represent the stakeholders of open labs:

  • How will the group encourage revision and development of goals as the work emerges?
  • How will the group connect its work to larger relevant scholarly and public conversations outside the room?
  • Is the group familiar with open licensing and actively working to make its work shareable for others to build on?
  • Is the group pressing the institution to adjust or develop institutional structures that support emerging ways of working?
  • Is the group considering how funding sources and revenue streams related to the work can sustain the institution’s learners in the longer term in a way that supports academic freedom and inquiry?
  • Is the group asking critical questions about the challenges and barriers that threaten the inclusion, safety or well-being of the full range of possible participants in the work?

Does your college or university have open labs? If so, do they engage with any of these questions? What thoughts do you have about the “open” in open lab?

Robin DeRosa is director of interdisciplinary studies at Plymouth State University. Dan Blickensderfer is senior curriculum and assessment developer at College for America, Southern New Hampshire University.
