The Case for an Institutionally Owned Knowledge Infrastructure

The many bottlenecks that the commercial monopoly on research information has imposed are stimulating new strategies, write James W. Weis, Amy Brand and Joi Ito.

January 7, 2020
 
 

Science and technology are propelled forward by the sharing of knowledge. Yet despite their vital importance in today’s innovation-driven economy, our knowledge infrastructures have failed to scale with today’s rapid pace of research and discovery.

For example, academic journals, the dominant dissemination platforms of scientific knowledge, have not been able to take advantage of the linking, transparency, dynamic communication and decentralized authority and review that the internet enables. Many other knowledge-driven sectors, from journalism to law, suffer from a similar bottleneck -- caused not by a lack of technological capacity, but rather by an inability to design and implement efficient, open and trustworthy mechanisms of information dissemination.

Fortunately, growing dissatisfaction with current knowledge-sharing infrastructures has led to a more nuanced understanding of the requisite features that such platforms must provide. With such an understanding, higher education institutions around the world can begin to recapture the control and increase the utility of the knowledge they produce.

When the World Wide Web emerged in the 1990s, an era of robust scholarship based on open sharing of scientific advancements appeared inevitable. The internet -- initially a research network -- promised a democratization of science, universal access to the academic literature and a new form of open publishing that supported the discovery and reuse of knowledge artifacts on a global scale. Unfortunately, however, that promise was never realized. Universities, researchers and funding agencies, for the most part, failed to organize and secure the investment needed to build scalable knowledge infrastructures, and publishing corporations moved in to solidify their position as the purveyors of knowledge.

In the subsequent decade, such publishers have consolidated their hold. By controlling the most prestigious journals, they have been able to charge for access -- extracting billions of dollars in subscription fees while barring much of the world from the academic literature. Indeed, some of the world’s wealthiest academic institutions are no longer able or willing to pay the subscription costs required.

Further, by controlling many of the most prestigious journals, publishers have also been able to position themselves between the creation and consumption of research, and so wield enormous power over peer review and metrics of scientific impact. Thus, they are able to significantly influence academic reputation, hirings, promotions, career progressions and, ultimately, the direction of science itself.

But signs suggest that the bright future envisioned in the early days of the internet is still within reach. Increasing awareness of, and dissatisfaction with, the many bottlenecks that the commercial monopoly on research information has imposed are stimulating new strategies for developing the future’s knowledge infrastructures. One of the most promising is the shift toward infrastructures created and supported by academic institutions, the original creators of the information being shared, and nonprofit consortia like the Collaborative Knowledge Foundation and the Center for Open Science.

Those infrastructures should fully exploit the technological capabilities of the World Wide Web to accelerate discovery, encourage more research support and better structure and transmit knowledge. By aligning academic incentives with socially beneficial outcomes, such a system could enrich the public while also amplifying the technological and societal impact of investment in research and innovation.

We’ve outlined below the three areas in which a shift to academically owned platforms would yield the highest impact.

Truly Open Access

The movement to online and digital media has allowed the deconstruction of the previous academic publication process into its component parts: peer review, copyediting and design. The open-access movement, which aims to make scholarly literature freely available online, began as a response to that potential. Initially focused on self-archiving, or Green OA, researchers began making their results easily and freely accessible by uploading prepublication manuscripts to university-based institutional repositories and services. The repository movement began gaining steam in earnest when Harvard University established America’s first self-archiving policy in 2008. Other research universities around the world quickly followed.

But open access and institutional repositories never realized their potential to transform research communication. Not only did investment fall short of the funds needed to support the development of scalable platforms, but commercial publishers also successfully circumvented the movement. They revised licenses to block or delay self-archiving, created pay-to-publish, or “Gold OA,” journals and launched analytics and research workflow services. That clever divide-and-conquer strategy successfully stymied collaboration on open-access academic infrastructure development.

One possible response would be for institutions to pressure publishers to lower fees. So far, confidentiality agreements and other legal efforts have successfully blocked such collective bargaining, although there are signs that is changing. Plan S, for example, an open-access initiative supported by a coalition of roughly a dozen leading European research funders responsible for billions of dollars in research funding each year, was launched in 2018 and will go live in 2020.

But as long as publishers control the underlying infrastructure, including the key journals, they can always simply extract fees elsewhere or monetize other parts of the research pipeline. Making matters worse, the emergence of predatory journals -- which have little to no quality control or peer review -- has further undermined the open-access movement.

One way to lower publishing costs is to unbundle publisher services to accurately reflect value-added work and set charges at universally accessible levels. In some ways, this solution could resemble the transition from online publishing to blogging. Before blogging platforms, large software companies charged millions of dollars for content management systems, which are still used in complicated professional settings. But it turned out that free open-source software, and open standards to interoperate between services, encouraged the creation of simple and extremely low-cost publishing platforms -- which, in turn, led to the emergence of user-generated content and what has now become social media.

While academic publishing is much more complicated, an overhaul of the software, protocols, processes and business underlying academic publishing could revolutionize it both financially and structurally -- allowing sustainable, universal, open-access publishing without paywalls.

Meaningful Impact Metrics

While academic research should ideally be judged on its individual merits, the current paradigm relies heavily on the prestige of the journal in which the research was accepted as a heuristic for importance. Because a handful of commercial entities control such “highly impactful” journals, they are able to reinforce this journal-based status quo. A consequence of this system is the often-referenced impact factor of a journal, which is supposed to indicate the impact or quality of the research that a journal accepts for publication.
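The impact factor itself reduces to a simple ratio. A minimal sketch of the standard two-year formula, with hypothetical numbers (real calculations depend on which citation index is used and which items count as citable):

```python
# Illustrative only: the standard two-year journal impact factor.
def two_year_impact_factor(citations, citable_items):
    """Citations received this year to items the journal published in the
    previous two years, divided by the number of citable items published
    in those two years."""
    return citations / citable_items

# Hypothetical journal: 1,200 citations in 2019 to articles published
# in 2017-18, which together comprised 400 citable items.
print(two_year_impact_factor(1200, 400))  # → 3.0
```

A single scalar like this necessarily averages over a journal's entire output, which is precisely why it is such a blunt proxy for the quality of any individual article.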

Unfortunately, the impact factor is known to often be a poor proxy for research quality, and it can also be easily gamed by “citation cartels,” coercive self-citations and other well-documented strategies. Yet despite that, it has a significant, and self-fulfilling, impact on the hiring and promotion of researchers. The committee members making such decisions often evaluate a candidate based on the prestige of the journal in which their research has been published, so young researchers on the tenure path are generally forced to prioritize publishing in journals with high impact factors, faulty as the metric is. As a result, the corporate grip on our knowledge infrastructure strengthens, and important work ends up behind paywalls and largely inaccessible to anyone outside a major university or research laboratory.

Trustworthy Peer Review

Publishers currently control the process of accepting or rejecting a new finding for publication. An anonymous panel of peers reviews most articles, and some journals require double-blind review in an attempt to combat bias.

This process is broken in many ways. For one thing, many papers are already posted on preprint and archive servers, so in the case of double-blind review, it is easy for reviewers to find the authors of a paper on the internet. That not only obviates double-blind review but also serves to reinforce tribal biases and affiliations.

Further, evidence suggests that reviewers are not always able to consistently and accurately judge the quality of new ideas, and they can discount their value. Also, it is not clear that reviewers are sufficiently incentivized to pay attention. While conducting peer review was traditionally viewed as part of one’s academic obligations, busy researchers are increasingly unwilling to devote significant time to reviewing research for publishers.

Perhaps we can improve peer review, as with other aspects of publishing, by taking inspiration from technological developments outside the traditional academic publishing domain. In fact, social media and blogs are already a force in many scientific disciplines, with open, unsolicited reviews often appearing within hours of publication. Constructing schemas that provide academic credit to reviewers, such as the CRediT taxonomy, is one promising way of incentivizing review and thus scaling such alternative yet valuable sources of postpublication review.
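To make such credit countable, a review contribution would need a machine-readable form. A minimal, hypothetical sketch of what such a record might look like, loosely inspired by CRediT-style role tagging (all field names and values here are illustrative assumptions, not any existing standard):

```python
# Hypothetical review-credit record; fields are illustrative, not a spec.
review_credit = {
    "reviewer_orcid": "0000-0002-1825-0097",   # ORCID's published example ID
    "work_doi": "10.1234/example.0001",        # hypothetical DOI
    "contribution": "post-publication review",
    "reviewed_at": "2020-01-07T12:00:00Z",
}

# Aggregating such records would let hiring and promotion committees
# count credited reviews alongside publications:
records = [review_credit]
review_count = sum(1 for r in records
                   if r["contribution"] == "post-publication review")
print(review_count)  # → 1
```

The design point is that credit becomes portable: any platform that emits records in a shared schema can feed the same reputation ledger.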

Given the millions of active graduate students, postdocs and other regular consumers of academic literature, creating a system that rewards people for seeking out, finding and betting some reputation on unverified new works -- similar to a sports scout or early-stage venture capitalist -- seems promising. And the concept could be extended: for example, resources could be added to incentivize reviews of traditionally overlooked research to combat biases in the current system.
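The scouting analogy above can be made concrete as a stake-and-settle rule. A purely illustrative sketch, in which the payoff ratio is an arbitrary assumption chosen only for demonstration:

```python
# Illustrative reputation-staking rule: a reviewer stakes reputation on an
# unverified work; if the work is later validated, the stake pays off,
# otherwise it is forfeited. The payoff_ratio of 2.0 is an arbitrary
# assumption, not a proposal for real parameter values.
def settle_stake(reputation, stake, validated, payoff_ratio=2.0):
    if not 0 < stake <= reputation:
        raise ValueError("stake must be positive and no more than held reputation")
    return reputation + stake * (payoff_ratio - 1) if validated else reputation - stake

print(settle_stake(100.0, 10.0, validated=True))   # early, correct bet: → 110.0
print(settle_stake(100.0, 10.0, validated=False))  # wrong bet: → 90.0
```

Because the stake is bounded by held reputation, repeated careless endorsements are self-limiting, while early correct calls compound, mirroring the scout's incentive to find undervalued work first.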

If we can indeed make the work of peer review more about looking for and rewarding new and novel ideas -- instead of a system that reinforces the tribal networks and biases of academe -- we can substantially improve the progress of scholarship, while making it more equitable and available to the world at large.

Toward New Knowledge Infrastructures

Successfully revamping the current ecosystem is probably not possible with a handful of highly profitable commercial entities in control. For such a paradigm shift to occur, universities must assert some ownership over the mechanisms of knowledge production and sharing.

That can be achieved through partnerships between mission-aligned knowledge-producing organizations, such as university publishing houses and research laboratories. Such partnerships can build on existing resources, brand recognition, trust and networks of talent and capital to facilitate the incubation of new knowledge infrastructures and related projects. Further, the new organizations formed by such partnerships can work together to create inter-organizational consortia -- through which information could be exchanged and the most successful incubated projects and frameworks could organically grow. This model is both general and scalable, and it could be replicated extensively.

For example, we at the MIT Press and the MIT Media Lab have recently launched a collaboration called the Knowledge Futures Group with a focus on developing and deploying next-generation technologies. By serving as an incubator for publication-related projects, the collaboration aims to: 1) support projects that enrich the knowledge infrastructure and 2) spark a movement toward greater institutional investment in, and ownership of, that infrastructure.

For example, the group is developing a new, open-source publishing platform called PubPub, which uses a simple graphical format and supports programmatic illustrations and text as well as static PDFs. The goal of the PubPub project is to create an author-driven alternative to academic journals that is tuned to the dynamic nature of many of our modern experiments and discoveries.

We are also developing Underlay, a global, distributed method of linking and understanding public knowledge that will make the data and content hosted on PubPub available to other platforms. These platforms can be used to experiment with transparent, less biased peer review.

Additionally, the Knowledge Futures Group is supporting the development of new platforms for the calculation and sharing of more rigorous metrics of scientific impact. By combining such metrics with machine learning, we can gain insight into the trends and features that lead to impactful ideas. Such predictions can also help construct quantitative, data-driven frameworks for the allocation of resources to research projects in a way that maximizes impact, and we are exploring opportunities to pilot these new funding mechanisms in the real world.

In conclusion, if we in higher education are to realize the transformative promise of the web for science and scholarship, the control of knowledge infrastructure needs to transition from a commercial oligopoly to academically owned and managed partnerships. For that to occur, universities must continue to assert greater authority over systems for knowledge representation, dissemination and preservation. That will require not only building new open-source tools and protocols but also constructing new platforms for peer review, attribution and impact tracking that actively reward novel and high-quality ideas.

Through the construction of such partnerships, we can leverage the continually growing ecosystem of open-source tools to develop, test and deploy new, open, transparent and cost-effective systems and processes that will help researchers and organizations. That will enable a shift toward greater institutional and public ownership of the platforms underlying the dissemination of knowledge -- and the recapturing of the territory lost to publishers and commercial technology providers in the past decades.

What constitutes knowledge, the use of knowledge and the funding of knowledge is integrally intertwined with the future of our planet and our species. We must actively protect it from purely market-driven incentives and other corrupting forces. The transformation will require a movement involving a global network of collaborators, and we hope to contribute to catalyzing it.

Bio

James W. Weis is a doctoral candidate in the MIT Media Lab and an affiliate of the MIT Knowledge Futures Group. Amy Brand is director of the MIT Press and co-founder of the MIT Knowledge Futures Group. Joi Ito is a Distinguished Researcher at Keio University and, at the time of this piece’s submission and acceptance in May, was a professor of the practice at MIT.
