The scientific publishing industry as we know it today represents a structure of the past. It is profoundly tied to the medium of print, which is itself an artifact of a technical revolution hundreds of years old. Moreover, its routines and structures are rooted in paper as a communications and archiving technology, and its business models are based on the costs of physical distribution and review by a select few.
Of course, paper-based systems aren’t very adaptable to the world of the internet. The outdated—but wildly profitable—business models based on the print medium have led to skyrocketing costs for scholarly journals over the past 30 years, costs that greatly outpace increases in cost of living. This is at odds with the rapidly decreasing costs of digital publication and distribution. Even worse, the paper-based status quo relies on strictly enforced barriers to public access that prevent the rapid dissemination of vital knowledge.
The changes wrought by digital networks in other content industries, from music to cinema to journalism, are coming to the scientific publishing industry as well. Libraries are canceling subscriptions; funders—especially taxpayers—are moving to ensure access to the knowledge produced by their investment; and new business models are emerging to challenge the industry. For scientific publishing, the days of securing profit margins through punitive pricing and aggressively enforced digital-rights management are numbered.
But science publishing isn’t just an industry. It’s also the core factory for knowledge transfer in the world. It has been since 1665, when the first journal was published as Philosophical Transactions of the Royal Society. And we need to think about the knowledge first—that’s why these were called philosophical transactions, not economic ones.
Scientific knowledge became an economic transaction thanks to print. At the time, printing was the simplest, fastest, and most widely distributable way to take a piece of knowledge out of one brain and transfer it to another. The core elements of the scientific-knowledge “unit” were established quickly: author, date, journal issue number, pages. Peer review soon followed, and technologies for knowledge transfer via the print medium underwent their own series of revolutions and advances.
Printing was expensive, though, which meant that the process was also an early knowledge-compression algorithm. Scientific publishing was not just about being right; because each word cost money to copy and distribute, it was about being right in the smallest number of words possible. Someone had to pay for it—over time, typically a library.
So since the advent of scientific publishing, we’ve connected the idea of “knowledge” to the reality of “words printed on paper.” We have married the knowledge of science to the containers of science—journal articles, textbooks, and so on. Access was enabled through library visits to touch the physical artifact of knowledge, to read the words. The very fact that we call a scientific-knowledge unit a “paper” is a powerful illustration of how deeply the idea of knowledge is tied to the medium.
Yet today we are escaping many of the constraints of analog containers for culture. The album simply isn’t the native “unit” for music in the digital world; now the unit is the song. The end user no longer has to passively consume content. Now we’re encouraged to rip, mix, and burn. After all, as the early ads for iTunes promised us, it’s our music.
There’s arguably even greater benefit to digital knowledge transfer in science than to digital cultural-object transfer. Paper-based knowledge-compression systems can yield to new systems in which narrative text is unconstrained by page lengths; in which protocols, side points, “failed” approaches, software, and data can be published; and in which research materials can be linked inline for easy ordering. We are creating systems in which the mantra of “rip, mix, and burn” applies not to music but to knowledge.
After all, it’s our knowledge. We need, in short, to think beyond the idea of knowledge as paper. We need to think about its consumption, not just its production. Yet the shift to digital hasn’t brought us digital library access on the web, but rather the opposite. It’s harder and harder to read the knowledge without a username and password.
The end result is that, compared with the realm of culture, it’s hard to be an innovator for scientific knowledge. There are too many hoops to jump through, from access to copyright law to economics. And this is why, nearly 20 years after the invention of the web by a scientist, the web functions far better for culture than for science.
But the markets have a way of correcting themselves, even against the inertia of an old, powerful industry. Those who pay for science—especially the taxpayers—are starting to understand that science in a digital age requires thinking not of research as a finite process that ends with a “paper” but as a perpetual process that begins with thousands of bits of information, some of which might be in narrative form, others in data sets, still others embedded into research tools and engineered materials, all scattered across the network and linked into a common infrastructure framework.
The market is beginning to understand that guaranteeing some rights to the user of knowledge makes good sense, scientifically and economically. The web did not demand that Google seek legal and economic permission before indexing it. Nor did Facebook’s founders have to petition for access to the internet in order to test their idea of a social network. Granting some rights to users in advance is part and parcel of fomenting innovation and entrepreneurship on the network. As it is for computers, networks, and documents, so it should be for knowledge itself.
We see this understanding in myriad policies stating that those who fund the research have a right to read it, to remix it, to integrate it with other research. The United States National Institutes of Health is only one of many funders now realizing this benefit. And universities are waking up to their responsibility not only to create knowledge, but also to preserve its accessibility through their libraries and digital repositories.
We also see this understanding in the emergence of new “born digital” publishing businesses for scientific knowledge. Publishers like Hindawi, BioMed Central, and the Public Library of Science use Creative Commons copyright licenses to grant their users the rights to make and distribute copies and to remake and remix the knowledge, reserving only the requirement of attribution.
This market-driven change toward access is a fundamental change from today’s publishing industry. It’s taking us into a world where the network flows with data, ontologies, and machine-readable knowledge, not just blogs and videos and music. It’s taking us into a world where the publication of research serves as a distributed commons of knowledge, as the beginning of millions of research cycles, not one where a short set of “pages” represents the end of a research investment.
This is how we maximize our societal investment in science: by making sure it can be read, understood, and used by the network culture. And in many ways, it’s a return to the way things started in scientific publishing, as an innovative reaction to the disruption of printing presses. We’re going back to philosophical transactions of knowledge.
John Wilbanks is vice president of science at Creative Commons.
Originally published January 28, 2011