The US National Institutes of Health invests $28bn annually in research. European spending is lower, although the European Union is closing the gap, helped by a shortsighted American policy of “flatlining” many scientific research budgets. (Perhaps this is because scientists have an annoying tendency to discuss things that conflict with the Bush administration’s view of reality, such as global warming, evolution, the insufficiency of the New Orleans levee system and the medical potential of stem cells.)

Economists on both sides of the Atlantic strongly agree that spending on scientific research has a measurable impact on economic growth. The moral case for health research is even clearer.

So much for the input side of research. What about the output? After all, paying for research is not enough. We have to get it to the scientists who might use it, and in an increasingly interdisciplinary world it is hard to predict beforehand who they will be. In the case of health research, patients are also looking for information – trying to find out whether the latest research shows that oestrogen therapy increases breast cancer risks, or anti-inflammatory drugs the risk of heart disease.

The outputs of scientific research come in a variety of forms, but the most important is an article in a peer-reviewed scientific journal. While some journals, such as those produced by the Public Library of Science, are “open access” – available in full for free online – most are not.

They can be extremely expensive. The cost of journals has dramatically outpaced both the rate of inflation and the cost of monographs over the past 15 years. These journals may be available online – but they are behind firewalls, available only on payment of a fee.

It is easy to be shocked at some of the excesses – a journal subscription costing more than $20,000, or the $150 per student that the Journal of Nanoscience and Nanotechnology demands for the right to photocopy a single page. Or one can be indignant that the public sometimes pays once to have the research done, again for the salary of the scientist who peer reviews it and then a third time to read the results.

These are natural reactions. But they miss a more fundamental point: our failure to apply what we learned from the world wide web to the world of scientific research.

The genius of the web is that it is an open network. Anyone can link to any part of this page, or that article, and anyone else can link to that link. That web of interconnections, cross-citations and linkages is then captured by search engines. We gain not only the knowledge in the content, but the knowledge supplied by those who read the content, who make connections the original author could not.

It is this second layer of knowledge that assesses the first layer and makes searching it possible – something that scientists should understand. Peer review and citation play the same roles.

However, it is this second layer that a world of firewalled scientific knowledge will never develop, even if a line or two of the contents can be glimpsed from Google’s search page.

This is no Voltairean call to strangle the last commercial publisher with the entrails of the last journal rep. Commercial journal publishers and learned societies play a valuable role in the assessment and dissemination of scientific knowledge – though we might wish that the availability of worldwide, free distribution had not caused their prices to rise quite so sharply.

Copyright, too, has a legitimate part to play, including maintaining the utterly crucial role of attribution. Thus I do not support the proposal that all articles based on state-funded research must pass immediately into the public domain. But there are more modest proposals that deserve our attention.

Pending legislation in the US balances the interests of commercial publishers and the public by requiring that, a year after its publication, NIH-funded research must be available, online, in full. Similar suggestions have been made in Europe, though the debate still concentrates too much on making accessible something that can be read by the human eyeball, rather than something that can be mined by computers.

But even these proposals, limited to state-funded research outputs, have attracted the ire of commercial publishers. Their objections to open access proposals have been debunked succinctly by, among others, Tim Berners-Lee, the inventor of the world wide web.

There are many ironies here. Even if the modest open access proposal reaches President George W. Bush’s desk, he will probably veto it, because the same bill also contains funding for stem cell research. The battle will then begin again.

The greatest irony, though, is this. The world wide web was designed in a scientific laboratory to facilitate access to scientific knowledge. In every other area of life – commerce, social networking, pornography – it has been a smashing success. But in the world of science itself? With the virtues of an open web all around us, we have proceeded to build an endless set of walled gardens, something that looks a lot like CompuServe or Minitel and very little like a world wide web for science.

The writer is William Neal Reynolds professor of law at Duke Law School, www.law.duke.edu/cspd, and a co-founder of Science Commons, www.sciencecommons.org
