James Boyle: Why (public) information wants to be free

The United States has much to learn from Europe about information policy. The scattered US approach to data privacy, for example, produces random islands of privacy protection in a sea of potential vulnerability. Until recently, your video rental records were better protected than your medical records. Europe, by contrast, has tried to establish a holistic framework: a much more effective approach. But there are places where the lessons should run the other way. Take publicly generated data, the huge and hugely important flow of information produced by government-funded activities - from ordnance survey maps and weather data, to state-produced texts, traffic studies and scientific information. How is this flow of information distributed? The norm turns out to be very different in the US and in Europe.

On one side of the Atlantic, state-produced data flows are frequently viewed as potential revenue sources. They are copyrighted or protected by database rights. The departments which produce the data often attempt to make a profit from user fees, or at least to recover their entire operating costs. It is heresy to suggest that the taxpayer has already paid for the production of this data and should not have to do so again. The other side of the Atlantic practises a benign form of information socialism. By law, any text produced by the central government is free from copyright and passes immediately into the public domain. Unoriginal compilations of fact - public or private - may not be owned. As for government data, the basic norm is that it should be available at the cost of reproduction alone. It is easy to guess which is which. Surely the United States is the profit- and property-obsessed realm, and Europe the place where the state takes pride in providing data as a public service? No, actually it is the other way around.

Take weather data. The United States makes complete weather data available to anyone at the cost of reproduction. If the superb government websites and data feeds aren't enough, for the price of a box of blank DVDs you can have the entire history of weather records across the continental US. European countries, by contrast, typically claim government copyright over weather data and often require the payment of substantial fees. Which approach is better? If I had to suggest one article on this subject it would be the magisterial study by Peter Weiss called "Borders in Cyberspace," published by the National Academies of Science. Weiss suggests that the US approach generates far more social wealth. True, the information is initially provided for free, but a thriving private weather industry has sprung up which takes the publicly funded data as its raw material and then adds value to it. The US weather-risk management industry, for example, is ten times bigger than the European one, employing more people, producing more valuable products and generating more social wealth. Another study estimates that Europe invests €9.5bn in weather data and gets approximately €68bn back in economic value - in everything from more efficient farming and construction decisions to better holiday planning - a 7-fold multiplier. The United States, by contrast, invests twice as much - €19bn - but gets back a return of €750bn, a 39-fold multiplier. Other studies suggest similar patterns in areas ranging from geo-spatial data to traffic patterns and agriculture. "Free" information flow is better at priming the pump of economic activity.

Some readers may not thrill to this way of looking at things because it smacks of private corporations getting a "free ride" on the public purse - social wealth be damned. But the benefits of open data policies go further. Every year the monsoon season kills hundreds and causes massive property damage in South East Asia. This year, one set of monsoon rains alone killed 660 people in India and left 4.5 million homeless. Researchers seeking to predict the monsoon sought complete weather records from the US and from Europe so as to generate a model based on global weather patterns. The US data was easily and cheaply available at the cost of reproduction. The researchers could not afford the price asked by the European weather services, which precluded the "ensemble" analysis they had hoped to do. Weiss asks rhetorically: "What is the economic and social harm to over 1 billion people from hampered research?" In the wake of the outpouring of sympathy for the tsunami victims in the same region, this example seems somehow even more tragic. Will the pattern be repeated with seismographic, cartographic and satellite data? One hopes not.

The European attitude may be changing. Competition policy has already been a powerful force pushing countries to rethink their attitudes to government data. The European Directive on the re-use of public sector information takes strides in the right direction, as do several national initiatives. Unfortunately, though, most of these follow a disappointing pattern. An initially strong draft is watered down, and the utterly crucial question of whether data must be provided at the marginal cost of reproduction is fudged or avoided. This is a shame. I have argued in these pages for evidence-based information policy. I claimed in my last column that Europe's database laws have failed that test. Sadly, so far, its treatment of public sector data has failed it too. Is there a single explanation for these errors? That will be a subject I take up in columns to come.

The writer is the William Neal Reynolds Professor of Law at Duke Law School, a board member of Creative Commons and co-founder of the Center for the Study of the Public Domain.


Richard A. Epstein: Should all public information be free?

Richard Epstein

James Boyle’s informative column on databases is right to point out the advantages of the free flow of basic information collected by government sources. But it is also critical to understand that the implicit trade-offs behind this calculation apply not only to data but to all forms of intellectual property, which can be either privately owned or placed in the public domain.

First, I think that it is wise to avoid the implicit, if striking, anthropomorphism of Boyle’s title “Why (public) information wants to be free”. The question here is how human beings should treat information in order to maximise its social value. The question is never how information “treats” itself.

It is also, I think, important to remember that a regime of public domain information is not a form of "socialism", benign or otherwise. Socialism champions the collective ownership of the means of production, which might describe European control over its data. The public domain connotes no collective control over information or anything else. Each person can use what he or she will, no questions asked.

The hard question is: should information created by the government be put into the public domain? One argument in favour of this approach is that allowing reproduction at cost ensures greater dissemination of the information. The argument against it, which Boyle does not address, is that the taxes needed to fund the collection of the information impose a burden on other sectors of the economy. The classical "marginal cost controversy" - should we price critical goods at marginal cost? - swirled around how to sort out these conflicting forces.

In this case, I think that Boyle has made the right call. One reason not to price at marginal cost is that it makes it difficult to decide whether it was worthwhile to invest in the production of the public resource in the first place. If there are no tolls on a $1m bridge, how do we know it was worth at least $1m to its free users? But whatever the situation with bridges and hard infrastructure, Boyle's numbers suggest that this is not a real issue here.

In other contexts, however, the public domain solution may be more difficult to defend. For example, the Bayh-Dole Act of 1980 consciously encourages universities and their inventors to patent inventions developed with government support. The theory behind the legislation is that inventions left in the public domain will languish for want of a champion to commercialise them.

Twenty-five years later it is still hard to tell whether Bayh-Dole made the right call. But no matter how that decision comes out, the case for putting information in the public domain seems a lot stronger. The great value of data lies in its use in other commercial endeavours. Open access allows individual firms to collate the data in ways that might command a premium, while leaving the raw material accessible to others.

That's the approach taken with the human genome, and it seems to have worked there. It is nice to know that the United States has done something right. Let's hope that the European Union sees the light on this one.

The writer is the James Parker Hall Distinguished Service Professor of Law at the University of Chicago and the Peter and Kirsten Bedford Senior Fellow at the Hoover Institution.

Copyright The Financial Times Limited 2024. All rights reserved.