Lessons in privacy from Sony’s data theft

Companies should be explicit about what data are collected and kept, write Vasant Dhar and Arun Sundararajan

The Sony PlayStation data breach, in which hackers stole personal information about more than 100m gamers, reflects a failure of management more than of security technology. The latest estimated loss of $171m seems conservative: it captures only direct costs to Sony, and predates the company’s admission of three subsequent attacks, in Canada, Thailand and Indonesia. The full extent of the damage from these breaches, and their reputational consequences, will likely unfold over time.

Sony has since designated a “chief information security officer” to handle corporate data protection, a path that competitor Nintendo may follow in light of the data breaches it admitted this week. However, this may not take the solution high enough into the executive suite. Chief executives today must treat data governance as seriously as financial reporting or brand management. The potential impact of a data privacy breach on a franchise is comparable to that of a product recall caused by defects in industrial products. Managing consumers’ data and privacy is an executive matter of the highest priority, one made more critical by the continual increase of data risk in cyberspace. Keeping data more secure by adding encryption, firewalls and other safeguards addresses only part of the challenge. It is equally important to formulate a clear data governance policy that guides decisions about what information to acquire, keep, use and share.

Companies should be explicit about what data are collected and kept. It may be that they are simply storing too much. A company that has decided what data to acquire needs to consider both the returns from their use and the associated risks. In making these risk-return trade-offs for consumer data – a complex asset for which no clear valuation models or intellectual property rights exist – companies should align data use with the intent that the consumer had when the data were provided.

When internet users type in Google queries, tag Facebook images, make calls or send photographs using an iPhone, or give their banks email addresses, they have a specific intent associated with the data transferred. Upon capturing these data, companies need to understand what the consumers intended, then use the data in a manner consistent with that intent and with the implicit “data rights” the user has granted them. The larger the gap between the consumers’ intent and the firm’s use, the greater the risk the firm assumes. For example, if Apple ever chooses, or is forced, to share mobility or traffic data captured by its iPhones, this will be an unrecoverable misalignment, since the device users had absolutely no intention of transferring ownership of these details. Is the risk to Apple of keeping these data worth it?

Recent privacy stories involving Epsilon, TomTom and Google amongst others make it clear that “intent-based” data governance is subtle – but essential. This is all the more complicated because every electronic interaction involves some manner of data exchange between devices. In treating this kind of unconscious data transfer responsibly, we should look to the corresponding actions that a firm and a consumer might take if an analogous transfer had occurred in the non-electronic world, where accepted norms exist. These norms can go a long way towards clarifying user intent.

More regulation is not the answer. While helpful in aligning consumer interests with corporate data use, regulating information privacy cannot replace vigilance at the firm level. Advances in technology will always outpace government intervention. Nevertheless, directives that require firms to distinguish between consumer data they are explicitly given and data acquired as a by-product of commerce, integration or aggregation are to be welcomed.

Will “information markets” provide a solution? Possibly, but asserting property rights and notional ownership over exchanged data is a theoretical ideal that has onerous transaction costs. Perhaps a market in the future might lower these costs sufficiently. Until then, intent-based governance provides a more pragmatic basis for managing security and privacy, one that we hope chief executives with foresight will integrate into their information strategy as they struggle to stay afloat in this risky new ocean of digital data.

The writers are professors at NYU’s Stern School of Business

Copyright The Financial Times Limited 2017. All rights reserved.