Ian Ayres, professor of law at Yale University, argues in his new book, Super Crunchers (John Murray, £16.99), that statistical analysis can reveal the secret levers of causation, but that is only part of the story: "super crunching" is revolutionising the way we all make decisions. You can read an extract of the book here.

Can anything be predicted? Can one man using a stack of data predict the quality of a Bordeaux vintage better than connoisseurs working in the field for decades? Can the quality of a movie script or a relationship be reduced to mere numbers? Mr Ayres answers FT.com readers’ questions.


The tools and methods for serious statistical analysis have been available to business for several years. The medical research community has been running randomised trials for hundreds of years, yet in the commercial world this is still considered early-adopter or radical thinking. Can you offer any explanation for the slow adoption of statistical analysis? Do you think the existing tools and techniques facilitate mass adoption? If not, what areas need to be addressed, in your opinion?
Tony Harper, London

Ian Ayres: Excellent question. I think the timing of the rise of statistical analysis (both regressions and randomised trials) is dominantly driven by the revolutions in capturing, combining and storing digital data. There has been sufficient computer speed to run the statistical analysis for more than a decade.

But the rise of the internet (which has made randomised trials virtually free) and the ability to mash up data have greatly facilitated the Super Crunching revolution. It’s hard to remember, but most organisations in the early 90s still had their data in paper form, and even digital data was often stranded in proprietary silos that were incompatible with other data held online by the same company.


I agree that today’s experts are aided by unbelievable levels of computing power but surely there are limitations to expert knowledge and models are all imperfect? It is still humans who collect, input, and set the parameters for the analysis of that data. Human errors will always occur.
Liz Fox, Derbyshire, UK

Ian Ayres: You are right. We definitely collect and input the data. Indeed, some of the data that is collected is the subjective judgement of humans. Radiologists might, for example, code the type of some cellular abnormality. And as you say, humans crucially set the parameters. We decide what questions to ask. And most important, we come up with the hunches of what causes what - the hypotheses to be tested. Errors in prediction will occur with either Super Crunching or traditional experts. But studies have found time and again that the errors tend to decrease with statistical prediction. The more complicated the process, the worse human predictions fare relative to Super Crunching.

We tend to defer to experiential experts when a process is really complicated. But when there are more than, say, five causal factors, human experts tend to do a very bad job of assigning the correct weights to what causes what.
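
To make that concrete, here is a minimal sketch, in Python with synthetic data, of how a regression lets the data rather than intuition assign the weight of each factor. The six factors and their weights below are invented for illustration, not taken from the book.

```python
# A minimal sketch (synthetic data): let ordinary least squares assign the
# weight of each of six hypothetical causal factors simultaneously.
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 6))                            # six hypothetical predictors
true_weights = np.array([0.8, -0.3, 0.0, 1.2, 0.1, -0.6])
y = X @ true_weights + rng.normal(scale=0.5, size=n)   # outcome with noise

X1 = np.column_stack([np.ones(n), X])                  # add an intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)          # fit by least squares
print("intercept:", round(coef[0], 3))
print("estimated weights:", np.round(coef[1:], 3))
```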


The airline example points out the tenuous connection between the outcome of data mining and its actual implementation through the human interface (in this case, the airline clerks who need to act upon this information). To what extent are companies able to execute on these outcomes in real time contexts? I suspect the research would prove that humans rarely seize on these opportunities on a consistent basis.
Anon

Ian Ayres: One of the recurring themes of Super Crunching is the current trend to restrict the discretion of line employees. Once Super Crunching demonstrates that a script or a set procedure is the best way to do something, organisations tend to require routinised employee compliance. This may not be the best way to promote vibrant and imaginative minds.

And some employees resist following the script. But increasingly there is a tension between organisational efficacy and employee freedom. If you want to reduce hospital infections, having physicians and nurses follow hand-washing checklists reduces fatalities. If you want to teach reading, having your teachers follow a script produces proven benefits. Of course there are extraordinary teachers, but if you are trying to run an organisation of 1,000 line employees, Super Crunched scripts almost always do better than discretionary systems.

Part of your question suggests that you are also worried about the ability to get the necessary information to the employee in time for him or her to use it. But in the age of Super Crunching, near-instantaneous information is at the ready. This is not just airline information on past flight delays. When you call your credit card company, before the operator picks up the phone, his or her screen flashes up all kinds of information about your past (payments, etc) and predictions about which other products you’re most likely to value.


I very much enjoyed your book - I picked it up after reading your FT weekend article. Are you familiar with any "super crunchers" examining hedge fund managers, or more specifically, commodity trading advisors? There are numerous data collectors (Barclays, HFR, Lipper, HedgeFund.net) who post trading results. While performance is the most important variable, along with performance metric variables (for example, one of your favourites, standard deviation), it strikes me that other aspects can and should be crunched.
Ed Donnellan, US

Ian Ayres: Thanks for reading the book. There are, of course, many quants doing statistical analysis inside hedge funds and other investment firms, and there are truly remarkable academics like Eugene Fama and Robert Shiller who have been crunching numbers on publicly available stock prices. (I myself have recently entered this arena. Barry Nalebuff and I have crunched numbers on stock returns going back to 1871 and have developed a far superior life-cycle strategy that allows people to diversify stock market risk across time.)

But let me use your question to also say a few words about the recent failure of Super Crunching in the financial markets. The predictions of the quants still tend to be better than those of most experiential experts. The big problem is that some firms have over-relied on the quant predictions. When you take a leveraged bet on a prediction, there is often little wriggle room for error.

In my book, I say that one of the coolest things about statistical prediction is that the statistical procedure predicts and simultaneously tells you the precision of the prediction. If there isn’t much data, the statistical procedure will still make a prediction but it will simultaneously tell you that it is not very confident in the prediction. The problem is that we can’t always take the estimate of precision at face value. Sometimes the estimates of precision are overstated. The quant hedge funds have found this out the hard way.
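
As an illustration of that point, here is a rough Python sketch (assuming the statsmodels library, with made-up data) in which the regression both predicts and reports the interval around its prediction; note how much wider the interval is for a case outside the range of the data.

```python
# A rough illustration (synthetic data): a regression predicts and, at the
# same time, states how precise that prediction is.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=40)                  # deliberately small sample
y = 2.0 * x + rng.normal(scale=3.0, size=40)

model = sm.OLS(y, sm.add_constant(x)).fit()
new_x = sm.add_constant(np.array([5.0, 15.0]), has_constant="add")
pred = model.get_prediction(new_x).summary_frame(alpha=0.05)

# 'mean' is the point prediction; the interval columns are the stated precision.
# The interval at x = 15 (outside the observed data) is far wider.
print(pred[["mean", "obs_ci_lower", "obs_ci_upper"]])
```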

One way forward is to have multiple sources of Super Crunching (and this is suggested by your question). Super Crunching is too important to leave to any one cruncher. Firms should have Super Crunching audits and multiple approaches to crunching to see whether the initial predictions are "robust" to alternative assumptions and alternative data.
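
A minimal sketch of that kind of robustness check, again in Python with invented data: make the same prediction under alternative specifications and data samples, and see whether the answers agree.

```python
# A sketch of the "multiple crunchers" idea (all data and specifications
# here are invented): re-run one prediction under alternative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n = 300
X = rng.normal(size=(n, 4))
y = 1.5 * X[:, 0] - 0.7 * X[:, 1] + rng.normal(scale=1.0, size=n)

def ols_predict(X_train, y_train, X_new):
    X1 = np.column_stack([np.ones(len(X_train)), X_train])
    coef, *_ = np.linalg.lstsq(X1, y_train, rcond=None)
    return np.column_stack([np.ones(len(X_new)), X_new]) @ coef

case = X[:1]                                   # one case to be predicted
full = ols_predict(X, y, case)                 # all four predictors
lean = ols_predict(X[:, :2], y, case[:, :2])   # a leaner specification
half = ols_predict(X[:150], y[:150], case)     # a different data sample

print("predictions under three alternatives:", full, lean, half)
# Large disagreement warns that the headline estimate is fragile.
```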


Could you comment on what you think about web analytics and interactive marketing? Have you looked at any of the web analytics software/services? What do you think about the maths behind the statistics behind web analytics?
Leslie Friedrich, Houston, Texas

Ian Ayres: I’m generally a big fan of web analytics - particularly the ability to cheaply run randomised tests on the internet. Companies like Offermatica can set up your server so that people clicking on a link are randomly shown different web pages. It’s then easy to keep track of which page image produces the most click-throughs and the most sales.

It is fairly standard to lift a web page’s productivity by 10 to 20 per cent through randomised tests of different pictures, different promotions, different warranties. Graphic designers tend to come down from on high with new layouts for newspapers. But at least on the web, newspapers should start testing to see which graphic interfaces work best (and I mean a controlled randomised test). I bet the FT is not doing this yet - and it should start.
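
For readers curious what such a test boils down to, here is a small Python sketch with made-up traffic numbers: visitors are randomly split between two page designs and the click-through rates are compared with a standard two-proportion test.

```python
# A minimal sketch of a randomised web test (all counts are made up):
# compare the click-through rates of two page designs.
from math import sqrt
from statistics import NormalDist

clicks_a, visitors_a = 412, 10_000    # existing page
clicks_b, visitors_b = 489, 10_000    # candidate redesign

p_a, p_b = clicks_a / visitors_a, clicks_b / visitors_b
pooled = (clicks_a + clicks_b) / (visitors_a + visitors_b)
se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))    # two-sided test

print(f"A: {p_a:.2%}  B: {p_b:.2%}  lift: {(p_b - p_a) / p_a:.1%}")
print(f"z = {z:.2f}, p-value = {p_value:.3f}")
```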


Do marketing strategies which follow the prescriptions of data mining and RCT schemes run the risk of missing out on the very best prospects who sit on the margin of the profile predicted by data mining? My best clients seldom fit a mould.
John Goodwin, Kansas, US

Ian Ayres: Data mining historical data can risk "missing out" on prospects that just don’t show up in the database. But that’s where randomised testing comes in. Randomised controlled trials (RCTs) can proactively create information about prospects that doesn’t exist in traditional databases.

For example, a historical database can’t tell you about the applicants that you rejected. But firms could randomly accept some of their rejects and test whether they are missing out on the diamonds in the rough.
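
A hypothetical sketch of that experiment in Python - the 5 per cent audit rate, the score cutoff and the function names are all assumptions made for illustration:

```python
# A sketch of "randomly accept some rejects": flag a small random share of
# otherwise-rejected applicants so their outcomes can later be compared.
import random

random.seed(42)
AUDIT_RATE = 0.05   # assumed fraction of rejects to accept anyway

def decide(applicant_score, cutoff=0.6):
    """Return the decision plus a flag marking experimental acceptances."""
    if applicant_score >= cutoff:
        return "accept", False
    if random.random() < AUDIT_RATE:
        return "accept", True          # a possible diamond in the rough
    return "reject", False

decisions = [decide(random.random()) for _ in range(1_000)]
audited = sum(1 for _, flag in decisions if flag)
print(f"{audited} would-be rejects accepted for the experiment")
```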

Also, we need to keep in mind that this is really a horse race with human decision-makers, who can be even more error-prone. We overestimate our ability to figure out who the best prospects are.


I haven’t read your book. Could you expand a little on why you think “super crunching” is revolutionising the way we all make decisions?
Kyle Cohen, US

Ian Ayres: The revolution in capturing and storing digital information has allowed decision-makers access to huge databases - hundreds of times larger than the text in the Library of Congress. Super crunching relates not so much to new techniques as to size and speed. In field after field - business, government, sports, medicine, education - super crunching is changing the way that decisions are made.

Statistical data mining and randomised tests make it easier to uncover the hidden levers of behaviour. Decision-makers can make better choices when they have better predictions of what causes what.


Do you think that technological progress in computer software and hardware - such as new computer modelling techniques based on sophisticated algorithms, quantum computing and quantum random number generators - combined with a deeper understanding of finance, economics and the behavioural sciences, will create a new statistical analysis platform that can advance our capabilities to forecast the functional behaviour of complex systems? Do you see a growing need for new government regulations and laws to govern the operational environment in this case?
Viktor O. Ledenyov, Ukraine

Ian Ayres: Up to now, most of the revolution has been driven by technology, not techniques. But there are some possible exceptions. For example, I talk in the book about "neural network" prediction techniques that have been shown at times to be pretty effective. What’s really interesting is that "neural networks" are a leading example of techniques that are being driven not by statisticians but by artificial intelligence types. It’s still an open question whether neural predictions or other innovations in techniques will ultimately beat out traditional regressions. (Econometricians are uncharacteristically silent and defensive when this subject comes up.) If I were betting, I wouldn’t think that chaos and complex systems theory is likely to produce stable, robustly accurate predictions (but the great thing about Super Crunching is that you can statistically test the relative accuracy of alternative predictions themselves!).
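
That horse race can itself be crunched. Below is a sketch, assuming the scikit-learn library and using synthetic data, of scoring a neural network and a plain regression on the same held-out data; it illustrates the comparison, not a claim about which method wins in general.

```python
# A sketch (synthetic data, scikit-learn assumed): cross-validate a neural
# network against a plain linear regression on the same prediction task.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(1_000, 5))
y = X[:, 0] * X[:, 1] + np.sin(X[:, 2]) + rng.normal(scale=0.3, size=1_000)

models = [
    ("linear regression", LinearRegression()),
    ("neural network", MLPRegressor(hidden_layer_sizes=(32, 32),
                                    max_iter=2_000, random_state=0)),
]
for name, model in models:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean out-of-sample R^2 = {scores.mean():.3f}")
```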

There is a possibility for new government regulations. Traditionally, government has only required sellers to disclose information about themselves and their products. But in today’s super crunching world, sellers often can make better predictions about consumers’ future behaviour than the consumers can make themselves. Blockbuster knows more about the probability that I will return a movie late; Hertz knows more about how much gas I’ll leave in the tank; Visa knows more about how many months a year I’ll pay my bill late; Verizon knows more about how many minutes I’ll leave unused on my cell. It might be appropriate to have a new type of regulation where sellers disclose to individual consumers the information they hold about them.


The examples above are extreme, but don’t you think that statistical analysis could simply be used today to answer millions of simple questions for which guessing and gut feel are the most common answers?
Roger Haddad, Paris, France

Ian Ayres: Yes. In fact, I’ve created and gathered together about 30 prediction tools that will let you predict all kinds of things (your child’s height, your due date, how long your marriage will last). If you have a favourite regression or know of a prediction site, please let me know and I’ll program it up and add a link to this page. I’m with you. Statistical prediction is really something that we can all profit from...
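
The simplest version of the child’s-height predictor, for instance, is the textbook mid-parental height rule; the sketch below shows that formula, though it is not necessarily the regression used on the page mentioned above.

```python
# The standard mid-parental height rule of thumb (a textbook formula, not
# necessarily the one behind the author's prediction tool).
def predicted_adult_height_cm(mother_cm: float, father_cm: float, is_boy: bool) -> float:
    """Parents' average height, shifted 6.5 cm up for boys, down for girls."""
    midparental = (mother_cm + father_cm) / 2
    return midparental + 6.5 if is_boy else midparental - 6.5

print(predicted_adult_height_cm(165, 180, is_boy=True))   # about 179 cm
```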
