The first Amazon review of my new book showed superb taste: it gave me the top rating of five stars. Naturally, my ego is tickled by the fact that the reviewer – his name is Alistair Kelman, and he’s introduced himself to me online and attended a couple of my talks – so wisely divined the book’s boundless excellence. But I am also an economist, and so interested in what this might do to my sales figures – and, therefore, my income.
We know that online reviews make a difference, but this isn’t straightforward to establish. An excellent book (ahem) might win both a large readership and positive reviews; the correlation alone cannot prove that the reviews boosted the sales figures. But two economists, Judith Chevalier of Yale and Dina Mayzlin of the University of Southern California, noticed that different websites host different reviews. Comparing sales ranks and reviews on Amazon.com and its rival BN.com, Chevalier and Mayzlin concluded that reviews had a substantial impact on sales – with negative reviews being taken particularly seriously.
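The logic of that cross-site comparison can be caricatured in a few lines of Python. This is my own sketch, not the economists’ method in detail, and every number below is invented for illustration: if a glowing review appears on one site but not its rival, the difference between the two sites’ sales changes nets out anything that moved sales everywhere.

```python
# Toy difference-in-differences in the spirit of (not from) the study.
# All sales figures are made up for illustration.
site_a = {"before": 100, "after": 140}  # site where the good review appears
site_b = {"before": 100, "after": 110}  # rival site, no review: captures common trends

# Subtracting the rival's change strips out publicity, seasonality and
# anything else that lifted sales on both sites at once.
effect = (site_a["after"] - site_a["before"]) - (site_b["after"] - site_b["before"])
print(effect)
```

The point of the subtraction is that a book’s sales rise and fall everywhere for all sorts of reasons; only the review differs between the sites.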
And yet any mainstream book will accumulate several reviews – perhaps dozens. Doesn’t that suggest that Mr Kelman’s undoubted discernment is almost irrelevant to my book’s prospects? Perhaps not: an initial positive review may encourage others to be positive too – or stir up some disagreement. This, too, seems hard to figure out. One good review will often be followed by other good reviews. Is this because the reviewers are influencing each other, or because they all see the same quality in the book? Is everyone reading Fifty Shades of Grey because it approaches the platonic ideal of soft porn? Or because, well, everyone’s reading Fifty Shades of Grey?
The best way to answer such questions is with controlled experiments. A few years ago the sociologist Duncan Watts, along with Matthew Salganik and Peter Dodds, set up an internet music site and used it to figure out how much people were influenced by one another’s musical tastes. Some 14,000 teenagers listened to 48 new songs, which they could rate and download if they wished.
Watts and his colleagues split the music fans at random into eight “worlds”. Some “worlds” were asocial: people listened to and rated songs without knowing what others were doing. In other “worlds”, people were shown what others in their world were rating and downloading. The social “worlds” produced two striking results. Inequality increased: the most popular songs were far more popular than in the asocial world, as people herded together. The unpopular songs were even less popular.
Even more remarkably, each social world had different “hits”. The random tastes of the earliest reviewers shaped what others listened to. The result: highly successful “winners” picked almost at random by the madness of a highly social crowd.
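The herding mechanism behind both results can be sketched as a toy simulation – my own caricature, not the researchers’ code. Listeners in a “social” world weigh a song’s popularity alongside its appeal, so early random choices compound; in the asocial world, appeal alone decides. The song qualities and listener counts below are assumptions for illustration.

```python
import random

N_SONGS, N_LISTENERS = 48, 2000
_q = random.Random(42)
QUALITY = [_q.random() for _ in range(N_SONGS)]  # hidden "true" appeal of each song

def run_world(social, seed):
    """One experimental 'world': listeners pick and download songs one at a time."""
    rng = random.Random(seed)
    downloads = [0] * N_SONGS
    for _ in range(N_LISTENERS):
        if social:
            # Listeners see the download counts, so popularity reinforces itself.
            weights = [q + d for q, d in zip(QUALITY, downloads)]
        else:
            # Asocial world: choice is driven by a song's appeal alone.
            weights = QUALITY
        pick = rng.choices(range(N_SONGS), weights=weights)[0]
        downloads[pick] += 1
    return downloads

social = run_world(True, seed=1)
asocial = run_world(False, seed=1)

# Share of all downloads captured by the biggest hit in each world.
top_social = max(social) / N_LISTENERS
top_asocial = max(asocial) / N_LISTENERS

# Same songs, different social worlds -- the hit can land on a different song.
hits = [max(range(N_SONGS), key=run_world(True, seed=s).__getitem__) for s in range(3)]
print(top_social, top_asocial, hits)
```

Run it and the social world’s winner takes a far larger share than the asocial world’s – the inequality the experiment found – while which song wins varies with the world’s early random history.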
A more recent study – published in Science by Lev Muchnik, Sinan Aral and Sean Taylor – manipulated social preferences in a more direct way. The researchers teamed up with an internet site that allowed both comments and positive or negative votes on those comments. It was arranged that whenever comments were published, the site would instantly and randomly add a positive or negative vote.
Again, people paid attention to what others had (apparently) done, but in an asymmetric way. Negative votes, which are unusual on the site, often motivated “corrective” positive votes. Positive votes tended to attract birds of a feather – a comment sent into the online world with a single positive vote attached was 30 per cent more likely to end up with at least 10 more positive votes than negative ones.
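That asymmetry can be mimicked with a toy voter model – again my own sketch, not the paper’s analysis, with all the probabilities assumed for illustration. Each voter sees the comment’s running score; a positive score attracts mild herding, a negative one a stronger corrective urge, and the random first vote tilts the whole trajectory.

```python
import random

def final_score(seed_vote, rng, n_voters=40):
    """Toy model: n_voters see the running score and each cast +1 or -1."""
    score = seed_vote
    for _ in range(n_voters):
        p_up = 0.5        # voters' genuine 50/50 split (an assumption)
        if score > 0:
            p_up += 0.10  # mild herding behind a positive score
        elif score < 0:
            p_up += 0.25  # stronger "corrective" upvoting of negatives
        score += 1 if rng.random() < p_up else -1
    return score

rng = random.Random(0)
trials = 3000
# Chance of ending at least 10 votes in the black, by the sign of the seeded vote.
plus_seeded = sum(final_score(+1, rng) >= 10 for _ in range(trials)) / trials
minus_seeded = sum(final_score(-1, rng) >= 10 for _ in range(trials)) / trials
print(plus_seeded, minus_seeded)
```

A comment seeded with a single positive vote ends up well ahead far more often than one seeded with a negative vote, even though the voters’ underlying tastes are identical – the flavour, though not the magnitude, of the study’s 30 per cent result.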
In both these experiments, an early good review had a substantial influence on the outcome. I owe Mr Kelman a debt of gratitude.
The Undercover Economist Strikes Back, by Tim Harford, is published by Little, Brown