Partner Content
Wharton Online
This content was paid for by Wharton Online and produced in partnership with the Financial Times Commercial department.

Analytics: When less really can be more

Analysing marketing data too finely may lead to costly errors; here’s a new way to do it

Consider England’s mysterious crop circles, or the ancient, gigantic animal-shaped mounds of the Americas. At ground level, they don’t look like much. It isn’t until you go further up—on a hillside or in a plane, perhaps—that you get to the “sweet spot” where the pattern sharply emerges. If you continued going up, the pattern would eventually blur and vanish.

Similarly, managers who rely on data analytics, tracking sales or usage over time, must decide on the level at which their teams should examine a given set of data. Should analysts be measuring daily, weekly, or monthly activity? Or if it’s location data, should they look at a census block? A postal code area?

Currently there is no accepted, reliable way to choose, says Wharton marketing professor Eric Bradlow. Yet that choice, which Bradlow refers to as the “level of granularity”, has a big impact on the accuracy of your marketing information and forecasting.

He cites studies including one on price elasticity [1], a measure of how demand for a product shifts when its price changes, in which the results varied by more than 50% depending on the measurement timeframe used. And in a study on predicting movies’ box-office performance [2], a metric such as average user rating proved valuable at the local market level, yet faded to insignificance at the national level.
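The cited studies use far richer models, but a toy simulation shows how such gaps can arise. In the hypothetical Python sketch below (invented for illustration, not drawn from either study), shoppers respond to the average price over a week while the data records daily prices and sales; the daily-level regression then badly understates the price sensitivity that the weekly-level regression recovers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical world: demand responds to the WEEKLY average price,
# but prices move (and sales are logged) daily.
n_weeks = 200
daily_price = rng.uniform(2.0, 4.0, size=n_weeks * 7)
weekly_price = daily_price.reshape(n_weeks, 7).mean(axis=1)

true_elasticity = -2.0
weekly_demand = np.exp(5.0 + true_elasticity * np.log(weekly_price)
                       + rng.normal(0.0, 0.1, n_weeks))

# Spread each week's demand across its seven days in random proportions
shares = rng.dirichlet(np.ones(7), size=n_weeks)
daily_sales = (weekly_demand[:, None] * shares).ravel()

def log_log_elasticity(price, qty):
    # OLS slope of log(quantity) on log(price)
    return np.polyfit(np.log(price), np.log(qty), 1)[0]

print("daily-level estimate: ", round(log_log_elasticity(daily_price, daily_sales), 2))
print("weekly-level estimate:", round(log_log_elasticity(weekly_price, weekly_demand), 2))
```

At the weekly level the regression recovers an elasticity near the true -2.0; at the daily level the estimate is attenuated sharply toward zero, a gap well beyond 50%.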

Bradlow, along with Wharton marketing colleague Raghuram Iyengar and doctoral student Mingyung Kim, has devised a method that managers can use to determine the optimal level of granularity for various marketing scenarios. Their research, presented in the paper “Selecting Data Granularity Using the Power Likelihood”, could become a vital new tool in companies’ ongoing efforts to navigate the welter of increasingly available customer data.
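The paper’s estimator is considerably more sophisticated, but the core idea of a power likelihood can be caricatured in a few lines: fit the same model at each candidate granularity, then compare log-likelihoods that have been reweighted (raised to a power) so that candidates with very different numbers of observations are scored on a comparable scale. The sketch below is a loose illustration under invented assumptions, a Gaussian log-log demand model and a simple 1/n weight, not the authors’ actual method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: a promotional price set once a week, demand responding to it,
# and sales logged daily in random within-week proportions.
n_weeks = 208
weekly_price = rng.uniform(1.5, 4.5, size=n_weeks)
daily_price = np.repeat(weekly_price, 7)
weekly_demand = np.exp(5.0 - 3.0 * np.log(weekly_price)
                       + rng.normal(0.0, 0.1, n_weeks))
daily_sales = (weekly_demand[:, None] * rng.dirichlet(np.ones(7), n_weeks)).ravel()

def powered_score(price, qty, days_per_period):
    """Aggregate to a candidate granularity, fit a log-log demand model by
    OLS, and return the Gaussian log-likelihood weighted by 1/n, an
    illustrative 'power' that puts different sample sizes on one scale."""
    k = days_per_period
    n = (len(qty) // k) * k
    p = price[:n].reshape(-1, k).mean(axis=1)   # average price per period
    q = qty[:n].reshape(-1, k).sum(axis=1)      # total sales per period
    x, y = np.log(p), np.log(q)
    slope, intercept = np.polyfit(x, y, 1)
    sigma2 = np.var(y - (intercept + slope * x))
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1.0)
    return loglik / len(y)

candidates = {"daily": 1, "weekly": 7, "monthly": 28}
scores = {name: powered_score(daily_price, daily_sales, k)
          for name, k in candidates.items()}
print(scores)
print("selected granularity:", max(scores, key=scores.get))
```

In this toy world the weekly level wins: the daily series is mostly allocation noise, while monthly aggregation smears out the weekly price variation that demand actually responds to.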

Bradlow calls the research a “counter-argument” to the popular notion that “you have to track everything at its most granular level or you’re throwing away money”. Iyengar agrees: “You’re trying to basically gather the point at which people make decisions. The thought is that the most granular data is the ‘correct’ one, but we challenge that idea.” He outlines a scenario in which a consumer buys bread at a store once a week. If you were to receive daily data about that person, then “six days out of seven, the data will basically consist of nothing, because that consumer doesn’t shop at the daily level”.
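In code, the bread example is almost trivial, which is exactly the point. A hypothetical shopper who buys one loaf a week produces a daily series that is six-sevenths zeros, while the weekly roll-up of the same records is a clean, steady signal:

```python
import numpy as np

rng = np.random.default_rng(2)

# One loaf per week, bought on a random day of that week
n_weeks = 8
daily = np.zeros(n_weeks * 7, dtype=int)
for week in range(n_weeks):
    daily[week * 7 + rng.integers(7)] = 1

weekly = daily.reshape(n_weeks, 7).sum(axis=1)
print("daily: ", daily.tolist())    # mostly zeros: six empty days out of seven
print("weekly:", weekly.tolist())   # steady: one purchase every week
```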

Bradlow gives a similar example of people downloading new apps to their phones. Typically there’s a big burst of usage right away, followed by a very steep decline in activity as people lose their initial sense of excitement. For the marketer, continuing to measure app usage close-up becomes a fool’s errand after a while: “I don’t need to see 5,000 seconds in a row where you do nothing”, says Bradlow.

To test their formula, the researchers applied it to Nielsen scanner data, which records store-level sales and marketing information. In a study of the price elasticity of bottled orange juice brands, Bradlow, Kim, and Iyengar show that their granularity-selection method reduced statistical bias and produced more accurate marketing reporting than existing approaches.

Interestingly, they note, the formula arrived at an ideal granularity that was not the most detailed one available, reinforcing the idea that more microscopic information is not necessarily better.

But what if a manager always needs the same level of analysis, say monthly or quarterly, for standard reports? Iyengar asserts that even if the granularity tool arrives at a level that doesn’t match the division’s or company’s reporting cycle, data analysts can still work with the method to generate accurate information, which will remain accurate when rolled up into larger units.
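Mechanically, the roll-up is simple aggregation: if the estimates at the chosen granularity are accurate, their sums into reporting periods are too. A minimal sketch, with hypothetical weekly forecast numbers:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical weekly sales forecasts produced at the model's chosen granularity
weekly_forecast = rng.uniform(900.0, 1100.0, size=52)

# Rolled up into 4-week "months" for the standard report
monthly_report = weekly_forecast.reshape(13, 4).sum(axis=1)
print(monthly_report.round(0))
```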

Bradlow and colleagues’ findings have implications not only for more accurate data analysis but also for issues around data privacy. Being restricted to limited data because of privacy regulations may not pose as much of a roadblock to marketers as they think. If you can only get your hands on monthly rather than daily data, or town-wide rather than neighbourhood data, it may be enough to reliably extract the marketing intelligence you need.

It is a paradox, but it appears that sometimes seeing less can help you see more.

Find out more about Wharton Online

Footnotes

Eric Bradlow is the K.P. Chao Professor; Professor of Marketing, Economics, Education, and Statistics; and Vice Dean of Analytics at the Wharton School of the University of Pennsylvania.

Raghuram Iyengar is the Miers-Busch, W’1885 Professor and Professor of Marketing at the Wharton School of the University of Pennsylvania.

Mingyung Kim is a doctoral student in Marketing at the Wharton School of the University of Pennsylvania.

[1] Christen M, Gupta S, Porter JC, Staelin R, Wittink DR (1997) Using market-level data to understand promotion effects in a nonlinear model. J. Marketing Res. 34(3).

[2] Chintagunta PK, Gopinath S, Venkataraman S (2010) The effects of online user reviews on movie box office performance: Accounting for sequential rollout and aggregation across local markets. Marketing Sci. 29(5):944-957.