© The Financial Times Ltd 2016
FT and 'Financial Times' are trademarks of The Financial Times Ltd.
The Financial Times and its journalism are subject to a self-regulation regime under the FT Editorial Code of Practice.
September 2, 2014 11:02 am
Business people like to think their decisions are rational and based on facts, but in practice they are often swayed more by gut feel or the advice of colleagues, which can of course be wrong.
For example, a cake shop owner might think an ice-cream parlour moving in next door would have a negative effect on sales, but, by attracting more people to the area, it could actually increase them.
Using software to experiment with data can help people discover the errors in their own judgment, says Anthony Bruce, chief executive of Applied Predictive Technologies (APT), a Washington-based company that specialises in experimentation, or “test and learn”, programs.
Mr Bruce says traditional data analytics and data mining often find spurious relationships, and that this can lead to expensive and incorrect business decisions. “But experimentation software measures the impact of incremental changes, such as how increasing one product line can affect other apparently unrelated products. It recommends what further variables to try and then subjects those results to validation.”
Simply trusting one’s own intuition is not enough, says Mr Bruce. When data are subjected to software analysis, 20 per cent of results may be unexpected.
While decision makers might be able to deduce an overall effect, they often struggle to quantify and weigh up the relative importance of various factors. “Software that can do this gives much more valuable insight and maximises return on investment,” says Mr Bruce.
The technique worked well at Kohl’s, a US department store chain that wanted to follow the market trend and start selling furniture. This meant taking space occupied by other goods, says Katy Mackesey, Kohl’s director of ecommerce finance.
Kohl’s decided to cut back on children’s clothing. “We knew we would lose sales in that category, but we thought the furniture sales would create a healthy positive net effect,” says Ms Mackesey.
The experiment ran in three of the company’s 50-plus stores, and the results were analysed with test and learn software. They showed a more complex picture than had been expected, says Ms Mackesey. “The software showed us that it wasn’t only kids’ apparel that was losing sales, but there was a knock-on effect.”
Because fewer parents were coming in for children’s clothes, they were not buying their children other items, such as toys and shoes – or things for themselves.
It was counter-intuitive, says Ms Mackesey. “Everybody had expected to see a win. We were losing sales to a much larger degree than anticipated.” After trying a range of approaches to product mix, marketing and pricing, and doubling the usual test period to 10 months, Kohl’s abandoned the idea.
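The kind of calculation the Kohl’s experiment relied on can be sketched in a few lines. This is an illustrative difference-in-differences estimate, not APT’s actual software, and every figure below is invented for the example: a change is judged not by test-store sales alone, but by how each category moved in test stores relative to matched control stores, so knock-on losses in toys and shoes count against the furniture gain.

```python
# Hypothetical average weekly sales ($000s) per category, before and after
# the change, in test stores and in matched control stores.
categories = {
    #                test_pre, test_post, ctrl_pre, ctrl_post
    "furniture":      (0.0,     30.0,      0.0,      0.0),
    "kids_apparel":  (60.0,     35.0,     58.0,     57.0),
    "toys":          (25.0,     18.0,     24.0,     24.5),
    "kids_shoes":    (15.0,     11.0,     15.5,     15.0),
}

def incremental_lift(test_pre, test_post, ctrl_pre, ctrl_post):
    """Difference-in-differences: the change seen in test stores minus
    the change the control stores say would have happened anyway."""
    return (test_post - test_pre) - (ctrl_post - ctrl_pre)

net = 0.0
for name, (tp, ts, cp, cs) in categories.items():
    lift = incremental_lift(tp, ts, cp, cs)
    net += lift
    print(f"{name:>12}: {lift:+.1f}")
print(f"{'net impact':>12}: {net:+.1f}")
```

With these made-up numbers the furniture gain (+30.0) is outweighed by the knock-on losses across children’s categories, giving a negative net impact, which is the shape of result the article describes.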
“Before using APT’s analytics software, we would have ignored our test results and somebody would have gone with their gut instinct,” says Ms Mackesey. “The software helped isolate what the issue was and prove that the counter-intuitive results were real. Once you use software like this, you can see that your intuition can be plumb wrong.”
Users do not need to be experts or data scientists. The software knows how much information it needs and checks how the data were trending before the test began. There are no inconclusive test results. The software highlights where you have picked a false or unrepresentative starting point, says Ms Mackesey. “And it checks that your control data are right – that you are comparing apples with apples.”
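The “apples with apples” check Ms Mackesey mentions can be illustrated with a simple pre-period comparison. This is an assumed sketch of the idea, not APT’s implementation, and the weekly figures are hypothetical: before trusting a test result, confirm that test and control stores were trending together before the test began.

```python
def pre_period_gap(test_weeks, ctrl_weeks, tolerance=0.05):
    """Mean relative gap between test and control sales in the weeks
    before the test, and whether it stays within tolerance (i.e. the
    control stores are a fair baseline for comparison)."""
    gaps = [abs(t - c) / c for t, c in zip(test_weeks, ctrl_weeks)]
    mean_gap = sum(gaps) / len(gaps)
    return mean_gap, mean_gap <= tolerance

test_pre = [101, 98, 104, 100]   # hypothetical weekly sales, test store
ctrl_pre = [100, 99, 103, 101]   # hypothetical weekly sales, control store

gap, ok = pre_period_gap(test_pre, ctrl_pre)
print(f"mean pre-period gap: {gap:.1%}, well matched: {ok}")
```

A large pre-period gap would flag a false or unrepresentative starting point, which is the check the software is described as automating.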
Experimentation software works best when there are lots of data – for example, transactions, customers, employees or geographic areas. In addition to retail, early sectors to use it include financial services, consumer products and telecoms. Initial applications tend to be in marketing, merchandising, pricing and promotion.
Organisations are often sceptical at the outset, but the ability of experimentation software to demonstrate the impact of decisions tends to win people over once they try it, Mr Bruce says. Return on investment takes well under a year, he adds.
One way to win over sceptics is to apply the software to historical data, against which it can prove its accuracy. This helps persuade the doubters, Ms Mackesey says. “In situations where there is disagreement, you can get both sides aligned on a methodology.”
Kohl’s management team initially used experimentation software for capital investment decisions, but its use has since spread down the management chain. “Its functionality has grown exponentially during the past two years and it also operates much faster,” says Ms Mackesey.
Marketing people used to have to run very complicated queries, push the button at the end of the day, and come back next morning to get their results. Now queries come back in 20 minutes or less. This helps speed innovation.
“Experimentation software is useful for defining what sales lift we would get from spending $2m to remodel an existing store. It is also relevant for many smaller investments,” says Ms Mackesey. “We always tried to measure this, but, before, there was no consistency across the group.”
There are huge opportunities to use the software to explore how online retail compares with bricks and mortar, and how the two affect each other, says Ms Mackesey. “We’ll also be able to understand customer behaviour in more detail, monitor trends, and analyse how people respond to offers.”
Still, however elaborate software may become and however much it comes to inform business decisions, we can take comfort from the fact that humans will always be needed to evaluate its findings.