© The Financial Times Ltd 2016 FT and 'Financial Times' are trademarks of The Financial Times Ltd.
January 15, 2013 5:52 pm
Christopher Ahlberg can predict the future – or at least he hopes his company can. Recorded Future provides companies and governments with intelligence updates based on scouring the internet for information – tweets, news articles and blogs – that it then quickly analyses.
“It could be anything from when the Chinese premier is next planning to travel to Macau, to what product a rival company is about to launch,” says Mr Ahlberg. Unsurprisingly, there is a fair amount of interest in this service. “Our customers are some of the largest corporations in the world that are interested in world events, hedge funds who do political risk-trading and even government agencies,” he says. He declines to name specific clients, as most prefer not to broadcast the fact that they access this type of information-gathering. The company’s advisers include David Carey, a former executive director of the CIA, and the investment arms of Google and the CIA have both backed the company.
Recorded Future, which was launched three years ago, has a turnover of around $10m a year and employs 35 people. The company is one of a number of start-ups that have utilised the twin technology trends of cloud computing and big data to underpin their business models. Its launch is a sign of how quickly the sector has matured: it is difficult to imagine Recorded Future operating as effectively just five years ago.
Access to huge amounts of computer processing power and storage, provided on a pay-as-you-go basis by companies such as Amazon, the ecommerce group, has allowed even small companies to process vast amounts of data. Previously, crunching through petabytes of data from the internet – roughly 20m filing cabinets’ worth of information – would have required buying roomfuls of expensive servers.
“Barriers to entry are down, as you can take advantage of cloud computing, and if you need more computing power, you can just throw another server on the fire, so to speak,” says Mr Carey.
There has also been a rapid development of technologies that allow companies to quickly analyse large amounts of unstructured data, such as social media messages and pictures. Technologies such as Hadoop, which allows companies to process vast amounts of data with ease, and Hana, created by SAP, which has cut the time it takes to analyse computer data, have made it possible to turn around more data, faster.
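The idea behind Hadoop-style processing is that a job is split into a “map” step that runs independently over each piece of data – and so can be spread across many machines – and a “reduce” step that combines the partial results. A minimal sketch of that pattern in plain Python, using a hypothetical handful of messages standing in for a real social-media stream:

```python
from collections import Counter
from itertools import chain

# Hypothetical corpus standing in for a stream of social media messages.
messages = [
    "new product launch rumoured",
    "premier to travel next week",
    "product launch confirmed",
]

# Map step: emit (word, 1) pairs from each message independently.
# In Hadoop, these map tasks would run in parallel across a cluster.
mapped = chain.from_iterable(
    ((word, 1) for word in msg.split()) for msg in messages
)

# Reduce step: sum the counts for each word to get totals.
counts = Counter()
for word, n in mapped:
    counts[word] += n

print(counts["launch"])  # → 2
```

Real Hadoop jobs distribute both steps across machines and shuffle the mapped pairs by key before reducing, but the division of labour is the same.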
“Access to more processing power is certainly a factor,” says Mr Ahlberg. “But more interesting is that I can now track sources around the world in real time. The big difference is the ability to take text and turn it into data, whether it is a Chinese character, or in Spanish.”
So far, it is largely start-ups that are taking advantage of these computing capabilities, with larger, more established businesses still slow to follow, says Tony Baer, an analyst at Ovum, the technology market research firm. At a recent Ovum conference, only a third of companies attending were implementing big data projects, Mr Baer says. “And most of those are not very far along. They are mostly in the proof-of-concept stage.”
Indeed, companies currently looking at implementing big data projects often just play it safe. Internet pioneers such as Yahoo and Google are the exception, explains Mr Baer, by “doing things like optimising the ad placement on search results. But most companies are just doing slightly bigger versions of data analysis that they have always done.”
Big data has allowed companies to be much more accurate in terms of monitoring. It is no longer necessary to extrapolate from a small sample of data; now it is possible to analyse an entire dataset. Mr Baer recalls the Nielsen ratings boxes that were installed in selected US households in the 1980s to measure the nation’s TV viewing. Only around 0.02 per cent of households were sampled, and the data was criticised for inaccuracy and for not being statistically significant. “Nielsen had to be very scientific about the sample and would have to get families to do things like record the shows they watched in log books,” Mr Baer says. “Now you don’t have to rely on people doing things, you can simply capture data on what people are watching online or on mobile.”
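The difference between the two approaches can be made concrete with a toy simulation – the figures below are invented for illustration, not drawn from Nielsen. A small panel yields only an estimate of viewership; with the full dataset captured, the true rate can simply be computed:

```python
import random

random.seed(0)

# Hypothetical population of 1m households; 1 means the household
# watched a given show (true rate set at 30% for this illustration).
population = [1 if random.random() < 0.3 else 0 for _ in range(1_000_000)]

# Panel approach: estimate viewership from a 0.02 per cent sample,
# roughly the proportion the old Nielsen boxes covered.
panel = random.sample(population, 200)
estimate = sum(panel) / len(panel)

# Full-dataset approach: with viewing data captured directly,
# the exact rate over the whole population can be computed.
actual = sum(population) / len(population)

print(f"panel estimate: {estimate:.3f}, actual: {actual:.3f}")
```

Rerunning with different seeds shows the panel estimate bouncing around the true value, which is exactly the sampling error the old ratings were criticised for.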
Bluefin Labs, based in Cambridge, Massachusetts, can build a picture in real time of what people are watching on TV simply by monitoring the comments they make on social media. Survey data indicate that around 40 per cent of people use a smartphone or a tablet while watching television, enough to provide an adequate sample size. Companies such as CBS, the television network, use these types of processes to evaluate the popularity of programmes, while Procter & Gamble, the consumer goods company, is using the technology to evaluate the effectiveness of its advertising.
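At its simplest, this kind of monitoring amounts to tallying programme mentions in a live comment stream. A sketch of that tally, with hypothetical comments and hashtags (Bluefin’s actual matching of comments to broadcasts is far more sophisticated):

```python
import re
from collections import Counter

# Hypothetical stream of social-media comments made during prime time.
comments = [
    "Loving #BigShow tonight!",
    "#BigShow is great",
    "watching #OtherShow instead",
    "#BigShow finale!!",
]

# Count hashtag mentions per programme, case-insensitively, as a
# rough real-time signal of which shows are drawing an audience.
mentions = Counter(
    tag.lower() for c in comments for tag in re.findall(r"#\w+", c)
)

print(mentions.most_common(1))  # → [('#bigshow', 3)]
```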
Big data may even eventually be able to tell researchers details that customers themselves barely register. Affectiva, a start-up that grew out of the MIT Media Lab, is using webcams to monitor people’s facial expressions, a technology that could, for example, provide companies with data on how viewers are responding emotionally to seeing a particular advert.
Mr Carey, who advises several big data start-ups, says larger companies must be wary of being overtaken by smaller rivals that have access to these types of faster and more accurate data.
“Companies used to pride themselves on making data-free decisions,” he says. “They understood their businesses so well they just had a feel for the right decision, and it was hard to challenge that. But now newcomers can come into the market with dashboards of distilled data and make good decisions too.”