At the end of the second act of 2001: A Space Odyssey, HAL 9000, a sentient computer in charge of spaceship Discovery One's day-to-day functions, decides it's had quite enough of humans.

After severing one crew member's oxygen hose in deep space, HAL decides to go after captain Dave Bowman. In a famous sequence, HAL repeatedly overrides Bowman's commands to let him back into the ship, as the artificially intelligent servant turns into a ruthless master.

The coming global domination of AI is one of Hollywood's favourite tropes, but it seems we now have a real-life case of an AI know-it-all on our hands, albeit in a less threatening context: venture capital.

On Thursday, Axios reported that Google Ventures, the venture capital arm of monolithic search-engine company Alphabet, has been using an algorithm, referred to as “The Machine”, to help it decide on prospective investments. Setting aside the fact that Google's algorithmic approach was reported in 2011 and 2015, here's an extract from the article:

...the firm, formerly known as Google Ventures, for years has used an algorithm that effectively permits or prohibits both new and follow-on investments...staffers plug in all sorts of deal details into “The Machine” — which is programmed with all sorts of market data, and returns traffic signal-like outputs.

Green means go. Red means stop. Yellow means proceed with caution, but sources say it's usually the practical equivalent of red.

Inputs into “The Machine” include round size, syndicate partners, past investors, industry sector and the delta between prior valuation and current valuation. The algorithm then ranks deals on a 10-point scale, with green said to represent 8 or above.
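To make the reported mechanics concrete, here is a minimal Python sketch of a traffic-light deal scorer. To be clear, Axios names only the inputs, the 10-point scale and the green threshold of 8; the weights, the scaling of each input and the yellow cut-off below are entirely hypothetical assumptions for illustration, not GV's actual model.

```python
from dataclasses import dataclass

# Purely illustrative: the real inputs' scaling, weights and thresholds of
# GV's "Machine" are not public. This only shows how the reported inputs
# could be combined into a 10-point score and mapped to traffic-light output.

@dataclass
class Deal:
    round_size_usd: float           # size of the round being raised
    syndicate_partner_score: float  # 0-1, hypothetical quality of co-investors
    past_investor_score: float      # 0-1, hypothetical quality of prior backers
    sector_score: float             # 0-1, hypothetical attractiveness of the sector
    prior_valuation_usd: float
    current_valuation_usd: float

def score_deal(deal: Deal) -> float:
    """Return a hypothetical score on a 10-point scale."""
    # Delta between prior and current valuation, one of the reported inputs.
    delta = (deal.current_valuation_usd - deal.prior_valuation_usd) / max(
        deal.prior_valuation_usd, 1.0
    )
    # Made-up weights; the article does not disclose how inputs are combined.
    raw = (
        0.3 * deal.syndicate_partner_score
        + 0.2 * deal.past_investor_score
        + 0.2 * deal.sector_score
        + 0.2 * min(max(delta, 0.0), 1.0)                    # cap the delta's contribution
        + 0.1 * min(deal.round_size_usd / 50_000_000, 1.0)   # arbitrary $50m normaliser
    )
    return round(10 * raw, 1)

def traffic_light(score: float) -> str:
    """Map the score to the reported traffic-signal output (green = 8 or above)."""
    if score >= 8:
        return "green"   # go
    if score >= 5:       # yellow threshold is an assumption; only green's is reported
        return "yellow"  # proceed with caution, usually the practical equivalent of red
    return "red"         # stop

# Example with entirely made-up numbers:
deal = Deal(round_size_usd=25_000_000, syndicate_partner_score=0.9,
            past_investor_score=0.8, sector_score=0.7,
            prior_valuation_usd=100_000_000, current_valuation_usd=180_000_000)
print(score_deal(deal), traffic_light(score_deal(deal)))
```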

According to Axios, staffers are raging against the Machine's judgment after possibly missing several lucrative venture deals recently.

Of course, utilising vast swathes of data to aid investing decisions is nothing new. Renaissance Technologies, perhaps the most successful of the quantitative hedge funds, has been harvesting and harnessing data to great effect since the 1980s.

Intuitively, quantitative strategies make the most sense in liquid, centralised and relatively transparent securities markets. In public equities, for instance, an algorithm has access not only to a deep history of data, such as financials and industry performance, but also to real-time inputs that affect pricing, whether it be macro data or, yes, the weather. This rich array of inputs should yield sophisticated outputs, and until recently, the proof was in the pudding.

However, taking an algorithmic approach to venture capital investing, particularly to the point where an algo's judgment is sacrosanct, is perplexing.

All investing involves a degree of judgment about the future, but in few markets is the future as radically uncertain as in venture capital. It would be a stretch to claim that the angel investors in podcasting company Odeo ever imagined the company would pivot to eventually become Twitter, for example.

Indeed, Twitter is not the only instance. The history of venture capital is littered with failing moonshots that swivelled into rocket ships. Some successful businesses don't have a sizeable addressable market at the start, and then grow far beyond the scope their founders intended. Sure, internally developing cloud infrastructure at Amazon in the early 2000s seemed like a good idea to help smooth operations, but a good idea is one thing, and a business with annual sales of $17bn is quite another.

That's not to say quantitative methods have no place in venture capital. One can imagine, for instance, data being of great help in due diligence. Something must be working at Google Ventures, too: assets under management have grown from $50m in 2009 to $3.5bn in 2018. Whether this is a function of performance, or of Alphabet having too much cash on its books, is open to interpretation.

But in a world where judgment over a founder's qualities is almost as important as the initial business plan, it seems foolish to leave key decisions to an aspiring HAL 9000.

After all, quantitative strategies should exist to gauge probabilities, not to rationalise uncertainties.

If the difference between the two isn't clear, Keynes perhaps put it best in his 1937 follow-up to the General Theory:

By “uncertain” knowledge...I do not mean merely to distinguish what is known from what is merely probable...The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth-owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know.

Related Links:
Quant funds lose their shine as strategies falter - FT

Abandon all hope, ye who venture here - FT Alphaville
You’ll never be a Yale superman - FT Alphaville
Is Google cheap? - FT Alphaville
