Lawrence Summers, above, and Ben Bernanke have clashed on the subject of secular stagnation

Macroeconomists are once again caught up in a discussion about the future of their profession. An example is the recent debate between Lawrence Summers and Ben Bernanke about the deep causes of the economic slowdown. The former US Treasury secretary has made the case for “secular stagnation”, while the former chairman of the Federal Reserve sees an excess of savings over investment.

It is an enlightened debate, but it also masks a much deeper problem within macroeconomics. Secular stagnation — a sharp fall in growth rates lasting a very long time — is not something that you can easily square with the current generation of macroeconomic theories and models.

Part of this debate reminds me of a discourse among mathematicians at the end of the 19th century. At the time, mathematicians — and physicists too — thought they had solved most problems, just as economists did until 2008.

It is instructive to go back to one episode, concerning the German mathematician Richard Dedekind. He was one of the rebels of his time, and used a new technique to prove an important result. His method would be considered standard stuff today, but was revolutionary then. The response from the traditionalists was harsh. Leopold Kronecker, another German mathematician, decried Dedekind’s proof as useless on the grounds that it had no practical applications. Dedekind retorted, not helpfully, that he wanted “to put thoughts in the place of calculations”.

The advent of chronic instability is the equivalent challenge for macroeconomics today. The present tools used by mainstream macroeconomists cannot deal with this adequately. New ones are needed. They exist in other disciplines, but to macroeconomists they look as weird today as the abstract stuff looked to mathematicians of the 19th century.

For the moment, the traditionalists still rule. They managed to go beyond the ideological turf wars of the 20th century, by taking a leap towards a new generation of economic models that were technically complex — in the sense of 19th century mathematics. The models integrated what economists had learned about various markets with knowledge about the economy as a whole. The so-called dynamic stochastic general equilibrium (DSGE) models were even designed to cope with some unforeseeable disturbances like a technology shock. They were just not able to deal with the shocks we eventually got — a financial crisis, default and deflation.

From a mathematical perspective, the modern models have at least three questionable features. The first is the assumption of a single macroeconomic equilibrium — the notion that the economy reverts to its previous position or path after a shock. Macroeconomists currently have no coherent technical framework to deal with a secular stagnation or savings glut.

The second is linearity — the idea of a straight-line relationship between events. Standard macroeconomic models are complex, but their systems of equations are linear. If you want to understand why the economy did well before 2007, why there was a break in 2008 and why the path of economic output never returned to its previous trajectory, you would need models that incorporate non-linearity, and even chaos.
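The point about non-linearity can be made concrete with a minimal sketch — my own illustration, not drawn from any macroeconomic model — using the logistic map, a textbook non-linear system. Two trajectories that start almost identically soon bear no relation to each other, a sensitivity that no linear system of equations can produce.

```python
# Illustrative only: the logistic map x -> r * x * (1 - x) is a standard
# textbook example of a chaotic non-linear system, not an economic model.

def logistic_map(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_map(0.200000)
b = logistic_map(0.200001)  # perturb the starting point by one millionth

# Early on the two paths are indistinguishable; after a few dozen steps
# they diverge completely.
diff_early = abs(a[1] - b[1])
diff_late = max(abs(x - y) for x, y in zip(a[25:], b[25:]))
print(diff_early, diff_late)
```

In a linear model, a tiny shock stays tiny forever; in a chaotic one, it compounds until the outcome is unrecognisable — which is closer to the break-in-trend behaviour seen after 2008.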

The third is logically not a category of its own but a combination of the two above: the assumption of a limitless space, the idea that, wherever you stand, you can always go further. We know, for example, that interest rates cannot fall much below zero, because people can always hoard cash and thus avoid a negative rate. No-go zones like the zero lower bound are technical minefields in a model. Strange things happen when you approach the outer limit of your space.
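The zero lower bound can be sketched in a few lines. This is a hypothetical toy, with coefficients of my own choosing rather than any central bank's: a linear, Taylor-style policy rule behaves smoothly until a slump calls for a negative rate, at which point the bound kinks the straight-line relationship.

```python
# Hypothetical toy rule; the functional form and coefficients are
# illustrative assumptions, not any actual central bank's reaction function.

def policy_rate(inflation, output_gap, neutral=2.0):
    """Unconstrained linear Taylor-style rule (all figures in per cent)."""
    return neutral + 1.5 * (inflation - 2.0) + 0.5 * output_gap

def policy_rate_zlb(inflation, output_gap):
    """The same rule with the zero lower bound imposed."""
    return max(0.0, policy_rate(inflation, output_gap))

# In a deep slump the linear rule calls for a negative rate, but cash
# hoarding caps the actual rate at zero: the model hits the edge of its space.
print(policy_rate(0.5, -4.0))      # prints -2.25
print(policy_rate_zlb(0.5, -4.0))  # prints 0.0
```

Near the bound the economy's response is no longer proportional to the shock, which is exactly the kind of behaviour a purely linear model assumes away.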

Few of these criticisms have left a lasting impression on the profession. The mainstream invested a life’s work in developing their DSGE models. They will not let go easily, but continue to tinker with their models, and hope that no policy maker will ever use them. Unfortunately, many institutions already have. An example is the European Central Bank’s use of a DSGE model that has produced persistently over-optimistic forecasts.

And what about the rebels? An early detractor was Hyman Minsky, who in the 1980s and 1990s developed a framework for understanding modern financial crises. Minsky was shunned by the establishment. Today’s consensus challengers, like Minsky before them, remain in the anterooms of that debate — on Twitter and blogs, outside established journals and top universities.

Will they succeed? Just as none of these models has a hope of predicting our future, I cannot predict the future of macroeconomics itself. Having followed the debate for a long time, my hunch is that, unlike in mathematics, the successful challenge will come from outside the discipline, and that it will be brutal.

Letters in response to this column:

Physics of gases may be a better starting point / From Michael Kuczynski

Macroeconomic measurement / From Leonard S Hyman

Bringing economics back into liberal academic life / From Hugh Goodacre

All models are wrong, but some are useful / From Jagjit S Chadha


Copyright The Financial Times Limited 2019. All rights reserved.
