July 23, 2007 6:10 pm
Robert Engle: Good morning. We’re here in Washington Square Park, a beautiful park in the centre of New York City, right in the heart of the NYU campus. We’re going to talk about measuring volatility when it’s time varying.
Practitioners discovered that volatilities were time varying because when they measured them at different points in time, they got different answers. This leads to the most common way of measuring volatility, which is to calculate the standard deviation of the return over a shorter period. It might be just a week, a month, or a year. These measures of volatility are called historical volatility, and we’ll look at them in a minute. The difficulty with these measures is that if you use a small number of observations you get a very noisy measure of volatility, and if you use a long series, you get something so smooth that it doesn’t respond very well to new information.
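To make the window trade-off concrete, here is a minimal sketch in Python (using NumPy, with simulated returns standing in for a real series, an assumption for illustration) that computes historical volatility over a short and a long window:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily returns with a volatility shift halfway through,
# standing in for a real return series (an assumption for illustration).
returns = np.concatenate([
    rng.normal(0.0, 0.01, 500),   # calm regime: 1% daily volatility
    rng.normal(0.0, 0.03, 500),   # turbulent regime: 3% daily volatility
])

def historical_vol(r, window):
    """Rolling standard deviation over a fixed trailing window."""
    return np.array([r[i - window:i].std() for i in range(window, len(r) + 1)])

short = historical_vol(returns, 5)    # noisy: jumps around day to day
long_ = historical_vol(returns, 250)  # smooth: slow to reflect the regime shift

# The short-window series is far choppier than the long-window one.
print(short.std(), long_.std())
```

A five-day window reacts within days to the volatility shift but is dominated by sampling noise; a 250-day window is stable but takes most of a year to catch up.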
So, what’s an alternative to historical volatility? That’s where the ARCH model comes in. ARCH stands for Autoregressive Conditional Heteroskedasticity. It is, however, a very simple concept.
But the simple model had such an impact on financial econometrics that the Nobel Prize Committee gave me the Nobel prize in 2003 for inventing this model.
But now it’s time to talk about what really is the ARCH model.
I told you about historical volatility. It’s a way of measuring standard deviations with a limited amount of data. What the ARCH model does to solve the problem of how to choose the window is to say: let’s use a big window, but with weighted averages. That is, we’ll give more weight to recent information and less weight to information that happened a long time ago.
Particular events that happened a long time before get a very tiny weight, so they continue to influence our forecasts of future volatility, but only slightly. The special feature of the ARCH model is that you can estimate these weights using historical data; that is, you can give it to an econometrics programme, and it figures out the best set of weights for this particular series. That’s been a big breakthrough in the statistical modelling of volatility.
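The weighting idea can be sketched with an exponentially weighted recursion, a close cousin of ARCH in which the decay rate is fixed in advance rather than estimated from the data; the value of 0.94 below is an assumed smoothing parameter, not a fitted ARCH weight:

```python
import numpy as np

def ewma_vol(returns, lam=0.94):
    """Exponentially weighted volatility: the squared return from k days
    ago gets weight (1 - lam) * lam**k, so old shocks keep a tiny but
    nonzero influence. lam = 0.94 is an assumed decay rate here, not a
    weight estimated from the data as ARCH would do."""
    var = returns[0] ** 2          # initialise with the first squared return
    history = []
    for r in returns:
        var = lam * var + (1 - lam) * r ** 2
        history.append(var)
    return np.sqrt(np.array(history))

rng = np.random.default_rng(1)
rets = rng.normal(0.0, 0.01, 300)   # simulated returns, for illustration
vols = ewma_vol(rets)
print(vols[-1])                     # should sit near the true 1% level
```

Every past squared return enters the average, but the weight on an observation k days old shrinks geometrically, which is exactly the "big window, declining weights" idea.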
So, let me show you how it works. If you look at the graph of the S&P 500 that we studied before, we now have, in yellow, the five-day moving standard deviations, and you can see those are jumping all over the place. There’s a great deal of volatility in the standard deviations over the sample period. If you look at the red curve, that’s the one-year-window standard deviation, which is much smoother, and if you look at the green curve, that’s a five-year standard deviation, and you can see that it often lags well behind the movements in volatility, but it is much more stable.
If you give the same data to a GARCH programme, like EViews or MATLAB, or many others, it will calculate the best set of weights and give you estimates of what the volatility is for each day over the sample period, and that’s shown in the next graph.
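What such a programme does under the hood can be sketched as maximum-likelihood estimation of a GARCH(1,1). The sketch below simulates a series with known parameters and recovers them with SciPy's optimiser; it is a minimal illustration under Gaussian assumptions, not the actual estimator that EViews or MATLAB ships:

```python
import numpy as np
from scipy.optimize import minimize

def garch11_neg_loglik(params, r):
    """Gaussian negative log-likelihood of a GARCH(1,1):
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    omega, alpha, beta = params
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()                      # start from the sample variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    if not np.all(np.isfinite(sigma2)):      # guard against explosive trials
        return 1e10
    return 0.5 * np.sum(np.log(sigma2) + r ** 2 / sigma2)

# Simulate a GARCH(1,1) series with known parameters so the fit can be checked.
rng = np.random.default_rng(2)
omega0, alpha0, beta0 = 1e-6, 0.10, 0.85
r = np.empty(2000)
s2 = omega0 / (1 - alpha0 - beta0)           # unconditional variance
for t in range(len(r)):
    r[t] = rng.normal(0.0, np.sqrt(s2))
    s2 = omega0 + alpha0 * r[t] ** 2 + beta0 * s2

res = minimize(garch11_neg_loglik, x0=[1e-6, 0.05, 0.90], args=(r,),
               bounds=[(1e-8, None), (0.0, 0.5), (0.0, 0.999)],
               method="L-BFGS-B")
omega, alpha, beta = res.x
print(alpha, beta)                  # estimated alpha, beta (true: 0.10, 0.85)
```

The optimiser is doing the "figures out the best set of weights" step: it searches over omega, alpha and beta for the parameters that make the observed returns most likely.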
You can see that the volatility has gone up and down over time. It’s low in the mid-’90s, it’s high around 2000, and it’s falling at the end of the sample period. The largest event is the October 1987 crash, where you can see volatility reached very high levels but died away very quickly.
Another way to look at the same output is in terms of a confidence interval. Every day, you could ask the question, how much change do we expect in the stock market tomorrow? How high could it go? How low could it go? And, that’s what we call a confidence interval, and the GARCH model gives you a way of calculating these confidence intervals.
So, we’re showing on the next graph the same returns data that we’ve used before, in red, going up and down over the sample period since 1990. In blue, we have the upper band calculated from the GARCH model, and in green, the lower band, and these bands have the interpretation that we’re very sure the next day’s stock market return will not be higher than the blue band or lower than the green band. Thus, it is a time-varying confidence interval, and you can see how much narrower this confidence interval was in the mid-’90s than around 2000, and that it is narrowing again at the end of the sample period.
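One way to sketch such bands: take a one-step-ahead conditional volatility estimate (here an EWMA recursion stands in for the GARCH forecast, with an assumed decay rate of 0.94 and an assumed zero mean return) and set the bands at plus and minus two conditional standard deviations:

```python
import numpy as np

rng = np.random.default_rng(3)
# Two volatility regimes standing in for changing market conditions.
rets = np.concatenate([rng.normal(0.0, 0.01, 500),
                       rng.normal(0.0, 0.02, 500)])

# One-step-ahead conditional variance from an EWMA recursion (a stand-in
# for the GARCH forecast; lam = 0.94 is an assumed smoothing parameter).
lam = 0.94
sigma2 = np.empty(len(rets))
sigma2[0] = rets[:50].var()        # crude initialisation, for illustration
for t in range(1, len(rets)):
    sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * rets[t - 1] ** 2
sigma = np.sqrt(sigma2)

upper = 2.0 * sigma                # roughly a 95% band under normality,
lower = -2.0 * sigma               # centred on an assumed zero mean return

coverage = np.mean((rets > lower) & (rets < upper))
print(coverage)                    # fraction of days inside the band
```

The bands widen automatically when the turbulent regime begins, which is exactly the behaviour of the time-varying confidence interval in the graph.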
In the next lecture, we’ll talk about how we can use this idea to give measures of risk that can be used in financial practice.
Copyright The Financial Times Limited 2016.