Historical volatility is a statistical measure of the dispersion of returns for a given security or market index over a given period. This indicator provides different historical volatility model estimators with percentile gradient coloring and volatility stats panel.

█

**OVERVIEW**

There are multiple ways to estimate historical volatility beyond the traditional close-to-close estimator. This indicator provides range-based volatility estimators that take the open, high, and low into account, as well as estimators that use statistical measures other than standard deviation.

The gradient coloring and stats panel provide an overview of how high or low the current volatility is compared to its historical values.

█

**CONCEPTS**

We covered the concepts of historical volatility in our previous indicators: Historical Volatility, Historical Volatility Rank, and Historical Volatility Percentile. You can check those scripts for the definitions. The basic calculation is simply the sample standard deviation of log returns, scaled by the square root of time. The main focus of this script is the difference between the volatility models.

**Close-to-Close HV Estimator:**

Close-to-Close is the traditional historical volatility calculation. It uses the sample standard deviation.

**Note: TradingView's built-in historical volatility value is slightly off because it uses the population standard deviation instead of the sample standard deviation. N - 1 should be used here to remove the sampling bias.**

Pros:

• Close-to-Close HV is the most commonly used estimator in finance. The calculation is straightforward and easy to understand. When people reference historical volatility, most of the time they are talking about the close-to-close estimator.

Cons:

• The Close-to-Close estimator calculates volatility from closing prices only. It does not take intraday movement (the high and low) into account, nor the jump when the open and the previous close differ.

• Close-to-Close weights past volatility equally over the lookback period, while there are other ways to weight the historical data.

• Close-to-Close is calculated from the standard deviation, so it is vulnerable to returns that are not normally distributed and have fat tails. Mean and median absolute deviation make the historical volatility more stable in the presence of extreme values.
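As a reference, here is a minimal Python sketch of the close-to-close calculation described above. It is illustrative only (function names and the 252 trading-day default are my own choices, not the script's actual Pine code), but it shows the sample (N - 1) standard deviation of log returns scaled by the square root of time:

```python
import math

def close_to_close_hv(closes, trading_days=252):
    """Annualized close-to-close HV: the SAMPLE standard deviation
    (N - 1 denominator) of log returns, scaled by sqrt(time)."""
    rets = [math.log(b / a) for a, b in zip(closes, closes[1:])]
    n = len(rets)
    mean = sum(rets) / n
    var = sum((r - mean) ** 2 for r in rets) / (n - 1)  # N - 1: sample variance
    return math.sqrt(var * trading_days)
```

A price series with a perfectly constant growth rate has zero return dispersion, so the estimator correctly reports zero volatility for it.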

═════════════════════════════════════════════════════════════════════════

**For more details about the following estimators, click the blue text to read the original published papers:**

**Parkinson HV Estimator:**

• Parkinson was one of the first to propose improvements to the historical volatility calculation.

• Parkinson suggested that using the high and low of each bar represents volatility better, as it takes intraday movement into account. So Parkinson HV is also known as Parkinson High-Low HV.

• It is about 5.2 times more efficient than the Close-to-Close estimator. However, it does not take jumps and drift into account, so it underestimates volatility.
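A minimal Python sketch of the Parkinson formula (illustrative, not the script's Pine code; the 252 trading-day annualization is my assumption). It uses only each bar's high-low range, with the 1 / (4 ln 2) scaling from Parkinson's paper:

```python
import math

def parkinson_hv(highs, lows, trading_days=252):
    """Annualized Parkinson HV from per-bar high/low ranges:
    sqrt( sum(ln(H/L)^2) / (4 * n * ln 2) ), scaled by sqrt(time)."""
    n = len(highs)
    rng = sum(math.log(h / l) ** 2 for h, l in zip(highs, lows))
    return math.sqrt(rng / (4.0 * n * math.log(2.0)) * trading_days)
```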

**Note: by dividing Parkinson volatility by Close-to-Close volatility you can get a result similar to a variance ratio test. This is called the Parkinson number, and it can be used to test whether the market follows a random walk. (It is mentioned in Nassim Taleb's Dynamic Hedging book, although he appears to have written the ratio incorrectly.)**

**Garman-Klass Estimator:**

• Garman and Klass expanded on Parkinson's estimator. Where Parkinson's estimator uses only the high and low, the Garman-Klass method uses the open, high, low, and close to construct a minimum-variance estimator.

• The estimator is about 7.4 times more efficient than the traditional estimator. But like Parkinson HV, it ignores jumps and drift, so it underestimates volatility.
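A Python sketch of the Garman-Klass formula (illustrative only; names and the 252-day annualization are my assumptions). The high-low range term is reduced by a penalty on the open-to-close move, which is what gives the estimator its lower variance:

```python
import math

def garman_klass_hv(opens, highs, lows, closes, trading_days=252):
    """Annualized Garman-Klass HV:
    sigma^2 = mean( 0.5*ln(H/L)^2 - (2*ln2 - 1)*ln(C/O)^2 )."""
    n = len(opens)
    total = 0.0
    for o, h, l, c in zip(opens, highs, lows, closes):
        hl = math.log(h / l) ** 2
        co = math.log(c / o) ** 2
        total += 0.5 * hl - (2.0 * math.log(2.0) - 1.0) * co
    return math.sqrt(total / n * trading_days)
```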

**Rogers-Satchell Estimator:**

• Rogers and Satchell found some drawbacks in the Garman-Klass estimator: Garman-Klass assumes price follows Brownian motion with zero drift.

• The Rogers-Satchell estimator is calculated from the open, high, low, and close, and it can also handle drift in the financial series.

• Rogers-Satchell HV is more efficient than Garman-Klass HV when there is drift in the data, though slightly less efficient when drift is zero. The estimator does not handle jumps, so it still underestimates volatility.
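A Python sketch of the Rogers-Satchell formula (illustrative, with my own names and 252-day annualization). Note how a pure-trend bar that opens at its low and closes at its high contributes exactly zero, which is how the estimator stays independent of drift:

```python
import math

def rogers_satchell_hv(opens, highs, lows, closes, trading_days=252):
    """Annualized Rogers-Satchell HV:
    sigma^2 = mean( ln(H/C)*ln(H/O) + ln(L/C)*ln(L/O) )."""
    n = len(opens)
    total = 0.0
    for o, h, l, c in zip(opens, highs, lows, closes):
        total += (math.log(h / c) * math.log(h / o)
                  + math.log(l / c) * math.log(l / o))
    return math.sqrt(total / n * trading_days)
```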

**Garman-Klass Yang-Zhang Extension:**

• Yang and Zhang extended Garman-Klass HV so that it can handle jumps. However, unlike the Rogers-Satchell estimator, this extension cannot handle drift. It is about 8 times more efficient than the traditional estimator.

• The Garman-Klass Yang-Zhang extension HV has the same value as Garman-Klass when there are no gaps in the data, such as in cryptocurrencies.

**Yang-Zhang Estimator:**

• The Yang-Zhang estimator combines the Garman-Klass and Rogers-Satchell estimators, so it is based on the open, high, low, and close and can handle non-zero drift. It also expands the calculation so that the estimator can handle overnight jumps in the data.

• This is the most powerful of the range-based estimators. It has the minimum variance among them and is 14 times more efficient than the Close-to-Close estimator. When overnight and daily volatility are correlated, it may underestimate volatility slightly.

• According to their paper, 1.34 is the optimal value for alpha. The alpha constant in the calculation can be adjusted in the settings.
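A Python sketch of how the pieces combine (illustrative only; the helper names and 252-day annualization are my own, and the script's Pine implementation may differ in details). The overnight variance, open-to-close variance, and Rogers-Satchell variance are mixed with a weight k derived from alpha = 1.34:

```python
import math

def _sample_var(xs):
    """Sample variance (N - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def yang_zhang_hv(opens, highs, lows, closes, alpha=1.34, trading_days=252):
    """Annualized Yang-Zhang HV:
    sigma^2 = var(overnight) + k*var(open-to-close) + (1-k)*RS variance."""
    n = len(opens) - 1  # bars that have a previous close
    overnight = [math.log(opens[i] / closes[i - 1]) for i in range(1, len(opens))]
    open_close = [math.log(closes[i] / opens[i]) for i in range(1, len(opens))]
    rs = sum(math.log(highs[i] / closes[i]) * math.log(highs[i] / opens[i])
             + math.log(lows[i] / closes[i]) * math.log(lows[i] / opens[i])
             for i in range(1, len(opens))) / n
    k = (alpha - 1.0) / (alpha + (n + 1.0) / (n - 1.0))
    var = _sample_var(overnight) + k * _sample_var(open_close) + (1.0 - k) * rs
    return math.sqrt(var * trading_days)
```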

**Note: there are already some volatility estimators coded on TradingView. Some of them are right, some of them are wrong. But for the Yang-Zhang estimator, I have not seen a correct version on TV.**

**EWMA Estimator:** (Page 77)

• EWMA stands for Exponentially Weighted Moving Average. Close-to-Close and all of the other estimators above weight past observations equally.

• EWMA weights recent volatility more and older volatility less. The benefit is that volatility is usually autocorrelated, with close to exponential decay, as you can see by applying an Autocorrelation Function indicator to absolute or squared returns. This autocorrelation produces volatility clustering, which makes recent volatility more informative. Exponentially weighted volatility therefore suits the behavior of volatility well.

• RiskMetrics uses 0.94 for lambda, which corresponds to roughly a 30-period lookback. In this indicator, lambda is coded to adjust with the lookback. EWMA also makes it easy to forecast volatility one period ahead.

• However, EWMA volatility is not used as often, because there are better ways to weight volatility, such as ARCH and GARCH.
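The EWMA recursion can be sketched in a few lines of Python (illustrative; seeding the recursion with the first squared return is my own choice, and the annualization constant is assumed):

```python
import math

def ewma_hv(log_returns, lam=0.94, trading_days=252):
    """Annualized EWMA volatility (RiskMetrics-style recursion):
    var_t = lam * var_{t-1} + (1 - lam) * r_t^2."""
    var = log_returns[0] ** 2  # seed with the first squared return
    for r in log_returns[1:]:
        var = lam * var + (1.0 - lam) * r ** 2
    return math.sqrt(var * trading_days)
```

Because the recursion is just a weighted average of the previous variance and the newest squared return, tomorrow's one-period variance forecast is simply the current `var`, which is why EWMA forecasting is so easy.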

**Adjusted Mean Absolute Deviation Estimator:**

• This estimator does not use standard deviation to calculate volatility. It uses the average distance of the log return from its moving average as volatility.

• It is a simple and effective way to calculate volatility. The difference is that the estimator does not have to square the log returns to get the volatility. The paper suggests this estimator has more predictive power.

• The mean absolute deviation here is adjusted to remove the bias: the value is scaled so that it is comparable to the other historical volatility estimators.

• In his paper, Nassim Taleb notes that people sometimes confuse MAD with standard deviation in volatility measurements, and he suggests using mean absolute deviation instead of standard deviation when talking about volatility.
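A Python sketch of a mean-absolute-deviation volatility estimator (illustrative only). I use sqrt(pi/2) as the adjustment factor, which is the standard scaling that makes MAD match the standard deviation under a normal distribution; the script's exact adjustment may differ:

```python
import math

def mad_hv(log_returns, trading_days=252):
    """Annualized volatility from mean absolute deviation, scaled by
    sqrt(pi/2) (the normal-distribution consistency factor) so the
    value is comparable to standard-deviation-based estimators."""
    n = len(log_returns)
    mean = sum(log_returns) / n
    mad = sum(abs(r - mean) for r in log_returns) / n
    return mad * math.sqrt(math.pi / 2.0) * math.sqrt(trading_days)
```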

**Adjusted Median Absolute Deviation Estimator:**

• This is another estimator that does not use standard deviation to measure volatility.

• Using the median gives a more robust estimator when there are extreme values in the returns. It works better in fat-tailed distributions.

• The median absolute deviation is adjusted by maximum likelihood estimation so that its value is scaled to be comparable to other volatility estimators.
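A Python sketch of a median-absolute-deviation volatility estimator (illustrative only). I use the common 1.4826 consistency constant, which rescales MedAD to match the standard deviation under normality; the script's exact scaling may differ:

```python
import math
from statistics import median

def median_ad_hv(log_returns, trading_days=252):
    """Annualized volatility from the median absolute deviation,
    scaled by 1.4826 (approx. 1 / Phi^-1(0.75)) so the value is
    comparable to standard-deviation-based estimators."""
    med = median(log_returns)
    mad = median(abs(r - med) for r in log_returns)
    return 1.4826 * mad * math.sqrt(trading_days)
```

A single extreme return barely moves the median-based estimate, which is the robustness property described above.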

█

**FEATURES**

• You can select the volatility estimator model in the Volatility Model input.

• Historical volatility is annualized. You can type the number of trading days in a year into the Annual input, based on the asset you are trading.

• Alpha adjusts the Yang-Zhang volatility estimator value.

• Percentile Length adjusts the lookback for the percentile coloring.

• The gradient coloring is based on the percentile value (0 to 100). The higher the percentile, the warmer the color, indicating high volatility. The lower the percentile, the colder the color, indicating low volatility.

• When percentile coloring is off, the gradient color is not shown.

• You can also use Invert Color to give high volatility a cold color and low volatility a warm color.

Volatility has some mean-reversion properties. So when volatility is very low and the color is close to aqua, you would expect it to expand soon. When volatility is very high and close to red, you would expect it to contract and cool down.

• When the background signal is on, it fires when HVP is very low, warning that a volatility expansion may be coming soon.

• You can choose the plot style (lines, columns, or areas) in the Plot Style input.

• When Show Information Panel is on, a small panel is displayed on the right.

• The information panel displays the historical volatility model name, the 50th percentile of HV, and HV percentile.

The 50th percentile of HV is the median of HV. You can compare it with the current HV value to see how far above or below it is, which gives you an idea of how high or low HV is.

The HV Percentile value runs from 0 to 100. It tells us the percentage of periods over the entire lookback in which historical volatility traded below the current level. The higher the HVP, the higher HV is compared to its historical data. The gradient color is also based on this value.
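The percentile definition above can be sketched in Python (illustrative; the function name is my own and the script's tie-handling may differ):

```python
def hv_percentile(hv_history, current_hv):
    """Percentage of lookback periods in which HV traded below the
    current level (0 = lowest ever in the lookback, 100 = highest)."""
    below = sum(1 for v in hv_history if v < current_hv)
    return 100.0 * below / len(hv_history)
```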

█

**HOW TO USE**

If you haven't used the HVP indicator, we suggest starting with it first. This indicator is more like historical volatility with HVP coloring: it displays HVP values in the color and panel, but it is not range-bound like HVP, and it plots HV values.

From the gradient color, the user can quickly see how high or low the current volatility is compared to its historical values. They can also time the market better using volatility mean reversion: high volatility means volatility should contract soon (the move is about to end and the market will cool down), while low volatility means a volatility expansion is coming (the market is about to move).

█

**FINAL THOUGHTS**

**HV vs ATR**

The volatility estimator concepts above trace the history of historical volatility estimation in quantitative finance: a timeline of range-based estimators, from Parkinson volatility to Yang-Zhang volatility. We hope these descriptions help more people see that even though ATR is the most popular volatility indicator in technical analysis, it is not the best estimator. Almost no one in quantitative finance uses ATR to measure volatility (otherwise these papers would be about how to improve ATR measurements instead of HV). As you can see, there are far more advanced volatility estimators that also take the open, high, low, and close into account. HV values are based on log returns with some calculation adjustments, and HV can also be scaled in terms of price, just like ATR. And for profit-taking ranges, ATR is not based on probabilities, whereas historical volatility can be plugged into a probability distribution to calculate the probability of a range, as in the Expected Move indicator.
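For example, annualized HV converts directly into a one-standard-deviation expected move over any horizon (a sketch of the standard calculation, not the Expected Move indicator's exact code; the 252 trading-day constant is assumed). Under a normal-returns assumption, price stays inside plus or minus this range roughly 68% of the time:

```python
import math

def expected_move(price, annual_hv, horizon_days, trading_days=252):
    """One-standard-deviation expected move over the horizon,
    scaling annual volatility down by the square root of time."""
    return price * annual_hv * math.sqrt(horizon_days / trading_days)
```

For instance, a 100-dollar asset with 16% annualized HV has a one-year one-sigma expected move of about 16 dollars.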

**Other Estimators**

There are other, more advanced historical volatility estimators as well. High-frequency sampled HV uses intraday data to calculate volatility; we will publish a high-frequency volatility estimator in the future. There are also ARCH and GARCH models, which take volatility clustering into account. GARCH models require maximum likelihood estimation, which needs a solver to find the best weights for each component. This is currently not possible on TV due to the computational power required, so the other indicators that claim to be GARCH are all incorrect.

Special Thanks to midtownsk8rguy for applying/employing Pine etiquette.
