# BTC Fear & Greed Incremental Strategy — TradeMaster AI (Pure BTC Stack)
**IMPORTANT: Read the setup guide below or the strategy won't work.**
## Strategy Overview
This advanced Bitcoin accumulation strategy is designed for long-term hodlers who want to systematically take profits during greed cycles and accumulate during fear periods, while preserving their core BTC position. Unlike traditional strategies that start with cash, this approach begins with a specified BTC allocation, making it perfect for existing Bitcoin holders who want to optimize their stack management.
## Key Features
### 🎯 **Pure BTC Stack Mode**
- Start with any amount of BTC (configurable)
- Strategy manages your existing stack, not new purchases
- Perfect for hodlers who want to optimize without timing markets
### 📊 **Fear & Greed Integration**
- Uses market sentiment data to drive buy/sell decisions
- Configurable thresholds for greed (selling) and fear (buying) triggers
- Automatic validation to ensure proper 0-100 scale data source
### 🐂 **Bull Year Optimization**
- Smart quarterly selling during bull market years (2017, 2021, 2025)
- Q1: 1% sells, Q2: 2% sells, Q3/Q4: 5% sells (configurable)
- **NO SELLING** during non-bull years - pure accumulation mode
- Preserves BTC during early bull phases, maximizes profits at peaks
### 🐻 **Bear Market Intelligence**
- Multi-regime detection: Bull, Early Bear, Deep Bear, Early Bull
- Different buying strategies based on market conditions
- Enhanced buying during deep bear markets with configurable multipliers
- Visual regime backgrounds for easy market condition identification
### 🛡️ **Risk Management**
- Minimum BTC allocation floor (prevents selling entire stack)
- Configurable position sizing for all trades
- Multiple safety checks and validation
### 📈 **Advanced Visualization**
- Clean 0-100 scale with 2 decimal precision
- Three main indicators: BTC Allocation %, Fear & Greed Index, BTC Holdings
- Real-time portfolio tracking with cash position display
- Enhanced info table showing all key metrics
## How to Use
### **Step 1: Setup**
1. Add the strategy to your BTC/USD chart (daily timeframe recommended)
2. **CRITICAL**: In settings, change the "Fear & Greed Source" from "close" to a proper 0-100 Fear & Greed indicator
   > Recommended: the "Crypto Fear & Greed Index" indicator by TIA_Technology. When selecting the source with this indicator, look for "Crypto Fear and Greed Index:Index".
3. Set your "Starting BTC Quantity" to match your actual holdings
4. Configure your preferred "Start Date" (when you want the strategy to begin)
### **Step 2: Configure Bull Year Logic**
- Enable "Bull Year Logic" (default: enabled)
- Adjust quarterly sell percentages:
- Q1 (Jan-Mar): 1% (conservative early bull)
- Q2 (Apr-Jun): 2% (moderate mid bull)
- Q3/Q4 (Jul-Dec): 5% (aggressive peak targeting)
- Add future bull years to the list as needed
### **Step 3: Fine-tune Thresholds**
- **Greed Threshold**: 80 (sell when F&G > 80)
- **Fear Threshold**: 20 (buy when F&G < 20 in bull markets)
- **Deep Bear Fear Threshold**: 25 (enhanced buying in bear markets)
- Adjust based on your risk tolerance
### **Step 4: Risk Management**
- Set "Minimum BTC Allocation %" (default 20%) - prevents selling entire stack
- Configure sell/buy percentages based on your position size
- Enable bear market filters for enhanced timing
### **Step 5: Monitor Performance**
- **Orange Line**: Your BTC allocation percentage (target: fluctuate between 20-100%)
- **Blue Line**: Actual BTC holdings (should preserve core position)
- **Pink Line**: Fear & Greed Index (drives all decisions)
- **Table**: Real-time portfolio metrics including cash position
## Reading the Indicators
### **BTC Allocation Percentage (Orange Line)**
- **100%**: All portfolio in BTC, no cash available for buying
- **80%**: 80% BTC, 20% cash ready for fear buying
- **20%**: Minimum allocation, maximum cash position
### **Trading Signals**
- **Green Buy Signals**: Appear during fear periods with available cash
- **Red Sell Signals**: Appear during greed periods in bull years only
- **No Signals**: Either allocation limits reached or non-bull year
## Strategy Logic
### **Bull Years (2017, 2021, 2025)**
- Q1: Conservative 1% sells (preserve stack for later)
- Q2: Moderate 2% sells (gradual profit taking)
- Q3/Q4: Aggressive 5% sells (peak targeting)
- Fear buying active (accumulate on dips)
### **Non-Bull Years**
- **Zero selling** - pure accumulation mode
- Enhanced fear buying during bear markets
- Focus on rebuilding the stack for the next bull cycle (the decision logic is sketched below)
## Important Notes
- **This is not financial advice** - backtest thoroughly before use
- Designed for **long-term holders** (4+ year cycles)
- **Requires proper Fear & Greed data source** - validate in settings
- Best used on **daily timeframe** for major trend following
- **Cash calculations**: Use allocation % and BTC holdings to calculate available cash: `Cash = (Total Portfolio × (1 - Allocation%/100))`
## Risk Disclaimer
This strategy involves active trading and position management. Past performance does not guarantee future results. Always do your own research and never invest more than you can afford to lose. The strategy is designed for educational purposes and long-term Bitcoin accumulation thesis.
---
*Developed by Sol_Crypto for the Bitcoin community. Happy stacking! 🚀*
Investment Portfolio Management
Position Size Tool [Riley]
Automatically determines the number of shares for an entry. Quantity is based on a stop set at the low of day for long positions or the high of day for short positions, together with inputs such as account balance and risk per trade. Also includes a user-defined maximum for the percentage of daily dollar volume to consume with the entry.
Monthly DCA & Last 10 Years
This Pine Script indicator simulates a Monthly Dollar Cost Averaging (DCA) strategy to help long-term investors visualize historical performance. Instead of complex timing, the script automatically executes a hypothetical fixed-dollar purchase (e.g., $100) on the first trading day of every month. It visually marks entry points with green "B" labels and plots a dynamic yellow line representing your Global Break-Even Price, allowing you to instantly see if the current price is above or below your average cost basis. To provide deep insight, it generates a detailed performance table in the bottom-right corner that breaks down metrics year-by-year—including total capital invested, shares/coins accumulated, and Profit/Loss percentage—along with a grand total summary of the entire investment period.
Dual Account Position Size Calculator
A quick and easy-to-use position sizing calculator for use on the daily timeframe only. Inputs for two different account sizes and risk %. Calculates risk to the low of day (plus a small buffer, which can be changed based on ATR). Shows the number of shares to buy, stop loss, and portfolio %.
It will show on smaller timeframes, but be aware that the stop level will no longer be the low of day, so it will not calculate properly. Always use it on the daily.
90% Buying Power Position Size Helper — Script Description
This tool calculates a recommended share size based on your available buying power and the current market price. TradingView does not provide access to live broker balances, so this script allows you to manually enter your current buying power and instantly see how many shares you can buy using a chosen percentage of it (default: 90%).
How It Works
• Enter your Buying Power ($)
• Choose the Percent to Use (e.g., 90%).
• The script divides the selected portion of your buying power by the current price of the symbol.
• A small display in the chart corner shows the recommended number of shares to buy.
Formula
shares = floor((buying_power * percent_to_use / 100) / price)
What It’s For
• Day traders who size positions based on account buying power
• Traders who want a quick way to calculate share size per trade
• Anyone who sizes entries using a fixed percentage of their account
What It Doesn’t Do
Due to TradingView limitations, the script cannot:
• Read your live buying power or broker balance
• Auto-fill orders or submit trades
• Retrieve real account data from your broker
You simply update the buying power input whenever your account changes, and the script does the rest.
Why It’s Useful
• Keeps you consistent with position sizing
• Reduces manual math during fast trading
• Prevents oversizing or undersizing trades
• Helps maintain discipline and risk control
Fixed Dollar Risk Lines V2
*This is a small update to the original concept that adds greater customization of the script's visual elements. Since some folks have liked the original, I figured I'd put this out there.*
Fixed Dollar Risk Lines is a utility indicator that converts a user-defined dollar risk into price distance and plots risk lines above and below the current price for popular futures contracts. It helps you place stops or entries at a consistent dollar risk per trade, regardless of the market’s tick value or tick size.
What it does:
- You choose a dollar amount to risk (e.g., $100) and a futures contract (ES, NQ, GC, YM, RTY, PL, SI, CL, BTC).
- The script automatically:
  - Looks up the contract's tick value and tick size
  - Converts your dollar risk into a number of ticks
  - Converts ticks into price distance (see the sketch after this list)
- Plots:
  - A Long Risk line below the current price
  - A Short Risk line above the current price
- Optional labels show exact price levels, and an information table summarizes your settings.
Key features
- Consistent dollar risk across instruments
- Supports major futures contracts with built-in tick values and sizes
- Toggle Long and Short risk lines independently
- Customizable line width and colors (lines and labels)
- Right-axis price level display for quick reading
- Compact info table with contract, risk, and computed prices
Typical use
- Long setups: use the green line as a stop level below entry to match your chosen dollar risk.
- Short setups: use the red line as a stop level above entry to match your chosen dollar risk.
- Quickly compare how the same dollar risk translates to distance on different contracts.
Inputs
- Risk Amount (USD)
- Futures Contract (ES, NQ, GC, YM, RTY, PL, SI, CL, BTC)
- Show Long/Short lines (toggles)
- Line Width
- Colors for lines and labels
Notes
- Designed for futures symbols that match the listed contracts' tick specs. If your symbol has a different tick value/size than the defaults, results will differ.
- Intended for educational/informational use; not financial advice.
- This tool streamlines risk placement so you can focus on execution while keeping dollar risk consistent across markets.
⏰ Forex Market Clock Table (DST Auto)
Keep track of key forex session times with this clean, real-time table showing local time, market status (open/closed), and automatic Daylight Saving Time (DST) adjustments for Sydney, Tokyo, London, and New York. Displays countdowns to session open/close and highlights weekends. Fully customizable position, colors, and text size—perfect for multi-session traders.
3-Daumen-Regel
This indicator evaluates three key market conditions and summarizes them in a compact table using simple thumbs-up / thumbs-down signals. It’s designed specifically for daily timeframes and helps you quickly assess whether a market is showing technical strength or weakness.
The Three Checks
1. Price Above the 200-Day SMA: indicates the long-term trend direction. A thumbs-up means the price is trading above the 200-day moving average.
2. Positive Performance During the First 5 Trading Days of the Year (YTD Start): measures early-year strength. If not enough bars are available, a warning is shown.
3. Price Above the YTD Level: compares the current price to the first trading day's close of the year.
Color Coding for Instant Clarity
Green: Condition met
Red: Condition not met
This creates a compact “thumbs check” that gives you a quick read on the market’s technical health.
Note
The indicator is intended for daily charts. A message appears if a different timeframe is used.
Futures Risk Manager Pro (v6 stable)
This indicator calculates your risk management per position.
You must first enter your capital and your risk percentage. Then, when you specify your stop-loss size in ticks, the indicator will immediately tell you the number of contracts to use to stay within your risk percentage.
Expected Move BandsExpected move is the amount that an asset is predicted to increase or decrease from its current price, based on the current levels of volatility.
In this model, we assume asset price follows a log-normal distribution and the log return follows a normal distribution.
Note: the normal distribution is just an assumption; it is not the real distribution of returns.
Settings:
"Estimation Period Selection" is for selecting the period we want to construct the prediction interval.
For "Current Bar", the interval is calculated based on the data of the previous bar close. Therefore changes in the current price will have little effect on the range. What current bar means is that the estimated range is for when this bar close. E.g., If the Timeframe on 4 hours and 1 hour has passed, the interval is for how much time this bar has left, in this case, 3 hours.
For "Future Bars", the interval is calculated based on the current close. Therefore the range will be very much affected by the change in the current price. If the current price moves up, the range will also move up, vice versa. Future Bars is estimating the range for the period at least one bar ahead.
There are also other source selections based on high/low.
The time setting is used when "Future Bars" is chosen for the period. The value in time means how many bars ahead of the current bar the range is estimated for. When time = 1, the interval is constructed for 1 bar ahead. E.g., if the timeframe is 4 hours, then it's estimating the next 4 hours' range no matter how much time has passed in the current bar.
Note: It's probably better to use "probability cone" for visual presentation when time > 1
Volatility Models:
Sample SD: the traditional sample standard deviation, most commonly used; uses (n - 1) to adjust for sampling bias.
Parkinson: uses high/low to estimate volatility; assumes continuous trading (no gaps) and zero mean (no drift); about 5 times more efficient than close-to-close.
Garman-Klass: uses OHLC; assumes zero drift and no jumps; about 7 times more efficient.
Yang-Zhang Garman-Klass Extension: adds a jump calculation to Garman-Klass; has the same value as Garman-Klass on markets with no gaps; about 8 times more efficient.
Rogers-Satchell: uses OHLC; assumes a non-zero mean, handles drift, does not handle jumps; about 8 times more efficient.
EWMA: exponentially weighted volatility. Weights recent volatility more heavily, making it more reactive and better at capturing volatility autocorrelation and clustering.
Yang-Zhang: uses OHLC; combines Rogers-Satchell and Garman-Klass, handling both drift and jumps; about 14 times more efficient. Alpha is the constant that weights the Rogers-Satchell component to minimize variance.
Median absolute deviation: a more direct way of measuring volatility that does not use standard deviation. The MAD used here is adjusted to be an unbiased estimator.
Volatility Period is the sample size for variance estimation. A longer period makes the estimated range more stable and less reactive to recent prices; the distribution is more significant with a larger sample size. A shorter period makes the range more responsive to recent prices, which might be better during high-volatility clusters.
Standard deviations:
Standard Deviation One shows the estimated range within which the closing price will fall about 68% of the time.
Standard Deviation Two shows the estimated range within which the closing price will fall about 95% of the time.
Standard Deviation Three shows the estimated range within which the closing price will fall about 99.7% of the time.
Note: All these probabilities are based on the normal distribution assumption for returns. It's the estimated probability, not the actual probability.
Manually Entered Standard Deviation shows the range of any entered standard deviation. The probability of that range will be presented on the panel.
People usually assume the mean of returns to be zero. To be more accurate, we can account for the drift in price by calculating the geometric mean of returns. Drift happens in the long run, so short lookback periods are not recommended. Assuming a zero mean is recommended when time is not greater than 1.
When estimating the future range for time > 1, we typically assume constant volatility and returns that are independent and identically distributed. We scale the volatility in terms of time to get the future range. However, when there is autocorrelation in returns (when returns are not independent), this assumption fails to account for the effect, and volatility scaled with autocorrelation is required when returns are not iid. We use an AR(1) model to scale the first-order autocorrelation to adjust for the effect. Returns typically don't have significant autocorrelation, so this adjustment is not usually needed. A long length is recommended for the autocorrelation calculation.
Note: The significance of autocorrelation can be checked on an ACF indicator.
The multi-timeframe option lets you use a higher-period expected move on a lower timeframe. Only use a timeframe higher than the current one for this input; an error warning will appear if the input TF is lower. The input format is multiplier * time unit, e.g., 1D.
Units: M for months, W for weeks, D for days, integers with no unit for minutes (e.g., 240 = 240 minutes), S for seconds.
The smoothing option uses a filter to smooth out the range. The filter used here is John Ehlers' SuperSmoother, an advanced smoothing technique that removes aliasing noise. Its effect is similar to a simple moving average with half the lookback length, but smoother and with less lag.
Note: after smoothing, the range no longer represents the stated probabilities.
Panel positions can be adjusted in the settings.
X position adjusts the horizontal position of the panel. Higher X moves panel to the right and lower X moves panel to the left.
Y position adjusts the vertical position of the panel. Higher Y moves panel up and lower Y moves panel down.
Step line display changes the style of the bands from line to step line. Step line is recommended because it removes the directional bias introduced by the slope of the expected move when displaying the bands.
Warnings:
People should not blindly trust the probabilities and should be aware of the risks introduced by the normal distribution assumption. Real returns have skewness and high kurtosis. While the skewness is not very significant, the high kurtosis should be noted: real returns have much fatter tails than the normal distribution, which also makes the peak higher. This property makes tail ranges (such as moves beyond 2 SD) highly underestimate the actual range, while the body (such as 1 SD) slightly overestimates it. Ranges beyond 2 SD should not be trusted; beware of extreme events in the tails.
Different volatility models have different properties; if you are interested in the accuracy and fit of the expected move, try the Expected Move Occurrence indicator. (The results also demonstrate the previous point about the drawbacks of the normal distribution assumption.)
The prediction interval is only for the closing price, not wicks. It only estimates the probability of the price closing at this level, not trading in between. E.g., if the 1 SD range is 100 - 200, the price can go to 80 or 230 intrabar, but if the bar closes within 100 - 200 in the end, it's still considered a 68% one-standard-deviation move.
Omega Ratio
The Omega Ratio is a risk-return performance measure of an investment asset, portfolio, or strategy. It is defined as the probability-weighted ratio of gains versus losses relative to some threshold return target. The ratio is an alternative to the widely used Sharpe ratio and is based on information the Sharpe ratio discards.
█ OVERVIEW
As we have mentioned many times, stock market returns are usually not normally distributed. Therefore the models that assume a normal distribution of returns may provide us with misleading information. The Omega Ratio improves upon the common normality assumption among other risk-return ratios by taking into account the distribution as a whole.
█ CONCEPTS
Two distributions with the same mean and variance would, according to the most commonly used Sharpe Ratio, suggest that the underlying assets offer the same risk-return ratio. But as we have mentioned in our Moments indicator, variance and standard deviation are not a sufficient measure of risk in the stock market, since other shape features of a distribution, like skewness and excess kurtosis, come into play. The Omega Ratio tackles this problem by employing all four moments of the distribution, thereby taking into account the differences in the shape features of the distributions. Another important feature of the Omega Ratio is that it does not require any estimation but is calculated directly from the observed data. This gives it an advantage over standard statistical estimators, which require parameter estimation and therefore introduce sampling uncertainty into the calculation.
█ WAYS TO USE THIS INDICATOR
Omega calculates a probability-adjusted ratio of gains to losses relative to the Minimum Acceptable Return (MAR). This means that at a given MAR, using the simple rule of preferring more to less, an asset with a higher value of Omega is preferable to one with a lower value. The indicator displays the values of Omega at increasing levels of MAR, creating the so-called Omega Curve. Knowing this, one can compare the Omega Curves of different assets and decide which is preferable given the MAR of your strategy. The indicator plots two Omega Curves: one for the on-chart symbol and another for an off-chart symbol that you can use for comparison.
When comparing curves of different assets make sure their trading days are the same in order to ensure the same period for the Omega calculations. Value interpretation: Omega<1 will indicate that the risk outweighs the reward and therefore there are more excess negative returns than positive. Omega>1 will indicate that the reward outweighs the risk and that there are more excess positive returns than negative. Omega=1 will indicate that the minimum acceptable return equals the mean return of an asset. And that the probability of gain is equal to the probability of loss.
█ FEATURES
• "Low-Risk security" lets you select the security that you want to use as a benchmark for Omega calculations.
• "Omega Period" is the size of the sample that is used for the calculations.
• “Increments” is the number of Minimum Acceptable Return levels the calculation is carried out over.
• “Other Symbol” lets you select the source of the second curve.
• “Color Settings” lets you set the color for each curve.
Linear Moments
█ OVERVIEW
The Linear Moments indicator, also known as L-moments, is a statistical tool used to estimate the properties of a probability distribution. It is an alternative to conventional moments and is more robust to outliers and extreme values.
█ CONCEPTS
█ Four moments of a distribution
We have mentioned the concept of the Moments of a distribution in one of our previous posts. The method of Linear Moments allows us to calculate more robust measures that describe the shape features of a distribution and are analogous to those of conventional moments. L-moments therefore provide estimates of the location, scale, skewness, and kurtosis of a probability distribution.
The first L-moment, λ₁, is equivalent to the sample mean and represents the location of the distribution. The second L-moment, λ₂, is a measure of the dispersion of the distribution, similar to the sample standard deviation. The third and fourth L-moments, λ₃ and λ₄, respectively, are the measures of skewness and kurtosis of the distribution. Higher order L-moments can also be calculated to provide more detailed information about the shape of the distribution.
One advantage of using L-moments over conventional moments is that they are less affected by outliers and extreme values. This is because L-moments are based on order statistics, which are more resistant to the influence of outliers. By contrast, conventional moments are based on the deviations of each data point from the sample mean, and outliers can have a disproportionate effect on these deviations, leading to skewed or biased estimates of the distribution parameters.
█ Order Statistics
L-moments are statistical measures that are based on linear combinations of order statistics, which are the sorted values in a dataset. This approach makes L-moments more resistant to the influence of outliers and extreme values. However, the computation of L-moments requires sorting the order statistics, which can lead to a higher computational complexity.
To address this issue, we have implemented an Online Sorting Algorithm that efficiently obtains the sorted dataset of order statistics, reducing the time complexity of the indicator. The Online Sorting Algorithm is an efficient method for sorting large datasets that can be updated incrementally, making it well-suited for use in trading applications where data is often streamed in real-time. By using this algorithm to compute L-moments, we can obtain robust estimates of distribution parameters while minimizing the computational resources required.
█ Bias and efficiency of an estimator
One of the key advantages of L-moments over conventional moments is that they approach their asymptotic normal distribution more closely than conventional moments do. This means that as the sample size increases, the L-moments provide more accurate estimates of the distribution parameters.
Asymptotic normality is a statistical property that describes the behavior of an estimator as the sample size increases. As the sample size gets larger, the distribution of the estimator approaches a normal distribution, which is a bell-shaped curve. The mean and variance of the estimator are also related to the true mean and variance of the population, and these relationships become more accurate as the sample size increases.
The concept of asymptotic normality is important because it allows us to make inferences about the population based on the properties of the sample. If an estimator is asymptotically normal, we can use the properties of the normal distribution to calculate the probability of observing a particular value of the estimator, given the sample size and other relevant parameters.
In the case of L-moments, the fact that they approach their asymptotic normal distribution more closely than conventional moments means that they provide more accurate estimates of the distribution parameters as the sample size increases. This is especially useful in situations where the sample size is small, such as when working with financial data. By using L-moments to estimate the properties of a distribution, traders can make more informed decisions about their investments and manage their risk more effectively.
Below we can see the empirical distributions of the Variance and L-scale estimators. We ran 10,000 simulations with a sample size of 100. Here we can clearly see how the L-moment estimator approaches the normal distribution more closely and how such an estimator can be more representative of the underlying population.
█ WAYS TO USE THIS INDICATOR
The Linear Moments indicator can be used to estimate the L-moments of a dataset and provide insights into the underlying probability distribution. By analyzing the L-moments, traders can make inferences about the shape of the distribution, such as whether it is symmetric or skewed, and the degree of its spread and peakedness. This information can be useful in predicting future market movements and developing trading strategies.
One can also compare the L-moments of the dataset at hand with the L-moments of certain commonly used probability distributions. Finance is especially known for the use of certain fat-tailed distributions such as Laplace or Student-t. We have built in the theoretical values of L-kurtosis for certain common distributions, so one can compare the observed L-kurtosis with that of the selected theoretical distribution.
█ FEATURES
Source Settings
Source - Select the source you wish the indicator to calculate on
Source Selection - Select whether you wish to calculate on the source value or its log return
Moments Settings
Moments Selection - Select the L-moment you wish to be displayed
Lookback - Determine the sample size you wish the L-moments to be calculated with
Theoretical Distribution - This setting is only for investigating the kurtosis of the dataset. One can compare the observed kurtosis with the kurtosis of a selected theoretical distribution.
Historical Volatility Estimators
Historical volatility is a statistical measure of the dispersion of returns for a given security or market index over a given period. This indicator provides different historical volatility model estimators with percentile gradient coloring and a volatility stats panel.
█ OVERVIEW
There are multiple ways to estimate historical volatility other than the traditional close-to-close estimator. This indicator provides different range-based volatility estimators that take high, low, and open into account, as well as volatility estimators that use other statistical measurements instead of standard deviation. The gradient coloring and stats panel provide an overview of how high or low the current volatility is compared to its historical values.
█ CONCEPTS
We have mentioned the concepts of historical volatility in our previous indicators: Historical Volatility, Historical Volatility Rank, and Historical Volatility Percentile. You can check the definitions in those scripts. The basic calculation is just the sample standard deviation of log returns scaled by the square root of time. The main focus of this script is the difference between volatility models.
Close-to-Close HV Estimator: Close-to-Close is the traditional historical volatility calculation. It uses the sample standard deviation. Note: TradingView's built-in historical volatility value is a bit off because it uses the population standard deviation instead of the sample standard deviation; n - 1 should be used here to get rid of the sampling bias.
Pros:
• Close-to-Close HV estimators are the most commonly used estimators in finance. The calculation is straightforward and easy to understand. When people reference historical volatility, most of the time they are talking about the close to close estimator.
Cons:
• The Close-to-Close estimator only calculates volatility from closing prices. It does not take intraday movement (highs and lows) into account, nor the jump when open and close prices are not the same.
• Close-to-Close weights past volatility equally during the lookback period, while there are other ways to weight the historical data.
• Close-to-Close is calculated with the standard deviation, so it is vulnerable to returns that are not normally distributed and have fat tails. Mean and median absolute deviation make the historical volatility more stable with extreme values.
Parkinson HV Estimator:
• Parkinson was one of the first to come up with improvements to the historical volatility calculation.
• Parkinson suggests that using the high and low of each bar can represent volatility better, as it takes intraday volatility into account. So Parkinson HV is also known as Parkinson High-Low HV.
• It is about 5.2 times more efficient than the Close-to-Close estimator, but it does not take jumps and drift into account. Therefore, it underestimates volatility.
Note: by dividing the Parkinson volatility by the Close-to-Close volatility you can get a result similar to a Variance Ratio Test. It is called the Parkinson number and can be used to test whether the market follows a random walk. (It is mentioned in Nassim Taleb's Dynamic Hedging book, but it seems he made a mistake and wrote the ratio incorrectly.)
Garman-Klass Estimator:
• Garman-Klass expanded on Parkinson's estimator. Instead of using only the high and low, Garman-Klass's method uses open, close, high, and low to obtain a minimum-variance estimate.
• The estimator is about 7.4 times more efficient than the traditional estimator. But like Parkinson HV, it ignores jumps and drift and therefore underestimates volatility.
Rogers-Satchell Estimator:
• Rogers and Satchell found some drawbacks in Garman-Klass's estimator. Garman-Klass assumes price follows Brownian motion with zero drift.
• The Rogers Satchell Estimator calculates based on open, close, high, and low. And it can also handle drift in the financial series.
• Rogers-Satchell HV is more efficient than Garman-Klass HV when there’s drift in the data. However, it is a little bit less efficient when drift is zero. The estimator doesn’t handle jumps, therefore it still underestimates volatility.
Garman-Klass Yang-Zhang extension:
• Yang Zhang expanded Garman Klass HV so that it can handle jumps. However, unlike the Rogers-Satchell estimator, this estimator cannot handle drift. It is about 8 times more efficient than the traditional estimator.
• The Garman-Klass Yang-Zhang extension HV has the same value as Garman-Klass when there’s no gap in the data such as in cryptocurrencies.
Yang-Zhang Estimator:
• The Yang Zhang Estimator combines Garman-Klass and Rogers-Satchell Estimator so that it is based on Open, close, high, and low and it can also handle non-zero drift. It also expands the calculation so that the estimator can also handle overnight jumps in the data.
• This estimator is the most powerful estimator among the range-based estimators. It has the minimum variance error among them, and it is 14 times more efficient than the close-to-close estimator. When the overnight and daily volatility are correlated, it might underestimate volatility a little.
• 1.34 is the optimal value for alpha according to their paper. The alpha constant in the calculation can be adjusted in the settings. Note: there are already some volatility estimators coded on TradingView; some are right, some are wrong. But for the Yang-Zhang estimator I have not seen a correct version on TV.
EWMA Estimator:
• EWMA stands for Exponentially Weighted Moving Average. The Close-to-Close and all other estimators here are all equally weighted.
• EWMA weighs more recent volatility more and older volatility less. The benefit of this is that volatility is usually autocorrelated. The autocorrelation has close to exponential decay as you can see using an Autocorrelation Function indicator on absolute or squared returns. The autocorrelation causes volatility clustering which values the recent volatility more. Therefore, exponentially weighted volatility can suit the property of volatility well.
• RiskMetrics uses 0.94 for lambda, which corresponds to roughly a 30-period lookback. In this indicator, lambda is coded to adjust with the lookback. It's also easy for EWMA to forecast volatility one period ahead.
• However, EWMA volatility is not often used because there are better options to weight volatility such as ARCH and GARCH.
Adjusted Mean Absolute Deviation Estimator:
• This estimator does not use standard deviation to calculate volatility. It uses the distance of the log return from its moving average as volatility.
• It’s a simple way to calculate volatility and it’s effective. The difference is the estimator does not have to square the log returns to get the volatility. The paper suggests this estimator has more predictive power.
• The mean absolute deviation here is adjusted to get rid of the bias. It scales the value so that it can be comparable to the other historical volatility estimators.
• In Nassim Taleb’s paper, he mentions people sometimes confuse MAD with standard deviation for volatility measurements. And he suggests people use mean absolute deviation instead of standard deviation when we talk about volatility.
Adjusted Median Absolute Deviation Estimator:
• This is another estimator that does not use standard deviation to measure volatility.
• Using the median gives a more robust estimator when there are extreme values in the returns. It works better in fat-tailed distribution.
• The median absolute deviation is adjusted by maximum likelihood estimation so that its value is scaled to be comparable to other volatility estimators.
█ FEATURES
• You can select the volatility estimator models in the Volatility Model input
• Historical Volatility is annualized. You can type in the numbers of trading days in a year in the Annual input based on the asset you are trading.
• Alpha is used to adjust the Yang Zhang volatility estimator value.
• Percentile Length is used to Adjust Percentile coloring lookbacks.
• The gradient coloring will be based on the percentile value (0- 100). The higher the percentile value, the warmer the color will be, which indicates high volatility. The lower the percentile value, the colder the color will be, which indicates low volatility.
• When percentile coloring is off, it won’t show the gradient color.
• You can also use invert color to make high volatility a cold color and low volatility a hot color. Volatility has some mean-reversion properties; therefore, when volatility is very low and the color is close to aqua, you would expect it to expand soon, and when volatility is very high and close to red, you would expect it to contract and cool down.
• When the background signal is on, it gives a signal when HVP is very low. Warning there might be a volatility expansion soon.
• You can choose the plot style, such as lines, columns, areas in the plotstyle input.
• When the show information panel is on, a small panel will display on the right.
• The information panel displays the historical volatility model name, the 50th percentile of HV, and the HV percentile. The 50th percentile of HV is the median of HV; you can compare it with the current HV value to see how much it is above or below, to get an idea of how high or low HV is. The HV Percentile value runs from 0 to 100 and tells us the percentage of periods over the entire lookback during which historical volatility traded below the current level. The higher the HVP, the higher HV is compared to its history. The gradient color is also based on this value.
█ HOW TO USE
If you haven't used the HVP indicator, we suggest using it first. This indicator is more like historical volatility with HVP coloring: it displays HVP values in the color and panel, but it's not range-bound like HVP, and it displays HV values. The user can get a quick sense of how high or low the current volatility is compared to its historical value from the gradient color, and can also time the market better based on volatility mean reversion. High volatility means volatility should contract soon (move about to end, market will cool down); low volatility means volatility expansion soon (market about to move).
█ FINAL THOUGHTS
HV vs ATR: The volatility estimator concepts above trace the history of historical volatility estimation research in quantitative finance, a timeline of range-based estimators from Parkinson volatility to Yang-Zhang volatility. We hope these descriptions help more people see that even though ATR is the most popular volatility indicator in technical analysis, it is not the best estimator. Almost no one in quant finance uses ATR to measure volatility (otherwise these papers would be about improving ATR measurements instead of HV). As you can see, there are much more advanced volatility estimators that also take open, close, high, and low into account. HV values are based on log returns with some calculation adjustments, and HV can be scaled in terms of price just like ATR. And for profit-taking ranges, ATR is not based on probabilities, whereas historical volatility can be used in a probability distribution function to calculate the probability of ranges, as in the Expected Move indicator.
Other estimators: There are also more advanced historical volatility estimators, such as high-frequency sampled HV that uses intraday data to calculate volatility; we will publish a high-frequency volatility estimator in the future. There are also ARCH and GARCH models that take volatility clustering into account. GARCH models require maximum likelihood estimation, which needs a solver to find the best weights for each component; this is currently not possible on TV due to the large computational power required. All the other indicators claiming to be GARCH are wrong.
Single AHR DCA (HM) — AHR Pane (customized quantile)
Customized note
The log-regression window (LR length) controls how long a long-term fair-value path is estimated from historical data.
The AHR window (AHR window length) controls over which historical regime you measure whether the coin is "cheap / expensive".
When you choose a log-regression window of length L (years) and an AHR window of length A (years), you can intuitively read the indicator as:
“Within the last A years of this regime, relative to the long-term trend estimated over the same A years, the current price is cheap / neutral / expensive.”
Guidelines:
In general, set the AHR window equal to or slightly longer than the LR window:
If the AHR window is much longer than LR, you mix different baselines (different LR regimes) into one distribution.
If the AHR window is much shorter than LR, quantiles mostly reflect a very local slice of history.
For BTC / ETH and other BTC-like assets, you can use relatively long horizons (e.g. LR ≈ 3–5 years, AHR window ≈ 3–8 years).
For major altcoins (BNB / SOL / XRP and similar high-beta assets), it is recommended to use equal or slightly shorter horizons, e.g. LR ≈ 2–3 years, AHR window ≈ 2–3 years.
1. Price series & windows
Working timeframe: daily (1D).
Let the daily close of the current symbol on day t be P_t .
Main length parameters:
HM window: L_HM = maLen (default 200 days)
Log-regression window: L_LR = lrLen (default 1095 days ≈ 3 years)
AHR window (regime window): W = windowLen (default 1095 days ≈ 3 years)
2. Harmonic moving average (HM)
On a window of length L_HM, define the harmonic mean:
HM_t = [ (1/L_HM) * sum_{k=t-L_HM+1..t} ( 1 / max(P_k, eps) ) ]^(-1)
Here eps = 1e-10 is used to avoid division by zero.
Intuition: HM is more sensitive to low prices – an extremely low price inside the window will drag HM down significantly.
3. Log-regression baseline (LR)
On a window of length L_LR, perform a linear regression on log price:
Over the last L_LR bars, build the series
x_k = log( max(P_k, eps) ), for k = t-L_LR+1 ... t, and fit
x_k ≈ a + b * k.
The fitted value at the current index t is
log_P_hat_t = a + b * t.
Exponentiate to get the log-regression baseline:
LR_t = exp( log_P_hat_t ).
Interpretation: LR_t is the long-term trend / fair value path of the current regime over the past L_LR days.
4. HM-based AHR (valuation ratio)
At each time t, build an HM-based AHR (valuation multiple):
AHR_t = ( P_t / HM_t ) * ( P_t / LR_t )
Interpretation:
P_t / HM_t : deviation of price from the mid-term HM (e.g. 200-day harmonic mean).
P_t / LR_t : deviation of price from the long-term log-regression trend.
Multiplying them means:
if price is above both HM and LR, “expensiveness” is amplified;
if price is below both, “cheapness” is amplified.
Typical reading:
AHR_t < 1 : price is below both mid-term mean and long-term trend → statistically cheaper.
AHR_t > 1 : price is above both mid-term mean and long-term trend → statistically more expensive.
5. Empirical quantile thresholds (Opp / Risk)
On each new day, whenever AHR_t is valid, add it into a rolling array:
A_t_window = { AHR_{t-W+1}, ..., AHR_t } (at most W = windowLen elements)
On this empirical distribution, define two quantiles:
Opportunity quantile: q_opp (default 15%)
Risk quantile: q_risk (default 65%)
Using standard percentile computation (order statistics + linear interpolation), we get:
Opp threshold:
theta_opp = Percentile( A_t_window, q_opp )
Risk threshold:
theta_risk = Percentile( A_t_window, q_risk )
We also compute the percentile rank of the current AHR inside the same history:
q_now = PercentileRank( A_t_window, AHR_t ) ∈ [0, 100]
This yields three valuation zones:
Opportunity zone: AHR_t <= theta_opp
(corresponds to roughly the cheapest ~q_opp% of historical states in the last W days.)
Neutral zone: theta_opp < AHR_t < theta_risk
Risk zone: AHR_t >= theta_risk
(corresponds to roughly the most expensive ~(100 - q_risk)% of historical states in the last W days.)
All quantiles are purely empirical and symbol-specific: they are computed only from the current asset’s own history, without reusing BTC thresholds or assuming cross-asset similarity.
6. DCA simulation (lightweight, rolling window)
Given:
a daily budget B (input: budgetPerDay), and
a DCA simulation window H (input: dcaWindowLen, default 900 days ≈ 2.5 years),
The script applies the following rule on each new day t:
If thresholds are unavailable or AHR_t > theta_risk
→ classify as Risk zone → buy = 0
If AHR_t <= theta_opp
→ classify as Opportunity zone → buy = 2B (double size)
Otherwise (Neutral zone)
→ buy = B (normal DCA)
Daily invested cash:
C_t ∈ {0, B, 2B}
Daily bought quantity:
DeltaQ_t = C_t / P_t
The script keeps rolling sums over the last H days:
Cumulative position:
Q_H = sum_{k=t-H+1..t} DeltaQ_k
Cumulative invested cash:
C_H = sum_{k=t-H+1..t} C_k
Current portfolio value:
PortVal_t = Q_H * P_t
Cumulative P&L:
PnL_t = PortVal_t - C_H
Active days:
number of days in the last H with C_k > 0.
These results are only used to visualize how this AHR-quantile-driven DCA rule would have behaved over the recent regime, and do not constitute financial advice.
Mebane Faber GTAA 5
In 2007, Mebane Faber published research that challenged the conventional wisdom of buy-and-hold investing. His paper, titled "A Quantitative Approach to Tactical Asset Allocation" and published in the Journal of Wealth Management, demonstrated that a simple timing mechanism could reduce portfolio volatility and drawdowns while maintaining competitive returns (Faber, 2007). This indicator implements his Global Tactical Asset Allocation strategy, known as GTAA5, following the original methodology.
The core insight of Faber's research stems from a century of market data. By analyzing asset class performance from 1901 onwards, Faber found that a ten-month simple moving average served as an effective trend filter across major asset classes. When an asset trades above its ten-month moving average, it tends to continue its upward trajectory; when it falls below, significant drawdowns often follow (Faber, 2007, pp. 12-16). This observation aligns with momentum research by Jegadeesh and Titman (1993), who documented that intermediate-term momentum persists across equity markets.
The GTAA5 strategy allocates capital equally across five diversified asset classes: domestic equities (SPY), international developed markets (EFA), aggregate bonds (AGG), commodities (DBC), and real estate investment trusts (VNQ). Each asset receives a twenty percent allocation when trading above its ten-month moving average. When an asset falls below this threshold, its allocation moves to short-term treasury bills (SHY), creating a dynamic cash position that scales with market risk (Cambria Investment Management, 2013).
The strategy's historical performance during market crises illustrates its function. During the 2008 financial crisis, traditional sixty-forty portfolios experienced drawdowns exceeding forty percent. The GTAA5 strategy limited losses to approximately twelve percent by reducing equity exposure as prices declined below their moving averages (Faber, 2013). This asymmetric return profile represents the strategy's primary characteristic.
This implementation uses monthly closing prices retrieved via request.security() to calculate the ten-month simple moving average. This distinction matters, as approximations using daily data (such as a 200-day moving average) can generate different signals during volatile periods. Monthly data ensures the indicator produces signals consistent with published academic research.
The indicator provides position monitoring, automatic rebalancing detection on either the first or last trading day of each month, and share calculations based on user-defined capital. A dashboard displays current trend status for each asset class, target versus actual weightings, and trade instructions for rebalancing. Performance metrics including annualized volatility and Sharpe ratio provide ongoing risk assessment.
Several limitations warrant acknowledgment. First, the strategy rebalances monthly, meaning it cannot respond to intra-month market crashes. Second, transaction costs and taxes from monthly rebalancing may reduce net returns for taxable accounts. Third, the ten-month lookback period, while historically robust, offers no guarantee of future effectiveness. As Ilmanen (2011) notes in "Expected Returns", all timing strategies face the risk of regime change, where historical relationships break down.
This indicator serves educational purposes and portfolio monitoring. It does not constitute financial advice.
References:
Cambria Investment Management (2013). Global Tactical Asset Allocation: An Introduction to the Approach. Research Report, Los Angeles.
Faber, M.T. (2007). A Quantitative Approach to Tactical Asset Allocation. Journal of Wealth Management, Spring 2007, pp. 9-79.
Faber, M.T. (2013). Global Asset Allocation: A Survey of the World's Top Asset Allocation Strategies. Cambria Investment Management, Los Angeles.
Ilmanen, A. (2011). Expected Returns: An Investor's Guide to Harvesting Market Rewards. John Wiley and Sons, Chichester.
Jegadeesh, N. and Titman, S. (1993). Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency. Journal of Finance, 48(1), pp. 65-91.
ZY Target Terminator
The indicator generates trading signals. The profitability displayed on the signal at the time it is generated is the maximum profitability of the trade opened with the preceding signal. Therefore, avoid trading pairs and trends where this ratio is insufficient.
Position Sizing Calculator (Real-Time) - Futures Edition
█ SUMMARY
The following indicator is a Position Sizing Calculator based on Average True Range (ATR), a measure originally developed by market technician J. Welles Wilder Jr., and is intended for real-time trading.
This script utilizes the user's account size, acceptable risk percentage, and a stop-loss distance based on ATR to dynamically calculate the appropriate position size for each trade in real time.
█ BACKGROUND
Developed for use on the Micro E-mini Nasdaq-100 futures (MNQ), this script provides traders with continuously updated dynamic position sizes. It enables traders to instantly determine the exact number of contracts to use when entering a trade while staying within their acceptable risk tolerance.
This real-time position sizing tool helps traders make well-informed decisions when planning trade entries and calculating maximum stop-loss levels, ultimately enhancing risk management.
█ USER INPUTS
Trading Account Size: Total dollar value of the user's trading account.
Acceptable Risk (%): Maximum percentage of the trading account that the user is willing to risk per trade.
ATR Multiplier for Stop-Loss: Multiplier used to determine the distance of the stop-loss from the current price, based on the ATR value.
ATR Length: The length of the lookback period used to calculate the ATR value.
Show Target Risk Row: Toggle to hide/show the Target Risk Row
SL Levels Display: Option to see Both, Long Only, Short Only, or None of the Stop Loss Level Values.
Contract Point Value ($): Point value per contract. Tooltip highlights common values.
Tick Size: Minimum Price Movement (Default set to 0.25)
Minimum Contracts: Override the minimum contracts per trade to a user-selected value (note: this may exceed the user's target risk).
2-Year Real Rate
The 2-year real rate is the inflation-adjusted yield on a 2-year U.S. Treasury—essentially the market’s expectation for short-term “true” interest rates after subtracting expected inflation (often approximated as nominal 2Y yield – breakeven inflation).
It matters because it reflects the actual cost of capital and is one of the cleanest gauges of the Fed’s effective stance: rising real rates mean tightening financial conditions, falling real rates mean loosening. In trading, the 2Y real rate is a powerful macro risk-on/risk-off indicator—equities, long-duration tech, crypto, and EM FX generally weaken when real rates rise, while USD and front-end rate-sensitive trades tend to strengthen. Watching inflections in the 2Y real rate helps you time shifts in liquidity, gauge how aggressively the market is pricing Fed moves, and position for cross-asset trends driven by changes in real funding conditions.
ATR Risk Manager v5.2 [Auto-Extrapolate]
If you've ever had problems knowing how many contracts to use on a particular timeframe to keep your risk within acceptable levels, this indicator should help. You just define your accepted risk based on ATR along with a percentage of your drawdown, and the indicator will tell you how many contracts to use. If the risk is too high, it will also tell you not to trade. This is only for the futures NQ, MNQ, ES, MES, GC, MGC, CL, MCL, MYM, and M2K.