Tick Chart
Hello All,
A Tick Chart is created from ticks, and each candlestick in a Tick Chart shows the price variation of X consecutive ticks (X: Number of Ticks Per Candle). For example, if you set Number of Ticks Per Candle = 100, then each candlestick is created from 100 ticks. Tick Charts are therefore not time-based charts (just like Renko or Point & Figure charts). A tick is a price change within the minimum time interval defined by the platform. Tick Charts have several advantages, and you can find many articles about them on the net.
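To make the construction concrete, here is a small Python sketch (illustrative only, this is not the Pine code of the script) showing how X consecutive ticks are grouped into one OHLC candle; the tick prices used at the end are made up:

def ticks_to_candles(tick_prices, ticks_per_candle):
    # group every "ticks_per_candle" consecutive ticks into one OHLC candle;
    # ticks left over at the end belong to the still-forming candle
    candles = []
    for start in range(0, len(tick_prices) - ticks_per_candle + 1, ticks_per_candle):
        chunk = tick_prices[start:start + ticks_per_candle]
        candles.append({"open": chunk[0], "high": max(chunk), "low": min(chunk), "close": chunk[-1]})
    return candles

# example: 6 ticks with 3 ticks per candle -> 2 candles
print(ticks_to_candles([100.0, 100.5, 99.8, 99.9, 100.2, 100.1], 3))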
Tick Chart only works on realtime bars.
You can set "Number of Ticks Per Candle" and "Number of Candles" using the options. You can also change the color of the body, wicks and volume bars.
The script shows the current, minimum, maximum and average volumes. It also shows OHLC values on the last candle.
Tick Chart using different number of ticks
Volume info:
Enjoy!
Matrix Library (Linear Algebra, incl Multiple Linear Regression)
What's this all about?
Ever since 1D arrays were added to Pine Script, many wonderful new opportunities have opened up. There have been a few implementations of matrices and matrix math (most notably by TradingView user tbiktag in his recent Moving Regression script). However, so far, no comprehensive library for matrix math and linear algebra has been developed. This script aims to change that.
I'm no math expert, but I like learning new things, so I took it upon myself to relearn linear algebra these past few months and create a matrix math library for Pine Script. The goal of the library is to provide a comprehensive collection of functions for performing as many of the standard operations on matrices as possible, and to implement functions for solving systems of linear equations. The library implements matrices using arrays, and many standard functions to manipulate these matrices have been added as well.
The main purpose of the library is to give users the ability to solve systems of linear equations (useful, for example, for Multiple Linear Regression with K independent variables), but it can also be used to simulate 2D arrays for any purpose.
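As a rough illustration of that idea (a Python stand-in, not the Pine code), a 2D matrix can be simulated with a flat 1D array plus stored row and column counts, addressed with simple index arithmetic:

class FlatMatrix:
    # minimal sketch: a rows x cols matrix stored row-major in a flat list
    def __init__(self, rows, cols, fill=0.0):
        self.rows, self.cols = rows, cols
        self.data = [fill] * (rows * cols)

    def get(self, row, col):
        return self.data[row * self.cols + col]

    def set(self, row, col, value):
        self.data[row * self.cols + col] = value

m = FlatMatrix(2, 3)
m.set(1, 2, 42.0)
print(m.get(1, 2))  # 42.0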
So how do I use this thing?
Personally, what I do with my private Pine Script libraries is I keep them stored as text-files in a Libraries folder, and I copy and paste them into my code when I need them. This library is quite large, so I have made sure to use brackets in comments to easily hide any part of the code. This helps with big libraries like this one.
The parts of this script that you need to copy are labeled "MathLib", "ArrayLib", and "MatrixLib". The matrix library is dependent on the functions from these other two libraries, but they are stripped down to only include the functions used by the MatrixLib library.
When you have the code in your script (pasted somewhere below the "study()" call), you can create a matrix by calling one of the constructor functions. All functions in this library start with "matrix_", and all constructors start with either "create" or "copy". I suggest you read through the code though. The functions have very descriptive names, and a short description of what each function does is included in a header comment directly above it. The functions generally come in the following order:
Constructors: These are used to create matrices (empty with no rows or columns, a set shape filled with 0s, from a time series or an array, and so on).
Getters and setters: These are used to get data from a matrix (like the value of an element or a full row or column).
Matrix manipulations: These functions manipulate the matrix in some way (for example, functions to append columns or rows to a matrix).
Matrix operations: These are the matrix operations. They include everything from basic math operations on two matrices to transposing a matrix.
Decompositions and solvers: Next up are functions to solve systems of linear equations. These include LU and QR decomposition and solvers, and functions for calculating the pseudo-inverse or inverse of a matrix.
Multiple Linear Regression: Lastly, we find an implementation of a multiple linear regression, including all the standard statistics one can expect to find in most statistical software packages.
Are there any working examples of how to use the library?
Yes, at the very end of the script, there is an example that plots the predictions from a multiple linear regression with two independent (explanatory) X variables, regressing the chart data (the Y variable) on these X variables. You can look at this code to see a real-world example of how to use the code in this library.
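For readers who want to see the math outside of Pine, here is a hedged Python/numpy sketch of the same idea, regressing a made-up Y series on two explanatory X series with ordinary least squares:

import numpy as np

y  = np.array([10.0, 11.0, 12.5, 12.0, 13.5])    # dependent variable (e.g. the chart data)
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])         # first explanatory variable
x2 = np.array([2.0, 1.5, 2.5, 3.0, 2.8])         # second explanatory variable

X = np.column_stack([np.ones_like(x1), x1, x2])  # design matrix with an intercept column
beta, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)

print("coefficients:", beta)                     # intercept, slope on x1, slope on x2
print("fitted values:", X @ beta)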
Are there any limitations?
There are no hard limitations, but the matrices use arrays, so the number of elements can never exceed the number of elements supported by Pine Script (minus 2, since two elements are used internally by the library to store the row and column count). Some of the operations do use a lot of resources though, and as a result, some things cannot be done without timing out. This can vary from time to time as well, as it primarily depends on the resources available from the Pine Script servers. For instance, the multiple linear regression cannot be used with a lookback window above 10 or 12 most of the time if the statistics are reported. If no statistics are reported (and therefore not calculated), the lookback window can usually be extended to around 60-80 bars before the servers time out the execution.
Hopefully the dev team at TradingView sees this script and finds ways to implement this functionality directly into Pine Script, as that would speed up many of the operations and make things like MLR (multiple linear regression) possible on a bigger lookback window.
Some parting words
This library has taken a few months to write, and I have taken all the steps I can think of to test it for bugs. Some may have slipped through anyway, so please let me know if you find any, and I'll try my best to fix them when I have time to do so. This library is intended to help the community. Therefore, I am releasing it as open source, in the hope that people may improve on it or use it in their own work. If you do make something cool with this, or if you find ways to improve the code, please let me know in the comments.
Monte Carlo Range Forecast [DW]
This is an experimental study designed to forecast the range of price movement from a specified starting point using a Monte Carlo simulation.
Monte Carlo experiments are a broad class of computational algorithms that utilize random sampling to derive real world numerical results.
These types of algorithms have a number of applications in numerous fields of study including physics, engineering, behavioral sciences, climate forecasting, computer graphics, gaming AI, mathematics, and finance.
Although the applications vary, there is a typical process behind the majority of Monte Carlo methods:
-> First, a distribution of possible inputs is defined.
-> Next, values are generated randomly from the distribution.
-> The values are then fed through some form of deterministic algorithm.
-> And lastly, the results are aggregated over some number of iterations.
In this study, the Monte Carlo process generates a distribution of aggregate pseudorandom linear price returns summed over a user-defined period, then plots standard deviations of the outcomes from the mean outcome to generate forecast regions.
The pseudorandom process used in this script relies on a modified Wichmann-Hill pseudorandom number generator (PRNG) algorithm.
Wichmann-Hill is a hybrid generator that uses three linear congruential generators (LCGs) with different prime moduli.
Each LCG within the generator produces an independent, uniformly distributed number between 0 and 1.
The three generated values are then summed and modulo 1 is taken to deliver the final uniformly distributed output.
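For reference, here is a compact Python sketch of the classic Wichmann-Hill combination with its standard constants; the script uses its own modified variant, so treat this only as an illustration of the three-LCG structure:

def wichmann_hill(seed1, seed2, seed3):
    s1, s2, s3 = seed1, seed2, seed3
    while True:
        s1 = (171 * s1) % 30269          # LCG 1, prime modulus 30269
        s2 = (172 * s2) % 30307          # LCG 2, prime modulus 30307
        s3 = (170 * s3) % 30323          # LCG 3, prime modulus 30323
        # sum the three uniform values and keep the fractional part
        yield (s1 / 30269 + s2 / 30307 + s3 / 30323) % 1.0

gen = wichmann_hill(1, 2, 3)
print([round(next(gen), 6) for _ in range(5)])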
Because of its long cycle length, Wichmann-Hill is a fantastic generator to use on TV since it's extremely unlikely that you'll ever see a cycle repeat.
The resulting pseudorandom output from this generator has a minimum repetition cycle length of 6,953,607,871,644.
Fun fact: Wichmann-Hill is a widely used PRNG in various software applications. For example, Excel 2003 and later uses this algorithm in its RAND function, and it was the default generator in Python up to v2.2.
The generation algorithm in this script takes the Wichmann-Hill algorithm, and uses a multi-stage transformation process to generate the results.
First, a parent seed is selected. This can either be a fixed value, or a dynamic value.
The dynamic parent value is produced by taking advantage of Pine's timenow variable behavior. It produces a variable parent seed by using a frozen ratio of timenow/time.
Because timenow always reflects the current real time when frozen and the time variable reflects the chart's beginning time when frozen, the ratio of these values produces a new number every time the cache updates.
After a parent seed is selected, its value is then fed through a uniformly distributed seed array generator, which generates multiple arrays of pseudorandom "children" seeds.
The seeds produced in this step are then fed through the main generators to produce arrays of pseudorandom simulated outcomes, and a pseudorandom series to compare with the real series.
The main generators within this script are designed to (at least somewhat) model the stochastic nature of financial time series data.
The first step in this process is to transform the uniform outputs of the Wichmann-Hill into outputs that are normally distributed.
In this script, the transformation is done using an estimate of the normal distribution quantile function.
Quantile functions, otherwise known as percent-point or inverse cumulative distribution functions, specify the value of a random variable such that the probability of the variable being within the value's boundary equals the input probability.
The quantile equation for a normal probability distribution is μ + σ(√2)erf^-1(2(p - 0.5)) where μ is the mean of the distribution, σ is the standard deviation, erf^-1 is the inverse Gauss error function, and p is the probability.
Because erf^-1() does not have a simple, closed form interpretation, it must be approximated.
To keep things lightweight in this approximation, I used a truncated Maclaurin Series expansion for this function with precomputed coefficients and rolled out operations to avoid nested looping.
This method provides a decent approximation of the error function without completely breaking floating point limits or sucking up runtime memory.
Note that there are plenty of more robust techniques to approximate this function, but their memory needs vary. I chose this method specifically because of its runtime favorability.
To generate a pseudorandom approximately normally distributed variable, the uniformly distributed variable from the Wichmann-Hill algorithm is used as the input probability for the quantile estimator.
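The following Python sketch illustrates the idea; it is not the script's exact implementation (the script uses precomputed coefficients and unrolled operations, while this sketch builds the Maclaurin coefficients with the standard recursion):

import math

def erfinv_series(z, terms=10):
    # c_0 = 1, c_k = sum_{m=0}^{k-1} c_m * c_{k-1-m} / ((m+1)(2m+1))
    c = [1.0]
    for k in range(1, terms):
        c.append(sum(c[m] * c[k - 1 - m] / ((m + 1) * (2 * m + 1)) for m in range(k)))
    x = math.sqrt(math.pi) * z / 2.0
    return sum(c[k] / (2 * k + 1) * x ** (2 * k + 1) for k in range(terms))

def normal_quantile(p, mu=0.0, sigma=1.0):
    # quantile = mu + sigma * sqrt(2) * erfinv(2p - 1)
    return mu + sigma * math.sqrt(2.0) * erfinv_series(2.0 * p - 1.0)

# feed a uniform variable (e.g. a Wichmann-Hill output) through the quantile
print(normal_quantile(0.8413))   # ~1.0, the standard normal quantile for p = 0.8413

As with any truncated series, the approximation degrades for probabilities very close to 0 or 1.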
Now from here, we get a pretty decent output that could be used itself in the simulation process. Many Monte Carlo simulations and random price generators utilize a normal variable.
However, if you compare the outputs of this normal variable with the actual returns of the real time series, you'll find that the variability in shocks (random changes) doesn't quite behave like it does in real data.
This is because most real financial time series data is more complex. Its distribution may be approximately normal at times, but the variability of its distribution changes over time due to various underlying factors.
In light of this, I believe that returns behave more like a convoluted product distribution rather than just a raw normal.
So the next step to get our procedurally generated returns to more closely emulate the behavior of real returns is to introduce more complexity into our model.
Through experimentation, I've found that a return series more closely emulating real returns can be generated in a three step process:
-> First, generate multiple independent, normally distributed variables simultaneously.
-> Next, apply pseudorandom weighting to each variable ranging from -1 to 1, or some limits within those bounds. This modulates each series to provide more variability in the shocks by producing product distributions.
-> Lastly, add the results together to generate the final pseudorandom output with a convoluted distribution. This adds variable amounts of constructive and destructive interference to produce a more "natural" looking output.
In this script, I use three independent normally distributed variables multiplied by uniform product distributed variables.
The first variable is generated by multiplying a normal variable by one uniformly distributed variable. This produces a bit more tailedness (kurtosis) than a normal distribution, but nothing too extreme.
The second variable is generated by multiplying a normal variable by two uniformly distributed variables. This produces moderately greater tails in the distribution.
The third variable is generated by multiplying a normal variable by three uniformly distributed variables. This produces a distribution with heavier tails.
For additional control of the output distributions, the uniform product distributions are given optional limits.
These limits control the boundaries for the absolute value of the uniform product variables, which affects the tails. In other words, they limit the weighting applied to the normally distributed variables in this transformation.
All three sets are then multiplied by user defined amplitude factors to adjust presence, then added together to produce our final pseudorandom return series with a convoluted product distribution.
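A hedged Python sketch of the three-step idea follows; the weight limit and amplitude factors are placeholders, and Python's built-in generator stands in for the script's Wichmann-Hill-based one:

import random

def convoluted_return(rng, amp1=1.0, amp2=1.0, amp3=1.0, weight_limit=1.0):
    # step 1: three independent, normally distributed variables
    n1, n2, n3 = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
    # step 2: weight them by 1, 2 and 3 uniform factors (product distributions)
    u = lambda: rng.uniform(-weight_limit, weight_limit)
    w1 = u()
    w2 = u() * u()
    w3 = u() * u() * u()
    # step 3: scale by amplitude factors and sum -> heavier-tailed composite return
    return amp1 * n1 * w1 + amp2 * n2 * w2 + amp3 * n3 * w3

rng = random.Random(42)                       # fixed "parent seed" for repeatability
print([round(convoluted_return(rng), 5) for _ in range(5)])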
Once we have the final, more "natural" looking pseudorandom series, the values are recursively summed over the forecast period to generate a simulated result.
This process of generation, weighting, addition, and summation is repeated over the user defined number of simulations with different seeds generated from the parent to produce our array of initial simulated outcomes.
After the initial simulation array is generated, the max, min, mean and standard deviation of this array are calculated, and the values are stored in holding arrays on each iteration to be called upon later.
Reference difference series and price values are also stored in holding arrays to be used in our comparison plots.
In this script, I use a linear model with simple returns rather than compounding log returns to generate the output.
The reason for this is that in generating outputs this way, we're able to run our simulations recursively from the beginning of the chart, then apply scaling and anchoring post-process.
This allows a greater conservation of runtime memory than the alternative, making it more suitable for doing longer forecasts with heavier amounts of simulations in TV's runtime environment.
From our starting time, the previous bar's price, volatility, and optional drift (expected return) are factored into our holding arrays to generate the final forecast parameters.
After these parameters are computed, the range forecast is produced.
The basis value for the ranges is the mean outcome of the simulations that were run.
Then, quarter standard deviations of the simulated outcomes are added to and subtracted from the basis up to 3σ to generate the forecast ranges.
All of these values are plotted and colorized based on their theoretical probability density. The most likely areas are the warmest colors, and least likely areas are the coolest colors.
An information panel is also displayed at the starting time which shows the starting time and price, forecast type, parent seed value, simulations run, forecast bars, total drift, mean, standard deviation, max outcome, min outcome, and bars remaining.
The interesting thing about simulated outcomes is that although the probability distribution of each simulation is not normal, the distribution of different outcomes converges to a normal one with enough steps.
In light of this, the probability density of outcomes is highest near the initial value + total drift, and decreases the further away from this point you go.
This makes logical sense since the central path is the easiest one to travel.
Given the ever changing state of markets, I find this tool to be best suited for shorter term forecasts.
However, if the movements of price are expected to remain relatively stable, longer term forecasts may be equally as valid.
There are many possible ways for users to apply this tool to their analysis setups. For example, the forecast ranges may be used as a guide to help users set risk targets.
Or, the generated levels could be used in conjunction with other indicators for meaningful confluence signals.
More advanced users could even extrapolate the functions used within this script for various purposes, such as generating pseudorandom data to test systems on, perform integration and approximations, etc.
These are just a few examples of potential uses of this script. How you choose to use it to benefit your trading, analysis, and coding is entirely up to you.
If nothing else, I think this is a pretty neat script simply for the novelty of it.
----------
How To Use:
When you first add the script to your chart, you will be prompted to confirm the starting date and time, number of bars to forecast, number of simulations to run, and whether to include drift assumption.
You will also be prompted to confirm the forecast type. There are two types to choose from:
-> End Result - This uses the values from the end of the simulation throughout the forecast interval.
-> Developing - This uses the values that develop from bar to bar, providing a real-time outlook.
You can always update these settings after confirmation as well.
Once these inputs are confirmed, the script will boot up and automatically generate the forecast in a separate pane.
Note that if there is no bar of data at the time you wish to start the forecast, the script will automatically detect and use the next available bar after the specified start time.
From here, you can now control the rest of the settings.
The "Seeding Settings" section controls the initial seed value used to generate the children that produce the simulations.
In this section, you can control whether the seed is a fixed value, or a dynamic one.
Since selecting the dynamic parent option will change the seed value every time you change the settings or refresh your chart, there is a "Regenerate" input built into the script.
This input is a dummy input that isn't connected to any of the calculations. The purpose of this input is to force an update of the dynamic parent without affecting the generator or forecast settings.
Note that because we're running a limited number of simulations, different parent seeds will typically yield slightly different forecast ranges.
When using a small number of simulations, you will likely see a higher amount of variance between differently seeded results because smaller numbers of sampled simulations yield a heavier bias.
The more simulations you run, the smaller this variance will become since the outcomes become more convergent toward the same distribution, so the differences between differently seeded forecasts will become more marginal.
When using a dynamic parent, pay attention to the dispersion of ranges.
When you find a set of ranges that is dispersed how you like with your configuration, set your fixed parent value to the parent seed that shows in the info panel.
This will allow you to replicate that dispersion behavior again in the future.
An important thing to note when setting alerts on the plotted levels, or using them as components for signals in other scripts, is to decide on a fixed value for your parent seed to avoid minor repainting due to seed changes.
When the parent seed is fixed, no repainting occurs.
The "Amplitude Settings" section controls the amplitude coefficients for the three differently tailed generators.
These amplitude factors will change the difference series output for each simulation by controlling how aggressively each series moves.
When "Adjust Amplitude Coefficients" is disabled, all three coefficients are set to 1.
Note that if you expect volatility to significantly diverge from its historical values over the forecast interval, try experimenting with these factors to match your anticipation.
The "Weighting Settings" section controls the weighting boundaries for the three generators.
These weighting limits affect how tailed the distributions in each generator are, which in turn affects the final series outputs.
The maximum absolute value range for the weights is 0 to 1. When "Limit Generator Weights" is disabled, this is the range that is automatically used.
The last set of inputs is the "Display Settings", where you can control the visual outputs.
From here, you can select to display either "Forecast" or "Difference Comparison" via the "Output Display Type" dropdown tab.
"Forecast" is the type displayed by default. This plots the end result or developing forecast ranges.
There is an option with this display type to show the developing extremes of the simulations. This option is enabled by default.
There's also an option with this display type to show one of the simulated price series from the set alongside actual prices.
This allows you to visually compare simulated prices alongside the real prices.
"Difference Comparison" allows you to visually compare a synthetic difference series from the set alongside the actual difference series.
This display method is primarily useful for visually tuning the amplitude and weighting settings of the generators.
There are also info panel settings on the bottom, which allow you to control size, colors, and date format for the panel.
It's all pretty simple to use once you get the hang of it. So play around with the settings and see what kinds of forecasts you can generate!
----------
ADDITIONAL NOTES & DISCLAIMERS
Although I've done a number of things within this script to keep runtime demands as low as possible, the fact remains that this script is fairly computationally heavy.
Because of this, you may get random timeouts when using this script.
This could be due to either random drops in available runtime on the server, using too many simulations, or running the simulations over too many bars.
If it's just a random drop in runtime on the server, hide and unhide the script, re-add it to the chart, or simply refresh the page.
If the timeout persists after trying this, then you'll need to adjust your settings to a less demanding configuration.
Please note that no specific claims are being made in regards to this script's predictive accuracy.
It must be understood that this model is based on randomized price generation with assumed constant drift and dispersion from historical data before the starting point.
Models like these do not consider the real-world factors that may influence price movement (economic changes, seasonality, macro-trends, instrument hype, etc.), nor the changes in sample distribution that may occur.
In light of this, it's perfectly possible for price data to exceed even the most extreme simulated outcomes.
The future is uncertain, and becomes increasingly uncertain with each passing point in time.
Predictive models of any type can vary significantly in performance at any point in time, and nobody can guarantee any specific type of future performance.
When using forecasts in making decisions, DO NOT treat them as any form of guarantee that values will fall within the predicted range.
When basing your trading decisions on any trading methodology or utility, predictive or not, you do so at your own risk.
No guarantee is being issued regarding the accuracy of this forecast model.
Forecasting is very far from an exact science, and the results from any forecast are designed to be interpreted as potential outcomes rather than anything concrete.
With that being said, when applied prudently and treated as "general case scenarios", forecast models like these may very well be potentially beneficial tools to have in the arsenal.
Monte Carlo Simulation - Random Walk
Hello All,
Monte Carlo Simulation is a model used to predict the probability of different outcomes when random variables are involved. It is used by professionals in widely disparate fields such as finance, project management, etc. You can find many articles about Monte Carlo Simulation on the net.
In this script I tried to implement a Monte Carlo Simulation with a "Random Walk". It calculates results over and over, each time using a different set of random values created from historical data (500 times by default), and shows the min-max and some random paths. The number of "random walks" is calculated from the number of bars to predict, so if you change "Number of Bars to Predict", the number of random walks may change. The total number of lines must be less than 500.
"Number of Simulations " is 500 by default, more simulation better results. but if you increase it a lot then you may get "loop takes too long error"
"Number of Bars to Predict" can be between 10-100
"Number of Bars to use as Data Source" is the number of historical bars to use in simulations
Thanks to Ricardo Santos (@RicardoSantos) for letting me use his Random Number Generator Function.
P.S. I am not a mathematician, and I implemented the method as far as I understood it, so if you see any issue please let me know.
Some examples:
Number of Bars to Predict = 100:
Number of Bars to Predict = 10:
If you enable the "Keep Past Min-Max Levels" option, the min-max levels will stay on the chart
Enjoy!
Multi Time Frame Candles with Volume Info / 3D
Hello Traders,
This is my second Multi Time Frame Candles script, but this new one brings some new features, such as volume info, the remaining time until the higher time frame candle closes, and it was developed using new Pine features such as arrays of lines. I also tried to make it 3D for better visualization ;) It also shows new highs/lows and breakouts.
I tried to make many things optional, so you can change almost everything using options.
What you can change using options:
- Higher time frame
- Number of Candles
- Candle Colors Up/Down
- Wick Color
- Volume colors Up/Down
- Text color of Remaining Time
- Shadow Color
- Background color
- Start bar of the candles (so you can see many higher times frame candles in same window)
- 3D effect; it's enabled by default, but you can disable the 3D view
Lets see some examples:
Remaining time:
Breakouts:
You can combine different higher time frames:
If you don't want the 3D view, you can still combine different higher time frames:
You can change background color:
Enjoy!
Donchian Zig-Zag [LuxAlgo]
The following indicator returns a line bouncing off the extremities of a Donchian channel, with the aim of replicating a "zig-zag" indicator. The indicator can be either lagging or leading depending on the settings the user chooses.
Various extended lines are displayed in order to see if the peaks and troughs made by the Donchian zig-zag can act as potential support/resistance lines.
User Settings
Length : Period of the Donchian channel indicator; higher values will return fewer changes of direction from the zig-zag line.
Bounce Speed : Determines the speed of the bounces made by the zig-zag line, with higher values making the zig-zag line converge faster toward the extremities of the Donchian channel.
Gradient : Determines whether to use a gradient to color the area between the Donchian channel extremities, "On" by default.
Transparency : Transparency of the area between the Donchian channel extremities.
Usage
It is clear that this is not a very common indicator, so its uses can be limited and rather hypothetical. Nonetheless, when a bounce speed value of 1 is used, the zig-zag line will tend to lag behind the price, and as such can provide crosses with the price which can act as potential entries.
The advantage of this approach against most indicators relying on crosses with the price is that the linear nature of the indicator allows avoiding retracements, thus potentially holding a position for the entirety of the trend.
Although this indicator would not necessarily be the best suited to this kind of usage.
When using a bounce speed greater than 1, we can see the predictive aspects of the indicator:
We can link the peaks/troughs made by the zig-zag with the preceding ones to get potential support and resistance lines; while such a method is not necessarily accurate, it still provides an additional way to interpret the indicator.
Conclusions
We presented an indicator aiming to replicate the behaviour of a zig-zag indicator. While somewhat experimental, it has the benefit of being innovative and might inspire users in one way or another.
Previous Period Levels - X Alerts
====== ABOUT THIS INDICATOR
- A simple but highly customisable display of previous higher time-frame
OHLC values, drawn using line.new and label.new. Nothing fancy but...
- Customised resolution input which excludes time frames lower than 1 hour
while extending the common higher reference inputs to include:
• 6, and 12 Hour
• 5 Day
• 3, and 6 Month
• 1 Year
- Alert conditions using an adjustable SMA to help reduce false positive
spam.
- Full visual customisation options for (almost) every aspect, so it can be
tuned to suit most individual preferences.
- In line with the myriad visual customisation options is the ability to
change the display format of the Labels, to show more or less information,
or disable them altogether.
====== REASON FOR STUDY
- To practice advanced user input option handling to allow for a full visual
customisation experience without stepping outside of, or interfering with,
the intended function of the indicator.
- Provide reasonably clear code commenting and structure in order to be
useful as a potential learning aid for others, and future reference for
myself.
====== DISCLAIMER
Any trade decisions you make are entirely your own responsibility.
I've made an effort to squash all the bugs, but you never know!
Random Synthetic Asset Generation
This script generates pseudo-random asset data. Due to the nature of the random generator, it is impossible to use this indicator as input for other indicators, because the instance of the script that the indicator is applied to will automatically be different from the instance that is plotted on the chart. Therefore, the idea is to use this script in other scripts (to make it possible to backtest on random data, for example).
The script has four main input parameters.
Random Number Generator Method: It supports two methods for generating the pseudo-random numbers (one by Ricardo Santos and one by Wichmann-Hill).
Seed: You can specify the seed to use. Each unique seed will generate a unique set of pseudo-random data.
Intrabar Volatility: This basically sets how volatile the generated wicks will be (0 = no wicks).
Price Multiplier: This is just a multiplier for the generated price data, so that you can scale up or down the generated price data.
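Below is a loose Python sketch of the concept with assumed mechanics (the script's exact math differs): a seeded generator drives a close-to-close walk, wicks come from the intrabar volatility factor, and everything is scaled by the price multiplier:

import random

def synthetic_bars(seed, n_bars, intrabar_volatility=0.5, price_multiplier=1.0):
    rng = random.Random(seed)            # the same seed always gives the same series
    bars, close = [], 100.0
    for _ in range(n_bars):
        open_ = close
        close = open_ * (1.0 + rng.uniform(-0.01, 0.01))
        wick = abs(close - open_) * intrabar_volatility * rng.random()
        high = max(open_, close) + wick
        low = min(open_, close) - wick
        volume = rng.uniform(100, 1000)  # random volume, as in the indicator
        bars.append((open_ * price_multiplier, high * price_multiplier,
                     low * price_multiplier, close * price_multiplier, volume))
    return bars

print(synthetic_bars(seed=7, n_bars=3))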
You can also change the colors of the bars.
In addition to this, the indicator also generates random volume. In order to make it possible to show both volume and price, you need to have two identical instances of the indicator. One on the chart, and one in its own panel. Then, go into the Style tab in the indicator settings on the instance in the panel. Untick Up-Candles and Down-Candles boxes, and tick the Volume box.
In a similar manner, you can also plot the true range data and the candle change data as well, by ticking one of those boxes instead.
Oscillator Evaluator (Analysis tool)
The Oscillator Evaluator is a tool that will help you analyse and compare the oscillator of your choice to two other oscillators.
By selecting the strategy with which you will analyze the oscillators, you will be able to see the behaviour of the oscillators in different aspects.
The first is a moving average increase or decrease strategy, which gives you a good idea of the correlation of the oscillator with the price.
The second is a common 2-MA crossover strategy, which gives you an idea of the validity of that oscillator as a strategy or as a trend filter.
The third strategy is a zero-cross signal that goes long on a crossover of 0 and short on a crossunder of 0. This helps you see how good the oscillator is at evaluating support and resistance areas and gives you an idea of its balance.
The fourth strategy buys and sells at the extremes of the oscillator and lets you know how good the oscillator is at spotting good places to buy and sell.
The fifth strategy evaluates how good the oscillator is as a mean-reversion filter, or how good it is at spotting small price changes.
The sixth strategy is similar to the last but focuses on how good the oscillator is at spotting good places to take profits in trending strategies.
The 6 strategies in the script produce signals from the oscillator and from the oscillator only.
In conclusion this tool can be used to measure your oscillator and see if it really is as good as you think in comparison to others.
This script is not intended to be used as a full strategy but as a tool.
MTF seconds values - JD
Add MTF capabilities to "seconds" timeframes!!
This script is not intended to be used as an indicator, but gives you a workaround for the missing seconds MTF capability.
The "resolution" input in Pine Script doesn't allow seconds values to be used for MTF requests.
So I wrote a little helper code with arrays to get MTF on seconds timeframes.
If you want to add MTF in minutes, hours,... you can always add ' resolution = "" ' to the study line of the script
With these arrays of MTF values you can perform various calculations.
As an example I plotted the sma from the MTF values on the chart, but you can add anything you want that you can calculate from the array values.
Have fun with it !
Gr, JD.
Built-in Kelly ratio for dynamic position sizing
This is the default Keltner Channels strategy with a few additions.
The main purpose is to show how we include the Kelly ratio into our scripts for dynamic position sizing based on the performance of the strategy on a per trade basis.
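As a rough illustration (a hypothetical Python helper, not this strategy's Pine code), the classic Kelly fraction f* = W - (1 - W) / R can be estimated from closed trades and used to size the next position:

def kelly_fraction(trade_results):
    wins = [t for t in trade_results if t > 0]
    losses = [-t for t in trade_results if t < 0]
    if not wins or not losses:
        return 0.0                                         # not enough information yet
    win_rate = len(wins) / len(trade_results)              # W
    payoff_ratio = (sum(wins) / len(wins)) / (sum(losses) / len(losses))   # R
    return max(win_rate - (1 - win_rate) / payoff_ratio, 0.0)

trades = [0.8, -0.5, 1.2, -0.4, 0.9, -0.6, 1.1]            # per-trade P&L (made up)
fraction = kelly_fraction(trades)
print(round(fraction, 3), round(10000 * fraction, 2))      # fraction and position size on 10k equity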
We've also included the usual take-profit and stop-loss parameters in the event you want to play a little :)
We hope this helps you advance your personal system.
Happy Trading!
Moving Regression Prediction Bands
Introducing the Moving Regression Prediction Bands indicator.
Here I aimed to combine the principles of traditional band indicators (such as Bollinger Bands), regression channel and outlier detection methods. Its upper and lower bands define an interval in which the current price was expected to fall with a prescribed probability, as predicted by the previous-step result of the local polynomial regression (for the original Moving Regression script, see link below).
Algorithm
1. At every time step, the script performs local polynomial regression of the sample data within the lookback window specified by the Length input parameter.
2. The fitted polynomial is used to construct the Moving Regression time series as well as to extrapolate data, that is, to predict the next data point ( MRPrediction ).
3. The accuracy of local interpolation is estimated by means of the root-mean-square error ( RMSE ), that is, the deviation between the fitted polynomial and the observed values.
4. The MRPrediction and RMSE values calculated for the previous bar are then used to build the upper and lower bands , which I define as follows:
Upper Band = MRPrediction_prev + Multiplier *( RMSE_prev )
Lower Band = MRPrediction_prev - Multiplier *( RMSE_prev )
Here the Multiplier is a user-defined parameter that should be interpreted as a quantile in the standard normal distribution (the default value of 2.0 roughly corresponds to the 95% prediction interval).
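Here is a conceptual Python/numpy sketch of steps 1-4; it is a simplification that uses a plain polynomial fit in place of the original local polynomial regression:

import numpy as np

def mr_prediction_bands(series, length=11, degree=2, multiplier=2.0):
    window = np.asarray(series[-length:], dtype=float)
    x = np.arange(length)
    coeffs = np.polyfit(x, window, degree)              # 1. polynomial regression on the window
    fitted = np.polyval(coeffs, x)
    prediction = np.polyval(coeffs, length)             # 2. extrapolate the next data point
    rmse = np.sqrt(np.mean((window - fitted) ** 2))     # 3. fit accuracy
    upper = prediction + multiplier * rmse              # 4. prediction bands
    lower = prediction - multiplier * rmse
    return prediction, upper, lower

closes = [100, 100.6, 101.1, 101.0, 101.8, 102.2, 102.1, 102.9, 103.4, 103.2, 103.9]
print(mr_prediction_bands(closes))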
To visualize the central line , the script offers the following options:
Previous-Period MR Prediction: MRPrediction_prev time series from the above equation.
MR: Conventional Moving Regression time series.
Ribbon: “Previous-Period MR Prediction” and “MR” curves plotted together and colored according to their relative value (green if MR > Previous MR Prediction; red otherwise).
Usage
My original idea was to use the band breakouts as potential trading signals. For example, the price crossing above the upper band is a bullish signal , being a potential sign that price is gaining momentum and is out of a previously predicted trend. The exit signal could be the crossing under the lower band or under the central line.
However, be aware that this is an experimental indicator, so you might find some better strategies.
Feel free to play around!
Anti-Volume Stop Loss
FINALLY!
As everyone who has tried to create, understand, or even find Buff Pelz Dormeier's Anti-Volume Stop Loss indicator knows, it's not easy. Personally, I have partially, or perhaps completely, figured out the tips Buff gave in his book Investing with Volume Analysis.
AVSL is now ready.
Please do some tests and give me feedback on how it works in your trade strategy.
Anti-Volume stop loss - AVSL
from the book Investing with Volume Analysis, CHAPTER 20 • RISKY BUSINESS, pages 253-256:
"It is important in any risk-management process to predetermine an objective decision point level (a stop loss) to exit, thereby protecting principal in case you are wrong. My objective sell point is determined by using a quantitative formula I refer to as Anti-Volume Stop Loss (AVSL). Having a quantitative, yet intelligent sell point eliminates the emotional struggles involved in deciding when to exit a position.
AVSL is a technical methodology that incorporates the concepts of support, volatility, and, most importantly, the inverse relationship between price and volume. The AVSL combines the concepts of the VPCI (Volume Price Confirmation Indicator) and John Bollinger’s Bollinger Bands to create a trailing stop loss.
AVSL = Lower Bollinger Band – (Price, Length, Standard Deviation)
Where:
Length = Round (3 + VPCI)
Price = Average (Lows × 1 / VPC × 1 / VPR, Length)
Standard Deviation = 2 × (VPCI × VM)
One of the most difficult decisions is determining what one’s maximum loss threshold should be. Some say 2 percent; others say 20 percent. I believe the more volatile a security, the looser the stop should be. A nonvolatile security, such as Coca-Cola, might move 7 percent a year, while a volatile security such as Google might move 7 percent in a day. If you use a 7 percent stop for Coca-Cola, it might take a year to be stopped out while the security underperforms.
However, if you use 7 percent for Google, you can be stopped out intraday, not allowing the investment an opportunity to develop. By using the lower Bollinger Band of the securities lows, the AVSL considers each individual security’s own volatility. Thus, a volatile security would be granted more room of the stocks low while a stable security would have a tighter leash (see Figure 20.7).
The next important step is employing the price-volume relationship into the calculation. Volume gauges the power behind price moves. In accounting for this, when a security is in an uptrend and has positive volume characteristics, it is given more room. However, if the security exhibits contracting volume characteristics, then the stop is tightened. In this way, if a negative news event affects an unhealthy security, the stop is tighter, thus preserving more of your profits.
However, if the negative news event affects a security whose price-volume relationship is healthy, the stop has been loosened, avoiding the temporary whipsaw of an otherwise strong position. In these ways, AVSL lets the market decide when to exit your position.
AVSL tailors each security for support, volatility, and the pricevolume relationship based on an investor’s time frame as calculated from the chart data. For example, my portfolio positions are continually re-evaluated with this AVSL methodology, which yields the possibility of raising the decision point threshold periodically based on the time frame of my investment objective. With my short-term Giddy-up portfolios, I use daily chart data and seek to raise my maximum loss stop on a daily basis.
My intermediate ETF and stock positions are calculated off of weekly data and then re-evaluated weekly. With my longer term stock portfolios, the decision point is calculated off data revised monthly. This analytical approach that uses measurable facts over emotion or gut instincts allows me to maintain my objectivity. Thus objectivity, not emotion, informs my investment decisions."
Here is how my AVSL looks:
Price component = low × 1/VPC × 1/VPR for VPC > 1 or VPC < -1 | low × 1 × 1/VPR for 1 > VPC > 0 | low × -1 × 1/VPR for 0 > VPC > -1
AVSL Price = sma((low × 1/VPC × 1/VPR), length) / 100
length = round(3 + VPCI) for VPCI > 0 | round(3 + |VPCI|) for VPCI < 0 | 3 for VPCI = 0
Standard Deviation = mult × (VPCI × VM)
AVSL = sma(actual low price - AVSL Price + Standard Deviation, 26)
It's hard to say whether it is exactly the same as in Buff Pelz Dormeier's book, but I encourage you to modify the script for better results.
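For what it's worth, here is a hedged Python sketch of one literal reading of the book formula quoted above; VPC, VPR, VPCI and VM are assumed to be computed elsewhere and passed in, and the example values are made up:

def avsl_book(lows, vpc, vpr, vpci, vm, multiplier=2.0):
    length = max(int(round(3 + vpci)), 1)                   # Length = Round(3 + VPCI)
    adjusted = [low / vpc / vpr for low in lows[-length:]]  # Lows x 1/VPC x 1/VPR
    price = sum(adjusted) / len(adjusted)                   # Price = Average(..., Length)
    std_dev = multiplier * (vpci * vm)                      # Standard Deviation = 2 x (VPCI x VM)
    return price - std_dev                                  # lower-band style stop level

print(avsl_book(lows=[98.2, 98.5, 98.1, 98.9, 99.0], vpc=1.02, vpr=1.01, vpci=1.2, vm=1.04))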
Ultimate Strategy Template
Hello Traders
As most of you know, I'm a member of the PineCoders community and I sometimes take freelance pine coding jobs for TradingView users.
Off the top of my head, users often want to:
- convert an indicator into a strategy, so as to get the backtesting statistics from TradingView
- add alerts to their indicator/strategy
- develop a generic strategy template which can be plugged into (almost) any indicator
My gift for the community today is my Ultimate Strategy Template
Step 1: Create your connector
Adapt your indicator with only 2 lines of code and then connect it to this strategy template.
For doing so:
1) Find in your indicator where are the conditions printing the long/buy and short/sell signals.
2) Create an additional plot as below
I'm giving an example with a Two moving averages cross.
Please replicate the same methodology for your indicator, whether it's a MACD, ZigZag, Pivots, higher-highs, lower-lows or any other indicator with clear buy and sell conditions
//@version=4
study(title='Moving Average Cross', shorttitle='Moving Average Cross', overlay=true, precision=6, max_labels_count=500, max_lines_count=500)
type_ma1 = input(title="MA1 type", defval="SMA", options=["RMA", "SMA", "EMA"])
length_ma1 = input(10, title = " MA1 length", type=input.integer)
type_ma2 = input(title="MA2 type", defval="SMA", options=["RMA", "SMA", "EMA"])
length_ma2 = input(100, title = " MA2 length", type=input.integer)
// color inputs (added here so the plots below compile; the snippet referenced color_ma1/color_ma2 without defining them)
color_ma1 = input(color.green, title="MA1 color", type=input.color)
color_ma2 = input(color.red, title="MA2 color", type=input.color)
// MA
f_ma(smoothing, src, length) =>
iff(smoothing == "RMA", rma(src, length),
iff(smoothing == "SMA", sma(src, length),
iff(smoothing == "EMA", ema(src, length), src)))
MA1 = f_ma(type_ma1, close, length_ma1)
MA2 = f_ma(type_ma2, close, length_ma2)
// buy and sell conditions
buy = crossover(MA1, MA2)
sell = crossunder(MA1, MA2)
plot(MA1, color=color_ma1, title="Plot MA1", linewidth=3)
plot(MA2, color=color_ma2, title="Plot MA2", linewidth=3)
plotshape(buy, title='LONG SIGNAL', style=shape.circle, location=location.belowbar, color=color_ma1, size=size.normal)
plotshape(sell, title='SHORT SIGNAL', style=shape.circle, location=location.abovebar, color=color_ma2, size=size.normal)
/////////////////////////// SIGNAL FOR STRATEGY /////////////////////////
Signal = buy ? 1 : sell ? -1 : 0
plot(Signal, title="🔌Connector🔌", transp=100)
Basically, I identified my buy, sell conditions in the code and added this at the bottom of my indicator code
Signal = buy ? 1 : sell ? -1 : 0
plot(Signal, title="🔌Connector🔌", transp=100)
Important Notes
🔥 The Strategy Template expects the value to be exactly 1 for the bullish signal , and -1 for the bearish signal
Now you can connect your indicator to the Strategy Template using the method below or that one
Step 2: Connect the connector
1) Add your updated indicator to a TradingView chart
2) Add the Strategy Template as well to the SAME chart
3) Open the Strategy Template settings and in the Data Source field select your 🔌Connector🔌 (which comes from your indicator)
From then, you should start seeing the signals and plenty of other stuff on your chart
🔥 Note that whenever you'll update your indicator values, the strategy statistics and visual on your chart will update in real-time
Settings
- Color Candles : Color the candles based on the trade state (bullish, bearish, neutral)
- Close positions at market at the end of each session : useful for everything but cryptocurrencies
- Session time ranges : Take the signals from a starting time to an ending time
- Close Direction : Choose to close only the longs, shorts, or both
- Date Filter : Take the signals from a starting date to an ending date
- Set the maximum losing streak length with an input
- Set the maximum winning streak length with an input
- Set the maximum consecutive days with a loss
- Set the maximum drawdown (in % of strategy equity)
- Set the maximum intraday loss in percentage
- Limit the number of trades per day
- Limit the number of trades per week
- Stop-loss: None or Percentage or Trailing Stop Percentage or ATR
- Take-Profit: None or Percentage or ATR
- Risk-Reward based on ATR multiple for the Stop-Loss and Take-Profit
This script is open-source so feel free to use it, and optimize it as you want
Alerts
Maybe you didn't know it but alerts are available on strategy scripts.
I added them in this template - that's cool because:
- if you don't know how to code, now you can connect your indicator and get alerts
- you have now a cool template showing you how to create alerts for strategy scripts
Source: www.tradingview.com
I hope you'll like it, use it, optimize it and most importantly....make some optimizations to your indicators thanks to this Strategy template
Special Thanks
Special thanks to @JosKodify as I borrowed a few risk management snippets from his website: kodify.net
Additional features
I thought of plenty of extra filters that I'll add later on this week on this strategy template
Best
Dave
Intraday Volume Swings
Volume swings are defined as increasing volume and higher highs/lower lows over a minimum of three bars.
This script tracks volume swings over an intraday chart and stores the final lowest low swing / highest high swing over the course of the day. The final high swing and low swing are then plotted over the following day as possible retracement / support & resistance levels.
Intraday levels for the current day can also be displayed, which may or may not be the final swings for the day, but are also possible areas of interest.
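Here is a rough Python sketch of that definition under one assumed interpretation (not the Pine code): a high swing requires at least three bars of rising volume with higher highs, a low swing requires at least three bars of rising volume with lower lows, and the most extreme qualifying levels are kept for the day:

def final_swings(bars):   # bars: list of dicts with "high", "low", "volume"
    high_swing, low_swing = None, None
    for i in range(2, len(bars)):
        a, b, c = bars[i - 2], bars[i - 1], bars[i]
        rising_volume = a["volume"] < b["volume"] < c["volume"]
        if rising_volume and a["high"] < b["high"] < c["high"]:
            high_swing = c["high"] if high_swing is None else max(high_swing, c["high"])
        if rising_volume and a["low"] > b["low"] > c["low"]:
            low_swing = c["low"] if low_swing is None else min(low_swing, c["low"])
    return high_swing, low_swing   # plotted over the following day as levels

bars = [{"high": 10.2, "low": 10.0, "volume": 100},
        {"high": 10.4, "low": 10.1, "volume": 150},
        {"high": 10.6, "low": 10.3, "volume": 220},
        {"high": 10.5, "low": 9.9,  "volume": 180}]
print(final_swings(bars))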
See code for additional notes.
Max Drawdown Calculating Functions (Optimized)
Maximum Drawdown and Maximum Relative Drawdown% calculating functions.
I needed a way to calculate the maxDD% of a series of data from an array (the different values of my account balance). I didn't find any built-in Pine Script way to do it, so here it is.
There are 2 algorithms to calculate maxDD and relative maxDD%: a non-optimized one that needs n*(n - 1)/2 comparisons for a collection of n data points, and an optimized one that only needs n-1 comparisons.
In the example we calculate the maxDDs of the last 10 close values.
There are 2 functions: "maximum_relative_drawdown" and "maximum_drawdown" (and "optimized_maximum_relative_drawdown" and "optimized_maximum_drawdown"), with names speaking for themselves.
Input : an array of floats of arbitrary size (the values we want the DD of)
Output : an array of 4 values
I added the iteration number just for fun.
Basically my script is the implementation of these 2 algos I found on the net :
// optimized version: track the running peak, n-1 comparisons
var maxDrawdown = 0;
var peak = 0;
var n = prices.length;
for (var i = 1; i < n; i++){
    dif = prices[peak] - prices[i];
    peak = dif < 0 ? i : peak;
    maxDrawdown = maxDrawdown > dif ? maxDrawdown : dif;
}

// non-optimized version: compare every pair, n*(n-1)/2 comparisons
var maxDrawdown = 0;
var n = prices.length;
for (var i = 0; i < n; i++){
    for (var j = i + 1; j < n; j++){
        dif = prices[i] - prices[j];
        maxDrawdown = maxDrawdown > dif ? maxDrawdown : dif;
    }
}
Feel free to use it.
@version=4
Delta-RSI Oscillator Strategy
This strategy illustrates the use of the recently published Delta-RSI Oscillator as a stand-alone indicator.
Delta-RSI represents a smoothed time derivative of the RSI, plotted as a histogram and serving as a momentum indicator.
There are three optional conditions to generate trading signals (set separately for Buy, Sell and Exit signals):
Zero-crossing : bullish when D-RSI crosses zero from negative to positive values (bearish otherwise)
Signal Line Crossing : bullish when D-RSI crosses from below to above the signal line (bearish otherwise)
Direction Change : bullish when D-RSI was negative and starts ascending (bearish otherwise)
Since the D-RSI oscillator is based on polynomial fitting of the RSI curve, there is also an option to filter trade signals by means of the root-mean-square error of the fit (normalized by the sample average).
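For intuition, here is a conceptual Python/numpy sketch (a stand-in, not the original implementation): fit a polynomial to the recent RSI values, take the derivative of the fit at the last bar as the D-RSI reading, and report the RMSE normalized by the sample average for filtering:

import numpy as np

def delta_rsi(rsi_values, length=14, degree=2):
    window = np.asarray(rsi_values[-length:], dtype=float)
    x = np.arange(length)
    coeffs = np.polyfit(x, window, degree)              # polynomial fit of the RSI curve
    fitted = np.polyval(coeffs, x)
    derivative = np.polyder(coeffs)                     # d(RSI)/dt of the fitted curve
    rmse = np.sqrt(np.mean((window - fitted) ** 2))     # fit quality, used to filter signals
    return np.polyval(derivative, length - 1), rmse / np.mean(window)

rsi = [45, 47, 46, 49, 52, 51, 55, 57, 56, 60, 62, 61, 64, 66]
drsi, norm_rmse = delta_rsi(rsi)
print(round(drsi, 3), round(norm_rmse, 4))              # positive and rising -> bullish reading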
My original D-RSI Oscillator script can be found here:
Linear Correlation Oscillator
You don't need loops to get the rolling correlation between an input series and a linear sequence of values; this can be obtained from the normalized difference between a WMA and an SMA of the input series.
The closed-form solutions for the moving average and standard deviation of a linear sequence can be easily calculated, while the same rolling statistics for the input series can be computed using cumulative sums. All these concepts were introduced in previous indicators posts long ago.
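A short Python/numpy check of that claim (illustrative, using a 0..n-1 ramp and population statistics) computes the correlation both directly and from the normalized WMA - SMA difference:

import numpy as np

def corr_with_ramp_direct(y):
    return np.corrcoef(np.arange(len(y)), y)[0, 1]

def corr_with_ramp_from_wma_sma(y):
    n = len(y)
    weights = np.arange(1, n + 1)                 # linearly increasing weights
    wma = np.dot(weights, y) / weights.sum()
    sma = np.mean(y)
    norm = np.sqrt(n * (n * n - 1) / 12.0) * np.std(y) * np.sqrt(n)
    return (n * (n + 1) / 2.0) * (wma - sma) / norm

y = np.array([1.0, 2.5, 2.0, 4.0, 5.5, 5.0, 7.0])
print(corr_with_ramp_direct(y), corr_with_ramp_from_wma_sma(y))   # both ~0.96

Both functions return the same value, which is the point: one WMA, one SMA and a rolling standard deviation are enough to get the rolling correlation.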
This approach makes it possible to efficiently compute the rolling R-squared of a linear regression, as well as its SSE.
Using the rolling correlation as a trend indicator is often attributed to John Ehlers with the Correlation Trend Indicator (Correlation As A Trend Indicator), but this precise method had already been applied by a wide variety of users quite a while before; in fact, the LSMA can be computed using this precise indicator. You can see an example where the correlation oscillator appears below:
`security()` revisited [PineCoders]
NOTE
The non-repainting technique in this publication that relies on bar states is now deprecated, as we have identified inconsistencies that undermine its credibility as a universal solution. The outputs that use the technique are still available for reference in this publication. However, we do not endorse its usage. See this publication for more information about the current best practices for requesting HTF data and why they work.
█ OVERVIEW
This script presents a new function to help coders use security() in both repainting and non-repainting modes. We revisit this often misunderstood and misused function, and explain its behavior in different contexts, in the hope of dispelling some of the coder lore surrounding it. The function is incredibly powerful, yet misused, it can become a dangerous WMD and an instrument of deception, for both coders and traders.
We will discuss:
• How to use our new `f_security()` function.
• The behavior of Pine code and security() on the three very different types of bars that make up any chart.
• Why what you see on a chart is a simulation, and should be taken with a grain of salt.
• Why we are presenting a new version of a function handling security() calls.
• Other topics of interest to coders using higher timeframe (HTF) data.
█ WARNING
We have tried to deliver a function that is simple to use and will, in non-repainting mode, produce reliable results for both experienced and novice coders. If you are a novice coder, stick to our recommendations to avoid getting into trouble, and DO NOT change our `f_security()` function when using it. Use `false` as the function's last argument and refrain from using your script at smaller timeframes than the chart's. To call our function to fetch a non-repainting value of close from the 1D timeframe, use:
f_security(_sym, _res, _src, _rep) => security(_sym, _res, _src[_rep and barstate.isrealtime ? 1 : 0])[_rep or barstate.isrealtime ? 0 : 1]
previousDayClose = f_security(syminfo.tickerid, "D", close, false)
If that's all you're interested in, you are done.
If you choose to ignore our recommendation and use the function in repainting mode by changing the `false` in there for `true`, we sincerely hope you read the rest of our ramblings before you do so, to understand the consequences of your choice.
Let's now have a look at what security() is showing you. There is a lot to cover, so buckle up! But before we dig in, one last thing.
What is a chart?
A chart is a graphic representation of events that occur in markets. As any representation, it is not reality, but rather a model of reality. As Scott Page eloquently states in The Model Thinker : "All models are wrong; many are useful". Having in mind that both chart bars and plots on our charts are imperfect and incomplete renderings of what actually occurred in realtime markets puts us coders in a place from where we can better understand the nature of, and the causes underlying the inevitable compromises necessary to build the data series our code uses, and print chart bars.
Traders or coders complaining that charts do not reflect reality act like someone who would complain that the word "dog" is not a real dog. Let's recognize that we are dealing with models here, and try to understand them the best we can. Sure, models can be improved; TradingView is constantly improving the quality of the information displayed on charts, but charts nevertheless remain mere translations. Plots of data fetched through security() being modelized renderings of what occurs at higher timeframes, coders will build more useful and reliable tools for both themselves and traders if they endeavor to perfect their understanding of the abstractions they are working with. We hope this publication helps you in this pursuit.
█ FEATURES
This script's "Inputs" tab has four settings:
• Repaint : Determines whether the functions will use their repainting or non-repainting mode.
Note that the setting will not affect the behavior of the yellow plot, as it always repaints.
• Source : The source fetched by the security() calls.
• Timeframe : The timeframe used for the security() calls. If it is lower than the chart's timeframe, a warning appears.
• Show timeframe reminder : Displays a reminder of the timeframe after the last bar.
█ THE CHART
The chart shows two different pieces of information and we want to discuss other topics in this section, so we will be covering:
A — The type of chart bars we are looking at, indicated by the colored band at the top.
B — The plots resulting of calling security() with the close price in different ways.
C — Points of interest on the chart.
A — Chart bars
The colored band at the top shows the three types of bars that any chart on a live market will print. It is critical for coders to understand the important distinctions between each type of bar:
1 — Gray : Historical bars, which are bars that were already closed when the script was run on them.
2 — Red : Elapsed realtime bars, i.e., realtime bars that have run their course and closed.
The state of script calculations showing on those bars is that of the last time they were made, when the realtime bar closed.
3 — Green : The realtime bar. Only the rightmost bar on the chart can be the realtime bar at any given time, and only when the chart's market is active.
Refer to the Pine User Manual's Execution model page for a more detailed explanation of these types of bars.
B — Plots
The chart shows the result of letting our 5sec chart run for a few minutes with the following settings: "Repaint" = "On" (the default is "Off"), "Source" = `close` and "Timeframe" = 1min. The five lines plotted are the following. They have progressively thinner widths:
1 — Yellow : A normal, repainting security() call.
2 — Silver : Our recommended security() function.
3 — Fuchsia : Our recommended way of achieving the same result as our security() function, for cases when the source used is a function returning a tuple.
4 — White : The method we previously recommended in our MTF Selection Framework , which uses two distinct security() calls.
5 — Black : A lame attempt at fooling traders that MUST be avoided.
All lines except the first one in yellow will vary depending on the "Repaint" setting in the script's inputs. The first plot does not change because, contrary to all other plots, it contains no conditional code to adapt to repainting/no-repainting modes; it is a simple security() call showing its default behavior.
C — Points of interest on the chart
Historical bars do not show actual repainting behavior
To appreciate what a repainting security() call will plot in realtime, one must look at the realtime bar and at elapsed realtime bars, the bars where the top line is green or red on the chart at the top of this page. There you can see how the plots go up and down, following the close value of each successive chart bar making up a single bar of the higher timeframe. You would see the same behavior in "Replay" mode. In the realtime bar, the movement of repainting plots will vary with the source you are fetching: open will not move after a new timeframe opens, low and high will change when a new low or high are found, close will follow the last feed update. If you are fetching a value calculated by a function, it may also change on each update.
Now notice how different the plots are on historical bars. There, the plot shows the close of the previously completed timeframe for the whole duration of the current timeframe, until on its last bar the price updates to the current timeframe's close when it is confirmed (if the timeframe's last bar is missing, the plot will only update on the next timeframe's first bar). That last bar is the only one showing where the plot would end if that timeframe's bars had elapsed in realtime. If one doesn't understand this, one cannot properly visualize how his script will calculate in realtime when using repainting. Additionally, as published scripts typically show charts where the script has only run on historical bars, they are, in fact, misleading traders who will naturally assume the script will behave the same way on realtime bars.
Non-repainting plots are more accurate on historical bars
Now consider this chart, where we are using the same settings as on the chart used to publish this script, except that we have turned "Repaint" off this time:
The yellow line here is our reference, repainting line, so although repainting is turned off, it is still repainting, as expected. Because repainting is now off, however, plots on historical bars show the previous timeframe's close until the first bar of a new timeframe, at which point the plot updates. This correctly reflects the behavior of the script in the realtime bar, where because we are offsetting the series by one, we are always showing the previously calculated—and thus confirmed—higher timeframe value. This means that in realtime, we will only get the previous timeframe's values one bar after the timeframe's last bar has elapsed, at the open of the first bar of a new timeframe. Historical and elapsed realtime bars will not actually show this nuance because they reflect the state of calculations made on their close , but we can see the plot update on that bar nonetheless.
► This more accurate representation on historical bars of what will happen in the realtime bar is one of the two key reasons why using non-repainting data is preferable.
The other is that in realtime, your script will be using more reliable data and behave more consistently.
Misleading plots
Valiant attempts by coders to show non-repainting, higher timeframe data updating earlier than on our chart are futile. If updates occur one bar earlier because coders use the repainting version of the function, then so be it, but they must then also accept that their historical bars are not displaying information that is as accurate. Not informing script users of this is to mislead them. Coders should also be aware that if they choose to use repainting data in realtime, they are sacrificing reliability to speed and may be running a strategy that behaves very differently from the one they backtested, thus invalidating their tests.
When, however, coders make what are supposed to be non-repainting plots plot artificially early on historical bars, as in examples "c4" and "c5" of our script, they would want us to believe they have achieved the miracle of time travel. Our understanding of the current state of science dictates that for now, this is impossible. Using such techniques in scripts is plainly misleading, and public scripts using them will be moderated. We are coding trading tools here—not video games. Elementary ethics prescribe that we should not mislead traders, even if it means not being able to show sexy plots. As the great Feynman said: You should not fool the layman when you're talking as a scientist.
You can readily appreciate the fantasy plot of "c4", the thinnest line in black, by comparing its supposedly non-repainting behavior between historical bars and realtime bars. After updating—by miracle—as early as the wide yellow line that is repainting, it suddenly moves to a more realistic position when the script is running in realtime, in sync with our non-repainting lines. The "c5" version does not plot on the chart, but it displays in the Data Window. It is even worse than "c4" in that it also updates magically early on historical bars, but then goes on to evaluate like the repainting yellow line in realtime, except one bar late.
Data Window
The Data Window shows the values of the chart's plots, then the values of both the inside and outside offsets used in our calculations, so you can see them change bar by bar. Notice their differences between historical and elapsed realtime bars, and the realtime bar itself. If you do not know about the Data Window, have a look at this essential tool for Pine coders in the Pine User Manual's page on Debugging . The conditional expressions used to calculate the offsets may seem tortuous but their objective is quite simple. When repainting is on, we use this form, so with no offset on all bars:
security(ticker, i_timeframe, i_source[0])[0]
// which is equivalent to:
security(ticker, i_timeframe, i_source)
When repainting is off, we use two different and inverted offsets on historical bars and the realtime bar:
// Historical bars:
security(ticker, i_timeframe, i_source[0])[1]
// Realtime bar (and thus, elapsed realtime bars):
security(ticker, i_timeframe, i_source[1])[0]
The offsets in the first line show how we prevent repainting on historical bars without the need for the `lookahead` parameter. We use the value of the function call on the chart's previous bar. Since values between the repainting and non-repainting versions only differ on the timeframe's last bar, we can use the previous value so that the update only occurs on the timeframe's first bar, as it will in realtime when not repainting.
In the realtime bar, we use the second call, where the offsets are inverted. This is because if we used the first call in realtime, we would be fetching the value of the repainting function on the previous bar, so the close of the last bar. What we want, instead, is the data from the previous, higher timeframe bar , which has elapsed and is confirmed, and thus will not change throughout realtime bars, except on the first constituent chart bar belonging to a new higher timeframe.
After the offsets, the Data Window shows values for the `barstate.*` variables we use in our calculations.
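Wrapping these conditional offsets into a single function gives one security() call covering both modes. What follows is only a sketch; the exact `f_security()` code in this script may differ slightly. Here, `_repaint` stands for the "Repaint" input:
// Sketch of a wrapper switching both offsets with `barstate.isrealtime` (illustrative, assumed form).
f_security(_symbol, _res, _src, _repaint) => security(_symbol, _res, _src[_repaint ? 0 : barstate.isrealtime ? 1 : 0])[_repaint ? 0 : barstate.isrealtime ? 0 : 1]
// Example: fetching a non-repainting higher timeframe close.
nonRepaintingClose = f_security(syminfo.tickerid, i_timeframe, close, false)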
█ NOTES
Why are we revisiting security() ?
For four reasons:
1 — We were seeing coders misuse our `f_secureSecurity()` function presented in How to avoid repainting when using security() .
Some novice coders were modifying the offset used with the history-referencing operator in the function, making it zero instead of one,
which to our horror, caused look-ahead bias when used with `lookahead = barmerge.lookahead_on`.
We wanted to present a safer function which avoids introducing the dreaded "lookahead" in the scripts of unsuspecting coders (see the sketch following this list).
2 — The popularity of security() in screener-type scripts where coders need to use the full 40 calls allowed per script made us want to propose
a solid method of allowing coders to offer a repainting/no-repainting choice to their script users with only one security() call.
3 — We wanted to explain why some alternatives we see circulating are inadequate and produce misleading behavior.
4 — Our previous publication on security() focused on how to avoid repainting, yet many other considerations worthy of attention are not related to repainting.
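To illustrate the first point, here is a hedged sketch (with our own variable names) of the difference the offset makes when `lookahead` is on:
// Offset 0 with lookahead on looks into the future on historical bars: it returns the HTF bar's final close before that close is known.
biased   = security(syminfo.tickerid, i_timeframe, close, lookahead = barmerge.lookahead_on)
// Offset 1 with lookahead on (the old `f_secureSecurity()` logic) returns the last confirmed HTF close, with no lookahead bias.
unbiased = security(syminfo.tickerid, i_timeframe, close[1], lookahead = barmerge.lookahead_on)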
Handling tuples
When sending function calls that return tuples with security() , our `f_security()` function will not work because Pine does not allow the history-referencing operator to be used on tuple return values. The solution is to integrate the inside offset into your function's arguments, use it to offset the results the function returns, and then apply the outside offset in a reassignment of the tuple variables after security() returns its values to the script, as we do in our "c2" example.
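A rough sketch of that technique follows. The function and variable names are illustrative, not the actual "c2" code, `i_repaint` stands for the "Repaint" input, and we assign the offset results to new variables here, whereas the script's example reassigns the tuple variables themselves:
// Illustrative function returning a tuple; `_offset` is the inside offset, applied to the source before the calculations.
f_maPair(_src, _offset) =>
    _s = _src[_offset]
    [sma(_s, 20), ema(_s, 20)]
insideOffset  = i_repaint ? 0 : barstate.isrealtime ? 1 : 0
outsideOffset = i_repaint ? 0 : barstate.isrealtime ? 0 : 1
[s, e] = security(syminfo.tickerid, i_timeframe, f_maPair(i_source, insideOffset))
// The outside offset is applied after security() returns the tuple's values.
smaHtf = s[outsideOffset]
emaHtf = e[outsideOffset]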
Does it repaint?
We're pretty sure Wilder was not asked very often if RSI repainted. Why? Because it wasn't in fashion—and largely unnecessary—to ask that sort of question in the 80's. Many traders back then used daily charts only, and indicator values were calculated at the day's close, so everybody knew what they were getting. Additionally, indicator values were calculated by generally reputable outfits or traders themselves, so data was pretty reliable. Today, almost anybody can write a simple indicator, and the programming languages used to write them are complex enough for some coders lacking the caution, know-how or ethics of the best professional coders, to get in over their heads and produce code that does not work the way they think it does.
As we hope to have clearly demonstrated, traders do have legitimate cause to ask if MTF scripts repaint or not when authors do not specify it in their script's description.
► We recommend that authors always use our `f_security()` with `false` as the last argument to avoid repainting when fetching data dependent on OHLCV information. This is the only way to obtain reliable HTF data. If you want to offer users a choice, make non-repainting mode the default, so that if users choose repainting, it will be their responsibility. Non-repainting security() calls are also the only way for scripts to show historical behavior that matches the script's realtime behavior, so you are not misleading traders. Additionally, non-repainting HTF data is the only way that non-repainting alerts can be configured on MTF scripts, as users of MTF scripts cannot prevent their alerts from repainting by simply configuring them to trigger on the bar's close.
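For example, offering that choice with a non-repainting default could look like this minimal sketch, assuming the `f_security()` wrapper sketched earlier:
i_repaint = input(false, "Repaint HTF data")  // non-repainting by default; turning repainting on becomes the user's decision
htfClose  = f_security(syminfo.tickerid, i_timeframe, close, i_repaint)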
Data feeds
A chart at one timeframe is made up of multiple feeds that mesh seamlessly to form one chart. Historical bars can use one feed, and the realtime bar another, which brokers/exchanges can sometimes update retroactively so that elapsed realtime bars will reappear with very slight modifications when the browser's tab is refreshed. Intraday and daily chart prices also very often originate from different feeds supplied by brokers/exchanges. That is why security() calls at higher timeframes may be using a completely different feed than the chart, and explains why the daily high value, for example, can vary between timeframes. Volume information can also vary considerably between intraday and daily feeds in markets like stocks, because more volume information becomes available at the end of day. It is thus expected behavior—and not a bug—to see data variations between timeframes.
Another point to keep in mind concerning feeds is that when you are using a repainting security() plot in realtime, you will sometimes see discrepancies between its plot and the realtime bars. An artefact revealing these inconsistencies can be seen when security() plots sometimes skip a realtime chart bar during periods of high market activity. This occurs because of races between the chart's and the security() feeds, which are monitored by independent, concurrent processes. A blue arrow on the chart indicates such an occurrence. This is another cause of repainting, where realtime bar-building logic can produce different outcomes on one closing price. It is also another argument supporting our recommendation to use non-repainting data.
Alternatives
There is an alternative to using security() in some conditions. If all you need are OHLC prices of a higher timeframe, you can use a technique like the one Duyck demonstrates in his security free MTF example - JD script. It has the great advantage of displaying actual repainting values on historical bars, which mimic the code's behavior in the realtime bar—or at least on elapsed realtime bars, contrary to a repainting security() plot. It has the disadvantage of using the current chart's TF data feed prices, whereas higher timeframe data feeds may contain different and more reliable prices when they are compiled at the end of the day. In its current state, it also does not allow for a repainting/no-repainting choice.
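The gist of that technique, sketched very roughly below (our simplification, not Duyck's actual code), is to build the higher timeframe values from the chart's own prices as its bars elapse:
// True on the first chart bar of each new higher timeframe bar.
newHtfBar = change(time(i_timeframe)) != 0
var float htfOpen = na
var float htfHigh = na
var float htfLow  = na
htfOpen := newHtfBar ? open : htfOpen
htfHigh := newHtfBar ? high : max(htfHigh, high)
htfLow  := newHtfBar ? low  : min(htfLow, low)
// The HTF close is simply the latest chart close until the HTF bar completes.
htfClose = close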
When `lookahead` is useful
When retrieving non-price data, or in special cases, for experiments, it can be useful to use `lookahead`. One example is our Backtesting on Non-Standard Charts: Caution! script where we are fetching prices of standard chart bars from non-standard charts.
Warning users
Normal use of security() dictates that it only be used at timeframes equal to or higher than the chart's. To prevent users from inadvertently using your script in contexts where it will not produce expected behavior, it is good practice to warn them when their chart is on a higher timeframe than the one in the script's "Timeframe" field. Our `f_tfReminderAndErrorCheck()` function in this script does that. It can also print a reminder of the higher timeframe. It uses one security() call.
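The general idea can be sketched as follows; this is a simplification using our own names, not the actual `f_tfReminderAndErrorCheck()` code:
// Convert a timeframe to minutes so the chart's timeframe and the "Timeframe" input can be compared.
f_resInMinutes() => timeframe.multiplier * (timeframe.isseconds ? 1. / 60 : timeframe.isminutes ? 1. : timeframe.isdaily ? 1440. : timeframe.isweekly ? 10080. : 43800.)
chartTfMinutes  = f_resInMinutes()
higherTfMinutes = security(syminfo.tickerid, i_timeframe, f_resInMinutes())  // the function's one security() call
var label tfReminder = label.new(na, na, "")
if barstate.islast
    label.set_xy(tfReminder, bar_index + 2, close)
    label.set_text(tfReminder, higherTfMinutes < chartTfMinutes ? "Warning: the chart's timeframe is higher than " + i_timeframe : "Timeframe: " + i_timeframe)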
Intrabar timeframes
security() is not supported by TradingView when used with timeframes lower than the chart's. While it is still possible to use security() at intrabar timeframes, it then behaves differently. If no care is taken to send a function specifically written to handle the successive intrabars, security() will return the value of the last intrabar in the chart's timeframe, so the last 1H bar in the current 1D bar, if called at "60" from a "D" chart timeframe. If you are an advanced coder, see our FAQ entry on the techniques involved in processing intrabar timeframes. Using intrabar timeframes comes with important limitations, which you must understand and explain to traders if you choose to make scripts using the technique available to others. Special care should also be taken to thoroughly test this type of script. Novice coders should refrain from getting involved in this.
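As a minimal illustration of that behavior (not a complete intrabar technique):
// On a "D" chart, this returns only the value from the last "60" intrabar of each daily bar, not the full series of intrabars.
lastIntrabarClose = security(syminfo.tickerid, "60", close)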
█ TERMINOLOGY
Timeframe
Timeframe, interval and resolution are all used to name the same concept. We have, in the past, used "timeframe" and "resolution" more or less interchangeably. Recently, members from the Pine and PineCoders teams have decided to settle on "timeframe", so from here on we will be sticking to that term.
Multi-timeframe (MTF)
Some coders use "multi-timeframe" or "MTF" to name what are in fact "multi-period" calculations, as when they use MAs of progressively longer periods. We consider that a misleading use of "multi-timeframe", which should be reserved for code that actually makes calculations from another timeframe's context using security(), save for scripts like Duyck's one mentioned earlier, or TradingView's Relative Volume at Time , which use a user-selected timeframe as an anchor to reset calculations. Calculations made at the chart's timeframe by varying the period of MAs or other rolling-window calculations should be called "multi-period", and "MTF-anchored" could be used for scripts that reset calculations on timeframe boundaries.
Colophon
Our script was written using the PineCoders Coding Conventions for Pine .
The description was formatted using the techniques explained in the How We Write and Format Script Descriptions PineCoders publication.
Snippets were lifted from our MTF Selection Framework , then massaged to create the `f_tfReminderAndErrorCheck()` function.
█ THANKS
Thanks to apozdnyakov for his help with the innards of security() .
Thanks to bmistiaen for proofreading our description.
Look first. Then leap.
Nick Rypock Trailing Reverse (NRTR)
This indicator was invented in 2001 by Konstantin Kopyrkin. The name "Nick Rypock" comes from reading his surname in the opposite direction:
Kopyrkin -> Kopyr Kin -> Kin Kopyr -> Nik Rypok
The idea of the indicator is similar to the Chandelier Exit, but it doesn't involve an ATR component; it uses a percentage instead.
A dynamic price channel is used to calculate the NRTR. The calculations involve only those prices that are included in the current trend and exclude the extremes related to the previous trend. The indicator is always at the same distance (in percent) from the extremes reached by prices (below the maximum peak for the current uptrend, above the minimum bottom for the current downtrend).
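A bare-bones Pine sketch of that trailing logic follows. This is our simplification of the idea described above, not the published indicator's code, and the parameter names are ours:
//@version=4
study("NRTR sketch", overlay = true)
k = input(2.0, "Trail percent") / 100.0
var int   trend   = 1
var float extreme = close            // peak of the current uptrend, or bottom of the current downtrend
var float nrtr    = close * (1 - k)
if trend > 0
    extreme := max(extreme, close)   // only prices belonging to the current trend are used
    nrtr    := extreme * (1 - k)     // always k percent below the uptrend's peak
    if close < nrtr
        trend   := -1
        extreme := close
        nrtr    := extreme * (1 + k)
else
    extreme := min(extreme, close)
    nrtr    := extreme * (1 + k)     // always k percent above the downtrend's bottom
    if close > nrtr
        trend   := 1
        extreme := close
        nrtr    := extreme * (1 - k)
plot(nrtr, "NRTR", color = trend > 0 ? color.green : color.red)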
Machine Learning: Logistic Regression
Multi-timeframe Strategy based on the Logistic Regression algorithm
Description:
This strategy uses a classic machine learning algorithm that came from statistics - Logistic Regression (LR).
The first and most important thing about logistic regression is that it is not a 'Regression' but a 'Classification' algorithm. The name itself is somewhat misleading. Regression gives a continuous numeric output, but most of the time we need the output in classes (i.e. categorical, discrete). For example, we want to classify emails into 'spam' or 'not spam', classify a treatment into 'success' or 'failure', classify a statement into 'right' or 'wrong', classify election data into 'fraudulent vote' or 'non-fraudulent vote', classify a market move into 'long' or 'short', and so on. These are examples of logistic regression having a binary output (also called dichotomous).
You can also think of logistic regression as a special case of linear regression when the outcome variable is categorical, where we are using log of odds as dependent variable. In simple words, it predicts the probability of occurrence of an event by fitting data to a logit function.
Basically, the theory behind Logistic Regression is very similar to the one from Linear Regression, where we seek to draw a best-fitting line over data points, but in Logistic Regression, we don’t directly fit a straight line to our data like in linear regression. Instead, we fit a S shaped curve, called Sigmoid, to our observations, that best SEPARATES data points. Technically speaking, the main goal of building the model is to find the parameters (weights) using gradient descent.
In this script the LR algorithm is retrained on each new bar, trying to classify it into one of the two categories. This is done via the logistic_regression function, which updates the weights w in a loop that runs for the number of times specified by iterations. In the end, the weights are passed through the sigmoid function, yielding a prediction.
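The following is a deliberately minimal sketch of that idea: one weight, one normalized feature, and the current bar as the only training sample. Names and defaults are illustrative; the published script's logistic_regression function works with more data and parameters:
//@version=4
study("Logistic Regression sketch")
learningRate = input(0.0009, "Learning Rate")
iterations   = input(1000, "Training Iterations")
lookback     = input(3, "Normalization Lookback")
sigmoid(z) => 1.0 / (1.0 + exp(-z))
// Normalize the feature to a 0..1 range over the lookback window.
x = (close - lowest(close, lookback)) / max(highest(close, lookback) - lowest(close, lookback), syminfo.mintick)
y = close > close[1] ? 1.0 : 0.0         // class label: 1 if price moved up, 0 otherwise
// Retrained on every bar: gradient descent on the cross-entropy loss for a single weight.
w = 0.0
for i = 1 to iterations
    hypothesis = sigmoid(x * w)          // current prediction
    gradient   = x * (hypothesis - y)    // derivative of the loss with respect to w
    w := w - learningRate * gradient     // weight update
prediction = sigmoid(x * w)              // probability of an up move, in the 0..1 range
plot(prediction)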
Mind that some assets require modifying the script's input parameters. For instance, when used with BTCUSD and USDJPY, the 'Normalization Lookback' parameter should be lowered to around 4 (values from 2 to 5), and optionally the 'Use Price Data for Signal Generation?' parameter should be checked. The defaults were tested with EURUSD.
Note: TradingView's playback feature helps to see this strategy in action.
Warning: Signals ARE repainting.
Style tags: Trend Following, Trend Analysis
Asset class: Equities, Futures, ETFs, Currencies and Commodities
Dataset: FX Minutes/Hours/Days
VolumeHeatmap | Experimental Version of Marketorders Matrix
Dear all,
I wish you all a Happy New Year!
Recently I tried to develop a volume heatmap built from the market orders.
With the current version I have reached that goal, and I am presenting it to everyone; a few bugs remain that I cannot solve today.
It is also possible to see the POC, as well as the dynamics of how the volume develops:
The idea behind it is to find the price level with the most volume, which is always the target for value trading.
If you find it useful or have questions, let me know!
Kind regards
NXT2017