Library "PivotLive"
zigCore(lo, hi, d, dev, bs)
Parameters:
lo (float)
hi (float)
d (int)
dev (int)
bs (int)
Library "LibVPrf"
This library provides an object-oriented framework for volume
profile analysis in Pine Script®. It is built around the `VProf`
User-Defined Type (UDT), which encapsulates all data, settings,
and statistical metrics for a single profile, enabling stateful
analysis with on-demand calculations.
Key Features:
1. **Object-Oriented Design (UDT):** The library is built around
the `VProf` UDT. This object encapsulates all profile data
and provides methods for its full lifecycle management,
including creation, cloning, clearing, and merging of profiles.
2. **Volume Allocation (`AllotMode`):** Offers two methods for
allocating a bar's volume:
- **Classic:** Assigns the entire bar's volume to the close
price bucket.
- **PDF:** Distributes volume across the bar's range using a
statistical price distribution model from the `LibBrSt` library.
3. **Buy/Sell Volume Splitting (`SplitMode`):** Provides methods
for classifying volume into buying and selling pressure:
- **Classic:** Classifies volume based on the bar's color (Close vs. Open).
- **Dynamic:** A specific model that analyzes candle structure
(body vs. wicks) and a short-term trend factor to
estimate the buy/sell share at each price level.
4. **Statistical Analysis (On-Demand):** Offers a suite of
statistical metrics calculated using a "Lazy Evaluation"
pattern (computed only when requested via `get...` methods):
- **Central Tendency:** Point of Control (POC), VWAP, and Median.
- **Dispersion:** Value Area (VA) and Population Standard Deviation.
- **Shape:** Skewness and Excess Kurtosis.
- **Delta:** Cumulative Volume Delta, including its
historical high/low watermarks.
5. **Structural Analysis:** Includes a parameter-free method
(`getSegments`) to decompose a profile into its fundamental
unimodal segments, allowing for modality detection (e.g.,
identifying bimodal profiles).
6. **Dynamic Profile Management:**
- **Auto-Fitting:** Profiles set to `dynamic = true` will
automatically expand their price range to fit new data.
- **Manipulation:** The resolution, price range, and Value Area
of a dynamic profile can be changed at any time. This
triggers a resampling process that uses a **linear
interpolation model** to re-bucket existing volume.
- **Assumption:** Non-dynamic profiles are fixed and will throw
a `runtime.error` if `addBar` is called with data
outside their initial range.
7. **Bucket-Level Access:** Provides getter methods for direct
iteration and analysis of the raw buy/sell volume and price
boundaries of each individual price bucket.
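The sketch below ties these features together: it builds one dynamic profile, feeds it each bar, and reads the POC and Value Area on demand. The import path and version are assumptions (inferred from the `AustrianTradingMachine/LibBrSt/1` dependency named in the `estimator` parameter below); adjust them to the published library.
```pine
//@version=6
indicator("LibVPrf usage sketch")

import AustrianTradingMachine/LibVPrf/1 as vp  // path/version assumed — adjust as needed

// One dynamic profile accumulating the whole chart history.
var vp.VProf prof = vp.create(
     buckets   = 50,
     rangeUp   = high,
     rangeLo   = low,
     dynamic   = true,  // range auto-expands to fit new bars
     valueArea = 70)

prof.addBar(0)          // allot the current bar's volume

// Lazy metrics: computed only when requested.
[pocIdx, pocPrice] = prof.getPoc()
[vaUpIdx, vaUpPrice, vaLoIdx, vaLoPrice] = prof.getVA()

plot(pocPrice,  "POC")
plot(vaUpPrice, "VA up")
plot(vaLoPrice, "VA low")
```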
---
**DISCLAIMER**
This library is provided "AS IS" and for informational and
educational purposes only. It does not constitute financial,
investment, or trading advice.
The author assumes no liability for any errors, inaccuracies,
or omissions in the code. Using this library to build
trading indicators or strategies is entirely at your own risk.
As a developer using this library, you are solely responsible
for the rigorous testing, validation, and performance of any
scripts you create based on these functions. The author shall
not be held liable for any financial losses incurred directly
or indirectly from the use of this library or any scripts
derived from it.
create(buckets, rangeUp, rangeLo, dynamic, valueArea, allot, estimator, cdfSteps, split, trendLen)
Construct a new `VProf` object with fixed bucket count & range.
Parameters:
buckets (int) : series int number of price buckets ≥ 1
rangeUp (float) : series float upper price bound (absolute)
rangeLo (float) : series float lower price bound (absolute)
dynamic (bool) : series bool Flag for dynamic adaption of profile ranges
valueArea (int) : series int Percentage of total volume to include in the Value Area (1..100)
allot (series AllotMode) : series AllotMode Allocation mode `classic` or `pdf` (default `classic`)
estimator (series PriceEst enum from AustrianTradingMachine/LibBrSt/1) : series LibBrSt.PriceEst PDF model used when `allot == pdf` (default = `uniform`)
cdfSteps (int) : series int Even number of sub-intervals for Simpson's rule (default 20)
split (series SplitMode) : series SplitMode Buy/Sell determination (default `classic`)
trendLen (int) : series int Look‑back bars for trend factor (default 3)
Returns: VProf freshly initialised profile
method clone(self)
Create a deep copy of the volume profile.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object to copy
Returns: VProf A new, independent copy of the profile
method clear(self)
Reset all bucket tallies while keeping configuration intact.
Namespace types: VProf
Parameters:
self (VProf) : VProf profile object
Returns: VProf cleared profile (chaining)
method merge(self, srcABuy, srcASell, srcRangeUp, srcRangeLo, srcCvd, srcCvdHi, srcCvdLo)
Merges volume data from a source profile into the current profile.
If resizing is needed, it performs a high-fidelity re-bucketing of existing
volume using a linear interpolation model inferred from neighboring buckets,
preventing aliasing artifacts and ensuring accurate volume preservation.
Namespace types: VProf
Parameters:
self (VProf) : VProf The target profile object to merge into.
srcABuy (array) : array The source profile's buy volume bucket array.
srcASell (array) : array The source profile's sell volume bucket array.
srcRangeUp (float) : series float The upper price bound of the source profile.
srcRangeLo (float) : series float The lower price bound of the source profile.
srcCvd (float) : series float The final Cumulative Volume Delta (CVD) value of the source profile.
srcCvdHi (float) : series float The historical high-water mark of the CVD from the source profile.
srcCvdLo (float) : series float The historical low-water mark of the CVD from the source profile.
Returns: VProf `self` (chaining), now containing the merged data.
method addBar(self, offset)
Add current bar’s volume to the profile (call once per realtime bar).
Classic mode allocates all volume to the close bucket and classifies
it by `close >= open`. PDF mode distributes volume across buckets by the
estimator's CDF mass. For `split = dynamic`, the buy/sell share at each
price level is computed via a context-driven piecewise share function s(u).
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object
offset (int) : series int To offset the calculated bar
Returns: VProf `self` (method chaining)
method setBuckets(self, buckets)
Sets the number of buckets for the volume profile.
Behavior depends on the `isDynamic` flag.
- If `dynamic = true`: Works on filled profiles by re-bucketing to a new resolution.
- If `dynamic = false`: Only works on empty profiles to prevent accidental changes.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object
buckets (int) : series int The new number of buckets
Returns: VProf `self` (chaining)
method setRanges(self, rangeUp, rangeLo)
Sets the price range for the volume profile.
Behavior depends on the `dynamic` flag.
- If `dynamic = true`: Works on filled profiles by re-bucketing existing volume.
- If `dynamic = false`: Only works on empty profiles to prevent accidental changes.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object
rangeUp (float) : series float The new upper price bound
rangeLo (float) : series float The new lower price bound
Returns: VProf `self` (chaining)
method setValueArea(self, valueArea)
Set the percentage of volume for the Value Area. If the value
changes, the profile is finalized again.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object
valueArea (int) : series int The new Value Area percentage (1..100)
Returns: VProf `self` (chaining)
method getBktBuyVol(self, idx)
Get Buy volume of a bucket.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object
idx (int) : series int Bucket index
Returns: series float Buy volume ≥ 0
method getBktSellVol(self, idx)
Get Sell volume of a bucket.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object
idx (int) : series int Bucket index
Returns: series float Sell volume ≥ 0
method getBktBnds(self, idx)
Get Bounds of a bucket.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object
idx (int) : series int Bucket index
Returns:
up series float The upper price bound of the bucket.
lo series float The lower price bound of the bucket.
method getPoc(self)
Get POC information.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object
Returns:
pocIndex series int The index of the Point of Control (POC) bucket.
pocPrice series float The mid-price of the Point of Control (POC) bucket.
method getVA(self)
Get Value Area (VA) information.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object
Returns:
vaUpIndex series int The index of the upper bound bucket of the Value Area.
vaUpPrice series float The upper price bound of the Value Area.
vaLoIndex series int The index of the lower bound bucket of the Value Area.
vaLoPrice series float The lower price bound of the Value Area.
method getMedian(self)
Get the profile's median price and its bucket index. Calculates the value on-demand if stale.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object.
Returns:
medianIndex series int The index of the bucket containing the Median.
medianPrice series float The Median price of the profile.
method getVwap(self)
Get the profile's VWAP and its bucket index. Calculates the value on-demand if stale.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object.
Returns:
vwapIndex series int The index of the bucket containing the VWAP.
vwapPrice series float The Volume Weighted Average Price of the profile.
method getStdDev(self)
Get the profile's volume-weighted standard deviation. Calculates the value on-demand if stale.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object.
Returns: series float The Standard deviation of the profile.
method getSkewness(self)
Get the profile's skewness. Calculates the value on-demand if stale.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object.
Returns: series float The Skewness of the profile.
method getKurtosis(self)
Get the profile's excess kurtosis. Calculates the value on-demand if stale.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object.
Returns: series float The Kurtosis of the profile.
method getSegments(self)
Get the profile's fundamental unimodal segments. Calculates on-demand if stale.
Uses a parameter-free, pivot-based recursive algorithm.
Namespace types: VProf
Parameters:
self (VProf) : VProf The profile object.
Returns: matrix A 2-column matrix where each row is a (startIndex, endIndex) pair of bucket indices delimiting one unimodal segment.
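Continuing the usage sketch above (same `prof` object), a minimal modality check simply counts the rows of the segment matrix:
```pine
// Two or more unimodal segments indicate a multimodal (e.g., bimodal) profile.
segs = prof.getSegments()
isBimodal = segs.rows() >= 2
bgcolor(isBimodal ? color.new(color.orange, 85) : na)
```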
method getCvd(self)
Cumulative Volume Delta (CVD) like metric over all buckets.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object.
Returns:
cvd series float The final Cumulative Volume Delta (Total Buy Vol - Total Sell Vol).
cvdHi series float The running high-water mark of the CVD as volume was added.
cvdLo series float The running low-water mark of the CVD as volume was added.
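Continuing the same sketch, the delta metrics unpack as a three-element tuple:
```pine
[cvd, cvdHi, cvdLo] = prof.getCvd()
plot(cvd,   "CVD")
plot(cvdHi, "CVD high-water mark")
plot(cvdLo, "CVD low-water mark")
```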
VProf
VProf Bucketed Buy/Sell volume profile plus meta information.
Fields:
buckets (series int) : int Number of price buckets (granularity ≥1)
rangeUp (series float) : float Upper price range (absolute)
rangeLo (series float) : float Lower price range (absolute)
dynamic (series bool) : bool Flag for dynamic adaption of profile ranges
valueArea (series int) : int Percentage of total volume to include in the Value Area (1..100)
allot (series AllotMode) : AllotMode Allocation mode `classic` or `pdf`
estimator (series PriceEst enum from AustrianTradingMachine/LibBrSt/1) : LibBrSt.PriceEst Price density model used when `allot == pdf`
cdfSteps (series int) : int Simpson integration resolution (even ≥2)
split (series SplitMode) : SplitMode Buy/Sell split strategy per bar
trendLen (series int) : int Look‑back length for trend factor (≥1)
maxBkt (series int) : int User-defined number of buckets (unclamped)
aBuy (array) : array Buy volume per bucket
aSell (array) : array Sell volume per bucket
cvd (series float) : float Final Cumulative Volume Delta (Total Buy Vol - Total Sell Vol).
cvdHi (series float) : float Running high-water mark of the CVD as volume was added.
cvdLo (series float) : float Running low-water mark of the CVD as volume was added.
poc (series int) : int Index of max‑volume bucket (POC). Is `na` until calculated.
vaUp (series int) : int Index of upper Value‑Area bound. Is `na` until calculated.
vaLo (series int) : int Index of lower value‑Area bound. Is `na` until calculated.
median (series float) : float Median price of the volume distribution. Is `na` until calculated.
vwap (series float) : float Profile VWAP (Volume Weighted Average Price). Is `na` until calculated.
stdDev (series float) : float Standard Deviation of volume around the VWAP. Is `na` until calculated.
skewness (series float) : float Skewness of the volume distribution. Is `na` until calculated.
kurtosis (series float) : float Excess Kurtosis of the volume distribution. Is `na` until calculated.
segments (matrix) : matrix A 2-column matrix where each row is a (startIndex, endIndex) pair of bucket indices. Is `na` until calculated.
Library "LibBrSt"
This is a library for quantitative analysis, designed to estimate
the statistical properties of price movements *within* a single
OHLC bar, without requiring access to tick data. It provides a
suite of estimators based on various statistical and econometric
models, allowing for analysis of intra-bar volatility and
price distribution.
Key Capabilities:
1. **Price Distribution Models (`PriceEst`):** Provides a selection
of estimators that model intra-bar price action as a probability
distribution over the range. This allows for the
calculation of the intra-bar mean (`priceMean`) and standard
deviation (`priceStdDev`) in absolute price units. Models include:
- **Symmetric Models:** `uniform`, `triangular`, `arcsine`,
`betaSym`, and `t4Sym` (Student-t with fat tails).
- **Skewed Models:** `betaSkew` and `t4Skew`, which adjust
their shape based on the Open/Close position.
- **Model Assumptions:** The skewed models rely on specific
internal constants. `betaSkew` uses a fixed concentration
parameter (`BETA_SKEW_CONCENTRATION = 4.0`), and `t4Sym`/`t4Skew`
use a heuristic scaling factor (`T4_SHAPE_FACTOR`)
to map the distribution.
2. **Econometric Log-Return Estimators (`LogEst`):** Includes a set of
econometric estimators for calculating the volatility (`logStdDev`)
and drift (`logMean`) of logarithmic returns within a single bar.
These are unit-less measures. Models include:
- **Parkinson (1980):** A High-Low range estimator.
- **Garman-Klass (1980):** An OHLC-based estimator.
- **Rogers-Satchell (1991):** An OHLC estimator that accounts
for non-zero drift.
3. **Distribution Analysis (PDF/CDF):** Provides functions to work
with the Probability Density Function (`pricePdf`) and
Cumulative Distribution Function (`priceCdf`) of the
chosen price model.
- **Note on `priceCdf`:** This function uses analytical (exact)
calculations for the `uniform`, `triangular`, and `arcsine`
models. For all other models (e.g., `betaSkew`, `t4Skew`),
it uses **numerical integration (Simpson's rule)** as
an approximation of the cumulative probability.
4. **Mathematical Functions:** The library's Beta distribution
models (`betaSym`, `betaSkew`) are supported by an internal
implementation of the natural log-gamma function, which is
based on the Lanczos approximation.
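A minimal sketch of the price-distribution API follows; the import path comes from the enum reference in LibVPrf's docs, and the enum member names follow the model list above:
```pine
//@version=6
indicator("LibBrSt usage sketch", overlay = true)

import AustrianTradingMachine/LibBrSt/1 as brst

// Intra-bar mean and standard deviation (price units) under a
// symmetric triangular distribution assumption.
mu    = brst.priceMean(brst.PriceEst.triangular, 0)
sigma = brst.priceStdDev(brst.PriceEst.triangular, 0)

plot(mu,         "intra-bar mean")
plot(mu + sigma, "+1 sigma")
plot(mu - sigma, "-1 sigma")
```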
---
**DISCLAIMER**
This library is provided "AS IS" and for informational and
educational purposes only. It does not constitute financial,
investment, or trading advice.
The author assumes no liability for any errors, inaccuracies,
or omissions in the code. Using this library to build
trading indicators or strategies is entirely at your own risk.
As a developer using this library, you are solely responsible
for the rigorous testing, validation, and performance of any
scripts you create based on these functions. The author shall
not be held liable for any financial losses incurred directly
or indirectly from the use of this library or any scripts
derived from it.
priceStdDev(estimator, offset)
Estimates **σ̂** (standard deviation) *in price units* for the current
bar, according to the chosen `PriceEst` distribution assumption.
Parameters:
estimator (series PriceEst) : series PriceEst Distribution assumption (see enum).
offset (int) : series int To offset the calculated bar
Returns: series float σ̂ ≥ 0 ; `na` if undefined (e.g. zero range).
priceMean(estimator, offset)
Estimates **μ̂** (mean price) for the chosen `PriceEst` within the
current bar.
Parameters:
estimator (series PriceEst) : series PriceEst Distribution assumption (see enum).
offset (int) : series int To offset the calculated bar
Returns: series float μ̂ in price units.
pricePdf(estimator, price, offset)
Probability-density under the chosen `PriceEst` model.
**Returns 0** when `price` is outside the current bar's [low, high] range.
Parameters:
estimator (series PriceEst) : series PriceEst Distribution assumption (see enum).
price (float) : series float Price level to evaluate.
offset (int) : series int To offset the calculated bar
Returns: series float Density value.
priceCdf(estimator, upper, lower, steps, offset)
Cumulative probability **between** `upper` and `lower` under
the chosen `PriceEst` model. Outside-bar regions contribute zero.
Uses a fast, analytical calculation for Uniform, Triangular, and
Arcsine distributions, and defaults to numerical integration
(Simpson's rule) for more complex models.
Parameters:
estimator (series PriceEst) : series PriceEst Distribution assumption (see enum).
upper (float) : series float Upper Integration Boundary.
lower (float) : series float Lower Integration Boundary.
steps (int) : series int # of sub-intervals for numerical integration (if used).
offset (int) : series int To offset the calculated bar.
Returns: series float Probability mass ∈ [0, 1].
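Continuing the sketch above, `priceCdf` can, for example, estimate how much probability mass a skewed model places in the upper half of the current bar:
```pine
// Mass between the bar midpoint (hl2) and the high under betaSkew;
// 20 Simpson sub-intervals are used for the numerical integration.
pUpper = brst.priceCdf(brst.PriceEst.betaSkew, high, hl2, 20, 0)
bgcolor(pUpper > 0.6 ? color.new(color.teal, 85) : na)
```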
logStdDev(estimator, offset)
Estimates **σ̂** (standard deviation) of *log-returns* for the current bar.
Parameters:
estimator (series LogEst) : series LogEst Distribution assumption (see enum).
offset (int) : series int To offset the calculated bar
Returns: series float σ̂ (unit-less); `na` if undefined.
logMean(estimator, offset)
Estimates μ̂ (mean log-return / drift) for the chosen `LogEst`.
The returned value is consistent with the assumptions of the
selected volatility estimator.
Parameters:
estimator (series LogEst) : series LogEst Distribution assumption (see enum).
offset (int) : series int To offset the calculated bar
Returns: series float μ̂ (unit-less log-return).
Library "LibWght"
This is a library of mathematical and statistical functions
designed for quantitative analysis in Pine Script. Its core
principle is the integration of a custom weighting series
(e.g., volume) into a wide array of standard technical
analysis calculations.
Key Capabilities:
1. **Universal Weighting:** All exported functions accept a `weight`
parameter. This allows standard calculations (like moving
averages, RSI, and standard deviation) to be influenced by an
external data series, such as volume or tick count.
2. **Weighted Averages and Indicators:** Includes a comprehensive
collection of weighted functions:
- **Moving Averages:** `wSma`, `wEma`, `wWma`, `wRma` (Wilder's),
`wHma` (Hull), and `wLSma` (Least Squares / Linear Regression).
- **Oscillators & Ranges:** `wRsi`, `wAtr` (Average True Range),
`wTr` (True Range), and `wR` (High-Low Range).
3. **Volatility Decomposition:** Provides functions to decompose
total variance into distinct components for market analysis.
- **Two-Way Decomposition (`wTotVar`):** Separates variance into
**between-bar** (directional) and **within-bar** (noise)
components.
- **Three-Way Decomposition (`wLRTotVar`):** Decomposes variance
relative to a linear regression into **Trend** (explained by
the LR slope), **Residual** (mean-reversion around the
LR line), and **Within-Bar** (noise) components.
- **Local Volatility (`wLRLocTotStdDev`):** Measures the total
"noise" (within-bar + residual) around the trend line.
4. **Weighted Statistics and Regression:** Provides a robust
function for Weighted Linear Regression (`wLinReg`) and a
full suite of related statistical measures:
- **Between-Bar Stats:** `wBtwVar`, `wBtwStdDev`, `wBtwStdErr`.
- **Residual Stats:** `wResVar`, `wResStdDev`, `wResStdErr`.
5. **Fallback Mechanism:** All functions are designed for reliability.
If the total weight over the lookback period is zero (e.g., in
a no-volume period), the algorithms automatically fall back to
their unweighted, uniform-weight equivalents (e.g., `wSma`
becomes a standard `ta.sma`), preventing errors and ensuring
continuous calculation.
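A minimal sketch of the weighting idea, assuming the library is published under a path like the one below (adjust to the actual import):
```pine
//@version=6
indicator("LibWght usage sketch", overlay = true)

import AustrianTradingMachine/LibWght/1 as w  // import path assumed

// Volume-weighted vs. plain 20-bar mean. When total volume over the
// window is zero, wSma falls back to the arithmetic mean (ta.sma).
plot(w.wSma(close, volume, 20), "wSMA(20)")
plot(ta.sma(close, 20),         "SMA(20)", color = color.gray)
```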
---
**DISCLAIMER**
This library is provided "AS IS" and for informational and
educational purposes only. It does not constitute financial,
investment, or trading advice.
The author assumes no liability for any errors, inaccuracies,
or omissions in the code. Using this library to build
trading indicators or strategies is entirely at your own risk.
As a developer using this library, you are solely responsible
for the rigorous testing, validation, and performance of any
scripts you create based on these functions. The author shall
not be held liable for any financial losses incurred directly
or indirectly from the use of this library or any scripts
derived from it.
wSma(source, weight, length)
Weighted Simple Moving Average (linear kernel).
Parameters:
source (float) : series float Data to average.
weight (float) : series float Weight series.
length (int) : series int Look-back length ≥ 1.
Returns: series float Linear-kernel weighted mean; falls back to
the arithmetic mean if Σweight = 0.
wEma(source, weight, length)
Weighted EMA (exponential kernel).
Parameters:
source (float) : series float Data to average.
weight (float) : series float Weight series.
length (simple int) : simple int Look-back length ≥ 1.
Returns: series float Exponential-kernel weighted mean; falls
back to classic EMA if Σweight = 0.
wWma(source, weight, length)
Weighted WMA (linear kernel).
Parameters:
source (float) : series float Data to average.
weight (float) : series float Weight series.
length (int) : series int Look-back length ≥ 1.
Returns: series float Linear-kernel weighted mean; falls back to
classic WMA if Σweight = 0.
wRma(source, weight, length)
Weighted RMA (Wilder kernel, α = 1/len).
Parameters:
source (float) : series float Data to average.
weight (float) : series float Weight series.
length (simple int) : simple int Look-back length ≥ 1.
Returns: series float Wilder-kernel weighted mean; falls back to
classic RMA if Σweight = 0.
wHma(source, weight, length)
Weighted HMA (linear kernel).
Parameters:
source (float) : series float Data to average.
weight (float) : series float Weight series.
length (int) : series int Look-back length ≥ 1.
Returns: series float Linear-kernel weighted mean; falls back to
classic HMA if Σweight = 0.
wRsi(source, weight, length)
Weighted Relative Strength Index.
Parameters:
source (float) : series float Price series.
weight (float) : series float Weight series.
length (simple int) : simple int Look-back length ≥ 1.
Returns: series float Weighted RSI; uniform if Σw = 0.
wAtr(tr, weight, length)
Weighted ATR (Average True Range).
Implemented as WRMA on *true range*.
Parameters:
tr (float) : series float True Range series.
weight (float) : series float Weight series.
length (simple int) : simple int Look-back length ≥ 1.
Returns: series float Weighted ATR; uniform weights if Σw = 0.
wTr(tr, weight, length)
Weighted True Range over a window.
Parameters:
tr (float) : series float True Range series.
weight (float) : series float Weight series.
length (int) : series int Look-back length ≥ 1.
Returns: series float Weighted mean of TR; uniform if Σw = 0.
wR(r, weight, length)
Weighted High-Low Range over a window.
Parameters:
r (float) : series float High-Low per bar.
weight (float) : series float Weight series.
length (int) : series int Look-back length ≥ 1.
Returns: series float Weighted mean of range; uniform if Σw = 0.
wBtwVar(source, weight, length, biased)
Weighted Between Variance (biased/unbiased).
Parameters:
source (float) : series float Data series.
weight (float) : series float Weight series.
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population (biased); false → sample.
Returns:
variance series float The calculated between-bar variance (σ²btw), either biased or unbiased.
sumW series float The sum of weights over the lookback period (Σw).
sumW2 series float The sum of squared weights over the lookback period (Σw²).
wBtwStdDev(source, weight, length, biased)
Weighted Between Standard Deviation.
Parameters:
source (float) : series float Data series.
weight (float) : series float Weight series.
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population (biased); false → sample.
Returns: series float σbtw; uniform if Σw = 0.
wBtwStdErr(source, weight, length, biased)
Weighted Between Standard Error.
Parameters:
source (float) : series float Data series.
weight (float) : series float Weight series.
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population (biased); false → sample.
Returns: series float √(σ²btw / N_eff); uniform if Σw = 0.
wTotVar(mu, sigma, weight, length, biased)
Weighted Total Variance (= between-group + within-group).
Useful when each bar represents an aggregate with its own
*mean* and pre-estimated σ (e.g., second-level ranges inside a
1-minute bar). Assumes the *weight* series applies to both the
group means and their σ estimates.
Parameters:
mu (float) : series float Group means (e.g., HL2 of 1-second bars).
sigma (float) : series float Pre-estimated σ of each group (same basis).
weight (float) : series float Weight series (volume, ticks, …).
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population (biased); false → sample.
Returns:
varBtw series float The between-bar variance component (σ²btw).
varWtn series float The within-bar variance component (σ²wtn).
sumW series float The sum of weights over the lookback period (Σw).
sumW2 series float The sum of squared weights over the lookback period (Σw²).
wTotStdDev(mu, sigma, weight, length, biased)
Weighted Total Standard Deviation.
Parameters:
mu (float) : series float Group means (e.g., HL2 of 1-second bars).
sigma (float) : series float Pre-estimated σ of each group (same basis).
weight (float) : series float Weight series (volume, ticks, …).
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population (biased); false → sample.
Returns: series float σtot.
wTotStdErr(mu, sigma, weight, length, biased)
Weighted Total Standard Error.
SE = √( total variance / N_eff ) with the same effective sample
size logic as `wster()`.
Parameters:
mu (float) : series float Group means (e.g., HL2 of 1-second bars).
sigma (float) : series float Pre-estimated σ of each group (same basis).
weight (float) : series float Weight series (volume, ticks, …).
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population (biased); false → sample.
Returns: series float √(σ²tot / N_eff).
wLinReg(source, weight, length)
Weighted Linear Regression.
Parameters:
source (float) : series float Data series.
weight (float) : series float Weight series.
length (int) : series int Look-back length ≥ 2.
Returns:
mid series float The estimated value of the regression line at the most recent bar.
slope series float The slope of the regression line.
intercept series float The intercept of the regression line.
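Continuing the sketch above, the regression outputs combine naturally with the residual statistics, e.g. a volume-weighted regression channel:
```pine
[mid, slope, intercept] = w.wLinReg(close, volume, 50)
res = w.wResStdDev(close, volume, mid, slope, 50, false)
plot(mid,           "weighted LR mid")
plot(mid + 2 * res, "upper band")
plot(mid - 2 * res, "lower band")
```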
wResVar(source, weight, midLine, slope, length, biased)
Weighted Residual Variance.
Variance of the residuals around a weighted linear regression –
optionally biased (population) or unbiased (sample).
Parameters:
source (float) : series float Data series.
weight (float) : series float Weighting series (volume, etc.).
midLine (float) : series float Regression value at the last bar.
slope (float) : series float Slope per bar.
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population variance (σ²_P), denominator ≈ N_eff.
false → sample variance (σ²_S), denominator ≈ N_eff - 2.
(Adjusts for 2 degrees of freedom lost to the regression).
Returns:
variance series float The calculated residual variance (σ²res), either biased or unbiased.
sumW series float The sum of weights over the lookback period (Σw).
sumW2 series float The sum of squared weights over the lookback period (Σw²).
wResStdDev(source, weight, midLine, slope, length, biased)
Weighted Residual Standard Deviation.
Parameters:
source (float) : series float Data series.
weight (float) : series float Weight series.
midLine (float) : series float Regression value at the last bar.
slope (float) : series float Slope per bar.
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population (biased); false → sample.
Returns: series float σres; uniform if Σw = 0.
wResStdErr(source, weight, midLine, slope, length, biased)
Weighted Residual Standard Error.
Parameters:
source (float) : series float Data series.
weight (float) : series float Weight series.
midLine (float) : series float Regression value at the last bar.
slope (float) : series float Slope per bar.
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population (biased); false → sample.
Returns: series float √(σ²res / N_eff); uniform if Σw = 0.
wLRTotVar(mu, sigma, weight, midLine, slope, length, biased)
Weighted Linear-Regression Total Variance **around the
window’s weighted mean μ**.
σ²_tot = E_w[σᵢ²] ⟶ *within-group variance*
+ Var_w(rᵢ) ⟶ *residual variance*
+ Var_w(ŷᵢ) ⟶ *trend variance*
where each bar i in the look-back window contributes
m_i = *mean* (e.g. 1-sec HL2)
σ_i = *sigma* (pre-estimated intrabar σ)
w_i = *weight* (volume, ticks, …)
ŷ_i = b₀ + b₁·x (value of the weighted LR line)
r_i = m_i − ŷ_i (orthogonal residual)
Parameters:
mu (float) : series float Per-bar mean m_i.
sigma (float) : series float Pre-estimated σ_i of each bar.
weight (float) : series float Weight series w_i (≥ 0).
midLine (float) : series float Regression value at the latest bar (ŷₙ₋₁).
slope (float) : series float Slope b₁ of the regression line.
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population; false → sample.
Returns:
varRes series float The residual variance component (σ²res).
varWtn series float The within-bar variance component (σ²wtn).
varTrd series float The trend variance component (σ²trd), explained by the linear regression.
sumW series float The sum of weights over the lookback period (Σw).
sumW2 series float The sum of squared weights over the lookback period (Σw²).
wLRTotStdDev(mu, sigma, weight, midLine, slope, length, biased)
Weighted Linear-Regression Total Standard Deviation.
Parameters:
mu (float) : series float Per-bar mean m_i.
sigma (float) : series float Pre-estimated σ_i of each bar.
weight (float) : series float Weight series w_i (≥ 0).
midLine (float) : series float Regression value at the latest bar (ŷₙ₋₁).
slope (float) : series float Slope b₁ of the regression line.
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population; false → sample.
Returns: series float √(σ²tot).
wLRTotStdErr(mu, sigma, weight, midLine, slope, length, biased)
Weighted Linear-Regression Total Standard Error.
SE = √( σ²_tot / N_eff ) with N_eff = (Σw)² / Σw² (like in wster()).
Parameters:
mu (float) : series float Per-bar mean m_i.
sigma (float) : series float Pre-estimated σ_i of each bar.
weight (float) : series float Weight series w_i (≥ 0).
midLine (float) : series float Regression value at the latest bar (ŷₙ₋₁).
slope (float) : series float Slope b₁ of the regression line.
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population; false → sample.
Returns: series float √((σ²res + σ²wtn + σ²trd) / N_eff).
wLRLocTotStdDev(mu, sigma, weight, midLine, slope, length, biased)
Weighted Linear-Regression Local Total Standard Deviation.
Measures the total "noise" (within-bar + residual) around the trend.
Parameters:
mu (float) : series float Per-bar mean m_i.
sigma (float) : series float Pre-estimated σ_i of each bar.
weight (float) : series float Weight series w_i (≥ 0).
midLine (float) : series float Regression value at the latest bar (ŷₙ₋₁).
slope (float) : series float Slope b₁ of the regression line.
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population; false → sample.
Returns: series float √(σ²wtn + σ²res).
wLRLocTotStdErr(mu, sigma, weight, midLine, slope, length, biased)
Weighted Linear-Regression Local Total Standard Error.
Parameters:
mu (float) : series float Per-bar mean m_i.
sigma (float) : series float Pre-estimated σ_i of each bar.
weight (float) : series float Weight series w_i (≥ 0).
midLine (float) : series float Regression value at the latest bar (ŷₙ₋₁).
slope (float) : series float Slope b₁ of the regression line.
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population; false → sample.
Returns: series float √((σ²wtn + σ²res) / N_eff).
wLSma(source, weight, length)
Weighted Least Square Moving Average.
Parameters:
source (float) : series float Data series.
weight (float) : series float Weight series.
length (int) : series int Look-back length ≥ 2.
Returns: series float Least square weighted mean. Falls back
to unweighted regression if Σw = 0.
Library "LogNormal"
A collection of functions used to model skewed distributions as log-normal.
Prices are commonly modeled using log-normal distributions (i.e., Black-Scholes) because they exhibit multiplicative changes with long tails: skewed exponential growth and high variance. This approach is particularly useful for understanding price behavior and estimating risk, assuming continuously compounding returns are normally distributed.
Because log-space analysis is not as direct as using math.log(price), this library extends the Error Functions library to make working with log-normally distributed data as simple as possible.
- - -
QUICK START
Import library into your project
Initialize model with a mean and standard deviation
Pass model params between methods to compute various properties
var LogNorm model = LN.init(arr.avg(), arr.stdev()) // Assumes the library is imported as LN and `arr` is a float array of observations
var mode = model.mode()
Outputs from the model can be adjusted to better fit the data.
var Quantile data = arr.quantiles()
var more_accurate_mode = mode.fit(model, data) // Fits value from model to data
Inputs to the model can also be adjusted to better fit the data.
datum = 123.45
model_equivalent_datum = datum.fit(data, model) // Fits value from data to the model
area_from_zero_to_datum = model.cdf(model_equivalent_datum)
- - -
TYPES
There are two requisite UDTs: LogNorm and Quantile. They are used to pass parameters between functions and are set automatically (see Type Management).
LogNorm
Object for log space parameters and linear space quantiles.
Fields:
mu (float) : Log space mu (µ).
sigma (float) : Log space sigma (σ).
variance (float) : Log space variance (σ²).
quantiles (Quantile) : Linear space quantiles.
Quantile
Object for linear quantiles, most similar to a seven-number summary.
Fields:
Q0 (float) : Smallest Value
LW (float) : Lower Whisker Endpoint
LC (float) : Lower Whisker Crosshatch
Q1 (float) : First Quartile
Q2 (float) : Second Quartile
Q3 (float) : Third Quartile
UC (float) : Upper Whisker Crosshatch
UW (float) : Upper Whisker Endpoint
Q4 (float) : Largest Value
IQR (float) : Interquartile Range
MH (float) : Midhinge
TM (float) : Trimean
MR (float) : Mid-Range
- - -
TYPE MANAGEMENT
These functions reliably initialize and update the UDTs. Because parameterization is interdependent, avoid setting the LogNorm and Quantile fields directly.
init(mean, stdev, variance)
Initializes a LogNorm object.
Parameters:
mean (float) : Linearly measured mean.
stdev (float) : Linearly measured standard deviation.
variance (float) : Linearly measured variance.
Returns: LogNorm Object
set(ln, mean, stdev, variance)
Transforms linear measurements into log space parameters for a LogNorm object.
Parameters:
ln (LogNorm) : Object containing log space parameters.
mean (float) : Linearly measured mean.
stdev (float) : Linearly measured standard deviation.
variance (float) : Linearly measured variance.
Returns: LogNorm Object
quantiles(arr)
Gets empirical quantiles from an array of floats.
Parameters:
arr (array) : Float array object.
Returns: Quantile Object
- - -
DESCRIPTIVE STATISTICS
Using only the initialized LogNorm parameters, these functions compute a model's central tendency and standardized moments.
mean(ln)
Computes the linear mean from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
median(ln)
Computes the linear median from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
mode(ln)
Computes the linear mode from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
variance(ln)
Computes the linear variance from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
skewness(ln)
Computes the linear skewness from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
kurtosis(ln, excess)
Computes the linear kurtosis from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
excess (bool) : Excess Kurtosis (true) or regular Kurtosis (false).
Returns: Between 0 and ∞
hyper_skewness(ln)
Computes the linear hyper skewness from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
hyper_kurtosis(ln, excess)
Computes the linear hyper kurtosis from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
excess (bool) : Excess Hyper Kurtosis (true) or regular Hyper Kurtosis (false).
Returns: Between 0 and ∞
- - -
DISTRIBUTION FUNCTIONS
These wrap Gaussian functions to make working with model space more direct. Because they are contained within a log-normal library, they describe estimations relative to a log-normal curve, even though they fundamentally measure a Gaussian curve.
pdf(ln, x, empirical_quantiles)
A Probability Density Function estimates the probability density. For clarity, density is not a probability.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate for which a density will be estimated.
empirical_quantiles (Quantile) : Quantiles as observed in the data (optional).
Returns: Between 0 and ∞
cdf(ln, x, precise)
A Cumulative Distribution Function estimates the area under a Log-Normal curve between Zero and a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
ccdf(ln, x, precise)
A Complementary Cumulative Distribution Function estimates the area under a Log-Normal curve between a linear X coordinate and Infinity.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
cdfinv(ln, a, precise)
An Inverse Cumulative Distribution Function reverses the Log-Normal cdf() by estimating the linear X coordinate from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Normalized area.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
ccdfinv(ln, a, precise)
An Inverse Complementary Cumulative Distribution Function reverses the Log-Normal ccdf() by estimating the linear X coordinate from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Normalized area.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
cdfab(ln, x1, x2, precise)
A Cumulative Distribution Function from A to B estimates the area under a Log-Normal curve between two linear X coordinates (A and B).
Parameters:
ln (LogNorm) : Object of log space parameters.
x1 (float) : First linear X coordinate.
x2 (float) : Second linear X coordinate.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
ott(ln, x, precise)
A One-Tailed Test transforms a linear X coordinate into an absolute Z Score before estimating the area under a Log-Normal curve between Z and Infinity.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 0.5
ttt(ln, x, precise)
A Two-Tailed Test transforms a linear X coordinate into symmetrical ± Z Scores before estimating the area under a Log-Normal curve from Zero to -Z, and +Z to Infinity.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
ottinv(ln, a, precise)
An Inverse One-Tailed Test reverses the Log-Normal ott() by estimating a linear X coordinate for the right tail from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Half a normalized area.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
tttinv(ln, a, precise)
An Inverse Two-Tailed Test reverses the Log-Normal ttt() by estimating two linear X coordinates from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Normalized area.
precise (bool) : Double precision (true) or single precision (false).
Returns: Linear space tuple of the two estimated X coordinates.
- - -
UNCERTAINTY
Model-based measures of uncertainty, information, and risk.
sterr(sample_size, fisher_info)
The standard error of a sample statistic.
Parameters:
sample_size (float) : Number of observations.
fisher_info (float) : Fisher information.
Returns: Between 0 and ∞
surprisal(p, base)
Quantifies the information content of a single event.
Parameters:
p (float) : Probability of the event.
base (float) : Logarithmic base (optional).
Returns: Between 0 and ∞
entropy(ln, base)
Computes the differential entropy (average surprisal).
Parameters:
ln (LogNorm) : Object of log space parameters.
base (float) : Logarithmic base (optional).
Returns: Between 0 and ∞
perplexity(ln, base)
Computes the average number of distinguishable outcomes from the entropy.
Parameters:
ln (LogNorm) : Object of log space parameters.
base (float) : Logarithmic base used for Entropy (optional).
Returns: Between 0 and ∞
value_at_risk(ln, p, precise)
Estimates a risk threshold under normal market conditions for a given confidence level.
Parameters:
ln (LogNorm) : Object of log space parameters.
p (float) : Probability threshold, a.k.a. the confidence level.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
value_at_risk_inv(ln, value_at_risk, precise)
Reverses the value_at_risk() by estimating the confidence level from the risk threshold.
Parameters:
ln (LogNorm) : Object of log space parameters.
value_at_risk (float) : Value at Risk.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
conditional_value_at_risk(ln, p, precise)
Estimates the average loss beyond a confidence level, aka. expected shortfall.
Parameters:
ln (LogNorm) : Object of log space parameters.
p (float) : Probability threshold, a.k.a. the confidence level.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
conditional_value_at_risk_inv(ln, conditional_value_at_risk, precise)
Reverses the conditional_value_at_risk() by estimating the confidence level of an average loss.
Parameters:
ln (LogNorm) : Object of log space parameters.
conditional_value_at_risk (float) : Conditional Value at Risk.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
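A hedged sketch of the risk workflow: the author/version in the import is a placeholder, and `init` is called with two arguments as in the Quick Start, so `variance` is assumed optional.
```pine
//@version=6
indicator("LogNormal VaR sketch")

import AUTHOR/LogNormal/1 as LN  // placeholder path — substitute the published one

// Rolling sample of gross returns; positive ratios suit a log-normal fit.
var array<float> ratios = array.new<float>()
if bar_index > 0
    ratios.push(close / close[1])
    if ratios.size() > 250
        ratios.shift()

float var95  = na
float cvar95 = na
if ratios.size() > 30
    LN.LogNorm model = LN.init(ratios.avg(), ratios.stdev())
    var95  := LN.value_at_risk(model, 0.95, true)
    cvar95 := LN.conditional_value_at_risk(model, 0.95, true)

plot(var95,  "VaR 95%")
plot(cvar95, "CVaR 95%")
```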
partial_expectation(ln, x, precise)
Estimates the partial expectation of a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and µ
partial_expectation_inv(ln, partial_expectation, precise)
Reverses the partial_expectation() by estimating a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
partial_expectation (float) : Partial Expectation.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
conditional_expectation(ln, x, precise)
Estimates the conditional expectation of a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between X and ∞
conditional_expectation_inv(ln, conditional_expectation, precise)
Reverses the conditional_expectation by estimating a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
conditional_expectation (float) : Conditional Expectation.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
fisher(ln, log)
Computes the Fisher Information Matrix for the distribution, not a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
log (bool) : Sets if the matrix should be in log (true) or linear (false) space.
Returns: FIM for the distribution
fisher(ln, x, log)
Computes the Fisher Information Matrix for a linear X coordinate, not the distribution itself.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate.
log (bool) : Sets if the matrix should be in log (true) or linear (false) space.
Returns: FIM for the linear X coordinate
confidence_interval(ln, x, sample_size, confidence, precise)
Estimates a confidence interval for a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate.
sample_size (float) : Number of observations.
confidence (float) : Confidence level.
precise (bool) : Double precision (true) or single precision (false).
Returns: CI for the linear X coordinate
- - -
CURVE FITTING
An overloaded function that helps transform values between spaces. The primary function uses quantiles, and the overloads wrap the primary function to make working with LogNorm more direct.
fit(x, a, b)
Transforms X coordinate between spaces A and B.
Parameters:
x (float) : Linear X coordinate from space A.
a (LogNorm | Quantile | array) : LogNorm, Quantile, or float array.
b (LogNorm | Quantile | array) : LogNorm, Quantile, or float array.
Returns: Adjusted X coordinate
- - -
EXPORTED HELPERS
Small utilities to simplify extensibility.
z_score(ln, x)
Converts a linear X coordinate into a Z Score.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate.
Returns: Between -∞ and +∞
x_coord(ln, z)
Converts a Z Score into a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
z (float) : Standard normal Z Score.
Returns: Between 0 and ∞
iget(arr, index)
Gets an interpolated value of a *pseudo*-element (a fictional element between real array elements). Useful for quantile mapping.
Parameters:
arr (array) : Float array object.
index (float) : Index of the pseudo element.
Returns: Interpolated value of the array's pseudo-element.
Library "Min_Position_Size_ALL"
getMinPositionSize(symbol_, type_, broker_)
Parameters:
symbol_ (string)
type_ (string)
broker_ (string)
Library "testLib"
TODO: add library description here
mySMA(x)
TODO: add function description here
Parameters:
x (int) : TODO: add parameter x description here
Returns: TODO: add what function returns
Adaptive FoS Library
This library provides adaptive functions that I use in my scripts. For calculations, I use the max_bars_back function with a fixed length of 200 bars to prevent errors when a script tries to access data beyond its available history. This is a key difference from most other adaptive libraries — if you don’t need it, you don’t have to use it.
Some of the adaptive length functions are normalized. In addition to the adaptive length functions, this library includes various methods for calculating moving averages, normalized differences between fast and slow MAs, as well as several normalized oscillators.
Library "TimeSeriesBenchmarkMeasures"
Time Series Benchmark Metrics.
Provides a comprehensive set of functions for benchmarking time series data, allowing you to evaluate the accuracy, stability, and risk characteristics of various models or strategies. The functions cover a wide range of statistical measures, including accuracy metrics (MAE, MSE, RMSE, NRMSE, MAPE, SMAPE), autocorrelation analysis (ACF, ADF), and risk measures (Theil's Inequality, Sharpness, Resolution, Coverage, and Pinball).
___
Reference:
- github.com.
- medium.com.
- www.salesforce.com.
- towardsdatascience.com.
- github.com.
mae(actual, forecasts)
In statistics, mean absolute error (MAE) is a measure of errors between paired observations expressing the same phenomenon. Examples of Y versus X include comparisons of predicted versus observed, subsequent time versus initial time, and one technique of measurement versus an alternative technique of measurement.
Parameters:
actual (array) : List of actual values.
forecasts (array) : List of forecasts values.
Returns: - Mean Absolute Error (MAE).
___
Reference:
- en.wikipedia.org.
- The Orange Book of Machine Learning - Carl McBride Ellis.
mse(actual, forecasts)
The Mean Squared Error (MSE) is a measure of the quality of an estimator. As it is derived from the square of Euclidean distance, it is always a positive value that decreases as the error approaches zero.
Parameters:
actual (array) : List of actual values.
forecasts (array) : List of forecasts values.
Returns: - Mean Squared Error (MSE).
___
Reference:
- en.wikipedia.org.
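A small sketch scoring a naive persistence forecast with the two metrics above; the import path is a placeholder:
```pine
//@version=6
indicator("Forecast accuracy sketch")

import AUTHOR/TimeSeriesBenchmarkMeasures/1 as tsb  // placeholder path

// Score "yesterday's close" as a one-bar-lag forecast of today's close.
if barstate.islast
    array<float> actual   = array.new<float>()
    array<float> forecast = array.new<float>()
    for i = 0 to 49
        actual.push(close[i])
        forecast.push(close[i + 1])  // persistence forecast
    label.new(bar_index, high,
         "MAE = " + str.tostring(tsb.mae(actual, forecast), "#.####") +
         "\nMSE = " + str.tostring(tsb.mse(actual, forecast), "#.####"))
```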
rmse(targets, forecasts, order, offset)
Calculates the Root Mean Squared Error (RMSE) between target observations and forecasts. RMSE is a standard measure of the differences between values predicted by a model and the values actually observed.
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecasts.
order (int) : Model order parameter that determines the starting position in the targets array, `default=0`.
offset (int) : Forecast offset related to target, `default=0`.
Returns: - RMSE value.
nmrse(targets, forecasts, order, offset)
Normalised Root Mean Squared Error.
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecasts.
order (int) : Model order parameter that determines the starting position in the targets array, `default=0`.
offset (int) : Forecast offset related to target, `default=0`.
Returns: - NRMSE value.
rmse_interval(targets, forecasts)
Root Mean Squared Error for a set of interval windows. Computes RMSE by converting interval forecasts (with min/max bounds) into point forecasts using the mean of the interval bounds, then compares against actual target values.
Parameters:
targets (array) : List of target observations.
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - RMSE value for the combined interval list.
mape(targets, forecasts)
Mean Absolute Percentage Error (MAPE).
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecasts.
Returns: - MAPE value.
smape(targets, forecasts, mode)
Symmetric Mean Absolute Percentage Error (SMAPE). Calculates the symmetric mean absolute percentage error between actual targets and forecasts. SMAPE is a common metric for evaluating forecast accuracy, expressed as a percentage; lower values indicate better forecast accuracy.
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecasts.
mode (int) : Type of method: default=0:`sum(abs(Fi-Ti)) / sum(Fi+Ti)` , 1:`mean(abs(Fi-Ti) / ((Fi + Ti) / 2))` , 2:`mean(abs(Fi-Ti) / (abs(Fi) + abs(Ti))) * 100`
Returns: - SMAPE value.
mape_interval(targets, forecasts)
Mean Absolute Percentage Error (MAPE) for a set of interval windows.
Parameters:
targets (array) : List of target observations.
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - MAPE value for the combined interval list.
acf(data, k)
Autocorrelation Function (ACF) for a time series at a specified lag.
Parameters:
data (array) : Sample data of the observations.
k (int) : The lag period for which to calculate the autocorrelation. Must be a non-negative integer.
Returns: - The autocorrelation value at the specified lag, ranging from -1 to 1.
___
The autocorrelation function measures the linear dependence between observations in a time series
at different time lags. It quantifies how well the series correlates with itself at different
time intervals, which is useful for identifying patterns, seasonality, and the appropriate
lag structure for time series models.
ACF values close to 1 indicate strong positive correlation, values close to -1 indicate
strong negative correlation, and values near 0 indicate no linear correlation.
___
Reference:
- statisticsbyjim.com
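Continuing the sketch above, a lag-1 autocorrelation of log returns:
```pine
if barstate.islast
    array<float> rets = array.new<float>()
    for i = 0 to 99
        rets.push(math.log(close[i] / close[i + 1]))
    label.new(bar_index, low,
         "ACF(1) = " + str.tostring(tsb.acf(rets, 1), "#.###"))
```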
acf_multiple(data, k)
Autocorrelation function (ACF) for a time series at a set of specified lags.
Parameters:
data (array) : Sample data of the observations.
k (array) : List of lag periods for which to calculate the autocorrelation. Each lag must be a non-negative integer.
Returns: - List of ACF values for provided lags.
___
The autocorrelation function measures the linear dependence between observations in a time series
at different time lags. It quantifies how well the series correlates with itself at different
time intervals, which is useful for identifying patterns, seasonality, and the appropriate
lag structure for time series models.
ACF values close to 1 indicate strong positive correlation, values close to -1 indicate
strong negative correlation, and values near 0 indicate no linear correlation.
___
Reference:
- statisticsbyjim.com
adfuller(data, n_lag, conf)
Augmented Dickey-Fuller test for stationarity.
Parameters:
data (array) : Data series.
n_lag (int) : Maximum lag.
conf (string) : Confidence Probability level used to test for critical value, (`90%`, `95%`, `99%`).
Returns: - `adf` The test statistic.
- `crit` Critical value for the test statistic at the selected confidence level.
- `nobs` Number of observations used for the ADF regression and calculation of the critical values.
___
The Augmented Dickey-Fuller test is used to determine whether a time series is stationary
or contains a unit root (non-stationary). The null hypothesis is that the series has a unit root
(is non-stationary), while the alternative hypothesis is that the series is stationary.
A stationary time series has statistical properties that do not change over time, making it
suitable for many time series forecasting models. If the test statistic is less than the
critical value, we reject the null hypothesis and conclude the series is stationary.
___
Reference:
- www.jstor.org
- en.wikipedia.org
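Continuing the same sketch, the test unpacks as a three-element tuple, and the statistic is compared against the critical value at the chosen confidence:
```pine
if barstate.islast
    array<float> rets = array.new<float>()
    for i = 0 to 199
        rets.push(math.log(close[i] / close[i + 1]))
    [adf, crit, nobs] = tsb.adfuller(rets, 4, "95%")
    // A test statistic below the critical value rejects the unit-root null.
    label.new(bar_index, high, adf < crit ? "stationary" : "non-stationary")
```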
theils_inequality(targets, forecasts)
Calculates Theil's Inequality Coefficient, a measure of forecast accuracy that quantifies the relative difference between actual and predicted values.
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecast values.
Returns: - Theil's Inequality Coefficient value, value closer to 0 is better.
___
Theil's Inequality Coefficient is calculated as: `sqrt(Sum((y_i - f_i)^2)) / (sqrt(Sum(y_i^2)) + sqrt(Sum(f_i^2)))`
where `y_i` represents actual values and `f_i` represents forecast values.
This metric is bounded between 0 and 1, with 0 indicating perfect forecast accuracy.
___
Reference:
- en.wikipedia.org
sharpness(forecasts)
The average width of the forecast intervals across all observations, representing the sharpness or precision of the predictive intervals.
Parameters:
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - Sharpness The sharpness level, which is the average width of all prediction intervals across the forecast horizon.
___
Sharpness is an important metric for evaluating forecast quality. It measures how narrow or wide the
prediction intervals are. Narrower intervals (a smaller sharpness value) indicate greater precision in
the forecast intervals, while wider intervals (a larger value) suggest less precision.
The sharpness metric is calculated as the mean of the interval widths across all observations, where
each interval width is the difference between the upper and lower bounds of the prediction interval.
Note: This function assumes that the forecasts matrix has at least 2 columns, with the first column
representing the lower bounds and the second column representing the upper bounds of prediction intervals.
___
Reference:
- Hyndman, R. J., & Athanasopoulos, G. (2018). Forecasting: principles and practice. OTexts. otexts.com
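As a sketch of the computation (a hypothetical `f_sharpness`, assuming column 0 holds the lower bounds and column 1 the upper bounds, per the note above):
f_sharpness(matrix<float> forecasts) =>
    float widths = 0.0
    int n = forecasts.rows()
    for i = 0 to n - 1
        widths += forecasts.get(i, 1) - forecasts.get(i, 0)  // interval width: upper minus lower
    widths / n  // mean interval width across all observations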
resolution(forecasts)
Calculates the resolution of forecast intervals, measuring the average absolute difference between individual forecast interval widths and the overall sharpness measure.
Parameters:
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - The average absolute difference between individual forecast interval widths and the overall sharpness measure, representing the resolution of the forecasts.
___
Resolution is a key metric for evaluating forecast quality that measures the consistency of prediction
interval widths. It quantifies how much the individual forecast intervals vary from the average interval
width (sharpness). A low resolution value indicates that the forecast intervals are relatively consistent
across observations, while a high value signals significant variation in interval widths.
The resolution is calculated as the mean absolute deviation of individual interval widths from the
overall sharpness value. This provides insight into the uniformity of the forecast uncertainty
estimates across the forecast horizon.
Note: This function requires the forecasts matrix to have at least 2 columns (min, max) representing
the lower and upper bounds of prediction intervals.
___
Reference:
- (sites.stat.washington.edu)
- (www.jstor.org)
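A minimal sketch of this definition (hypothetical helper, same column convention as above):
f_resolution(matrix<float> forecasts) =>
    int n = forecasts.rows()
    float sharp = 0.0
    for i = 0 to n - 1
        sharp += forecasts.get(i, 1) - forecasts.get(i, 0)
    sharp /= n  // average interval width (the sharpness)
    float dev = 0.0
    for i = 0 to n - 1
        dev += math.abs(forecasts.get(i, 1) - forecasts.get(i, 0) - sharp)
    dev / n  // mean absolute deviation of widths from the sharpness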
coverage(targets, forecasts)
Calculates the coverage probability, which is the percentage of target values that fall within the corresponding forecasted prediction intervals.
Parameters:
targets (array) : List of target values.
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - Percent of target values that fall within their corresponding forecast intervals, expressed as a decimal value between 0 and 1 (or 0% and 100%).
___
Coverage probability is a crucial metric for evaluating the reliability of prediction intervals.
It measures how well the forecast intervals capture the actual observed values. An ideal forecast
should have a coverage probability close to the nominal confidence level (e.g., 90%, 95%, or 99%).
For example, if a 95% prediction interval is used, we expect approximately 95% of the actual
target values to fall within those intervals. If the coverage is significantly lower than the
nominal level, the intervals may be too narrow; if it's significantly higher, the intervals may
be too wide.
Note: This function requires the targets array and forecasts matrix to have the same number of
observations, and the forecasts matrix must have at least 2 columns (min, max) representing
the lower and upper bounds of prediction intervals.
___
Reference:
- (www.jstor.org)
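The computation reduces to a simple hit count, sketched here with a hypothetical helper:
f_coverage(array<float> targets, matrix<float> forecasts) =>
    float hits = 0.0
    int n = targets.size()
    for i = 0 to n - 1
        float t = targets.get(i)
        if t >= forecasts.get(i, 0) and t <= forecasts.get(i, 1)  // target inside [lower, upper]
            hits += 1.0
    hits / n  // fraction of targets covered by their intervals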
pinball(tau, target, forecast)
Pinball loss function, measures the asymmetric loss for quantile forecasts.
Parameters:
tau (float) : The quantile level (between 0 and 1), where 0.5 represents the median.
target (float) : The actual observed value to compare against.
forecast (float) : The forecasted value.
Returns: - The Pinball loss value, which quantifies the distance between the forecast and target relative to the specified quantile level.
___
The Pinball loss function is specifically designed for evaluating quantile forecasts. It is
asymmetric, meaning it penalizes underestimates and overestimates differently depending on the
quantile level being evaluated.
For a given quantile τ, the loss function is defined as:
- If target >= forecast: (target - forecast) * τ
- If target < forecast: (forecast - target) * (1 - τ)
This loss function is commonly used in quantile regression and probabilistic forecasting
to evaluate how well forecasts capture specific quantiles of the target distribution.
___
Reference:
- (www.otexts.com)
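The two-case definition above translates into a one-line Pine Script sketch (hypothetical helper `f_pinball`):
f_pinball(float tau, float target, float forecast) =>
    target >= forecast ? (target - forecast) * tau : (forecast - target) * (1.0 - tau)
For example, with tau = 0.9, an underestimate by 1.0 costs 0.9 while an overestimate by 1.0 costs only 0.1, reflecting that a 90% quantile forecast should rarely be exceeded.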
pinball_mean(tau, targets, forecasts)
Calculates the mean pinball loss for quantile regression.
Parameters:
tau (float) : The quantile level (between 0 and 1), where 0.5 represents the median.
targets (array) : The actual observed values to compare against.
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - The mean pinball loss value across all observations.
___
The pinball_mean() function computes the average Pinball loss across multiple observations,
making it suitable for evaluating overall forecast performance in quantile regression tasks.
This function leverages the asymmetric Pinball loss function to evaluate how well forecasts
capture specific quantiles of the target distribution. The choice of which column from the
forecasts matrix to use depends on the quantile level:
- For τ ≤ 0.5: Uses the first column (min) of forecasts
- For τ > 0.5: Uses the second column (max) of forecasts
This loss function is commonly used in quantile regression and probabilistic forecasting
to evaluate how well forecasts capture specific quantiles of the target distribution.
___
Reference:
- (www.otexts.com)
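A minimal sketch of the averaging and column-selection logic, reusing the hypothetical `f_pinball` helper from the previous entry:
f_pinball_mean(float tau, array<float> targets, matrix<float> forecasts) =>
    int col = tau <= 0.5 ? 0 : 1  // min column for tau <= 0.5, max column otherwise
    float sum = 0.0
    int n = targets.size()
    for i = 0 to n - 1
        sum += f_pinball(tau, targets.get(i), forecasts.get(i, col))
    sum / n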
Correlation HeatMap Matrix Data [TradingFinder]
🔵 Introduction
Correlation is a statistical measure that shows the degree and direction of a linear relationship between two assets.
Its value ranges from -1 to +1: +1 means perfect positive correlation, 0 means no linear relationship, and -1 means perfect negative correlation.
In financial markets, correlation is used for portfolio diversification, risk management, pairs trading, intermarket analysis, and identifying divergences.
Correlation HeatMap Matrix Data TradingFinder is a Pine Script v6 library that calculates and returns raw correlation matrix data between up to 20 symbols. It only provides the data – it does not draw or render the heatmap – making it ideal for use in other scripts that handle visualization or further analysis. The library uses ta.correlation for fast and accurate calculations.
It also includes two helper functions for visual styling :
CorrelationColor(corr) : takes the correlation value as input and generates a smooth gradient color, ranging from strong negative to strong positive correlation.
CorrelationTextColor(corr) : takes the correlation value as input and returns a text color that ensures optimal contrast over the background color.
Library
"Correlation_HeatMap_Matrix_Data_TradingFinder"
CorrelationColor(corr)
Parameters:
corr (float)
CorrelationTextColor(corr)
Parameters:
corr (float)
Data_Matrix(Corr_Period, Sym_1, Sym_2, Sym_3, Sym_4, Sym_5, Sym_6, Sym_7, Sym_8, Sym_9, Sym_10, Sym_11, Sym_12, Sym_13, Sym_14, Sym_15, Sym_16, Sym_17, Sym_18, Sym_19, Sym_20)
Parameters:
Corr_Period (int)
Sym_1 (string)
Sym_2 (string)
Sym_3 (string)
Sym_4 (string)
Sym_5 (string)
Sym_6 (string)
Sym_7 (string)
Sym_8 (string)
Sym_9 (string)
Sym_10 (string)
Sym_11 (string)
Sym_12 (string)
Sym_13 (string)
Sym_14 (string)
Sym_15 (string)
Sym_16 (string)
Sym_17 (string)
Sym_18 (string)
Sym_19 (string)
Sym_20 (string)
🔵 How to use
Import the library into your Pine Script using the import keyword and its full namespace.
Decide how many symbols you want to include in your correlation matrix (up to 20). Each symbol must be provided as a string, for example "FX:EURUSD".
Choose the correlation period (Corr_Period) in bars. This is the lookback window used for the calculation, such as 20, 50, or 100 bars.
Call Data_Matrix(Corr_Period, Sym_1, ..., Sym_20) with your selected parameters. The function will return an array containing the correlation values for every symbol pair (upper triangle of the matrix plus diagonal).
For example:
var string Sym_1 = '' , var string Sym_2 = '' , var string Sym_3 = '' , var string Sym_4 = '' , var string Sym_5 = '' , var string Sym_6 = '' , var string Sym_7 = '' , var string Sym_8 = '' , var string Sym_9 = '' , var string Sym_10 = ''
var string Sym_11 = '', var string Sym_12 = '', var string Sym_13 = '', var string Sym_14 = '', var string Sym_15 = '', var string Sym_16 = '', var string Sym_17 = '', var string Sym_18 = '', var string Sym_19 = '', var string Sym_20 = ''
switch Market
'Forex' => Sym_1 := 'EURUSD' , Sym_2 := 'GBPUSD' , Sym_3 := 'USDJPY' , Sym_4 := 'USDCHF' , Sym_5 := 'USDCAD' , Sym_6 := 'AUDUSD' , Sym_7 := 'NZDUSD' , Sym_8 := 'EURJPY' , Sym_9 := 'EURGBP' , Sym_10 := 'GBPJPY'
,Sym_11 := 'AUDJPY', Sym_12 := 'EURCHF', Sym_13 := 'EURCAD', Sym_14 := 'GBPCAD', Sym_15 := 'CADJPY', Sym_16 := 'CHFJPY', Sym_17 := 'NZDJPY', Sym_18 := 'AUDNZD', Sym_19 := 'USDSEK' , Sym_20 := 'USDNOK'
'Stock' => Sym_1 := 'NVDA' , Sym_2 := 'AAPL' , Sym_3 := 'GOOGL' , Sym_4 := 'GOOG' , Sym_5 := 'META' , Sym_6 := 'MSFT' , Sym_7 := 'AMZN' , Sym_8 := 'AVGO' , Sym_9 := 'TSLA' , Sym_10 := 'BRK.B'
,Sym_11 := 'UNH' , Sym_12 := 'V' , Sym_13 := 'JPM' , Sym_14 := 'WMT' , Sym_15 := 'LLY' , Sym_16 := 'ORCL', Sym_17 := 'HD' , Sym_18 := 'JNJ' , Sym_19 := 'MA' , Sym_20 := 'COST'
'Crypto' => Sym_1 := 'BTCUSD' , Sym_2 := 'ETHUSD' , Sym_3 := 'BNBUSD' , Sym_4 := 'XRPUSD' , Sym_5 := 'SOLUSD' , Sym_6 := 'ADAUSD' , Sym_7 := 'DOGEUSD' , Sym_8 := 'AVAXUSD' , Sym_9 := 'DOTUSD' , Sym_10 := 'TRXUSD'
,Sym_11 := 'LTCUSD' , Sym_12 := 'LINKUSD', Sym_13 := 'UNIUSD', Sym_14 := 'ATOMUSD', Sym_15 := 'ICPUSD', Sym_16 := 'ARBUSD', Sym_17 := 'APTUSD', Sym_18 := 'FILUSD', Sym_19 := 'OPUSD' , Sym_20 := 'USDT.D'
'Custom' => Sym_1 := Sym_1_C , Sym_2 := Sym_2_C , Sym_3 := Sym_3_C , Sym_4 := Sym_4_C , Sym_5 := Sym_5_C , Sym_6 := Sym_6_C , Sym_7 := Sym_7_C , Sym_8 := Sym_8_C , Sym_9 := Sym_9_C , Sym_10 := Sym_10_C
,Sym_11 := Sym_11_C, Sym_12 := Sym_12_C, Sym_13 := Sym_13_C, Sym_14 := Sym_14_C, Sym_15 := Sym_15_C, Sym_16 := Sym_16_C, Sym_17 := Sym_17_C, Sym_18 := Sym_18_C, Sym_19 := Sym_19_C , Sym_20 := Sym_20_C
= Corr.Data_Matrix(Corr_period, Sym_1 ,Sym_2 ,Sym_3 ,Sym_4 ,Sym_5 ,Sym_6 ,Sym_7 ,Sym_8 ,Sym_9 ,Sym_10,Sym_11,Sym_12,Sym_13,Sym_14,Sym_15,Sym_16,Sym_17,Sym_18,Sym_19,Sym_20)
Loop through or index into this array to retrieve each correlation value for your custom layout or logic.
Pass each correlation value to CorrelationColor() to get the corresponding gradient background color, which reflects the correlation’s strength and direction (negative to positive).
For example:
Corr.CorrelationColor(SYM_3_10)
Pass the same correlation value to CorrelationTextColor() to get the correct text color for readability against that background.
For example:
Corr.CorrelationTextColor(SYM_1_1)
Use these colors in a table or label to render your own heatmap or any other visualization you need.
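For instance, a single cell of such a table could be filled like this (a hedged sketch; `SYM_3_10` is one of the values destructured from Data_Matrix in the example above, and the cell coordinates are arbitrary):
var table tbl = table.new(position.top_right, 21, 21)
if barstate.islast
    table.cell(tbl, 10, 3, str.tostring(SYM_3_10, '#.##'), bgcolor = Corr.CorrelationColor(SYM_3_10), text_color = Corr.CorrelationTextColor(SYM_3_10))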
Primes_4
These libraries (Primes_1 -> Primes_4) contain arrays of reduced Prime Numbers to minimize the amount of tokens, allowing more information to be exported.
Values, for example:
7001, 7013, 7019, 7027, 7039, 7043, 7057, 7069, 7079, 7103, 7109, 7121
are reduced to:
7001, 13, 19, 27, 39, 43, 57, 69, 79, 7103, 9, 21
With the restoreValues() function found in this library, the reduced values can be restored back to their original state.
7001, 13, 19, 27, 39, 43, 57, 69, 79, 7103, 9, 21
is restored back to:
7001, 7013, 7019, 7027, 7039, 7043, 7057, 7069, 7079, 7103, 7109, 7121
The libraries contain all Prime Numbers from 2 to 1.340.011
------------------------------------------------------------
Library "Primes_4"
Prime Numbers 1.096.031 - 1.340.011
primes_a()
Prime numbers 1.096.031 - 1.205.999
primes_b()
Prime numbers 1.206.013 - 1.317.989
primes_c()
Prime numbers 1.318.003 - 1.340.011
method restoreValues(iArray, iShow, iFrom, iTo)
restoreValues : Restores reduced prime number values in an array to their original state, for example `7001, 13, 19, 27, 39, 43, 57, 69, 79, 7103, 9, 21` is restored to `7001, 7013, 7019, 7027, 7039, 7043, 7057, 7069, 7079, 7103, 7109, 7121`
Namespace types: array
Parameters:
iArray (array)
iShow (bool)
iFrom (int)
iTo (int)
Returns: Initial array with restored prime number values
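The restoration idea can be sketched as follows. This is inferred from the worked example above, not the library's actual code: a value of 100 or more starts a new hundred-block and is kept as-is, while a smaller value is treated as a suffix within the current block (hypothetical helper `f_restore`):
f_restore(array<int> reduced) =>
    array<int> out = array.new<int>()
    int base = 0
    for i = 0 to reduced.size() - 1
        int v = reduced.get(i)
        if v >= 100
            base := v / 100 * 100  // integer division truncates to the hundred-block
            out.push(v)
        else
            out.push(base + v)  // e.g. base 7100 + 21 -> 7121
    out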
Primes_3
These libraries (Primes_1 -> Primes_4) contain arrays of reduced Prime Numbers to minimize the amount of tokens, allowing more information to be exported.
Values, for example:
7001, 7013, 7019, 7027, 7039, 7043, 7057, 7069, 7079, 7103, 7109, 7121
are reduced to:
7001, 13, 19, 27, 39, 43, 57, 69, 79, 7103, 9, 21
With the restoreValues() function found in the Primes_4 library, the reduced values can be restored back to their original state.
7001, 13, 19, 27, 39, 43, 57, 69, 79, 7103, 9, 21
is restored back to:
7001, 7013, 7019, 7027, 7039, 7043, 7057, 7069, 7079, 7103, 7109, 7121
The libraries contain all Prime Numbers from 2 to 1.340.011
------------------------------------------------------------
Library "Primes_3"
Prime Numbers 713.021 - 1.095.989
primes_a()
Prime numbers 713.021 - 820.997
primes_b()
Prime numbers 821.003 - 928.979
primes_c()
Prime numbers 929.003 - 1.038.953
primes_d()
Prime numbers 1.039.001 - 1.095.989
Primes_2
These libraries (Primes_1 -> Primes_4) contain arrays of reduced Prime Numbers to minimize the amount of tokens, allowing more information to be exported.
Values, for example:
7001, 7013, 7019, 7027, 7039, 7043, 7057, 7069, 7079, 7103, 7109, 7121
are reduced to:
7001, 13, 19, 27, 39, 43, 57, 69, 79, 7103, 9, 21
With the restoreValues() function found in the Primes_4 library, the reduced values can be restored back to their original state.
7001, 13, 19, 27, 39, 43, 57, 69, 79, 7103, 9, 21
is restored back to:
7001, 7013, 7019, 7027, 7039, 7043, 7057, 7069, 7079, 7103, 7109, 7121
The libraries contain all Prime Numbers from 2 to 1.340.011
------------------------------------------------------------
Library "Primes_2"
Prime Numbers 340.007 - 712.981
primes_a()
Prime numbers 340.007 - 441.971
primes_b()
Prime numbers 442.003 - 545.959
primes_c()
Prime numbers 546.001 - 650.987
primes_d()
Prime numbers 651.017 - 712.981
Primes_1
These libraries (Primes_1 -> Primes_4) contain arrays of reduced Prime Numbers to minimize the amount of tokens, allowing more information to be exported.
Values, for example:
7001, 7013, 7019, 7027, 7039, 7043, 7057, 7069, 7079, 7103, 7109, 7121
are reduced to:
7001, 13, 19, 27, 39, 43, 57, 69, 79, 7103, 9, 21
With the restoreValues() function found in the Primes_4 library, the reduced values can be restored back to their original state.
7001, 13, 19, 27, 39, 43, 57, 69, 79, 7103, 9, 21
is restored back to:
7001, 7013, 7019, 7027, 7039, 7043, 7057, 7069, 7079, 7103, 7109, 7121
The libraries contain all Prime Numbers from 2 to 1.340.011
------------------------------------------------------------
Library "Primes_1"
Prime Numbers 2 - 339.991
primes_a()
Prime numbers 2 - 81.689
primes_b()
Prime numbers 81.701 - 175.897
primes_c()
Prime numbers 175.909 - 273.997
primes_d()
Prime numbers 274.007 - 339.991
SITFX_FuturesSpec_v17
SITFX_FuturesSpec_v17 – Universal Futures Contract Library
Full-scale futures contract specification library for Pine Script v6. Covers CME, CBOT, NYMEX, COMEX, CFE, Eurex, ICE, and more – including minis, micros, metals, energies, FX, and bonds.
Key Features:
✅ Instrument‑agnostic: ES/MES, NQ/MNQ, YM/MYM, RTY/M2K, metals, energies, FX, bonds
✅ Full contract data: Tick size, tick value, point value, margins
✅ Continuation‑safe: Single‑line logic, no arrays or continuation errors
✅ Foundation for SITFX tools: Gann, Fibs, structure, and risk modules
Usage example:
import SITFX_FuturesSpec_v17/1 as fs
spec = fs.get(syminfo.root)
label.new(bar_index, high, str.format("{0}: Tick={1}, Value=${2}", spec.name, spec.tickSize, spec.tickValue))
FunctionADF
Library "FunctionADF"
Augmented Dickey-Fuller test (ADF). The ADF test is a statistical method used to assess whether a time series is stationary, meaning its statistical properties (like mean and variance) do not change over time. A time series with a unit root is considered non-stationary and often exhibits non-mean-reverting behavior, which is a key concept in technical analysis.
Reference:
- rtmath.net
- en.wikipedia.org
adftest(data, n_lag, conf)
Augmented Dickey-Fuller test for stationarity.
Parameters:
data (array) : Data series.
n_lag (int) : Maximum lag.
conf (string) : Confidence probability level used to select the critical value (`90%`, `95%`, `99%`).
Returns: - `adf` The test statistic.
- `crit` Critical value for the test statistic at the chosen confidence level.
- `nobs` Number of observations used for the ADF regression and calculation of the critical values.
DoublePatterns
Detects Double Top and Double Bottom patterns from pivot points using structural symmetry, valley/peak depth, and extreme validation. Returns a detailed result object including similarity score, target price, and breakout quality.
WedgePatterns
Detects Rising and Falling Wedge chart patterns using pivot points, trendline convergence, and volume confirmation. Includes adaptive wedge length analysis and a quality score for each match. Returns full wedge geometry and classification via WedgeResult.
HeadShouldersPatterns
Detects Head & Shoulders and Inverse Head & Shoulders chart patterns from pivot point arrays. Includes neckline validation, shoulder symmetry checks, and head extremeness filtering. Returns a detailed result object with structure points, bar indices, and projected price target.
XABCD_Harmonics
Library for detecting harmonic patterns using ZigZag pivots or custom swing points. Supports Butterfly, Gartley, Bat, and Crab patterns with automatic Fibonacci ratio validation and optional D-point projection using extremes. Returns detailed PatternResult including structure points and target projection. Ideal for technical analysis, algorithmic detection, or overlay visualizations.
FastMetrix
Library "FastMetrix"
This is a library I've been tweaking and working with for a while, and I find it useful for getting valuable technical analysis metrics faster (hence the name FastMetrix). A lot of it is personal to my trading style, so sorry if it does not have everything you want. The way I get my variables from the library into a script is by copying the return function into my new script.
TODO: Volatility and short term price analysis functions
slope(source, smoothing)
Parameters:
source (float)
smoothing (int)
integral(topfunction, bottomfunction, start, end)
Parameters:
topfunction (float)
bottomfunction (float)
start (int)
end (int)
deviation(x, y)
Parameters:
x (float)
y (float)
getema(len)
TODO: return important exponential long term moving averages and derivatives/variables
Parameters:
len (simple int)
getsma(len)
TODO: return requested sma
Parameters:
len (int)
kc(mult, len)
TODO: Return Keltner Channels variables and calculations
Parameters:
mult (simple float)
len (simple int)
bollinger(len, mult)
TODO: returns bollinger bands with optimal settings
Parameters:
len (int)
mult (simple float)
volatility(atrlen, smoothing)
TODO: Returns volatility indicators based on atr
Parameters:
atrlen (simple int)
smoothing (int)
premarketfib()
countinday(xcondition)
Parameters:
xcondition (bool)
countinsession(condition, n)
Parameters:
condition (bool)
n (int)
MathConstantsSolarSystem
Library "MathConstantsSolarSystem"
Properties and data for the celestial objects in the Solar System.






















