Bar Magnified Volume Profile/Fixed Range [ChartPrime]
This indicator draws a volume profile by utilizing data from a lower timeframe to get a more accurate representation of where volume occurred on a bar-to-bar basis. The indicator creates a price range and then splits that price range into 100 grids by default. It then drops down to a lower timeframe, approximately 16 times lower than the timeframe currently viewed on the chart, parses through all of the lower timeframe bars, and attributes each lower timeframe bar's volume to every grid it touches. The volume is dispersed proportionally among the grids a bar touches, according to the percentage of the candle inside each grid. For example, if one of the lower timeframe bars overlaps 2 of the grids in the profile and 60% of the candle is inside the top grid, 60% of that candle's volume will be attributed to the top grid.
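For illustration only, here is a minimal Pine v5 sketch (not ChartPrime's published source; all names are assumptions) of how one lower-timeframe candle's volume can be split across a single grid cell in proportion to the overlap described above:

f_gridVolume(gridTop, gridBot, barHigh, barLow, barVol) =>
    // portion of the candle's high-low range that falls inside this grid cell
    overlap = math.max(0.0, math.min(gridTop, barHigh) - math.max(gridBot, barLow))
    barRange = barHigh - barLow
    // attribute volume in proportion to the overlap; a zero-range candle gives its volume to the cell containing its price
    barRange > 0 ? barVol * overlap / barRange : (barHigh <= gridTop and barHigh >= gridBot ? barVol : 0.0)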
To make all of this magic happen, this script utilizes a quadratic time complexity algorithm while parsing and attributing the volume to all of the grids. Due to this type of algorithm being used in the script, many of the user inputs have been limited to allow for simplicity, but also to prevent possible errors when executing loops. For the most part, all of the settings have been thoroughly tested and configured with the right amount of limitations to prevent these errors, but also still give the user a broad range of flexibility to adjust the script to their liking.
📗 SETTINGS
Lookback Period: The lookback period determines how many bars back the script will search for the "highest high" and the "lowest low", which are then used to generate the grids in between.
Number Of Levels: This setting determines how many grids there will be within the volume profile/fixed range. This is personal preference; however, it is capped at 100 to prevent time-complexity issues.
Profile Length: This setting allows you to stretch or thin the volume profile. A higher number will stretch it more; vice versa, a smaller number will thin it further. This does not change the volume profile's results or values, only its visual appearance.
Profile Offset: This setting allows you to offset the profile to the left or right, in the event the user does not like the default location of the profile. A higher number will shift it to the right; vice versa, a lower number will shift it to the left. This is personal preference and does not affect the results or values of the profile.
🧰 UTILITY
The volume profile/fixed range can be used in many ways. One of the most popular methods is to identify high volume areas on the chart to be used as trade entries or exits in the event of the price revisiting those high volume areas. Take this picture as an example. The image clearly demonstrates how the 2 highest-volume areas within this magnified volume profile also line up with strong areas of support and resistance in the market.
Here are some other useful methods of using the volume profile/fixed range
Identify Key Support and Resistance Levels for Setups
Determine Logical Take Profits and Stop Losses
Calculate Initial R Multiplier
Identify Balanced vs Imbalanced Markets
Determine Strength of Trends
[blackcat] L1 Adaptive Choppiness Index
Level: 1
Background
I have been working with choppiness index type indicators for a long time. However, there are several problems with the traditional one.
Function
One of the issues with the conventional choppiness index is that the noise, or ripple, is too obvious. I considered several ways to smooth it. As you may know, the choppiness index is a "one line" indicator, so there is little freedom to change it too much. I therefore introduced an adaptation algorithm to make the "length" parameter adaptive, which smooths the choppiness index to some degree. Meanwhile, I use ALMA to smooth the output again.
Remarks
I used my published dc_ta lib, which collects several dominant cycle algorithms from Ehlers, to make many indicators adaptive.
Feedbacks are appreciated.
Strategy Myth-Busting #7 - MACDBB+SSL+VSF - [MYN]
This is part of a new series we are calling "Strategy Myth-Busting", where we take open public manual trading strategies and automate them. The goal is not only to validate the authenticity of the claims but to provide an automated version for traders who wish to trade autonomously.
The seventh one we are automating is the "Magic MACD Indicator: Crazy Accurate Scalping Trading Strategy (74% Win Rate)" strategy from "TradeIQ", who claims to have backtested this manually and achieved 427% profit with a 74% win rate over 100 trades in just 4 months. I was unable to emulate these results consistently after accounting for slippage and commission, but even so the results, and especially the high win rate and low drawdown, are pretty impressive and quite respectable.
This strategy uses a combination of 3 open-source public indicators:
AK MACD BB v 1.00 by Algokid
SSL Hybrid by Mihkel00
Volume Strength Finder by Saravanan_Ragavan
This is considered a trend-following strategy. AK MACD BB is used as the primary short-term trend direction indicator, with an interesting approach of using Bollinger Bands to define an upper and lower range: when the MACD goes above the upper Bollinger Band it is indicative of an uptrend, whereas when the MACD is below the lower Bollinger Band it is indicative of a downtrend. To eliminate false signals, SSL Hybrid is used as a trend confirmation filter, confirming and eliminating false signals from the MACD BB. It does this by validating that the price action is above the EMA and the SSL is positive, which is a confirmation of an uptrend. When the price action is below the EMA and the SSL is negative, that is a confirmation of a downtrend. To avoid taking trades during ranging markets, VSF Buyer's Strength is used: the buyers'/sellers' strength must be above 50% or the trade will not be initiated.
Trading Rules
5 min candles but other lower time frames even below 5m work quite well too.
Best results can be found by tweaking these 2 input parameters:
Number Of bars to look back to ensure MACD isn't above/below Zero Line
Number Of bars back to look for SSL pullback
Long Entry when these conditions are true
AK MACD BB issues a new continuation long signal. A new green circle must appear on the indicator, and these circles should not be touching across the zero level while they were previously red.
SSL Hybrid price action closes above the EMA and the line is blue, and then it creates a pullback. The pullback is confirmed when the color changes from blue to gray or from blue to red.
VSF Buyers strength above 50% at the time the MACD indicator issues a new long signal.
Short Entry when these conditions are true
AK MACD BB issues a new continuation short signal. A new red circle must appear on the indicator and these circles should not be touching across the zero level while they were previously green
SSL Hybrid price action closes below the EMA and the line is red, and then it has to create a pullback. The pullback is confirmed when the color changes from red to gray or from red to blue.
VSF Sellers strength above 50% at the time the MACD indicator issues a new short signal.
Stop Loss at EMA Line with TP Target 1.5x the risk
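As a rough illustration of the last rule, here is a minimal Pine v5 sketch (long side only; the EMA length is an assumption, not taken from the strategy) of deriving the target from the stop distance:

emaLine = ta.ema(close, 200)           // stop loss sits at the EMA line
entryPrice = close
stopPrice = emaLine
risk = entryPrice - stopPrice
targetPrice = entryPrice + 1.5 * risk  // take profit at 1.5x the risk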
If you know of or have a strategy you want to see myth-busted or just have an idea for one, please feel free to message me.
Chatterjee Correlation
This is my first attempt at implementing a statistical method. This problem was given to me by @lejmer (who also helped me later on with building more efficient code to achieve this) when we were debating the need for higher resource allocation to run scripts so they can run longer and faster. The major problem faced by those who want to implement statistics-based methods is that they run out of processing time or need to limit the data samples. My point was that such things need to be implemented with an algorithm which suits Pine instead of trying to port Python code directly. And yes, I am able to demonstrate that with this implementation of the Chatterjee Correlation.
🎲 What is Chatterjee Correlation?
The Chatterjee rank Correlation Coefficient (CCC) is a method developed by Sourav Chatterjee which can be used to study non-linear correlation between two series.
Full documentation on the method can be found here:
arxiv.org
In short, the formula we are implementing here is the one given in step 5 of the algorithm below.
Algorithm can be simplified as follows:
1. Get the ranks of X
2. Get the ranks of Y
3. Sort the ranks of Y in the order of X (let's call this SortedYIndices)
4. Calculate the sum of absolute differences between adjacent Y ranks in SortedYIndices (let's call it SumOfAdjacentSortedIndices)
5. And finally, the correlation coefficient can be calculated using the simple formula
CCC = 1 - (3*SumOfAdjacentSortedIndices)/(n^2 - 1)
🎲 Looks simple? What is the catch?
A mistake many people make here is thinking in Python/Java/C etc. while coding in Pine. This makes the code less efficient if it involves arrays and loops. The simple code may look something like this:
var xArray = array.new<float>()
var yArray = array.new<float>()
array.push(xArray, x)                        // x and y are the two series being compared
array.push(yArray, y)
n = array.size(xArray)
sortX = array.sort_indices(xArray)           // indices that sort X ascending
sortY = array.sort_indices(yArray)           // indices that sort Y ascending
SumOfAdjacentSortedIndices = 0.0
index = array.get(sortX, 0)
for i = 1 to n > 1 ? n - 1 : na
    indexNext = array.get(sortX, i)
    // the rank of Y at an original position is that position's location inside sortY
    SumOfAdjacentSortedIndices += math.abs(array.indexof(sortY, indexNext) - array.indexof(sortY, index))
    index := indexNext
correlation = 1 - 3 * SumOfAdjacentSortedIndices / (math.pow(n, 2) - 1)
But the problem here is the number of loops run. Remember, Pine executes the code on every bar. Loops are run inside array.sort_indices, and another loop is run to calculate SumOfAdjacentSortedIndices. Due to this, the chances of the script throwing runtime errors for running too long are pretty high. This greatly limits the number of samples against which we can run the study. The options to overcome this are:
Limit the sample size and calculate only between certain bars - this is not ideal as smaller sets are more likely to yield false or inconsistent results.
Start thinking in pine instead of python and code in such a way that it is optimised for pine. - This is exactly what we have done in the published code.
🎲 How to think in Pine?
In order to think in Pine, you should try to eliminate loops as much as possible, especially on data which is continuously growing.
My first thought was that sorting takes lots of time and that I needed a better way to sort a series, especially when it is a growing data set. Hence, I came up with this library, which implements Binary Insertion Sort.
Replacing array.sort_indices with binary insertion sort greatly reduces the number of loops run on each bar. In binary insertion sort, the array remains sorted: any item we add is placed into the existing sort order, so there is no need to run a separate sort. This allows us to work with bigger data sets and to utilise the full 20,000 bars for calculation instead of a few hundred.
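For illustration, a minimal Pine v5 sketch (not the published library; names are assumptions) of inserting a new value into an already-sorted array with a binary search, so no full re-sort is ever needed:

f_binaryInsert(arr, val) =>
    lo = 0
    hi = array.size(arr)
    while lo < hi
        mid = (lo + hi) / 2                  // integer division keeps the index an int
        if array.get(arr, mid) < val
            lo := mid + 1
        else
            hi := mid
    array.insert(arr, lo, val)               // the array stays sorted after every insertion
    lo                                       // position at which the value was inserted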
However, the last loop, where we calculate SumOfAdjacentSortedIndices, is not easily replaceable. Hence, we only run these iterations on certain bars (even though we use the complete sample size). Plots are made only for those bars where the results need to be printed.
🎲 Implementation
The current implementation is limited to a few combinations of X and a fixed Y. But I will be converting this into a library soon, which means programmers can plug in any X and Y and get the correlation.
Our X here can be
Average volume
ATR
And our Y is distance of price from moving average - which identifies trend.
Thus, the indicator here helps to understand the correlation coefficient between volume and trend, or volatility and trend, for a given ticker and timeframe. A value closer to 1 means highly correlated and a value closer to 0 means least correlated. Please note that this method will not tell you how these values are correlated. That is, we will not be able to know whether higher volume leads to a stronger or weaker trend. But we can say whether volume impacts trend or not.
Please note that the values can differ greatly across timeframes. For example, the 1D timeframe may show a higher correlation coefficient while the 1m timeframe shows a lower one. This would mean volume-to-trend correlation is higher on the 1D timeframe and lower on the lower timeframes.
Extreme Volume Support Resistance Levels
Extreme Volume Support Resistance Levels are S/R levels (zones, basically) based on extreme volume.
Settings:
Lookback -- number of bars, which algorithm will be using;
Volume Threshold Period -- period of the MA (Volume MA) which smooths volume in order to find the extremes;
Volume Threshold Multiplier -- multiplier for the Volume MA, which "lifts" the Volume MA and thus provides the algorithm with more accurate extreme volume readings;
Number of zones to show -- number of last S/R zones, which will be shown on the chart.
REVE Markers
REVE stands for 'Range Extensions Volume Expansions'. It seeks to report the same as the REVE which I published before. However, the code uses a different algorithm to find the 'usual range' or 'usual volume' to which the current range and volume are compared. In the old REVE, a function is coded which mimics a median() function.
In this code the median() function provided in pinescript is used, which makes the code of the actual algorithm nice and short in lines 21 through 27
For example line 23: “morevol=ta.median(curvol , usual)*eventnorm” in which
‘morevol ‘ is the calculated level above which the volume is deemed considerable,
‘curvol’ is the current volume (see line 21); curvol the volume of the previous period.
‘usual’ is the lookback period (see line 8)
‘ta.median(curvol , usual)’ is therefore the median volume in the lookback period
‘eventnorm’ is the percent which sets when “normal” becomes “considerable” (see line 6)
In line 26 the same is done for range.
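Putting those pieces together, a minimal sketch (assumed input names, not the indicator's exact code) of the median-based thresholds for volume and range looks like this:

usual = input.int(20, "lookback period")
eventnorm = input.float(1.5, "multiple of the median that counts as considerable")
curvol = volume
currange = high - low
morevol = ta.median(curvol, usual) * eventnorm      // volume level deemed considerable
morerange = ta.median(currange, usual) * eventnorm  // range level deemed considerable
volEvent = curvol > morevol
rangeEvent = currange > morerange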
The code in lines 30 to 92 concerns the logic to choose the appropriate marker; the markers are plotted in lines 95 through 136.
Using the shapes as provided by Pinescript offers the possibility to give a much better and more meaningful visualization of volume and range events than different colored columns and histograms in the ‘old’ REVE in the below panel (see example chart).
Using the Pinescript function to find the median opens the possibility of letting the user play in the inputs with the lookback period and the norms for considerable and excessive to find a setting he or she likes most.
Using the median instead of the average is necessary in volume and range analysis because these are so volatile. E.g. the range or volume can be 10 times larger in the next period! If you have a few excessive volumes or ranges in the lookback period, the 'average volume or range' is much higher than the 'usual volume or range'. In statistics this is referred to as the outlier problem.
The markers are located on the bottom of the instrument pane. Those indicating volume events (with ‘event’ I mean a considerable or excessive expansion or extension) are colored triangles or squares, triangles indicate direction, squares that the price stays the same. those indicating range events with ‘normal’ volume are crosses, plus-cross means considerable range event and x-cross is excessive event.
The red, fuchsia and maroon triangles and squares indicate a combination of volume and range events. I call this 'effective volume' because more trade leads to shifting prices. The green and blue triangles and squares indicate a volume event with 'normal' ranges. I call this 'ineffective volume' because more volume does not lead to price shifts. Effective volume can be attributed to occasional traders, because these do not care much about the price effect of their orders. The ineffective volume is attributable to institutional traders, because they go to great lengths to hide the size of their selling or buying objective by trading many small amounts in a day. Therefore one can theorize that 'smart money' is active when green and blue markers show up.
There is an option in the inputs to show markers around the candles (or bars). Those above indicate volume events, plus-cross for considerable and x-cross for excessive volume.
Those below the candles (or bars) indicate range events, triangles for direction or a plus-cross when the price stays the same. The small ones indicate considerable range events and the big ones excessive range events. This option can be used for better understanding of the colors of the bottom markers or to check which marker applies to which candle or bar.
If the instrument is without volume, the indicator will show only range markers.
Have fun and take care.
GT 5.1 Strategy
═════════════════════════════════════════════════════════════════════════
█ OVERVIEW
People often look at an indicator in their technical analysis to enter a position. We may also need to look at the signals of one or more indicators to verify the signals given by another. In this context, I developed a strategy to test whether this really works by choosing several indicators that capture trend changes with similar characteristics. Also, since the subject is catching the trend change, I thought it would be right to include an indicator using the Heikin Ashi logic. By averaging and smoothing the market noise, Heikin Ashi makes it easier to detect the direction of the trend and helps to see possible reversal points on the chart. However, it should be noted that Heikin Ashi is a lagging indicator.
I picked 5 different indicators (with similar purposes) and combined them to produce buy and sell signals based on your choice (no repainting). First of all, let's get some information about these indicators, so you will understand why I picked them and what their signals mean.
1 — Coral Trend Indicator by LazyBear
Coral Trend Indicator is a linear combination of moving averages, all obtained by triple or higher order exponential smoothing. The indicator comes with a trend indication which is based on the normalized slope of the plot. The usage of this indicator is simple: when the color of the line is green, the market is in an uptrend; when the color is red, the market is in a downtrend.
As you can see from the original indicator, it is simple to tell whether the market is in an uptrend or a downtrend.
So I added code to detect when the color of the line changes. When it turns from green to red my script gives sell signals; when it turns from red to green it gives buy signals.
I hid the candles to show you more clearly what happens when you choose only the Coral strategy. But sometimes using it by itself is not enough. Even if the green dots turn red, the market may continue in an uptrend. So we need to look at another indicator to confirm our signal.
2 — SSL channel by ErwinBeckers
Known as the SSL, the Semaphore Signal Level channel is an indicator that combines moving averages to provide you with a clear visual signal of price movement dynamics. In short, it's designed to show you when a price trend is forming. This indicator creates a band by calculating the high and low values according to the chosen period. Simply put, if you choose 10 as the period, it calculates a 10-period moving average of the latest 10 highs and a 10-period moving average of the latest 10 lows. If the price falls below the low band, a downtrend begins; if the price closes above the high band, an uptrend begins. Let's look at the original form of the indicator and learn how to use it.
If the red line is below and the green band is above, it means that we are in an uptrend, and if it is the opposite, it means that we are in a downtrend. Therefore, it is logical to enter a position where the trend has changed. So I added code to detect when the crossover has occurred.
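For reference, a minimal Pine v5 sketch of the SSL channel logic described above (assumed period of 10; not Mihkel00's exact code):

len = 10
smaHigh = ta.sma(high, len)
smaLow = ta.sma(low, len)
var int hlv = 0
hlv := close > smaHigh ? 1 : close < smaLow ? -1 : hlv   // last breakout direction
sslDown = hlv < 0 ? smaHigh : smaLow
sslUp = hlv < 0 ? smaLow : smaHigh
longSignal = ta.crossover(sslUp, sslDown)                // trend flips up
shortSignal = ta.crossunder(sslUp, sslDown)              // trend flips down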
As you can see, my strategy gives you signals when the trend has changed. But sometimes using this indicator by itself is not enough. So let's look at the 2 indicators together on one chart.
Look at the circled area: SSL says the market is in a downtrend, but Coral says it has entered an uptrend. If we just look at the Coral signal, it can mislead us. So it is better to look at another indicator to validate our signals.
3 — Heikin Ashi RSI Oscillator by JayRogers
The Heikin-Ashi technique is used by technical traders to identify a given trend more easily. Heikin-Ashi has a smoother look because it is essentially taking an average of the movement. There is a tendency with Heikin-Ashi for the candles to stay red during a downtrend and green during an uptrend, whereas normal candlesticks alternate color even if the price is moving dominantly in one direction. This indicator recalculates the RSI with the Heikin Ashi logic. Due to smoothing, the bars form with a slight lag, reflecting the trend rather than the exact price movement. Let's look at the original version to understand more clearly: if red bars turn green, an uptrend may begin; if green bars turn red, a downtrend may begin.
As you can see, HARSI gives lots of signals; some of them are really good, but some are not. Because it gives so many signals, I will now change the time period and look at the same chart again.
Now the results are better because of the Heikin Ashi logic. It is not ideal for day traders; it gives more accurate results when the time period is longer. But it can still be useful on short time periods when combined with other indicators, so you may catch trend changes more accurately.
4 — MACD DEMA by ToFFF
This indicator uses a double EMA and the MACD algorithm to analyze the direction of the trend. Though it might seem a tough task to manage trades with the help of MACD DEMA, once you know the proper way to interpret the signal lines it becomes an easy task.
This indicator also smooths the signal lines with a time-series algorithm, which makes the higher time frames more important. So, expecting better results on the lower time frames can result in big losses, as the readings from the MACD DEMA will not be as accurate. In order to understand this indicator, you also have to know how the EMA works.
The exponential moving average tends to give more weight to recent price changes. So, expecting better results when volatility is very high is a risky approach to trading the market. Moreover, the MACD has some lagging issues compared to the EMA, so it is important to use a trading method that focuses on the higher time frames. What does MACD 12 26 Close 9 mean? When the DEMA-9 crosses above the MACD(12,26), this is considered a bearish signal. It means the trend in the stock, its magnitude and/or momentum, is starting to shift course. When the MACD(12,26) crosses above the DEMA-9, this is considered a bullish signal. Let's see this indicator on the chart.
When the blue line crosses over the red line, it is a good time to buy. As you can see from the chart, I put arrows where the crossovers appeared.
When the red line crosses over the blue line, it is a good time to sell or exit the position.
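For reference, a minimal Pine v5 sketch of a DEMA-based MACD cross (assumed 12/26/9 lengths; not ToFFF's exact code):

f_dema(src, len) =>
    e1 = ta.ema(src, len)
    e2 = ta.ema(e1, len)
    2 * e1 - e2
macdLine = f_dema(close, 12) - f_dema(close, 26)
signalLine = f_dema(macdLine, 9)
buySignal = ta.crossover(macdLine, signalLine)    // blue line crossing above the red line
sellSignal = ta.crossunder(macdLine, signalLine)  // red line crossing above the blue line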
5 — WaveTrend Oscillator by LazyBear
This is a technical indicator that creates high and low bands between two values. It then creates a trend indicator that draws waves with highs and lows within these boundaries. WaveTrend is a widely used indicator for finding direction of an asset.
Calculation period: number of candles used to calculate WaveTrend, defaults to 10. Averaging period: number of candles used to average WaveTrend, defaults to 21.
As you can see on the chart, my strategy gives buy or sell signals when the lines cross.
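For reference, a minimal Pine v5 sketch of the classic WaveTrend calculation with those defaults (channel length 10, average length 21); LazyBear's original may differ in details:

n1 = 10                                  // calculation period
n2 = 21                                  // averaging period
ap = hlc3
esa = ta.ema(ap, n1)
d = ta.ema(math.abs(ap - esa), n1)
ci = (ap - esa) / (0.015 * d)
wt1 = ta.ema(ci, n2)
wt2 = ta.sma(wt1, 4)
buySignal = ta.crossover(wt1, wt2)
sellSignal = ta.crossunder(wt1, wt2)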
═════════════════════════════════════════════════════════════════════════
█ HOW TO USE
I hope you understand how the indicators I mentioned above work and what they are used for. Now, I will explain in detail how to use the strategy I have created.
When you enter the settings section, you will see 5 types of indicators. If you want to use the signals of the indicators, simply tick the box next to the indicators. Also, under each option there is an area where you can set the "lookback". This setting is a field that will make the signals overlap when you select more than one option. If you are going to trade with only one option, you should make sure that this field is 0. Otherwise, it may continue to generate as many signals as you choose.
Lets see in chart for easy understanding.
As you can see on the chart, if I choose only HARSI with lookback 0 (HARSI and CORAL should be 1 at minimum because of the algorithm, since we look 1 bar back; the others 0 because we look for crossovers), it will give signals only when the HARSI bar's color changes. But when I change the lookback to 7, it will look like this on the chart.
Now I will choose 2 indicators with their lookback settings at 0.
As you can see, it gives signals only when both of them occur at the same time. But HARSI is an indicator that gives very early signals, so we may enter a position 5-6 bars after the first bar color change. So I will change the HARSI lookback setting to 7. Let's see what happens when we use the lookback option.
So it will be useful to adjust the lookback settings to find the best signals for each time period and each symbol. But it shouldn't be too high, because you can be late to catch the start of the trend.
This is an image where MACD and WaveTrend are used and both lookback options are 6.
Now let's see an example with 3 options chosen, with lookback options 11-1-5.
Now let's talk about the indicator settings. After the strategy options you will see each indicator's settings; you can change them as you wish, and each indicator's signals will change according to your adjustments.
I left the strategy options at their default settings. You can change them manually if you want.
═════════════════════════════════════════════════════════════════════════
█ LIMITATIONS: Don't rely on results from non-standard charts. For example, Heikin Ashi is a technical analysis method used with the traditional candlestick chart. Heikin Ashi vs. Candlestick Chart: the decisive visual difference is that Heikin Ashi flattens the traditional candlestick chart using a modified formula.
The primary advantage of Heikin Ashi is that it makes the chart more reader-friendly and helps users identify and analyze trends.
Because Heikin Ashi provides averaged price information rather than real-time price and reacts slowly to volatility, it is not suitable for scalpers and high-frequency traders. I added the HARSI indicator as a supportive signal because it is useful together with the CORAL and SSL channel indicators. If you change your candle type to Heikin Ashi, your profit will change in a good way, but don't rely on it.
═════════════════════════════════════════════════════════════════════════
█ THANKS:
Special thanks to authors of the scripts that i used.
@LazyBear and @ErwinBeckers and @JayRogers and @ToFFF
═════════════════════════════════════════════════════════════════════════
█ DISCLAIMER
Any trade decisions you make are entirely your own responsibility.
Possible RSI [Loxx]
Possible RSI is a normalized, variety second-pass normalized, Variety RSI with Dynamic Zones and optional High-Pass IIR digital filtering of the source price input. This indicator includes 7 types of RSI.
High-Pass Filter (optional)
The Ehlers Highpass Filter is a technical analysis tool developed by John F. Ehlers. Based on aerospace analog filters, this filter aims at reducing noise from price data. The Ehlers Highpass Filter eliminates wave components with periods longer than a certain value. This reduces lag and makes the oscillator zero-mean. This turns the RSI output into something more similar to Stochastic RSI, where it responds to price very quickly.
First Normalization Pass
RSI (Relative Strength Index) is already normalized. Hence, making a normalized RSI seems like nonsense... if it were not for the "flattening" property of RSI. RSI tends to get flatter and flatter as we increase the calculation period, to the extent that it becomes unusable for levels trading if we increase the calculation period anywhere over the broadly recommended period of 8. In order to make the calculation period have less impact on the significant-levels style of RSI trading, this version applies a sort of "raw stochastic" (min/max) normalization.
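A minimal sketch of that first-pass min/max normalization (assumed names and period; not the indicator's exact code):

len = 32
r = ta.rsi(close, len)
lowR = ta.lowest(r, len)
highR = ta.highest(r, len)
normRsi = highR != lowR ? 100 * (r - lowR) / (highR - lowR) : 50   // "raw stochastic" of the RSI itself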
Second-Pass Variety Normalization Pass
There are three options to choose from:
1. Gaussian (Fisher Transform), this is the default: The Fisher Transform is a function created by John F. Ehlers that converts prices into a Gaussian normal distribution. The normalization helps highlight when prices have moved to an extreme, based on recent prices. This may help in spotting turning points in the price of an asset. It also helps show the trend and isolate the price waves within a trend.
2. Softmax: The softmax function, also known as softargmax: or normalized exponential function, converts a vector of K real numbers into a probability distribution of K possible outcomes. It is a generalization of the logistic function to multiple dimensions, and used in multinomial logistic regression. The softmax function is often used as the last activation function of a neural network to normalize the output of a network to a probability distribution over predicted output classes, based on Luce's choice axiom.
3. Regular Normalization (deviations about the mean): Converts a vector of K real numbers into a probability distribution of K possible outcomes without using the log-sigmoidal transformation as is done with Softmax. This is basically Softmax without the last step.
Dynamic Zones
As explained in "Stocks & Commodities V15:7 (306-310): Dynamic Zones by Leo Zamansky, Ph .D., and David Stendahl"
Most indicators use a fixed zone for buy and sell signals. Here’ s a concept based on zones that are responsive to past levels of the indicator.
One approach to active investing employs the use of oscillators to exploit tradable market trends. This investing style follows a very simple form of logic: Enter the market only when an oscillator has moved far above or below traditional trading lev- els. However, these oscillator- driven systems lack the ability to evolve with the market because they use fixed buy and sell zones. Traders typically use one set of buy and sell zones for a bull market and substantially different zones for a bear market. And therein lies the problem.
Once traders begin introducing their market opinions into trading equations, by changing the zones, they negate the system’s mechanical nature. The objective is to have a system automatically define its own buy and sell zones and thereby profitably trade in any market — bull or bear. Dynamic zones offer a solution to the problem of fixed buy and sell zones for any oscillator-driven system.
An indicator’s extreme levels can be quantified using statistical methods. These extreme levels are calculated for a certain period and serve as the buy and sell zones for a trading system. The repetition of this statistical process for every value of the indicator creates values that become the dynamic zones. The zones are calculated in such a way that the probability of the indicator value rising above, or falling below, the dynamic zones is equal to a given probability input set by the trader.
To better understand dynamic zones, let's first describe them mathematically and then explain their use. The dynamic zones definition:
Find V such that:
For dynamic zone buy: P{X <= V}=P1
For dynamic zone sell: P{X >= V}=P2
where P1 and P2 are the probabilities set by the trader, X is the value of the indicator for the selected period and V represents the value of the dynamic zone.
The probability input P1 and P2 can be adjusted by the trader to encompass as much or as little data as the trader would like. The smaller the probability, the fewer data values above and below the dynamic zones. This translates into a wider range between the buy and sell zones. If a 10% probability is used for P1 and P2, only those data values that make up the top 10% and bottom 10% for an indicator are used in the construction of the zones. Of the values, 80% will fall between the two extreme levels. Because dynamic zone levels are penetrated so infrequently, when this happens, traders know that the market has truly moved into overbought or oversold territory.
Calculating the Dynamic Zones
The algorithm for the dynamic zones is a series of steps. First, decide the value of the lookback period t. Next, decide the value of the probability Pbuy for buy zone and value of the probability Psell for the sell zone.
For i=1, to the last lookback period, build the distribution f(x) of the price during the lookback period i. Then find the value Vi1 such that the probability of the price less than or equal to Vi1 during the lookback period i is equal to Pbuy. Find the value Vi2 such that the probability of the price greater or equal to Vi2 during the lookback period i is equal to Psell. The sequence of Vi1 for all periods gives the buy zone. The sequence of Vi2 for all periods gives the sell zone.
In the algorithm description, we have: Build the distribution f(x) of the price during the lookback period i. The distribution here is empirical namely, how many times a given value of x appeared during the lookback period. The problem is to find such x that the probability of a price being greater or equal to x will be equal to a probability selected by the user. Probability is the area under the distribution curve. The task is to find such value of x that the area under the distribution curve to the right of x will be equal to the probability selected by the user. That x is the dynamic zone.
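A minimal sketch of the idea in Pine v5 (assumed oscillator and inputs; the indicator's actual implementation may differ), using rolling percentiles as the dynamic buy and sell zones:

t = input.int(70, "dynamic zone lookback")
pBuy = input.float(10.0, "buy zone probability, %")
pSell = input.float(10.0, "sell zone probability, %")
osc = ta.rsi(close, 14)
buyZone = ta.percentile_linear_interpolation(osc, t, pBuy)          // P{osc <= buyZone} is roughly pBuy
sellZone = ta.percentile_linear_interpolation(osc, t, 100 - pSell)  // P{osc >= sellZone} is roughly pSell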
7 Types of RSI
See here to understand which RSI types are included:
Included:
Bar coloring
4 signal types
Alerts
Loxx's Expanded Source Types
Loxx's Variety RSI
Loxx's Dynamic Zones
CFB-Adaptive Trend Cipher Candles [Loxx]
CFB-Adaptive Trend Cipher Candles is a candle coloring indicator that shows both trend and trend exhaustion using Composite Fractal Behavior price trend analysis. To do this, we first calculate the dynamic period outputs from the CFB algorithm and then inject those period inputs into a correlation function that correlates the input price to the candle index. The closer the correlation is to 1, the lighter the green color, until the color turns yellow, indicating possible upward price exhaustion. The closer the correlation is to -1, the lighter the red color, until it reaches a fuchsia color, indicating downward price exhaustion. Green means uptrend, red means downtrend, yellow means reversal from uptrend to downtrend, and fuchsia means reversal from downtrend to uptrend.
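A minimal sketch of the correlation step (with a fixed period standing in for the CFB-derived one; names are assumptions):

per = input.int(30, "correlation period")            // in the indicator this length comes from the CFB output
trendCorr = ta.correlation(close, bar_index, per)
// near +1: strong uptrend (lighter green / yellow hints at exhaustion); near -1: strong downtrend (lighter red / fuchsia)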
What is Composite Fractal Behavior ( CFB )?
All around you mechanisms adjust themselves to their environment. From simple thermostats that react to air temperature to computer chips in modern cars that respond to changes in engine temperature, r.p.m.'s, torque, and throttle position. It was only a matter of time before fast desktop computers applied the mathematics of self-adjustment to systems that trade the financial markets.
Unlike basic systems with fixed formulas, an adaptive system adjusts its own equations. For example, start with a basic channel breakout system that uses the highest closing price of the last N bars as a threshold for detecting breakouts on the up side. An adaptive and improved version of this system would adjust N according to market conditions, such as momentum, price volatility or acceleration.
Since many systems are based directly or indirectly on cycles, another useful measure of market condition is the periodic length of a price chart's dominant cycle, (DC), that cycle with the greatest influence on price action.
The utility of this new DC measure was noted by author Murray Ruggiero in the January '96 issue of Futures Magazine. In it, Mr. Ruggiero used it to adaptively adjust the value of N in a channel breakout system. He then simulated trading 15 years of D-Mark futures in order to compare its performance to a similar system that had a fixed optimal value of N. The adaptive version produced 20% more profit!
This DC index utilized the popular MESA algorithm (a formulation by John Ehlers adapted from Burg's maximum entropy algorithm, MEM). Unfortunately, the DC approach is problematic when the market has no real dominant cycle momentum, because the mathematics will produce a value whether or not one actually exists! Therefore, we developed a proprietary indicator that does not presuppose the presence of market cycles. It's called CFB (Composite Fractal Behavior) and it works well whether or not the market is cyclic.
CFB examines price action for a particular fractal pattern, categorizes the patterns by size, and then outputs a composite fractal size index. This index is smooth, timely and accurate.
Essentially, CFB reveals the length of the market's trending action time frame. Long trending activity produces a large CFB index and short choppy action produces a small index value. Investors have found many applications for CFB which involve scaling other existing technical indicators adaptively, on a bar-to-bar basis.
Included
Loxx's Expanded Source Types
Related indicators:
Adaptive Trend Cipher [loxx]
Dynamic Zones Polychromatic Momentum Candles
RSI Precision Trend Candles
STD/C-Filtered, N-Order Power-of-Cosine FIR Filter [Loxx]
STD/C-Filtered, N-Order Power-of-Cosine FIR Filter is a discrete-time FIR digital filter that uses the Power-of-Cosine family of FIR filters. This is an N-order algorithm that turns the following indicator from a static maximum of 16 orders into N orders, limited to 50 in the code. You can change the top-end value if you wish to use orders higher than 50, but the signal is likely too noisy at that level. This indicator also includes a clutter filter and a standard deviation filter.
See the static order version of this indicator here:
STD/C-Filtered, Power-of-Cosine FIR Filter
Amplitudes for STD/C-Filtered, N-Order Power-of-Cosine FIR Filter:
What are FIR Filters?
In discrete-time signal processing, windowing is a preliminary signal shaping technique, usually applied to improve the appearance and usefulness of a subsequent Discrete Fourier Transform. Several window functions can be defined, based on a constant (rectangular window), B-splines, other polynomials, sinusoids, cosine-sums, adjustable, hybrid, and other types. The windowing operation consists of multiplying the given sampled signal by the window function. For trading purposes, these FIR filters act as advanced weighted moving averages.
What is Power-of-Sine Digital FIR Filter?
Also called the Cos^alpha Window Family. In this family of windows, changing the value of the parameter alpha generates different windows.
f(n) = math.pow(math.cos(math.pi * n / N), alpha), 0 ≤ |n| ≤ N/2
where alpha takes on integer values and N is an even number
General expanded form:
alpha0 - alpha1 * math.cos(2 * math.pi * n / N)
+ alpha2 * math.cos(4 * math.pi * n / N)
- alpha3 * math.cos(6 * math.pi * n / N)
+ alpha4 * math.cos(8 * math.pi * n / N)
- ...
Special Cases for alpha:
alpha = 0: Rectangular window, this is also just the SMA (not included here)
alpha = 1: MLT sine window (not included here)
alpha = 2: Hann window (raised cosine = cos^2)
alpha = 4: Alternative Blackman (maximized roll-off rate)
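For illustration, a minimal Pine v5 sketch (assumed inputs, not the indicator's code) that evaluates the cos^alpha window directly and applies it as a weighted moving average:

alphaIn = input.int(2, "alpha (2 = Hann)")
winLen = input.int(20, "window length N (even)")
f_powCosMA(src, n, a) =>
    num = 0.0
    den = 0.0
    for i = 0 to n - 1
        w = math.pow(math.cos(math.pi * (i - (n - 1) / 2.0) / n), a)   // cos^alpha(pi*k/N) with k centered on the window
        num += w * src[i]
        den += w
    den != 0 ? num / den : src
filt = f_powCosMA(close, winLen, alphaIn)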
This indicator contains a binomial expansion algorithm to handle N orders of a cosine power series. You can read about how this is done here: The Binomial Theorem
What is Pascal's Triangle and how was it used here?
In mathematics, Pascal's triangle is a triangular array of the binomial coefficients that arises in probability theory, combinatorics, and algebra. In much of the Western world, it is named after the French mathematician Blaise Pascal, although other mathematicians studied it centuries before him in India, Persia, China, Germany, and Italy.
The rows of Pascal's triangle are conventionally enumerated starting with row n = 0 at the top (the 0th row). The entries in each row are numbered from the left beginning with k=0 and are usually staggered relative to the numbers in the adjacent rows. The triangle may be constructed in the following manner: In row 0 (the topmost row), there is a unique nonzero entry 1. Each entry of each subsequent row is constructed by adding the number above and to the left with the number above and to the right, treating blank entries as 0. For example, the initial number in the first (or any other) row is 1 (the sum of 0 and 1), whereas the numbers 1 and 3 in the third row are added to produce the number 4 in the fourth row.
Rows of Pascal's Triangle
0 Order: 1
1 Order: 1 1
2 Order: 1 2 1
3 Order: 1 3 3 1
4 Order: 1 4 6 4 1
5 Order: 1 5 10 10 5 1
6 Order: 1 6 15 20 15 6 1
7 Order: 1 7 21 35 35 21 7 1
8 Order: 1 8 28 56 70 56 28 8 1
9 Order: 1 9 36 84 126 126 84 36 9 1
10 Order: 1 10 45 120 210 252 210 120 45 10 1
11 Order: 1 11 55 165 330 462 462 330 165 55 11 1
12 Order: 1 12 66 220 495 792 924 792 495 220 66 12 1
13 Order: 1 13 78 286 715 1287 1716 1716 1287 715 286 78 13 1
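For illustration, a minimal Pine v5 sketch of building one such row from the ratio C(n,k) = C(n,k-1)*(n-k+1)/k (a helper name I am assuming, not the indicator's binomial-expansion code):

f_pascalRow(n) =>
    row = array.new<float>()
    array.push(row, 1.0)
    for k = 1 to n >= 1 ? n : na
        prev = array.get(row, k - 1)
        array.push(row, prev * (n - k + 1) / k)
    row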
For a 12th order Power-of-Cosine FIR Filter
1. We take the coefficients from the Left side of the 12th row
1 13 78 286 715 1287 1716 1716 1287 715 286 78 13 1
2. We slice those in half to
1 13 78 286 715 1287 1716
3. We reverse the array
1716 1287 715 286 78 13 1
This is our array of alphas: alpha1, alpha2, ... alphaN
4. We then pull alpha one from the previous order, order 11, the middle value
11 Order: 1 11 55 165 330 462 462 330 165 55 11 1
The middle value is 462, this value becomes our alpha0 in the calculation
5. We apply these alphas to the cosine calculations
example: + alpha4 * math.cos(6 * math.pi * n / N)
6. We then divide by the sum of the alphas to derive our final coefficient weighting kernel
**This is only useful for orders that are EVEN. If you use odd ordering, the following are the coefficient outputs, and these aren't useful since they cancel each other out and result in a value of zero. See below for an odd-numbered order and compare it with the even-order amplitude graphic posted above:
What is a Standard Deviation Filter?
If price or output or both don't move more than the (standard deviation) * multiplier then the trend stays the previous bar trend. This will appear on the chart as "stepping" of the moving average line. This works similar to Super Trend or Parabolic SAR but is a more naive technique of filtering.
What is a Clutter Filter?
For our purposes here, this is a filter that compares the slope of the trading filter output to a threshold to determine whether to shift trends. If the slope is up but does not exceed the threshold, then the color is gray and this indicates a chop zone. If the slope is down but does not exceed the threshold, then the color is gray and this indicates a chop zone. Alternatively, if either the up or down slope exceeds the threshold, then the trend turns green for up and red for down. For demonstration purposes, an EMA is used as the moving average. This acts to reduce the noise in the signal.
Included
Bar coloring
Loxx's Expanded Source Types
Signals
Alerts
CFB-Adaptive, Jurik DMX Histogram [Loxx]
Jurik DMX Histogram is the ultra-smooth, low-lag version of your classic DMI indicator. This is a momentum indicator. You can use it standalone or as part of a system with a moving average and a mean-reversion indicator. This indicator has both Composite Fractal Behavior adaptive inputs and fixed inputs; the default is CFB adaptive. Dark green means a strong push up, dark red a strong push down. Light green means a weak push up, and light red means a weak push down.
What is the directional movement index?
The directional movement index (DMI) is an indicator developed by J. Welles Wilder in 1978 that identifies in which direction the price of an asset is moving. The indicator does this by comparing prior highs and lows and drawing two lines: a positive directional movement line ( +DI ) and a negative directional movement line ( -DI ). An optional third line, called the average directional index ( ADX ), can also be used to gauge the strength of the uptrend or downtrend.
When +DI is above -DI , there is more upward pressure than downward pressure in the price. Conversely, if -DI is above +DI , then there is more downward pressure on the price. This indicator may help traders assess the trend direction. Crossovers between the lines are also sometimes used as trade signals to buy or sell.
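For orientation, a minimal Pine v5 sketch using the built-in classic Wilder DMI (not the CFB-adaptive, Jurik-smoothed version this indicator implements):

[diPlus, diMinus, adx] = ta.dmi(14, 14)
upPressure = diPlus > diMinus       // more upward than downward pressure
strongTrend = adx > 25              // a commonly used strength threshold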
What is Composite Fractal Behavior ( CFB )?
All around you mechanisms adjust themselves to their environment. From simple thermostats that react to air temperature to computer chips in modern cars that respond to changes in engine temperature, r.p.m.'s, torque, and throttle position. It was only a matter of time before fast desktop computers applied the mathematics of self-adjustment to systems that trade the financial markets.
Unlike basic systems with fixed formulas, an adaptive system adjusts its own equations. For example, start with a basic channel breakout system that uses the highest closing price of the last N bars as a threshold for detecting breakouts on the up side. An adaptive and improved version of this system would adjust N according to market conditions, such as momentum, price volatility or acceleration.
Since many systems are based directly or indirectly on cycles, another useful measure of market condition is the periodic length of a price chart's dominant cycle, (DC), that cycle with the greatest influence on price action.
The utility of this new DC measure was noted by author Murray Ruggiero in the January '96 issue of Futures Magazine. In it, Mr. Ruggiero used it to adaptively adjust the value of N in a channel breakout system. He then simulated trading 15 years of D-Mark futures in order to compare its performance to a similar system that had a fixed optimal value of N. The adaptive version produced 20% more profit!
This DC index utilized the popular MESA algorithm (a formulation by John Ehlers adapted from Burg's maximum entropy algorithm, MEM). Unfortunately, the DC approach is problematic when the market has no real dominant cycle momentum, because the mathematics will produce a value whether or not one actually exists! Therefore, we developed a proprietary indicator that does not presuppose the presence of market cycles. It's called CFB (Composite Fractal Behavior) and it works well whether or not the market is cyclic.
CFB examines price action for a particular fractal pattern, categorizes the patterns by size, and then outputs a composite fractal size index. This index is smooth, timely and accurate.
Essentially, CFB reveals the length of the market's trending action time frame. Long trending activity produces a large CFB index and short choppy action produces a small index value. Investors have found many applications for CFB which involve scaling other existing technical indicators adaptively, on a bar-to-bar basis.
What is Jurik Volty, used in the Jurik Filter?
One of the lesser known qualities of Jurik smoothing is that the Jurik smoothing process is adaptive. "Jurik Volty" (a sort of market volatility) is what makes Jurik smoothing adaptive. The Jurik Volty calculation can be used both as a standalone indicator and to smooth other indicators that you wish to make adaptive.
What is the Jurik Moving Average?
Have you noticed how moving averages add some lag (delay) to your signals? ... especially when price gaps up or down in a big move, and you are waiting for your moving average to catch up? Wait no more! JMA eliminates this problem forever and gives you the best of both worlds: low lag and smooth lines.
Ideally, you would like a filtered signal to be both smooth and lag-free. Lag causes delays in your trades, and increasing lag in your indicators typically results in lower profits. In other words, latecomers get what's left on the table after the feast has already begun.
Included:
Alerts
Loxx's Expanded Source Types
Signals
Bar coloring
Modified Covariance Autoregressive Estimator of Price [Loxx]
What is the Modified Covariance AR Estimator?
The Modified Covariance AR Estimator uses the modified covariance method to fit an autoregressive (AR) model to the input data. This method minimizes the forward and backward prediction errors in the least squares sense. The input is a frame of consecutive time samples, which is assumed to be the output of an AR system driven by white noise. The block computes the normalized estimate of the AR system parameters, A(z), independently for each successive input.
Characteristics of Modified Covariance AR Estimator
Minimizes the forward prediction error in the least squares sense
Minimizes the forward and backward prediction errors in the least squares sense
High resolution for short data records
Able to extract frequencies from data consisting of p or more pure sinusoids
Does not suffer spectral line-splitting
May produce unstable models
Peak locations slightly dependent on initial phase
Minor frequency bias for estimates of sinusoids in noise
Order must be less than or equal to 2/3 the input frame size
Purpose
This indicator calculates a prediction of price. This will NOT work on all tickers. To see whether this works on a ticker for the settings you have chosen, you must check the label message in the lower right of the chart. The label will show either a pass or a fail: if it passes, it's green; if it fails, it's red. The reason for this is that the Modified Covariance method may produce unstable models.
H(z) = G / A(z) = G / (1 + a(2)z^(-1) + ... + a(p+1)z^(-p))
You specify the order, "ip", of the all-pole model in the Estimation order parameter. To guarantee a valid output, you must set the Estimation order parameter to be less than or equal to two thirds the input vector length.
The output port labeled "a" outputs the normalized estimate of the AR model coefficients in descending powers of z.
The implementation of the Modified Covariance AR Estimator in this indicator is the fast algorithm for the solution of the modified covariance least squares normal equations.
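Once the coefficients are available, the prediction step itself is simple. A minimal sketch (hypothetical helper, not the indicator's fast solver): with H(z) = G/A(z), the next value is approximated by the negated weighted sum of the last p prices:

f_arPredict(coeffs, src) =>
    p = array.size(coeffs)
    pred = 0.0
    for k = 0 to p >= 1 ? p - 1 : na
        pred -= array.get(coeffs, k) * src[k]   // x[t+1] ~ -a(2)*x[t] - a(3)*x[t-1] - ... - a(p+1)*x[t-p+1]
    pred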
Inputs
x - Array of complex data samples X(1) through X(N)
ip - Order of linear prediction model (integer)
Notable local variables
v - Real linear prediction variance at order IP
Outputs
a - Array of complex linear prediction coefficients
stop - value at time of exit, with error message
false - for normal exit (no numerical ill-conditioning)
true - if v is not a positive value
true - if delta and gamma do not lie in the range 0 to 1
errormessage - an error message based on "stop" parameter; this message will be displayed in the lower righthand corner of the chart. If you see a green "passed" then the analysis is valid, otherwise the test failed.
Indicator inputs
LastBar = bars backward from current bar to test estimate reliability
PastBars = how many bars are we going to analyze
LPOrder = Order of Linear Prediction, and for Modified Covariance AR method, this must be less than or equal to 2/3 the input frame size, so this number has a max value of 0.67
FutBars = how many bars you'd like to show in the future. This algorithm will either accept or reject your value input here and then project forward
Further reading
Spectrum Analysis-A Modern Perspective 1380 PROCEEDINGS OF THE IEEE, VOL. 69, NO. 11, NOVEMBER 1981
Related indicators
Levinson-Durbin Autocorrelation Extrapolation of Price
Weighted Burg AR Spectral Estimate Extrapolation of Price
Helme-Nikias Weighted Burg AR-SE Extra. of Price
Itakura-Saito Autoregressive Extrapolation of Price
Modified Covariance Autoregressive Estimator of Price
Real-Fast Fourier Transform of Price Oscillator [Loxx]
Real-Fast Fourier Transform of Price Oscillator is a simple Real-Fast Fourier Transform oscillator. You have the option to turn on an inverse filter as well as min/max filters to fine-tune the oscillator. This oscillator is normalized by default. This indicator is meant to demonstrate how one can easily turn the RFFT algorithm into an oscillator.
What is the Discrete Fourier Transform?
In mathematics, the discrete Fourier transform (DFT) converts a finite sequence of equally-spaced samples of a function into a same-length sequence of equally-spaced samples of the discrete-time Fourier transform (DTFT), which is a complex-valued function of frequency. The interval at which the DTFT is sampled is the reciprocal of the duration of the input sequence. An inverse DFT is a Fourier series, using the DTFT samples as coefficients of complex sinusoids at the corresponding DTFT frequencies. It has the same sample-values as the original input sequence. The DFT is therefore said to be a frequency domain representation of the original input sequence. If the original sequence spans all the non-zero values of a function, its DTFT is continuous (and periodic), and the DFT provides discrete samples of one cycle. If the original sequence is one cycle of a periodic function, the DFT provides all the non-zero values of one DTFT cycle.
What is the Complex Fast Fourier Transform?
The complex Fast Fourier Transform algorithm transforms N real or complex numbers into another N complex numbers. The complex FFT transforms a real or complex signal x in the time domain into a complex two-sided spectrum X in the frequency domain. You must remember that zero frequency corresponds to n = 0, positive frequencies 0 < f < f_c correspond to values 1 ≤ n ≤ N/2 −1, while negative frequencies −fc < f < 0 correspond to N/2 +1 ≤ n ≤ N −1. The value n = N/2 corresponds to both f = f_c and f = −f_c. f_c is the critical or Nyquist frequency with f_c = 1/(2*T) or half the sampling frequency. The first harmonic X corresponds to the frequency 1/(N*T).
The complex FFT requires the list of values (resolution, or N) to be a power of 2. If the input size is not a power of 2, then the input data will be padded with zeros to fit the size of the closest power of 2 upward.
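A minimal sketch (assumed helper name) of that zero-padding step, growing the input array to the next power of two:

f_padToPow2(srcArr) =>
    n = array.size(srcArr)
    p = 1
    while p < n
        p := p * 2                 // closest power of 2 upward
    out = array.copy(srcArr)
    while array.size(out) < p
        array.push(out, 0.0)       // pad with zeros
    out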
What is Real-Fast Fourier Transform?
Has conditions similar to the complex Fast Fourier Transform value, except that the input data must be purely real. If the time series data has the basic type complex64, only the real parts of the complex numbers are used for the calculation. The imaginary parts are silently discarded.
Included
Moving window from the Last Bar setting. You can lock the oscillator in place on the current bar by adding 1 to the Last Bar setting every time a new bar appears.
Variety RSI of Fast Discrete Cosine Transform [Loxx]
Variety RSI of Fast Discrete Cosine Transform is an RSI indicator with 7 types of RSI that is calculated on the Fast Discrete Cosine Transform of the source. The source inputs are 33 different source types from Loxx's Expanded Source Types.
What is Discrete Cosine Transform?
A discrete cosine transform (DCT) expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies. The DCT, first proposed by Nasir Ahmed in 1972, is a widely used transformation technique in signal processing and data compression. It is used in most digital media, including digital images (such as JPEG and HEIF, where small high-frequency components can be discarded), digital video (such as MPEG and H.26x), digital audio (such as Dolby Digital, MP3 and AAC), digital television (such as SDTV, HDTV and VOD), digital radio (such as AAC+ and DAB+), and speech coding (such as AAC-LD, Siren and Opus). DCTs are also important to numerous other applications in science and engineering, such as digital signal processing, telecommunication devices, reducing network bandwidth usage, and spectral methods for the numerical solution of partial differential equations.
Fast Discrete Cosine Transform
The algorithm performs a fast cosine transform of the real function defined by nn samples on the real axis.
Depending on the passed parameters, it can perform both the direct and the inverse transform.
Input parameters:
tnn - number of function values minus one; should be a power of two (1024 here). The algorithm does not check that a correct value is passed.
a - array of real function values (1025 values when tnn = 1024).
InverseFCT - the direction of the transformation. True if reverse, False if direct.
Output parameters: a - the result of the transformation. For more details, see description on the site. www.alglib.net
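As a rough illustration of the direct/inverse call pattern described above, here is a small Python sketch using SciPy's DCT in place of the ALGLIB fastcosinetransform routine. The parameter names mirror the list above; treat this as an approximation, not the library's actual interface or scaling.
import numpy as np
from scipy.fft import dct, idct

tnn = 1024                              # number of function values minus one (a power of two)
a = np.random.randn(tnn + 1)            # tnn + 1 real function values

def fast_cosine_transform(values, inverse_fct=False):
    # inverse_fct plays the role of InverseFCT above: True = inverse, False = direct.
    # norm="ortho" makes the round trip exact; the ALGLIB scaling may differ.
    return idct(values, norm="ortho") if inverse_fct else dct(values, norm="ortho")

spectrum = fast_cosine_transform(a, inverse_fct=False)
recovered = fast_cosine_transform(spectrum, inverse_fct=True)
assert np.allclose(recovered, a)        # direct followed by inverse recovers the input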
Included:
7 types of RSI
33 source inputs from Loxx's Expanded Source Types
2 types of signals
Alerts
Fourier Extrapolation of Variety Moving Averages [Loxx]Fourier Extrapolation of Variety Moving Averages is a Fourier Extrapolation (forecasting) indicator that takes as inputs 38 different types of moving averages along with 33 different types of sources for those moving averages. This is a forecasting indicator of the selected moving average of the selected price of the underlying ticker. This indicator will repaint, so past signals are only as valid as the current bar. This indicator allows for up to 1500 bars between past bars and future projection bars. If the indicator won't load on your chart, check the error message for details on how to fix it; in any case, past bars + future bars must be less than or equal to 1500.
Fourier Extrapolation using the Quinn-Fernandes algorithm is one of several (5-10) methods of signal forecasting that I'll be demonstrating in Pine Script.
What is Fourier Extrapolation?
This indicator uses a multi-harmonic (or multi-tone) trigonometric model of a price series xi, i=1..n, given by:
xi = m + Sum( a*Cos(w*i) + b*Sin(w*i), h=1..H )
Where:
xi - past price at i-th bar, total n past prices;
m - bias;
a and b - scaling coefficients of harmonics;
w - frequency of a harmonic;
h - harmonic number;
H - total number of fitted harmonics.
Fitting this model means finding m, a, b, and w that make the modeled values close to the real values. Finding the harmonic frequencies w is the most difficult part of fitting a trigonometric model. In the case of a Fourier series, these frequencies are set at 2*pi*h/n; but extrapolating with a plain Fourier series would simply repeat the n past prices into the future.
This indicator uses the Quinn-Fernandes algorithm to find the harmonic frequencies. It fits the harmonics of the trigonometric series one by one until the specified total number of harmonics H is reached. After fitting a new harmonic, the algorithm computes the residue between the updated model and the real values and fits the next harmonic to that residue.
See: B. G. Quinn and J. M. Fernandes, "A Fast Efficient Technique for the Estimation of Frequency", Biometrika, Vol. 78, No. 3 (Sep. 1991), pp. 489-497, Oxford University Press.
The indicator has the following input parameters:
src - input source
npast - number of past bars to which the trigonometric series is fitted;
Nfut - number of predicted future bars;
nharm - total number of harmonics in model;
frqtol - tolerance of frequency calculations.
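For readers who want to see how these inputs fit together, here is a short Python sketch of the harmonic-by-harmonic fitting loop described above. For brevity it estimates each frequency from the residue's dominant FFT bin instead of running the full Quinn-Fernandes iteration (so frqtol is not used); it illustrates the structure of the method, not the exact algorithm in the script.
import numpy as np

def fourier_extrapolate(x, nfut=50, nharm=5):
    # Fit m + Sum(a*cos(w*i) + b*sin(w*i)) one harmonic at a time to the residue,
    # then extend the fitted model nfut bars into the future.
    n = len(x)
    i_past = np.arange(n)
    i_all = np.arange(n + nfut)
    model = np.full(n + nfut, x.mean())           # m, the bias term
    residue = x - model[:n]
    for _ in range(nharm):
        spec = np.fft.rfft(residue)
        k = np.argmax(np.abs(spec[1:])) + 1       # dominant non-DC bin of the residue
        w = 2.0 * np.pi * k / n                   # stand-in for the Quinn-Fernandes estimate
        A = np.column_stack([np.cos(w * i_past), np.sin(w * i_past)])
        (a, b), *_ = np.linalg.lstsq(A, residue, rcond=None)   # least-squares a and b
        model += a * np.cos(w * i_all) + b * np.sin(w * i_all)
        residue = x - model[:n]                   # refit the next harmonic to the residue
    return model[:n], model[n:]                   # fitted past bars, projected future bars

past_fit, future = fourier_extrapolate(np.cumsum(np.random.randn(300)), nfut=50, nharm=5)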
Included:
Loxx's Expanded Source Types
Loxx's Moving Averages
Other indicators using this same method
Fourier Extrapolator of Variety RSI w/ Bollinger Bands
Fourier Extrapolator of Price w/ Projection Forecast
Fourier Extrapolator of Price
Loxx's Moving Averages: Detailed explanation of moving averages inside this indicator
Loxx's Expanded Source Types: Detailed explanation of source types used in this indicator
Fourier Extrapolator of Variety RSI w/ Bollinger Bands [Loxx]Fourier Extrapolator of Variety RSI w/ Bollinger Bands is an RSI indicator that shows the original RSI, the Fourier Extrapolation of RSI in the past, and then the projection of the Fourier Extrapolated RSI for the future. This indicator has 8 different types of RSI including a new type of RSI called T3 RSI. The purpose of this indicator is to demonstrate the Fourier Extrapolation method used to model past data and to predict future price movements. This indicator will repaint. If you wish to use this for trading, then make sure to take a screenshot of the indicator when you enter the trade to save your analysis. This is the first of a series of forecasting indicators that can be used in trading. Due to how this indicator draws on the screen, you must choose values of npast and nfut that are equal to or less than 200. This is due to TradingView and Pine Script allowing only 500 lines on the screen at a time. Enjoy!
What is Fourier Extrapolation?
This indicator uses a multi-harmonic (or multi-tone) trigonometric model of a price series xi, i=1..n, given by:
xi = m + Sum( a*Cos(w*i) + b*Sin(w*i), h=1..H )
Where:
xi - past price at i-th bar, total n past prices;
m - bias;
a and b - scaling coefficients of harmonics;
w - frequency of a harmonic;
h - harmonic number;
H - total number of fitted harmonics.
Fitting this model means finding m, a, b, and w that make the modeled values close to the real values. Finding the harmonic frequencies w is the most difficult part of fitting a trigonometric model. In the case of a Fourier series, these frequencies are set at 2*pi*h/n; but extrapolating with a plain Fourier series would simply repeat the n past prices into the future.
This indicator uses the Quinn-Fernandes algorithm to find the harmonic frequencies. It fits the harmonics of the trigonometric series one by one until the specified total number of harmonics H is reached. After fitting a new harmonic, the algorithm computes the residue between the updated model and the real values and fits the next harmonic to that residue.
See: B. G. Quinn and J. M. Fernandes, "A Fast Efficient Technique for the Estimation of Frequency", Biometrika, Vol. 78, No. 3 (Sep. 1991), pp. 489-497, Oxford University Press.
The indicator has the following input parameters:
src - input source
npast - number of past bars to which the trigonometric series is fitted;
Nfut - number of predicted future bars;
nharm - total number of harmonics in model;
frqtol - tolerance of frequency calculations.
Included:
Loxx's Expanded Source Types
Loxx's Variety RSI
Other indicators using this same method
Fourier Extrapolator of Price w/ Projection Forecast
Fourier Extrapolator of Price
Fourier Extrapolator of Price w/ Projection Forecast [Loxx]Due to popular demand, I'm publishing Fourier Extrapolator of Price w/ Projection Forecast. As stated in its twin indicator, this one is also a multi-harmonic (or multi-tone) trigonometric model of a price series xi, i=1..n, given by:
xi = m + Sum( a*Cos(w*i) + b*Sin(w*i), h=1..H )
Where:
xi - past price at i-th bar, total n past prices;
m - bias;
a and b - scaling coefficients of harmonics;
w - frequency of a harmonic;
h - harmonic number;
H - total number of fitted harmonics.
Fitting this model means finding m, a, b, and w that make the modeled values close to the real values. Finding the harmonic frequencies w is the most difficult part of fitting a trigonometric model. In the case of a Fourier series, these frequencies are set at 2*pi*h/n; but extrapolating with a plain Fourier series would simply repeat the n past prices into the future.
This indicator uses the Quinn-Fernandes algorithm to find the harmonic frequencies. It fits the harmonics of the trigonometric series one by one until the specified total number of harmonics H is reached. After fitting a new harmonic, the algorithm computes the residue between the updated model and the real values and fits the next harmonic to that residue.
See: B. G. Quinn and J. M. Fernandes, "A Fast Efficient Technique for the Estimation of Frequency", Biometrika, Vol. 78, No. 3 (Sep. 1991), pp. 489-497, Oxford University Press.
The indicator has the following input parameters:
src - input source
npast - number of past bars to which the trigonometric series is fitted;
Nfut - number of predicted future bars;
nharm - total number of harmonics in model;
frqtol - tolerance of frequency calculations.
The indicator plots two curves: the green/red curve indicates modeled past values and the yellow/fuchsia curve indicates the modeled future values.
The purpose of this indicator is to showcase the Fourier Extrapolator method to be used in future indicators.
Fourier Extrapolator of Price [Loxx]Fourier Extrapolator of Price is a multi-harmonic (or multi-tone) trigonometric model of a price series xi, i=1..n, given by:
xi = m + Sum( a*Cos(w*i) + b*Sin(w*i), h=1..H )
Where:
xi - past price at i-th bar, total n past prices;
m - bias;
a and b - scaling coefficients of harmonics;
w - frequency of a harmonic;
h - harmonic number;
H - total number of fitted harmonics.
Fitting this model means finding m, a, b, and w that make the modeled values close to the real values. Finding the harmonic frequencies w is the most difficult part of fitting a trigonometric model. In the case of a Fourier series, these frequencies are set at 2*pi*h/n; but extrapolating with a plain Fourier series would simply repeat the n past prices into the future.
This indicator uses the Quinn-Fernandes algorithm to find the harmonic frequencies. It fits the harmonics of the trigonometric series one by one until the specified total number of harmonics H is reached. After fitting a new harmonic, the algorithm computes the residue between the updated model and the real values and fits the next harmonic to that residue.
See: B. G. Quinn and J. M. Fernandes, "A Fast Efficient Technique for the Estimation of Frequency", Biometrika, Vol. 78, No. 3 (Sep. 1991), pp. 489-497, Oxford University Press.
The indicator has the following input parameters:
src - input source
npast - number of past bars to which the trigonometric series is fitted;
nharm - total number of harmonics in model;
frqtol - tolerance of frequency calculations.
The indicator plots the modeled past values.
The purpose of this indicator is to showcase the Fourier Extrapolator method to be used in future indicators. While this method can also predict future price movements, for our purpose here we will avoid doing so.
PA-Adaptive Polynomial Regression Fitted Moving Average [Loxx]PA-Adaptive Polynomial Regression Fitted Moving Average is a moving average that is calculated using Polynomial Regression Analysis. The purpose of this indicator is to introduce polynomial fitting that is to be used in future indicators. This indicator also has Phase Accumulation adaptive period inputs. Even though this first indicator is for demonstration purposes only, it's still one of the only viable implementations of Polynomial Regression Analysis on TradingView that is suitable for trading. While this same method can be used to project prices forward, I won't be doing that, since forecasting is generally worthless and causes unavoidable repainting. This indicator only repaints on the current bar. Once the bar closes, any signal on that bar won't change.
For other similar Polynomial Regression Fitted methodologies, see here
Poly Cycle
What is the Phase Accumulation Cycle?
The phase accumulation method of computing the dominant cycle is perhaps the easiest to comprehend. In this technique, we measure the phase at each sample by taking the arctangent of the ratio of the quadrature component to the in-phase component. A delta phase is generated by taking the difference of the phase between successive samples. At each sample we can then look backwards, adding up the delta phases. When the sum of the delta phases reaches 360 degrees, we must have passed through one full cycle, on average. The process is repeated for each new sample.
The phase accumulation method of cycle measurement always uses one full cycle's worth of historical data. This is both an advantage and a disadvantage. The advantage is that the lag in obtaining the answer scales directly with the cycle period. That is, the measurement of a short cycle period has less lag than the measurement of a longer cycle period. However, the number of samples used in making the measurement means the averaging period is variable with the cycle period. Longer averaging reduces the noise level compared to the signal. Therefore, shorter cycle periods necessarily have a higher output signal-to-noise ratio.
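A rough Python sketch of the procedure described above follows. It assumes the in-phase and quadrature components have already been computed elsewhere (for example via a Hilbert transform, which is not shown), so it only illustrates the accumulation step, not the script's full implementation.
import numpy as np

def phase_accumulation_period(in_phase, quadrature, max_period=64):
    # Phase at each sample from the arctangent of quadrature over in-phase.
    phase = np.degrees(np.arctan2(quadrature, in_phase))
    delta = np.diff(phase)
    delta = (delta + 180.0) % 360.0 - 180.0        # wrap each delta phase into [-180, 180)
    total, period = 0.0, max_period
    for count, d in enumerate(np.abs(delta[::-1]), start=1):   # look backwards from the last bar
        total += d
        if total >= 360.0:                          # one full cycle accumulated, on average
            period = count
            break
    return period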
What is Polynomial Regression?
In statistics, polynomial regression is a form of regression analysis in which the relationship between the independent variable x and the dependent variable y is modelled as an nth degree polynomial in x. Polynomial regression fits a nonlinear relationship between the value of x and the corresponding conditional mean of y, denoted E(y |x). Although polynomial regression fits a nonlinear model to the data, as a statistical estimation problem it is linear, in the sense that the regression function E(y | x) is linear in the unknown parameters that are estimated from the data. For this reason, polynomial regression is considered to be a special case of multiple linear regression.
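As a minimal illustration of polynomial fitting used as a moving average, here is a Python sketch; the window, degree, and the choice of evaluating the fit at the window's endpoint are assumptions, not the script's exact implementation.
import numpy as np

def poly_regression_ma(src, window=50, degree=3):
    # Fit a degree-n polynomial to each trailing window of the (already smoothed)
    # source and take the fitted value at the newest bar as the output.
    out = np.full(len(src), np.nan)
    x = np.arange(window)
    for t in range(window - 1, len(src)):
        y = src[t - window + 1 : t + 1]
        coeffs = np.polyfit(x, y, degree)          # least-squares polynomial coefficients
        out[t] = np.polyval(coeffs, x[-1])         # evaluate the fit at the window's last bar
    return out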
Things to know
You can select from 33 source types
The source is smoothed before being injected into the Polynomial fitting algorithm; there are 35+ moving averages to choose from for smoothing
The output of the Polynomial fitting algorithm is then smoothed to create the signal; again there are 35+ moving averages to choose from for smoothing
Included
Alerts
Signals
Bar coloring
VHF-Adaptive, Digital Kahler Variety RSI w/ Dynamic Zones [Loxx]VHF-Adaptive, Digital Kahler Variety RSI w/ Dynamic Zones is an RSI indicator with adaptive inputs, Digital Kahler filtering, and Dynamic Zones. This indicator uses a Vertical Horizontal Filter for calculating the adaptive period inputs and allows the user to select from 7 different types of RSI.
What is VHF Adaptive Cycle?
Vertical Horizontal Filter (VHF) was created by Adam White to identify trending and ranging markets. VHF measures the level of trend activity, similar to ADX DI. Vertical Horizontal Filter does not, itself, generate trading signals, but determines whether signals are taken from trend or momentum indicators. Using this trend information, one is then able to derive an average cycle length.
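For reference, here is a small Python sketch of the commonly published VHF formula: the highest-to-lowest close range over the window divided by the sum of absolute close-to-close changes. Whether the script uses this exact form internally is an assumption.
import numpy as np

def vhf(close, length=28):
    out = np.full(len(close), np.nan)
    abs_change = np.abs(np.diff(close, prepend=close[0]))   # |close - previous close| per bar
    for t in range(length - 1, len(close)):
        window = close[t - length + 1 : t + 1]
        denom = abs_change[t - length + 2 : t + 1].sum()    # sum of absolute changes in the window
        if denom > 0:
            out[t] = (window.max() - window.min()) / denom  # trending = large VHF, choppy = small
    return out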
What is Digital Kahler?
From Philipp Kahler's article for www.traders-mag.com, August 2008. "A Classic Indicator in a New Suit: Digital Stochastic"
Digital Indicators
Whenever you study the development of trading systems in particular, you will be struck in an extremely unpleasant way by the seemingly unmotivated indentations and changes in direction of each indicator. An experienced trader can recognise many false signals of the indicator on the basis of his solid background; a stupid trading system usually falls into any trap offered by the unclear indicator course. This is what motivated me to improve even further this and other indicators with the help of a relatively simple procedure. The goal of this development is to be able to use this indicator in a trading system with as few additional conditions as possible. Discretionary traders will likewise be happy about this clear course, which is not nerve-racking and makes concentrating on the essential elements of trading possible.
How Is It Done?
The digital stochastic is a child of the original indicator. We owe a debt of gratitude to George Lane for his idea to design an indicator which describes the position of the current price within the high-low range of the historical price movement. My contribution to this indicator is the changed pattern which improves the quality of the signal without generating too long delays in giving signals. The trick is the pattern used to generate this "digital" behavior of the indicator. It can be used with most oscillators like RSI or CCI.
First of all, the original is looked at. The indicator always moves between 0 and 100. The precise position of the indicator or its course relative to the trigger line are of no interest to me; I would just like to know whether the indicator is quoted below or above the value 50. This is tantamount to the question of whether the market is just trading above or below the middle of the high-low range of the past few days. If the market trades in the upper half of its high-low range, then the digital stochastic is given the value 1; if the original stochastic is below 50, then the value -1 is given. This leads to a sequence of 1/-1 values – the digital core of the new indicator. These values are subsequently smoothed by means of a short exponential moving average. This way minor false signals are eliminated and the indicator is given its typical form.
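A minimal Python sketch of that transformation, applied to any 0-100 oscillator (the EMA length is illustrative):
import numpy as np

def digital_kahler(osc, ema_length=3):
    # Map the oscillator to +1 above 50 and -1 below, then smooth the +/-1
    # sequence with a short EMA to remove minor false flips.
    digital = np.where(osc > 50.0, 1.0, -1.0)
    alpha = 2.0 / (ema_length + 1.0)
    out = np.empty_like(digital)
    out[0] = digital[0]
    for i in range(1, len(digital)):
        out[i] = alpha * digital[i] + (1.0 - alpha) * out[i - 1]
    return out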
What are Dynamic Zones?
As explained in "Stocks & Commodities V15:7 (306-310): Dynamic Zones by Leo Zamansky, Ph .D., and David Stendahl"
Most indicators use a fixed zone for buy and sell signals. Here’ s a concept based on zones that are responsive to past levels of the indicator.
One approach to active investing employs the use of oscillators to exploit tradable market trends. This investing style follows a very simple form of logic: Enter the market only when an oscillator has moved far above or below traditional trading lev- els. However, these oscillator- driven systems lack the ability to evolve with the market because they use fixed buy and sell zones. Traders typically use one set of buy and sell zones for a bull market and substantially different zones for a bear market. And therein lies the problem.
Once traders begin introducing their market opinions into trading equations, by changing the zones, they negate the system’s mechanical nature. The objective is to have a system automatically define its own buy and sell zones and thereby profitably trade in any market — bull or bear. Dynamic zones offer a solution to the problem of fixed buy and sell zones for any oscillator-driven system.
An indicator’s extreme levels can be quantified using statistical methods. These extreme levels are calculated for a certain period and serve as the buy and sell zones for a trading system. The repetition of this statistical process for every value of the indicator creates values that become the dynamic zones. The zones are calculated in such a way that the probability of the indicator value rising above, or falling below, the dynamic zones is equal to a given probability input set by the trader.
To better understand dynamic zones, let's first describe them mathematically and then explain their use. The dynamic zones definition:
Find V such that:
For dynamic zone buy: P{X <= V}=P1
For dynamic zone sell: P{X >= V}=P2
where P1 and P2 are the probabilities set by the trader, X is the value of the indicator for the selected period and V represents the value of the dynamic zone.
The probability input P1 and P2 can be adjusted by the trader to encompass as much or as little data as the trader would like. The smaller the probability, the fewer data values above and below the dynamic zones. This translates into a wider range between the buy and sell zones. If a 10% probability is used for P1 and P2, only those data values that make up the top 10% and bottom 10% for an indicator are used in the construction of the zones. Of the values, 80% will fall between the two extreme levels. Because dynamic zone levels are penetrated so infrequently, when this happens, traders know that the market has truly moved into overbought or oversold territory.
Calculating the Dynamic Zones
The algorithm for the dynamic zones is a series of steps. First, decide the value of the lookback period t. Next, decide the value of the probability Pbuy for buy zone and value of the probability Psell for the sell zone.
For i=1 to the last lookback period, build the distribution f(x) of the price during the lookback period i. Then find the value Vi1 such that the probability of the price less than or equal to Vi1 during the lookback period i is equal to Pbuy. Find the value Vi2 such that the probability of the price greater or equal to Vi2 during the lookback period i is equal to Psell. The sequence of Vi1 for all periods gives the buy zone. The sequence of Vi2 for all periods gives the sell zone.
In the algorithm description, we have: Build the distribution f(x) of the price during the lookback period i. The distribution here is empirical namely, how many times a given value of x appeared during the lookback period. The problem is to find such x that the probability of a price being greater or equal to x will be equal to a probability selected by the user. Probability is the area under the distribution curve. The task is to find such value of x that the area under the distribution curve to the right of x will be equal to the probability selected by the user. That x is the dynamic zone.
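Putting the steps above into a short Python sketch (using empirical quantiles over a rolling lookback; the parameter names are illustrative):
import numpy as np

def dynamic_zones(indicator, lookback=70, p_buy=0.10, p_sell=0.10):
    # For each bar, build the empirical distribution of the indicator over the
    # trailing lookback; the buy zone is the p_buy quantile (P{X <= V} = Pbuy)
    # and the sell zone is the (1 - p_sell) quantile (P{X >= V} = Psell).
    n = len(indicator)
    buy_zone = np.full(n, np.nan)
    sell_zone = np.full(n, np.nan)
    for t in range(lookback - 1, n):
        window = indicator[t - lookback + 1 : t + 1]
        buy_zone[t] = np.quantile(window, p_buy)
        sell_zone[t] = np.quantile(window, 1.0 - p_sell)
    return buy_zone, sell_zone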
Included:
Bar coloring
4 signal types
Alerts
Loxx's Expanded Source Types
Loxx's Moving Averages
Loxx's Variety RSI
Loxx's Dynamic Zones
True Adaptive-Lookback Phase Change Index [Loxx]Previously I posted a Phase Change Index using Ehlers Autocorrelation Periodogram Algorithm to tease out the adaptive periods. You can find the previous version here: . This new version is also adaptive but uses a different method to derive the adaptive length inputs. This adaptive method derives period inputs by counting pivots from past candles. This version also relies on Jurik Smoothing to generate the final signal. I named this one "true" because I should have specified in the previous PCI's title that it's powered by Ehlers Autocorrelation Periodogram. Additionally, you'll notice the ALB algorithm has changed from other indicators. This is to restrict the possible ALB period outputs to a specific range so the indicator remains usable.
And remember, this is an inverse indicator. This means that small values on the oscillator indicate bullish sentiment and higher values on the oscillator indicate bearish sentiment.
What is the Phase Change Index?
Based on M.H. Pee's TASC article "Phase Change Index".
Prices at any time can be up, down, or unchanged. A period where market prices remain relatively unchanged is referred to as a consolidation. A period that witnesses relatively higher prices is referred to as an uptrend, while a period of relatively lower prices is called a downtrend.
The Phase Change Index ( PCI ) is an indicator designed specifically to detect changes in market phases.
This indicator is made as he describes it, with one deviation: if we follow his formula to the letter, the "trend" is inverted relative to the actual market trend. Because of that, an option to display inverted (and more logical) values is added.
What is the Jurik Moving Average?
Have you noticed how moving averages add some lag (delay) to your signals? ... especially when price gaps up or down in a big move, and you are waiting for your moving average to catch up? Wait no more! JMA eliminates this problem forever and gives you the best of both worlds: low lag and smooth lines.
Ideally, you would like a filtered signal to be both smooth and lag-free. Lag causes delays in your trades, and increasing lag in your indicators typically results in lower profits. In other words, latecomers get what's left on the table after the feast has already begun.
That's why investors, banks and institutions worldwide ask for the Jurik Research Moving Average ( JMA ). You may apply it just as you would any other popular moving average. However, JMA's improved timing and smoothness will astound you.
What is adaptive Jurik volatility?
One of the lesser known qualities of Jurik smoothing is that the Jurik smoothing process is adaptive. "Jurik Volty" (a sort of market volatility) is what makes Jurik smoothing adaptive. The Jurik Volty calculation can be used as both a standalone indicator and to smooth other indicators that you wish to make adaptive.
Included:
Bar coloring
2 signal variations w/ alerts
Variety RSI of Adaptive Lookback Averages [Loxx]Variety RSI of Adaptive Lookback Averages uses an adaptive lookback algorithm to determine dynamic length inputs that are used to smooth the input price source before calculating your choice of 6 different types of RSI. This ALB algorithm counts bars back until X many swing counts are reached.
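A rough Python sketch of that bar-counting idea (the 3-bar pivot definition here is an assumption, not necessarily the one the script uses):
import numpy as np

def adaptive_lookback(high, low, swing_count=5):
    # Walk backwards from the latest bar, counting simple 3-bar swing highs/lows;
    # the number of bars needed to collect swing_count swings becomes the length.
    swings, bars_back = 0, 0
    for i in range(len(high) - 2, 1, -1):          # leave room for the 3-bar pivot test
        bars_back += 1
        swing_high = high[i] > high[i - 1] and high[i] > high[i + 1]
        swing_low = low[i] < low[i - 1] and low[i] < low[i + 1]
        if swing_high or swing_low:
            swings += 1
            if swings >= swing_count:
                break
    return max(bars_back, 2)                       # adaptive period (floor of 2 if few swings found)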
Included:
Bar coloring
2 signal variations w/ alerts
CFB-Adaptive CCI w/ T3 Smoothing [Loxx]CFB-Adaptive CCI w/ T3 Smoothing is a CCI indicator with adaptive period inputs and T3 smoothing. Jurik's Composite Fractal Behavior is used to create the dynamic period input.
What is Composite Fractal Behavior (CFB)?
All around you mechanisms adjust themselves to their environment. From simple thermostats that react to air temperature to computer chips in modern cars that respond to changes in engine temperature, r.p.m.'s, torque, and throttle position. It was only a matter of time before fast desktop computers applied the mathematics of self-adjustment to systems that trade the financial markets.
Unlike basic systems with fixed formulas, an adaptive system adjusts its own equations. For example, start with a basic channel breakout system that uses the highest closing price of the last N bars as a threshold for detecting breakouts on the up side. An adaptive and improved version of this system would adjust N according to market conditions, such as momentum, price volatility or acceleration.
Since many systems are based directly or indirectly on cycles, another useful measure of market condition is the periodic length of a price chart's dominant cycle, (DC), that cycle with the greatest influence on price action.
The utility of this new DC measure was noted by author Murray Ruggiero in the January '96 issue of Futures Magazine. In it, Mr. Ruggiero used it to adaptively adjust the value of N in a channel breakout system. He then simulated trading 15 years of D-Mark futures in order to compare its performance to a similar system that had a fixed optimal value of N. The adaptive version produced 20% more profit!
This DC index utilized the popular MESA algorithm (a formulation by John Ehlers adapted from Burg's maximum entropy algorithm, MEM). Unfortunately, the DC approach is problematic when the market has no real dominant cycle momentum, because the mathematics will produce a value whether or not one actually exists! Therefore, we developed a proprietary indicator that does not presuppose the presence of market cycles. It's called CFB (Composite Fractal Behavior) and it works well whether or not the market is cyclic.
CFB examines price action for a particular fractal pattern, categorizes them by size, and then outputs a composite fractal size index. This index is smooth, timely and accurate
Essentially, CFB reveals the length of the market's trending action time frame. Long trending activity produces a large CFB index and short choppy action produces a small index value. Investors have found many applications for CFB which involve scaling other existing technical indicators adaptively, on a bar-to-bar basis.
What is Jurik Volty used in the Jurik Filter?
One of the lesser known qualities of Jurik smoothing is that the Jurik smoothing process is adaptive. "Jurik Volty" (a sort of market volatility) is what makes Jurik smoothing adaptive. The Jurik Volty calculation can be used as both a standalone indicator and to smooth other indicators that you wish to make adaptive.
What is the Jurik Moving Average?
Have you noticed how moving averages add some lag (delay) to your signals? ... especially when price gaps up or down in a big move, and you are waiting for your moving average to catch up? Wait no more! JMA eliminates this problem forever and gives you the best of both worlds: low lag and smooth lines.
Ideally, you would like a filtered signal to be both smooth and lag-free. Lag causes delays in your trades, and increasing lag in your indicators typically results in lower profits. In other words, latecomers get what's left on the table after the feast has already begun.
What is the T3 moving average?
Better Moving Averages Tim Tillson
November 1, 1998
Tim Tillson is a software project manager at Hewlett-Packard, with degrees in Mathematics and Computer Science. He has privately traded options and equities for 15 years.
Introduction
"Digital filtering includes the process of smoothing, predicting, differentiating, integrating, separation of signals, and removal of noise from a signal. Thus many people who do such things are actually using digital filters without realizing that they are; being unacquainted with the theory, they neither understand what they have done nor the possibilities of what they might have done."
This quote from R. W. Hamming applies to the vast majority of indicators in technical analysis. Moving averages, be they simple, weighted, or exponential, are lowpass filters; low frequency components in the signal pass through with little attenuation, while high frequencies are severely reduced.
"Oscillator" type indicators (such as MACD, Momentum, Relative Strength Index) are another type of digital filter called a differentiator.
Tushar Chande has observed that many popular oscillators are highly correlated, which is sensible because they are trying to measure the rate of change of the underlying time series, i.e., are trying to be the first and second derivatives we all learned about in Calculus.
We use moving averages (lowpass filters) in technical analysis to remove the random noise from a time series, to discern the underlying trend or to determine prices at which we will take action. A perfect moving average would have two attributes:
It would be smooth, not sensitive to random noise in the underlying time series. Another way of saying this is that its derivative would not spuriously alternate between positive and negative values.
It would not lag behind the time series it is computed from. Lag, of course, produces late buy or sell signals that kill profits.
The only way one can compute a perfect moving average is to have knowledge of the future, and if we had that, we would buy one lottery ticket a week rather than trade!
Having said this, we can still improve on the conventional simple, weighted, or exponential moving averages. Here's how:
Two Interesting Moving Averages
We will examine two benchmark moving averages based on Linear Regression analysis.
In both cases, a Linear Regression line of length n is fitted to price data.
I call the first moving average ILRS, which stands for Integral of Linear Regression Slope. One simply integrates the slope of a linear regression line as it is successively fitted in a moving window of length n across the data, with the constant of integration being a simple moving average of the first n points. Put another way, the derivative of ILRS is the linear regression slope. Note that ILRS is not the same as an SMA (simple moving average) of length n, which is actually the midpoint of the linear regression line as it moves across the data.
We can measure the lag of moving averages with respect to a linear trend by computing how they behave when the input is a line with unit slope. Both SMA(n) and ILRS(n) have lag of n/2, but ILRS is much smoother than SMA.
Our second benchmark moving average is well known, called EPMA or End Point Moving Average. It is the endpoint of the linear regression line of length n as it is fitted across the data. EPMA hugs the data more closely than a simple or exponential moving average of the same length. The price we pay for this is that it is much noisier (less smooth) than ILRS, and it also has the annoying property that it overshoots the data when linear trends are present.
However, EPMA has a lag of 0 with respect to linear input! This makes sense because a linear regression line will fit linear input perfectly, and the endpoint of the LR line will be on the input line.
These two moving averages frame the tradeoffs that we are facing. On one extreme we have ILRS, which is very smooth and has considerable phase lag. EPMA has 0 phase lag, but is too noisy and overshoots. We would like to construct a better moving average which is as smooth as ILRS, but runs closer to where EPMA lies, without the overshoot.
An easy way to attempt this is to split the difference, i.e. use (ILRS(n)+EPMA(n))/2. This will give us a moving average (call it IE/2) which runs in between the two, has phase lag of n/4, but still inherits considerable noise from EPMA. IE/2 is inspirational, however. Can we build something that is comparable, but smoother? Figure 1 shows ILRS, EPMA, and IE/2.
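Here is a short Python sketch of these benchmark averages, built from a rolling linear regression. It is one reading of the definitions above, not Tillson's code; the seeding of ILRS with the SMA of the first n points follows the text.
import numpy as np

def ilrs_epma_ie2(price, n=20):
    x = np.arange(n)
    m = len(price)
    ilrs = np.full(m, np.nan)
    epma = np.full(m, np.nan)
    ilrs_val = price[:n].mean()                    # constant of integration: SMA of the first n points
    for t in range(n - 1, m):
        y = price[t - n + 1 : t + 1]
        slope, intercept = np.polyfit(x, y, 1)     # linear regression over the moving window
        ilrs_val += slope                          # ILRS: integrate the regression slope
        ilrs[t] = ilrs_val
        epma[t] = intercept + slope * (n - 1)      # EPMA: endpoint of the regression line
    ie2 = (ilrs + epma) / 2.0                      # IE/2: split the difference
    return ilrs, epma, ie2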
Filter Techniques
Any thoughtful student of filter theory (or resolute experimenter) will have noticed that you can improve the smoothness of a filter by running it through itself multiple times, at the cost of increasing phase lag.
There is a complementary technique (called twicing by J.W. Tukey) which can be used to improve phase lag. If L stands for the operation of running data through a low pass filter, then twicing can be described by:
L' = L(time series) + L(time series - L(time series))
That is, we add a moving average of the difference between the input and the moving average to the moving average. This is algebraically equivalent to:
2L-L(L)
This is the Double Exponential Moving Average or DEMA, popularized by Patrick Mulloy in TASC (January/February 1994).
In our taxonomy, DEMA has some phase lag (although it exponentially approaches 0) and is somewhat noisy, comparable to the IE/2 indicator.
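The algebra above is easy to verify in Python, since an EMA is a linear operator (a quick sketch; seeding the EMA on the first value is an assumption):
import numpy as np

def ema(x, n):
    # Exponential moving average with alpha = 2/(n+1), seeded on the first value.
    alpha = 2.0 / (n + 1.0)
    out = np.empty(len(x))
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1.0 - alpha) * out[i - 1]
    return out

x = np.cumsum(np.random.randn(500))                # a random-walk "price" series
n = 10
twiced = ema(x, n) + ema(x - ema(x, n), n)         # L' = L(x) + L(x - L(x))
dema = 2.0 * ema(x, n) - ema(ema(x, n), n)         # 2L - L(L)
assert np.allclose(twiced, dema)                   # the two forms agree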
We will use these two techniques to construct our better moving average, after we explore the first one a little more closely.
Fixing Overshoot
An n-day EMA has smoothing constant alpha=2/(n+1) and a lag of (n-1)/2.
Thus EMA(3) has lag 1, and EMA(11) has lag 5. Figure 2 shows that, if I am willing to incur 5 days of lag, I get a smoother moving average if I run EMA(3) through itself 5 times than if I just take EMA(11) once.
This suggests that if EPMA and DEMA have 0 or low lag, why not run fast versions (e.g. DEMA(3)) through themselves many times to achieve a smooth result? The problem is that multiple runs through these filters increase their tendency to overshoot the data, giving an unusable result. This is because the amplitude response of DEMA and EPMA is greater than 1 at certain frequencies, giving a gain of much greater than 1 at these frequencies when run through themselves multiple times. Figure 3 shows DEMA(7) and EPMA(7) run through themselves 3 times. DEMA^3 has serious overshoot, and EPMA^3 is terrible.
The solution to the overshoot problem is to recall what we are doing with twicing:
DEMA(n) = EMA(n) + EMA(time series - EMA(n))
The second term is adding, in effect, a smooth version of the derivative to the EMA to achieve DEMA. The derivative term determines how hot the moving average's response to linear trends will be. We need to simply turn down the volume to achieve our basic building block:
EMA(n) + EMA(time series - EMA(n))*.7;
This is algebraically the same as:
EMA(n)*1.7 - EMA(EMA(n))*.7;
I have chosen .7 as my volume factor, but the general formula (which I call "Generalized DEMA") is:
GD(n,v) = EMA(n)*(1+v) - EMA(EMA(n))*v,
where v ranges between 0 and 1. When v=0, GD is just an EMA, and when v=1, GD is DEMA. In between, GD is a cooler DEMA. By using a value for v less than 1 (I like .7), we cure the multiple DEMA overshoot problem, at the cost of accepting some additional phase delay. Now we can run GD through itself multiple times to define a new, smoother moving average T3 that does not overshoot the data:
T3(n) = GD(GD(GD(n)))
In filter theory parlance, T3 is a six-pole non-linear Kalman filter. Kalman filters are ones which use the error (in this case, time series - EMA(n)) to correct themselves. In Technical Analysis, these are called Adaptive Moving Averages; they track the time series more aggressively when it is making large moves.
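The formulas above translate almost line for line into code. A Python sketch follows (the indicator itself is in Pine Script; seeding the EMA on the first value is an assumption):
import numpy as np

def ema(x, n):
    # Exponential moving average with alpha = 2/(n+1), seeded on the first value.
    alpha = 2.0 / (n + 1.0)
    out = np.empty(len(x))
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1.0 - alpha) * out[i - 1]
    return out

def gd(x, n, v=0.7):
    # Generalized DEMA: GD(n,v) = EMA(n)*(1+v) - EMA(EMA(n))*v
    e = ema(x, n)
    return e * (1.0 + v) - ema(e, n) * v

def t3(x, n, v=0.7):
    # T3(n) = GD(GD(GD(n))): GD run through itself three times
    return gd(gd(gd(x, n, v), n, v), n, v)

price = np.cumsum(np.random.randn(500))
smoothed = t3(price, n=10, v=0.7)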
Included:
Bar coloring
Signals
Alerts