
[amibroker] High to High line




Byron, below are some comments on the points you made earlier.

4. What is the rationale for using the Close in Kvol, if it is used to
normalize the KSDIup and KSDIdn calculations? Should we be using the
variation of the High for KSDIdn, since we hold the Low constant and
look at the variable lookback period of the Highs? Likewise the Low
for KSDIup. I think you alluded to this in one of your previous
comments. Maybe it is not "pure Kase", but I don't think she is always
telling us the nuances of her methods. Is this sound statistically? Or
is using the Close better, or why not something like the typical price
(H+L+C)/3?
---
Good point, but your suggestion to use the variation of the high for
KSDIdn etc. is in my opinion not an improvement. If you use
log(Ref(H,-n)/H) you are still inconsistent with the standard
deviation, which uses the close. Use the high there too? I myself
would prefer to use the close throughout; that makes the formulas
internally consistent.
Btw Ram Gummadi emailed me a suggestion to floor KSDIup and KSDIdn at
zero, in order to prevent, for example, a 65-period downtrend from
producing a negative KSDIup. This makes perfect sense. Below is part
of the code with both these changes built in. I myself have been
playing around, and Ram also made some other suggestions, like using
an EMA instead of an SMA for the MN Peakout Lines calculations, but
I'm not so sure that's a good idea. Here's a quote from John
Bollinger's book about his Bands:

"
For years a father and son team advertised better Bollinger bands. 
Their secret? They used an exponential moving average as the measure 
of central tendency. Yet this book still recommends a simple moving 
average. The reason is that a simple moving average is what is used 
in calculating the volatility used to set the bandwidth, so it is 
internally consistent to use the same average to set the center 
point. Can you use an exponential moving average? Of course. Any 
average will work. But in doing so you are introducing an extraneous 
factor that you might or might not have to pay attention to. In our 
testing, no clear advantage was conferred by using an exponential or 
front-weighted average. So in the absence of a compelling argument, 
you should stick to the simplest and most logical approach.
"

Wise words. Computers enable us to effortlessly play around with
different settings and perhaps unconsciously violate the logic behind
an indicator. Don't get me wrong, I think we made some good changes,
and let's continue to study this thing and improve it even more, but
at the same time make sure we keep it consistent. Don't hold back any
further suggestions though.

5. Do I understand it correctly that KPO is measured in standard
deviations? So if KPO is 3, then that is 3 standard deviations from
the mean.
---
See point 6.

6. Instead of plotting standard deviations for KPO, do you think
there would be any value in plotting probability? Two standard
deviations do not mean a whole lot to me, but a 95% chance of this
being a trend tells me a lot. Maybe there is a resolution issue, in
that the probabilities for 3 SD and 4 SD will be on top of each other
at most plotting scales.
---
What you're saying is almost true, but for KSDIup and KSDIdn
separately. However, you are measuring against volatility, which is
defined as stdev*sqrt(n), and you are not measuring from the mean but
the full distance price has moved from n periods ago to the current
bar. (And you are measuring this distance in logs.) It may be hard to
keep mental track of this, and in fact it doesn't mean much without
the Peakout lines: some markets are very volatile and have high
KSDI's while others are quiet; the key is the comparison with their
own historical behavior. KPO is the difference between two trend
measurements, and here you cannot simply convert to probabilities. In
general, to turn standard deviations into probabilities (%'s) use the
formula

Probability = ( 2*N(# stdev's) - 1 )*100

with N the standard normal cumulative distribution function, similar
to Excel's NORMSDIST. But this is not an AFL function, so you have to
use an approximation like Hastings', see e.g. page 1 of

http://home.ust.hk/~jinzhang/ust/Lecture3a.pdf
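
For illustration, here is a minimal AFL sketch of such a Hastings-type
approximation of the standard normal CDF (Abramowitz & Stegun formula
26.2.17). I have called it HAS() to match the snippet below, but this
body is only my own sketch and not necessarily the version used in the
full listing:

function HAS( x )
{
    // use the symmetry N(-x) = 1 - N(x)
    ax = abs( x );
    t  = 1 / ( 1 + 0.2316419 * ax );
    // standard normal density at ax
    d  = exp( -( ax ^ 2 ) / 2 ) / sqrt( 2 * 3.14159265358979 );
    // polynomial tail approximation in nested (Horner) form
    p  = 1 - d * t * ( 0.319381530 + t * ( -0.356563782
           + t * ( 1.781477937 + t * ( -1.821255978 + t * 1.330274429 ) ) ) );
    return IIf( x >= 0, p, 1 - p );
}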

What you can do is plot the maxKSDI's as probabilities in a separate
pane by inserting something like this near the end of the code (in the
plot section):

// HAS() is the Hastings approximation of the normal CDF (see above)
maxKSDIup_chance =  (2*HAS(maxKSDIup) - 1)*100;
maxKSDIdn_chance = -(2*HAS(maxKSDIdn) - 1)*100; // downtrend as negative %
Plot(maxKSDIup_chance,"maxKSDIup_chance",colorGreen,styleLine);
Plot(maxKSDIdn_chance,"maxKSDIdn_chance",colorRed,styleLine);

You'll see that e.g. when maxKSDIup=2, maxKSDIup_chance=95.4 (chances
in the downtrend are plotted as negative percentages).

I did not adapt the code any further because I sincerely doubt the
usefulness of this: keep in mind that all of it is based on the
assumption that markets are random, so more or less a flat line. Many
markets end up where they were minutes, days or years earlier but have
described quite different paths in between. In a truly random market
even ten consecutive upticks (or up days) would be quite out of the
ordinary. Some markets are "internally" more trending ("not random")
than others. If, say, KSDIup=3 then indeed, assuming randomness, this
would have about a 0.3% chance of happening, but it may occur on a
weekly basis in some biotech stock and not in years in a bond fund.
The closest you can get to a meaningful probability is the Peakout
lines: if you set PChist=98, there's a 2% chance of KPO breaking
POhist, based on the history loaded. Set PCcycl=90 and there's a 10%
chance of breaking POcycl within the current lookback (= the strongest
trend).
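
As a rough illustration of how such a percentile threshold could be
derived from the loaded data, assuming AmiBroker's built-in
Percentile( array, range, percent ) function and the KPO array from
the code at the end of this message; the fixed window and parameter
values here are hypothetical, not Kase's definitions:

PChist   = 98;   // hypothetical percentile for the historical Peakout line
histBars = 500;  // hypothetical stand-in for "history loaded"

// POhist = the PChist-th percentile of KPO over the last histBars bars
POhist = Percentile( KPO, histBars, PChist );
Plot( POhist, "POhist", colorBlue, styleDashed );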

7. In your coin tossing example, does the use of SD_sam take care of
the decreased confidence due to a small sample size? If not, is there
a way to correct for a low sample size?
---
No, there isn't. In fact the origin of the sample and population
labels has never become clear to me (if anyone reading this knows,
please enlighten us). The instability of small-sample measurements
seems impossible to overcome: do two coin tosses, both heads, give
significant proof that the coin is biased? I assume Kase found good
reason to start at eight, but on the other hand there may be markets
where a smaller lookback can be used. One more quote from volatility
expert Bollinger:

"
The population calculation is used for Bollinger Bands, not the 
sample calculation for which the divisor changes to n-1. There is no 
theoretical reason for this. In initial testing the population 
seemed to work well, and so it was used. The bands would be a bit 
wider using the sample calculation.
"

That pretty much says it all.
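
For reference, the difference in code terms is just the divisor. Below
is a minimal AFL sketch of the two windowed calculations (my own
illustration; the SD()/SD_sam() helpers in the full listing may be
written differently):

// population standard deviation over an n-bar window (divisor n)
function SDpop( x, n )
{ return sqrt( MA( x ^ 2, n ) - MA( x, n ) ^ 2 ); }

// sample standard deviation over an n-bar window (divisor n-1)
function SDsam( x, n )
{ return sqrt( ( MA( x ^ 2, n ) - MA( x, n ) ^ 2 ) * n / ( n - 1 ) ); }

As Bollinger notes, the sample version is slightly larger (by the
factor sqrt(n/(n-1))), so bands built from it would be a bit wider.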

8. For the pink and violet signal, I'm not sure exactly what kind of
"pullback" signal should be used – I never got a clear understanding
from the papers – maybe it is in the book, which I do not have. Is the
pullback in KPO or is it in the PRICE – KPO peaks and the price has a
small pullback in that bar? Does the pullback KPO have to be lower
than the previous KPO, and do both have to be above the Peakout line?
I have tried to look at the data but have not been able to figure it
out.
---
Look only at the KPO: a pullback at the current bar simply means that
the current KPO is below the Peakout line while the most recent KPO
was above the Peakout line. What exactly the implications are I am
still figuring out myself, but Kase correctly states that these KPO
pullbacks often signal a temporary pause in the (price) trend and
possibly even a trend reversal.
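
In code terms, one reading of that condition (assuming KPO and a
Peakout line, here POhist from the earlier discussion, are already
defined; the variable name KPOpullback is my own):

// current KPO below the Peakout line, previous KPO above it
KPOpullback = KPO < POhist AND Ref( KPO, -1 ) > POhist;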

9. For the Col definition – the Ref() function uses negative offsets
for the past and positive for the future. I think that we are
referencing the future here?
---
Yes, but don't worry, this is only to color, say, yesterday's KPO bar
pink once today's KPO is a pullback. This is the way Kase highlights
the KPO bars in her book: the last bar that was above the Peakout
line.
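
A minimal sketch of such a forward-looking color assignment, using the
hypothetical KPOpullback condition from point 8 (not necessarily
identical to the Col definition in the full listing):

// color the bar *before* a pullback pink by looking one bar ahead
Col = IIf( Ref( KPOpullback, 1 ), colorPink,
      IIf( KPO > 0, colorGreen, colorRed ) );
Plot( KPO, "KPO", Col, styleHistogram );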

11. In the FINI ProphetX manual they state that "The PeakMax line is
the maximum of the 2 SD of the local PeakOscillator reading and the
90th percentile of momentum, historically. The PeakMin is the minimum
of the two." Another reference uses the 98th percentile of the local
distribution. Or is this the same thing?
---
The typist who wrote this probably did not have a clue, but if you
read between the lines it is more or less correct, because 2 standard
deviations corresponds to roughly the 98th percentile (one-sided,
N(2) is about 0.977; compare the formula in point 6). It boils down to
a longer historical timeframe with a lower threshold and a shorter
timeframe with a higher threshold. Beyond that it's a matter of
personal preference. Kase averages 80 years of commodity history, like
you average several currencies. Especially when sufficient history is
available, my personal preference would be to limit the measurement to
any one particular market (one stock, commodity, currency). Why mix
and average with other markets?

12. I'm not sure I understand what you did with the Hastings
approximation for the normal distribution, but that just means I will
have to study it a little more.
---
See point 6 above. I have put in a suggestion to AB to include 
NORMSDIST.

13. For POhist, I like the concept of using the data set we are
graphing to come up with the POhist value. However, what if the data
is somewhat limited, should you try to average in more information? In
my code, I have POhist as 3.33, which was the average of the 4
currency pairs that I trade: EUR = 3.51, CHF = 3.35, GBP = 3.57 and
JPY = 2.97. If 3.33 is in standard deviations, then the probability is
something like 99.9 percent. This seems to make sense to me. What do
you think?
---
See also point 11 above. Your numbers seem very high; please let me
know exactly how you calculate them. In an article Michael Poulos
determined that the yen is one of the most trending futures markets,
so this also seems contrary to your measurements.

14. In the documentation for using Kase in ProphetX, they recommend
using constant volume bars. I guess there are 3 types of bars that can
be constructed: 1) constant time (the norm), 2) constant volume, 3)
constant price interval. Maybe Kase goes over this in the book.
Statistically, what are the pros and cons of the different types? This
gets back to my concern in the Forex market, where throughout any 24hr
period there are predictable periods of activity, like when the Asian
and European markets are open. It just seems to me that giving the
same statistical weight to a 1 hr period where there are 100,000
trades versus a 1 hr period where there are 100 trades is not
statistically sound. Any thoughts? Maybe this will give us another
looping exercise for constructing constant volume and constant price
interval bars. But I don't want to go to the trouble of doing this if
there is no statistical advantage. Maybe someone else has already done
this. I could check.
---
I'm not very familiar with the last two bar systems you mention. Your
concerns about the huge volume differences in FOREX are justified, I
guess, but only, as mentioned earlier, because a very low volume
period produces statistically less reliable conclusions; how to
implement this in the code? Perhaps it is not so much a matter of
lower importance (in low volume periods) but of different
characteristics, like more/less volatility(?). All I can think of
would be to consider these periods separate markets, possibly
optimizing the PO %'s (and LB's) while loading only the distinctive
daily timeframes you are referring to. This, I realize, only makes
sense for intraday trading. If you have code ideas I'd be happy to
give my opinion, but you yourself are probably more qualified to
initiate some kind of volume-weighted code. I am pretty sure though
that Kase nowhere uses volume, and I doubt the usefulness of weighting
by volume.

Below is the KPO definition using only the close and flooring the
maxKSDI's at zero. IMHO both changes are an improvement.

/* Definition of Kase PeakOscillator KPO */
/* NB: minLB, maxLB and the standard deviation helper SD()
   (sample or population, see point 7) are defined earlier
   in the full listing. */

// n-bar log return, up and down direction
function Kup(n)
{ return log(C/Ref(C,-n)); }

function Kdn(n)
{ return log(Ref(C,-n)/C); }

// standard deviation of the 1-bar log return over n bars
function Kvol(n)
{ return SD(log(Ref(C,-1)/C), n); }

// n-bar move expressed in volatility units (stdev * sqrt(n))
function KSDIup(n)
{ return Kup(n)/(Kvol(n)*sqrt(n)); }

function KSDIdn(n)
{ return Kdn(n)/(Kvol(n)*sqrt(n)); }

// strongest reading over all lookbacks, floored at zero (Ram's suggestion)
maxKSDIup = 0;
for (i=minLB; i<=maxLB; i++)
{ maxKSDIup = IIf(KSDIup(i)>maxKSDIup, KSDIup(i), maxKSDIup); }

maxKSDIdn = 0;
for (i=minLB; i<=maxLB; i++)
{ maxKSDIdn = IIf(KSDIdn(i)>maxKSDIdn, KSDIdn(i), maxKSDIdn); }

KPO = maxKSDIup - maxKSDIdn;

/* end */

-treliff


