Perhaps the problem can be traced to the Zig Zag function that underlies
Peaks and Troughs. Refer to the MetaStock manual (ver. 7.03), page 528:
"Be forewarned, that the last leg (i.e., segment) of the Zig Zag is dynamic,
meaning that it can change. Therefore, be careful when designing system
tests, experts, etc. based on the Zig Zag indicator."
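
To see why this breaks real-time signals, here is a rough sketch of the
idea in Python. This is a generic percentage zig-zag, not MetaStock's
actual code, so take the details as an assumption about how Zig/Peak/Trough
behave internally: the newest pivot is only provisional until price
retraces by the threshold, so it can move as new bars arrive, and anything
measured from it gets recalculated after the fact.

def zigzag(prices, pct=5.0):
    # Returns (bar index, price) pivots for a pct% reversal zig-zag.
    # A pivot is confirmed only once price retraces pct% from it;
    # until then the latest pivot is provisional and free to move.
    pivots = []
    piv_i, piv_p = 0, prices[0]
    direction = 0       # 0 = undecided, +1 = rising leg, -1 = falling leg
    for i in range(1, len(prices)):
        p = prices[i]
        change = (p - piv_p) / piv_p * 100.0
        if direction >= 0 and p > piv_p:
            piv_i, piv_p, direction = i, p, 1    # higher high: pivot moves
        elif direction <= 0 and p < piv_p:
            piv_i, piv_p, direction = i, p, -1   # lower low: pivot moves
        elif direction == 1 and change <= -pct:
            pivots.append((piv_i, piv_p))        # peak confirmed
            piv_i, piv_p, direction = i, p, -1
        elif direction == -1 and change >= pct:
            pivots.append((piv_i, piv_p))        # trough confirmed
            piv_i, piv_p, direction = i, p, 1
    pivots.append((piv_i, piv_p))                # last leg: still provisional
    return pivots

prices = [100, 104, 103, 107, 105, 111, 106, 104, 102, 98, 109]
print(zigzag(prices[:8]))   # [(5, 111), (7, 104)]  "trough" at 104
print(zigzag(prices))       # [(5, 111), (9, 98), (10, 109)]  it moved to 98

Run the same function on a truncated series and on the full series and the
last pivot jumps from bar 7 to bar 9. That is also why a delay factor does
not fully cure it: the delay is measured from a pivot that can itself
still move.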
Al Taglavore
----------
> From: Owen Davies <owen@xxxxxxxxxxxxx>
> To: metastock@xxxxxxxxxxxxx
> Subject: Peak and trough
> Date: Wednesday, October 24, 2001 2:58 PM
>
> Among the many things I don't understand, this one has
> been bothering me of late:
>
> A while back, I decided to check one of my assumptions
> and test the higher-high, higher-low/lower-high, lower-low
> definition of trends. The easy way was to create a system
> using peak() and trough(). It worked beautifully. It made
> money on virtually any contract I ran the system over. This
> I took to confirm the validity of the trend definition.
>
> Then the obvious dawned on me: Why not see whether there
> was enough of the move left, on average, to make a buck from it
> after the peak or trough was far enough behind us to get the
> signal in real time? I wrote another system that included a delay
> factor, so that one would enter or exit a trade only when the
> price had retraced from the peak or trough by the appropriate
> percentage. Again, it worked just fine. In historical testing, it
> made money like magic on anything from 5-minute to daily bars.
>
> Problem: When I put it on real-time data, it gave a lot of bad
> signals. Then it suddenly recalculated things, decided that the
> minor up and down trends of the last few weeks--this was
> on smallish intraday bars--had really been a long up trend,
> gave a new set of signals, and declared itself a winner.
>
> Does anyone understand these functions well enough to
> explain this behavior to me? I knew that peak() and trough()
> backdate their results by putting their signal several bars
> before it was possible to receive it; that is what I was trying
> to correct with the delay factor. Now it seems that they
> also recalculate their old percentages by comparing against
> the latest data rather than limiting themselves to the data
> that was available in real time.
>
> No doubt this is a real beginner's mistake (despite having
> played with this for years), but it would have seemed
> reasonable to assume that a change of X% three weeks ago
> should remain X%, even if we looked at it later. This sort
> of thing has to be seen within its context, or it's useless.
> Is there some reason the functions have to be written this way,
> which I'm completely overlooking, or did someone just
> butcher this piece of code?
>
> Many thanks.
>
> Owen Davies