Was Re: Backtesting Perpetual Contracts (DANGER VERY LONG POST)

In a message dated 12/11/98 19:02:35 Winter Time Paris Madrid,
markbrown@xxxxxxxxxxxxx wrote:

> 
>  Near 100 percent of the system builders out there taint the developmental
>  process with their empirical observations that are frankly counter
>  productive.  I've seen only three other people ever even talk about this
>  in some sense: Mark Jurik, Bob Brickey, PO.  While they have all arrived
>  at the conclusion that something needed to be done, we all do it
>  differently.  I have been studying the comments of Jurik and Brickey; I
>  would not use that approach myself.  Just recently Brickey gave a
>  demonstration of a method, which can be seen at:
>  http://www.sciapp.com/tradelab/fft.html  I believe this method could be
>  used; I don't use it, but I do research it from time to time.  Brickey, I
>  believe, uses back-adjusted contracts to do his studies; Jurik I don't
>  know, but he has also talked about preprocessing data:
>  http://www.jurikres.com  PO's approach is a black box and I don't know if
>  it preprocesses the data, attempts to find patterns among the indicators,
>  or a combination of both.  I see some promise in the product from the
>  posts and demonstrations that PO (www.sirtrade.com) has given, and from
>  the magazine article about the product I read.
>  

Interesting post (not because of me, but generally speaking).

Trading system development is a constant struggle between filtering the raw
data and not introducing too much lag.
This is valid for any of the above methods.
As far as I know, Bob Brickey specializes in signal processing and neural
networks, which could be considered a not-so-obvious method, but properly set
up it can yield spectacular results.
The FFT approach as presented is also a valid one, and I have used it too.
I have some FFT screenshots on the web as well.
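
To fix ideas, here is a toy Python sketch of the FFT idea (my own
illustration, not Brickey's TradeLab code): keep the low-frequency components
of the series and throw away the rest as noise.

    import numpy as np

    def fft_lowpass(prices, keep=8):
        # Keep only the `keep` lowest-frequency components and zero the
        # rest, treating them as noise.  Caveat: the FFT assumes the series
        # is periodic, so the ends of the output are unreliable.
        spectrum = np.fft.rfft(prices)
        spectrum[keep:] = 0
        return np.fft.irfft(spectrum, n=len(prices))

    prices = 100 + np.cumsum(np.random.normal(0, 1, 256))  # fake price walk
    smooth = fft_lowpass(prices)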

Mark Jurik's tools (AMA, VEL) also act as sophisticated filters (I suppose
they are, given the amount of EL code they take). His specialty is removing
noise without lag (in fact with a minimum of lag, as zero lag is not always
possible).
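
Jurik's algorithms are proprietary, but the goal of "less lag for the same
smoothing" can be sketched in Python with the classic zero-lag EMA trick
(this is not Jurik's method, only an illustration of the goal):

    import numpy as np

    def ema(x, period):
        # Plain exponential moving average (the lagging baseline).
        alpha = 2.0 / (period + 1)
        out = np.empty(len(x))
        out[0] = x[0]
        for i in range(1, len(x)):
            out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
        return out

    def zero_lag_ema(x, period):
        # De-lag the input by adding back the change over the filter's
        # nominal delay, then smooth: ema(2*x[t] - x[t-lag], period).
        x = np.asarray(x, dtype=float)
        lag = (period - 1) // 2
        shifted = np.concatenate([np.repeat(x[0], lag), x[:len(x) - lag]])
        return ema(2 * x - shifted, period)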

Neurofuzzy logic attempts to do this very differently, by writing hundreds of
rules, each of which applies to some part of an indicator's range (a fuzzy
set), each fuzzy set combining with one from every other indicator, and so on.
If there are 5 indicators, each split into 5 fuzzy sets, the number of rules
is theoretically 5^5 = 3125 rules.
It is like a segmentation of the solution space using a lot of rules, each
specific to a small portion of that space. As they are too numerous, they are
impossible to understand in their globality, but each rule can be viewed and
evaluated separately.
In this sense, it's not a true black box.
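
A toy Python sketch of the combinatorics (mine, not our actual engine): one
rule = one fuzzy set chosen per indicator, and each rule can still be
inspected on its own.

    import itertools
    import numpy as np

    SETS = ["very_low", "low", "medium", "high", "very_high"]

    def membership(value, set_index, n_sets=5):
        # Triangular membership of a value in [0, 1] for fuzzy set
        # `set_index` (adjacent sets overlap by construction).
        center = set_index / (n_sets - 1)
        width = 1.0 / (n_sets - 1)
        return max(0.0, 1.0 - abs(value - center) / width)

    # One rule = one fuzzy set per indicator.
    rules = list(itertools.product(range(len(SETS)), repeat=5))
    print(len(rules))                      # 5**5 = 3125

    def firing_strength(values, rule):
        # How strongly one rule applies to the current indicator readings.
        return np.prod([membership(v, s) for v, s in zip(values, rule)])

    values = [0.2, 0.9, 0.5, 0.1, 0.7]     # five normalized indicators
    best = max(rules, key=lambda r: firing_strength(values, r))
    print([SETS[s] for s in best])         # the single most active rule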

Needless to say, examining 3125 rules is cumbersome and useless (you will
give up before rule 50); the best you can do is see whether they work in
their globality.
So it becomes a black box when you understand... that you cannot understand
it globally the way you can a 50-line EL trading system.

Since the answer of each rule is buy or sell, we can see the fuzzy rule base
as a mosaic of pixels (1 rule = 1 pixel).
One could also say one fuzzy rule = one N-indicator pattern.
In fact I simplify, because many rules overlap with their close neighborhood
due to their fuzzy nature.
We attempt to get the most precise picture with the minimum number of pixels
that still gives "sufficient" precision.
Above that it's overfitting; below it's GIGO.
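
To make the mosaic concrete, here is a hypothetical 2-indicator rule base
(5 x 5 = 25 rules) where each rule's answer is one pixel:

    import numpy as np

    centers = np.linspace(0.0, 1.0, 5)   # centers of the 5 fuzzy sets
    # Toy answers: buy when indicator 1 reads higher than indicator 2,
    # sell in the opposite corner, no opinion on the diagonal.
    mosaic = np.sign(np.subtract.outer(centers, centers))

    for row in mosaic:                   # B = buy, S = sell, . = neutral
        print(" ".join({1: "B", -1: "S", 0: "."}[int(v)] for v in row))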

Seen from a normal distance, this produces an image (which can be viewed as
the targeted performance-summary field).
The quality of the image depends on the answers from each pixel... and on
their location (frequency).
So what we do is a kind of digital filtering where, in the end, information
is coded like on a computer screen: the purpose is to turn on the most
important pixels so as to produce a readable image.

In a nutshell, all these methods have the same goal: starting from price data
(raw data, price action, or whatever you want), extract the information from
the noise.

Processing the raw information through complex signal-processing filters or
indicators is basically the same thing: the raw data must be transformed to
produce workable information (buy/sell signals).
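
In code, that transformation chain can be as short as this deliberately naive
Python sketch: smooth the raw series, then read the signals off the smoothed
version.

    import numpy as np

    def sma(x, period):
        # Simple moving average via convolution (valid part only).
        return np.convolve(x, np.ones(period) / period, mode="valid")

    prices = 100 + np.cumsum(np.random.normal(0, 1, 500))  # fake raw data
    fast, slow = sma(prices, 10), sma(prices, 40)
    k = len(slow)                          # right-align the two series
    signal = np.sign(fast[-k:] - slow)     # +1 = long, -1 = short
    trades = np.flatnonzero(np.diff(signal))  # bars where the signal flips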


>  I use a different approach to either method.  I attempt to pre-filter
>  data simply by using as little data as I can, and approaching it from a
>  thought of why a system gives a buy or a sell signal.  Most systems use
>  price action direction to dictate the direction of a trade taken.  If the
>  world markets were always in a trend, this would be a great theory of how
>  to trade the world's markets.  Given this, we need to understand that the
>  entire industry is eaten up with purveyors of crap and crap ideas.  Most
>  all wannabe traders are too lazy to do research, and those that aren't
>  lazy can't float above the sea of bullshit.

This is the same idea. Pre-filtering data by using as little as possible is
also signal processing: your filters, Jurik's filters, Bob Brickey's methods,
or extracting information by segmenting various indicators are basically the
same. None of us can make do with raw data alone to build trading systems.
As always, good ingredients by themselves do not make good food. Untrue if
you have good saucepans, good cookbook recipes, good cooking practice...
The difference with the culinary arts is that we all have the same thing:
raw data.

Needless to say, all of this is far from simple.
That is one reason why you can classify working trading systems into two
categories: very simple (like channel breakouts, adaptive systems, or DMI
crossovers) and very complicated, like the above.
I have never found anything very satisfactory in the median zone.
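
For the "very simple" end of the scale, a channel breakout really is just a
few lines; here is the generic textbook version in Python (nothing
proprietary):

    import numpy as np

    def channel_breakout(high, low, close, lookback=20):
        # Long on a close above the prior `lookback`-bar high, short on a
        # close below the prior `lookback`-bar low.
        signal = np.zeros(len(close))
        for i in range(lookback, len(close)):
            if close[i] > high[i - lookback:i].max():
                signal[i] = 1
            elif close[i] < low[i - lookback:i].min():
                signal[i] = -1
        return signal

    rng = np.random.default_rng(1)                 # synthetic bars
    close = 100 + np.cumsum(rng.normal(0, 1, 200))
    high, low = close + rng.random(200), close - rng.random(200)
    print(channel_breakout(high, low, close)[-5:])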

To end with your question: we do not preprocess the data and can use
continuous contracts or not; this makes no real difference with intraday
data, because the rollover is a rare event with statistically little impact
(overnight gaps are more difficult).

This is not true for daily contracts, and I use back-adjusted contracts in
this case.
As a test, I used to throw in the dustbin any system that produced widely
different results on different daily contract types. But this is not
important to me now, because I usually work with intraday data.
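
For the daily case, back-adjustment simply removes the artificial price jump
at each rollover; a minimal two-contract Python sketch (hypothetical
numbers):

    import numpy as np

    # Daily closes around one rollover.  On the roll day both contracts
    # quote, so the spread between them is observable.
    old_contract = np.array([100.0, 101.0, 102.5])   # ...ends on roll day
    new_contract = np.array([105.0, 104.0, 106.0])   # ...starts on roll day

    gap = new_contract[0] - old_contract[-1]         # roll-day spread (2.5)
    backadjusted = np.concatenate([old_contract + gap, new_contract[1:]])
    # History before the roll is shifted by the spread: the artificial
    # 2.5-point jump is gone, while the real price changes are preserved.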

What we call preprocessing is in fact compensating the buy and sell signals
to be learned during the training period (in the case of too-biased data),
and duplicating some cases that are difficult to learn (trend changes).
But this is optional during training and does not affect the test and unseen
data.
This kind of preprocessing is applied to the training data and does not
affect the indicators (inputs).
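
Concretely, that kind of training-set preprocessing looks like the following
generic Python sketch (an illustration of the idea, not our exact code):

    import numpy as np

    def rebalance(X, y, rng=np.random.default_rng(0)):
        # Equalize buy (+1) and sell (-1) examples by resampling the
        # minority class; the indicators (inputs) are left untouched.
        buys, sells = np.flatnonzero(y == 1), np.flatnonzero(y == -1)
        small, big = sorted((buys, sells), key=len)
        extra = rng.choice(small, size=len(big) - len(small), replace=True)
        keep = np.concatenate([buys, sells, extra])
        return X[keep], y[keep]

    def duplicate_turns(X, y, copies=2):
        # Duplicate the hard-to-learn cases: bars where the target signal
        # flips (trend changes).
        turns = np.flatnonzero(np.diff(y) != 0) + 1
        idx = np.concatenate([np.arange(len(y))] + [turns] * copies)
        return X[idx], y[idx]

    y = np.array([1, 1, 1, -1, -1, 1, 1, 1, 1, 1])
    X = np.arange(len(y)).reshape(-1, 1)   # stand-in indicator matrix
    Xb, yb = rebalance(X, y)
    print(np.bincount(yb + 1))             # sells and buys now equal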

Sincerely,

Pierre Orphelin