Re: Gambling Indicators: They work!

In a message dated 20/10/98 15:04:18 (Paris/Madrid time), bfulks@xxxxxxxxxxxx
wrote:

> 
>  In the example you cite, stochastics (K% D%), ADX, volatility, and the
>  "slope of the trend" are all indicators based solely upon price. So past
>  values of price are the only information this system is using. (I
>  understand that we could also use other data such as volume or interest
>  rates but that is not relevant to my argument.)

Yes. I was starting from your previous stochastics post.
The indicators I added to the list were there as examples.
>  
>  Assume that your AI system could "learn" some combination of rules that
>  will interpret all of the above four "indicators" based upon all of the
>  past price data available to train it. Clearly, as you state, there would
>  be thousands of rules.
>  
Right again. There will be "many rules" ("many" starts at 15-30, according to
my own brain's capacity).
Above this (fuzzy!) limit I'm in trouble, unable to understand what I'm doing
or what's wrong in my EL coding.

>  So now we start with price, calculate four different derived indicators,
>  and learn thousands of rules to combine the values of the four indicators
>  into trading signals. I will even assume your system can do this - quite an
>  achievement.
>  
Yes.

>  But it is not at all clear to me that this combination would do a
>  better job than a system that derives trading signals directly from price.
>  In addition, I would be very worried that the thousands of rules might have
>  embedded in them, some hidden "resonance effect" that might get "excited"
>  by some unexpected combination of prices that the system has never seen
>  before. This could cause it to behave erratically. We are told that the
>  crash of the LTCM hedge fund was due to a combination of circumstances that
>  the model was not programmed to expect. (As that expression goes, "Sh--
>  happens".)
>  
The risk is close to zero, because there is no Nobel Prize in Economics in my
family or circle of acquaintances.
Building such a system with hundreds or thousands of rules (though most of them
are inactive) forces you to backtest on unseen data, and a huge database is
necessary to verify that no overfitting took place.
This is easy to do with intraday data, even more so if you backtest on several
markets and several timeframes.
This is the only way to wipe out the problem.
But again, it's always possible to build a data series that will defeat the
system, any system...

Regarding resonance, the probability is lower if you stick to a system with
fewer rules (say a few hundred).
There is less chance of seeing this, because the rulebase resolution will be
sharp enough to fit the data but not so fine that it falls into this trap. It's
always the same with trading systems, regardless of the method used.
You need to find a compromise between astronomical returns due to overfitting
and the real world.
It's also a degrees-of-freedom problem, already addressed in NN development,
and also with classical systems when optimizing.
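
To make the in-sample/out-of-sample idea concrete, here is a minimal C sketch
of that kind of check. train_rulebase() and avg_profit_per_bar() are
hypothetical placeholders (the real fitting lives in the neurofuzzy DLL), and
the bar counts are only illustrative.

#include <stdio.h>

#define TRAIN_BARS    5000L    /* in-sample bars used to fit the rulebase    */
#define TEST_BARS   200000L    /* unseen bars reserved for verification only */

/* Hypothetical hooks: in practice the fitting is done by the neurofuzzy DLL. */
extern void   train_rulebase(const double *close, long nbars);
extern double avg_profit_per_bar(const double *close, long nbars);

/* Fit the rules on the first slice of data, then measure performance on a
   much larger slice the rules have never seen. A big gap between the two
   numbers is the classic symptom of overfitting. */
void check_overfitting(const double *close /* TRAIN_BARS + TEST_BARS prices */)
{
    train_rulebase(close, TRAIN_BARS);

    double in_sample  = avg_profit_per_bar(close, TRAIN_BARS);
    double out_sample = avg_profit_per_bar(close + TRAIN_BARS, TEST_BARS);

    printf("per-bar profit  in-sample: %.4f   out-of-sample: %.4f\n",
           in_sample, out_sample);
}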

>  Then, what if I am using this system and it begins to act strangely. What
>  do I do? There is no way I can understand what is really going on with all
>  of those rules. Do I just stop using it? When do I stop using it? When do I
>  start using it again?
>  
The max drawdown criterion is the rule, as for any trading system.
I firmly believe that if your system has been trained on 5,000 bars and tested
over 200,000, and if you are able to accept the 2.5 drawdown observed in your
testing on unseen data (provided that the DD is adjusted to the current average
move level, but that's another story), you will not have to worry about which
rules could be involved in a losing series.
However, as the rules are disclosed, you could find what's wrong. But that is
just a theoretical point of view.
Better to check the rules before trading the system. If 200,000 bars is not
enough, test over 400,000 or more (switch to shorter-term bars if you use
intraday data, or test against the CSI CD-ROM data if you trade on a daily
basis).
If after all that you still observe an unexpected drawdown, I think you are
really unlucky.
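
For reference, the maximum drawdown used as that criterion is just the largest
peak-to-trough drop of the backtest equity curve; a generic C sketch (not
Safir-X code) follows.

/* Maximum drawdown of an equity curve: the largest peak-to-trough drop
   observed over the test. Comparing the live drawdown against this figure
   (rescaled to the current average move, as noted above) is the stop rule. */
double max_drawdown(const double *equity, long nbars)
{
    double peak   = equity[0];
    double max_dd = 0.0;

    for (long i = 1; i < nbars; i++) {
        if (equity[i] > peak)
            peak = equity[i];
        else if (peak - equity[i] > max_dd)
            max_dd = peak - equity[i];
    }
    return max_dd;
}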

>  With my system, based directly upon price data, it is easy to understand
>  what is really going on since the relationships are logical and pretty
>  simple. If it starts behaving strangely, I can usually understand why. I
>  can usually create a test case that will simulate the strange behavior. I
>  can then even add more code to cover that case in the future by
>  INCREMENTALLY adding code to improve what already works WITHOUT CHANGING
>  what already works. This is a key point. Most of human experience, such as
>  the law, keeps what works and INCREMENTALLY fixes what doesn't work.
>  
You are right, but this is not a major drawback for us.
The rules are too numerous to be grasped and incrementally modified the way you
do in EL hand coding, I agree.
We have stated that the advantage of the automated system was to build rules
outside the scope of our intuitive intelligence. Having admitted this, we must
also admit that we cannot change anything inside the system by hand.
So how do we deal with the incremental improvement method?
This is what you address below: we retrain in real time using recent past data
(this is optional and needs a fast computer to do it on a backtesting basis).

>  With your system, as I understand it, if I want to train it for new
>  conditions, I have to add the new price data and retrain it. The result is
>  some new, different combination of thousands of rules and this new system
>  will work differently than the previous system. Any experience I have
>  developed with the old system of rules is gone.
>  
No, this is not what we do.
Real-time retraining may be necessary as a fine-tuning of the system, since it's
impossible to cover all possible cases in the original training.
However, the trained system must have learned most of the major moves, so the
rules can be considered acceptable and fixed forever.

We have implemented the fuzzy engine in TradeStation, and we load the previous
fuzzy description (the same, unmodified one that you have backtested).
When we decide to retrain it, we let it modify the rules according to recent
market conditions, and the new fuzzy description stays active as long as you
continue to retrain it (ResetTraining input to the DLL greater than zero).
If you set ResetTraining back to zero, you go back to the initial fuzzy
description.
Doing so, we slightly modify the system to fit recent market conditions, and
the result remains reversible.

// ----------------------------------------------------------------------------
//     Dynamic Link Library - TradeStation Safir-X Interface
//     JewelSoft - all rights reserved
//     V1.0   November 96
// ----------------------------------------------------------------------------
int FAR PASCAL NeuroFuzzyTrain (
	long	SafirId,	// Safir-X Predictor Identifier
	long	nIter,	// # of iterations through data
	long	ResetTraining,	// if >0 incremental real-time training
				// if =0 use fuzzy description initial state

It's flexible enough because:
1) You can stay with the original neurofuzzy system at any time.
2) You can retrain (add local knowledge) and use the new system as long as you
want (meaning you keep the new, locally trained system). You can train on
bar N, retrain on bar N+1, N+2...
3) You can reset the system to its original state at any time (back to step 1
status).
4) You can retrain at any bar starting from the original state (start from
step 1 status).

Steps 2, 3 and 4 are of course commanded from inside the system, by
EasyLanguage.
Any criterion that you are able to write can start retraining or cancel it!
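
As a purely illustrative example of such a criterion (the function and the
lookback are my own, not part of the DLL interface shown above), the value fed
into the ResetTraining input could be derived from recent equity behaviour:

/* Hypothetical criterion deciding the ResetTraining value:
   return >0 to keep incremental retraining active on this bar,
   0 to fall back to the initial fuzzy description. */
long reset_training_flag(const double *equity, long bar, long lookback)
{
    if (bar < lookback)
        return 0;                            /* not enough history yet */

    double recent_change = equity[bar] - equity[bar - lookback];
    return (recent_change < 0.0) ? 1L : 0L;  /* retrain while equity is falling */
}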

You can even do more complicated things with this: run several neurofuzzy
predictors, retrain them, combine them by a multiple vote...
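
A multiple vote could be as simple as the following sketch, where each
predictor contributes a signal of +1 (long), -1 (short) or 0 (flat) and the
position follows the majority; the function name is mine, not part of the DLL.

/* Majority vote over several neurofuzzy predictors: trade only in the
   direction most of them agree on, stay flat when there is no consensus. */
int majority_vote(const int *signals, int npredictors)
{
    int sum = 0;
    for (int i = 0; i < npredictors; i++)
        sum += signals[i];                   /* each entry is +1, -1 or 0 */

    if (sum > 0) return  1;                  /* net long  */
    if (sum < 0) return -1;                  /* net short */
    return 0;                                /* no consensus */
}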

>  I will acknowledge that your system can possibly find some hidden
>  relationship that I didn't recognize and can possibly get better results on
>  the kind of price data it is trained on. But I question if that advantage
>  is worth risking money on a computer model that you cannot understand.
>  
>  I admit that I have no experience with your system so could misunderstand
>  how it works.
>  
>  Bob Fulks
>  
This was interesting and gave me a chance to explain things in more detail.
Our goal was to apply human-like thinking to automated system programming.

We chose the neurofuzzy approach because it is the AI field closest to human
thinking (which is fuzzy) and to technical analysis (fuzzy too).

Building a neurofuzzy rule base was the obvious next step, because most trading
systems, once they become complicated, are decision-tree-like.

As nobody is perfect and everyone wants to improve, we have also added optional
automated incremental learning.

As you may see, we are aiming at the same target, even if the tools do not look
the same.
They are closer than most would think at first glance.

The truth, in fact (and I'm not kidding), is that years ago I got tired of
coding trading systems by hand, then backtesting, then coding again... and
thought that these tasks should be done by the computer itself.

Sincerely,

Pierre Orphelin