
Re: Optimizing




On Fri, 7 Aug 1998 11:27:38 +0900, you wrote:

>You are incorrect to think re-optimizing makes for better performance for a
>number of reasons. What you really want to do is known as de-optimization.

Just for the record: MS does not do any true optimization. As you all
know, it simply runs through a (long) list of test cases and sorts the
results; true optimization would involve more than an exhaustive search.
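
Just to make that concrete, here is a minimal sketch in Python (my own
toy example, with a hypothetical moving-average crossover standing in
for a real system; nothing MS-specific): every parameter combination is
evaluated and the list is merely sorted, there is no directed search.

from itertools import product

def crossover_return(prices, fast, slow):
    # Toy "system": long while the fast SMA is above the slow SMA.
    def sma(n, i):
        return sum(prices[i - n:i]) / n
    equity = 1.0
    for i in range(slow + 1, len(prices)):
        if sma(fast, i - 1) > sma(slow, i - 1):   # yesterday's signal
            equity *= prices[i] / prices[i - 1]   # hold for one bar
    return equity - 1.0

def exhaustive_sweep(prices, fast_values, slow_values):
    # Evaluate every combination and sort the results -- nothing more.
    results = []
    for fast, slow in product(fast_values, slow_values):
        if fast < slow:
            results.append((crossover_return(prices, fast, slow), fast, slow))
    results.sort(reverse=True)
    return results

Run it on any price list you like; the point is only that this kind of
"optimization" is a blind enumeration followed by a sort.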

>1)Use as few parameters as possible and favor systems that show greater
>insensitivity to changes in parameter values.

Imo, it is more important to use _all relevant_ parameters, and
sensitivity on the first run is a question of scale. If your results
are not sensitive to a parameter, that parameter is not a relevant one
and you should drop it.
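
One way to put that into practice (again a sketch of my own; backtest,
base_params, and sweep_ranges are hypothetical names, not anything MS
provides): vary each parameter over a sensible range while holding the
others fixed and look at the spread of the results.

def parameter_relevance(backtest, base_params, sweep_ranges):
    # For each parameter, vary it over its range while holding the others
    # at their base values and record the spread of the backtest result.
    # A parameter with a negligible spread is probably not a relevant one.
    relevance = {}
    for name, values in sweep_ranges.items():
        scores = [backtest(**dict(base_params, **{name: value}))
                  for value in values]
        relevance[name] = max(scores) - min(scores)
    return relevance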

>2)Backtest over as much data as possible, but 5 years minimum is
>recommended. You need a minimum 30 closed trades (buy & sells) to get
>statistically valid results. Anything less over 5 years and you might as
>well trade the 200 day SMA (or 233 for Fib fans!).

At least in my area (i.e. DAX & DAX options), the behavior of the
market changes considerably over time. So it does not make much sense
to force a time period of e.g. 5 years into one model that is constant
over time. "Re-optimizing", i.e. adaptive modeling with a "time
parameter" included, is far more successful.
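
A rough sketch of what I mean by adaptive modeling (optimize() and
backtest() are placeholders for whatever fitting and testing routines
you use, not an MS feature): the "time parameter" is implicit in which
slice of data the model is allowed to see.

def walk_forward(data, optimize, backtest, window, step):
    # Adaptive modeling: re-fit on a rolling window of recent history,
    # then trade the next 'step' bars with the freshly fitted parameters.
    out_of_sample = []
    start = 0
    while start + window + step <= len(data):
        history = data[start:start + window]
        future = data[start + window:start + window + step]
        params = optimize(history)              # re-optimize on the recent past
        out_of_sample.append(backtest(future, params))
        start += step                           # roll the window forward in time
    return out_of_sample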

>3)Check for dependency between trades. While rare, they can be exploited.

MS has an (unpredictable & uncontrollable) "built-in" dependency
between the trades in a sequence because of the "re-investment"
effect. A neutral test would require a constant investment for every
trade.
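
To illustrate the effect (a toy example of my own, not MS output): with
re-investment the money result of every trade depends on the outcome of
all preceding trades, while a constant stake keeps the trades independent.

def per_trade_pnl(trade_returns, stake=1000.0):
    # Money result of each trade with re-investment vs. a constant stake.
    # trade_returns are fractions per trade, e.g. 0.05 for +5%.
    reinvested, fixed = [], []
    equity = stake
    for r in trade_returns:
        reinvested.append(equity * r)   # size depends on all preceding trades
        equity += equity * r            # profits and losses are re-invested
        fixed.append(stake * r)         # same stake every time -> independent
    return reinvested, fixed

# Change the first return and the re-invested results of the later trades
# change with it; the fixed-stake results do not.
print(per_trade_pnl([0.10, -0.05, 0.08]))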

>4)Now FORWARD TEST. Divide up your data set into a minimum of 3 sections.
>More is generally better, but the exact length of each section you choose is
>really just a function of your system itself. Then do the following:
>
>Optimize over section 1. Trade over section 2.
>Optimize over section 1+2. Trade over 3 etc, ad infinitum.

Because most systems will deal with a sequence of trades over time,
there is a more effective strategy:

1. "Train" / "Optimize" the system with all data available. (The
training process may require a training, a testing, and a validation
subset, where in most cases there is no need for a validation subset.)
2. Apply the system to a subset of "new" data and do appropriate
"out-of-sample" tests. Verify the results.
3. "Re-train" / "Re-optimize" the system periodically with the "new"
data included into the "data available".

A good system should not show big changes during step 3, but adaptation
to market changes can and will lead to changes in parameters, which,
when defined appropriately, will give us new insights into some
aspects of market behavior.
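
As a rough sketch of steps 1.-3. (optimize() and backtest() are again
placeholders, and the parameters are assumed to be a dict of numbers):
re-fit periodically on everything known so far, test on the newest
slice first, and watch how much the parameters move.

def periodic_retrain(data, optimize, backtest, initial, step):
    # Steps 1-3: fit on all data available, test out-of-sample on the newest
    # slice, then re-fit with that slice included and watch the parameter drift.
    history_end = initial
    params = optimize(data[:history_end])            # step 1: initial fit
    log = []
    while history_end + step <= len(data):
        new_slice = data[history_end:history_end + step]
        oos_result = backtest(new_slice, params)     # step 2: out-of-sample test
        history_end += step
        new_params = optimize(data[:history_end])    # step 3: re-fit with new data
        drift = {k: new_params[k] - params[k] for k in params}
        log.append((oos_result, drift))              # large drift -> market change (or a weak system)
        params = new_params
    return log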

>5)Don't trade the peak parameter, but choose one that has lower performance
>than the one immediately before and after it. Parameter values that are
>close together tend to have similar performance (IF you followed Rule 1!).
>You are better off trading the parameter whose performance will tend to go
>higher than trading the peak, which tends lower.

The performance of a system has to be measured in terms of how
accurately known effects are represented. The total return is not so
important, especially because of MS's inherent and uncontrollable
rating system.

> Don't fight the Second Law
>of Thermodynamics aka entropy!

???

Kind regards, rudolf stricker
| Disclaimer: The views of this user are strictly his own.