While optimization can be employed to search for a good system, via
methods using automated rule creation, selection and combination, or
generic pattern recognition, most people typically use optimization
to search for a good set of parameter values. The success of the
latter, of course, assumes one has a good rule set, i.e. a system, to
begin with.
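
To make that second approach concrete, here is a rough Python sketch
of parameter optimization: a grid search over a lookback period on an
in-sample (IS) segment, then a check of the chosen value on an
out-of-sample (OOS) segment. The moving-average rule, the synthetic
data and the parameter range are all made up for illustration; this
is not anyone's actual system. Loosely, it is the same mechanic as
using Optimize() in AmiBroker and then re-running the chosen settings
over a held-out date range.

import numpy as np

def ma_cross_return(prices, lookback):
    # Total return of a long-only rule: hold while the prior close is
    # above its own 'lookback'-day simple moving average.
    prices = np.asarray(prices, dtype=float)
    if len(prices) <= lookback:
        return 0.0
    ma = np.convolve(prices, np.ones(lookback) / lookback, mode="valid")
    long_yesterday = prices[lookback - 1:-1] > ma[:-1]
    daily_ret = np.diff(prices[lookback - 1:]) / prices[lookback - 1:-1]
    return float(np.prod(1.0 + long_yesterday * daily_ret) - 1.0)

def optimize_lookback(prices, candidates, is_fraction=0.7):
    # Grid-search the lookback on the IS segment, then report how the
    # single chosen value performs on the OOS segment.
    split = int(len(prices) * is_fraction)
    is_prices, oos_prices = prices[:split], prices[split:]
    best = max(candidates, key=lambda lb: ma_cross_return(is_prices, lb))
    return best, ma_cross_return(is_prices, best), ma_cross_return(oos_prices, best)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    closes = 100 * np.cumprod(1 + rng.normal(0.0003, 0.01, 2000))  # synthetic EOD closes
    lb, is_ret, oos_ret = optimize_lookback(closes, range(5, 101, 5))
    print(f"best lookback={lb}  IS={is_ret:.1%}  OOS={oos_ret:.1%}")

As you would expect, the IS figure is flattering by construction; it
is the OOS figure that says anything about the system.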
As far as your prediction is concerned ... I suspect there are lots
of people, some of whom post here, who could demonstrate otherwise if
they chose to ...
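
On the 'vertical' versus 'horizontal' testing idea you raise below,
here is a minimal sketch of how I read it, using made-up symbols and
hypothetical per-symbol daily strategy returns: vertical looks at
each symbol's result on its own, horizontal combines them into one
equity curve.

import numpy as np

def vertical_and_horizontal(per_symbol_returns):
    # 'Vertical': each symbol's total strategy return, judged on its own.
    vertical = {sym: float(np.prod(1.0 + r) - 1.0)
                for sym, r in per_symbol_returns.items()}
    # 'Horizontal': equal-weight the symbols each day and compound the
    # average daily return into one combined equity curve.
    stacked = np.vstack(list(per_symbol_returns.values()))
    horizontal_curve = np.cumprod(1.0 + stacked.mean(axis=0))
    return vertical, horizontal_curve

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Hypothetical daily returns for three made-up symbols.
    fake = {sym: rng.normal(0.0004, 0.012, 500) for sym in ("AAA", "BBB", "CCC")}
    vertical, curve = vertical_and_horizontal(fake)
    print({sym: f"{ret:.1%}" for sym, ret in vertical.items()})
    print(f"combined equity after 500 days: {curve[-1]:.2f}x")

The per-symbol numbers are where you would line the system up against
buy & hold, symbol by symbol, as you describe further down.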
--- In amibroker@xxxxxxxxxps.com, "brian_z111" <brian_z111@...> wrote:
>
> "IS metrics
are always good because we keep optimizing until they
> are" (or words
to that effect by HB) which is true.
>
> It is not until we submit the system to an unknown sample, either an
> OOS test, paper or live trading, that we validate the system.
>
> Discussing your points:
>
> IMO we are talking about two different trading approaches, or styles
> (there is no reason we can't understand both very well).
>
> One is the search for a good system, via optimization, with the
> attendant subsequent tuning of the system to match a changing market.
>
> If I understand Howard correctly, he is an exponent of this style.
>
> It is my prediction that, where we are optimising using lookback
> periods, the max possible PA% return will be around 30, maybe 40,
> for EOD trading.
>
> Do we ever optimise anything other than indicators with lookback
> periods?
> If so, that might be a different story.
>
> Bastardising Marshall McLuhan's famous line, I could say "the
> optimization is the method".
>
> It is also possible to conceptually optimize the system, before
> testing, to the point that little, or no, optimization is required
> (experienced traders with a certain disposition do this quite
> comfortably, but it doesn't suit the inexperienced and/or those who
> don't have the temperament for it).
>
> So, if a system has a sound reason to exist, and it is not optimized
> at all, and it has a statistically valid IS test, then it is highly
> likely to be a robust system, especially if it is robust across a
> range of stocks/instruments.
> The chances that this is due to pure luck are probably longer than
> the chance that an optimized IS test, with a confirming OOS test, is
> also a chance event.
>
> However, if I had plenty of data, e.g. if I was an intraday trader,
> then I would go ahead and do an OOS test anyway (since the cost is
> negligible).
>
> Re testing on several stocks.
>
> If the system is 'good' on one symbol (the sample size is valid), and
> it is also good on a second symbol (also with a valid sample size),
> is that any different from performing an IS and an OOS test?
>
> For stock trading, I call the relative performance, on a set of
> symbols, 'vertical' testing, as compared to 'horizontal' testing
> (where horizontal testing is an equity curve).
>
> Yes, if an IS test, with no optimization, beat the buy & hold on
> every occasion (or a significant number of times) in a vertical test,
> and the sum of that test was statistically valid, and the horizontal
> test (the combined equity curve) was 'good', it would give you
> something to think about for sure.
> If some of the symbols, in the vertical stack, had contrary returns
> compared to the bias of my system, I probably would start to get a
> little excited.
>
> (I think perhaps you were alluding to something along those lines).
>
> BTW, did you know that the Singapore Slingers play in the Australian
> basketball league?
>
> Cheers,
>
> brian_z
>