Hi Paul --
While waiting for exactly what you want, you can get monthly information by setting the out-of-sample length to one month and reoptimizing every month.
The critical length is the length of the in-sample period. If you have a system that learns well, there is no risk in reoptimizing more often. But be careful to watch what happens to trades that span the boundaries of the out-of-sample segments.
Thanks, Howard
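A minimal sketch in Python of the monthly walk-forward Howard describes: fix the in-sample length, make the out-of-sample segment one month, and roll both forward one month at a time. The window lengths and function names here are illustrative assumptions, not AmiBroker's own procedure.

import pandas as pd

def walk_forward_windows(start, end, is_months=24, oos_months=1):
    """Yield (is_start, is_end, oos_start, oos_end) date boundaries."""
    cursor, end = pd.Timestamp(start), pd.Timestamp(end)
    while True:
        is_end = cursor + pd.DateOffset(months=is_months)
        oos_end = is_end + pd.DateOffset(months=oos_months)
        if oos_end > end:
            break
        yield cursor, is_end, is_end, oos_end
        cursor += pd.DateOffset(months=oos_months)    # roll forward one month

for is0, is1, oos0, oos1 in walk_forward_windows("2000-01-01", "2003-01-01"):
    # optimize on data[is0:is1], then trade the winner on data[oos0:oos1]
    print("IS %s..%s  OOS %s..%s" % (is0.date(), is1.date(), oos0.date(), oos1.date()))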
On Fri, May 9, 2008 at 6:17 PM, Paul Ho <paultsho@xxxxxxxxxxxx> wrote:
On the question of defining fitness as a time series: I agree that having a value every bar wouldn't be useful. However, consider the following scenario, using CAR as an example.
If we have a value of CAR for every month, we effectively have a series of CAR values, and we are able to calculate an average CAR plus a standard deviation of CAR.
I would suspect that knowing the distribution of CAR would provide more telling information than just a single CAR value. Of course, we can eventually consolidate the whole series into a single number, but it will be a different number than a single whole-run CAR.
Anyway, thanks all for your contributions. Like I said before, it will be fascinating to see some quantitative research in this area.
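A minimal sketch of the monthly-CAR idea, assuming a daily equity curve held in a pandas Series indexed by date; the 252-trading-day year and the toy random curve are illustrative assumptions.

import numpy as np
import pandas as pd

def car(segment):
    """Compound Annual Return of one equity segment (assumes 252 trading days/year)."""
    years = len(segment) / 252.0
    return (segment.iloc[-1] / segment.iloc[0]) ** (1.0 / years) - 1.0

def monthly_car(equity):
    """One CAR value per calendar month -- a distribution rather than a point."""
    return equity.groupby(equity.index.to_period("M")).apply(car)

# toy random equity curve, purely to demonstrate the calculation
idx = pd.bdate_range("2007-01-01", "2007-12-31")
rng = np.random.default_rng(0)
equity = pd.Series(100.0 * np.cumprod(1.0 + rng.normal(0.0005, 0.01, len(idx))), index=idx)
cars = monthly_car(equity)
print("mean monthly CAR %.1f%%, std dev %.1f%%" % (100 * cars.mean(), 100 * cars.std()))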
Greetings all --
I agree with Fred's comments just above this posting.
On the subject of calculating an objective function: in my opinion, it is important to consolidate all of the characteristics that are important to you (remember -- the definition of this function is your personal choice) into a single value. It applies to the entire test run and is best used to choose among alternatives that are generated under the same circumstances -- all from a single optimization, for example. Some of the best objective functions are those that reward equity growth and smoothness while penalizing drawdown. These calculations are made over a number of bars -- usually the entire run. It does not make sense to me to have a value for each bar.
Thanks, Howard
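A minimal sketch of an objective function of the kind Howard describes: one number per test run that rewards equity growth and smoothness and penalizes drawdown. The particular blend below is an illustrative assumption -- the definition is, as he says, a personal choice.

import numpy as np

def objective(equity):
    """Single score for an entire test run."""
    growth = equity[-1] / equity[0]                  # rewards equity growth
    t = np.arange(len(equity))
    r = np.corrcoef(t, np.log(equity))[0, 1]
    smoothness = r * r                               # R-squared of log equity vs time
    peaks = np.maximum.accumulate(equity)
    max_dd = np.max((peaks - equity) / peaks)        # worst peak-to-trough drawdown
    return growth * smoothness / (1.0 + max_dd)      # penalizes drawdown

rng = np.random.default_rng(1)
curve = 100.0 * np.cumprod(1.0 + rng.normal(0.0005, 0.01, 1000))
print(round(objective(curve), 3))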
On Fri, May 9, 2008 at 4:42 AM, Fred Tonetti <ftonetti@xxxxxxxxxxxxx> wrote:
1. The logic behind having a sensitivity-guided or sensitivity-influenced in-sample optimization is … less sensitive parameters = a more robust system = a higher likelihood of good performance OOS … The problem with a separate guidance phase, or a Test OOS that precedes a Real OOS, is that by definition this puts the real OOS that much further away from the IS optimization, and there really isn't any "guidance" per se … What you get instead is more of a right/wrong type of answer.
2-4. I understand your questions, but none of these have simple answers per se.
5. You can't guide IS optimization by real OOS results without the OOS in effect becoming part of the IS.
I can see there have been a lot of discussions, mostly centred around the processes and/or the methodology of optimization. While these are good discussions, with many points of view, I wonder whether any quantitative research has been done to verify the different points of view. I have spent thousands of hours of my own time, plus even more computer processing time, on system development, and have made a few interesting observations. These are not rigorous enough to quantify my view, but they are interesting enough to share.
1. Does sensitivity analysis provide consistency similar to OOS-guided walk-forward analysis? (I use that term now for the sake of continuity in the discussion.) Sensitivity merely takes a different path through the same data set that you optimize on, whereas OOS takes a completely different data set. My own observation is that I definitely need the guidance of OOS; they don't do the same thing.
2. How does the definition of fitness affect the result of the optimized system in terms of robustness, performance and stability through time and across different markets? As the old saying goes, the answer is only important if the right question is asked. Defining a fitness criterion is like framing a question: the answer comes in the form of a result and will of course differ depending on the fitness criterion. Even a single goal in one's mind can be expressed in terms of different fitness criteria. How is that difference in interpretation going to affect us when comparing equity curves, e.g. CAR/MDD vs UPI vs some of the more complicated variety?
3. Related to the above: how about, instead of defining fitness as a single number, we define fitness as a time series? E.g. UPI as a series of UPI values, one every six months. We now have a distribution of UPI. How do different distributions affect our in-sample optimization? It is like doing rolling optimization and collating the results.
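A minimal sketch of question 3, assuming a daily equity curve in a pandas Series: UPI (Ulcer Performance Index, as usually defined: excess annualized return divided by the Ulcer Index) computed over consecutive six-month windows gives a distribution of UPI values instead of one number. The window length and zero risk-free rate are illustrative assumptions.

import numpy as np
import pandas as pd

def upi(equity, risk_free=0.0):
    """Ulcer Performance Index: excess annualized return / Ulcer Index."""
    years = len(equity) / 252.0
    annual_ret = (equity.iloc[-1] / equity.iloc[0]) ** (1.0 / years) - 1.0
    peaks = equity.cummax()
    dd_pct = 100.0 * (peaks - equity) / peaks
    ulcer = np.sqrt((dd_pct ** 2).mean())            # root-mean-square percent drawdown
    return np.inf if ulcer == 0 else (annual_ret - risk_free) * 100.0 / ulcer

def upi_series(equity, window=126):
    """UPI over consecutive ~six-month (126 bar) windows -> a distribution of UPI."""
    return [upi(equity.iloc[i:i + window])
            for i in range(0, len(equity) - window + 1, window)]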
4. How does artificial division of the data set affect our results? For example, if I optimized a system on all the stocks on the ASX exchange, including both current and delisted stocks, how would it perform if I ran it on the ASX 100 (the top 100 Australian companies as defined by S&P)?
5. What is the real difference, in terms of results, between guiding the optimization with OOS and merely validating our optimization through OOS? Is it really better to skip guidance and go straight to validation, or, in my case, to skip validation and stick with guidance?
Food for thought; I'll be trying to answer some of them myself. Cheers, Paul.
--- In amibroker@xxxxxxxxxxxxxxx, "Howard B" <howardbandy@xxx> wrote:
Hi Brian --
I tend to agree with Fred. I, personally, do not use the guidance data set. If you want to use it, and you are looking for consistency between the two data sets, that might be valuable. But another measure of that is simply the equity curve and other performance stats over both periods.
Another approach is to look at the robustness of the system by perturbing each of the perturbable variables (not all of them are), computing the scores for nearby points, and rewarding "plateaus" in preference to "peaks."
Thanks,
Howard
On Thu, May 8, 2008 at 7:38 PM, Fred Tonetti <ftonetti@xxx> wrote:
Personally I couldn't find any value in the guidance phase, which I allowed for in IO for a couple of years and have since removed the capability.

From: amibroker@xxxxxxxxxxxxxxx [mailto:amibroker@xxxxxxxxxxxxxxx] On Behalf Of brian_z111
Sent: Thursday, May 08, 2008 8:32 PM
To: amibroker@xxxxxxxxxxxxxxx
Subject: [amibroker] Re: Fitness Criteria that incorporates Walk Forward Result

Howard,
Thanks for a very nice summary of the framework.
I would say that, since the training search is exhaustive (therefore we must have identified all possible candidates for the strategy), the best we can hope for in the guidance phase is to change our choice of top model to one or another of the 'training top models', or to abandon the strategy altogether.
Also I wonder whether the training model/guidance model combination that passes a minimum requirement in both phases, and shows less variance between the training and guidance results, is the most generic model of them all, i.e. suited to a wider range of conditions but not necessarily returning the highest possible result in any particular market?
brian_z
--- In amibroker@xxxxxxxxxxxxxxx, "Howard B" <howardbandy@> wrote:
Greetings all --
I am coming to this discussion a little late. I just returned from giving a talk at the NAAIM conference in Irvine. Some of the discussions I had with conference attendees were exactly the topic of this thread.
If you are using some data and results to guide the selection of logic and parameter values (described in the earliest postings as OOS data), that incorporates that data into the in-sample data set. In this case, there must be three data sets. They go by various names -- Training, Guiding, and Validation will be adequate for now.
Optimization, by itself, begins by generating a lot of alternatives. Optimization with selection of the "best" alternatives means using an objective function (or fitness function) to assign a score to each alternative.
The method of searching for good trading systems used in AmiBroker's automated walk forward procedure uses a series of: search over an in-sample period, select the best using the score, test over the out-of-sample period. Use the concatenated results from the out-of-sample periods to decide whether to trade the system or not.
Another method of searching for good systems (which might be what some of the posters to this thread were suggesting) is to perform extensive searches of the data and manipulations of the logic using the Training data, then evaluate using the Guiding data. Repeat this process as desired or required as long as the results using the Guiding data continue to improve. When they show signs of having peaked, roll back to the system that produced the best result up to that point. Then make one evaluation using the Validation data. Now, step forward in time and repeat the process. It is now the concatenated results of the Validation data sets that are used to decide whether to trade the system or not.
Thanks,
Howard
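A minimal sketch of the Training/Guiding/Validation loop Howard outlines; refine() and score() are hypothetical stand-ins for the reader's own search step and objective function.

def train_guide_validate(candidate, refine, score, train, guide, validate,
                         max_rounds=100):
    """Refine on Training data while the Guiding score improves; validate once."""
    best, best_guide = candidate, score(candidate, guide)
    for _ in range(max_rounds):
        candidate = refine(candidate, train)   # extensive search on Training data
        g = score(candidate, guide)            # evaluate on Guiding data
        if g <= best_guide:                    # Guiding results have peaked
            break
        best, best_guide = candidate, g        # remember the roll-back point
    return best, score(best, validate)         # exactly one Validation evaluation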
Howard > > > > > > On Thu, May 8, 2008 at 9:24 AM,
Edward Pottasch <empottasch@> > > > wrote: > >
> > > > > thanks. Will have a look, > > >
> > > > > Ed > > > > > > >
> > > > > > > > > ----- Original Message
----- > > > > *From:* Fred <ftonetti@> > >
> > *To:* amibroker@xxxxxxxxxxxxxxx <amibroker%40yahoogroups.com> > > > > *Sent:*
Thursday, May 08, 2008 5:42 PM > > > > *Subject:* [amibroker]
Re: Fitness Criteria that incorporates > > Walk Forward >
> > > Result > > > > > > > > There's
a simple example of this in the UKB under Intelligent > > > >
Optimization ... > > > > > > > > --- In amibroker@xxxxxxxxxxxxxxx <amibroker% 40yahoogroups.com>,
--- In amibroker@xxxxxxxxxxxxxxx, "Edward Pottasch" <empottasch@> wrote:
hi,
"While optimization can be employed to search for a good system via methods utilizing automated rule creation, selection and combination or generic pattern recognition"
anyone care to explain how this works? Some kind of inversion technique? Here is what I want, now give me the rules to get there :)
thanks,
Ed
----- Original Message -----
From: Fred
To: amibroker@xxxxxxxxxxxxxxx
Sent: Thursday, May 08, 2008 2:37 PM
Subject: [amibroker] Re: Fitness Criteria that incorporates Walk Forward Result

While optimization can be employed to search for a good system via methods utilizing automated rule creation, selection and combination, or generic pattern recognition, most people typically use optimization to search for a good set of parameter values. The success of the latter of course assumes one has a good rule set, i.e. a system, to begin with.
As far as your prediction is concerned ... I suspect there are lots of people, some of whom post here, who could demonstrate otherwise if they chose to ...
--- In amibroker@xxxxxxxxxxxxxxx, "brian_z111" <brian_z111@> wrote:
"IS metrics are always good because we keep optimizing until they are" (or words to that effect by HB), which is true.
It is not until we submit the system to an unknown sample -- either an OOS test, paper trading or live trading -- that we validate the system.
Discussing your points:
IMO we are talking about two different trading approaches, or styles (there is no reason we can't understand both very well).
One is the search for a good system, via optimization, with the attendant subsequent tuning of the system to match a changing market. If I understand Howard correctly, he is an exponent of this style.
It is my prediction that where we are optimising, using lookback periods, the max possible PA% return will be around 30, maybe 40, for EOD trading. Do we ever optimise anything other than indicators with lookback periods? If so, that might be a different story.
Bastardising Marshall McLuhan's famous line, I could say "the optimization is the method".
It is also possible to conceptually optimize the system, before testing, to the point that little, or no, optimization is required (experienced traders with a certain disposition do this quite comfortably, but it doesn't suit the inexperienced and/or those who don't have the temperament for it).
So, if a system has a sound reason to exist, and it is not optimized at all, and it has a statistically valid IS test, then it is highly likely to be a robust system, especially if it is robust across a range of stocks/instruments. The chances that this is due to pure luck are probably longer than the chance that an optimized IS test, with a confirming OOS test, is also a chance event.
However, if I had plenty of data, e.g. if I were an intraday trader, then I would go ahead and do an OOS test anyway (since the cost is negligible).
Re testing on several stocks:
If the system is 'good' on one symbol (the sample size is valid), and it is also good on a second symbol (also with a valid sample size), is that any different from performing an IS and an OOS test?
For stock trading, I call the relative performance on a set of symbols 'vertical' testing, as compared to 'horizontal' testing (where horizontal testing is an equity curve).
Yes, if an IS test, with no optimization, beat the buy & hold on every occasion (or a significant number of times) in a vertical test, and the sum of that test was statistically valid, and the horizontal test (the combined equity curve) was 'good', it would give you something to think about for sure. If some of the symbols in the vertical stack had contrary returns, compared to the bias of my system, I probably would start to get a little excited.
(I think perhaps you were alluding to something along those lines.)
BTW did you know that the Singapore Slingers play in the Australian basketball league?
Cheers,
brian_z
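A minimal sketch of brian_z's 'vertical' test: run the same unoptimized system across a set of symbols and count how often it beats buy & hold, treating each symbol as an independent sample. system_return() and buy_hold_return() are hypothetical stand-ins for the reader's own backtest results.

def vertical_test(symbols, system_return, buy_hold_return):
    """Fraction of symbols on which the unoptimized system beats buy & hold."""
    wins = sum(system_return(s) > buy_hold_return(s) for s in symbols)
    return wins / len(symbols)

# e.g. vertical_test(["BHP.AX", "CBA.AX", "WES.AX"], system_return, buy_hold_return)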