"We would"? Or you would do it this way? Nothing there looks like a
methodology I'd use or recommend.
--- In amibroker@xxxxxxxxxxxxxxx, "palsanand" <palsanand@xxxx> wrote:
> Hi,
>
> Forward or out-of-sample testing is one of the few tools we have
> when system testing to avoid the dangers of curve-fitting or
> optimization. For the purposes of illustration, say we are testing a
> new system over 10,000 bars of data. Typically we'd divide this data
> in half; call the first 5,000 bars segment A and the second 5,000
> bars segment B. We'd test the system over segment A, tweaking the
> parameters and logic until we get acceptable results, then run it
> over segment B. If the results on segment B are not acceptable, it's
> back to the drawing board and a repeat of the process: designing on
> A and verifying on B.
>
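> For illustration, a rough Python sketch of that split (untested;
> bars is a placeholder for the loaded 10,000-bar price series):
>
>     # Divide the data in half: design on segment A, verify on B.
>     bars = list(range(10_000))     # placeholder for real price data
>     half = len(bars) // 2
>     segment_a = bars[:half]        # tweak parameters and logic here
>     segment_b = bars[half:]        # run the frozen system once here
>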
> One thought is: how many times do we have to go through this very
> same process? If we're not having luck, would it make sense to
> reverse the procedure and design on B and verify on A? This way the
> seemingly more difficult data gets to speak first, and the odds of
> acceptable results on A are higher. By this thinking, we should
> always design on the more difficult data segment and verify on the
> easier data segment.
>
> This basic approach can be taken to the next level by breaking the
> data into not just two segments, but several. (Assuming each segment
> remains large enough to be statistically significant.) Say we've
> broken our data into 10 equal segments; then the above process
> involves designing on all ten, but verifying or judging on only the
> worst-performing three (?) segments.
>
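> For illustration, a rough Python sketch (score is a placeholder for
> whatever per-segment performance metric we settle on, e.g. net
> profit):
>
>     def worst_k_scores(bars, score, n_segments=10, k=3):
>         # Break the data into n equal segments and evaluate each one.
>         size = len(bars) // n_segments
>         seg_scores = [score(bars[i * size:(i + 1) * size])
>                       for i in range(n_segments)]
>         # Judge the system on its k worst-performing segments only.
>         return sorted(seg_scores)[:k]
>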
> Here we are employing a "minimax" solution: we want the worst-case
> results to be as strong as possible. But is this too strict? Perhaps
> there is some give and take involved. What weight should we put on
> the worst-performing segments versus the overall performance? If the
> overall performance is very strong, can we look the other way on the
> few segments with less attractive performance? Anyway, hopefully
> these insights and questions represent constructive food for thought.
>
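> One way to make that give-and-take concrete (again just a sketch;
> the 50/50 weight is an arbitrary illustration):
>
>     import statistics
>
>     def blended_score(seg_scores, w=0.5):
>         # w = 1.0 is the strict minimax view (worst segments only);
>         # w = 0.0 looks only at overall average performance.
>         worst3 = sorted(seg_scores)[:3]
>         return (w * statistics.mean(worst3)
>                 + (1 - w) * statistics.mean(seg_scores))
>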
> Regards,
>
> Pal
> --- In amibroker@xxxxxxxxxxxxxxx, "Dave Merrill" <dmerrill@xxxx> wrote:
> > I'm not trying to be argumentative, honest (:-)... I'm more than a
> > little sick of saying the same thing over and over, but I
> > j u s t   d o n ' t   g e t   i t .
> >
> > ------------------------------
> >
> > I fail to see the huge difference in principle between equity
> > feedback and backtesting.
> >
> > Let's start by assuming that the backtested performance of a
> > system and its parameters over some period of past data tells you
> > something about its future performance. It's not a perfect
> > predictor, but it's the best evidence we have. Does this seem like
> > a reasonable starting point? What alternative is there?
> >
> > If that's true, why is it better to do it only once? What
> > justification is there for picking one examination period over
> > another? Clearly market behavior will change continually after
> > that. Don't we need a way of working that looks at what's been
> > happening and evolves our response?
> >
> > Sounds like we examine performance up to some point and adjust,
> > trade with the best-choice system and parameters for a while, then
> > examine and adjust again later. Make sense? What alternative is
> > there?
> >
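> > This is essentially a walk-forward loop. As a rough Python sketch
> > (optimize and simulate stand in for whatever backtester is in
> > use):
> >
> >     def walk_forward(bars, lookback, step, optimize, simulate):
> >         # Every `step` bars: re-tune on the recent past, then trade
> >         # the chosen parameters on the next, unseen window.
> >         results = []
> >         for t in range(lookback, len(bars) - step + 1, step):
> >             params = optimize(bars[t - lookback:t])
> >             results.append(simulate(bars[t:t + step], params))
> >         return results
> >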
> > So then, how often do we re-examine performance history? To put it
> > differently, how long do we ignore any changes in market dynamics
> > that may or may not have occurred? Why would intermittently
> > refusing to look and respond improve system performance or
> > reliability?
> >
> > If that needs to be done, why not have the system itself do it, as
> > part of its inherent operation? Why is it better for us, as an
> > outside agent, to periodically run some separate tests, reach into
> > the internals of the system, and change stuff?
> >
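> > (One common form of this, sketched roughly: take new trades only
> > while the system's own simulated equity curve is above its
> > trailing moving average; the 50-bar length is an arbitrary
> > illustration.)
> >
> >     def equity_above_ma(equity, length=50):
> >         # True where the simulated equity curve sits above its own
> >         # trailing moving average; trade the signals only then.
> >         flags = []
> >         for i in range(len(equity)):
> >             window = equity[max(0, i - length + 1):i + 1]
> >             flags.append(equity[i] > sum(window) / len(window))
> >         return flags
> >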
> > Or should we just continue with the system and parameters we chose
> > at the beginning? Are they somehow more valid than what we'd
> > choose later, using the same backtesting methods but on a
> > different date range of data?
> >
> > ------------------------------
> >
> > I realize that even if it seems to make sense logically, this is
> > all a complete crock if no systems put together like this even
> > backtest well, never mind forward testing.
> >
> > But every time I think about abandoning this line of research, it
> > seems like the first thing I'd want to do with a new system would
> > be (let me guess) to test and possibly adjust it using data up to
> > some date, then run with it for a while after that and see if
> > equity growth is good. If it is, I'd want to lather, rinse and
> > repeat with other in- and out-of-sample data, to make sure that
> > wasn't coincidence.
> >
> > Sounds way too familiar to be a completely different animal.
> >
> > dave
> > From: Fred [mailto:fctonetti@x...]
> >
> > That IS what I was trying to say. I suspect it's because equity
> > feedback is like looking in a rear-view mirror: great for letting
> > us know where we were and how we could have adjusted the past to
> > make it better, but that's about it.