
[amibroker] Re: Interpretation of Robustness Pics

Thanks for the code and your thoughts.  On MCS (Monte Carlo
simulation), I think you should take another look.  If I were limited
to just one statistical tool, MCS would be it.  There's almost no
limit to what one can do with it, if applied properly.
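For instance, even a bare-bones resampling of per-bar returns puts a
distribution around what would otherwise be a single number.  A rough
sketch, untested; mtRandom() and native loops assume a reasonably
recent AmiBroker version, and the run/horizon counts are arbitrary:

ret   = ROC(C, 1) / 100;        // per-bar returns
runs  = 500;                    // number of resampled paths
hor   = 252;                    // bars per simulated path
worst = 1e9;
best  = -1e9;
for (r = 0; r < runs; r++)
{
    eq = 1;
    for (i = 0; i < hor; i++)
    {
        // pick a random historical bar, sampling with replacement
        j  = 1 + int(mtRandom() * (BarCount - 1));
        eq = eq * (1 + ret[j]);
    }
    worst = Min(worst, eq);     // track the extremes across paths
    best  = Max(best, eq);
}
Filter = Status("lastbarinrange");
AddColumn(worst, "WorstPath", 1.3);
AddColumn(best,  "BestPath",  1.3);

Run as an Exploration, that gives a feel for how wide the range of
outcomes can be even though every resampled return actually happened.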
--- In amibroker@xxxxxxxxxxxxxxx, "Fred" <fctonetti@xxxx> wrote:
> Overall the concept seems fine ...
> 
> It shouldn't be any secret from prior posts I've made, some of them 
> at more than medium temperature, that in my view systems that are 
> designed, built, tested and optimized on relatively small time 
> segments of market data can't possibly be robust except by the 
> greatest stroke of luck.  I've seen people make this mistake in the 
> past and get burned badly by it.  In this regard I think this is one 
> area where most walk forward testing falls down: it's typically set 
> up to look at some arbitrary, relatively short segment of recent 
> market data, hoping to capture the essence of what is currently 
> going on, but like a long term SMA it hopelessly lags and has a very 
> short memory.
> 
> So this is where I believe the strength of the process lies: you are 
> ensuring that your initial testing of candidate systems covers 
> bull/bear/flat market segments.
> 
> I was getting a little uneasy with your description of manually 
> nominating candidates, which I'd rather see done via some mechanical 
> method, whether it be %/bar or what have you.  When you ask about a 
> fast method to do this, I assume you mean faster than looping 
> through and multiplying each bar's ROC into the running product 
> while in trades, which is the slow part of the process, and then 
> taking the nth root of the result, where n is the number of bars you 
> were in trades.  If entries and exits are at Open or Close there are 
> other ways to do this quickly, e.g. for long side only trades ...
> 
> Buy     = DateNum() == 1030312;            // enter 12 Mar 2003
> Sell    = DateNum() == 1030908 OR          // exit 8 Sep 2003
>           DateNum() == 1030311;            // dummy Sell primes BarsSince
> 
> // in the trade when the most recent signal as of yesterday was a Buy
> OnBuy   = Ref(BarsSince(Buy), -1) < Ref(BarsSince(Sell), -1);
> 
> BarsIn  = IIf(OnBuy, BarsSince(Buy), 0);   // bars held so far
> Factor  = IIf(OnBuy, C / Ref(C, -1), 1);   // each bar's growth factor
> 
> // compound the factors, then take the nth root for %/bar;
> // the BarsIn > 0 guard avoids a divide-by-zero outside trades
> CumFact = IIf(BarsIn > 0,
>               100 * (exp(Cum(log(Factor))) ^ (1 / BarsIn) - 1),
>               0);
> 
> Filter  = 1;
> AddColumn(BarsIn, "BarsIn", 1.0);
> AddColumn(CumFact, "CumFact", 1.6);
> 
> The above is a little sloppy but demonstrates the idea, and you 
> should be able to check the Exploration against the backtest to see 
> that it produces more or less the same result.
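> 
> For comparison, the slow loop method described above might look 
> something like this (untested; it assumes the OnBuy array from the 
> code above and an AmiBroker version with native for-loops):
> 
> prod  = 1;
> nBars = 0;
> for (i = 1; i < BarCount; i++)
> {
>     if (OnBuy[i] == 1)          // count in-trade bars only, skip Nulls
>     {
>         prod = prod * (C[i] / C[i - 1]);   // compound each bar's ROC
>         nBars++;
>     }
> }
> slowPct = 0;
> if (nBars > 0)
>     slowPct = 100 * (prod ^ (1 / nBars) - 1);  // nth root as %/bar
> AddColumn(slowPct, "SlowPctBar", 1.6);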
> 
> As far as the rest is concerned, I don't particularly care for MCS, 
> but I'd also have to admit I haven't done anything with it in a long 
> time, so it's probably best I not comment further on it.
> 
> --- In amibroker@xxxxxxxxxxxxxxx, "quanttrader714" 
> <quanttrader714@xxxx> wrote:
> > Not picky, I enjoy the questions and peer review because it forces
> > me to rethink things I've taken for granted in enough detail to
> > (try to) answer coherently.  That can only be good, for me at least.
> > 
> > I think there's some confusion because this started out as an
> > explanation of *my personal* robustness criteria and bled over into
> > other topics such as issue selection, not that that's bad.  And oh
> > by the way, some of the robustness tools are great for *part* of
> > issue selection as I see it, but not all.  I think we're lost in
> > the trees, so let's step back and look at the forest.
> > 
> > You said you're trying to understand my thinking.  OK, but remember
> > you asked for it!  Let's take robustness and issue selection one at
> > a time.
> > 
> > 1.  Robustness.  You've seen criteria 1-5.  Criteria 1 and 2 use
> > the same 6 years of bull, bear and sideways data.  Criteria 3-5 use
> > *all* data that I have for those same stocks, which goes back to
> > the early 80s for most but sometimes much less, especially w/ small
> > caps.  So you could say I'm doing some sort of OOS during
> > robustness criteria 3-5 by looking at performance graphs and
> > simulated metrics based on all data, for stocks that were initially
> > selected by their performance over only 6 years of data.  But with
> > that said, I'm actually trying to use robustness to simplify and
> > get away from all the OOS stuff once and for all.  The simple idea:
> > find systems that work most of the time on most stocks and then be
> > concerned with the only OOS that really matters anyway, the future.
> > Keep things simple, use blunt tools like graphs and the bootstrap,
> > and try to minimize unhelpful human influence.  Want to know what
> > one of my "proprietary" end dates is for one of my 2 year periods?
> > My grandfather's 100th birthday.  Why?  I believe picking a date
> > that way is more robust than thinking too much about it and somehow
> > screwing up the range.
> > 
> > 2.  Issue selection.  The purpose is to find the best stocks to
> > trade with the robust systems from the previous (robustness) phase.
> > I personally divide issue selection into 2 steps, nomination and
> > confirmation.  Nomination is the creative part and Steve K could
> > give you much better input on this than me, such as rank ordering
> > stocks by %profit/bar (anyone know how to quickly do this for a
> > large list?).  But I am researching this and trying to do better.
> > Now once nominated by whatever metric (perhaps only a single
> > number) I like to confirm each candidate by running them through
> > full criteria 3-5.  That gives me a really good feel for potential
> > future performance.
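> > 
> > A rough, untested sketch of the kind of nomination explore I have
> > in mind, run over a watchlist and sorted on the %/bar column (the
> > MA-cross rules are placeholders, not a real system):
> > 
> > Buy  = Cross(C, MA(C, 50));            // placeholder entry
> > Sell = Cross(MA(C, 50), C);            // placeholder exit
> > 
> > InTrade = Nz(Ref(Flip(Buy, Sell), -1)); // long from the bar after Buy
> > barFact = IIf(InTrade, C / Ref(C, -1), 1);
> > nBars   = Cum(InTrade);                 // in-trade bars so far
> > 
> > // compound in-trade returns, then take the nth root for %/bar
> > PctBar  = IIf(nBars > 0,
> >               100 * (exp(Cum(log(barFact))) ^ (1 / nBars) - 1),
> >               0);
> > 
> > Filter = Status("lastbarinrange");      // one row per symbol
> > AddColumn(nBars, "BarsIn", 1.0);
> > AddColumn(PctBar, "PctPerBar", 1.6);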
> > 
> > Now to try to answer your question about the anchoring in 1971.
> > Note that I had a caveat about more data being better.  That is
> > always true for data from some processes but not always true for
> > market derived data.  And recall that here we're concerned with the
> > interaction between a system and a nonstationary time series.  To
> > make a long story short, there's a tradeoff between having enough
> > datapoints in the basket to provide robust estimators and having so
> > many that they're biased toward a distant past that won't re-occur.
> > I am, however, personally biased toward adding more recent data
> > because I believe they're more likely to be relevant.  The best
> > place to draw the line is literally a billion dollar question, but
> > my approach has been a robust one... simply go with what I have in
> > my database, which, so far, has proven sufficient.
> > 
> > Now aren't you glad you asked?
> > 
> > A few questions for you... now that you've seen the robustness
> > criteria, what do you think overall and what would you say is the
> > concept's greatest strength? Weakness?
> > 
> > --- In amibroker@xxxxxxxxxxxxxxx, "Fred" <fctonetti@xxxx> wrote:
> > > I'm not sure I understand the difference in why one would want
> > > to add data to the confirmation phase and not the selection
> > > phase, or is that your flavor of OOS testing?  That aside for a
> > > moment: if, as you say, criteria 3-5 work better with more data,
> > > then wouldn't one want to anchor some beginning point in time
> > > long ago, like 1971 or whatever, and use all data from then up
> > > through current, POSSIBLY leaving some segment out for OOS
> > > testing?  And then as time goes along include more recent data as
> > > in-sample for re-evaluation of the system's robustness and issue
> > > selection and confirmation?
> > > 
> > > Note: I'm not trying to be picky here, I'm only trying to
> > > understand your thinking, which in at least SOME ways appears to
> > > be getting closer to my own.
> > > 
> > > > When you say issue selection, are you talking about the
> > > > nomination or the confirmation part?  If confirmation, it
> > > > would be better to *add* the new data in as they become
> > > > available, because criteria 3-5 (when used for robustness or
> > > > confirmation) inherently work better (within limits) with more
> > > > data.  If nomination (for example using an algorithm), maybe,
> > > > but I doubt it, because I've never been able to wring much
> > > > advantage out of OOS techniques (with data that exists) and
> > > > that sounds like a twist on OOS if I understand you correctly.
> > > > But still worth looking into.  I've been surprised before.


