Re: [amibroker] Re: Benchmarking



Brian,

First, remember, I am only trading ES, the e-mini S&P 500.  That
simplifies my problem immensely.  I never have to worry about whether
the liquidity is there, or whether the slippage in my simulation is
realistic.  They are known quantities.  I also only look at 1 minute
bars, and do not hold overnight.  My trades rarely last over an hour,
so fundamental data is of little consequence.  Only the immediate news
has a non-technical impact, and that shows up in the chart within a
minute anyway.
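
Because those costs are fixed and known, they can be hard-coded into
the test instead of estimated.  A sketch with illustrative numbers
(the commission figure is hypothetical):

SetOption( "FuturesMode", True );
PointValue = 50;      // ES is 50 USD per index point
TickSize   = 0.25;    // so one tick = 12.50 USD
SetOption( "CommissionMode", 3 );              // $ per contract
SetOption( "CommissionAmount", 2.40 + 12.50 ); // commission + 1 tick slippage per side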

Because my trading universe is so limited in scope, I was able to
write my own backtester in AFL that runs only in indicator mode, with
the equity curve always showing in my chart.  I overlay a lot of
indicators and stats on my charts as desired to gain greater insights
into what is happening.
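
The skeleton is nothing fancy.  A minimal sketch of the idea
(placeholder MA-cross signals, not my real rules) looks like this:

// indicator-mode equity curve: recalculates on every refresh/Param change
fast = Param( "Fast MA", 5, 2, 50, 1 );
slow = Param( "Slow MA", 20, 5, 200, 1 );

Buy  = Cross( MA( C, fast ), MA( C, slow ) );
Sell = Cross( MA( C, slow ), MA( C, fast ) );

pos = Flip( Buy, Sell );  // 1 while long, 0 while flat
eq  = Cum( Nz( IIf( Ref( pos, -1 ), C - Ref( C, -1 ), 0 ) ) ); // equity in points

Plot( C, "Close", colorDefault, styleCandle );
Plot( eq, "Equity (pts)", colorBlue, styleLine | styleOwnScale );

Changing a parameter redraws the curve immediately, which is the whole
point of staying in indicator mode.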

So my secret is to simplify the unknowns to the smallest universe  
possible and specialize on just one kind of trade.  This gives me a  
fighting chance of actually understanding what I am doing.

The recent addition of static arrays was a great help to me in saving
the temporary results of previous runs for equity curve comparisons.
I manually control every aspect of parameter changes and the selection
of which curves to save.  I also take a lot of screen shots of my
charts for later comparison.
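
In AFL it boils down to a couple of static variable calls.  A minimal
sketch (the slot names are just examples):

eq = Cum( ROC( C, 1 ) );  // stand-in for the real equity array

// save the current curve to a named slot on demand
if( ParamTrigger( "Save equity as Run1", "SAVE" ) )
    StaticVarSet( "eqRun1", eq );

// overlay a previously saved curve; Nz() returns 0 if nothing saved yet
Plot( Nz( StaticVarGet( "eqRun1" ) ), "Saved Run1", colorRed, styleLine );
Plot( eq, "Current", colorBlue, styleLine );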

My best system so far is a reversion to mean system.  For this system,
volatility is good and trends are bad.  I am currently working on a
trend following version to see what I can do with that.  In that case
trends are good.  I am only talking about the action over one day of
course.

I don't rely on mathematical measures of straightness, because the ES
does not give equal opportunity all the time.  For a reversion to mean
system, volatility gives more opportunity.  I measure volatility and
apply that value to modify different parameters on the fly.
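
As a sketch of the mechanism (the numbers are illustrative, not my
settings):

vol     = ATR( 14 );             // short-term volatility in points
volNorm = vol / MA( vol, 390 );  // vs. roughly a session of 1-minute bars
stretch = 2 * volNorm;           // entry band widens as volatility expands

Buy  = Cross( MA( C, 20 ) - stretch, C ); // fade a dip below the mean
Sell = Cross( C, MA( C, 20 ) );           // exit on the return to the mean

The same trick works for targets, stops, and time-based filters.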

I start with the philosophy, and work towards the optimization from
there.  In the beginning, it helps to do it the other way around,
until you learn the relationships between the parameters.  Mostly it
is staring at charts with indicators overlaid and asking yourself
whether there is anything significant about the patterns you see, then
programming the ideas in and seeing what they give you.  It also helps
to manually trade your ideas some.  The perspective of what can and
will go wrong in the real world is quite eye opening.

BR,
Dennis

On Jun 20, 2009, at 12:50 AM, brian_z111 wrote:

> Thanks ... it's great to get some feedback about how people are
> actually going about their evaluation.
>
> I notice that you are using visual methods, rather than a metric, to  
> select your top model.
>
> For my first optimization I didn't look at any eq curves ... this is  
> not the default in AB's opt?
>
> How do I create and plot the relative curves for all of the possible
> combinations ... do you limit this to subsets to save time OR,
> perhaps, judging from your later comments, you don't search all
> candidates but start in a chart and then manually add plots to test
> the combinations you fancy?
>
>> I also block out the
>> best performing times to concentrate effort on the low performing
>> times as part of my process.
>
> I agree that this is an area worth focusing on.
>
> I did notice that AB only optimizes the 'concatenated' data, as a
> portfolio (at least as I understand it ... I couldn't find any
> guidance on this in the help manual or Howard's books .. trial and
> error seems to indicate the opt report is a 'one in all in' approach,
> so I couldn't differentiate under/over performance relative to under/
> over performing stocks without a special effort on my part).
>
>> (last Fall was
>> a great addition to the max volatility set).
>
> Yes. I noticed that if I run the MACrossover opt on monthly data,
> for the entire 1970-present range, then it still qualifies as a
> trendfollower (just). IOW the dip of 2008 wasn't quite big enough
> to take us out of an uptrend, from the long term perspective, so
> bull trend following systems still work in that timeframe/timeslice.
>
> I didn't report on it, but when I ran an MACrossover opt on the
> random dataset that I produced (designed for and run on EOD data)
> the spread of the optimized results is pretty tight and low (not
> significant) ... using AB's total portfolio approach .. this is
> reassuring, since the concatenated random datasets converge on the
> mean of zero returns (half the datasets underperform and half
> overperform, with low volatility) ... I wonder if that will change
> if I introduce some volatility ... I'll have to try it.
>
>> I overlay my equity curves on top of
>> each other and look for the straightest curve relative to the market
>> opportunities (volatility).
>
> Are you not confident of the metrics that measure straightness, or
> do you just prefer the eyeball method? (I found it sort of ironic,
> or something, that Howard also likes to eyeball the curves.)
>
> Volatility could be good or bad ... say it is all over the shop ..  
> don't you want trendiness with volatility?
>
> Are you measuring or eyeballing volatility?
> Do you filter, by volatility, to select the instrument to trade?
>
>> When rare
>> events hurt the performance, I zoom in on those trades and try to
>> understand what about the underlying algorithm lets it happen.  Then,
>> I think about how I can make my algorithm smarter in the general case
>> to reduce these types of problems.
>
> If another line of code removes a significant number of losers, is
> it likely to be generic going forward?
>
>> I do a lot more thinking than testing and optimizing
>
> That was my initial reaction to my optimizing experience ... that if
> we enter some parameters and send the computer off to search, then
> what comes back might not have a lot of meaning in terms of a
> trading philosophy, whereas if we work at developing a trading
> philosophy first, then later on opt can help develop/test systems
> derived from the philosophy.
>
> It takes a long time, and a certain skill, to develop a trading
> philosophy, but anyone can quickly learn to run an opt, with or
> without applying a lot of thought to what they are doing.
>
> --- In amibroker@xxxxxxxxxxxxxxx, Dennis Brown <see3d@xxx> wrote:
>>
>> Brian,
>>
>> When I optimize (and I am only working with ES), I watch the complete
>> equity curve over more than a thousand trades under all market
>> conditions of volatility and trends in both directions (last Fall was
>> a great addition to the max volatility set).  I also block out the
>> best performing times to concentrate effort on the low performing
>> times as part of my process.  I overlay my equity curves on top of
>> each other and look for the straightest curve relative to the market
>> opportunities (volatility).  In other words, the slope of the equity
>> increases with the volatility.  If an optimization step generates a
>> better profit by virtue of clipping off a big drawdown, or other rare
>> event, then it is over optimization and I reject it as just data
>> mining.  However, if an optimization step results in a steady,
>> consistent increase in outcome over all market conditions,
>> then I accept it as a fundamentally good optimization.  When rare
>> events hurt the performance, I zoom in on those trades and try to
>> understand what about the underlying algorithm lets it happen.  Then,
>> I think about how I can make my algorithm smarter in the general case
>> to reduce these types of problems.  I do a lot more thinking than
>> testing and optimizing --which I do by hand with parameters, so I  
>> know
>> what the relationships are intuitively after a while.  I do not
>> consider it cheating to have parameters that adjust themselves to
>> general market conditions like high or low volatility, etc., just as
>> long as the algorithms are very general and make logical sense
>> regardless of the data.
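>>
>> To give a flavour of the "blocking out" step: it is little more
>> than a date window filter, something like this sketch (the window
>> here is only an example, not one I actually use):
>>
>> // suppress trading inside a chosen "best performing" window
>> blocked = DateNum() >= 1081001 AND DateNum() <= 1081231; // Oct-Dec 2008
>> Buy  = Buy AND NOT blocked;   // no new entries inside the window
>> Sell = Sell OR blocked;       // flatten anything open when it starts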
>>
>> It is a slow process, but when I am done, I have an algorithm and
>> settings that are robust to whatever the market throws at me.   I
>> don't like being fooled by randomness!
>>
>> BR,
>> Dennis
>>
>> On Jun 19, 2009, at 8:10 PM, brian_z111 wrote:
>>
>>> OR
>>>
>>> ... is opt correctly flagging something about market behaviour or
>>> system trading that I don't understand?
>>>
>>>
>>> OR
>>>
>>>
>>> ... all of the above, none of the above, a combination of some of
>>> the above?
>>>
>>>
>>>
>>> Also, the MACrossover changed from TrendFollowing to MeanReversion
>>> when I added 1.5 years to a 30 year lookback ... not a 10 year or
>>> a 9 year lookback as per my previous posts.
>>>
>>>
>>> --- In amibroker@xxxxxxxxxxxxxxx, "brian_z111" <brian_z111@> wrote:
>>>>
>>>> So, in my first ever attempt at optimization I am presented with a
>>>> conundrum.
>>>>
>>>> If I opt MACrossover(C,X), and look down the list of top candidates
>>>> (using the AB default objective function), I see that historically
>>>> this system was both a 'trend following system' and a 'reversion to
>>>> mean' system.
>>>>
>>>> If I could travel back in time, armed with this info, should I
>>>> trade MA crosses as a 'trend follower', 'reversion to mean' or  
>>>> both?
>>>>
>>>> OR
>>>>
>>>> ...on the other hand am I failing to interpret the results
>>>> correctly ... is there something about opt that I don't
>>>> understand ... if I develop my opt skills will this help me solve
>>>> this riddle?
>>>>
>>>> OR
>>>>
>>>> ... is optimization itself somehow not providing me with a clear
>>>> understanding of how I should have followed the markets (for that
>>>> historical period)?
>>>>
>>>> OR
>>>>
>>>> ... is it something to do with MACrossovers ... perhaps other
>>>> systems are more amenable to optimization ... if so how can I
>>>> filter systems in advance to save me wasting my time opting non-
>>>> compliant systems?
>>>>
>>>> OR
>>>>
>>>> .... is the objective function the underlying cause of this
>>>> dilemma?
>>>>
>>>> OR
>>>>
>>>> ... I have done something wrong .. failed to optimize correctly or
>>>> used AB incorrectly?
>>>>
>>>> OR
>>>>
>>>>
>>>> ... is it the data... is there something wrong with the Yahoo data
>>>> I used?
>>>>
>>>> --- In amibroker@xxxxxxxxxxxxxxx, "brian_z111" <brian_z111@> wrote:
>>>>>
>>>>> What I am headlining here is that looking back, at 9 years of
>>>>> data, the best MA system was 'trendfollowing'; then, only 1.5
>>>>> years later, the best MA system was turned completely upside down
>>>>> into a 'reversion to mean' system ??????
>>>>>
>>>>> Say, what?
>>>>>
>>>>> I don't have an explanation for any of this but it is too early
>>>>> anyway ... I need to make a lot more observations before it is
>>>>> time to hypothesize.
>>>>>
>>>>> --- In amibroker@xxxxxxxxxxxxxxx, "brian_z111" <brian_z111@>  
>>>>> wrote:
>>>>>>
>>>>>> This is the first time I have optimized or used any kind of
>>>>>> synthetic data in AB ... so far I haven't used any sophisticated
>>>>>> methods to produce synthetic data either.
>>>>>>
>>>>>> I have only done a small amount of testing but I immediately
>>>>>> found three anomalies that might be worth further investigation.
>>>>>> I have already reported back on two:
>>>>>>
>>>>>> - why does an apparently worthless 'system' (plucked out of thin
>>>>>> air, unless my subconscious mind intervened) outperform on approx
>>>>>> 6% of stocks when those stocks are assumed to be correlated to a
>>>>>> fair extent ... chance? OR some property of the data that
>>>>>> correlation does not measure ... what property of the data would
>>>>>> favour that randomly selected 'system'?
>>>>>>
>>>>>> Note: if anything, I expected the system to test the assumption
>>>>>> that the MA is the trend, and I expected the system to 'fail'.
>>>>>>
>>>>>>
>>>>>> - why does the same system then outperform approx 50% of the
>>>>>> time when tested over randomly generated price series ... is it
>>>>>> a coincidence that the outperformance ratio, on random data, is
>>>>>> close to that expected for randomness? And why didn't the bull
>>>>>> 'system' outperform only on the random price series that
>>>>>> outperformed?
>>>>>>
>>>>>>
>>>>>> The third anomaly is:
>>>>>>
>>>>>> - I optimized the following on some Yahoo ^DJI data ... 10 years
>>>>>> EOD ... 9951 quotes ... 2/01/1970 until June 4th 2009.
>>>>>> Default objective (fitness) function = CAR/MDD.
>>>>>>
>>>>>> fast = Optimize( "MA Fast", 1, 1, 30, 1 );
>>>>>> slow = Optimize("MA Slow", 1, 1, 30, 1 );
>>>>>>
>>>>>> Buy = Cross(MA(C,fast),MA(C,slow));
>>>>>> Sell = Cross(MA(C,slow),MA(C,fast));
>>>>>>
>>>>>> - when I optimized on the total range I found that the top
>>>>>> values were inverted (as per Howard's examples in this forum and
>>>>>> his books) but when I left out the 2008/09 extreme market
>>>>>> conditions I found this did not hold.
>>>>>>
>>>>>> Why does such a relatively small change in the test range make
>>>>>> such a radical difference in the outcomes?
>>>>>>
>>>>>> Here are some of the reported metrics from AB .. notice that they
>>>>>> are similar in some cases and markedly dissimilar in others.
>>>>>>
>>>>>> I am not sure if that leads to a question but it certainly gets
>>>>>> my attention considering that I am in the business of engineering
>>>>>> reward/risk.
>>>>>>
>>>>>> Note that I am using ProfitFactor because it is typical in the
>>>>>> industry, but there are some question marks over whether it is
>>>>>> the best CoreMetric to use (I am investigating PowerFactor and
>>>>>> asymmetricalPayoffRatio, which might be more apt ... I hope to
>>>>>> post more on these metrics later).
>>>>>>
>>>>>> Opt1:
>>>>>>
>>>>>> using all data
>>>>>>
>>>>>> top model = CAR/MDD == 0.25 AND periods == fast 10, slow 7;
>>>>>>
>>>>>> NETT PROFIT 1749%
>>>>>> Exposure 44.9;
>>>>>> CAR 7.68;
>>>>>> RAR 17.07
>>>>>> MAXDD 31.57
>>>>>> RECOVERYFACTOR 2.64
>>>>>> PF 1.62 (WIN 68% * PR 0.75)
>>>>>> #TRADES 588
>>>>>>
>>>>>>
>>>>>> Opt2:
>>>>>>
>>>>>> using data range from  2/01/1970 to 31/12/2007 (that's Dec for
>>>>>> the benefit of timezoners).
>>>>>>
>>>>>> top model = CAR/MDD == 0.42 AND periods == fast 1, slow 6;
>>>>>>
>>>>>> NETT PROFIT 1921%
>>>>>> Exposure 54.51;
>>>>>> CAR 8.23;
>>>>>> RAR 15.09
>>>>>> MAXDD 21.67
>>>>>> RECOVERYFACTOR 5.07
>>>>>> PF 1.35 (WIN 40% * PR 2.03)
>>>>>> #TRADES 1049
>>>>>>
>>>>>>
>>>>>> I hope I reported the metrics correctly but anyone can replicate
>>>>>> my tests and report otherwise.
>>>>>>
>>>>>> I think it also demonstrates that if PoF (PowerFactor) is a
>>>>>> better CoreMetric than ProfitFactor it will need to be
>>>>>> standardized on a returns/time basis (choose your time period =
>>>>>> the basetimeframe you trade ... PoF is related to GeoMean per  
>>>>>> bar?)
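>>>>>>
>>>>>> (For concreteness, a rough, unvalidated sketch of what I mean by
>>>>>> GeoMean per bar, reading the equity composite that a backtest
>>>>>> leaves behind:
>>>>>>
>>>>>> eq = Foreign( "~~~EQUITY", "C" ); // portfolio equity from the last backtest
>>>>>> nBars = LastValue( BarIndex() );
>>>>>> gPerBar = ( LastValue( eq ) / eq[ 0 ] ) ^ ( 1 / nBars ) - 1;
>>>>>> Title = "GeoMean per bar = " + NumToStr( gPerBar, 1.6 );
>>>>>>
>>>>>> ... standardizing on a returns/time basis would start from there.)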
>>>>>>
>>>>>> --- In amibroker@xxxxxxxxxxxxxxx, "brian_z111" <brian_z111@>  
>>>>>> wrote:
>>>>>>>
>>>>>>> Following recent discussions on benchmarking and using rule
>>>>>>> based systems to engineer returns to meet 'clients' profiles,
>>>>>>> i.e. Samantha's MA(C,10) example, I did some follow up R&D with
>>>>>>> the intent of expanding the examination a little further via a
>>>>>>> zboard post.
>>>>>>>
>>>>>>> I may, or may not, get around to that so in the meantime I
>>>>>>> decided I would share a couple of things while they are still
>>>>>>> topical.
>>>>>>>
>>>>>>> I made up some quick and dirty randomly generated eq curves so
>>>>>>> that I could optimise MA(C,10) on them (out of curiosity).
>>>>>>>
>>>>>>> Also, out of curiosity, I decided to see how the example
>>>>>>> signal/filter code that I made up, as the study piece for
>>>>>>> Yofa's topic on benchmarking, would actually perform.
>>>>>>>
>>>>>>> Buy = Ref(ROC(MA(C,1),1),-1) < 0 AND ROC(MA(C,1),1) > 0
>>>>>>>       AND ROC(MA(C,10),1) > 0;
>>>>>>> Sell = Cross(MA(C,10),C); //no thought went into this exit and I
>>>>>>> //haven't tried any optimization of the entry or the exit
>>>>>>>
>>>>>>> By chance I noticed that it outperformed on one or two of the
>>>>>>> constituents of the ^DJI (Yahoo data ... 2005 to 2009) and to
>>>>>>> the naked eye the constituents all seem to be correlated to a
>>>>>>> fair extent over that time range.
>>>>>>>
>>>>>>> Also, to the naked eye, it outperforms on randomly generated
>>>>>>> stock prices around 50% of the time, and the outperformance
>>>>>>> doesn't appear to be correlated to the underlying (I haven't
>>>>>>> attempted to find an explanation for this).
>>>>>>>
>>>>>>> Here is the code I used to make up some randomly generated
>>>>>>> 'stocks'.
>>>>>>>
>>>>>>> As we would expect it produces, say, 100 price series with a
>>>>>>> concatenated mean of around zero (W/L = 1 and PayoffRatio == 1)
>>>>>>> etc.
>>>>>>> When plotted at the same time ... individual price series are
>>>>>>> dispersed around the mean in a 'probability cone' ... in this
>>>>>>> case it is a relatively tight cone because the method doesn't
>>>>>>> introduce a lot of volatility to the series.
>>>>>>>
>>>>>>> /*P_RandomEquity*/
>>>>>>>
>>>>>>> //Use as a Scan to create PseudoRandom Equity curves
>>>>>>> //Current symbol, All quotations in AA, select basetimeframe
>>>>>>> //in AA Settings
>>>>>>> //It will also create the curves if used as an indicator (add
>>>>>>> //the appropriate flag to ATC) but this is NOT recommended, as
>>>>>>> //it will recalculate them on every refresh.
>>>>>>> //Indicator mode is good for viewing recalculated curves
>>>>>>> //(click in whitespace)
>>>>>>> //CommentOut the Scan code before using the indicator code.
>>>>>>> //Don't use a very large n or it will freeze up indicator
>>>>>>> //scrolling etc
>>>>>>>
>>>>>>> n = 100; //manually input desired number - used in Scan AND
>>>>>>>          //Indicator mode
>>>>>>>
>>>>>>> //// SCAN //////////////////////////////////////////////////////
>>>>>>>
>>>>>>> Buy = Sell = 0;
>>>>>>>
>>>>>>> for( i = 1; i <= n; i++ ) // i <= n so that n curves are created
>>>>>>> {
>>>>>>> VarSet( "D"+i, 100 * exp( Cum( log( 1 + (Random() - 0.5)/100 ) ) ) );
>>>>>>> AddToComposite( VarGet( "D"+i ), "~Random" + i, "X", 1|2|128 );
>>>>>>> //Plot( VarGet( "D"+i ), "D"+i, 1, 1 );
>>>>>>> //PlotForeign( "~Random" + i, "Random" + i, 1, 1 );
>>>>>>> }
>>>>>>>
>>>>>>> /*
>>>>>>> //// PLOT //////////////////////////////////////////////////////
>>>>>>>
>>>>>>> //use the same number setting as for the Scan
>>>>>>>
>>>>>>> for( i = 1; i <= n; i++ )
>>>>>>> {
>>>>>>> PlotForeign( "~Random" + i, "Random" + i, 1, 1 );
>>>>>>> }
>>>>>>> */
>>>>>>>
>>>>>>> //// OPTIMIZE ///////////////////////////////////////////////////
>>>>>>>
>>>>>>> //use the filter to run on Group253, OR add the ~Random + i
>>>>>>> //PseudoTickers to a Watchlist and define by AA filter
>>>>>>>
>>>>>>>
>>>>>>> //fast = Optimize( "MA Fast", 1, 1, 10, 1 );
>>>>>>> //slow = Optimize("MA Slow", 4, 4, 20, 1 );
>>>>>>>
>>>>>>> //PositionSize = -100/P;
>>>>>>> //Buy = Cross(MA(C,fast),MA(C,slow));
>>>>>>> //Sell = Cross(MA(C,slow),MA(C,fast));
>>>>>>>
>>>>>>> //Short = Sell;
>>>>>>> //Cover = Buy;
>>>>>>>
>>>>>>> I also stumbled on this, which seems to have some relevance:
>>>>>>>
>>>>>>> http://www.scribd.com/doc/6737301/Trading-eBookCan-Technical-Analysis-Still-Beat-Random-Systems
>>>>>>>
>>>>>>>
>>>>>>> It contains a link to a site that has a free download of some
>>>>>>> RNG produced datasets.
>>>>>>>
>>>>>>> There hasn't been much discussion on using synthetic data in the
>>>>>>> forum ... Patrick recommended it for testing? OR
>>>>>>> benchmarking? ... Fred is against using it ("If we knew enough
>>>>>>> about the characteristics of the data, in the first place, to be
>>>>>>> able to create synthetic data then we would know enough to
>>>>>>> design trading systems to exploit the data's profile anyway", OR
>>>>>>> something like that).
>>>>>>>
>>>>>>> I was titillated enough by my first excursion into benchmarking
>>>>>>> with synthetic data to bring me back for some more.
>>>>>>>
>>>>>>
>>>>>
>>>>
>>
>


