
Re: when a strategy breaks the max drawdown




> If you measure the DD in % and you design a robust trading strategy,
> your maximum drawdown simply shouldn't be broken during the first
> stretch of real-time trading. 

Sorry, I can't agree with that.  Measuring DD in % is valuable, 
but it's not a panacea.  Stridsman claims the TS stats are 
useless because TS shows DD in $ instead of %, but all you have 
to do is measure your DD in $ for a fixed-position-size system.  
The example of a $10 DD with a $100 stock vs. a $200 stock is 
flawed, since it's comparing the $10 DD with different position 
sizes.  With a fixed position size, the system would buy 2x more 
shares at $100 than at $200, and you would see a 10% DD in the 
*position* for a $10 DD off $100 or a $20 DD off $200, just like 
you should.  With fixed position sizes, the dollar-denominated 
stats in TradeStation are just as valid as percentage-denominated 
stats, in spite of Stridsman's claims.  You just have to divide 
the $ DD by the $ position size to get the %.
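
To make that concrete with made-up numbers (a toy Python calculation, 
not anything out of Stridsman's book): with a fixed dollar position 
size, the same percentage move in the stock produces the same 
percentage drawdown in the position no matter what the share price is, 
and the $ figure converts to % with one division.

# Hypothetical fixed $10,000 position at two different share prices.
position_size = 10_000.0
for price, per_share_dd in [(100.0, 10.0), (200.0, 20.0)]:
    shares = position_size / price            # 2x the shares at $100 vs. $200
    dd_dollars = shares * per_share_dd        # drawdown on the position, in $
    dd_percent = dd_dollars / position_size   # identical to per_share_dd / price
    print(f"price ${price:.0f}: {shares:.0f} shares, "
          f"DD ${dd_dollars:.0f} = {dd_percent:.0%} of the position")

Both cases come out to a $1,000 drawdown that is 10% of the position, 
which is the point: once the position size is fixed, the $ stats and 
the % stats carry the same information.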

And neither $ nor % DD guarantees you won't exceed the level of DD 
you saw in your test.  Unless you have some amazing way of 
developing a "robust" trading strategy, I think the market will 
still find ways to pop your bubble.

Example:  I have a simple system that trades the ND.  With 
volatility-scaled position sizes, its equity curve is nearly 
straight all the way back to the inception of the ND in 1996, 
even though I only tuned it on 6-12 months of data.  I would 
consider that to be reasonably robust.  But back in late 2001 
something changed in the ND market, and in 11/01 that system just 
plain stopped working.  It went into an unprecedented drawdown 
and has never recovered to this day.  No amount of retuning 
allowed it to work in the post-11/01 market conditions.  
Fortunately I discovered that a change in bar size allowed the system 
to react more quickly to the market.  That new configuration seemed 
much more robust, and it performed well from 1996 through 2002.  
And even that setup is stumbling in the recent contracted market 
conditions.
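
For illustration, "volatility-scaled position sizes" can be done in 
several ways; one common approach (a sketch only, not necessarily my 
exact formula) is to size each trade so that one unit of recent 
volatility, say an N-bar average true range, risks a roughly constant 
dollar amount:

# Illustrative volatility-scaled sizing (hypothetical helper, Python).
def volatility_scaled_contracts(highs, lows, closes, risk_budget,
                                point_value, n=20):
    """Contracts sized so a one-ATR move ~= risk_budget dollars."""
    trs = [max(h - l, abs(h - pc), abs(l - pc))
           for h, l, pc in zip(highs[1:], lows[1:], closes[:-1])]
    atr = sum(trs[-n:]) / min(n, len(trs))    # simple N-bar ATR estimate
    return max(1, int(risk_budget / (atr * point_value)))

Keeping the dollar risk per trade roughly constant as volatility 
expands and contracts is one way to get the kind of flattened equity 
curve I described above.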

I can find minor tweaks that allow the system to handle recent 
conditions AND also handle all the wildly varying conditions 
we've seen over the past 7 years, even though 80% or more of the 
7-year test is out-of-sample.  And still the market manages to 
find new ways to hoodwink me, and I have to find new tweaks that 
include those new conditions.  

If you have some way of verifying that a system is "robust" 
enough to handle the unforeseen changes in the future, I'd like 
to hear more about it.  I don't remember Stridsman having any 
remarkable methods beyond his %-based approaches.

Frank Fleischer wrote:
> Max DD is a deceiving and worthless piece of poop.  It's deceiving
> because traders think that it's a valuable statistic.  It's
> worthless because it says nothing about anything since any Maximum
> implies a single unusual event that took place within a finite set of
> historical data.  Single historical events say nothing about the
> future. 

Exactly!  And remember that when you optimized your system, you 
carefully (even if unknowingly) filtered out any settings that 
had bad drawdowns IN THE PAST.  That doesn't mean you filtered 
out all settings that CAN have bad drawdowns IN THE FUTURE.  It's 
almost certain your system still has the potential to generate 
large drawdowns, even if it didn't in your backtest.
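
Here's a toy illustration of that selection effect (pure random 
numbers, Python, not a real system): generate fifty candidate equity 
paths from the same process, "optimize" by keeping the one with the 
smallest historical max DD, then count how often fresh paths from the 
very same process blow through that level.

import random

def max_drawdown(returns):
    equity = peak = dd = 0.0
    for r in returns:
        equity += r
        peak = max(peak, equity)
        dd = max(dd, peak - equity)
    return dd

random.seed(1)
# 50 candidate "parameter sets", all the same random process underneath.
candidates = [[random.gauss(0.05, 1.0) for _ in range(250)]
              for _ in range(50)]
best = min(candidates, key=max_drawdown)       # keep the smallest past DD
past_dd = max_drawdown(best)

# Fresh paths from the same process stand in for "the future".
future = [[random.gauss(0.05, 1.0) for _ in range(250)]
          for _ in range(1000)]
exceed = sum(max_drawdown(f) > past_dd for f in future) / len(future)
print(f"optimized past max DD: {past_dd:.1f}; "
      f"future paths that break it: {exceed:.0%}")

Because the "optimized" candidate was chosen precisely for its small 
past drawdown, the typical future path exceeds it, which is exactly 
the trap of reading max DD off a backtest.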

Gary