While reading David Aronson's book _Evidence-based Technical
Analysis_,
I stumbled across a modified Monte Carlo permutation
(MCP) procedure that
compensates for data mining bias, assuming that
the "best" permutation of
rules was not selected with a directed search.
From Aronson's
perspective, this is good news. He views data mining
as a useful procedure
in the discovery phase of research. Plus, MCP
does not require
out-of-sample data. Thus it is possible to use more
data for mining and
still minimize data mining bias in test results.
The likely result: fewer
false positives for systems that are
worthless, and fewer false negatives
for systems that are valuable.
The paper with discussion and C# code is
here:
<http://www.evidencebasedta.com/MonteDoc12.15.06.pdf>.
Aronson's
book site, including a link to Amazon, is:
<http://www.evidencebasedta.com>.

Separately, I'm looking forward to the forthcoming books from Howard
<http://www.quantitativetradingsystems.com/>
and Ralph Vince
<http://tinyurl.com/2os2p7>.

Not
being a user of IO (or other AB add-ons), I have no idea if this
MCP
approach is already being used in the AB community. It looks
interesting to
me. MCP appears to require market data and trade data
from every run, not
simply the trade data. That suggests to me that
an AB add-on, rather than a
completely external program, would be a
more straightforward
implementation.
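
To make that concrete, here is a rough C# sketch of the kind of calculation I have in mind -- my own illustration, not Aronson's code and not anything from an existing add-on, with placeholder names. Every candidate rule is reduced to a bar-by-bar position series, the market's bar returns are permuted many times, every rule is re-scored on each permuted series, and the best score per permutation forms the null distribution against which the observed best rule is tested.

// Rough sketch only -- not Aronson's code and not an existing add-on.
// Each candidate rule is assumed to be reduced to a bar-by-bar position
// series (+1 long, 0 flat, -1 short); "returns" are the market's
// single-bar returns over the same period.
using System;

public class McpSketch
{
    static Random rng = new Random();

    // Mean per-bar return earned by a position series against the market.
    static double RuleScore(int[] positions, double[] returns)
    {
        double sum = 0.0;
        for (int i = 0; i < returns.Length; i++)
            sum += positions[i] * returns[i];
        return sum / returns.Length;
    }

    // Best score across all candidate rules for one set of returns.
    static double BestScore(int[][] rulePositions, double[] returns)
    {
        double best = double.MinValue;
        for (int r = 0; r < rulePositions.Length; r++)
        {
            double s = RuleScore(rulePositions[r], returns);
            if (s > best) best = s;
        }
        return best;
    }

    // Fisher-Yates shuffle of a copy of the market returns.
    static double[] Shuffled(double[] returns)
    {
        double[] copy = (double[])returns.Clone();
        for (int i = copy.Length - 1; i > 0; i--)
        {
            int j = rng.Next(i + 1);
            double tmp = copy[i]; copy[i] = copy[j]; copy[j] = tmp;
        }
        return copy;
    }

    // p-value for the observed best rule, with the null distribution
    // built by re-scoring *every* rule on each permuted return series.
    // This is why position data from every run is needed, not just the
    // winner's trade list.
    public static double BestRulePValue(int[][] rulePositions,
                                        double[] returns, int permutations)
    {
        double observedBest = BestScore(rulePositions, returns);
        int countAtLeastAsGood = 0;
        for (int k = 0; k < permutations; k++)
        {
            if (BestScore(rulePositions, Shuffled(returns)) >= observedBest)
                countAtLeastAsGood++;
        }
        return (countAtLeastAsGood + 1.0) / (permutations + 1.0);
    }
}

If the data-mining adjustment is not wanted, the loop over rules collapses to a single rule and the same code gives an ordinary permutation p-value.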

Aronson also refers to a patented bootstrap procedure that accomplishes
much the same thing: White's Reality Check, named for Halbert White,
the patent holder. Apparently WRC is not available commercially.

Best,