Howard,
Thanks for your post.
A very well written article.
Some contrary comments (first referencing some of your points and
then, later, some comments of my own):
> By trying many
> combinations of logic and parameter values, we will eventually find
> a system that is profitable for the date range analyzed.
You are assuming that all successful long-term traders arrived at
their system(s) by using this approach ... perhaps there are systems
out there that have no optimisable parameters and only one
underlying logic.
If so, they are likely to be based on primal market behaviour and
therefore persistent across markets and time, i.e. they would have to
be systems based on market characteristics that are relatively
stationary.
> testing the
> profitability of a trading system that was developed using recent data
> on older data is guaranteed to over-estimate the profitability of the
> trading system.
You know that in science (philosophy/logic) it only takes one
refutation to dethrone the current ruling hypothesis ...
if a long system, developed on the last 12 months of data (when the
market was in a bear rout), is then tested OOS on the prior year's
data, it will outperform the in-sample tests (the OOS test would be
conducted on bull market data).
> There is very little reason to expect that future behavior and
> profitability of well known trading systems will be the same as past
> behavior.
Do we have any empirical evidence of this?
First we would have to agree on a definition of 'well known',
make a list of the systems, and then perform massive testing.
To scrupulously prevent any bias creeping in, the testing would have
to be conducted live, and not on historical data.
We only know that they were successful 'in the past' by IS testing,
or by claim.
Do we have any, or many, certified performance records provided by
traders who claim to have had success with those 'well known' systems?
> Statistics gathered from in-sample results have
> no relationship to statistics that will be gathered from trading.
Not so.
They have every bearing on the stats gathered in trading, because only
systems with good IS performance make it to the OOS, or live trading,
phase.
OOS testing is only proceeded with because the analyst has every
expectation, or hope, that the good IS stats will be reproduced OOS.
In fact, it is the relative performance between the IS and OOS stats
that encourages us to proceed or abort.
Re the claim that trading the edge erodes the edge:
It is an assumption that all players are trading systems ... many are
not, in fact the vast majority are not ... and those who aren't control
vastly greater sums of money than those who do.
It is an assumption that all wins erode the system ... they could be
just lucky wins that the trader can't exploit long term, or
successful wins that the trader doesn't sustain, e.g. they might not
have the capital, use the correct staking, or maintain self-discipline
in the future.
Only a very small percentage of traders are successful, and hence
trading a successful system ... everyone else who is trading is just
making noise.
There are millions of system permutations, instruments, markets,
staking systems etc. ... how many successful traders would it take
to exhaust all of the successful permutations?
> The follow-on point, which relates to Monte Carlo analysis, is that
> rearranging the in-sample trades gives no insight into the future
> characteristics of the system. Yes, you can see the effect of taking
> the trades in different orders. But why bother? They are still
> in-sample results and still have no value.
If you are engineering an F1 racing car there is only track
testing/simulation (99% of the time) and racing performance (1% of
the time).
The more information you gather off the track the more likely you are
to perform on the track OR know what to adjust and when to adjust it
if performance doesn't meet expectations.
Do you know of any F1 teams that don't test/simulate?
Do you know of any F1 teams that test/simulate only one metric, or a
limited set of metrics?
What is testing if not 'massive examination of what-if scenarios'?
Re Monte Carlo and stationarity:
I haven't studied the subject in depth.
Mainly it has been used outside of trading, and in different ways
to the ways that traders use it ... possibly it would be best to
limit the trading discussion to 'trading simulation' and drop the MC
part of the name.
I have only found one book devoted to the subject, and I regret buying
it ... 'MCS and System Trading' by Volker Butzlaff.
I have also test-driven TradeSim and MSA.
Referring to those two trading apps:
TradeSim arranges the trades as a time series and randomly walks
through all permutations to simulate 'live trading' ... it is a money
management (MM) test of some kind, because equity is allocated prior
to the walk-through.
AB's backtester, in default mode, does this once.
I assume other methods could be used ... as per my previous XYZ
example:
- abcXdefghi with simultaneous trades on day 4,
- we can only achieve a finite set of permutations,
- the outcome of massive sampling will tend to the mean +/- variance,
- we can simulate the equity outcomes using random sampling of uniform
size, average the result per random series and then frequency-distribute
the means (the Central Limit Theorem predicts an approximately normal
distribution).
More than 30 selections per series, times some number of series, will
give an approximation of the possible equity outcomes (I'm not sure if
distributions obey the laws of sampling error ... I don't think they
do) -- see the sketch below.
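A minimal Python sketch of that sampling idea (my own toy illustration,
not TradeSim's or MSA's actual algorithm; the per-trade P&L values are
invented):

import random
import statistics

# Hypothetical per-trade P&L values -- the "balls in the bin".
trade_pnl = [120, -80, 45, -60, 200, -150, 30, 75, -40, 90]

def mean_of_random_series(pool, n_trades=30):
    # Average P&L of one random series of n_trades draws (with replacement).
    return statistics.mean(random.choice(pool) for _ in range(n_trades))

# Many random series -> frequency distribution of the series means.
means = [mean_of_random_series(trade_pnl) for _ in range(10000)]

print("mean of means: ", round(statistics.mean(means), 2))
print("stdev of means:", round(statistics.stdev(means), 2))
# A histogram of 'means' comes out approximately normal (Central Limit
# Theorem), even though the per-trade distribution is nothing like normal.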
TradeSim's real-life simulation assumes stationarity (the balls in the
bin, and their values, will remain constant into the future).
It also assumes that they will be selected from the bin in the same
order, or frequency to be absolutely correct (the order doesn't
change anything, only the frequency) ... to be precise about it, their
model assumes that if you have picked the worst historical loss out
of the bin 2 times in 1000 trades, then in the future you will not only
experience the same % as that worst loss but will also only experience
it 2 times in 1000.
MSA puts all of the balls in the bin and selects them in a way that
allows new combinations (frequencies) until all possible frequencies
are exhausted, i.e. they assume stationarity only in the values, not in
the frequency of the distribution (they treat the distribution as a
probability statement and not a constant or series of constants) ... to
be precise about it, they assume that if it can happen, it will.
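In Python terms, the difference between the two models, as I read them,
looks something like this (a sketch only, not either vendor's actual
code; the trade values are invented):

import random

trades = [120, -80, 45, -60, 200, -150, 30, 75, -40, 90]  # hypothetical P&L

def equity_curve(seq):
    # Running equity for one ordering of trades.
    eq, curve = 0, []
    for t in seq:
        eq += t
        curve.append(eq)
    return curve

def shuffled_runs(trades, runs=5):
    # Permutation model: every historical trade used exactly once, so both
    # the values and their frequencies are fixed -- only the order changes.
    curves = []
    for _ in range(runs):
        seq = trades[:]
        random.shuffle(seq)
        curves.append(equity_curve(seq))
    return curves

def bootstrapped_runs(trades, runs=5):
    # Bootstrap model: sample with replacement, so the values are fixed
    # but their frequencies are free to vary from run to run.
    return [equity_curve([random.choice(trades) for _ in range(len(trades))])
            for _ in range(runs)]

print([c[-1] for c in shuffled_runs(trades)])      # final equity identical every run
print([c[-1] for c in bootstrapped_runs(trades)])  # final equity varies run to run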
So, stationarity is the issue.
So many people are confusing variance with non-stationarity ... they
are being fooled by randomness, e.g.
we know that the trial records of fair coin tosses are stationary AND
they have a surprising range of outcomes (variance) ... this is very
easy to see if simulated and expressed as equity outcomes.
Therefore, in trading, we can at the least expect a tremendous
amount of variance ... no less than what can be expected from a coin
toss experiment ... this variance can be estimated using several
methods, simulation being the easy 'push the computer button and look
at the graph' method.
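For anyone who wants to push the button themselves, a few lines of
Python will show it (a toy simulation of a fair coin paying +1/-1 per
toss):

import random

def coin_equity(n_tosses=1000):
    # Equity curve of a fair coin paying +1 for heads, -1 for tails.
    eq, curve = 0, []
    for _ in range(n_tosses):
        eq += 1 if random.random() < 0.5 else -1
        curve.append(eq)
    return curve

finals = [coin_equity()[-1] for _ in range(2000)]
print("best final equity: ", max(finals))
print("worst final equity:", min(finals))
# The process is perfectly stationary with expectation zero, yet single
# runs routinely finish dozens of points above or below zero -- that
# spread is variance, not non-stationarity.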
So, the value of the simulation is in training the mind to accept
variance and mentally prepare for the worst case losses.
However, no matter how we design our systems, we cannot do anything to
stop non-stationarity.
Our system will get wiped out OOS if it is not robust OR if the
market changes.
If our system is robust, it will still get wiped out if the market
changes.
However, IMO, non-stationarity is not, or need not be, as pervasive
in trading as we think.
As I have said in the past, and already in this post ... many traders
are slain by the innocuous-looking Black Swan because of ignorance
about its behaviour.
Also, we are very lucky, in trading, to be able to have some control
over our dataset, i.e. our sample space is bounded by our stops and
other inherent factors in the design.
Example:
If we have a stop in place then we are reasonably unlikely to
experience losses beyond the stop + commission + slippage ... when a
stop failure does occur, it is very infrequent and not necessarily
career-destroying.
When we have a profit stop in place we can expect to get at least the
stop OR BETTER.
We can also, in some circumstances, buy a guaranteed stop loss.
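A toy Python sketch of that bounding effect (the stop level, friction
and return distribution below are all invented for illustration):

import random

STOP_LOSS_PCT = -5.0   # hypothetical initial stop, in percent
FRICTION_PCT  = -0.5   # hypothetical commission + slippage per trade

def raw_trade_return():
    # Unbounded hypothetical per-trade return, in percent.
    return random.gauss(1.0, 8.0)

def stopped_trade_return():
    # The same trade with losses clipped at the stop, plus friction.
    return max(raw_trade_return(), STOP_LOSS_PCT) + FRICTION_PCT

raw     = [raw_trade_return() for _ in range(10000)]
stopped = [stopped_trade_return() for _ in range(10000)]
print("worst raw trade:     %.1f%%" % min(raw))
print("worst stopped trade: %.1f%%" % min(stopped))
# The stopped distribution has a hard floor near -5.5%, so the worst
# "ball" that can go into the bin is known in advance (barring the rare
# stop failure or gap).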
In summary:
Because, as traders, we are statistically lucky, we can choose, to
some extent, which marbles to put in the bin.
We can absolutely limit the worst case, ensure we get at least the
best case, and then take everything in between that comes along.
Since the boundaries are limited, the range of possible values on the
balls is finite and, when expressed as possible mean P&L, will tend to
be normally distributed (Central Limit Theorem) ... the staging post on
the trail towards possible equity outcomes.
I think under those circumstances that the balls in the bucket,
collected over a long sample, are a pretty fair representation of
what we can expect in the future.
If they are not then we only have ourselves to blame for our poor
system design.
Nothing anyone can do can put an end to stock market non-stationarity,
but the challenge for the trader is to find ways to either absorb it
or anticipate it.
One important point was absent from your post.
Kelly and Vince et al. have proved conclusively that staking directly
and remarkably affects outcomes.
Given that, I can't understand why you, and many other commentators,
continue to draw inferences from backtests that use only a limited
range of portfolio allocations ... either don't involve equity
allocation at all OR test across all possible equity allocations
(if you do opt for the latter, wouldn't it be smarter to do that using
the short mathematical solution rather than the long, massive
optimisation approach? See the sketch below).
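By 'the short mathematical solution' I mean something like the simple
two-outcome Kelly formula. A Python sketch, with an invented win rate
and payoff ratio, comparing it to a brute-force scan:

import math

p = 0.55        # hypothetical probability of a win
b = 1.2         # hypothetical average win / average loss ratio
q = 1.0 - p

# The short mathematical solution: Kelly fraction f* = p - q / b.
f_closed = p - q / b

# The long way round: scan stake fractions and maximise expected log
# growth per trade, p*log(1 + f*b) + q*log(1 - f).
def expected_log_growth(f):
    return p * math.log(1.0 + f * b) + q * math.log(1.0 - f)

f_brute = max((i / 10000.0 for i in range(0, 9999)), key=expected_log_growth)

print("closed-form f*:", round(f_closed, 4))
print("brute-force f*:", round(f_brute, 4))  # agrees to the grid resolution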
The babbler's epilogue:
I guess it is appropriate that an informal book should have an
informal ending!
"Always look on the bright side of life" ...
... from Life of Brian :-)
--- In amibroker@xxxxxxxxxxxxxxx, "Howard Bandy" <howardbandy@xxx>
wrote:
>
> Greetings all --
>
> The posting was originally made by me to Aussie Stock Forums on
> February 2, 2009. But in light of recent discussions, I'll cross post
> it here.
>
> Some of my thoughts on using Monte Carlo techniques with trading systems.
>
> First, some background.
>
> Monte Carlo analysis is the application of repeated random sampling
> done in order to learn the characteristics of the process being studied.
>
> Monte Carlo analysis is particularly useful when closed form solutions
> to the process are not available, or are too expensive to carry out.
> Even in cases when a formula or algorithm can supply the information
> desired, Monte Carlo analysis can often be used.
>
> Here is an example of Monte Carlo analysis. Assume that a student is
> unaware of the formula that relates the area of a circle to its
> diameter. A Monte Carlo solution is to conceptually draw a square with
> sides each one unit in length on a graph, with the origin at the lower
> left corner. The horizontal side goes from 0.0 to 1.0 along the x-axis
> and the vertical side goes from 0.0 to 1.0 along the y-axis. Draw a
> circle with a diameter of one unit inside the square. The center of
> the circle will be at coordinates 0.5, 0.5. The Monte Carlo process to
> compute the area of the circle is to generate many random points
> inside the square (each point a pair of numbers with the values of the
> x-coordinate and y-coordinate being drawn from a uniform distribution
> between 0.0 and 0.999999), then count the number of those points that
> are also inside the circle. The ratio of the number of points inside
> the circle to the number of points drawn gives an estimate of pi/4.
> Running this experiment several times, each using many random points,
> allows application of statistical analysis techniques to estimate the
> value of pi to within some probable uncertainty. The process being
> studied in that example is stationary. The relationship between the
> area of the circle and the area of the square is always the same.
>
> When we are developing trading systems, the ultimate question we are
> most often asking is "What is the future performance of this trading
> system?" Recall that the measure of goodness of a trading system is
> your own personal (or corporate) choice. Some people want highest
> compounded annual return with little regard for drawdown. Others value
> systems that have low drawdown, or infrequent trading, or whatever
> else may be important. But, in all cases, the goal is to have the
> trading system be profitable. Assume that many of us are trading a
> single issue over a period of several years, and that the price per
> share at the end of that period is the same as it was at the beginning
> of the period, with significant price variations in between. If we
> ignore frictional costs -- the bid-ask spread of the market maker
> and the commission of the broker -- we are playing a zero-sum game.
> Those of us who make money are taking it from those who lose money.
> If, instead of the final price being the same as the beginning price,
> the final price is higher, then the price has an upward bias and more
> money is made than lost. This is when we all get to claim it was our
> cleverness that made us money. If the final price is lower, the price
> has a downward bias and more money is lost than made.
>
> The price data for the period we are trading has two components. One
> is the information contained in the data that represents the reason
> the price changes -- the signal component. The other is everything we
> cannot identify profitably -- the noise component. Note that there may
> be two (or more) signal components. Say one is a long term trend in
> profitability of the company, and the price follows profitability. Say
> the other is cyclic price behavior that goes through two complete
> cycles every month for some unknown but persistent reason. In every
> financial price series, there is always the random price variation
> that is noise. The historical price data that we see consists, in this
> case, of trend plus cycle plus noise. Each component has a strength
> that can be measured. If the signal is strong enough, relative to the
> noise, our trading system can identify the signal and issue buy and
> sell signals to us. If our trading system has coded into it logic that
> only recognizes changes in trend, the cycle component is noise as seen
> by that system. That is -- anything that a trading system does not
> identify itself, even though it may have strong signal characteristics
> when analyzed in other ways, is noise.
>
> Over the recent decades, analysis of financial data has progressed
> from simple techniques applied by a few people in a few markets using
> proprietary tools to sophisticated techniques applied by many people
> in many markets using tools that are widely available at low cost. The
> techniques used successfully by Richard Donchian from the 1930s, and
> Richard Dennis and William Eckhart in the 1980s, were simple. To the
> extent that the markets they traded did not have strong trends, every
> profitable trade they made was at the expense of another trader.
> Today, every person hoping to have a profitable career in trading
> learns about techniques that did work at one time. They are well
> documented and are often included in the trading system examples when
> a trading system development platform is installed.
>
> Assume that a data series is studied over a given date range. Using
> hindsight, we can determine the beginning price and the ending price.
> Continuing with hindsight, we can develop a trading system that
> recognizes the signal component -- some characteristic about the data
> series that anticipates and signals profitable trades. By trying many
> combinations of logic and parameter values, we will eventually find a
> system that is profitable for the date range analyzed. If we are lucky
> or clever, the system recognizes the signal portion of the data. Or,
> the system may have simply been fit to the noise. The data that was
> used to develop the system is called the in-sample data. If the system
> does recognize the signal and a few of us trade that system, while all
> the rest of the traders make random trades, those of us who trade the
> system will make a profit. On average, the rest lose. As more and more
> people join us trading the system, each of us earns a lower profit. In
> order to continue trading profitably, we must be earlier to recognize
> the signal, or develop better signal recognition logic and trade
> different signals or lower strength signals. By the time the date
> range we have studied has passed, most of the profit that could have
> been taken out of that price series using that system has been taken.
> Perhaps the future data will continue to carry the same signal in the
> same strength and some traders will make profitable trades using their
> techniques, or perhaps that signal changes, or perhaps so many traders
> are watching that system that the per-trade profit does not cover
> frictional costs.
>
> Data that was not used during the development of the system is called
> out-of-sample data. But -- important point -- testing the
> profitability of a trading system that was developed using recent data
> on older data is guaranteed to over-estimate the profitability of the
> trading system.
>
> Financial data is not only time-series data, but it is also
> non-stationary. There are many reasons related to profitability of
> companies and cyclic behavior of economies to explain why the data is
> non-stationary. But -- another important point -- every profitable
> trade made increases the degree to which the data is non-stationary.
> There is very little reason to expect that future behavior and
> profitability of well known trading systems will be the same as past
> behavior.
>
> Which brings me to several key points in trading systems development.
>
> 1. Use whatever data you want to develop your systems. All of the
> data that is used to make decisions about the logic and operation of
> the system is in-sample data. When the system developer -- that is you
> and me -- is satisfied that the system might be profitable, that
> conclusion was reached after thorough and extensive manipulation of
> the trading logic until it fits the data. The in-sample results are
> good -- they are Always good -- we do not stop fooling with the system
> until they are good. In-sample results have no value in predicting the
> future performance of a trading system. None! It does not matter
> whether the in-sample run results in three trades, or 30, or 30,000.
> In-sample results have no value in predicting the future performance
> of a trading system. Statistics gathered from in-sample results have
> no relationship to statistics that will be gathered from trading. None!
>
> The follow-on point, which relates to Monte Carlo analysis, is that
> rearranging the in-sample trades gives no insight into the future
> characteristics of the system. Yes, you can see the effect of taking
> the trades in different orders. But why bother? They are still
> in-sample results and still have no value.
>
> The Only way to determine the future performance of a trading system
> is to use it on data that it has never seen before. Data that has not
> been used to develop the system is out-of-sample data.
>
> 2. As a corollary to my comments above, that out-of-sample data Must
> be more recent than the in-sample data. The results of using earlier
> out-of-sample data are almost guaranteed to be better than the results
> of using more recent out-of-sample data. Consequently, techniques
> known as boot-strap or jack-knife out-of-sample testing are
> inappropriate for testing financial trading systems.
>
> So, when is Monte Carlo analysis useful in trading system development?
>
> 1. During trading system development. It may be possible to test the
> robustness of the system by making small changes in the values of
> parameters. This can be done by making a series of in-sample test
> runs, each run using the central value of the parameter (such as the
> length of a moving average) adjusted by a random amount. The values of
> the parameters can be chosen using Monte Carlo methods. Note that this
> does not guarantee that the system that works with a wide range of
> values over the in-sample period will be profitable out-of-sample, but
> it does help discard candidate systems that are unstable due to
> selection of specific parameter values.
>
> Note that this technique is not appropriate for all parameters. For
> example, a parameter may take on a limited set of values, each of
> which selects a specific logic. Such parameters, associated with what
> are sometimes called state variables, are only meaningful for a
> limited set of values.
>
> 2. During trading system development. It may be possible to test the
> robustness of the system by making small changes in the data. Adding a
> known amount of noise may help quantify the signal to noise ratio.
> When done over many runs, it may reduce (smooth out) the individual
> noise components and help isolate the signal components.
>
> 3. During trading system development. It may be possible to
> investigate the effect of having more opportunities to trade than
> resources to trade. If the trading system has all of the following
> conditions:
> A. A large number of signals are generated at exactly the same time.
> For example, using end-of-day data, 15 issues appear on the Buy list.
> B. The entry conditions are identical. For example, all the issues are
> to be purchased at the market on the open. If, instead, the entries
> are made off limit or stop orders, these can and should be resolved
> using intra-day data -- as they would be in real time trading.
> C. The number of Buys is greater than can be taken with the available
> funds. For example, you only have enough money to buy 5 of the 15.
>
> If your trading system development platform provides a method for
> breaking ties, use it. For example, you may be able to calculate a
> reward-to-risk value for each of the potential trades. Take those
> trades that offer the best ratio. AmiBroker, for example, allows the
> developer to include logic to compute what is known as PositionScore.
> Trades that are otherwise tied will be taken in order of PositionScore
> for as long as there are sufficient funds.
>
> Alternatively, Monte Carlo methods allow you to test random selection
> of issues to trade. My feeling is that very few traders will make a
> truly random selection of which issue to buy from the long list. I
> recommend quantifying the selection process and incorporating it into
> the trading system logic.
>
> 4. During trading system validation. After the trading system has been
> developed using the in-sample data, it is tested on out-of-sample
> data. Preferably there is exactly one test, followed by a decision to
> either trade the system or start over. Every time the out-of-sample
> results are examined and any modification is made to the trading
> system based on those results, that previously out-of-sample data has
> become in-sample data. It takes very few (often just one will do it)
> peeks at the out-of-sample results followed by trading system
> modification to contaminate the out-of-sampleness and destroy the
> predictive value of the out-of-sample analysis.
>
> One possibly valuable technique that will help you decide whether to
> trade a system or start over is a Monte Carlo analysis of the
> out-of-sample results. The technique is a reordering of trades,
> followed by generation of trade statistics and equity curves that
> would have resulted from each trade sequence. What this provides is a
> range of results that might have been achieved. Note that this
> technique cannot be applied to all trading systems without knowledge
> of how the system works. If the logic of the system makes use of
> earlier results, such as equity curve analysis or sequence of winning
> or losing trades, then rearranging the trades will result in trade
> sequences that could never have happened and the analysis is
> misleading and not useful. Also note that most of the results produced
> by the Monte Carlo analysis could also be developed from techniques of
> probability and statistics without using Monte Carlo techniques --
> runs of wins and losses, distribution of drawdown, and so forth.
>
> In summary --
>
> Monte Carlo analysis can be useful in trading system development. But
> only in those cases described in items 1, 2, 3, and 4 above.
>
> Rearranging in-sample trades has no value.
>
> Obtaining meaningful results from Monte Carlo techniques requires
> large numbers -- thousands -- of additional test runs.
>
> If you decide to apply Monte Carlo techniques, I recommend that they
> be applied sparingly, primarily to test robustness of a likely trading
> system as in numbers 1 and 2 above, not in the early development stages.
>
> On the other hand -----
>
> What is tremendously useful in trading system development is automated
> walk-forward testing. I believe that is the Only way to answer the
> question "How can I gain confidence that my trading system will be
> profitable when traded?" But that is the subject of another posting.
>
> Thanks for listening,
> Howard
>