> BTW, I sell a lot of options, but it is all calculated manually -- AB
> is not involved with that at all.
Now that's what I call diversification!
I still have a lot of work to do to come up with a model of diversification that suits today's markets.
Thanks.
--- In amibroker@xxxxxxxxxxxxxxx, Dennis Brown <see3d@xxx> wrote:
>
> Brian,
>
> > You haven't mentioned any specific metrics ... perhaps you want to
> > keep them secret or more likely you don't use any, or many
>
> I don't use complex metrics as such, just simple stats. The equity
> curve is my metric as I described how I use it to optimize. I have
> seen enough of these to know what I am looking for, and what is wrong
> when it looks certain ways. That plus a few stats about opportunity,
> # trades/day, and individual trade DD are all I use.
>
> BTW, I sell a lot of options, but it is all calculated manually -- AB
> is not involved with that at all.
>
> As far as underperforming the max potential goes, that is my goal. I
> am looking for the subset of the highest-probability trades to take
> without giving up too much opportunity. The more opportunities I
> take, the worse the average edge. There is even a point where
> taking more trades reduces the return. Taking only the very highest
> probability trades gives great returns per trade, but less total
> profit. There is a sweet-spot tradeoff between the two.
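>
> A toy way to see the tradeoff in AB (the rules and the threshold here
> are made up for illustration, not my actual ones):
>
> // loosen/tighten an entry filter and watch avg profit/trade vs total profit
> th = Optimize( "Stretch threshold", 2, 0.5, 4, 0.5 );
>
> m       = MA( C, 20 );
> stretch = abs( C - m ) / ATR( 14 );   // stand-in 'quality' measure for a setup
>
> Buy  = Cross( C, m ) AND Ref( stretch, -1 ) > th;  // higher th = fewer, 'better' trades
> Sell = Cross( m, C );
>
> // in the optimizer output, compare # trades, avg profit/trade and net
> // profit across th values -- net profit usually peaks between the extremes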
>
> IMHO, anyone who ignores the slippage and commission factors in
> backtesting lives in a fantasy world. I would not like to see the
> business plan for a delivery service that does not take the cost of
> fuel and maintenance of the vehicles into account. Then there is
> accident insurance, etc.
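>
> For ES in AB, accounting for them is only a few lines -- something
> along these lines (the figures here are placeholders, not my actual
> numbers):
>
> // charge a commission and assume a tick of slippage each way
> SetOption( "CommissionMode", 3 );       // 3 = $ per share/contract
> SetOption( "CommissionAmount", 2.50 );  // placeholder commission per contract per side
> slip = 0.25;                            // one ES tick of assumed slippage
> BuyPrice   = BuyPrice   + slip;
> SellPrice  = SellPrice  - slip;
> ShortPrice = ShortPrice - slip;
> CoverPrice = CoverPrice + slip;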
>
> I do ignore the cost of human errors, machine failures, communication
> line failures, brokerage house bankruptcies, illness, vacations, badly
> timed restroom breaks... because I don't know how to quantify their
> effects.
>
> BR,
>
> Dennis
>
> On Jun 20, 2009, at 10:19 PM, brian_z111 wrote:
>
> > Well there doesn't seem to be much interest in benchmarking, or the
> > philosophy of optimization .... perhaps a few are just observing, or
> > the forum doesn't know what to make of it all?
> >
> > Our discussion has moved over a little bit ... it is more to do with
> > style and finding the 'holy grail' of trading, which some say
> > doesn't exist, but, like Santa Claus, why spoil a good fairytale
> > that so many get so much enjoyment from.
> >
> > This has become slightly more personal but then again I believe in
> > transparency and sharing our trading insights (if RalphVince,
> > Markowitz etc hadn't published where would I be?) and surely our
> > observations must have relevance to a certain number of traders.
> >
> > I think I am preaching to the converted now.
> >
> > Right from the start my style was always built in and I was already
> > biased towards it .... I just didn't become consciously aware of
> > what my style was until I had a fair amount of experience under my
> > belt.... as soon as I became consciously aware of what worked best
> > for me the rest was easy and still is.
> >
> > We cannot change our natural style, only discover and enhance it.
> >
> > This forum is naturally biased towards algorithmic trading etc and I
> > have deliberately infringed on the OT boundaries from time to time,
> > to stick up for my style and my kind of people (discussions on the
> > Psychology of Trading, Discretionary Trading, Intuition etc) ...
> > continual discussion of code examples etc gets a little boring and
> > somewhat inhuman at times, so I like to inject a little human
> > interest (small personal exchanges with people I like ... Haiku
> > poetry and suchlike).
> >
> > The interesting thing is that even though I understood myself
> > reasonably well, before I started trading, the journey has still
> > been one of immense self-discovery, which is why I don't confine my
> > discussion to just 'learning to code'.
> >
> > In order to simplify my perspective, for the benefit of others, I
> > have said that we have our psychological typologies that we can
> > sort everyone into, generally speaking.
> > I could use some different classifications but for consistency I
> > have stuck to one simple model i.e. I have told the forum that my
> > primary psychic function is Intuition and my secondary is Logic (of
> > course this is a gross simplification but we have to start somewhere).
> > I have also told the forum that Intuition has been poorly
> > represented by the scientific community and modern culture (we use
> > the term intuition as if it is an inferior function).
> >
> > IMO intuition is more like super rationality (the rational process
> > goes on, almost unconsciously, at lightning speed and it uses
> > processes akin to 'fuzzy logic' and more).
> >
> > I also warned the forum that intuition is not fortune telling or
> > 'reading the tealeaves' and so it is not infallible and like the
> > 'scientific method' the outcomes depend on the practitioner's skill
> > and knowledge with the mode (it is also very energy dependent, and
> > self-aware/directed so it has an auto failsafe when the energy pack
> > is drained).
> >
> > So, when 'visual' traders say they just look at the charts some
> > people assume that the conclusions that we arrive at are all just
> > 'subjective' and hence irrational.... in some cases that may be true
> > but not necessarily in all cases.
> > Some people think they are using their intuition when they are just
> > jumping to irrational conclusions... they call this their intuition
> > and this is why it gets such a bad name.
> >
> > In short, science, and our community, are greatly underestimating
> > the capacity of the human mind (far superior to any computer built
> > so far).
> >
> > I think people also struggle with the possibility that anyone can be
> > highly creative (artistic) and totally logical (objective) all in
> > the one package but this is possible (we just don't do them both at
> > the same time although we can quite easily switch back and forth
> > between the modes ... but not at the drop of a hat).
> >
> > So, without analysing it all, our styles are quite similar, except
> > that from what you say, although a few snippets don't allow me a
> > full analysis, you are even more of a 'visual' trader than I am.
> >
> > I am still developing, and learning, and I have more 'good ideas'
> > than I will ever need ... you are probably the same.
> >
> > However I will go through your comments with a view to sharing with
> > you a couple of my undeveloped ideas that seem relevant to your
> > interests.... just in case you haven't thought of them (no two
> > individuals are identical).
> >
> >
> > - I call eSignal 'eS' .. I get you now on the trading-ES part
> > - I said before in this forum that if we excluded clip trading
> > (arbitrage) and trades that benefit from no 'volatility', like
> > selling options, then we are all 'trend traders' i.e. if we can find
> > the trend (and I tend to the view that the trend doesn't exist ...
> > if I model a trend, for any particular purpose, I always assume that
> > it won't exist for long).
> > - I also assume that the market doesn't have much of a memory and so
> > nothing lasts (I consider that this models the markets as being
> > dynamic i.e. dynamism is a fundamental quality of the markets)
> > - however, we have to deal with the reality of different timeframes,
> > so each timeframe will have a memory of different duration (in time
> > but not in bars)
> > - so generally I consider the 'holographic' like inter-relationship
> > of the various timeframes as being a second fundamental quality of
> > the markets
> > - I use a wave concept (cycles are a third fundamental quality of
> > the markets) but I don't use Fibonacci or Elliott etc because I don't
> > use a set magnitude and frequency ... uncertainty is a fourth
> > fundamental quality of the markets (so I only have a probabilistic
> > expectancy as to what the frequency and magnitude will be).... for
> > that reason Fib retracements are no use to me.
> > - from our recent discussions on randomness etc I am considering the
> > proposition that a certain amount of chaos needs to be injected into
> > financial models ... this is a new and developing idea for me ..
> > probably the creative mathematicians who are around are right onto
> > this ... I am tentatively proposing that there has been a burgeoning
> > use of computer analysis since around 2000 and that this is
> > producing some chaotic influences not previously seen .... post
> > approx 2000 the markets are different?....(computers are binary ...
> > unsynchronized binary events may be an approximate model of limited
> > and man-made chaos ... I am only considering this within the context
> > of the financial markets) ... if I am correct we might be in for a
> > rough ride for a decade or two .. eventually financial theorists,
> > and the lawmakers, will have to catch up and change the model somehow
> > (regulation or free-market driven changes or what?) ... unfortunately
> > the normative view always lags and opposes the Sigma 3+ position
> > (of course conservatism is necessary to prevent
> > unbounded chaos in our culture ... ironic isn't it that this
> > mechanism is also the cause of some unnecessary pain but that's life).
> >
> > - in the above scenario 'old world models' will not do as well as
> > current models (due to computationally driven investing/trading
> > 'old' may well only be 10 years ago, or less).
> >
> >> I never have to worry about if the
> >> liquidity is there and if the slippage is accurate to my simulation.
> >
> > I am still considering the uses that I can find for high liquidity
> > and low liquidity ... everything has its pros and cons?
> >
> > In my 'world view' the market has endemic behaviours and I am
> > interested in finding out everything about them that I can.
> > If you factor in 'slippage' you might miss the endemic behaviours in
> > lower timeframes (where slippage and commissions are a greater % of
> > the move) ... i.e. including commissions and slippage etc might kill
> > some theoretically good trades and you will miss the connections
> > that exist from one timeframe to another ... IMO we should do our
> > thinking, and design our prototypes, without including C&S in our
> > models ... only include C&S when we go into production testing (I
> > got the terms prototype and production from bruceR .. I really like
> > them ... I have recently found a new love of words ... very
> > important things because they represent an idea ... Yuki will be
> > pleased to see I am paying more attention to the words I use ... now
> > I only need to learn some proper grammar and I will almost be there).
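> >
> > In AFL terms the switch could be as simple as something like this
> > (just a sketch of the idea, and the cost figures are made up):
> >
> > // prototype vs production: only charge costs in production testing
> > production = ParamToggle( "Mode", "Prototype (no C&S)|Production (with C&S)", 0 );
> >
> > if( production )
> > {
> >     SetOption( "CommissionMode", 3 );       // $ per contract/share
> >     SetOption( "CommissionAmount", 2.50 );  // placeholder figure
> >     slip = 0.25;                            // placeholder slippage per side
> > }
> > else
> > {
> >     SetOption( "CommissionAmount", 0 );
> >     slip = 0;
> > }
> >
> > BuyPrice  = BuyPrice  + slip;
> > SellPrice = SellPrice - slip;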
> >
> >> fundamental data is of little consequence. Only the immediate news
> >> has a non-technical impact, and that shows up within one minute in
> >> the chart anyway.
> >
> > Fundamental and value investing is a valid style ... look at Buffett,
> > he has proved the success of value investing over and over ... of
> > course even Buffett makes mistakes ... it is just not our style....
> > we can't eat all of the cake on the table so let's leave fundamental
> > analysis to the rightful owners.
> >
> > Instrument selection is important though .. I have to replace
> > fundamental analysis, as a way to choose the instrument, with
> > something else .. haven't worked hard on that aspect (note that
> > Reefbreak has his algorithm to sort for his underlyings ... I am
> > sure it is not an algorithm that sorts by fundamentals).
> >
> > Definitely I am a classical technical analyst in that I believe I
> > can only think fast enough to react to price action (even with a
> > computer doing the thinking) and cannot handle all of the data
> > input and computations required to stay ahead of the news that makes
> > the moves ... however last year when the big crisis was on I
> > experimented with some news following (economists newsletters etc)
> > and I actually did quite well at picking how it would all unfold ...
> > after it was all over I just left it at that because if I try to
> > master several styles I will end up being a 'jack of all trades and
> > master of none' ... I did enough to prove to myself that it can work.
> >
> >> Because my trading universe is so limited in scope,
> >
> > I also have a tendency to filter everything down to elemental
> > simplicity .. works for me.
> > I also believe in detailed analysis of a very small patch of the
> > available trading turf.
> >
> > Once before in the forum I talked about the fact that I tentatively
> > believe that there is only ONE trade and that we are all chasing it
> > with different degrees of efficiency (of course we have different
> > markets and timeframes to chase it in).
> >
> > I have isolated market behaviour to some simple repetitive
> > behaviours - they are persistent in most timeframes and all
> > instruments I have tested.
> >
> > I was only working down to a 1min base (with eS == eSignal data) ...
> > I don't like to work lower than that because at tick level etc my
> > computer starts to misbehave ... too much number crunching ... I
> > don't really want to get into buying and managing the fastest
> > computer on the planet so I keep away from that level ... some
> > people keep going to faster and faster (lower) data levels simply
> > because they aren't successful at one level and think the answer is
> > in getting faster and faster == wrong! ... also I am sure some
> > people don't understand how computer or software gridlock kills
> > the creative effort .. hence my dislike for technical software
> > glitches or computer glitches ... I keep it simple to allow the
> > intuitive energies to function.
> >
> > Re timeframes and persistence patterns:
> >
> > - initially I tested down to a 1min base with 5 min selected
> > - I found my endemic patterns (I don't call them fractals because
> > they might not be ... I always own the ideas and choose my own
> > nomenclature, rather than become a Mandelbrot clone for example) ...
> > I found they are persistent from monthly (albeit this is a limited
> > dataset) down to 5 mins (very large datasets) ... they started to
> > break up at 1 min .. I thought this might be something to do with the
> > 'snapshot' nature of the basetimeframe i.e. we should never work at
> > the basetimeframe ... always set the base a little below where you
> > want to work and then compress the underlying to where you want to
> > be .. sure enough when I downloaded 5 sec and tested in 15 sec the
> > patterns are back (a rough sketch of the base/compress idea follows).
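> >
> > In rough AFL (the intervals and the MA here are only examples, not
> > my actual study):
> >
> > // assume a 5-second base interval in the database; work in 15-second bars
> > TimeFrameSet( 15 );                  // custom intervals are given in seconds
> > ma15 = MA( C, 20 );                  // whatever you compute in the working timeframe
> > TimeFrameRestore();                  // back to the 5-second base
> > ma15 = TimeFrameExpand( ma15, 15 );  // align the compressed result to the base bars
> >
> > Plot( C, "5 sec close", colorDefault, styleLine );
> > Plot( ma15, "MA(20) of 15 sec bars", colorRed, styleLine );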
> >
> > Note that I have something funny with my low level eS RT databases
> > so that could have contributed to this effect... timestamp minimum
> > in my fast eS databases is 5 sec ... I haven't followed up to see
> > if it is an eS or an AB thing ... probably eS because AB is good to
> > 1 sec? ... I am subscribed to NinjaTrader data so maybe I didn't
> > read the fine print and their eS variant is snapshot data ... perhaps
> > I need to go back to the standard eS subscription.
> >
> >> I was able to
> >> write my own back tester in AFL that only runs in indicator mode with
> >> the equity curve always showing in my chart.
> >
> > Yes, I also 'backtest' outside of the 'backtester' ... for one thing
> > I don't like AB's BT .. it is not an intuitive model (nothing wrong
> > with it technically but I can't see the wood for the trees when I
> > use it).
> >
> > It is funny that I left Metastock for AB so that I could backtest
> > with customized stops (that was my only reason at that time) ... but
> > since I have owned AB I have hardly ever backtested ... as my
> > trading philosophy evolved, and I used it to make better and better
> > predictions/systems, I was able to think of much smarter ways to
> > perform my objective testing .. recently in AB I repeated some
> > testing in a week that took me 1 year in Metastock ... all because
> > AB is faster and the logic behind my algorithm was light years ahead
> > of what I did in 2004.
> >
> > I have talked a bit, in the forum, about BT design and effective
> > metrics (CoreMetrics) ... maybe a few get it ... not sure if there
> > is much point in going further with that type of discussion (StDev1
> > always dominates any subset of our culture and the dominant culture
> > itself).
> >
> > Anyway, I am quite well aware that my posts are long, ambiguous and
> > dispersed in time and location .. in fact my communication style is
> > almost symbolic at times ... I have considered all along that I am
> > mainly writing 'for my own' and that they can interpret the code.
> >
> >> I overlay a lot of
> >> indicators and stats on my charts as desired to gain greater insights
> >> about what is happening.
> >
> > You haven't mentioned any specific metrics ... perhaps you want to
> > keep them secret or more likely you don't use any, or many ... my
> > secondary psychic function is logic so I drop onto that wavelength
> > as soon as I come out of brainstorming or sometimes interpose
> > both... hence my interest in stats and my sort of compatibility with
> > Howard and Patrick ...except I don't love stats/quants as much as
> > them and my quant is always coloured with intuitive stuff (I make it
> > as easy as eating mom's apple pie) .. so far every stat I have used I
> > have boiled down to a Reader's Digest version of the academic function.
> >
> > You might benefit from following my ruminations on stats e.g. have
> > you thought about how adaptiveStDev (thanks Herman, bruceR, RZ) can
> > work with median, mode, skew etc to model your trade dists, as you
> > walk forward ... all done in arrays with no binning required
> > (assuming RV's math is good and it hasn't failed me anywhere so
> > far)... did you see the link I posted from the German site about how
> > these moments model our system, via the trade series ... I still
> > have some work to do on this but I am slowly moving along with a
> > rebaking of the evaluation pie.
> >
> > I am just checking that you are not underdone on stats ... they are
> > great as long as they are restated into traderspeak.
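> >
> > To make it concrete, the sort of thing I have in mind is below (only
> > a sketch ... the return series, lookback and skew formula are rough
> > stand-ins, not RV's exact math):
> >
> > // rolling moments of a return series, all in arrays, no binning
> > ret = ROC( C, 1 );                  // stand-in for a trade-return series
> > n   = Param( "Lookback", 50, 10, 500, 10 );
> >
> > m    = MA( ret, n );                // rolling mean
> > sd   = StDev( ret, n );             // rolling (adaptive) standard deviation
> > skew = Sum( ( ret - m )^3, n ) / ( n * sd^3 );  // rough rolling skewness
> >
> > Plot( m, "mean", colorGreen, styleLine );
> > Plot( sd, "stdev", colorBlue, styleLine | styleOwnScale );
> > Plot( skew, "skew", colorRed, styleLine | styleOwnScale );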
> >
> > I hope you got something out of my ruminations on BiSim and
> > CoreMetrics .. anyway I will probably post a few more bits and
> > pieces on that subject to help the very interested get to the meat
> > in the sandwich .....but once again anyone who is going to get it
> > should have got it by now.
> >
> >> The recent addition of static arrays was a great help for me in saving
> >> the temp results of previous runs for equity curve comparisons.
> >
> > Yep, that was the idea.
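> >
> > Roughly how I picture it being used, though I haven't tried it myself
> > yet (the rules below are only stand-ins to produce an equity array):
> >
> > // save this run's equity curve and overlay the previously saved one
> > Buy  = Cross( C, MA( C, 20 ) );  // stand-in rules
> > Sell = Cross( MA( C, 20 ), C );
> > eq   = Equity();                 // single-symbol equity of the rules above
> >
> > if( ParamTrigger( "Save this curve", "Save" ) )
> >     StaticVarSet( "SavedEq" + Name(), eq );   // keep this run for comparison
> >
> > prevEq = StaticVarGet( "SavedEq" + Name() );  // Null until something is saved
> >
> > Plot( eq, "current equity", colorGreen, styleLine );
> > Plot( prevEq, "saved equity", colorRed, styleLine );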
> >
> > I haven't tried the function out yet ... I don't take the betas ...
> > too much administrative effort for me .. I wait for the upgrades
> > that come with some help ... I think Tomasz underestimates the
> > reliance I (certain types of people) have on help notes e.g.
> > intuitives start with the meaning i.e. first of all, what does this
> > mean? .. what contextual environment does this fit into? .. OK ...
> > now how do I use it?
> >
> > Sometimes the logicos version of the help manual is very frustrating
> > for me.
> >
> > We should also thank Tomasz very much for having the perspicacity to
> > grab hold of the idea and implement it ... this level of
> > responsiveness to user discussion is very rare (thanks once again
> > Tomasz).
> >
> >> My best system is a reversion to mean system so far. For this
> >> system,
> >
> > You might be missing something ... if the trend really does exist
> > then 'reversion to mean' will underperform 'following the mean' ...
> > if it doesn't exist then trading both ways will net out to being
> > equal in the long term.
> >
> > Going back to the HolyGrail ... I only have one system but I can
> > vary it in several ways ... your methods should have led you to my
> > system, or very close to it, so we must be almost talking about the
> > same thing.
> >
> > You seem to have been mesmerized a tiny bit by the dance of the
> > seven veils .. I was hinting at this in the discussion on MA and
> > habituation:
> >
> > - I don't like it that Howard refers to 'reversion to mean' and only
> > references one side .. I actually think of 'reversion to mean' as
> > crossing the mean from above.
> > - when we look at any chart, without indicators, it is meaningless
> > - we have a strong natural desire to see meaning in it .. so strong
> > we can even overlay false, or approximate meanings, and feel very
> > satisfied with our efforts
> > - MA is very potent ... it 'identifies' the trend (yes! won't be
> > long now before I am rich) and makes the trading universe
> > symmetrical ... how beautiful is that? i.e. it divides the chart
> > into above and below which has a satisfying logic to the rational
> > (with a small r) aspect of our mind (I went on to talk about the
> > powerful reinforcing effect of what feels right == habituation).
> >
> > No model is forbidden in my approach .... as long as we then go on to
> > analyze the pros and cons of the model which leads to an assessment
> > of whether we can gain an edge out of them .. any one of them .. and
> > from there we then have to decide if the edge is significant enough
> > to trade i.e. quantify it.
> >
> > So, MA has a reversion to mean coming from above and below ... how
> > close is this model to the HolyGrail? ... is there an edge to be had
> > from focussing on one direction of the reversion as compared to the
> > other?
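> >
> > One way to put that question to the data (a sketch only ... the MA
> > length is arbitrary and there is no exit logic worth keeping):
> >
> > // is one direction of the 'reversion' better than the other?
> > len = Optimize( "MA length", 20, 5, 50, 5 );
> > m   = MA( C, len );
> >
> > Buy   = Cross( C, m );   // price comes back up through the mean
> > Sell  = Cross( m, C );
> > Short = Cross( m, C );   // price comes back down through the mean
> > Cover = Cross( C, m );
> >
> > // run it once long-only and once short-only (Settings -> Positions)
> > // and compare the two sets of stats / equity curves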
> >
> > Trading with the trend always beats countertrend trading (this is a
> > better name than reversion to mean) .. as long as the trend persists.
> >
> > Of course we always pay the price when trends change.....and they
> > always do this quickly, in their own timeframe ... we pay the price
> > because we can never theoretically achieve the perfect
> > trade, let alone achieve it in real trading ... when we go into
> > production, with our prototypes, C&S detract from the perfect trade
> > even more.
> >
> >
> > Re volatility:
> >
> > - I am still learning about it ...more to do ... very important
> > (thanks to JohnBollinger who first introduced me to the subject via
> > BB's ... been hooked ever since).
> >
> >> I start with the philosophy, and work towards the optimization from
> >> there. In the beginning, it helps to do it the other way around,
> >> until you learn the relationships between parameters. Mostly it is
> >> staring at charts with indicators overlaid and asking yourself if
> >> there is anything significant about the patterns you see, then
> >> program
> >> the ideas in and see what it gives you.
> >
> > I agree.
> >
> > Just to refine the details:
> >
> > - which came first the chicken or the egg?
> > - my trading philosophy has developed in tandem with my struggle to
> > find ways to make a buck.... one seems to shape the other ... I
> > definitely think everything through in detail before moving on to
> > testing for confirmation .. if my idea isn't confirmed I go back to
> > more thinking to find out why it failed etc.
> >
> > - an analogy is that if we want to hunt the Tiger we could go into
> > our office, read all of the research and run simulations of our plans
> > to formulate a plan of action OR we could go into the field and sit
> > quietly in the hide for extreme periods of time ... I do this
> > frequently, take notes on all of the observed behaviours and then
> > when I go back to the lab I start to hypothesize, based on my notes.
> >
> > When I reference others' trading research/opinion I cross check their
> > observations and theorems with my field observations .. if it
> > doesn't stack up I reject their work, or parts of it.
> >
> >
> >
> >
> >> It also helps to manually
> >> trade your ideas some. The perspective of what can and will go wrong
> >> in the real world is quite eye opening.
> >
> > Yes, I am at the point where I might not Backtest much more, ever ..
> > it isn't imperative that I do so anymore ... an exception might be
> > if I develop a MatrixBacktester for fun .. then I would find some
> > uses for it, at least for a while.
> >
> > I have accepted that my methods, combined with using paper trading
> > as the OOS test, are perfectly adequate for me.
> >
> > As an aside I am sceptical that there is even such a thing as OOS
> > when we use historical data ... for one thing we are all too widely
> > read to be naive about any trading idea so in that sense we have all
> > walked over any historical data we care to get our hands on
> > thousands of times .. in that sense live trading is the only 'real'
> > OutOfSample data.
> >
> > Thanks for sharing .. very helpful for me.
> >
> > I hope you get something specific out of my commentary or that at
> > least it sparks off some fruitful creative thinking for you (some
> > commentators have tried to filter the qualities that make a top
> > trader/investor but of course it is very simple .. they are highly
> > creative which exhibits as a passion for investing/trading).... like
> > passionate golfers .. we will talk to other passionate traders
> > anywhere at anytime.
> >
> > I have done a fair amount of commentating on TradingPsychology ...
> > it is spread all around the forum .. this might be the last in the
> > series ... I am not certain about that but I am starting to get
> > bored with it and the forum must be getting bored with it .. anyone
> > who was going to get it should have got it by now (and I am very
> > realistic about that) ...tough luck for newcomers ... they miss out.
> >
> > Still, I never say never.
> >
> > All the best with your trading efforts..
> >
> >
> > brian.
> >
> > --- In amibroker@xxxxxxxxxxxxxxx, Dennis Brown <see3d@> wrote:
> >>
> >> Brian,
> >>
> >> First, remember, I am only trading ES, the e-mini S&P 500. That
> >> simplifies my problem immensely. I never have to worry about if the
> >> liquidity is there and if the slippage is accurate to my simulation.
> >> They are known quantities. I also only look at 1 minute bars, and do
> >> not hold overnight. My trades rarely last over an hour, so
> >> fundamental data is of little consequence. Only the immediate news
> >> has a non-technical impact, and that shows up within one minute in
> >> the
> >> chart anyway.
> >>
> >> Because my trading universe is so limited in scope, I was able to
> >> write my own back tester in AFL that only runs in indicator mode with
> >> the equity curve always showing in my chart. I overlay a lot of
> >> indicators and stats on my charts as desired to gain greater insights
> >> about what is happening.
> >>
> >> So my secret is to simplify the unknowns to the smallest universe
> >> possible and specialize on just one kind of trade. This gives me a
> >> fighting chance of actually understanding what I am doing.
> >>
> >> The recent addition of static arrays was a great help for me in
> >> saving
> >> the temp results of previous runs for equity curve comparisons. I
> >> manually control every aspect of parameter changes and selection of
> >> which curves to save. I also take a lot of screen shots of my charts
> >> for later comparison purposes.
> >>
> >> My best system is a reversion to mean system so far. For this system,
> >> volatility is good and trends are bad. I am currently working on a
> >> trend following version to see what I can do with that. In that case
> >> trends are good. I am only talking about the action over one day of
> >> course.
> >>
> >> I don't rely on mathematical straightness, because the ES does not
> >> give equal opportunity all the time. For a reversion to mean system,
> >> volatility gives more opportunity. I measure volatility and apply
> >> that value to modify different parameters on the fly.
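> >>
> >> In rough terms that is no more than scaling a setting by a volatility
> >> reading -- something like the lines below (the rules and numbers are
> >> only placeholders, not my actual parameters):
> >>
> >> // scale an entry band by current volatility instead of using a fixed value
> >> vol      = ATR( 14 );                       // one possible volatility measure
> >> baseBand = Param( "Base band", 1.0, 0.25, 4, 0.25 );
> >> band     = baseBand * vol;                  // wider band when ES is more volatile
> >>
> >> m    = MA( C, 20 );
> >> Buy  = Cross( m - band, C );  // price stretched 'band' below the mean
> >> Sell = Cross( C, m );         // exit on the return to the mean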
> >>
> >> I start with the philosophy, and work towards the optimization from
> >> there. In the beginning, it helps to do it the other way around,
> >> until you learn the relationships between parameters. Mostly it is
> >> staring at charts with indicators overlaid and asking yourself if
> >> there is anything significant about the patterns you see, then
> >> program
> >> the ideas in and see what it gives you. It also helps to manually
> >> trade your ideas some. The perspective of what can and will go wrong
> >> in the real world is quite eye opening.
> >>
> >> BR,
> >> Dennis
> >>
> >> On Jun 20, 2009, at 12:50 AM, brian_z111 wrote:
> >>
> >>> Thanks .. it's great to get some feedback about how people are
> >>> actually going about their evaluation.
> >>>
> >>> I notice that you are using visual methods, rather than a metric, to
> >>> select your top model.
> >>>
> >>> For my first optimization I didn't look at any eq curves ... this is
> >>> not the default in AB's opt?
> >>>
> >>> How do I create and plot the relative curves for all of the possible
> >>> combinations ... do you limit this to subsets to save time OR
> >>> perhaps from your later comments, you don't search all candidates
> >>> but start in a chart and then manually add plots to test the
> >>> combinations you fancy?
> >>>
> >>>> I also block out the
> >>>> best performing times to concentrate effort on the low performing
> >>>> times as part of my process.
> >>>
> >>> I agree that this is an area worth focusing on.
> >>>
> >>> I did notice that AB only optimizes the 'concatenated' data, as a
> >>> portfolio, (at least as I understand it ... couldn't find any
> >>> guidance on this in the help manual or Howard's books .. trial and
> >>> error seems to indicate the opt report is a 'one in all in' approach
> >>> so I couldn't differentiate under/over performance relative to
> >>> under/over performing stocks without a special effort on my part).
> >>>
> >>>> (last Fall was
> >>>> a great addition to the max volatility set).
> >>>
> >>> Yes. I noticed that if I run the MACrossover opt on monthly data,
> >>> for the entire 1970-present range, then it still qualifies as a
> >>> trendfollower (just).. IOW the dip of 2008, wasn't quite big enough
> >>> to take us out of an uptrend, from the long term perspective, so
> >>> bull trend following systems still work in that timeframe/timeslice.
> >>>
> >>> I didn't report on it but when I ran an MACrossover opt on the
> >>> random dataset that I produced (designed for and run in EOD data)
> >>> the spread of the optimized results is pretty tight and low (not
> >>> significant) ... using AB's total portfolio approach .. this is
> >>> reassuring since the concatenated random datasets converge on the
> >>> mean of zero returns (half the datasets underperform and half over
> >>> perform with low volatility) ... I wonder if that will change if I
> >>> introduce some volatility ... I'll have to try it.
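> >>>
> >>> When I do try it, probably all it needs is a volatility multiplier
> >>> on the step size of the generator, something like (the factor of 3
> >>> is arbitrary):
> >>>
> >>> // same pseudo-random generator, with an adjustable volatility factor
> >>> n   = 100;
> >>> vol = 3;   // 1 gives the original tight cone, larger values widen it
> >>> Buy = Sell = 0;   // required for Scan mode, as in the original
> >>> for( i = 1; i < n; i++ )
> >>> {
> >>>     VarSet( "D"+i, 100 * exp( Cum( log( 1 + vol * (Random() - 0.5)/100 ) ) ) );
> >>>     AddToComposite( VarGet( "D"+i ), "~RandomV" + i, "X", 1|2|128 );
> >>> }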
> >>>
> >>>> I overlay my equity curves on top of
> >>>> each other and look for the straightest curve relative to the market
> >>>> opportunities (volatility).
> >>>
> >>> You are not confident of the metrics that measure straightness or
> >>> you just prefer the eyeball method (I found it sort of ironic, or
> >>> something, that Howard also likes to eyeball the curves).
> >>>
> >>> Volatility could be good or bad ... say it is all over the shop ..
> >>> don't you want trendiness with volatility?
> >>>
> >>> Are you measuring or eyeballing volatility?
> >>> Do you filter, by volatility, to select the instrument to trade?
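> >>>
> >>> For what it is worth, the sort of numbers I would put next to the
> >>> eyeball test (a sketch only ... the rules just produce an equity
> >>> array, and the lookbacks are arbitrary):
> >>>
> >>> // crude 'straightness' and volatility readings
> >>> Buy  = Cross( C, MA( C, 20 ) );   // stand-in rules
> >>> Sell = Cross( MA( C, 20 ), C );
> >>> eq   = Equity();
> >>>
> >>> bars         = Cum( 1 );
> >>> straightness = Correlation( eq, bars, 250 );  // near 1 = near-straight rising curve
> >>> vol          = StDev( ROC( C, 1 ), 20 );      // simple volatility of the underlying
> >>>
> >>> Plot( eq, "equity", colorGreen, styleLine );
> >>> Plot( straightness, "straightness", colorRed, styleLine | styleOwnScale );
> >>> Plot( vol, "volatility", colorBlue, styleLine | styleOwnScale );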
> >>>
> >>>> When rare
> >>>> events hurt the performance, I zoom in on those trades and try to
> >>>> understand what about the underlying algorithm lets it happen.
> >>>> Then,
> >>>> I think about how I can make my algorithm smarter in the general
> >>>> case
> >>>> to reduce these types of problems.
> >>>
> >>> If another line of code removes a significant number of losers then
> >>> it is likely to be generic going forward?
> >>>
> >>>> I do a lot more thinking than testing and optimizing
> >>>
> >>> That was my initial reaction to my optimizing experience ... that if
> >>> we enter some parameters and send the computer off to search then
> >>> what comes back might not have a lot of meaning, in terms of a
> >>> trading philosophy, whereas if we are working at developing a
> >>> trading philosophy first, then later on opt can help develop/test
> >>> systems derived from the philosophy.
> >>>
> >>> It takes a long time, and a certain skill, to develop a trading
> >>> philosophy, but anyone can quickly learn to run an opt, with or
> >>> without applying a lot of thought to what they are doing.
> >>>
> >>> --- In amibroker@xxxxxxxxxxxxxxx, Dennis Brown <see3d@> wrote:
> >>>>
> >>>> Brian,
> >>>>
> >>>> When I optimize (and I am only working with ES), I watch the
> >>>> complete
> >>>> equity curve over more than a thousand trades under all market
> >>>> conditions of volatility and trends in both directions (last Fall
> >>>> was
> >>>> a great addition to the max volatility set). I also block out the
> >>>> best performing times to concentrate effort on the low performing
> >>>> times as part of my process. I overlay my equity curves on top of
> >>>> each other and look for the straightest curve relative to the
> >>>> market
> >>>> opportunities (volatility). In other words, the slope of the
> >>>> equity
> >>>> increases with the volatility. If an optimization step generates a
> >>>> better profit by virtue of clipping off a big drawdown, or other
> >>>> rare
> >>>> event, then it is over optimization and I reject it as just data
> >>>> mining. However, if an optimization step results in a steady,
> >>>> constantly deviating increase in outcome over all market
> >>>> conditions,
> >>>> then I accept it as a fundamentally good optimization. When rare
> >>>> events hurt the performance, I zoom in on those trades and try to
> >>>> understand what about the underlying algorithm lets it happen.
> >>>> Then,
> >>>> I think about how I can make my algorithm smarter in the general
> >>>> case
> >>>> to reduce these types of problems. I do a lot more thinking than
> >>>> testing and optimizing --which I do by hand with parameters, so I
> >>>> know
> >>>> what the relationships are intuitively after a while. I do not
> >>>> consider it cheating to have parameters that adjust themselves to
> >>>> general market conditions like high or low volatility, etc., just
> >>>> as
> >>>> long as the algorithms are very general and make logical sense
> >>>> regardless of the data.
> >>>>
> >>>> It is a slow process, but when I am done, I have an algorithm and
> >>>> settings that are robust to whatever the market throws at me. I
> >>>> don't like being fooled by randomness!
> >>>>
> >>>> BR,
> >>>> Dennis
> >>>>
> >>>> On Jun 19, 2009, at 8:10 PM, brian_z111 wrote:
> >>>>
> >>>>> OR
> >>>>>
> >>>>> ... is opt correctly flagging something about market behaviour or
> >>>>> system trading that I don't understand?
> >>>>>
> >>>>>
> >>>>> OR
> >>>>>
> >>>>>
> >>>>> ... all of the above, none of the above, a combination of some of
> >>>>> the above?
> >>>>>
> >>>>>
> >>>>>
> >>>>> Also, the MACrossover changed from TrendFollowing to MeanReversion
> >>>>> when I added 1.5 years to a 30 year lookback and not a 10 year
> >>>>> lookback ... not a 9 year lookback as per my previous post.
> >>>>>
> >>>>>
> >>>>> --- In amibroker@xxxxxxxxxxxxxxx, "brian_z111" <brian_z111@>
> >>>>> wrote:
> >>>>>>
> >>>>>> So, in my first ever attempt at optimization I am presented
> >>>>>> with a
> >>>>>> conundrum.
> >>>>>>
> >>>>>> If I opt MACrossver(C,X), and look down the list of top
> >>>>>> candidates
> >>>>>> (using the AB default objective function), I see that
> >>>>>> historically
> >>>>>> this system was both a 'trend following system' and a
> >>>>>> 'reversion to
> >>>>>> mean' system.
> >>>>>>
> >>>>>> If I could travel back in time, armed with this info, should I
> >>>>>> trade MA crosses as a 'trend follower', 'reversion to mean' or
> >>>>>> both?
> >>>>>>
> >>>>>> OR
> >>>>>>
> >>>>>> ...on the other hand am I failing to interpret the results
> >>>>>> correctly ... is there something about opt that I don't
> >>>>>> understand ... if I develop my opt skills will this help me solve
> >>>>>> this riddle?
> >>>>>>
> >>>>>> OR
> >>>>>>
> >>>>>> ... is optimization itself somehow not providing me with a clear
> >>>>>> understanding of how I should have followed the markets (for that
> >>>>>> historical period)?
> >>>>>>
> >>>>>> OR
> >>>>>>
> >>>>>> ... is it something to do with MACrossovers ... perhaps other
> >>>>>> systems are more amenable to optimization ... if so how can I
> >>>>>> filter systems in advance to save me wasting my time opting non-
> >>>>>> compliant systems?
> >>>>>>
> >>>>>> OR
> >>>>>>
> >>>>>> .... is the objective function the underlying cause of this
> >>>>>> dilemma?
> >>>>>>
> >>>>>> OR
> >>>>>>
> >>>>>> ... I have done something wrong .. failed to optimize correctly
> >>>>>> or
> >>>>>> used AB incorrectly?
> >>>>>>
> >>>>>> OR
> >>>>>>
> >>>>>>
> >>>>>> ... is it the data... is there something wrong with the Yahoo
> >>>>>> data
> >>>>>> I used?
> >>>>>>
> >>>>>> --- In amibroker@xxxxxxxxxxxxxxx, "brian_z111" <brian_z111@>
> >>>>>> wrote:
> >>>>>>>
> >>>>>>> What I am headlining here is that looking back, at 9 years of
> >>>>>>> data,
> >>>>>>> the best MA system was 'trendfollowing' then, only 1.5 years
> >>>>>>> later
> >>>>>>> the best MA system was turned completely upside down into a
> >>>>>>> 'reversion to mean' system ??????
> >>>>>>>
> >>>>>>> Say, what?
> >>>>>>>
> >>>>>>> I don't have an explanation for any of this but it is too early
> >>>>>>> anyway ... I need to make a lot more observations before it is
> >>>>>>> time to hypothesize.
> >>>>>>>
> >>>>>>> --- In amibroker@xxxxxxxxxxxxxxx, "brian_z111" <brian_z111@>
> >>>>>>> wrote:
> >>>>>>>>
> >>>>>>>> This is the first time I have optimized or used any kind of
> >>>>>>>> synthetic data in AB ... so far I haven't used any
> >>>>>>>> sophisticated
> >>>>>>>> methods to produce synthetic data either.
> >>>>>>>>
> >>>>>>>> I have only done a small amount of testing but I immediately
> >>>>>>>> found three anomalies that might be worth further
> >>>>>>>> investigation.
> >>>>>>>> I have already reported back on two:
> >>>>>>>>
> >>>>>>>> - why does an apparently worthless 'system' (plucked out of
> >>>>>>>> thin
> >>>>>>>> air unless my subconscious mind intervened) outperform on
> >>>>>>>> approx
> >>>>>>>> 6% of stocks when those stocks are assumed to be correlated
> >>>>>>>> to a
> >>>>>>>> fair extent ... chance? OR some property of the data that
> >>>>>>>> correlation does not measure ... what property of the data
> >>>>>>>> would
> >>>>>>>> favour that randomly selected 'system'?
> >>>>>>>>
> >>>>>>>> Note. if anything I expected the system to test the assumption
> >>>>>>>> that the MA is the trend and I expected the system to 'fail'.
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> - why does the same system then outperform approx 50% of the
> >>>>>>>> time
> >>>>>>>> when tested over randomly generated price series ... is it a
> >>>>>>>> coincidence that the outperformance ratio, on random data, is
> >>>>>>>> close to the expected for randomness? and why didn't the bull
> >>>>>>>> 'system' outperform only on the random price series that
> >>>>>>>> outperformed?
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> The third anomaly is:
> >>>>>>>>
> >>>>>>>> - I optimized the following on some Yahoo ^DJI data ... approx
> >>>>>>>> 40 years of
> >>>>>>>> EOD ... 9951 quotes ... 2/01/1970 until June 4th 2009.
> >>>>>>>> Default objective (fitness) function = CAR/MDD.
> >>>>>>>>
> >>>>>>>> fast = Optimize( "MA Fast", 1, 1, 30, 1 );
> >>>>>>>> slow = Optimize("MA Slow", 1, 1, 30, 1 );
> >>>>>>>>
> >>>>>>>> Buy = Cross(MA(C,fast),MA(C,slow));
> >>>>>>>> Sell = Cross(MA(C,slow),MA(C,fast));
> >>>>>>>>
> >>>>>>>> - when I optimized on the total range I found that the top
> >>>>>>>> values were inverted (as per Howard's examples in this forum
> >>>>>>>> and
> >>>>>>>> his books) but when I left out the 2008/09 extreme market
> >>>>>>>> conditions I found this did not hold.
> >>>>>>>>
> >>>>>>>> Why does such a relatively small change in the test range make
> >>>>>>>> such a radical difference in the outcomes?
> >>>>>>>>
> >>>>>>>> Here are some of the reported metrics from AB .. notice that
> >>>>>>>> they
> >>>>>>>> are similar in some cases and markedly dissimilar in others.
> >>>>>>>>
> >>>>>>>> I am not sure if that leads to a question but it certainly gets
> >>>>>>>> my attention considering that I am in the business of
> >>>>>>>> engineering
> >>>>>>>> reward/risk.
> >>>>>>>>
> >>>>>>>> Note that I am using ProfitFactor because it is typical in the
> >>>>>>>> industry but it has some question marks over whether it is the
> >>>>>>>> best CoreMetric to use (I am investigating PowerFactor and
> >>>>>>>> asymmetricalPayoffRatio which might be more apt ... I hope to
> >>>>>>>> post more on these metrics later).
> >>>>>>>>
> >>>>>>>> Opt1:
> >>>>>>>>
> >>>>>>>> using all data
> >>>>>>>>
> >>>>>>>> top model = CAR/MDD == 0.25 AND periods == fast 10, slow 7;
> >>>>>>>>
> >>>>>>>> NETT PROFIT 1749%
> >>>>>>>> Exposure 44.9;
> >>>>>>>> CAR 7.68;
> >>>>>>>> RAR 17.07
> >>>>>>>> MAXDD 31.57
> >>>>>>>> RECOVERYFACTOR 2.64
> >>>>>>>> PF 1.62 (WIN 68% * PR 0.75)
> >>>>>>>> #TRADES 588
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> Opt2:
> >>>>>>>>
> >>>>>>>> using data range from 2/01/1970 to 31/12/2007 (that's Dec for
> >>>>>>>> the benefit of timezoners).
> >>>>>>>>
> >>>>>>>> top model = CAR/MDD == 0.42 AND periods == fast 1, slow 6;
> >>>>>>>>
> >>>>>>>> NETT PROFIT 1921%
> >>>>>>>> Exposure 54.51;
> >>>>>>>> CAR 8.23;
> >>>>>>>> RAR 15.09
> >>>>>>>> MAXDD 21.67
> >>>>>>>> RECOVERYFACTOR 5.07
> >>>>>>>> PF 1.35 (WIN 40% * PR 2.03)
> >>>>>>>> #TRADES 1049
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> I hope I reported the metrics correctly but anyone can
> >>>>>>>> replicate
> >>>>>>>> my tests and report otherwise.
> >>>>>>>>
> >>>>>>>> I think it also demonstrates that if PoF (PowerFactor) is a
> >>>>>>>> better CoreMetric than ProfitFactor it will need to be
> >>>>>>>> standardized on a returns/time basis (choose your time period =
> >>>>>>>> the basetimeframe you trade ... PoF is related to GeoMean per
> >>>>>>>> bar?)
> >>>>>>>>
> >>>>>>>> --- In amibroker@xxxxxxxxxxxxxxx, "brian_z111" <brian_z111@>
> >>>>>>>> wrote:
> >>>>>>>>>
> >>>>>>>>> Following recent discussions on benchmarking and using rule
> >>>>>>>>> based systems to engineer returns to meet 'clients' profiles
> >>>>>>>>> i.e. Samantha's MA(C,10) example, I did some follow-up R&D with
> >>>>>>>>> the intent of expanding the examination a little further via a
> >>>>>>>>> zboard post.
> >>>>>>>>>
> >>>>>>>>> I may, or may not, get around to that so in the meantime I
> >>>>>>>>> decided I would share a couple of things while they are still
> >>>>>>>>> topical.
> >>>>>>>>>
> >>>>>>>>> I made up some quick and dirty randomly generated eq curves so
> >>>>>>>>> that I could optimise MA(C,10) on them (out of curiosity).
> >>>>>>>>>
> >>>>>>>>> Also, out of curiosity, I decided to see how the example
> >>>>>>>>> signal/
> >>>>>>>>> filter code that I made up, as the study piece for Yofas topic
> >>>>>>>>> on benchmarking, would actually perform.
> >>>>>>>>>
> >>>>>>>>> Buy = Ref(ROC(MA(C,1),1),-1) < 0 AND ROC(MA(C,1),1) > 0 AND
> >>>>>>>>> ROC(MA(C,10),1) > 0;
> >>>>>>>>> Sell = Cross(MA(C,10),C); //no thought went into this exit and I
> >>>>>>>>> //haven't tried any optimization of the entry or the exit
> >>>>>>>>>
> >>>>>>>>> By chance I noticed that it outperformed on one or two of the
> >>>>>>>>> constituents of the ^DJI (Yahoo data ... 2005 to 2009) and to
> >>>>>>>>> the naked eye the constituents all seem to be correlated to a
> >>>>>>>>> fair extent over that time range.
> >>>>>>>>>
> >>>>>>>>> Also, to the naked eye, it outperforms on randomly generated
> >>>>>>>>> stock prices around 50% of the time and the outperformance
> >>>>>>>>> doesn't appear to be correlated to the underlying (I haven't
> >>>>>>>>> attempted to find an explanation for this).
> >>>>>>>>>
> >>>>>>>>> Here is the code I used to make up some randomly generated
> >>>>>>>>> 'stocks'.
> >>>>>>>>>
> >>>>>>>>> As we would expect it produces, say, 100 price series with a
> >>>>>>>>> concatenated mean of around zero (W/L = 1 and PayoffRatio ==
> >>>>>>>>> 1)
> >>>>>>>>> etc.
> >>>>>>>>> When plotted at the same time ... individual price series are
> >>>>>>>>> dispersed around the mean in a 'probability cone' ... in this
> >>>>>>>>> case it is a relatively tight cone because the method doesn't
> >>>>>>>>> introduce a lot of volatility to the series.
> >>>>>>>>>
> >>>>>>>>> /*P_RandomEquity*/
> >>>>>>>>>
> >>>>>>>>> //Use as a Scan to create PseudoRandom Equity curves
> >>>>>>>>> //Current symbol, All quotations in AA, select basetimeframe in AA Settings
> >>>>>>>>> //It will also create the curves if used as an indicator (add the appropriate flag to ATC)
> >>>>>>>>> // but this is NOT recommended as it will recalculate them on every refresh.
> >>>>>>>>> //Indicator mode is good for viewing recalculated curves (click in whitespace)
> >>>>>>>>> //CommentOut the Scan code before using the indicator code.
> >>>>>>>>> //Don't use a very large N or it will freeze up indicator scrolling etc
> >>>>>>>>>
> >>>>>>>>> n = 100; //manually input desired number - used in Scan AND Indicator mode
> >>>>>>>>>
> >>>>>>>>> /// SCAN ///////////////////////////////////////////////////////////////////
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> Buy=Sell=0;
> >>>>>>>>>
> >>>>>>>>> for( i = 1; i < n; i++ )
> >>>>>>>>>
> >>>>>>>>> {
> >>>>>>>>>
> >>>>>>>>> VarSet( "D"+i, 100 * exp( Cum( log( 1 + (Random() - 0.5)/100 ) ) ) );
> >>>>>>>>> AddToComposite(VarGet( "D"+i ),"~Random" + i,"X",1|2|128);
> >>>>>>>>> //Plot( VarGet( "D"+i ), "D"+i, 1,1 );
> >>>>>>>>> //PlotForeign("~Random" + i,"Random" + i,1,1);
> >>>>>>>>> }
> >>>>>>>>>
> >>>>>>>>> /*
> >>>>>>>>> ////PLOT/////////////////////////////////////////////////////
> >>>>>>>>>
> >>>>>>>>> //use the same number setting as for the Scan
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> for( i = 1; i < n; i++ )
> >>>>>>>>>
> >>>>>>>>> {
> >>>>>>>>>
> >>>>>>>>> PlotForeign("~Random" + i,"Random" + i,1,1);
> >>>>>>>>>
> >>>>>>>>> }
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> //// OPTIMIZE ///////////////////////////////////////////////////////////
> >>>>>>>>>
> >>>>>>>>> //use the filter to run on Group253 OR add ~Random + i PseudoTickers to a Watchlist and define by AA filter
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> //fast = Optimize( "MA Fast", 1, 1, 10, 1 );
> >>>>>>>>> //slow = Optimize("MA Slow", 4, 4, 20, 1 );
> >>>>>>>>>
> >>>>>>>>> //PositionSize = -100/P;
> >>>>>>>>> //Buy = Cross(MA(C,fast),MA(C,slow));
> >>>>>>>>> //Sell = Cross(MA(C,slow),MA(C,fast));
> >>>>>>>>>
> >>>>>>>>> //Short = Sell;
> >>>>>>>>> //Cover = Buy;
> >>>>>>>>> */
> >>>>>>>>>
> >>>>>>>>> I also stumbled on this, which seems to have some relevance:
> >>>>>>>>>
> >>>>>>>>> http://www.scribd.com/doc/6737301/Trading-eBookCan-Technical-Analysis-Still-Beat-Random-Systems
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> It contains a link to a site that has a free download of some
> >>>>>>>>> RNG produced datasets.
> >>>>>>>>>
> >>>>>>>>> There hasn't been much discussion on using synthetic data in
> >>>>>>>>> the
> >>>>>>>>> forum ... Patrick recommended it for testing? OR
> >>>>>>>>> benchmarking? ... Fred is against using it ("If we knew enough
> >>>>>>>>> about the characteristics of the data, in the first place,
> >>>>>>>>> to be
> >>>>>>>>> able to create synthetic data then we would know enough to
> >>>>>>>>> design trading systems to exploit the data's profile
> >>>>>>>>> anyway", OR
> >>>>>>>>> something like that).
> >>>>>>>>>
> >>>>>>>>> I was titillated enough by my first excursion into
> >>>>>>>>> benchmarking
> >>>>>>>>> with synthetic data to bring me back for some more.
> >>>>>>>>>
> >>>>>>>>
> >>>>>>>
> >>>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>
> >
> >
> >
> >
>