>
> interesting. Would you care to elaborate a little on how you quantify this
> second process?
well i guess i could write a few lines here.
suppose you want to predict the volatility of a price time series.
the stylized facts that you base your model on are:
- it's mean-reverting
- its log is gaussian or near gaussian, or stable; whatever, but stable.
- it's persistent, serially correlated, clustered
- it's usually positively correlated with volume
- it scales in time, usually following either a square-root-of-time law or an
exponential law. ( a quick check of a couple of these is sketched below )
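for instance, here is a minimal python sketch of how you could check the persistence
and the time scaling from data. the fake prices and the |r| proxy are illustrative
assumptions, not a prescribed method; plug in your own series and proxy:

    import numpy as np

    def autocorr(x, lag):
        # sample autocorrelation of a series at the given lag
        x = x - x.mean()
        return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

    # fake prices just so the sketch runs; substitute your own closes
    prices = np.cumprod(1.0 + 0.01 * np.random.randn(5000))
    lp = np.log(prices)
    r = np.diff(lp)                  # log returns
    proxy = np.abs(r)                # crude volatility proxy

    # persistence / clustering: |r| stays autocorrelated at long lags
    print([round(autocorr(proxy, k), 3) for k in (1, 5, 20)])

    # time scaling: std of k-bar returns vs k; a slope near 0.5 on a
    # log-log plot is the square-root-of-time law
    for k in (1, 4, 16, 64):
        print(k, (lp[k:] - lp[:-k]).std())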
that's all you have to go by, and often there is very little explained as far as
the fundamental nature of volatility or risk goes. why does volatility go up and
down, and how do you explain the persistence, mean reversion, scaling laws
and volume correlation? if the model does not explain the above but just
takes those as facts, then most likely the model won't perform, since you will
miss something in there...
to explain it all you have to start with the fundamental nature of financial markets.
the market is an information processing "machine" in which the true value of the
underlying traded instrument is found based on incoming information, and profits and
losses are allowed to be made as a payoff for performing that information processing
function. look at the market as a box: on its input you have new information and on
its output you have new value. moreover, there is feedback, since the new value is
itself part of the new information input.
the types of info that you have on the input are:
- fundamental information: news, rumors, facts, you name it.
- technical ( feedback ) information: price action itself, bandwagon
expectations, ie momentum, ie trend, etc...
now, information comes in cycles, waves, and there are two basic cycles
that correspond to up and down moves in price.
for simplicity: the current info cycle and a new contradicting or competing cycle.
example: you have a major fed announcement that creates an information cycle that
pushes prices higher. after the initial factoring of that information, a new counter
cycle develops based on a different interpretation of the same report, and prices
recede. next a bandwagon expectation develops where everyone piles onto that second
reaction; since we all observe the price action, every new down tick represents new
information for us. next, after the bandwagon pile-up cycle ends ( information
factored ), yet another new information cycle develops where people get a clue that
the move might be over, and the intraday profit-taking move sets in, which is itself
new information, and on and on and on...
so you can see the market outputs price value and creates new information that
begets new information, etc...
once we know the basics we can link volatility to information, or the lack thereof.
information in general can be measured in 1. content, 2. units, 3. weights.
there might be several units of information with different weights, based on
different content, that the market factors in at any given moment.
so information factoring is a cycle which has duration and magnitude and defines
direction, which is basically realized as a price move. those info cycles overlap,
counteract, resonate, etc...
sometimes there is little new information for the market to take in and the market
kinda dies down; sometimes there is lots of new info to factor and the market
explodes, and once the info is factored it fades down...
look at an example of a new information cycle: the market open.
the open presents an ideal example of new information that piled up
from the previous day, overnight and premarket, and is ready to be factored in
straight from the open... trading interest increases, order sizes increase...
as a result the common observation is that intraday volatility is highest
at the open, and the same goes for transaction volume...
once the market processes the info, an hour or hour and a half after the open,
the factoring is done, the new value is found, and volatility dies down towards
lunch, where typically there is little new information to process, and then picks
up into the close, where information content again grows and those who did
not participate in the morning factoring do so on the close...
( a quick way to measure that intraday pattern is sketched below )
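a quick python sketch of how you could measure that intraday pattern; the dataframe
name, the datetime index and the 'close' column are my illustrative assumptions:

    import numpy as np
    import pandas as pd

    def intraday_vol_profile(bars: pd.DataFrame) -> pd.Series:
        # average absolute log return per time-of-day bucket; expect the
        # open highest, lunch lowest and a pickup into the close
        ret = np.log(bars["close"]).diff().abs()
        return ret.groupby(bars.index.time).mean()

    # usage, given a dataframe of intraday bars with a DatetimeIndex:
    # print(intraday_vol_profile(bars))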
thus volatility is directly proportional to the current new information intake.
the more information coming into the market, the higher the volatility goes;
once the information is factored and no new info is coming in, volatility dies
down, fades... the best way to visualize the volatility increase and decrease
cycle is by drawing a horizontal figure eight: 8.
half of the eight ( one zig-zag, not the circle part ) is the increase cycle and
the other half is the decrease. see gif.
the point of symmetry of the 8, which is in the middle, corresponds to the
mean volatility. the rightmost point corresponds to the highest outlier and the
leftmost to the lowest volatility value...
immediately you can see that to predict volatility properly you not only need to
know which part of the 8 you are in, below or above the mean, but also
what cycle you are in, increase or decrease...
you can be below the mean, but if you assume a decrease cycle ( no new
information, info fading ) whereas it's actually an increase cycle ( new
information ), you will be wrong in your forecast.
math wise, you need at least two terms in your volatility model:
one to take care of the mean reversion and the other to take care of
the increase/decrease cycle. ( since log volatility is assumed to be normal there
is symmetry between the 4 parts of the 8, ie below mean inc/dec and above mean
inc/dec )
take a look at the garch regression model ( i substituted sigma for V and removed
the squares for simplicity ):
V(n) = c + a*R(n-1) + b*V(n-1)
can you identify whether there are mean reversion terms and cycle terms in there,
which ones they are, and most importantly whether they are proper?
well, c + a*R(n-1) represents the mean reversion term and b*V(n-1) is the attempt
at a cycle representation. in a typical garch model the mean return is assumed to
be 0 and c is very close to zero, right?
so c + a*R(n-1) typically comes from c + a*( R(n-1) - m ), where m = the mean
return, approximately zero... so this part is trying to take care of the
"reversion" or fading cycle. next is b*V(n-1), which is an attempt to identify the
increase/decrease cycle, where b is typically large compared to a, representing
the persistency factor. V(n-1) in this case is the volatility estimate and R(n-1)
is the innovation.
if we analyze the model closer we can see that there is symmetry in the parameters
for the volatility increase and decrease cycles; this is a "dumb" fading model,
always anticipating mean reversion... since a + b < 1.
hence the observation that garch only works well
predicting in the fading cycle... ie garch can't predict volatility increases well
but does well on the volatility decay side... sure, it is hard to predict new
information, or to predict the end of the current info cycle and assume fading
has started... you kinda understand what i mean?
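for reference, a minimal python sketch of the standard garch(1,1) recursion that
the simplified equation above stands for; the parameter values are illustrative,
not estimated from anything:

    import numpy as np

    def garch11_filter(returns, omega=1e-6, alpha=0.08, beta=0.90):
        # variance recursion: var(n) = omega + alpha*r(n-1)^2 + beta*var(n-1)
        # alpha + beta < 1 is exactly the always-fading behavior described
        # above: with no new shocks the variance decays toward the long-run
        # level omega / ( 1 - alpha - beta )
        var = np.empty(len(returns))
        var[0] = returns.var()
        for n in range(1, len(returns)):
            var[n] = omega + alpha * returns[n - 1] ** 2 + beta * var[n - 1]
        return np.sqrt(var)  # conditional volatility path

    r = 0.01 * np.random.randn(1000)  # fake returns so the sketch runs
    vol = garch11_filter(r)
    print(vol[-3:], (1e-6 / (1 - 0.08 - 0.90)) ** 0.5)  # vs long-run level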
so, where i am going with this is that a true volatility model should:
- attempt to classify the volatility cycle as either increase or decrease based on
whatever technique: ma, cycle id, momentum, etc... something that tells the model
that you are most likely in a volatility increase OR decrease cycle.
- then, on top of that, the model must identify where you are relative to the mean
volatility, under or over... this is important because if you are well over,
you are likely to revert back, or are reverting back already; the probability of
reversion is greater at the extremes of the distribution... ( not in the roulette
case )
the best model would then be two ( or four ) factor models that are enabled
depending on whether you are in the increase or the decrease volatility cycle.
then you can regress/estimate the two models separately on mean reversion and
increase/decrease... the volatility distribution
must then be conditioned into two separate distributions, either under/over the
mean or increase/decrease, for ease of parm estimation...
so you have two or four models with 3 parms each... you then enable model 1 or 2
( 3, 4 ) based on which cycle, increase or decrease, you are in...
( as you might infer, the kink is in the cycle id technique )
each model has the same number of terms and parms and is estimated the same way,
but from a different conditional probability density derived from the joint
volatility density.
the general model should then be:
1. V(n) = a1 + b1*X(n-1) + c1*Y(n-1) + d1*e(t)
2. V(n) = a2 + b2*X(n-1) + c2*Y(n-1) + d2*e(t)
with 3 parms to estimate, b, c, d ( a could be estimated as the mean of the
conditional distribution of V(n) ), where:
X(n-1) - reversion explanatory var
Y(n-1) - cycle explanatory var
e(t) - noise
( a sketch of this two-regime estimation follows below )
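a minimal python sketch of that two-model idea; the proxy, the crude 1-bar cycle
id and the choice of explanatory vars are my illustrative assumptions, not the
actual model:

    import numpy as np

    def fit_two_regime(V):
        # V: a volatility proxy series ( e.g. log range per bar )
        x_rev = V[:-1] - V.mean()              # X(n-1): distance from the mean
        y_cyc = np.diff(V, prepend=V[0])[:-1]  # Y(n-1): last change, cycle var
        target = V[1:]
        up = y_cyc > 0                         # crude 1-bar cycle id
        models = {}
        for name, mask in (("increase", up), ("decrease", ~up)):
            # ordinary least squares on the conditional subsample only
            X = np.column_stack([np.ones(mask.sum()), x_rev[mask], y_cyc[mask]])
            models[name], *_ = np.linalg.lstsq(X, target[mask], rcond=None)
        return models  # forecast with the model the cycle id enables

    V = np.abs(np.random.randn(500)) * 0.01 + 0.005  # fake proxy for the demo
    print(fit_two_regime(V))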
summary:
- garch is not so good and only catches half of the action, the fading cycle.
- current computing power can handle the above; i think it's doable.
- as computing power improves, model complexity will increase, and
accuracy should increase too...
- some papers now confirm that two factor models ( they unknowingly point
to the increase and decrease cycles ) are advantageous, ie a garch(2,2) will do
better than a (1,1), but not by a whole lot, since the parms are estimated from
the same density and the 2,2 model is not really structured as a two factor model.
that aside, i am currently finishing up a one factor model where
i just added the cycle term... since for my log range proxy the cycles are
symmetrical, i think i can get away with a one factor model with 2 explanatory
vars, one for the cycle and one for reversion. for the cycle id proxy i took just
the 1 bar volatility proxy momentum ( the simplest ), ie if on the last bar
volatility went up i assume it's an up cycle...
simple, but better cycle id techniques are guaranteed to improve the accuracy.
it is possible to use kalman or arima or kernel regression or fft or wavelets or
mesa, or any technique that will tell you whether current volatility is in an up
or down cycle. ( one slightly better cycle id is sketched below )
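for example, one slightly better cycle id than 1-bar momentum: compare a fast and
a slow moving average of the proxy. the lengths here are illustrative assumptions:

    import numpy as np

    def ma_cycle_id(V, fast=3, slow=10):
        # True where the fast average of the vol proxy is above the slow one,
        # ie a likely increase cycle; False suggests a decrease cycle
        f = np.convolve(V, np.ones(fast) / fast, mode="valid")
        s = np.convolve(V, np.ones(slow) / slow, mode="valid")
        m = min(len(f), len(s))  # align both to the most recent bars
        return f[-m:] > s[-m:]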
so LTCM, not having the cycle id term in their model :-), could not identify the
volatility increase cycle, and as the crisis was spreading all over the world they
were betting on reversion where they should have been waiting for the reversion to
start... it's like catching the bottom/top ( the falling knife ), except catching
the top in volatility, and they did not catch it in time... they blew up as the
further increase in volatility took em out of the trading business. dynamic
hedging fell apart as the model predicted reversion whereas reality reflected the
snowball. if they had properly id-ed even the smallest increase cycle they would
have waited for the top in volatility and then taken the proper positions...
funny that catching tops and bottoms is a typical newbie trader error, we all
know it, right?
pretty neat story, and there is math to learn from it for sure.
do not catch tops and bottoms; instead wait for the move to get underway, then
jump on it. likewise do not anticipate a new volatility cycle; wait for it to
start first, then act in continuation.
but i don't blame em, prediction is a hard business... measurement is easier.
in roulette there is no volatility, no cycles, no reversion, just probabilities,
expectations and runs.
bilo.
ps. the end result of that rocket science risk model is simply an adaptive
entry/exit technique for a trading system, where the risk on a given trade
is computed adaptively based on the above considerations. the complexity of
the model is on the level of garch... but structurally it is much better than
garch. the dll might be available for a fee, as this model is half the trading
system itself.
it's funny that the result sounds simple but the road to it is so complex.
math pays. math's the key ( in systematic trading ).
see gif.
and finally, no matter what the thread is, you always end up discussing what YOU
are interested in. ain't it so?
>
> As regards the mean reversion of volatility, the good news is that for some
> option strategies such as ratio backspreads you really don't have to worry
> much about its reliability because of the inherent loss limitation of these
> positions. The great mistake of LTCM was to choose strategies whose *sole*
> prerequisite was mean reversion. They finally did learn that lesson, if at a
> price.
>
> Michael Suesserott
>
>
> -----Original Message-----
> From: Bilo Selhi [mailto:biloselhi@xxxxxxxxxxx]
> Sent: Tuesday, December 04, 2001 19:20
> To: Omega List
> Subject: Re: puzzling probability and roulette a la Mark Brown
>
>
> might i add ( since i'm currently working on a volatility model )
> that mean reversion isn't the only thing: volatility, in addition to mean
> reversion, goes through increase/decrease cycles governed mainly by an
> information factoring/fading process. it is actually two overlapping
> processes, not just one. ltcm blew up because they did not factor in the
> second process... and relied on mean reversion only.
> the increase cycle is especially dangerous since volatility can snowball to
> unexpectedly high values as information intake snowballs and ruin your mean
> reversion model right where you expect it to mean revert...
>
> whereas a coin toss or roulette has no increase or decrease info cycles with
> feedback, simply because there is no information intake ( except the spin
> itself ), unlike in trading; no memory, thus governed by a random process only.
> trading does not equal gambling.
> however even in roulette you might have an unexpected random 10 reds in a row,
> as you will likely be "mean reverting" after 3-5 to bet on black, right? :-)
> the difference is the nature of the process... in trading it's random +
> information factoring and feedback; in a game it's a pure random event. and
> since in trading it's not always random, prediction is possible.
>
> the thing in trading is that new information begets new information begets
> new information... ie there are information cycles. a turning point in
> trading, for instance, is where one information cycle is overridden by a new
> counter one. when volatility snowballs, it is the result of the rolling
> information snowball. compare that to a common information shock, such as a
> news event that creates an initial jump in volatility, aka a price shock, which
> then fades and reverts to the mean. selling volatility into the price shock
> will work, but not during the snowball. meaning, defining the cycle ( increase
> or decrease ) you are in is as important as defining whether you are under or
> over the mean, and by how much.
> in roulette there are no snowball cycles, no memory; in blackjack you might
> have a weak info cycle in there, based on the sequence of cards, and thus you
> can have a better edge if you can determine the cycle by counting.
>
> in roulette, over about 100 wheel rotations you should expect about 3 runs
> of 5 reds or blacks in a row... etc. after that the probabilities of black or
> red are still 50% ( ignoring the zeros ).
>
> bilo.
>
>
>
Attachment: volatility.gif