> Hi Nicolas. Thanks, and that's more or less where I am. I have an
> article here from Chaitin based on the compression criterion for
> randomness. He claims that financial time series are more or less ALL
> random: 'Only one series in 1,000 can be compressed'. Meaning that in
> practice you can find only 1 non-random time series in 1,000. I just
> cannot believe this, because if that's true, what about technical
> analysis? So I am missing something ...

You make a good point: if this is true, then TA would be pointless. I
would say it all depends on the timescale; at small timescales (1 min -
5 min) there are for sure recurrent patterns. The whole problem is to
find the ones that are tradable despite slippage and delays : )
> Anyway, you are saying that you did some testing on the NAS100
> stocks with the Lempel-Ziv algorithm (?). Do you have any code for me,
> or a URL where I can find it?

Unfortunately, I cannot share the code (it is part of a C++ library). You
may implement one of the algorithms described in
http://www.snl-e.salk.edu/publications/Kennel-2005.pdf
The one on page 1571 (Kontoyiannis et al.) is known to be good.
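
A rough Python sketch of one common form of that match-length estimator
(my own reading of it, not the paper's code, and the exact window and
normalization conventions on page 1571 may differ slightly): the longest
match of the "future" starting at position i inside the past x[:i] grows
roughly like log2(i) / H, so averaging Lambda_i / log2(i) and inverting
gives the entropy rate in bits per symbol. Function names are just mine.

    import math

    def lz_entropy_rate(s):
        # s: a symbol string, e.g. 'ududdu...' for up/down returns.
        # Lambda_i = 1 + length of the longest prefix of s[i:] that also
        # occurs as a contiguous substring of the past s[:i].
        # Estimate: H ~ (n - 1) / sum_i( Lambda_i / log2(i + 1) )  [bits/symbol]
        n = len(s)
        total = 0.0
        for i in range(1, n):
            l = 0
            while i + l < n and s[i:i + l + 1] in s[:i]:
                l += 1
            total += (l + 1) / math.log2(i + 1)
        return (n - 1) / total

    def binarize(returns):
        # crude symbolization: 'u' for a non-negative return, 'd' otherwise
        return ''.join('u' if r >= 0 else 'd' for r in returns)

For a fair coin the estimate should come out near 1 bit per symbol;
anything clearly below that on your symbolized returns hints at
compressible structure. The naive substring search is quadratic, so keep
the series to a few thousand symbols (or swap in a suffix-tree match).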

Alternatively, you may export your time series to a txt file and
compress it using gzip/pkzip. Then you do the same for a large number of
random time series (1,000) and compute the ratio of the compressed file
sizes (your_time_series / average_random_time_series). This will give
you a rough idea of how far you are from randomness ... but never
explain that to a physicist : ) (see for instance
http://cscs.umich.edu/~crshalizi/notebooks/cep-gzip.html )
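
For the gzip variant, something as simple as this does the job (again
only a sketch: I shuffle the original series to build the "random"
comparison set, which keeps the same marginal distribution, and the way
you format the numbers into text affects the result, which is part of
why the physicists complain):

    import gzip
    import random

    def gzipped_size(values):
        # write the series as one value per line, then gzip it
        text = '\n'.join(f'{v:.4f}' for v in values).encode()
        return len(gzip.compress(text, 9))

    def randomness_ratio(series, n_random=1000):
        # ratio near 1      -> your series compresses no better than its shuffles
        # ratio well below 1 -> there is structure gzip can exploit
        original = gzipped_size(series)
        sizes = []
        for _ in range(n_random):
            shuffled = list(series)
            random.shuffle(shuffled)
            sizes.append(gzipped_size(shuffled))
        return original / (sum(sizes) / len(sizes))

Call it as randomness_ratio(daily_log_returns); with only a few hundred
points the gzip header overhead dominates, so do not read too much into
small deviations from 1.
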
Regards,
Nicolas