Keith
After you've divided your data into chunks, you can import them all
into a single database or into multiple databases, as you wish. If
you choose to import them all into a single db (and there are
advantages to that), the performance improvement will be
significant, because the size of the array loaded for processing
will be a fraction of what it used to be. Of course QuickAFL
minimises the number of bars loaded, but somehow it is still faster
to work with a shorter array than to load a smaller number of bars
from a very long array. I think it will actually be faster to load
many shortish stretches of data from the same DB than to load and
unload databases as you traverse from one date range to the next.
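For concreteness, the chunk-splitting step might look something
like this in Python (a rough sketch only -- the row layout, the
ticker, and the range boundaries are all hypothetical, not from
anyone's actual export script):

```python
from datetime import date

# Hypothetical date ranges; one chunk (and ticker suffix) per range.
RANGES = [
    ("2008", date(2008, 1, 1), date(2008, 12, 31)),
    ("2009", date(2009, 1, 1), date(2009, 12, 31)),
]

def split_by_range(ticker, rows):
    """Partition (date, open, high, low, close, volume) rows into
    per-range chunks, renaming the ticker with a suffix so the
    same symbol from different ranges doesn't collide on import."""
    chunks = {label: [] for label, _, _ in RANGES}
    for row in rows:
        d = date.fromisoformat(row[0])
        for label, start, end in RANGES:
            if start <= d <= end:
                chunks[label].append([f"{ticker}_{label}"] + row)
                break
    return chunks

rows = [
    ["2008-03-14", "10", "11", "9", "10.5", "1000"],
    ["2009-06-01", "12", "13", "11", "12.5", "2000"],
]
chunks = split_by_range("MSFT", rows)
```

Writing each chunk out to its own ASCII file for AmiBroker's
importer is then just a csv.writer loop per range.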
--- In amibroker@xxxxxxxxxps.com,
Keith McCombs <kmccombs@xx.> wrote:
>
> Paul --
> Sorry, somehow I missed your response over a month ago. Thank you
> for pointing out that I may be able to do what I need in AFL. You
> are probably correct. And it has been quite a while since I last
> programmed in C++, and I was not looking forward to the
> re-learning process.
>
> I'm not quite sure I understand your suggestion,
> "you can also add imported tickers to a certain watchlist in the
> order of the date ranges.
> That's it.
> When you backtest, you just move from one watchlist to the next
> as you move forward onto the next range."
>
> To do that, wouldn't the data for all the watchlists have to be
> in the same large database? In which case, I would be right back
> to the slower backtest and optimization runs.
>
> Charles --
> Because of your recent email, I found Paul's posting below.
> I will also respond to your question shortly.
>
> Thanks to both of you.
> -- Keith
>
> On 1/21/2010 08:40, paultsho wrote:
> >
> > Hello Keith
> > Partitioning data into smaller periods consists of two steps.
> > 1. Export data from ... to a certain period. There are some
> > AFL scripts lying around that will do the exporting to an
> > ASCII file. All you need is an extra if statement to export
> > data only if it is within your desired date range, and to
> > change the ticker name to one that will differentiate between
> > the date ranges. You might also do some integrity checking
> > while you're exporting, to make sure all exported bars are in
> > chronological order (sometimes corrupted databases aren't).
> > 2. Import the data into a new database. This is best
> > accomplished with a script. You can also add the imported
> > tickers to a certain watchlist in the order of the date ranges.
> > That's it.
> > When you backtest, you just move from one watchlist to the
> > next as you move forward onto the next range. This can also be
> > assisted with a script.
> > I hope that helps.
> > /Paul.
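The chronological-order integrity check Paul mentions is only a
few lines in any language. A rough Python sketch, with bars as
hypothetical (timestamp, close) tuples rather than any particular
export format:

```python
def bars_in_order(bars):
    """Return True if every bar's timestamp is strictly after the
    previous one, i.e. the exported data is in chronological order
    with no duplicates."""
    return all(a[0] < b[0] for a, b in zip(bars, bars[1:]))

good = [(1, 10.0), (2, 10.5), (3, 10.2)]
bad = [(1, 10.0), (3, 10.5), (2, 10.2)]  # out of order
```

Running this over each chunk before import catches the corrupted,
out-of-order bars Paul warns about.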
> >
> > --- In amibroker@xxxxxxxxxps.com,
> > Keith McCombs <kmccombs@> wrote:
> > >
> > > Longstrangest --
> > > Thank you for your reply.
> > > I'm not about to attack SQL in the near future. However,
> > > it's nice that I am not all alone with this problem.
> > >
> > > And your post below gives me some encouragement to try to
> > > attack the problem. I also like your suggestion for reducing
> > > AB overhead by making additional files with longer time
> > > periods.
> > >
> > > I'm thinking of perhaps a C++ program to produce the smaller
> > > ASCII files, both date-limited and period-modified, for
> > > import into AB. Notice that I said 'thinking', not
> > > 'writing', at least not yet.
> > >
> > > -- Keith
> > >
> > > longstrangest wrote:
> > > >
> > > >
> > > > A technique I've been using to deal with large intraday
> > > > historical DBs is to store the data in a SQL database, use
> > > > the ODBC plugin (with some custom mods), and use a SQL
> > > > stored procedure to fetch the data and deliver the bars to
> > > > AB. Then you can control how much data AB "sees" (has
> > > > access to) by changing a variable in some SQL table that
> > > > your stored procedure uses, indicating the number of
> > > > bars/days you want to have delivered. The system I created
> > > > for doing all of this is too complicated, proprietary, and
> > > > valuable to give out in code, but that's the general
> > > > idea....
> > > >
> > > > You need an adjustable layer of abstraction between AB and
> > > > the data source, and that's how I handle it. In my case,
> > > > this layer of abstraction, handled by the stored procedure
> > > > in the SQL database, is also capable of delivering the
> > > > desired bar size... I have SQL bar-building code that
> > > > creates different tables for different bar intervals from
> > > > the 1-minute data, and rather than leaving it up to AB to
> > > > assemble the 1m bars into, for instance, 5-minute bars
> > > > (which burdens AB significantly for a large 1m database),
> > > > I use my prebuilt QuotesMin5 SQL table. That simple thing
> > > > allows me to work with 5 times the history with 1/5th the
> > > > memory/CPU requirements, as long as I don't need to look
> > > > into bars smaller than 5 minutes.
> > > > That kind of thing.... So maybe that sparks some ideas for
> > > > y'all. Certainly not an easy solution for the masses....
> > > > but the determined few will always find a way.
> > > >
> > > > -Longstrangest
> > > >
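The prebuilt QuotesMin5-style table above amounts to a one-pass
OHLCV aggregation. A rough Python sketch of the idea (the bar
layout here -- (minute, open, high, low, close, volume) tuples --
is hypothetical, not Longstrangest's actual SQL):

```python
from collections import OrderedDict

def build_5min_bars(min1_bars):
    """Aggregate 1-minute (minute, o, h, l, c, v) bars into
    5-minute bars: first open, max high, min low, last close,
    summed volume per 5-minute bucket."""
    out = OrderedDict()
    for t, o, h, l, c, v in min1_bars:
        key = t - t % 5  # start minute of the 5-minute bucket
        if key not in out:
            out[key] = [o, h, l, c, v]
        else:
            bar = out[key]
            bar[1] = max(bar[1], h)
            bar[2] = min(bar[2], l)
            bar[3] = c          # last close wins
            bar[4] += v
    return [(k, *bar) for k, bar in out.items()]

min1 = [(0, 10, 11, 9, 10, 100), (1, 10, 12, 10, 11, 50),
        (4, 11, 11, 10, 10, 25), (5, 10, 10, 9, 9, 75)]
bars5 = build_5min_bars(min1)
```

Precomputing this once (in SQL, or anywhere) is what lets AB skip
the repeated 1m-to-5m assembly on every load.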
> > > > One man's dream for AB: if only AB itself would use
> > > > multithreading/multiprocessing to assemble bars on time
> > > > intervals different from the database's (or used
> > > > multithreading/multiprocessing for *anything*, for that
> > > > matter)... then maybe the other 3 cores on my quad-core
> > > > CPU would get some use. Still scratching my head and
> > > > wondering why AB maxes out one core constantly, and never
> > > > uses the other processors, in this day and age where the
> > > > only significant performance improvements we've seen in
> > > > CPU power have been achieved with multiple CPU cores on a
> > > > single die. Until AB becomes multiprocessing-aware, I'll
> > > > be forced to write code that duplicates AB functionality,
> > > > cave-man style, like bar building, that I can run in my
> > > > own separate thread. IMO, there's no better platform than
> > > > AB.... but if there's one single monstrous performance
> > > > improvement AB can make, it's taking advantage of
> > > > multiprocessing.
> > > > <drums fingers on desk, taps foot>
> > > > Dream on.
> > > >
> > > >
> > >
> >
> >
>