RE: First calculation problem



MetaStock does indeed use single precision floating point numbers.  As you
mentioned, going to double precision would literally double the memory
requirements for data storage for charts and would also slow down
calculations.  When you get into mathematical calculations, however, going
to double precision doesn't necessarily eliminate the problem.  PC
hardware still cannot store a number as simple as 0.1 exactly, whether you
are using single or double precision.  It is stored as an approximation.
When it comes to floating point numbers, the hardware can only store
exactly those fractions that are built from powers of two (1/2, 1/4, 1/8,
1/16, etc.) and their sums.
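
A minimal C sketch (assuming IEEE 754 hardware, which is what PCs use)
makes the approximation visible.  Printing more digits than the type can
hold shows the value that is actually stored:

    #include <stdio.h>

    int main(void) {
        float  f = 0.1f;   /* single precision: 4 bytes */
        double d = 0.1;    /* double precision: 8 bytes */

        /* Neither type holds 0.1 exactly; the extra digits are the error. */
        printf("0.1 as float : %.20f\n", f);   /* 0.10000000149011611938 */
        printf("0.1 as double: %.20f\n", d);   /* 0.10000000000000000555 */

        /* Sums of powers of two are stored exactly. */
        printf("0.5 + 0.125  : %.20f\n", 0.5 + 0.125);   /* 0.62500000000000000000 */
        return 0;
    }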

Other software packages suffer from the same problem (including VB and
Excel), although some manage to mask it better than others.  If you don't
believe this, I can submit a set of "simple" calculations that will cause
Excel to show precision errors as well.
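
For example (the classic case, not necessarily Ken's set), any environment
that computes in IEEE 754 doubles, as Excel and VB do, will show:

    #include <stdio.h>

    int main(void) {
        /* 0.1, 0.2 and 0.3 are all stored as approximations,
           and their errors don't cancel. */
        double sum = 0.1 + 0.2;
        printf("%.17f\n", sum);       /* 0.30000000000000004 */
        printf("%d\n", sum == 0.3);   /* 0, i.e. not equal   */
        return 0;
    }

Excel masks this by displaying at most 15 significant digits, which is why
the error usually stays hidden there.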

Some software packages use other methods to store and/or calculate
floating point numbers.  This usually involves something like BCD encoding
or some type of integer-encoded fixed-point numbers.  While this solves
the precision problem, it brings problems of its own: slower calculations
and a reduced range of very large or very small numbers that can be
stored.
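
As a sketch of the integer-encoded fixed-point idea (an illustration only,
not how any particular package implements it), prices can be kept as whole
hundredths so that the arithmetic itself is exact integer arithmetic:

    #include <stdio.h>
    #include <math.h>

    /* Store prices as integer hundredths. */
    typedef long cents;

    static cents to_cents(double price) {
        return (cents)floor(price * 100.0 + 0.5);
    }

    int main(void) {
        cents close = to_cents(1486.20);   /* 148620 */
        cents low   = to_cents(1469.40);   /* 146940 */

        /* 148620 - 146940 = 1680, exactly. */
        cents diff = close - low;
        printf("%ld.%02ld\n", diff / 100, diff % 100);   /* 16.80 */
        return 0;
    }

The trade-offs are the ones named above: multiplies and divides need
rescaling, and a long integer only spans so many orders of magnitude.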

We have always been aware of this issue, and that is why we added the
precision function to the formula language.  It was put there to help
those writing formulas work at the precision they need.
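
In C terms, a precision function amounts to rounding a result to a fixed
number of decimal places before it is compared or displayed.  This is only
an analog; the formula language's exact semantics may differ:

    #include <stdio.h>
    #include <math.h>

    /* Hypothetical analog of a formula-language precision function:
       round x (non-negative here) to n decimal places. */
    static double prec(double x, int n) {
        double scale = pow(10.0, n);
        return floor(x * scale + 0.5) / scale;
    }

    int main(void) {
        float close = 1486.20f, low = 1469.40f;        /* single precision */
        printf("raw : %.4f\n", (double)(close - low)); /* 16.7999 */
        printf("prec: %.4f\n", prec(close - low, 2));  /* 16.8000 */
        return 0;
    }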

Ken Hunt
Programming Manager
Equis International


-----Original Message-----
From: Kent Rollins [mailto:kentr@xxxxxxxxxxxxxx]
Sent: Friday, August 25, 2000 12:40 AM
To: metastock@xxxxxxxxxxxxx
Subject: Re: First calculation problem


Looks like you may have hit the old single-precision problem.  PCs basically
have 3 native ways of storing floating point numbers: single-precision (4
bytes), double-precision (8 bytes), and long double (10 bytes).  The problem
is that each one of these representations has a limited number of "numbers"
that it can represent and from time to time you will hit a calculation that
reveals this limitation in all its splendor.  Single-precision floats can
represent approximately 4 billion different numbers.  That's a lot until you
consider that between 0 and 1 alone there are infinitely many real numbers
to approximate.  Double-precision has many, many more numbers that it can represent
(4 billion times 4 billion) and you RARELY see the kind of error you have
hit when you are dealing with numbers on the scale of 1486 with only 2
places of precision.  That leads me to suspect that Equis is using
single-precision numbers for these calculations (Omega does the same thing).
Saves memory, SLIGHTLY faster in computation, loses precision.  There is
really no good reason for using singles in an app like this and there is a
(now obvious) good reason not to.  I would scream and yell at Equis.  Tell
Little Guy that if he convinces Equis to use doubles you'll buy him a pony
and then drop him off in the programmer's offices.
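
To put numbers on those gaps, here is a quick C check (IEEE 754 assumed;
long double is left out because its size varies by compiler) of how far
apart the representable values sit at this magnitude:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        printf("float : %zu bytes\n", sizeof(float));    /* 4 */
        printf("double: %zu bytes\n", sizeof(double));   /* 8 */

        /* Distance to the next representable value above 1486.20: */
        float  f = 1486.20f;
        double d = 1486.20;
        printf("float  gap: %g\n", nextafterf(f, 1e9f) - f);  /* ~0.000122 */
        printf("double gap: %g\n", nextafter(d, 1e9) - d);    /* ~2.3e-13  */
        return 0;
    }

A single can't pin a 4-digit price down to better than about one
ten-thousandth, which is exactly the size of the error in question.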

Ken Hunt, does MetaStock use single precision for these calculations?

Kent


-----Original Message-----
From: Guy Tann <grt@xxxxxxxxxxxx>
To: Metastock User Group <metastock-list@xxxxxxxxxxxxx>
Date: Friday, August 25, 2000 1:29 AM
Subject: First calculation problem


List,

Well, I decided to do a little more work and discovered my first problem.

Somehow, MS came up with the following:

    1486.20
  - 1469.40
  ---------
    16.7999   instead of the more commonly expected 16.80

Now the first number is the Close and the second number is the day's low, so
we can't blame this on any previous calculation or anything left over from
something else.  Well, that's not quite true.  The Low used in the
calculation was the result of an IF() statement that made sure that the Low
was really the Low by our definition (by checking it against the previous
day's Close).

What internal methodology might cause this excellent bit of subtraction?
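
One plausible answer, assuming IEEE 754 single precision (which Ken Hunt
confirms above): neither price is stored exactly, and subtracting the two
stored values, while itself exact, hands back their combined error.  A C
sketch of what actually sits in memory:

    #include <stdio.h>

    int main(void) {
        float close = 1486.20f, low = 1469.40f;

        /* Neither decimal value has an exact single-precision form: */
        printf("close stored as %.10f\n", close);   /* 1486.1999511719 */
        printf("low   stored as %.10f\n", low);     /* 1469.4000244141 */

        /* The stored values differ by exactly 16.7999267578125: */
        printf("difference      %.10f\n", close - low);   /* 16.7999267578 */
        return 0;
    }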

Granted, in checking out the 170-member dataset, I didn't check them all.  I
checked the first 20 and the last 20.  Now I'll probably have to go back and
sample some in the middle.

I used my trusty, solar-powered calculator to double-check my Clipper output
and they both agree that MS is wrong.  Any suggestions?

This is making me very nervous and might force me back to Excel and/or VB.
So far I've spent over two weeks on this relatively simple program and I have
to admit that I never thought it would be necessary to go back and
double-check such basic arithmetic.


Guy

Never be afraid to try something new. Remember, amateurs built the ark,
professionals built the Titanic.