Mike,
Good post.
----- Original Message -----
From: "MikeSuesserott" <MikeSuesserott@xxxxxxxxxxx>
To: <metastock@xxxxxxxxxxxxx>
Sent: Monday, October 01, 2001 8:18 PM
Subject: Floating-point precision has inherent limitations
> Hi all,
>
> It is sometimes erroneously believed that computer arithmetic is precise,
> and that using a computer precludes calculation errors. As regards
> floating-point numbers, nothing could be further from the truth.
>
> Here are two short programs that I posted to the Omega list some time ago
> (summary of results attached). They may serve to demonstrate some of the
> pitfalls that can occur even in simple calculations.
>
> The iteration example demonstrated below uses no advanced mathematics, only
> very simple basic math (square a number, then subtract 2). This formula
> generates normalized numbers that will not grow out of bounds, but will
> stay roughly between -2 and 2, for easier handling. In spite of this
> simplicity you will see that after only 27 iterations the single-precision
> calculation will create errors of a magnitude of more than 100%, with all
> further iteration results in single precision proving totally meaningless
> and belonging purely to the realm of chance.
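One way to see why the iterates stay bounded: if |x| <= 2, then x^2 lies in
[0, 4], so x^2 - 2 lies in [-2, 2] again; starting from 0.5, the sequence can
therefore never leave that interval in exact arithmetic.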
>
> Double precision fares somewhat better; it takes 58 iterations for
> double-precision results to become equally worthless.
>
> Please note that the errors do not merely crop up in the least significant
> decimal places, but sooner rather than later mercilessly destroy even the
> most significant digits.
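A rough way to see why the breakdown comes so quickly: the derivative of
x^2 - 2 is 2x, so near the ends of [-2, 2] a small error is multiplied by up
to 4 per step, and on average this map roughly doubles the error every
iteration. A single-precision float carries about 24 significant bits
(roughly 7 decimal digits), so after a couple of dozen doublings no correct
digits remain, consistent with the 27 iterations quoted; a double carries
about 53 bits (about 16 digits), which fits the 58 iterations.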
>
> To compare single precision with double precision, C++ was used; to be able
> to verify, in turn, the double-precision results from C++, I had to resort
> to the arbitrary-precision capabilities of Mathematica. I used an internal
> precision of 50 decimal digits, which is sufficient to ensure 19-digit
> precision in the result after the one-hundredth iteration.
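The same back-of-the-envelope doubling argument fits the Mathematica setup:
losing roughly one bit (about 0.3 decimal digits) per iteration, 100
iterations consume on the order of 30 of the 50 internal digits, leaving
around 20, in line with the 19 guaranteed digits mentioned here.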
>
> Here are the programs, first C++, then Mathematica:
>
> -------------------------------------------------
> #include <iostream>   // standard header (replaces pre-standard <iostream.h>)
> #include <cstdio>     // for getchar()
>
> int main(int argc, char **argv)
> {
>     float  a[101];   // single precision
>     double b[101];   // double precision
>
>     a[0] = b[0] = 0.5;   // initial condition
>
>     for (int i = 1; i <= 100; i++)   // 100 iterations of x -> x*x - 2
>     {
>         a[i] = a[i-1]*a[i-1] - 2;
>         b[i] = b[i-1]*b[i-1] - 2;
>         std::cout << "i=" << i << " a[" << i << "]=" << a[i]
>                   << " b[" << i << "]=" << b[i] << "\n";
>     }
>
>     getchar();   // pause so the console window stays open
>     return 0;
> }
> --------------------------------------------------
>
> a[0] = 0.5`50
> Table[{i, a[i]=N[a[i-1]^2-2, 50], Precision[a[i]]}, {i,1,100}]
>
> --------------------------------------------------
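For anyone without Mathematica, a rough intermediate check is possible in
plain C++ by adding a long double column to the same loop. This is only a
sketch, not part of the original post, and on most x86 compilers long double
is 80-bit extended precision, so it merely postpones the breakdown rather
than curing it:

-------------------------------------------------
#include <iostream>
#include <iomanip>

int main()
{
    float       a = 0.5f;   // single precision
    double      b = 0.5;    // double precision
    long double c = 0.5L;   // extended precision (width is platform-dependent)

    for (int i = 1; i <= 100; ++i)   // same recurrence: x -> x*x - 2
    {
        a = a*a - 2.0f;
        b = b*b - 2.0;
        c = c*c - 2.0L;
        std::cout << std::setprecision(6)
                  << "i=" << i << "  float=" << a << "  double=" << b
                  << "  long double=" << static_cast<double>(c) << "\n";
    }
    return 0;
}
-------------------------------------------------

Because long double is still a fixed-precision type, it too goes astray after
enough iterations; only arbitrary-precision arithmetic, as in the Mathematica
snippet above, keeps all 100 iterates trustworthy.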
>
> Please find the results in the attached text file. Single- and
> double-precision results are rounded to 6 significant places; the
> Mathematica results are given to the precision guaranteed to be correct
> (>=19).
>
> I would like to add that this example is by no means contrived in any way.
> I could easily demonstrate thousands of similar cases, and there is no
> doubt in my mind that every one of us who uses any single-precision
> software at all will, unwittingly and through no fault of his own, already
> have fallen prey to this phenomenon at one time or another.
>
> Michael Suesserott
>