From: dewar@merv.cs.nyu.edu (Robert Dewar)
Subject: Re: floating point comparison
Date: 1997/08/10
Organization: New York University
Newsgroups: comp.lang.ada,sci.math.num-analysis,comp.software-eng,comp.theory,sci.math
References: <33E61497.33E2@pseserv3.fw.hac.com> <5sar4r$t7m$1@cnn.nas.nasa.gov> <5sbb90$qsc@redtail.cruzio.com> <33ECA115.13DE@math.okstate.edu>

David says

<< Seems like there must be _some_ error allowed in the standard standards,
and any error at all means I can't trust the floating-point here. Please
tell me I'm wrong; it would save me some work. >>

If we are talking about IEEE floating-point, the operations have no errors
at all. There are no rounding errors or anything else in the arithmetic
model defined in this standard. The results of the operations are exactly
defined, to the last bit (*), and must be the same on all machines.

If you regard floating-point as equivalent to real arithmetic, then of
course the results do not match those of real arithmetic. But to regard
floating-point arithmetic as real arithmetic is fundamentally wrong, just
as it is misleading to regard integer arithmetic in a language with
limited-range integers as an implementation of the mathematical natural
numbers.

It would be nice to have real arithmetic on computers. There is a very
nice little program for computing pi to a huge number of places, if only
you had this capability (see Martin Gardner's Scientific American column
on how to compute pi to 10,000 digits on a pocket calculator -- the catch
is that your calculator must have real arithmetic :-)

You can trust floating-point completely if the following conditions are met:

1. You are using IEEE, or some other well-defined standard (many non-IEEE
   floating-point systems do not meet this requirement).

2. You understand the system you are using, and do not expect it to be
   equivalent to real arithmetic.

3. The mapping from the language/implementation you are using to the
   floating-point operations is well defined, and you understand it.

Condition 1 is little of a problem today; most modern computer systems
support IEEE arithmetic, although DEC and SGI have introduced unwelcome
variations in their most recent high-performance hardware (the Intel
implementation is complete and accurate, as are those from IBM, HP, Sun
and most other vendors).

Condition 2 is the subject of this thread. It is a big problem, but one
made by people, not by computers.

Condition 3 is definitely a problem, but a manageable one in practice. I
have a thesis student, Sam Figueroa (who recently moved from NeXT to
Apple :-), who is working on exactly this problem, and the recent
LCAS/LIAS standardization is a step in the right direction.

I really dislike the term "rounding error". Error is a loaded term which
somehow indicates that something has gone wrong.
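To make that concrete, here is a minimal sketch in Ada (this being
comp.lang.ada), under the assumption that Long_Float is mapped to an IEEE
754 double, as it is with GNAT on common hardware; the procedure name and
formatting are mine. The machine numbers nearest 0.1 and 0.2 add up, with
correct rounding, to a value that is not the machine number nearest 0.3;
every bit of that sum is specified, and it is the same on every conforming
machine.

with Ada.Text_IO;            use Ada.Text_IO;
with Ada.Long_Float_Text_IO; use Ada.Long_Float_Text_IO;

procedure Exact_Sum is
   X : constant Long_Float := 0.1;
   Y : constant Long_Float := 0.2;
   Z : constant Long_Float := 0.3;
   S : constant Long_Float := X + Y;
begin
   --  X, Y and Z are the Long_Float machine numbers nearest to 0.1, 0.2
   --  and 0.3; none of them is the exact decimal fraction.  S is the
   --  correctly rounded sum of X and Y, determined to the last bit.
   Put ("X + Y = ");  Put (S, Fore => 1, Aft => 17, Exp => 0);  New_Line;
   Put ("0.3   = ");  Put (Z, Fore => 1, Aft => 17, Exp => 0);  New_Line;

   if S = Z then
      Put_Line ("the sum is the machine number nearest 0.3");
   else
      --  On an IEEE double this is the branch taken: not an error,
      --  just a different (and fully specified) machine number.
      Put_Line ("the sum differs from the machine number nearest 0.3");
   end if;
end Exact_Sum;

The two printed values differ in their last decimal places and the
equality test reports them unequal, yet nothing has gone wrong anywhere;
the arithmetic did exactly what the standard says it must.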
When you take two IEEE numbers and do an addition, you get a precise
answer, with no error. Perhaps a term like "real discrepancy" would be
better, and would help avoid propagating the dangerous impression that
floating-point arithmetic *is* real arithmetic that just does not work
right.

(*) Yes, for the IEEE experts: I know perfectly well that in marginal
cases involving denormals (cases that most traditional floating-point
systems blow completely with sudden underflow), there is an
implementation-defined non-determinacy arising from double rounding. It is
a pity that this undermines the absolute statement that results are always
entirely defined, but in practice it is very unlikely to be a problem. For
a complete discussion of this issue, you can look at the paper that Sam
Figueroa wrote on this subject. Email me if you are interested.
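As a footnote on condition 3: Ada exposes the mapping from a
floating-point type to the underlying machine arithmetic through
language-defined attributes, so a program can check what it is actually
running on, including whether denormals (gradual underflow) are supported.
A minimal sketch, assuming only a standard Ada 95 compiler; the procedure
name is mine, the attributes are from RM A.5.3:

with Ada.Text_IO; use Ada.Text_IO;

procedure Show_Model is
begin
   --  These attributes describe the machine type that Long_Float is
   --  mapped to; on most current systems that is an IEEE 754 double.
   Put_Line ("Machine_Radix    =" & Integer'Image (Long_Float'Machine_Radix));
   Put_Line ("Machine_Mantissa =" & Integer'Image (Long_Float'Machine_Mantissa));
   Put_Line ("Machine_Emin     =" & Integer'Image (Long_Float'Machine_Emin));
   Put_Line ("Machine_Emax     =" & Integer'Image (Long_Float'Machine_Emax));
   Put_Line ("Machine_Rounds   = " & Boolean'Image (Long_Float'Machine_Rounds));
   Put_Line ("Denorm           = " & Boolean'Image (Long_Float'Denorm));
end Show_Model;

On a machine whose Long_Float is an IEEE 754 double, this typically
reports radix 2, a 53-digit mantissa, and True for both Machine_Rounds
and Denorm.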