From: dewar@merv.cs.nyu.edu (Robert Dewar)
Subject: Re: The disturbing myth of Eiffel portability
Date: 1996/11/24
Organization: New York University
Newsgroups: comp.lang.eiffel,comp.lang.ada,comp.lang.c,comp.lang.c++,comp.lang.object
References: <3294e64b.74799475@news2.ibm.net> <56t1m4$nis@bcrkh13.bnr.ca> <570j6p$nlq$1@news-s01.ny.us.ibm.net>

Francois agrees with me:

   "Anyway, I agree with you that unless you deal with the infinitely
   small or infinitely large, gone are the days where you had to
   subtract two floats and see if the difference was less than some
   predefined "tiny" value rather than testing them for equality."

BUT, I said nothing of the kind. I said that comparing two floats for equality is a sensible operation in the sense that it has well-defined semantics. But whether you use exact equality, or some more complex criterion, including for example "subtracting two floats and seeing if the difference is less than some predefined 'tiny' value", is an issue of algorithmic requirements.

Suppose we are doing an iteration with a converging result. How do we detect convergence? It depends on the situation. Let's give two examples (a small C sketch of both convergence tests follows below):

1. Computing square roots using Newton-Raphson iteration with round-to-zero arithmetic. In this case you get absolute convergence in IEEE arithmetic, so an exact test for equality is fine.

2. Integrating a function using Simpson's rule with finer and finer element size, e.g. halving the element each time. Here you jolly well will NOT get absolute convergence, since after a while the rounding errors will dominate the fundamental inaccuracy of the approximation method. Catching the right point (i.e. figuring out what the right tiny value is) is not easy. I well remember that the business school at U of Chicago used to give this out as the second assignment in the beginning Fortran course, and it ate up HUGE amounts of computer time, because students missed the convergence point, and once you have missed it, you are in an infinite loop.

The required analysis is delicate. For example, in the above, I happen to know (or at least I am pretty sure I remember) that the analysis for case one is correct for chop semantics, but I simply don't know if it is right for round semantics (you have to worry about the possibility of oscillating between two adjacent machine numbers, for example). Sometimes you do the (abs (a-b) < tiny) test as a substitute for proper analysis, and that's not terrible ... but it may be unnecessarily inefficient.

My point was simply that it is wrong to tell students (or to think!) that exact comparison of floating-point values is always wrong. Consequently it is indeed fine in some cases to do such comparisons. But it is wildly wrong to think that such comparisons are always appropriate.
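To make the two convergence styles concrete, here is a minimal C sketch. The code and names are illustrative additions, not from the original discussion; the exact-style loop uses the safer "stopped decreasing" form precisely because of the oscillation caveat above.

    #include <math.h>

    /* Illustrative sketch only.  Newton-Raphson square root, assuming
       a > 0: after the first step the iterates sit at or above sqrt(a)
       and are non-increasing, so we stop the moment the value fails to
       decrease.  Under chop (round-to-zero) semantics a plain
       x != prev test is known to terminate; the x < prev form also
       guards against oscillation between two adjacent machine numbers
       under round-to-nearest. */
    double nr_sqrt(double a)
    {
        double prev, x = a > 1.0 ? a : 1.0;   /* any start >= sqrt(a) */
        do {
            prev = x;
            x = 0.5 * (x + a / x);            /* Newton step */
        } while (x < prev);                   /* exact-style test */
        return prev;
    }

    /* Composite Simpson's rule on n (even) subintervals. */
    static double simpson(double (*f)(double), double lo, double hi, int n)
    {
        double h = (hi - lo) / n;
        double s = f(lo) + f(hi);
        for (int i = 1; i < n; i++)
            s += (i % 2 != 0 ? 4.0 : 2.0) * f(lo + i * h);
        return s * h / 3.0;
    }

    /* Halve the element size until successive estimates agree to
       within tiny.  An exact == test here would normally loop forever:
       rounding error eventually dominates the method error. */
    double integrate(double (*f)(double), double lo, double hi,
                     double tiny)
    {
        double prev = simpson(f, lo, hi, 2), cur;
        for (int n = 4; ; n *= 2) {
            cur = simpson(f, lo, hi, n);
            if (fabs(cur - prev) < tiny)      /* the "tiny" criterion */
                return cur;
            prev = cur;
        }
    }

Choosing tiny is exactly the delicate analysis described above: make it too small and the loop never exits, which is the infinite-loop trap the Chicago students fell into.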
For example,

   if (1.0 / 10.0) = 0.1 then

has somewhat tricky semantics. One reading of IEEE 754 requires the non-machine-number literal 0.1 to be converted in the current rounding mode. Since the rounding mode is dynamic, this would imply a truly horrible execution model, one which is barely realistic. If instead, as is typical in almost all implementations of all languages, the 0.1 is evaluated at compile time using round-to-nearest, then the behavior of this test *may* depend on the current rounding mode (I say may because, without more work than I care to put in, I do not know if 1.0/10.0 has a different representation in chop and round semantics, but if 10.0 is a wrong choice, there are others :-)

P.S. I know that in Ada the above is universal real -- just change the 1.0 to a variable with value 1.0 if you are nitpicking at that level :-)

P.P.S. My proposed binding to the IEEE model for Ada dealt with the vexing issue of literals by saying that using a normal numeric literal required round-to-nearest semantics at compile time, and that the special intrinsic function

   +"1.0"

yielded a literal value that was evaluated according to the current rounding mode.
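For anyone who wants to poke at this, here is a minimal C99 sketch of the rounding-mode dependence. This is illustrative code, not part of the original post; whether it actually shows a difference depends on the compiler honoring the dynamic rounding mode (FENV_ACCESS) and on the volatiles defeating constant folding.

    #include <stdio.h>
    #include <fenv.h>

    #pragma STDC FENV_ACCESS ON     /* we change the FP environment */

    int main(void)
    {
        volatile double one = 1.0, ten = 10.0; /* defeat constant folding */
        double lit = 0.1;  /* converted at compile time, round-to-nearest */

        fesetround(FE_TONEAREST);
        printf("round: %d\n", one / ten == lit);   /* expect 1 */

        fesetround(FE_TOWARDZERO);                 /* "chop" semantics */
        /* Plausibly prints 0: the chopped quotient is the machine
           number just below the round-to-nearest conversion of 0.1. */
        printf("chop:  %d\n", one / ten == lit);

        fesetround(FE_TONEAREST);                  /* restore default */
        return 0;
    }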