From: nesterov@holo.ioffe.rssi.ru (Andrew V. Nesterov)
Subject: Re: floating point comparison
Date: 1997/08/29
Message-ID: <5u58oc$k096_002@ioffe.rssi.ru>
References: <5tk5he$7rq$1@news.fsu.edu> <5tnreu$9ac$1@news.fsu.edu>
Organization: Ioffe Phys. Techn. Institute
Newsgroups: comp.lang.ada,sci.math.num-analysis,comp.software-eng,comp.theory,sci.math

In article , dewar@merv.cs.nyu.edu (Robert Dewar) wrote:

>James Carr says

[...]

>The point, which perhaps you just don't see, because I assume you also
>understand floating-point well, is that the discrepancies that occur
>when an arbitrary decimal number is converted to binary floating-point
>are not "errors". An error is something wrong. There is nothing wrong
>here; you have made the decision to represent a decimal value in
>binary form, and used a method that gives a very well defined result.
>If this is indeed an "error", then please don't use methods that are
>in error; represent the number some other way. If, on the other hand,
>your analysis shows that the requirements of the calculation are met
>by the use of this conversion, then there is no error here!
>
>Similarly, when we write a := b + c, the result in a is of course not
>the mathematical result of adding the two real numbers represented by
>b and c, but there is no "error". An error again is something wrong.
>If an addition like that is an error, i.e. causes your program to
>malfunction, then don't do the addition.

The result in "a" of the above expression can be either exact or
rounded (inexact, approximate) in the sense of infinite-precision
algebra. The question is whether the terms and the result have finite
representations in the particular floating-point model, that is,
whether they fit completely into the given finite number of binary (or
other n-ary) digit places.

From a more general point of view (Hans Olsson has already mentioned
this), there is a much simpler and more robust way of analyzing
floating-point computations, one not bound to any particular radix.
The floating-point representation fl(x) of a number x is such that

   fl(x) = x(1 + delta),

where delta is a number of small magnitude (in some well-defined
sense) called the roundoff error. This does not mean that something
wrong has been done; it means that the floating-point representation
of an exact value is known imprecisely. Indeed, it is quite similar to
a measurement process, where a result is also usually known
imprecisely, or equivalently, with an error. The analogy can even be
extended to systematic (directed) and random errors: rounding to
nearest produces random errors, while truncation toward +/-infinity
introduces a systematic error.
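To make the model concrete, here is a minimal Ada sketch (the program
name is mine, and I assume an Ada 95 compiler on which Long_Float is
an IEEE double). It prints the value actually stored for the literal
0.1, which has no finite binary expansion, along with the model
epsilon that bounds |delta|:

   with Ada.Text_IO;            use Ada.Text_IO;
   with Ada.Long_Float_Text_IO; use Ada.Long_Float_Text_IO;

   procedure Fl_Of_X is
      --  0.1 is not a machine number in binary, so what is stored is
      --  fl(0.1) = 0.1*(1 + delta) for some small delta.
      X : constant Long_Float := 0.1;
   begin
      Put ("fl(0.1)          = ");
      Put (X, Fore => 1, Aft => 20, Exp => 0);
      New_Line;
      Put ("bound on |delta| = ");
      Put (Long_Float'Model_Epsilon, Fore => 1, Aft => 3);
      New_Line;
   end Fl_Of_X;

On an IEEE double this should print fl(0.1) as roughly
0.10000000000000000555, i.e. delta is about 5.6e-17, within the
printed bound; with truncation instead of round-to-nearest, every such
delta would have the same sign, which is exactly the systematic error
just mentioned.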
I am a little skeptical about the "uncertainty" term that was proposed
by James A. Carr, because in quantum mechanics that term refers to a
value which is not merely "undetermined", i.e. unknown at the moment,
but which has no definite value at all: for a coordinate and momentum
pair, if the former is known exactly, the latter can have any value.

The next step in error analysis is to assume that a binary operation
(one involving two terms or factors) whose operands are already in the
floating-point domain is performed exactly, and only the final result
is rounded (here * stands for any binary operation):

   fl(x * y) = (x * y)(1 + delta)

This view eliminates any difference between so-called "real" numbers
and "fp" numbers: the latter are simply a subset of the former, wider
set, and are by no means "artificial" or "unreal". A complete
discussion of this method is in chapter 20 of G.E. Forsythe and
C.B. Moler, "Computer Solution of Linear Algebraic Systems",
Prentice-Hall, 1967 -- long before the IEEE standard! This way of
analyzing floating-point numbers and operations is invariant with
respect to the radix and precision of the fp arithmetic, and is
correct not just for IEEE implementations. The IEEE standards were
adopted relatively recently, while many computational programs based
on this analysis have long been working quite well on many different
architectures.
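A small illustration of this model (again an Ada sketch of mine, under
the same IEEE-double assumption for Long_Float): adding 0.1 ten times
performs ten correctly rounded additions, each exact up to its own
(1 + delta_i) factor, and the deltas accumulate in the final result.

   with Ada.Text_IO;            use Ada.Text_IO;
   with Ada.Long_Float_Text_IO; use Ada.Long_Float_Text_IO;

   procedure Ten_Tenths is
      Sum : Long_Float := 0.0;
   begin
      for I in 1 .. 10 loop
         --  Each pass rounds once:
         --  fl(Sum + 0.1) = (Sum + 0.1)*(1 + delta_i)
         Sum := Sum + 0.1;
      end loop;
      Put ("sum of ten 0.1s = ");
      Put (Sum, Fore => 1, Aft => 20, Exp => 0);
      New_Line;
      Put ("sum - 1.0       = ");
      Put (Sum - 1.0, Fore => 2, Aft => 3);
      New_Line;
   end Ten_Tenths;

Nothing went wrong in this loop: every intermediate result is a
correctly rounded machine number, yet the final sum should differ from
1.0 by about 1e-16 -- a roundoff error, not a mistake.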
>If on the other hand, your analysis shows that it is appropriate to
>perform the computation a = b + c, where a is defined as the result
>required by IEEE854, then there is no error.

Once again, as I am sure somebody has already noticed, the standard
was drawn up merely to standardize the parameters of floating-point
arithmetic -- ranges, mantissa lengths, the gradual underflow process,
exception signals, etc. -- because back in the 70s (as is still true
now) there were plenty of different fp implementations, although they
all behaved more or less similarly. By no means will IEEE save
computations from roundoff errors!

>Yes, it is useful to have a term to describe the difference between
>the IEEE result and the true real arithmetic result, but it is just a
>difference, and perhaps it would be better if this value had been
>called something like "rounding difference", i.e. something more
>neutral than error.

The term "difference" is already in use in numerical computation, e.g.
in "finite-difference methods" and "finite-difference equations"; why
make things even more tangled?

>The trouble with the term error, is that it feeds the impression, seen
>more than once in this thread, that floating-point is somehow
>unreliable and full of errors etc.

The strength of the term is that it combines smoothly with the other
sources of error in a calculation, and not only a computer
calculation. The model behind a calculation can be imprecise, i.e.
carry errors, and the input data can be imprecise as well; the result
is then computed with the errors of the model, of the input data, and
of the fp arithmetic. All these errors can be compared and analyzed to
estimate how close the computed result is to the perfect one.

>Go back to the original subject line of this thread for a moment.
>
>Is it wrong to say
>
>   if a = b then
>
>where we know exactly how a and b are represented in terms of IEEE854,
>and we know the IEEE854 equality test that will be performed?
>
>The answer is that it might be wrong or it might be right. It has a
>well defined meaning, and if that meaning is such that your program
>works, it is right; if that meaning is such that your program fails,
>then it is wrong.

Yes, indeed, the equality test has a well-defined meaning, although it
can be dangerous when used naively, because A and B are usually (or at
least possibly) the results of a great deal of calculation involving
roundoffs. The probability that A and B coincide in ALL binary (or
whatever) places is just very small. On the other hand, the test can
be very well justified: the tests for iteration termination in the
EISPACK codes (e.g. TSTURM) are an example. Funnily enough, right in
those tests there is also an excellent example of how roundoff errors
work to ensure that the iterations converge. Further, one can easily
figure out how certain kinds of optimization could corrupt those
tests, and how to prevent optimization from doing so.

>But --- we could make the same statement about any possible
>programming language statement we might write down.
>
>There is nothing inherently wrong, bad, erroneous, or anything else
>like that about floating-point.
>
>Now one caveat is that we often do NOT know how statements we write
>will map into IEEE854 operations unless we are operating in a well
>defined environment like SANE. As I have noted before, this is a
>definite gap in programming language design, and is specifically the
>topic that my thesis student Sam Figueroa is working on.
>
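Coming back once more to the "if a = b then" question above, here is a
short Ada sketch (the names and the tolerance factor are mine;
Long_Float is again assumed to be an IEEE double). Variables are used
so that the sums are computed in machine arithmetic rather than in
Ada's exact universal arithmetic for static expressions:

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Eq_Test is
      A, B, C : Long_Float;
      Eps     : constant Long_Float := Long_Float'Model_Epsilon;
   begin
      A := 0.1;
      B := 0.2;
      C := 0.3;
      --  A + B and C are rounded independently, usually to different
      --  machine numbers, so the naive test is likely to fail.
      if A + B = C then
         Put_Line ("0.1 + 0.2 = 0.3 exactly");
      else
         Put_Line ("0.1 + 0.2 /= 0.3 in machine arithmetic");
      end if;
      --  A test in the spirit of the EISPACK termination criteria
      --  accepts agreement to within a few units of roundoff instead:
      if abs (A + B - C) <= 4.0 * Eps * abs (C) then
         Put_Line ("equal to within roundoff");
      end if;
   end Eq_Test;

The exact test is not forbidden -- as in TSTURM, it can even be relied
upon -- but only after the kind of error analysis sketched above.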