From: jac@ibms48.scri.fsu.edu (Jim Carr)
Subject: Re: floating point comparison
Date: 1997/09/01
Message-ID: <5ud3mc$5d2$1@news.fsu.edu>
Distribution: inet
References: <5tnreu$9ac$1@news.fsu.edu>
Organization: Supercomputer Computations Research Institute
Newsgroups: comp.lang.ada,sci.math.num-analysis,comp.software-eng,comp.theory,sci.math

James Carr says
|
| The difference is inevitable. It does exist. It has important
| consequences when interpreting the result of a calculation that
| is being used as a substitute for working with real numbers, a
| major reason computers exist. If you do not like the name,
| propose another -- but do not pretend that it does not happen.

dewar@merv.cs.nyu.edu (Robert Dewar) writes:
>
>I think you should assume that I do understand how floating-point works,

Understood (from the beginning).

>The point, which perhaps you just don't see, because I assume you also
>understand floating-point well, is that I think the term "error" for
>the discrepancies that occur when an arbitrary decimal number is converted
>to binary floating-point are not errors. An error is something wrong.

Yes, in one of its meanings. That this is the most commonly understood
meaning among the general public, even though it is not the meaning used
in the sciences, is why I seek good synonyms that will clarify its meaning
in numerical computation.

>There
>is nothing wrong here, you have made the decision to represent a decimal
>value in binary form, and used a method that gives a very well defined
>result.

Not quite. I have made the decision to represent a real number in a way
restricted to a finite number of digits. It does not matter if they are
binary or decimal. (Yes, I know what you probably meant to write, but
saying "decimal value" carries various meanings as well.)

>Similarly when we write a := b + c, the result in a is of course not the
>mathematical result of adding the two real numbers represented by b and c,
>but there is no "error". An error again is something wrong.

But now something new happens, because a is not the representation of
the real number (b+c). That is the important fact and the one that
*demands* that this quantity have a name so it can be discussed and
analyzed.

>If on the other hand, your analysis shows that it is appropriate to perform
>the computation a = b + c, where a is defined as the result required by
>IEEE854, then there is no error.

That analysis must include the effect of the propagation of the quantity
known as "roundoff error" in numerical analysis through those calculations,
with emphasis on whether it gets bigger or not and whether those changes
are acceptable.
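To make that concrete, here is a small toy program (the program and its
names are mine, and I am assuming a compiler whose Long_Float is the
usual 64-bit binary type, as GNAT's is) that shows both the conversion
difference and the arithmetic difference in one place:

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Rounding_Demo is
      package LF_IO is new Float_IO (Long_Float);

      --  0.1 and 0.2 have no exact binary representation; B and C hold
      --  the nearest Long_Float values instead.
      B : constant Long_Float := 0.1;
      C : constant Long_Float := 0.2;
      A : Long_Float;
   begin
      A := B + C;   --  correctly rounded sum of B and C, but not 3/10
      LF_IO.Put (A, Fore => 1, Aft => 17, Exp => 0);
      New_Line;     --  prints 0.30000000000000004 with 64-bit Long_Float
      if A = 0.3 then
         Put_Line ("A equals the literal 0.3");
      else
         Put_Line ("A does not equal the literal 0.3");  --  taken
      end if;
   end Rounding_Demo;

Every step there is perfectly well defined, and nothing is "wrong" in
Robert's sense; but the value stored in A is not the real number 3/10,
and that difference is exactly the quantity that needs a name.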
>Yes, it is useful to have a term to describe the difference between the
>IEEE result and the true real arithmetic result, but it is just a
>difference, and perhaps it would be better if this value had been called
>something like "rounding difference", i.e. something more neutral than
>error.

That is a useful suggestion. However, the field itself uses only
"rounding error", so it is important that programmers learn what
numerical analysts mean when they say this and that the term "error"
does not convey any value judgement. The value judgement is made when
one says that the rounding error is unacceptably large.

After all, there are problems (with the very value-loaded name "ill
conditioned") where that rounding difference results in nicely
deterministic results that are always the same and always have
essentially no relation to the answer found with real arithmetic.

My suggestion would be to use "difference" as part of the arsenal (with
the laboratory experience with "uncertainty" as another tool in that
arsenal) one might use to teach what rounding error is and when it can
become significant.

-- 
 James A. Carr                          | Commercial e-mail is _NOT_
 http://www.scri.fsu.edu/~jac/          | desired to this or any address
 Supercomputer Computations Res. Inst.  | that resolves to my account
 Florida State, Tallahassee FL 32306    | for any reason at any time.
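For the curious, one tiny example of the deterministic-but-unrelated
behaviour I have in mind (again my own sketch, again assuming a 64-bit
binary Long_Float; a numerical analyst would file this particular case
under catastrophic cancellation):

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Cancellation_Demo is
      package LF_IO is new Float_IO (Long_Float);

      Big   : constant Long_Float := 1.0E+16;
      Small : constant Long_Float := 0.5;
      Diff  : Long_Float;
   begin
      --  Real arithmetic gives (1.0E16 + 0.5) - 1.0E16 = 0.5 exactly,
      --  and 0.5 is exactly representable in binary. But the spacing
      --  between adjacent 64-bit Long_Float values near 1.0E16 is 2.0,
      --  so Big + Small rounds to Big and the subtraction yields 0.0,
      --  the same way on every run.
      Diff := (Big + Small) - Big;
      LF_IO.Put (Diff, Fore => 1, Aft => 1, Exp => 0);   --  prints 0.0
      New_Line;
   end Cancellation_Demo;

Each individual rounding is as good as it can be, and the result is
completely repeatable; it simply has no relation to the answer real
arithmetic would give.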