From: jac@ibms48.scri.fsu.edu (Jim Carr)
Subject: Re: floating point comparison
Date: 1997/08/22
Message-ID: <5tk5he$7rq$1@news.fsu.edu>
References: <5theo8$qg7$1@news.fsu.edu>
Organization: Supercomputer Computations Research Institute
Newsgroups: comp.lang.ada,sci.math.num-analysis,comp.software-eng,comp.theory,sci.math

dewar@merv.cs.nyu.edu (Robert Dewar) writes:
>
><< So is the process of measurement with a device with less precision
>   than necessary to make the measurement.  For example, in your case
>   there is no uncertainty in the integer part of a 14 digit real in
>   a double precision IEEE representation that has been properly
>   rounded, but there is an uncertainty if it was stored as a float. >>
>
>I strongly agree with Christian here, uncertainty is an even WORSE
>term than round off error.

 I wrote the above, not Christian.

 I prefer "uncertainty" because, in a university context, it is familiar
 to students from their chemistry and physics labs, and the rules for
 propagating it are the same.  The tradition in numerical analysis has
 always (?) been to identify two kinds of "error" -- formula and
 round-off -- that compete in any given kind of calculation.  It depends
 on the audience.

>A common viewpoint of floating-point arithmetic held by many who don't
>know too much about it is that somehow floating-point arithmetic is
>inherently (slightly) unreliable and can't be trusted.  The term
>uncertainty encourages this totally unuseful attitude.

 Are you saying that the floating point representation is not an
 approximation to the real number being stored?  I don't think so.
 I think you are saying that the results of floating point operations
 on numbers in that floating point representation are deterministic.
 I agree that they are.

 The point is that the difference between the real number being
 represented in the machine and its particular floating point
 representation will be propagated by the "exact" procedure, and will
 sometimes dramatically increase the difference between the result of
 real arithmetic on the original real number and the result of floating
 point arithmetic on its floating point representation.  The propagation
 of this uncertainty also follows deterministic (albeit statistical)
 rules quite familiar from the sciences in which measurement is
 commonly used.

 I do not see how you can claim that the difference between the result
 of real arithmetic and the result of floating point arithmetic, over a
 range of input values, will not show a distribution of the type
 normally associated with measurement uncertainty.  It does.

 In the example cited, the uncertainty in one possible representation
 (float) is greater than in another (double), with the result that the
 desired integer conversion is not exact in some cases where the
 original author claimed it would be.
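 To make that concrete, here is a minimal C sketch (C rather than Ada
 only because the int-size remark below concerns C; the 14-digit value
 is invented for illustration and is not taken from the cited example):

    #include <stdio.h>

    int main(void)
    {
        /* A 14-digit integer-valued real (invented for illustration). */
        double d = 12345678901234.0;  /* exact: every integer below 2^53
                                         is representable in IEEE double */
        float  f = (float) d;         /* only ~24 significant bits survive;
                                         the spacing of floats near 1.2e13
                                         is 2^20, about a million         */

        printf("double    : %.1f\n", d);
        printf("float     : %.1f\n", (double) f);
        printf("difference: %.1f\n", (double) f - d);
        return 0;
    }

 On a typical IEEE 754 system the two printed values differ: the double
 holds the 14-digit integer exactly, the float does not, so truncating
 the float no longer recovers the original integer part.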
 (The example also appeared to ignore the minimum size of int mandated
 by C and present in many common implementations, but that is
 irrelevant here.)  This is obvious if one knows the relative
 uncertainty in those two IEEE representations of real numbers.

--
James A. Carr                           | Commercial e-mail is _NOT_
http://www.scri.fsu.edu/~jac/           | desired to this or any address
Supercomputer Computations Res. Inst.   | that resolves to my account
Florida State, Tallahassee FL 32306     | for any reason at any time.