From: gwinn@res.ray.com (Joe Gwinn)
Subject: Re: fixed point vs floating point
Date: 1997/11/25
References: <65846t$4vq$1@gonzo.sun3.iaf.nl> <65c58j$1302@mean.stat.purdue.edu>
Organization: Raytheon Electronic Systems
Newsgroups: comp.lang.ada,sci.math.num-analysis

In article , dewar@merv.cs.nyu.edu (Robert Dewar) wrote:

> Joe says
>
> << in those days didn't work well or even correctly in many cases, so we
> never used it.  I don't offhand recall doing an Ada job that didn't use
> floating point; Ada83 was mostly used on the larger computers, such as
> VAXes and Silicon Graphics boxes, so the issue never came up.  The
> exception may have been XD Ada from System Designers (where are they
> now?) on the 68020.>>
>
> While I can't speak for all Ada 83 compilers, I certainly know many of
> them that did fixed-point just fine.  It has actually been my experience
> that in most cases where users thought Ada 83 compilers were doing
> things wrong, it was because they did not understand the Ada 83
> fixed-point semantics properly, so it would be interesting to know
> specifically what Joe is referring to.

It's been at least ten years since use of fixed point in Ada83 came up,
and I no longer recall the details.  I don't think it was lack of
understanding, as we had lots of oldtimers experienced with scaled binary
(most often in assembly, but also in fortran, etc.) and lots of Ada folk,
some quite good.  Anyway, we gave up on it.  It may have been an
early-compiler effect, later fixed, but the damage was already done.

> << program with, as scaling issues go away, and so it's much easier to
> get right.  Remember, the traditional comparison was with
> manually-implemented scaled binary.  The fact that floating point
> arithmetic is approximate isn't much of an issue for most physics-based
> mathematical code, and those places that are exceptions are still *much*
> less trouble to deal with than the traditional scaled binary.>>
>
> So how do we square this with Robert Eachus' claim that it is *easier*
> to analyze fixed-point code.

Simple, actually: it depends on what you are doing.  I think Robert
Eachus was doing safety-critical code.

> If you are writing code casually, without careful error analysis, then
> it is indeed true that floating-point is easier, but if you are doing
> careful error analysis, then fixed-point is usually easier.  Joe's
> offhand comment about errors not being an issue for most physics-based
> mathematical code (I do not agree!) clearly indicates that we are
> dealing with the casual approach here, so in that context Joe's comment
> makes good sense.

Not so fast.  I was obviously talking about the numerical noise caused by
use of floating-point arithmetic.  The noise caused by one's choice of
mathematical algorithm and its formulation, always an issue, is with us
in either case, float or fixed.  We used scaled binary (fixed-point)
arithmetic most often with 16-bit words, so every bit counted, and
numerical noise was very much an issue.
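For the record, an Ada fixed-point declaration of the kind under
discussion looks roughly like this (a sketch only; the type name and
scaling are invented for illustration):

    procedure Fixed_Demo is
       --  16-bit-style scaled binary: the delta fixes the scaling in
       --  the type, so the compiler does the bookkeeping the old
       --  assembly and fortran code did by hand.  Resolution 2**(-6)
       --  over +/- 500.0 fits in a 16-bit word.
       type Range_Gate is delta 2.0 ** (-6) range -500.0 .. 500.0;
       R : Range_Gate := 100.0;
    begin
       R := R + Range_Gate'Delta;  --  smallest representable step
    end Fixed_Demo;

The point is that the scaling lives in the type declaration, where the
old scaled-binary code carried it in the programmer's head.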
One 1980s-era radar system I worked on (in fortran 77) used 16-bit
mantissas and 16-bit "block" exponents (one exponent covered multiple
mantissa words) to handle the scaling issues without requiring floating
point hardware.  Yes, it *was* ugly to program, but given the machines of
the day, there was little choice.

A single-precision (32-bit) float has a 23-bit mantissa, so the numerical
noise will be much less than with a 16-bit mantissa (one word), by a
factor of about 2^(23-16) = 128.  A 32-bit fixed point number can
approach this, if and only if one doesn't have to spend too many bits on
the dynamic range.

In all truth, in most systems the input data streams aren't all that
clean, and numerical noise in the arithmetic is by far the least of it.
This is the basic reason we could use 16-bit arithmetic in the first
place.  A badly-formulated algorithm will come up against this inherent
measurement noise first.  And there is always 64-bit double precision,
with 53 bits of mantissa.  Only a few things require such precision.

> << arithmetic)>>
>
> I really don't know what problems might be fixed, since I don't know of
> problems in Ada 83, at least not ones that are likely to be what Joe is
> referring to, so I can't say whether these problems have been fixed!
> Certainly to the extent that Joe was seeing compiler bugs in compilers
> I am unfamiliar with, this seems to have no relationship to Ada 95.

I do recall the Ada83 fixed point issues being discussed while Ada95 was
being developed.  Again, I don't recall the details, this time because it
wasn't an issue I followed all that closely; we had no plans to ever use
fixed point.  Anyway, I didn't claim that Ada95 had these problems, only
that with present-day hardware we will generally use floating point, so I
probably never will find out for myself whether the Ada95 fixed point
arithmetic works.  Others may be more likely to use fixed point
arithmetic; I can only report our experience and outlook.

Joe Gwinn
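P.S.  For the curious, the "block exponent" trick mentioned above works
roughly like this (a sketch only; names and numbers are invented, and
it's written in Ada for this group rather than the original fortran 77):

    with Interfaces; use Interfaces;
    procedure Block_Float_Demo is
       --  One shared exponent covers a whole block of 16-bit mantissas;
       --  each element represents Block (I) * 2.0 ** Exponent.
       type Mantissa_Block is array (1 .. 8) of Integer_16;
       Block    : Mantissa_Block := (1000, -250, 12, 0, 3, 77, -4096, 5);
       Exponent : Integer_16 := 0;
       Peak     : Integer_16 := 0;
    begin
       --  Find the largest magnitude in the block.
       for I in Block'Range loop
          if abs Block (I) > Peak then
             Peak := abs Block (I);
          end if;
       end loop;
       --  Normalize: shift every mantissa left until the peak fills the
       --  16-bit word, decrementing the shared exponent to compensate,
       --  so the block keeps as many significant bits as one word allows.
       while Peak > 0 and then Peak < 2**14 loop
          for I in Block'Range loop
             Block (I) := Block (I) * 2;
          end loop;
          Peak     := Peak * 2;
          Exponent := Exponent - 1;
       end loop;
    end Block_Float_Demo;

One normalization serves the whole block, which is why it was workable
without floating point hardware: the per-word cost is just a shift.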