From: gwinn@res.ray.com (Joe Gwinn)
Subject: Re: fixed point vs floating point
Date: 1997/12/01
Organization: Raytheon Electronic Systems
Newsgroups: comp.lang.ada,sci.math.num-analysis

In article , dewar@merv.cs.nyu.edu (Robert Dewar) wrote:

> Joe said
>
> <<we hoped for was that conversion between scalings and normalizations
> after arithmetic would be handled, and they weren't. This, one could
> expect a language to do.>>
>
> Well we are still unclear as to what are language issues here, and what
> are issues of bad implementation (i.e. bugs or bad implementation choices
> in the compiler), and so are you :-) In fact we still have no convincing
> evidence that there were *any* compiler problems from what you have
> recalled so far.

I neither know nor care whether this is an Ada language design issue. I
don't buy and use language designs, or even languages; I buy and use
compilers. It doesn't much matter whether we are convinced today or not;
the decision was made then, based on the compilers then available. It may
be different today, but I don't see many people using fixed point where
floating point is available.

> As to expecting the language to do scaling automatically, I think this
> is a mistake for fixed-point.
> PL/1 tried and failed, and COBOL certainly does not succeed (a common
> coding rule in COBOL is never to use the COMPUTE verb, precisely because
> the scaling is not well defined).

Whoa. Read my words again. "Scalings" are nouns, not verbs. In this
context, a "scaling" is the (human) decision of where to put the binary
point, and what the value of the least significant bit shall be. Scalings
are static, made as part of code design, and documented in Interface
Control Documents and Interface Design Documents. Remember, this is
1960s-era stuff. This information can in theory be declared to a suitably
designed compiler, which can then deal with the details of normalization,
a boring mechanical process that humans aren't very good at.

> I think any attempt to automatically determine the scaling of
> intermediate results in multiplications and divisions in fixed-point is
> doomed to failure. This just *has* to be left up to the programmer,
> since it is highly implementation dependent.

No, it isn't hopeless. It's quite simple and mechanical, actually. If it
were impossible, why did Ada83 attempt it? The rules are exactly those we
were taught in grade school for handling decimal points after arithmetic
on decimals, especially multiplication and division. Ada83 should have
been able to apply them, given the information provided when the various
variable types were defined.

> I have no idea what you mean by "normalizations". This term makes no
> sense to me at all in a fixed-point context.

Normalization is also known as rescaling: typically, the result of a
multiplication is right-shifted to put the binary point back where it
belongs in the desired output variable's scaling. A rounding may also be
done.

Joe Gwinn