From: gwinn@res.ray.com (Joe Gwinn)
Subject: Re: fixed point vs floating point
Date: 1997/12/01
Message-ID: #1/1
References: <65846t$4vq$1@gonzo.sun3.iaf.nl> <65c58j$1302@mean.stat.purdue.edu>
Distribution: inet
Organization: Raytheon Electronic Systems
Newsgroups: comp.lang.ada,sci.math.num-analysis

In article , dewar@merv.cs.nyu.edu (Robert Dewar) wrote:

> Joe said
>
> < memos unnecessary. This would have been a good thing, but it had to
> wait for the wide use of floating point.
> >>
>
> It would also be a good thing if Ada could magically write your program
> for you :-)
>
> There is no free lunch when it comes to being careful with fixed-point
> scaling. To expect Ada83 to somehow solve this problem automatically
> sounds a bit naive, or let's at least say over-optimistic.

Who said we expected Ada83 fixed point types to solve all problems? What
we hoped for was that conversions between scalings, and normalization
after arithmetic, would be handled, and they weren't. This much one could
expect a language to do.

I think the reason it didn't happen is that, in my experience, most
compiler experts don't understand computer math functions all that well.
I used to run a small compiler group (Fortran and C), and I must say that
that group didn't understand such things. I recall one incident when our
Fortran compiler failed some math function accuracy tests, and they
convinced themselves that the hardware floating point multiply
instruction was wrong. Wrong.
It was clear that their expertise was in computer language grammars and
compiler internals, not mathematics per se. Numerical Methods is not
Compiler Design.

Programmers still have to figure out what the variables shall mean,
their ranges, and their accuracies (resolutions, really).

> < that, or that they couldn't understand how it worked. What I do recall
> was that the Ada experts couldn't get it to work reliably or well using
> the compilers of the day.
> >>
>
> Sounds like you had the wrong "Ada experts" or the wrong compilers, or both.
> Many people used fixed-point in Ada very successfully in the early days
> (I am really thinking specifically of the Alsys compiler here, since I was
> working for Alsys at the time).

Well, we had the Ada experts we had, but not a lot of them, nor time to
fiddle. Almost by definition, the real Ada experts work for Ada vendors,
not their customers. And something that requires the level of expertise
you seem to imply cannot be widely used in the context of full-scale
engineering development (FSED) projects, with 50 or 100 programmers, most
of whom are not language gurus in any language. Most are experts in the
problem domain, not the language of the day. It cannot be otherwise; we
are not in the language business.

As for choice of compiler, it's a big decision, made using a matrix of
weighted numerical scores covering all manner of issues, and I don't
recall that Alsys was ever chosen here. I don't know (or recall) why, but
there are lots of bigger issues than the handling of fixed point types.
Verdix (pre-Rational) was the usual winner, as was XD Ada to a lesser
extent. Handling of the usual embedded realtime issues, plus toolpath
issues, dominated the decisions then, and still does.

Joe Gwinn