From: dewar@merv.cs.nyu.edu (Robert Dewar)
Subject: Re: fixed point vs floating point
Date: 1997/12/03
Message-ID: #1/1
References: <3485A850.3A92@gsg.eds.com>
Organization: New York University
Newsgroups: comp.lang.ada

Seymour J says

<>

Maybe and maybe not. Depending on the scale and precision of B, you may
prefer to write

   A = 1 + B/03;

which has different semantics. I find changing the number of leading
zeroes in a decimal constant to be a non-intuitive way of controlling
the scale and precision of the intermediate result, and I think it is
FAR safer to make the programmer think explicitly about what scale and
precision are required, define a type that encapsulates this decision,
and write

   A := 1 + Inttype (B / 3.0);

Yes, it requires more work from the programmer, but only work that
really is quite necessary. Fixed-point requires manual scaling, which is
why people prefer floating-point. To think that fixed-point semantics
can be effectively automated so that it really functions as a poor man's
floating-point, where everything is done right automatically, is, I am
afraid, naive.
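To see why the scale of the intermediate quotient matters, here is a minimal Python sketch. It does not reproduce PL/I or Ada rules; `fixed_div` and the chosen scales are purely illustrative assumptions, modeling a machine that truncates each quotient to a fixed number of decimal places before the addition:

```python
from fractions import Fraction

def fixed_div(num, den, scale):
    """Divide exactly, then truncate the quotient to `scale` decimal
    places, as a fixed-point intermediate result would be."""
    q = Fraction(num) / Fraction(den)
    factor = 10 ** scale
    return Fraction(int(q * factor), factor)  # truncate toward zero

B = Fraction(1)  # B = 1.00, say

# Intermediate quotient kept to 2 decimal places: 1/3 -> 0.33
a_coarse = 1 + fixed_div(B, 3, 2)   # 1.33

# Intermediate quotient kept to 4 decimal places: 1/3 -> 0.3333
a_fine = 1 + fixed_div(B, 3, 4)     # 1.3333
```

The two results differ even though the source expression "1 + B/3" looks identical, which is exactly why an explicit intermediate type (as in the Ada `Inttype (B / 3.0)` form above) is the safer way to pin down the scale.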