From: dewar@merv.cs.nyu.edu (Robert Dewar)
Subject: Re: fixed point vs floating point
Date: 1997/12/02
Organization: New York University
Newsgroups: comp.lang.ada

Joe said: <>

I have NO clear idea what you did or didn't do. You made certain claims in your original post, and I was interested to see whether you could substantiate them. It seems that really you can't, which I don't find too surprising, because they did not make much sense to me. But I did not want to jump to that conclusion, which is why I was trying to find out more details.

<>

No, you did not read carefully. Ada 95 absolutely DOES require programmers to provide intermediate precision precisely. It is just that in certain cases this can be implicit in Ada 95. For example, in

   x := y(A * B) + y(C * D);

the conversions to type y here, specifying the intermediate scale and precision, are required in both Ada 83 and Ada 95, and those are the important cases. The case which has changed is relatively unimportant:

   x := y(A * B);

Ada 83 required the explicit conversion here, but since the most likely case is that y is the type of x, Ada 95 allows a shorthand IN THIS SITUATION ONLY of leaving out the type conversion.

So it is not at all the case that Ada 83 should be able to handle fixed-point arithmetic automatically. Neither Ada 83 nor Ada 95 attempts this.
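The rule above can be sketched as a small compilable Ada unit; the type Money and the values here are hypothetical, chosen only for illustration:

```ada
--  Sketch of the intermediate-precision rule; the type and values
--  are hypothetical, for illustration only.
procedure Fixed_Demo is
   type Money is delta 0.01 range 0.0 .. 1_000_000.0;
   A : Money := 19.95;
   B : Money := 2.0;
   C : Money := 5.25;
   D : Money := 4.0;
   X : Money;
begin
   --  Multi-operation case: each multiplication must be converted
   --  explicitly, fixing its intermediate scale and precision.
   --  Required in Ada 83 and Ada 95 alike.
   X := Money (A * B) + Money (C * D);

   --  Single-operation case: Ada 95 (but not Ada 83) lets the
   --  context of the assignment supply the target type, so the
   --  conversion may be omitted IN THIS SITUATION ONLY.
   X := A * B;
end Fixed_Demo;
```

In both statements the intermediate precision is fully determined; the Ada 95 shorthand merely drops the redundant spelling of it in the one-operation case.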
As I noted, PL/1 tried, but most people (not all, Robin characteristically dissents :-) would agree that the attempt was a failure. For example, the rule that requires 22 + 5/3 to overflow in PL/1, where 22 + 05/3 succeeds, is hardly a masterpiece of programming language design. But to be fair, PL/1 was really trying to solve an impossible problem, so it is not surprising that its solution has glitches.

Indeed, the reason that 22 + 5/3 overflows bears looking at, since it is a nice example of good decisions combining to have unintended consequences.

Question 1: What should be the precision of A/B?
Answer: The maximum precision possible guaranteeing no overflow.

Question 2: What happens when two numbers with different scales are added?
Answer: One is shifted left to line up the decimal points (we do not want to lose pennies!)

Now 5/3 is a case of one digit divided by one digit, which can have at most one digit before the decimal point (for the case of 9/1), so the result is

   1.6666666666666666666666666667

where by definition the number of digits is the maximum number of digits that can be handled. Now we add the 22:

    1.6666666666666666666666666667
   22.0000000000000000000000000000

Oops, the 2 fell out the left-hand side. But if the literal is 05 (with a leading zero), then it is two digits, the quotient gets one fewer fractional digit, and we get:

   01.666666666666666666666666667
   22.000000000000000000000000000

It is really hard to get these rules to behave well without anomalies of this type. To my taste no one has succeeded, and the decision in Ada 83 and Ada 95 to make the programmer specify intermediate precision and scaling seems exactly right.

<>

There goes that claim again, but we still have seen no evidence or data to back up this claim, other than "that must have been it, otherwise we would not have thought it was the case". NOT very convincing!
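For contrast, here is how the same computation looks in Ada, where the programmer names the intermediate precision explicitly; the type Frac below is hypothetical, picked only to make the point:

```ada
--  Hypothetical sketch: the programmer, not the compiler, chooses
--  the quotient's scale, so 22 + 5/3 cannot overflow behind the
--  programmer's back the way it does in PL/1.
procedure Div_Demo is
   type Frac is delta 0.000_001 range -100.0 .. 100.0;
   Five  : constant Frac := 5.0;
   Three : constant Frac := 3.0;
   R     : Frac;
begin
   --  The conversion to Frac fixes the scale of the quotient
   --  explicitly; the addition is then ordinary Frac arithmetic,
   --  and 23.666666... fits the declared range by construction.
   R := 22.0 + Frac (Five / Three);
end Div_Demo;
```

Whether the literal is written 5 or 05 makes no difference here, because the intermediate precision comes from the named type, not from counting digits in the source text.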
Incidentally, if you want to read more about fixed-point issues in Ada, a good starting point is the special issues report on Ada fixed-point that I did for the Ada 9X project.