From: dewar@merv.cs.nyu.edu (Robert Dewar)
Subject: Re: fixed point vs floating point
Date: 1997/12/03
Newsgroups: comp.lang.ada
Organization: New York University
References: <3485A850.3A92@gsg.eds.com>

Seymour says

<>

The fact that such task groups are periodically put together is as clear an indication as one could get that there are problems. That they do not succeed in improving things, especially given the burden that compatibility imposes in this case, is hardly surprising.

What is quite significant here is that despite a strong desire in the COBOL community to solve the same problem (and to better define the semantics of COMPUTE, now left implementation-dependent), the COBOL community has NOT decided to follow the PL/1 direction here, and I think that is definitely wise, since I find the COBOL semantics preferable to those in PL/1. Basically, when you avoid COMPUTE, as most COBOL programmers do, you must give intermediate scales and precisions explicitly, which, as I have noted before, clearly seems the right approach to me.

Ada reached a nice compromise, where the presence of type abstractions allows the basic expression notation to be used, while retaining the requirement for providing intermediate precisions and scales where they cannot be reliably deduced by the compiler (rather than adopting the PL/1 approach of resolving such situations with arbitrary rules that can have surprising consequences).
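To make that concrete, here is a minimal sketch of the Ada approach (the procedure, type names, and deltas -- Fixed_Demo, Money, Rate, Price, Tax_Rate -- are invented for illustration, not taken from any real program). Multiplying two fixed-point values yields universal_fixed, and the explicit conversion to the target type is where the programmer states the scale of the intermediate result, rather than the compiler inventing one by an implicit rule:

   procedure Fixed_Demo is

      --  Ordinary fixed-point types; the deltas and ranges are
      --  illustrative assumptions only.
      type Money is delta 0.01   range 0.0 .. 1_000_000.0;
      type Rate  is delta 0.0001 range 0.0 .. 1.0;

      Price    : constant Money := 19.99;
      Tax_Rate : constant Rate  := 0.0825;
      Total    : Money;

   begin
      --  Price * Tax_Rate is of type universal_fixed.  The conversion
      --  Money (...) names the precision and scale of the intermediate
      --  result explicitly before it is added to Price, in line with
      --  the COBOL-style requirement to state intermediates rather
      --  than rely on PL/1-style implicit rules.
      Total := Price + Money (Price * Tax_Rate);
   end Fixed_Demo;

The point of the sketch is only the conversion on the multiplication: the ordinary expression notation is still available, but the scale of the intermediate product is something the programmer writes down, not something deduced by an arbitrary rule.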