From: Richard Riehle <richard@adaworks.com>
Newsgroups: comp.lang.ada
Subject: Re: 64-bit integers in Ada
Date: Sat, 03 Aug 2002 11:59:17 -0700
Organization: AdaWorks Software Engineering
Message-ID: <3D4C2805.62563584@adaworks.com>

Robert Dewar wrote:

> As for implementations for 8-bit micros, I would still make
> Integer 32 bits on such a target.

I assume you are talking about Standard.Integer. If so, this would not
correspond to the way software is typically written for these machines.
In particular, on Intel 8051 (I-8051) platforms, it would introduce a
potential inefficiency and force the programmer to explicitly declare a
shorter integer (absent the availability of Standard.Short_Integer).
Since we often counsel designers to specify their own numeric types
anyway, this is probably not a hardship, but it could be troublesome
for an experienced I-8051 programmer who expects 16-bit integers.

Consider, for example, that simply pushing a 16-bit entity on the stack
requires storing two eight-bit stack entries. Storing a 32-bit integer
would take four stack entries. The corresponding inefficiency would be
intolerable for most I-8051 applications.

One reason I like Ada is that we can define our own numeric types.
Though few machines are still extant that use storage multiples of
anything other than eight bits, they do still exist. I think the
compiler for the Unisys 11xx series has a word size of 36 bits. Randy
can correct me on that if I am wrong.
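To make that concrete, here is a minimal sketch of the kind of
declarations I have in mind for an 8051-class target. The package and
type names are my own illustration, not taken from any particular
compiler or project; the Size clauses ask the compiler for specific
representations and are rejected at compile time if the target cannot
honor them.

   --  Illustrative declarations only; names and bounds are hypothetical.
   package Target_Types is

      --  A signed type sized to the 8051's natural 16-bit arithmetic,
      --  independent of whatever the compiler picks for Standard.Integer.
      type Sensor_Count is range -32_768 .. 32_767;
      for Sensor_Count'Size use 16;  --  compile-time error if unsupported

      --  A modular (unsigned) type that fits a single 8-bit register.
      type Port_Byte is mod 2 ** 8;
      for Port_Byte'Size use 8;

   end Target_Types;

None of this depends on what the implementation chooses for
Standard.Integer, which is exactly the point: the designer, not the
language standard, decides what the hardware can afford.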
> Anyway, I see no reason for the standard to essentially encourage
> inappropriate choices for integer types by adding a requirement that
> has no practical significance whatever.

I completely agree with you on this point. The designer should make the
decision based on the architecture of the targeted platform and the
application requirements. The language should be, as Ada is, flexible
enough to give the designer this level of support.

Don Reifer recently told me that one reason Ada was becoming
irrelevant, and his reason for recommending against its use for new
projects, was that it is not sufficiently flexible to support the new
kinds of architectures in the pipeline. Though I disagree with his
assessment, forcing the language to correspond to a single word-size
architecture (read: 32 bits) would play into his flawed view of Ada's
value for new software.

Richard Riehle