From mboxrd@z Thu Jan 1 00:00:00 1970
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on polar.synack.me
X-Spam-Level:
X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00 autolearn=ham autolearn_force=no version=3.4.4
X-Google-Language: ENGLISH,ASCII-7-bit
X-Google-Thread: 103376,4751d44ff54a2c2c
X-Google-Attributes: gid103376,public
X-Google-ArrivalTime: 2002-08-13 14:09:06 PST
Path: archiver1.google.com!news1.google.com!sn-xit-02!sn-xit-01!sn-post-01!supernews.com!corp.supernews.com!not-for-mail
From: "Randy Brukardt"
Newsgroups: comp.lang.ada
Subject: Re: 64-bit integers in Ada
Date: Tue, 13 Aug 2002 16:09:24 -0500
Organization: Posted via Supernews, http://www.supernews.com
Message-ID:
References: <3CE3978F.6070704@gmx.spam.egg.sausage.and.spam.net> <3D46DC69.7C291297@adaworks.com> <5ee5b646.0207301613.5b59616c@posting.google.com> <5ee5b646.0208030424.39703482@posting.google.com>
X-Newsreader: Microsoft Outlook Express 4.72.3612.1700
X-MimeOLE: Produced By Microsoft MimeOLE V4.72.3719.2500
X-Complaints-To: newsabuse@supernews.com
Xref: archiver1.google.com comp.lang.ada:27986
Date: 2002-08-13T16:09:24-05:00
List-Id:

Robert Dewar wrote in message <5ee5b646.0208030424.39703482@posting.google.com>...
>"Marin David Condic"
>
>> so having the requirement is not a
>> bad thing.
>
>Here is why I think it *is* a bad requirement.
>
>No implementation would possibly make Integer less than
>16 bits. None ever has, and none ever would. Remember that
>the only really critical role of integer is as an index
>in the standard type String (it was a mistake to have string
>tied in this way, but it's too late to fix this).
>
>No one would make integer 8 bits and have strings limited
>to 127 characters.
>
>If you are worried about implementors making deliberately
>useless compilers, that's a silly worry, since there are
>lots of ways of doing that (e.g. declaring that any
>expression with more than one operator exceeds the capacity
>of the compiler).

Right.
The reason for the watered-down requirement was that certain implementors
wouldn't stand for a 32-bit requirement, which would break all of their
customers' code.

>So why is the requirement harmful? Because it implies that
>it is reasonable to limit integers to 16 bits, but in fact
>any implementation on a modern architecture that chose 16 bits
>for integer would be badly broken.

Here's where we disagree. Janus/Ada in fact still uses 16 bits for
Integer. This is simply because changing it breaks too much code (ours
and our customers'); the compiler fully supports 32-bit integers and
32-bit strings. (I've tried it.) It would be trivial to change if
needed. (I've thought about putting in a compiler switch, but I haven't
had the time or customer demand, and it is messy because of the way that
symbol table files are interrelated.)

In any case, code that declares its own types does not have a
portability problem. Indeed, I have occasionally declared my own string
type to avoid this dependence on Integer:

    type String_Count is range 0 .. 2**32-1;
    subtype String_Index is String_Count range 1 .. String_Count'Last;
    type My_String is array (String_Index range <>) of Character;

Ada's rules mean that this type can be used exactly like String. The
only time something extra need be done is when an item of type String
is actually required (such as for a file name). Then a simple type
conversion will do the trick (presuming that the string is short
enough, which had better be true for file names):

    Ada.Text_IO.Open (File, Ada.Text_IO.In_File, String (My_Name));

>As for implementations for 8-bit micros, I would still make
>Integer 32 bits on such a target. A choice of 16-bit integer
>(as, for example, Alsys did early on) would cause giant problems
>in porting code. On the other hand, a choice of 32-bit integers
>would make strings take a bit more room. I would still choose
>32 bits.
It takes a lot more room than a bit, as the runtime has to carry
support for doing math on 32-bit items (multiply, divide, and so on
are too large to put inline at the point of use). You're likely to pay
for this support always, because it is very likely that a String or
Integer will occur in the code (and use these operations). On (very)
memory-limited targets, that can be very bad. String descriptors,
for-loop data, and the like are also twice as large -- which can
matter. The portability problem only occurs with code that is
dubiously designed in the first place (using predefined numeric
types), and such code probably is too large for memory-limited targets
anyway. So I strongly disagree with this.

>Anyway, I see no reason for the standard to essentially encourage
>inappropriate choices for integer types by adding a requirement that
>has no practical significance whatever.
>
>This is an old, old discussion. As is so often the case on
>CLA, newcomers like to repeat old discussions :-)
>
>This particular one can probably be tracked down from the
>design discussions for Ada 9X.

Sure, I remember this well. Randy wouldn't stand for a stronger
requirement (there was one at one point); he managed to get enough
support for his position, so we have the one we have now. Note that
Ada 83 had no such requirement; the ACVC tests assumed 12-bit integers
at most. It is useful for the ACVC/ACATS for there to be such a
minimum requirement, but I agree it probably isn't useful for anyone
else.

Randy.
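
P.S. For anyone who wants to try the own-string-type idea, here is a
complete, compilable sketch of it; the procedure name and file name
are made up for illustration, and it assumes the named file exists:

```ada
--  Self-contained version of the My_String idea; Try_My_String and
--  "test.txt" are hypothetical names, not from the original post.
with Ada.Text_IO;
procedure Try_My_String is
   --  A string type whose index does not depend on predefined Integer.
   type String_Count is range 0 .. 2**32 - 1;
   subtype String_Index is String_Count range 1 .. String_Count'Last;
   type My_String is array (String_Index range <>) of Character;

   My_Name : constant My_String := "test.txt";
   File    : Ada.Text_IO.File_Type;
begin
   --  Where a predefined String is required, a plain array conversion
   --  suffices; note that Text_IO.Open takes the Mode before the Name.
   Ada.Text_IO.Open (File, Ada.Text_IO.In_File, String (My_Name));
   Ada.Text_IO.Close (File);
end Try_My_String;
```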