From: Stephen Leake
Subject: Re: Ada 83 - avoiding unchecked conversions.
Date: 1996/12/13
Message-ID: <32B17F46.123B@gsfc.nasa.gov>
References: <32AED68A.48BE@aisf.com> <32B062D3.7E9A@ccgate.hac.com>
Organization: NASA Goddard Space Flight Center -- Greenbelt, Maryland USA
Reply-To: Stephen.Leake@gsfc.nasa.gov
Newsgroups: comp.lang.ada

Chris Brand wrote:
> 
> Matthew Heaney wrote:
> > 
> [cut]
> > But knowing the representation of the data is only important at the
> > hardware level, i.e., "the interface boundary."  Internal to the
> > software there's a lot going on that doesn't depend on the
> > representation of the data, so we want to practice some information
> > hiding by letting the compiler choose the representation.
> > 
> [cut]
> > 
> > This confusion is also the reason why shops often say "Thou shalt
> > not use predefined types, because it's not portable."  This too is
> > another silly rule.
> > 
> > It would be perfectly valid to have such a rule at the interface
> > layer, because there I have to know the representation of my types.
> > But internal to the software system, I don't care, and using
> > predefined types is just another form of information hiding.
> > 
> [examples cut]
> 
> I worked on a project where the rule was not to use predefined types.
> We had a package defining the "base types" to use, with all their
> representation clauses, and we had to base our types on them.
> The main type was an "Integer_16".
> 
> This was great while we were using a 286 target.  It was only when we
> ported it to a 486 target (where 16-bit values need more work than
> 32-bit ones) that we noticed the disadvantage of such a rule.  Left
> to itself, the compiler would have chosen a 16-bit value for most
> things on the 286 and a 32-bit value on the 486.
> 
> Of course, if you take these things away, people have to think about
> "what is a sensible upper bound for this type?"  :-)

Perhaps there should have been a type "Fast_Integer", with the
understanding that it was at least 16 bits (this is of course just
Standard.Integer).  Then programmers could use Fast_Integer when they
don't really care about the size; a sketch is at the end of this post.

> 
> --
> Chris
> Stating my own opinions, not those of my company.

-- 
- Stephe
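
P.S. Here is a rough Ada 83 sketch of what I mean.  The package name
Base_Types and the exact bounds are just illustrative; they are not
from Chris's actual project:

package Base_Types is

   --  Interface-boundary type: the representation is pinned down, so
   --  it is safe to map onto hardware registers and message layouts.
   type Integer_16 is range -32_768 .. 32_767;
   for Integer_16'Size use 16;

   --  Internal "don't care" type: understood to be at least 16 bits,
   --  but the compiler is free to pick the fastest representation
   --  for the target (16 bits on a 286, 32 bits on a 486).  On most
   --  compilers this is simply Standard.Integer.
   subtype Fast_Integer is Integer;

end Base_Types;

Internal code would declare its objects (and derived types) from
Fast_Integer and never mention sizes; only the interface packages
that talk to the hardware would use Integer_16 and its
representation clause.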