comp.lang.ada
 help / color / mirror / Atom feed
* 64-bit integers in Ada
@ 2002-05-16 11:27 David Rasmussen
  2002-05-17  2:28 ` Robert Dewar
                   ` (2 more replies)
  0 siblings, 3 replies; 41+ messages in thread
From: David Rasmussen @ 2002-05-16 11:27 UTC (permalink / raw)


I understand that I can easily use an integer in Ada that has exactly 64 
bits. But are there any guarantees that such a type would be mapped to 
native integers on 64-bit machines or to a reasonable double 32-bit 
implementation on 32-bit machines? At least, are compilers ok at this in 
real life?

/David




^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-05-16 11:27 64-bit integers in Ada David Rasmussen
@ 2002-05-17  2:28 ` Robert Dewar
  2002-05-17 13:56 ` Mark Johnson
  2002-07-29 15:33 ` Victor Giddings
  2 siblings, 0 replies; 41+ messages in thread
From: Robert Dewar @ 2002-05-17  2:28 UTC (permalink / raw)


David Rasmussen <david.rasmussen@gmx.spam.egg.sausage.and.spam.net> wrote in message news:<3CE3978F.6070704@gmx.spam.egg.sausage.and.spam.net>...
> I understand that I can easily use an integer in Ada that has exactly 64 
> bits. But are there any guarantees that such a type would be mapped to 
> native integers on 64-bit machines or to a reasonable double 32-bit 
> implementation on 32-bit machines? At least, are compilers ok at this in 
> real life?
> 
> /David

You have to check the particular compiler. GNAT supports 64-bit arithmetic
on all targets, generating whatever efficient code is appropriate to do so.



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-05-16 11:27 64-bit integers in Ada David Rasmussen
  2002-05-17  2:28 ` Robert Dewar
@ 2002-05-17 13:56 ` Mark Johnson
  2002-07-29 15:33 ` Victor Giddings
  2 siblings, 0 replies; 41+ messages in thread
From: Mark Johnson @ 2002-05-17 13:56 UTC (permalink / raw)


David Rasmussen wrote:
> 
> I understand that I can easily use an integer in Ada that has exactly 64
> bits. But are there any guarantees that such a type would be mapped to
> native integers on 64-bit machines or to a reasonable double 32-bit
> implementation on 32-bit machines?
No. However...
> At least, are compilers ok at this in
> real life?
> 
Yes. Robert mentions GNAT, but you will have to review the technical
information for your compiler [or test it] to be sure. Generally, if the
compiler doesn't reject the construct, it will implement it correctly.
  --Mark



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-05-16 11:27 64-bit integers in Ada David Rasmussen
  2002-05-17  2:28 ` Robert Dewar
  2002-05-17 13:56 ` Mark Johnson
@ 2002-07-29 15:33 ` Victor Giddings
  2002-07-29 20:15   ` Robert A Duff
  2002-07-30  4:29   ` Robert Dewar
  2 siblings, 2 replies; 41+ messages in thread
From: Victor Giddings @ 2002-07-29 15:33 UTC (permalink / raw)


David Rasmussen <david.rasmussen@gmx.spam.egg.sausage.and.spam.net> wrote 
in news:3CE3978F.6070704@gmx.spam.egg.sausage.and.spam.net:

> I understand that I can easily use an integer in Ada that has exactly 64 
> bits. But are there any guarantees that such a type would be mapped to 
> native integers on 64-bit machines or to a reasonable double 32-bit 
> implementation on 32-bit machines? At least, are compilers ok at this in 
> real life?
> 
> /David
> 

Try using (or deriving from) Interfaces.Integer_64 or Interfaces.Unsigned_64. 
Admittedly, this requires two steps on the part of the compiler developer: 1) 
actually supporting the 64-bit integer type, and 2) putting it in Interfaces (as 
required by B.2(7)). However, we rely on this in our CORBA product 
implementation and have been making sure that the compiler vendors are 
adding these types when they are supported. 
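A minimal sketch of this approach (the package and type names here are illustrative, not from the post):

```ada
with Interfaces;

package IDL_Types is
   --  Deriving from the Interfaces types ties the representation to the
   --  implementation's native 64-bit integers, which B.2(7) requires to
   --  be declared in Interfaces when the target supports them.
   type Long_Long          is new Interfaces.Integer_64;
   type Unsigned_Long_Long is new Interfaces.Unsigned_64;
end IDL_Types;
```

If the compiler does not support 64-bit integers, the instantiating unit simply fails to compile, which makes the limitation visible at build time.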

As of now, I know of only one compiler that supports 64-bit integers and 
doesn't define Interfaces.Integer_64. That is to be remedied very soon.

-- 
Victor Giddings		mailto:victor.giddings@ois.com
Senior Product Engineer	+1 703 295 6500
Objective Interface Systems	Fax: +1 703 295 6501



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-07-29 15:33 ` Victor Giddings
@ 2002-07-29 20:15   ` Robert A Duff
  2002-07-30 18:35     ` Richard Riehle
  2002-07-30  4:29   ` Robert Dewar
  1 sibling, 1 reply; 41+ messages in thread
From: Robert A Duff @ 2002-07-29 20:15 UTC (permalink / raw)


Victor Giddings <victor.giddings@ois.com> writes:

> David Rasmussen <david.rasmussen@gmx.spam.egg.sausage.and.spam.net> wrote 
> in news:3CE3978F.6070704@gmx.spam.egg.sausage.and.spam.net:
> 
> > I understand that I can easily use an integer in Ada that has exactly 64 
> > bits. But are there any guarantees that such a type would be mapped to 
> > native integers on 64-bit machines or to a reasonable double 32-bit 
> > implementation on 32-bit machines?

In the language standard, there is no "guarantee" of anything related to
efficiency.  I think that's true of pretty much all languages, and I
don't see how one could do better.

>... At least, are compilers ok at this in 
> > real life?

Yes.  Compiler writers don't deliberately go out of their way to harm
efficiency.

Note that the RM does not require support for 64-bit integers
(unfortunately, IMHO), and there are compilers that do not support
64-bit integers.  But if the compiler *does* support 64-bit integers,
I see no reason to suspect that it wouldn't do so in the obviously
efficient way (native 64-bit ints on a 64-bit machine, or a pair of
32-bit ints on a 32-bit machine).

> > /David
> 
> Try using (or deriving from) Interfaces.Integer_64 or Interfaces.Unsigned_64. 
> Admittedly, this requires 2 steps on the part of the compiler developer. 1) 
> actually support the 64-bit integer type. 2) to put it in Interfaces (as 
> required by B.2(7)). However, we rely on this in our CORBA product 
> implementation and have been making sure that the compiler vendors are 
> adding these types when they are supported. 

I don't see the point of this advice.  If you say "type T is range
-2**63..2**63-1;", I don't see any reason why T would be more or less
efficient than Interfaces.Integer_64.  In fact, I would think they would
have identical representation, and arithmetic ops would use identical
machine code.
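The declaration being compared can be written out as follows (the size clause is an optional addition, shown only to make the expected representation explicit):

```ada
--  A signed 64-bit type declared directly, without going through
--  Interfaces; on a compiler that supports 64-bit integers this
--  should get the same representation as Interfaces.Integer_64.
type T is range -2 ** 63 .. 2 ** 63 - 1;
for T'Size use 64;  --  rejected at compile time if 64 bits aren't supported
```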

> As of now, I know of only one compiler that supports 64-bit integers and 
> doesn't define Interfaces.Integer_64. That is to be remedied very soon.

- Bob



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-07-29 15:33 ` Victor Giddings
  2002-07-29 20:15   ` Robert A Duff
@ 2002-07-30  4:29   ` Robert Dewar
  1 sibling, 0 replies; 41+ messages in thread
From: Robert Dewar @ 2002-07-30  4:29 UTC (permalink / raw)


Victor Giddings <victor.giddings@ois.com> wrote in message news:<Xns925A758E8FB6Avictorgiddingsoiscom@192.84.85.25>...
> 
> Try using (or deriving from) Interfaces.Integer_64 or Interfaces.Unsigned_64. 
> Admittedly, this requires 2 steps on the part of the compiler developer. 1) 
> actually support the 64-bit integer type. 2) to put it in Interfaces (as 
> required by B.2(7)). However, we rely on this in our CORBA product 
> implementation and have been making sure that the compiler vendors are 
> adding these types when they are supported. 
> 
> As of now, I know of only one compiler that supports 64-bit integers and 
> doesn't define Interfaces.Integer_64. That is to be remedied very soon.

I don't understand the point of this advice. What does this gain over just
declaring the type you want. Either construct will be rejected if the compiler
does not support 64 bit integers. 

Actually there is no requirement in the RM that a compiler that supports 64-bit
integers must have this declaration there. On a 36-bit machine like the PDP-10
you would expect to find Interfaces.Integer_72, but not Interfaces.Integer_64.

I would also argue that it is dubious to expect Interfaces.Integer_64 on a
32 bit machine. Are 64 bit integers "supported by the target architecture?"
Well it's arguable.

Far simpler to use

   type I64 is mod 2 ** 64;

if that's what you want!
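One point worth keeping in mind with this declaration (the usage example is an addition, not from the post): a modular type wraps around silently, whereas a signed type raises Constraint_Error on overflow.

```ada
type I64 is mod 2 ** 64;

X : I64 := I64'Last;
--  ...
X := X + 1;  --  wraps around to 0; no exception is raised
```

If overflow should be detected rather than wrapped, the signed form `type I64 is range -2**63 .. 2**63 - 1;` is the better fit.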



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-07-29 20:15   ` Robert A Duff
@ 2002-07-30 18:35     ` Richard Riehle
  2002-07-30 20:20       ` Robert A Duff
  2002-07-31  0:13       ` Robert Dewar
  0 siblings, 2 replies; 41+ messages in thread
From: Richard Riehle @ 2002-07-30 18:35 UTC (permalink / raw)


Robert A Duff wrote:

> Note that the RM does not require support for 64-bit integers
> (unfortunately, IMHO), and there are compilers that do not support
> 64-bit integers.

Robert,

We still have quite a few embedded platforms for which 64 bit
integers are not supported.   We would like to be able to use
Ada 95 for them, so a requirement for a language feature that
is not supported would be meaningless.   Also, there has been
some discussion, in the past, about support for eight-bit
microcontrollers such as the I-8051 family.   I am sure some
compiler developer would find it very entertaining to design
an Ada compiler with 8051 64 bit integers, but also  quite
useless.

Richard Riehle




^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-07-30 18:35     ` Richard Riehle
@ 2002-07-30 20:20       ` Robert A Duff
  2002-07-31  0:13       ` Robert Dewar
  1 sibling, 0 replies; 41+ messages in thread
From: Robert A Duff @ 2002-07-30 20:20 UTC (permalink / raw)


Richard Riehle <richard@adaworks.com> writes:

> Robert A Duff wrote:
> 
> > Note that the RM does not require support for 64-bit integers
> > (unfortunately, IMHO), and there are compilers that do not support
> > 64-bit integers.
> 
> Robert,
> 
> We still have quite a few embedded platforms for which 64 bit
> integers are not supported.   We would like to be able to use
> Ada 95 for them, so a requirement for a language feature that
> is not supported would be meaningless.

All processors can easily support 64-bit arithmetic, or 640-bit
arithmetic.  It's not meaningless -- it just means that the
implementation has to provide software support.

I think I know how to design such a feature in accordance with the
"Bauer Principle", which Robert Dewar recently told us the name of.  So
if you have an 8-bit processor, maybe you don't want 64-bit integers, or
maybe you don't want 32- or even 16-bit integers, but I still think the
compiler should be required to provide them.  And more.

>...   Also, there has been
> some discussion, in the past, about support for eight-bit
> microcontrollers such as the I-8051 family.   I am sure some
> compiler developer would find it very entertaining to design
> an Ada compiler with 8051 64 bit integers, but also  quite
> useless.

Why?  What is the largest integer that a programmer might want, given
that the programmer has chosen an 8-bit processor?

Where do you draw the line?

To me 8-bit processor implies limited address space, but I don't see why
that *necessarily* implies small integers.

> Richard Riehle

- Bob



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-07-30 18:35     ` Richard Riehle
  2002-07-30 20:20       ` Robert A Duff
@ 2002-07-31  0:13       ` Robert Dewar
  2002-07-31  4:17         ` Keith Thompson
  1 sibling, 1 reply; 41+ messages in thread
From: Robert Dewar @ 2002-07-31  0:13 UTC (permalink / raw)


Richard Riehle <richard@adaworks.com> wrote in message news:<3D46DC69.7C291297@adaworks.com>...
> Robert,
> 
> We still have quite a few embedded platforms for which 64 bit
> integers are not supported. 

There is no reason for hardware support here, even the ia32
does not have hardware support, but 64-bit integers are
very useful and must be supported, just as floating-point
MUST be supported even on processors with no floating-point.

>  We would like to be able to use
> Ada 95 for them, so a requirement for a language feature that
> is not supported would be meaningless.

gcc supports 64-bit integers on virtually all processors
including 8-bit microprocessors.

> Also, there has been
> some discussion, in the past, about support for eight-bit
> microcontrollers such as the I-8051 family.   I am sure 
> some compiler developer would find it very entertaining 
> to design an Ada compiler with 8051 64 bit integers,

Not so much entertaining, but rather quite straightforward.

> but also  quite useless.

Not at all! If your application requires 64-bit integers,
e.g. long durations measured in nanoseconds, then you have
to have this facility, and it is far better that it be
provided by the compiler than having to cook up some
half-baked software multiple-precision support, which
is likely to be FAR less efficient, and certainly far less
convenient.

Once again, in a language which requires all implementations to
provide floating-point, it seems a trivial additional effort to
provide 64-bit integer support.

Note that if anyone bothers to port GNAT to an 8-bit microprocessor
currently supported by GCC, then the 64-bit integer support will come
for free.

> Richard Riehle



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-07-31  0:13       ` Robert Dewar
@ 2002-07-31  4:17         ` Keith Thompson
  2002-07-31  8:41           ` Larry Kilgallen
  2002-07-31 13:20           ` Robert A Duff
  0 siblings, 2 replies; 41+ messages in thread
From: Keith Thompson @ 2002-07-31  4:17 UTC (permalink / raw)


dewar@gnat.com (Robert Dewar) writes:
> Richard Riehle <richard@adaworks.com> wrote in message
> news:<3D46DC69.7C291297@adaworks.com>...
> > Robert,
> > 
> > We still have quite a few embedded platforms for which 64 bit
> > integers are not supported. 
> 
> There is no reason for hardware support here, even the ia32
> does not have hardware support, but 64-bit integers are
> very useful and must be supported, just as floating-point
> MUST be supported even on processors with no floating-point.

For certain values of "must".  I'm fairly sure that the Ada standard
does not require support for 64-bit integers, and I've worked with Ada
implementations that didn't support anything bigger than 32 bits
(System.Max_Int = 2**31-1).

If you want to argue that such an implementation is broken (even
though it's conforming), I won't disagree.

-- 
Keith Thompson (The_Other_Keith) kst@cts.com  <http://www.ghoti.net/~kst>
San Diego Supercomputer Center           <*>  <http://www.sdsc.edu/~kst>
Schroedinger does Shakespeare: "To be *and* not to be"



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-07-31  4:17         ` Keith Thompson
@ 2002-07-31  8:41           ` Larry Kilgallen
  2002-07-31 13:20           ` Robert A Duff
  1 sibling, 0 replies; 41+ messages in thread
From: Larry Kilgallen @ 2002-07-31  8:41 UTC (permalink / raw)


In article <yeceldkpm34.fsf@king.cts.com>, Keith Thompson <kst@cts.com> writes:
> dewar@gnat.com (Robert Dewar) writes:
>> Richard Riehle <richard@adaworks.com> wrote in message
>> news:<3D46DC69.7C291297@adaworks.com>...
>> > Robert,
>> > 
>> > We still have quite a few embedded platforms for which 64 bit
>> > integers are not supported. 
>> 
>> There is no reason for hardware support here, even the ia32
>> does not have hardware support, but 64-bit integers are
>> very useful and must be supported, just as floating-point
>> MUST be supported even on processors with no floating-point.
> 
> For certain values of "must".  I'm fairly sure that the Ada standard
> does not require support for 64-bit integers, and I've worked with Ada
> implementations that didn't support anything bigger than 32 bits
> (System.Max_Int = 2**31-1).
> 
> If you want to argue that such an implementation is broken (even
> though it's conforming), I won't disagree.

Do you mean Ada has not defined its own counterintuitive meaning for
the term "broken"?    :-)



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-07-31  4:17         ` Keith Thompson
  2002-07-31  8:41           ` Larry Kilgallen
@ 2002-07-31 13:20           ` Robert A Duff
  2002-07-31 13:42             ` Marin David Condic
  2002-07-31 21:50             ` Keith Thompson
  1 sibling, 2 replies; 41+ messages in thread
From: Robert A Duff @ 2002-07-31 13:20 UTC (permalink / raw)


Keith Thompson <kst@cts.com> writes:

> dewar@gnat.com (Robert Dewar) writes:
> > Richard Riehle <richard@adaworks.com> wrote in message
> > news:<3D46DC69.7C291297@adaworks.com>...
> > > Robert,
> > > 
> > > We still have quite a few embedded platforms for which 64 bit
> > > integers are not supported. 
> > 
> > There is no reason for hardware support here, even the ia32
> > does not have hardware support, but 64-bit integers are
> > very useful and must be supported, just as floating-point
> > MUST be supported even on processors with no floating-point.
> 
> For certain values of "must".  I'm fairly sure that the Ada standard
> does not require support for 64-bit integers, ...

Yes, and I'm pretty sure Robert is well aware of that.

Actually, Ada requires 16 bit integers (at minimum).
Robert has argued in the past that this is silly -- too small to be of
use, and better to let the market decide.  Probably true.

>... and I've worked with Ada
> implementations that didn't support anything bigger than 32 bits
> (System.Max_Int = 2**31-1).

I don't know of any Ada implementation that only supports 16 bits,
and only one that doesn't support at least 32 (it supports 24 bits).

> If you want to argue that such an implementation is broken (even
> though it's conforming), I won't disagree.

But why 64?  Why shouldn't we say 128?  Or 1000?

After all, Lisp implementations have been supporting more than that
for decades.

- Bob



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-07-31 13:20           ` Robert A Duff
@ 2002-07-31 13:42             ` Marin David Condic
  2002-08-01  7:54               ` Lutz Donnerhacke
                                 ` (3 more replies)
  2002-07-31 21:50             ` Keith Thompson
  1 sibling, 4 replies; 41+ messages in thread
From: Marin David Condic @ 2002-07-31 13:42 UTC (permalink / raw)


One advantage to insisting that the type Integer support at least 16 bits
is that it gives the developer some minimal parameters on which to write
software that might rely on the standard type Integer. If you won't
guarantee that the type Integer has some minimal usefulness, then why bother
to have it at all? Most hardware will support 16 bits, and even if it doesn't,
you might find it difficult to write any useful programs if you can't count
up to at least 32767, so a software simulation is probably necessary.

Is there a case where, for example, it would make any sense at all for an
implementation to *not* give the user 16 bits? Would it ever make sense for
the type Integer to be 8 bits, for example? (Assuming that if I actually
want an 8-bit integer type I can still declare one of my own, that is...)
Would anyone ever really want to build an Ada implementation that had a
maximum integer of something less than 16 bits? If not, then the ARM
specifying support for at least 16 bits is a good thing in terms of giving
the developer a warm fuzzy feeling that he can depend on at least that much.

For what it's worth, IIRC, the XD-Ada compiler for the Mil-Std-1750a had 16
bits for the standard type Integer. I don't remember if it allowed
declarations of integers larger than this, but my vague memory was that it
did not. Given the machine architecture and the intended usage, it would not
have been an unreasonable restriction.

MDC
--
Marin David Condic
Senior Software Engineer
Pace Micro Technology Americas    www.pacemicro.com
Enabling the digital revolution
e-Mail:    marin.condic@pacemicro.com


"Robert A Duff" <bobduff@shell01.TheWorld.com> wrote in message
news:wcc65ywhw3s.fsf@shell01.TheWorld.com...
>
> Actually, Ada requires 16 bit integers (at minimum).
> Robert has argued in the past that this is silly -- too small to be of
> use, and better to let the market decide.  Probably true.
>






^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-07-31 13:20           ` Robert A Duff
  2002-07-31 13:42             ` Marin David Condic
@ 2002-07-31 21:50             ` Keith Thompson
  2002-07-31 21:59               ` Robert A Duff
  1 sibling, 1 reply; 41+ messages in thread
From: Keith Thompson @ 2002-07-31 21:50 UTC (permalink / raw)


Robert A Duff <bobduff@shell01.TheWorld.com> writes:
[...]
> Actually, Ada requires 16 bit integers (at minimum).

You're right, I had forgotten that.

It also implicitly requires support for at least 24-bit fixed-point
(see the requirements for type Duration, 9.6(27)).

-- 
Keith Thompson (The_Other_Keith) kst@cts.com  <http://www.ghoti.net/~kst>
San Diego Supercomputer Center           <*>  <http://www.sdsc.edu/~kst>
Schroedinger does Shakespeare: "To be *and* not to be"



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-07-31 21:50             ` Keith Thompson
@ 2002-07-31 21:59               ` Robert A Duff
  0 siblings, 0 replies; 41+ messages in thread
From: Robert A Duff @ 2002-07-31 21:59 UTC (permalink / raw)


Keith Thompson <kst@cts.com> writes:

> Robert A Duff <bobduff@shell01.TheWorld.com> writes:
> [...]
> > Actually, Ada requires 16 bit integers (at minimum).
> 
> You're right, I had forgotten that.
> 
> It also implicitly requires support for at least 24-bit fixed-point
> (see the requirements for type Duration, 9.6(27)).

...which is kind of silly.  Why would an implementation want to support
less for integers than for fixed?

- Bob



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-07-31 13:42             ` Marin David Condic
@ 2002-08-01  7:54               ` Lutz Donnerhacke
  2002-08-01 13:07                 ` Marin David Condic
  2002-08-01 11:57               ` Larry Kilgallen
                                 ` (2 subsequent siblings)
  3 siblings, 1 reply; 41+ messages in thread
From: Lutz Donnerhacke @ 2002-08-01  7:54 UTC (permalink / raw)


* Marin David Condic wrote:
>Is there a case where, for example, it would make any sense at all for an
>implementation to *not* give the user 16 bits? Would it ever make sense for
>the type Integer to be 8 bits, for example? (Assuming that if I actually
>want an 8-bit integer type I can still declare one of my own, that is...)

Integer is expected to be the size of the machine word. Now choose a 6502
or similar µP. If you need a certain range of countable numbers, define your
own type. But do not insist on types the machine cannot handle efficiently.
Ada requires native support for all native machine types anyway.



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-07-31 13:42             ` Marin David Condic
  2002-08-01  7:54               ` Lutz Donnerhacke
@ 2002-08-01 11:57               ` Larry Kilgallen
  2002-08-01 17:53               ` Ben Brosgol
  2002-08-01 20:32               ` Keith Thompson
  3 siblings, 0 replies; 41+ messages in thread
From: Larry Kilgallen @ 2002-08-01 11:57 UTC (permalink / raw)


In article <ai8pf9$fip$1@nh.pace.co.uk>, "Marin David Condic" <dont.bother.mcondic.auntie.spam@[acm.org> writes:
> One advantage to insisting that the type Integer supports at least 16 bits
> is that it gives the developer some minimal parameters on which to write
> software that might rely on the standard type Integer. If you won't
> guarantee that the type Integer has some minimal usefulness, then why bother
> to have it at all? Most hardware will support 16 bits and even if it doesn't
> you might find it difficult to write any useful programs if you can't count
> up to at least 32767, so a software simulation is probably necessary.

Certainly it is unlikely you would find it useful to be limited to
a maximum value of 127.



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-08-01  7:54               ` Lutz Donnerhacke
@ 2002-08-01 13:07                 ` Marin David Condic
  2002-08-02  7:31                   ` Lutz Donnerhacke
                                     ` (2 more replies)
  0 siblings, 3 replies; 41+ messages in thread
From: Marin David Condic @ 2002-08-01 13:07 UTC (permalink / raw)



It's been a while since I've seen a C compiler for a 6502, but my
recollection is that the last one I did look at had 16 bits for the type
int. Please correct me if I'm wrong here - have you seen C compilers for
this target using 8 bits for the type int? My thinking here is that even for
guys programming small computers, an int or Integer being required to
support at least 16 bits is a useful constraint, even if it has to be
simulated in software. Guys programming 6502's still frequently need to
count things well above +/-128, and int or Integer is the customary, handy
counter that you'd like some assurance will accommodate some useful
range.

That's why I wouldn't object to the standard requiring that an
implementation support at least 16 bits - even for small machines. People
expect it. Going the other direction (requiring support for 64 bits or 128
bits or unlimited bits) is a different situation, in that this might become
an unreasonable burden on a compiler for a small target. (I'd certainly
consider it desirable that it be "permissible" - just not "required".)

MDC
--
Marin David Condic
Senior Software Engineer
Pace Micro Technology Americas    www.pacemicro.com
Enabling the digital revolution
e-Mail:    marin.condic@pacemicro.com


"Lutz Donnerhacke" <lutz@iks-jena.de> wrote in message
news:slrnakhqa5.ok.lutz@taranis.iks-jena.de...
>
> Integer is expected to be the size of the machine word. Now choose a
> 6502 or similar µP. If you need a certain range of countable numbers,
> define your own type. But do not insist on types the machine cannot
> handle efficiently. Ada requires native support for all native machine
> types anyway.





^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-07-31 13:42             ` Marin David Condic
  2002-08-01  7:54               ` Lutz Donnerhacke
  2002-08-01 11:57               ` Larry Kilgallen
@ 2002-08-01 17:53               ` Ben Brosgol
  2002-08-01 20:32               ` Keith Thompson
  3 siblings, 0 replies; 41+ messages in thread
From: Ben Brosgol @ 2002-08-01 17:53 UTC (permalink / raw)


> Is there a case where, for example, it would make any sense at all for an
> implementation to *not* give the user 16 bits? Would it ever make sense
for
> the type Integer to be 8 bits, for example?

Not really.  E.g., remember that String's index subtype is Positive, so if
Integer were 8 bits then String objects would have a max size of 127.







^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-07-31 13:42             ` Marin David Condic
                                 ` (2 preceding siblings ...)
  2002-08-01 17:53               ` Ben Brosgol
@ 2002-08-01 20:32               ` Keith Thompson
  3 siblings, 0 replies; 41+ messages in thread
From: Keith Thompson @ 2002-08-01 20:32 UTC (permalink / raw)


"Marin David Condic" <dont.bother.mcondic.auntie.spam@[acm.org> writes:
[...]
> Is there a case where, for example, it would make any sense at all for an
> implementation to *not* give the user 16 bits? Would it ever make sense for
> the type Integer to be 8 bits, for example?

3.5.4(21) requires the range of Integer to include the range -2**15+1
.. 2**15-1 (i.e., Integer has to be at least 16 bits).

Even if that requirement weren't there, Ada.Calendar declares:

    subtype Year_Number is Integer range 1901 .. 2099;

-- 
Keith Thompson (The_Other_Keith) kst@cts.com  <http://www.ghoti.net/~kst>
San Diego Supercomputer Center           <*>  <http://www.sdsc.edu/~kst>
Schroedinger does Shakespeare: "To be *and* not to be"



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-08-01 13:07                 ` Marin David Condic
@ 2002-08-02  7:31                   ` Lutz Donnerhacke
  2002-08-02 13:21                     ` Marin David Condic
  2002-08-02  8:37                   ` Fraser Wilson
  2002-08-02 12:54                   ` Frank J. Lhota
  2 siblings, 1 reply; 41+ messages in thread
From: Lutz Donnerhacke @ 2002-08-02  7:31 UTC (permalink / raw)


* Marin David Condic wrote:
>It's been a while since I've seen a C compiler for a 6502, but my
>recollection is that the last one I did look at had 16 bits for the type
>int. Please correct me if I'm wrong here - have you seen C compilers for
>this target using 8 bits for the type int?

The only C compiler for the C64 has a 16-bit int, because that is required by
the language (at least nowadays).

>My thinking here is that even for guys programming small computers, an int
>or Integer being required to support at least 16 bits, is a useful
>constraint even if it has to be simulated with software.

Ack for C, nack for Ada. Ada has the ability to specify the type ranges you
need. C hasn't.

>That's why I wouldn't object to the standard requiring that an
>implementation support at least 16 bits - even for small machines. People
>expect it.

No. Ada people expect to define their own types if they need certain ranges.
They expect efficient implementations for those types.
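For example (a hypothetical application type, not from the post):

```ada
--  Declare the range the application actually needs; the compiler
--  chooses a suitable machine representation, or rejects the
--  declaration if the target cannot support it.
type Sample_Count is range 0 .. 100_000;
```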

>Going the other direction (requiring support for 64 bits or 128
>bits or unlimited bits) is a different situation in that this might become
>an unreasonable burden on a compiler for a small target. (I'd certainly
>consider it desirable that it be "permissable" - just not "required")

It would be fine to have a minimum requirement for the user-defined ranges.
"type uint64 is mod 2**64;" is still a portability problem.



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-08-01 13:07                 ` Marin David Condic
  2002-08-02  7:31                   ` Lutz Donnerhacke
@ 2002-08-02  8:37                   ` Fraser Wilson
  2002-08-02 12:54                   ` Frank J. Lhota
  2 siblings, 0 replies; 41+ messages in thread
From: Fraser Wilson @ 2002-08-02  8:37 UTC (permalink / raw)


"Marin David Condic" <dont.bother.mcondic.auntie.spam@[acm.org> writes:

> It's been a while since I've seen a C compiler for a 6502, but my
> recollection is that the last one I did look at had 16 bits for the type
> int. Please correct me if I'm wrong here - have you seen C compilers for
> this target using 8 bits for the type int?

No, but Ada has a special problem on 8 bit targets -- the string type.
It's obvious that Integer should be 16 bits, but it's also obvious
that using a two byte string index is overkill; one unsigned byte is
plenty.  I'm vaguely planning some compiler trickery, but it feels
bad.  Is that the normal solution to this issue?  Indexing with two
bytes takes about ten times longer than one byte.
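One possible shape for such a compact string type, as a sketch (the type names are illustrative, not from the post):

```ada
--  A string type with a one-byte index for an 8-bit target; indexing
--  needs only single-byte arithmetic, unlike the two-byte Positive
--  index of the predefined String.
type Byte_Index is range 1 .. 255;
type Short_String is array (Byte_Index range <>) of Character;
```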

Actually, I don't know that a C compiler could do anything at all
about this, since there's no special string type for which index
finagling could be used.  It would be up to the programmer to address
with an int8.

Fraser.



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-08-01 13:07                 ` Marin David Condic
  2002-08-02  7:31                   ` Lutz Donnerhacke
  2002-08-02  8:37                   ` Fraser Wilson
@ 2002-08-02 12:54                   ` Frank J. Lhota
  2 siblings, 0 replies; 41+ messages in thread
From: Frank J. Lhota @ 2002-08-02 12:54 UTC (permalink / raw)


"Marin David Condic" <dont.bother.mcondic.auntie.spam@[acm.org> wrote in
message news:aibbr8$4ie$1@nh.pace.co.uk...
> It's been a while since I've seen a C compiler for a 6502, but my
> recollection is that the last one I did look at had 16 bits for the type
> int. Please correct me if I'm wrong here - have you seen C compilers for
> this target using 8 bits for the type int?

The C standard requires that 'short int' must include all integers in the
range -32767 .. 32767, and that int includes all the values in 'short'. So
if there ever was a C compiler that implemented int with 8 bits, it was
clearly pre-ANSI.





^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-08-02  7:31                   ` Lutz Donnerhacke
@ 2002-08-02 13:21                     ` Marin David Condic
  2002-08-03 12:24                       ` Robert Dewar
  0 siblings, 1 reply; 41+ messages in thread
From: Marin David Condic @ 2002-08-02 13:21 UTC (permalink / raw)


Maybe we're just talking at cross-purposes here. I certainly don't object to
support for user defined ranges/sizes that would be smaller than 16 bits.
The question at hand here had to do with the standard type Integer being
required to be at least 16 bits and therefore the standard mandating that an
implementation be able to support at least 16 bit integers in general.
Removing that requirement would mean that an implementation would be free to
say something like "The largest integer you can declare is 8 bits..." -
which seems like an unlikely thing to do, so having the requirement is not a
bad thing. Would most programmers find a language implementation useful if
you couldn't declare integers larger than (for example) 8 bits? Is an
implementation greatly inconvenienced by having the standard type Integer
required to be at least 16 bits? (Especially since they are free to create
Short_Integer and Long_Integer as 8 and 32 bits if they like. Seems like
they'd be free to create things that match most common hardware - even for
small machines.)

As for declaring one's own types, yes, I'm generally in favor of that. When
I care about specific sizes and ranges, I'll make my own types or subtypes.
Often, when all I care about is that I've got a variable big enough to
handle some chore, I'll use the standard Integer type. If you want to argue
that "All *competent* Ada programmers dutifully declare types to be sure
they are as big as they need." then that would imply the standard should
remove the types Integer, Float, Duration, etc. So long as these types
remain in the standard, I think it is a good thing that the standard
guarantees some minimal characteristics for the types so that a programmer
knows what he can count on.
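The "declare your own types when sizes matter" practice mentioned above can be sketched as follows (the ranges and names here are only examples):

```ada
--  Example only: application-specific types carry their own range
--  requirements, so portability does not depend on Standard.Integer.
package Own_Types_Example is
   type Sensor_Reading is range -1_000 .. 1_000;   -- the exact range needed
   type Sample_Count   is range 0 .. 1_000_000;    -- must hold a million

   --  When any reasonably large integer will do, the predefined type,
   --  with its guaranteed minimum of at least 16 bits, is the
   --  convenience being defended here:
   Scratch : Integer := 0;
end Own_Types_Example;
```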

MDC
--
Marin David Condic
Senior Software Engineer
Pace Micro Technology Americas    www.pacemicro.com
Enabling the digital revolution
e-Mail:    marin.condic@pacemicro.com


"Lutz Donnerhacke" <lutz@iks-jena.de> wrote in message
news:slrnakkdbb.ou.lutz@taranis.iks-jena.de...
>
> Ack for C, nack for Ada. Ada has the ability to specify the type ranges
> you need. C hasn't.
>
> >That's why I wouldn't object to the standard requiring that an
> >implementation support at least 16 bits - even for small machines. People
> >expect it.
>
> No. Ada people expect to define their own types if they need certain
> ranges. They expect efficient implementations for those types.
> They expect efficient implementations for those types.
>






^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-08-02 13:21                     ` Marin David Condic
@ 2002-08-03 12:24                       ` Robert Dewar
  2002-08-03 18:59                         ` Richard Riehle
  2002-08-13 21:09                         ` Randy Brukardt
  0 siblings, 2 replies; 41+ messages in thread
From: Robert Dewar @ 2002-08-03 12:24 UTC (permalink / raw)


"Marin David Condic" 

> so having the requirement is not a
> bad thing. 

Here is why I think it *is* a bad requirement.

No implementation would possibly make Integer less than
16 bits. None ever has, and none ever would. Remember that
the only really critical role of integer is as an index
in the standard type String (it was a mistake to have string tied in
this way, but it's too late to fix this).

No one would make integer 8 bits and have strings limited
to 127 characters. 

If you are worried about implementors making deliberately
useless compilers, that's a silly worry, since there are
lots of ways of doing that (e.g. declaring that any
expression with more than one operator exceeds the capacity
of the compiler).

So why is the requirement harmful? Because it implies that
it is reasonable to limit integers to 16 bits, but in fact
any implementation on a modern architecture that chose 16 bits for
integer would be badly broken.

As for implementations for 8-bit micros, I would still make
Integer 32 bits on such a target. A choice of 16-bit integer (as for
example Alsys did early on) would cause
giant problems in porting code. On the other hand, a choice
of 32-bit integers would make strings take a bit more room. I would
still choose 32-bits.

Anyway, I see no reason for the standard to essentially encourage
inappropriate choices for integer types by adding a requirement that
has no practical significance whatever.

This is an old old discussion. As is so often the case on
CLA, newcomers like to repeat old discussions :-)

This particular one can probably be tracked down from the
design discussions for Ada 9X.



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-08-03 12:24                       ` Robert Dewar
@ 2002-08-03 18:59                         ` Richard Riehle
  2002-08-04  6:12                           ` Chad R. Meiners
                                             ` (4 more replies)
  2002-08-13 21:09                         ` Randy Brukardt
  1 sibling, 5 replies; 41+ messages in thread
From: Richard Riehle @ 2002-08-03 18:59 UTC (permalink / raw)


Robert Dewar wrote:

> As for implementations for 8-bit micros, I would still make
> Integer 32 bits on such a target.

I assume you are talking about Standard.Integer.   If so, this
would not correspond to the way software is so frequently
written for these machines.   In particular, for I-8051
platforms, it would introduce a potential inefficiency and
force the programmer to explicitly declare a shorter
integer (absent the availability of Standard.Short_Integer).

Since we often counsel designers to specify their own numeric
types anyway, this is probably not a hardship, but it could
be troublesome for an experienced I-8051 programmer who
expects 16 bit integers.   Consider, for example, that simply
pushing a 16 bit entity on the stack requires storing two
eight-bit stack entries.   To store a 32-bit integer would
take four stack entries.   The corresponding inefficiency
would be intolerable for most I-8051 applications.

One reason I like Ada is because we can define our own
numeric types.  Though there are few machines still extant
that use storage multiples of other than eight-bits,  they
do still exist.   I think the compiler for the Unisys 11xx
series has a word size of 36 bits.  Randy can correct me
on that if I am wrong.

> Anyway, I see no reason for the standard to essentially encourage
> inappropriate choices for integer types by adding a requirement that
> has no practical significance whatever.

I completely agree with you on this point.  The designer should make
the decision based on the architecture of the targeted platform and
the application requirements.  The language should be, as Ada is,
flexible enough to give the designer this level of support.

Don Reifer recently told me that one reason Ada was becoming
irrelevant, and his reason for recommending against its use for
new projects, was that it is not sufficiently flexible to support
the new kinds of architectures in the pipeline.   Though I disagree
with him on this assessment,  forcing the language to correspond
to a single word-size architecture (read 32 bits) would
play into his flawed view of Ada's value for new software.

Richard Riehle




^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-08-03 18:59                         ` Richard Riehle
@ 2002-08-04  6:12                           ` Chad R. Meiners
  2002-08-04 14:07                           ` Robert Dewar
                                             ` (3 subsequent siblings)
  4 siblings, 0 replies; 41+ messages in thread
From: Chad R. Meiners @ 2002-08-04  6:12 UTC (permalink / raw)



"Richard Riehle" <richard@adaworks.com> wrote in message
news:3D4C2805.62563584@adaworks.com...
> with him on this assessment,  forcing the language to correspond
> to a single word-size architecture (read 32 bits) would
> play into his flawed view of Ada's value for new software.

I don't think Dr. Dewar is arguing for forcing the language to correspond to
a single word-size architecture.

-CRM





^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-08-03 18:59                         ` Richard Riehle
  2002-08-04  6:12                           ` Chad R. Meiners
@ 2002-08-04 14:07                           ` Robert Dewar
  2002-08-05  2:28                             ` Richard Riehle
  2002-08-13 21:14                             ` Randy Brukardt
  2002-08-04 18:00                           ` Larry Kilgallen
                                             ` (2 subsequent siblings)
  4 siblings, 2 replies; 41+ messages in thread
From: Robert Dewar @ 2002-08-04 14:07 UTC (permalink / raw)


Richard Riehle <richard@adaworks.com> wrote in message news:<3D4C2805.62563584@adaworks.com>...
> I assume you are talking about Standard.Integer.   If so, this
> would not correspond to the way software is so frequently
> written for these machines.   In particular, for I-8051
> platforms, it would introduce a potential inefficiency and
> force the programmer to explicitly declare a shorter
> integer (absent the availability of Standard.Short_Integer).

First. I trust we all agree that new Ada code should
almost NEVER EVER use type Integer, except as the
index of the String type.

To talk of the programmer being "forced to declare a shorter integer"
is very peculiar, since this is nothing
more than good Ada style. If this approach helps persuade
incompetent programmers to adopt better style -- GOOD!

Second. In fact legacy code does tend to over use Integer.
So in practice when acquiring or porting legacy code, this
may be an issue, but this is *precisely* the case where
making Integer 32 bits can be appropriate, because most
of that badly written code that uses type Integer will
have assumed that integer is 32 bits.

However, the issue of unwanted overhead on the String
type is an issue (too bad these got intertwined in the
original design).
 
> Since we often counsel designers to specify their own
           ^^^^^
I trust this is a typo for *always*

> numeric types anyway, this is probably not a hardship, 
> but it could be troublesome for an experienced I-8051 
> programmer who expects 16 bit integers.

I don't understand, is this "experienced I-8051" programmer
an experienced Ada programmer. If so, he has no business
using Standard.Integer. If not, and he is writing Ada in
C style, then perhaps the choice of 32-bit integers will
help cure this bad practice.

> Consider, for example, that simply
> pushing a 16 bit entity on the stack requires storing two
> eight-bit stack entries.   To store a 32-bit integer would
> take four stack entries.   The corresponding inefficiency
> would be intolerable for most I-8051 applications.

Yes, but it is also intolerable for this experienced
I-8051 programmer to be using Standard.Integer explicitly.

 
> One reason I like Ada is because we can define our own
> numeric types.

Exactly, so what's the issue.

> Though there are few machines still
> extant that use storage multiples of other than
> eight-bits,  they do still exist.   I think the compiler
> for the Unisys 11xx series has a word size of 36 bits.
> Randy can correct me on that if I am wrong.

Yes, of course it's 36 bits (that was a port of Alsys
technology with which I am familiar).

 
> Don Reifer recently told me that one reason Ada was becoming
> irrelevant, and his reason for recommending against its use for
> new projects, was that it is not sufficiently flexible to support
> the new kinds of architectures in the pipeline.

This is complete and utter nonsense. Where on earth does
Reifer get these crazy ideas?

> Though I disagree
> with him on this assessment,  forcing the language to
> correspond to a single word-size architecture (read 32
> bits) would play into his flawed view of Ada's value for
> new software.

I find this completely puzzling. Given that in Ada code
we always define the integer types we want, the standard
forces nothing.

Well we still have the tie in with Integer and String, 
and that is worthy of discussion, but this business of
claiming that Ada is flawed because incompetent programmers
misusing Standard.Integer might get an integer size they
do not expect is really not a sustainable argument.

Indeed a good argument can be made that the type Integer
should never have been introduced in the first place. It
is an unnecessary concession to C and Fortran programmers.



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-08-03 18:59                         ` Richard Riehle
  2002-08-04  6:12                           ` Chad R. Meiners
  2002-08-04 14:07                           ` Robert Dewar
@ 2002-08-04 18:00                           ` Larry Kilgallen
       [not found]                           ` <5ee5b646.0208040607.ebb6909@posting.googOrganization: LJK Software <PG2KS5+doDWm@eisner.encompasserve.org>
  2002-08-11 21:56                           ` Robert A Duff
  4 siblings, 0 replies; 41+ messages in thread
From: Larry Kilgallen @ 2002-08-04 18:00 UTC (permalink / raw)


In article <5ee5b646.0208040607.ebb6909@posting.google.com>, dewar@gnat.com (Robert Dewar) writes:
> Richard Riehle <richard@adaworks.com> wrote in message news:<3D4C2805.62563584@adaworks.com>...

>> Since we often counsel designers to specify their own
>            ^^^^^
> I trust this is a typo for *always*

I would expect no such counsel when:

	1. They are already doing so.
   or
	2. This consideration is vastly outweighed by a body of
	   more important items that deserve their intention.

>> numeric types anyway, this is probably not a hardship, 
>> but it could be troublesome for an experienced I-8051 
>> programmer who expects 16 bit integers.
> 
> I don't understand, is this "experienced I-8051" programmer
> an experienced Ada programmer. If so, he has no business
> using Standard.Integer. If not, and he is writing Ada in
> C style, then perhaps the choice of 32-bit integers will
> help cure this bad practice.

Let's cure _all_ such individuals, by standardizing on 4096 bits
as the size for Standard.Integer :-)



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
       [not found]                           ` <5ee5b646.0208040607.ebb6909@posting.googOrganization: LJK Software <PG2KS5+doDWm@eisner.encompasserve.org>
@ 2002-08-05  1:44                             ` Robert Dewar
  2002-08-05  1:48                             ` Robert Dewar
  2002-08-05  2:34                             ` Richard Riehle
  2 siblings, 0 replies; 41+ messages in thread
From: Robert Dewar @ 2002-08-05  1:44 UTC (permalink / raw)


Kilgallen@SpamCop.net (Larry Kilgallen) wrote in message news:<PG2KS5+doDWm@eisner.encompasserve.org>...
> Let's cure _all_ such individuals, by standardizing on 
> 4096 bits as the size for Standard.Integer :-)

Actually I think the rule of avoiding using explicit
references to Integer is pretty well established. Yes,
you see occasional shops that violate this rule, but not
many (in our experience of seeing code from hundreds of
serious projects using Ada).

Actually the above would almost warrant presentation
without a smiley (as a way of effectively removing the
injudicious Standard.Integer type from the language) if
it were not for the darned cross contamination with
type String.

What I would have done for the Ada design is to have only
*one* predefined type which would be called something like

   type String_Index is range 1 .. Implementation_Defined;

and leave all other integer types out of standard. I think
that would have worked better, and people would not have
"misused" String_Index as they occasionally misuse Integer.

I feel differently about Float incidentally, I think it is
reasonable to have predefined Float and Long_Float types.
It is simply asking too much for all useful fpt library
packages to be generic since they can't easily speak to
one another if you do that.



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
       [not found]                           ` <5ee5b646.0208040607.ebb6909@posting.googOrganization: LJK Software <PG2KS5+doDWm@eisner.encompasserve.org>
  2002-08-05  1:44                             ` Robert Dewar
@ 2002-08-05  1:48                             ` Robert Dewar
  2002-08-05 11:40                               ` Marc A. Criley
  2002-08-05  2:34                             ` Richard Riehle
  2 siblings, 1 reply; 41+ messages in thread
From: Robert Dewar @ 2002-08-05  1:48 UTC (permalink / raw)


Kilgallen@SpamCop.net (Larry Kilgallen) wrote in message news:<PG2KS5+doDWm@eisner.encompasserve.org>...
> 	2. This consideration is vastly outweighed by a 
>          body of more important items that deserve their
>          intention.
           ^^^^^^^^^
Nice typo, I assume you mean attention :-)

Actually, this kind of short range thinking (sorry I am too
busy to learn how to do things right, I have too much to
worry about) is really inappropriate to the Ada world and
the world of serious software engineering. I can't tell you
how many projects have a heck of a time porting legacy code
because programmers have not given attention to the issue of writing
portable code earlier on. Note that portable
code is code that is easily able to be ported. That does
NOT mean having stupid rules like "no use of unchecked
conversion". What it does mean is careful encapsulation
of dependencies, and avoidance of gratuitous non-portabilities (such
as using type Standard.Integer :-)



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-08-04 14:07                           ` Robert Dewar
@ 2002-08-05  2:28                             ` Richard Riehle
  2002-08-11 15:32                               ` Simon Wright
  2002-08-13 21:14                             ` Randy Brukardt
  1 sibling, 1 reply; 41+ messages in thread
From: Richard Riehle @ 2002-08-05  2:28 UTC (permalink / raw)


Robert Dewar wrote:

> First. I trust we all agree that new Ada code should
> almost NEVER EVER use type Integer, except as the
> index of the String type.

Robert,

We are in full agreement on all of the points you made.

The only two uses of predefined type Integer are its
requirement in the predefined String, and for occasional
pedagogic purposes, and the latter use often leads to
more trouble than it is worth.

Richard Riehle




^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
       [not found]                           ` <5ee5b646.0208040607.ebb6909@posting.googOrganization: LJK Software <PG2KS5+doDWm@eisner.encompasserve.org>
  2002-08-05  1:44                             ` Robert Dewar
  2002-08-05  1:48                             ` Robert Dewar
@ 2002-08-05  2:34                             ` Richard Riehle
  2 siblings, 0 replies; 41+ messages in thread
From: Richard Riehle @ 2002-08-05  2:34 UTC (permalink / raw)


Larry Kilgallen wrote:

> In article <5ee5b646.0208040607.ebb6909@posting.google.com>, dewar@gnat.com (Robert Dewar) writes:
> > Richard Riehle <richard@adaworks.com> wrote in message news:<3D4C2805.62563584@adaworks.com>...
>
> >> Since we often counsel designers to specify their own
> >            ^^^^^
> > I trust this is a typo for *always*

I do have difficulty with words such as always and never.   Also, over
the years, especially when I was a day-to-day programmer, I found
myself suspicious of rigid rules for programming practice.  So my
use of the word "often", though it might be a little too indecisive
for some tastes, originates in the caution I use when giving advice
about programming style.  Perhaps I am just too wishy-washy about
this and need more backbone in my counsel.

Richard Riehle




^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-08-05  1:48                             ` Robert Dewar
@ 2002-08-05 11:40                               ` Marc A. Criley
  2002-08-05 14:40                                 ` Pat Rogers
  0 siblings, 1 reply; 41+ messages in thread
From: Marc A. Criley @ 2002-08-05 11:40 UTC (permalink / raw)


Robert Dewar wrote:
> 
> Actually, this kind of short range thinking (sorry I am too
> busy to learn how to do things right, I have too much to
> worry about) is really inappropriate to the Ada world and
> the world of serious software engineering. I can't tell you
> how many projects have a heck of a time porting legacy code
> because programmers have not given attention to the issue of writing
> portable code earlier on.

I can certainly vouch for the presence of this practice in the
industry.  I've done a few ports over the years, and many times I've
had to deal with problems that would not have existed had the original
developers spent just five minutes thinking about how to implement
things in an "Ada" way, rather than taking the first approach that
came into their heads.  That would also have made the original code
more straightforward and readable.

The latest was the message buffer handling portion, wherein
Unchecked_Conversion, 'Address, and an Interfaced memcpy routine were
used to move the bits around from buffer to buffer, and to build and
unpack messages.  Variant records and assignment statements would've
worked just as well.
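A minimal sketch of the alternative described above (the message layout and all names are invented for illustration): a record with a representation clause lets ordinary component access and assignment replace the 'Address/memcpy approach.

```ada
--  Invented example layout; this only illustrates the technique, not
--  the actual system being ported.
with Interfaces; use Interfaces;

package Message_Buffers is
   type Header is record
      Msg_Id : Unsigned_16;
      Length : Unsigned_16;
   end record;

   --  Pin down the wire layout explicitly, instead of copying raw
   --  bytes around with Unchecked_Conversion or memcpy:
   for Header use record
      Msg_Id at 0 range 0 .. 15;
      Length at 2 range 0 .. 15;
   end record;
   for Header'Size use 32;
end Message_Buffers;
```

With the layout fixed by the clause, "unpacking" a field is plain record component access, and building a message is plain assignment.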

Although software systems like these are really helpful for stress
testing source code analysis tools :-)

Marc A. Criley, Consultant
Quadrus Corporation
www.quadruscorp.com



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-08-05 11:40                               ` Marc A. Criley
@ 2002-08-05 14:40                                 ` Pat Rogers
  0 siblings, 0 replies; 41+ messages in thread
From: Pat Rogers @ 2002-08-05 14:40 UTC (permalink / raw)


"Marc A. Criley" <mcq95@earthlink.net> wrote in message
news:3D4E652E.FDB4C1C@earthlink.net...
<snip>
> The latest was the message buffer handling portion, wherein
> Unchecked_Conversion, 'Address, and an Interfaced memcpy routine were
> used to move the bits around from buffer to buffer, and to build and
> unpack messages.  Variant records and assignment statements would've
> worked just as well.

I smell 1553 (or such).  OOP works even better for packing/unpacking,
although you do have to convince people that it is OK in the application
domain!





^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-08-05  2:28                             ` Richard Riehle
@ 2002-08-11 15:32                               ` Simon Wright
  0 siblings, 0 replies; 41+ messages in thread
From: Simon Wright @ 2002-08-11 15:32 UTC (permalink / raw)


Richard Riehle <richard@adaworks.com> writes:

> Robert Dewar wrote:
> 
> > First. I trust we all agree that new Ada code should
> > almost NEVER EVER use type Integer, except as the
> > index of the String type.
> 
> Robert,
> 
> We are in full agreement on all of the points you made.
> 
> The only two uses of predefined type Integer are its
> requirement in the predefined String, and for occasional
> pedagogic purposes, and the latter use often leads to
> more trouble than it is worth.

Well, I think that when you are writing libraries you may find that
Integer, Natural and Positive have their uses. For example,

   package Tables is

      type Item_Array is array (Positive range <>) of Items.Item_Container;
      type Value_Array is array (Positive range <>) of Values.Value_Container;

      type Table (Number_Of_Buckets : Positive) is record
         Items : Item_Array (1 .. Number_Of_Buckets);
         Values : Value_Array (1 .. Number_Of_Buckets);
         Size : Natural := 0;
      end record;

.. it seems pointless to force the user to provide yet another
parameter to specify the type for Number_Of_Buckets.

I note also that GNAT.Dynamic_Tables has

   generic
      type Table_Component_Type is private;
      type Table_Index_Type     is range <>;

      Table_Low_Bound : Table_Index_Type;
      Table_Initial   : Positive;
      Table_Increment : Natural;

   package GNAT.Dynamic_Tables is



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-08-03 18:59                         ` Richard Riehle
                                             ` (3 preceding siblings ...)
       [not found]                           ` <5ee5b646.0208040607.ebb6909@posting.googOrganization: LJK Software <PG2KS5+doDWm@eisner.encompasserve.org>
@ 2002-08-11 21:56                           ` Robert A Duff
  4 siblings, 0 replies; 41+ messages in thread
From: Robert A Duff @ 2002-08-11 21:56 UTC (permalink / raw)


Simon Wright <simon@pushface.org> writes:

> Well, I think that when you are writing libraries you may find that
> Integer, Natural and Positive have their uses. For example,
> 
>    package Tables is
> 
>       type Item_Array is array (Positive range <>) of Items.Item_Container;
>       type Value_Array is array (Positive range <>) of Values.Value_Container;

But using different index types for different unrelated arrays prevents
accidentally using an index for the wrong array.

- Bob



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-08-03 12:24                       ` Robert Dewar
  2002-08-03 18:59                         ` Richard Riehle
@ 2002-08-13 21:09                         ` Randy Brukardt
  2002-08-18  1:01                           ` AG
  1 sibling, 1 reply; 41+ messages in thread
From: Randy Brukardt @ 2002-08-13 21:09 UTC (permalink / raw)


Robert Dewar wrote in message
<5ee5b646.0208030424.39703482@posting.google.com>...
>"Marin David Condic"
>
>> so having the requirement is not a
>> bad thing.
>
>Here is why I think it *is* a bad requirement.
>
>No implementation would possibly make Integer less than
>16 bits. None ever has, and none ever would. Remember that
>the only really critical role of integer is as an index
>in the standard type String (it was a mistake to have string tied in
>this way, but it's too late to fix this)
>
>No one would make integer 8 bits and have strings limited
>to 127 characters.
>
>If you are worried about implementors making deliberately
>useless compilers, that's a silly worry, since there are
>lots of ways of doing that (e.g. declaring that any
>expression with more than one operator exceeds the capacity
>of the compiler).


Right. The reason for the watered-down requirement was that certain
implementors wouldn't stand for a 32-bit requirement, which would break
all of their customers' code.

>So why is the requirement harmful? Because it implies that
>it is reasonable to limit integers to 16 bits, but in fact
>any implementation on a modern architecture that chose 16 bits for
>integer would be badly broken.

Here's where we disagree. Janus/Ada in fact still uses 16-bits for
Integer. This is simply because changing it breaks too much code (ours
and our customers'); the compiler fully supports 32-bit integers and
32-bit strings. (I've tried it.) It would be trivial to change if
needed. (I've thought about putting in a compiler switch, but I haven't
had time or customer demand, and it is messy because of the way that
symbol table files are interrelated.) In any case, code that declares
its own types does not have a portability problem.

Indeed, I have occasionally declared my own string type to avoid this
dependence on Integer:

    type String_Count is range 0 .. 2**32-1;
    subtype String_Index is String_Count range 1 .. String_Count'Last;
    type My_String is array (String_Index range <>) of Character;

Ada's rules mean that this type can be used exactly like String. The
only time something extra need be done is when an item of type String is
actually required (such as for a file name). Then a simple type
conversion will do the trick (presuming that the string is short enough,
which better be true for file names).

     Ada.Text_IO.Open (File, String(My_Name), Ada.Text_IO.In_File);

>As for implementations for 8-bit micros, I would still make
>Integer 32 bits on such a target. A choice of 16-bit integer (as for
>example Alsys did on early on) would cause
>giant problems in porting code. On the other hand, a choice
>of 32-bit integers would make strings take a bit more room. I would
>still choose 32-bits.


It takes a lot more room, as the runtime has to carry support for doing
math on 32-bit items (multiply, divide, and so on are too large to put
in-line at the point of use). You're likely to pay for this support
always, because it is very likely that a String or Integer will occur in
the code (and use these operations). On (very) memory-limited targets,
that can be very bad. String descriptors, for-loop data, and the like
are also twice as large -- which can matter.

The portability problem only occurs with code that is dubiously designed
in the first place (using predefined numeric types), and it probably is
too large for memory-limited targets anyway. So I strongly disagree with
this.

>Anyway, I see no reason for the standard to essentially encourage
>inappropriate choices for integer types by adding a requirement that
>has no practical significance whatever.
>
>This is an old old discussion. As is so often the case on
>CLA, newcomers like to repeat old discussions :-)
>
>This particular one can probably be tracked down from the
>design discussions for Ada 9X.

Sure, I remember this well. Randy wouldn't stand for a stronger
requirement (there was one at one point); he managed to get enough
support for his position, so we have the one we have now. Note that Ada
83 had no such requirement; the ACVC tests assumed 12-bit integers
maximum. It is useful for the ACVC/ACATS for there to be such a minimum
requirement. But I agree it probably isn't useful for anyone else.

               Randy.






^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-08-04 14:07                           ` Robert Dewar
  2002-08-05  2:28                             ` Richard Riehle
@ 2002-08-13 21:14                             ` Randy Brukardt
  1 sibling, 0 replies; 41+ messages in thread
From: Randy Brukardt @ 2002-08-13 21:14 UTC (permalink / raw)


Robert Dewar wrote in message
<5ee5b646.0208040607.ebb6909@posting.google.com>...

>> Though there are few machines still
>> extant that use storage multiples of other than
>> eight-bits,  they do still exist.   I think the compiler for the
>> Unisys 11xx series has a word size of 36 bits.  Randy can correct me
>> on that if I am wrong.
>
>Yes, of course it's 36 bits (that was a port of Alsys
>technology with which I am familiar).


That's the Ada 83 compiler. The Ada 95 compiler is a port of Janus/Ada.
And, yes, it has a word size of 36 bits. (Although the compiler
internally considers it a machine with 9-bit storage units and strong
alignment requirements. This was a better match for the code generator,
which was created for C, and ensured that strings could be packed
without standing on one's head.)
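[Editor's sketch, not part of Randy's message.] Readers curious what their own target reports can query the static constants in package System: Storage_Unit gives the width of a storage element in bits, and Word_Size the width of a machine word.

```ada
--  Sketch: print the storage-unit and word widths the compiler
--  reports for this target.  On common byte-addressed machines this
--  prints 8 and 32 or 64; a port like the one described above could
--  report 9 and 36.
with Ada.Text_IO;
with System;

procedure Target_Info is
begin
   Ada.Text_IO.Put_Line
     ("Storage unit:" & Integer'Image (System.Storage_Unit));
   Ada.Text_IO.Put_Line
     ("Word size:   " & Integer'Image (System.Word_Size));
end Target_Info;
```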

               Randy.






^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-08-13 21:09                         ` Randy Brukardt
@ 2002-08-18  1:01                           ` AG
  2002-08-20  0:15                             ` Robert Dewar
  0 siblings, 1 reply; 41+ messages in thread
From: AG @ 2002-08-18  1:01 UTC (permalink / raw)



"Randy Brukardt" <randy@rrsoftware.com> wrote in message
news:ulita886np1nd9@corp.supernews.com...

>     type String_Count is range 0 .. 2**32-1;
>     subtype String_Index is String_Count range 1 .. String_Count'Last;

To be consistent, should it not be:

subtype String_Index is String_Count
  range String_Count'First + 1 .. String_Count'Last;

Otherwise you are assuming that counting starts from zero (or one).





^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 64-bit integers in Ada
  2002-08-18  1:01                           ` AG
@ 2002-08-20  0:15                             ` Robert Dewar
  0 siblings, 0 replies; 41+ messages in thread
From: Robert Dewar @ 2002-08-20  0:15 UTC (permalink / raw)


"AG" <ang@xtra.co.nz> wrote in message news:<UjC79.5960$hk3.1112693@news.xtra.co.nz>...

> To be consistent, should not it be:
> 
> subtype String_Index is String_Count range 
> String_Count'First + 1 .. String_Count'Last;
> 
> Otherwise you are assuming that counting starts from zero (or one).

Actually this seems a poor suggestion: your rewrite does not
emphasize that the lower bound of strings starts at 1.
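[Editor's sketch, using the type names from the quoted messages.] The two spellings declare the same subtype here, since String_Count'First is 0; the disagreement is only about which fact the reader is made to see.

```ada
--  Sketch: both subtypes below denote the range 1 .. 2**32 - 1.
package String_Types is
   type String_Count is range 0 .. 2 ** 32 - 1;

   --  Randy's spelling: the lower bound 1 is stated explicitly,
   --  emphasizing that string indexing starts at 1.
   subtype String_Index is String_Count range 1 .. String_Count'Last;

   --  AG's spelling: the bound is derived from 'First, which tracks
   --  any change to String_Count but hides the literal 1.
   subtype String_Index_Alt is String_Count
     range String_Count'First + 1 .. String_Count'Last;
end String_Types;
```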



^ permalink raw reply	[flat|nested] 41+ messages in thread

end of thread, other threads:[~2002-08-20  0:15 UTC | newest]

Thread overview: 41+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2002-05-16 11:27 64-bit integers in Ada David Rasmussen
2002-05-17  2:28 ` Robert Dewar
2002-05-17 13:56 ` Mark Johnson
2002-07-29 15:33 ` Victor Giddings
2002-07-29 20:15   ` Robert A Duff
2002-07-30 18:35     ` Richard Riehle
2002-07-30 20:20       ` Robert A Duff
2002-07-31  0:13       ` Robert Dewar
2002-07-31  4:17         ` Keith Thompson
2002-07-31  8:41           ` Larry Kilgallen
2002-07-31 13:20           ` Robert A Duff
2002-07-31 13:42             ` Marin David Condic
2002-08-01  7:54               ` Lutz Donnerhacke
2002-08-01 13:07                 ` Marin David Condic
2002-08-02  7:31                   ` Lutz Donnerhacke
2002-08-02 13:21                     ` Marin David Condic
2002-08-03 12:24                       ` Robert Dewar
2002-08-03 18:59                         ` Richard Riehle
2002-08-04  6:12                           ` Chad R. Meiners
2002-08-04 14:07                           ` Robert Dewar
2002-08-05  2:28                             ` Richard Riehle
2002-08-11 15:32                               ` Simon Wright
2002-08-13 21:14                             ` Randy Brukardt
2002-08-04 18:00                           ` Larry Kilgallen
     [not found]                           ` <5ee5b646.0208040607.ebb6909@posting.googOrganization: LJK Software <PG2KS5+doDWm@eisner.encompasserve.org>
2002-08-05  1:44                             ` Robert Dewar
2002-08-05  1:48                             ` Robert Dewar
2002-08-05 11:40                               ` Marc A. Criley
2002-08-05 14:40                                 ` Pat Rogers
2002-08-05  2:34                             ` Richard Riehle
2002-08-11 21:56                           ` Robert A Duff
2002-08-13 21:09                         ` Randy Brukardt
2002-08-18  1:01                           ` AG
2002-08-20  0:15                             ` Robert Dewar
2002-08-02  8:37                   ` Fraser Wilson
2002-08-02 12:54                   ` Frank J. Lhota
2002-08-01 11:57               ` Larry Kilgallen
2002-08-01 17:53               ` Ben Brosgol
2002-08-01 20:32               ` Keith Thompson
2002-07-31 21:50             ` Keith Thompson
2002-07-31 21:59               ` Robert A Duff
2002-07-30  4:29   ` Robert Dewar

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox