comp.lang.ada
* Efficiency of code generated by Ada compilers
@ 2010-08-06 20:21 Elias Salomão Helou Neto
  2010-08-06 20:24 ` (see below)
                   ` (4 more replies)
  0 siblings, 5 replies; 94+ messages in thread
From: Elias Salomão Helou Neto @ 2010-08-06 20:21 UTC (permalink / raw)


I would like to know how code generated by Ada compilers compares to
that generated by C++ compilers. I use C++ for numerical software
implementation, but I am trying to find alternatives. One thing,
however, that I cannot trade for convenience is efficiency. Will
Ada-compiled code possibly be as efficient as that generated by C++
compilers?

Also, I do need something similar to C++ "template metaprogramming"
techniques. In particular, C++0x will introduce variadic templates,
which will allow us to write templates that generate efficient,
type-safe, variable-argument functions. Is there anything like that in
Ada?

If any of the above questions is answered negatively, I ask: why does
Ada even exist? And further, is there any language that is _truly_
better (regarding code maintainability, readability, and ease of
development) than C++ while being as overhead-free?

Thank you in advance,
Elias.



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-06 20:21 Efficiency of code generated by Ada compilers Elias Salomão Helou Neto
@ 2010-08-06 20:24 ` (see below)
  2010-08-06 23:14 ` Shark8
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 94+ messages in thread
From: (see below) @ 2010-08-06 20:24 UTC (permalink / raw)


On 06/08/2010 21:21, in article
f3c0cf89-6993-4b83-b4fe-8a920ce23a14@f6g2000yqa.googlegroups.com, "Elias
Salomão Helou Neto" <eshneto@gmail.com> wrote:

> I would like to know how code generated by Ada compilers compares
...
> Elias.

Troll alert.

-- 
Bill Findlay
<surname><forename> chez blueyonder.co.uk






* Re: Efficiency of code generated by Ada compilers
  2010-08-06 20:21 Efficiency of code generated by Ada compilers Elias Salomão Helou Neto
  2010-08-06 20:24 ` (see below)
@ 2010-08-06 23:14 ` Shark8
  2010-08-07  7:53 ` Dmitry A. Kazakov
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 94+ messages in thread
From: Shark8 @ 2010-08-06 23:14 UTC (permalink / raw)


On Aug 6, 2:21 pm, Elias Salomão Helou Neto <eshn...@gmail.com> wrote:
> I would like to know how code generated by Ada compilers compares to
> that generated by C++ compilers. I use C++ for numerical software
> implementation, but I am trying to find alternatives. One thing,
> however, that I cannot trade for convenience is efficiency. Will
> Ada-compiled code possibly be as efficient as that generated by C++
> compilers?
>
> Also, I do need something similar to C++ "template metaprogramming"
> techniques. In particular, C++0x will introduce variadic templates,
> which will allow us to write templates that generate efficient,
> type-safe, variable-argument functions. Is there anything like that in
> Ada?
>
> If any of the above questions is answered negatively, I ask: why does
> Ada even exist? And further, is there any language that is _truly_
> better (regarding code maintainability, readability, and ease of
> development) than C++ while being as overhead-free?
>
> Thank you in advance,
> Elias.

Well, it really depends on what you mean by "efficient."  If you call
accessing invalid indices of arrays 'efficient' "because there's no
time wasted on range-checking," then you're *REALLY* in the wrong
place... however, if by 'efficient' you mean that no range checks are
generated because the variable indexing the array is constrained to
valid values, well then you'd be quite welcome here.

{GCC has an Ada front end, so in theory Ada and C++ programs compiled
with GCC should have very nearly the same performance profiles; this
does not, however, take into account Ada's stronger type system and
the compile-time checks and optimizations it allows.}  As for me, I am
rather strongly of the opinion that things like range checking should
[always] be done, unless there is some VERY compelling and definable
reason not to.  {Say you're writing a stream interpreter where the
stream carries operation instructions for some robotic component,
except the stream itself also contains control commands for
sub-unit[s] that have a different/larger/disjoint range.}
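For comparison, here is how that same trade-off is usually spelled on
the C++ side (a hedged sketch; the function names are mine, not
anything standard):

```cpp
#include <array>
#include <cstddef>
#include <stdexcept>

// Unchecked access: no range check is emitted, and an invalid
// index is undefined behavior -- "fast" in the wrong sense.
int unchecked_get(const std::array<int, 10>& a, std::size_t i) {
    return a[i];
}

// Checked access: std::array::at verifies the index on every
// call and throws std::out_of_range when it is invalid.
int checked_get(const std::array<int, 10>& a, std::size_t i) {
    return a.at(i);
}
```

The Ada point is that a variable of a constrained index type (e.g.
`subtype Index is Natural range 0 .. 9;`) can never hold an invalid
value in the first place, so the compiler may omit the check on the
access itself.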

Templates, as I understand them in C++, are roughly analogous to Ada's
generic system; I never got into C++'s templates and am relatively new
to Ada, so I'll let someone with more experience address that aspect.




* Re: Efficiency of code generated by Ada compilers
  2010-08-06 20:21 Efficiency of code generated by Ada compilers Elias Salomão Helou Neto
  2010-08-06 20:24 ` (see below)
  2010-08-06 23:14 ` Shark8
@ 2010-08-07  7:53 ` Dmitry A. Kazakov
  2010-08-10 13:52   ` Elias Salomão Helou Neto
  2010-08-08 14:03 ` Gene
  2010-08-15 12:32 ` Florian Weimer
  4 siblings, 1 reply; 94+ messages in thread
From: Dmitry A. Kazakov @ 2010-08-07  7:53 UTC (permalink / raw)


On Fri, 6 Aug 2010 13:21:48 -0700 (PDT), Elias Salomão Helou Neto wrote:

> I would like to know how code generated by Ada compilers compares
> to that generated by C++ compilers.

The short answer is yes, of course.

The long answer is that such comparisons are quite difficult to perform
accurately.

> I use C++ for numerical software
> implementation, but I am trying to find alternatives.

Ada is especially good for this, because it has an elaborate system of
numeric types with specified accuracy, precision, and behavior, which
C++ lacks.

> One thing, however, that I cannot trade for convenience is
> efficiency. Will Ada-compiled code possibly be as efficient as that
> generated by C++ compilers?

Certainly yes. Potentially Ada code can be more efficient, because in Ada
your program usually tells the compiler more than in C++. This information
can be used in optimization.

> Also, I do need something similar to C++ "template metaprogramming"
> techniques.

Ada has generics, which are roughly the same as templates. Unlike C++
templates, Ada generics are contracted and not automatically
instantiated.

> In particular, C++0x will introduce variadic templates, which will
> allow us to write templates that generate efficient, type-safe,
> variable-argument functions. Is there anything like that in Ada?

No.
 
> If any of the above questions is answered negatively, I ask: why does
> Ada even exist?

Do you mean variadic templates here? Seriously?

> And further, is there any language that is _truly_ better (regarding
> code maintainability, readability, and ease of development) than C++
> while being as overhead-free?

Maintainability, readability, and ease of development depend to a
large degree on *not* using things like C++ templates, and even more
so on not using variadic templates!

Note that for numeric applications templates do not help much. Consider
the following problem. Suppose you have to implement some mathematical
function with a known algorithm and put it into a library. (That the
latter is not possible with templates anyway is beside the point.) How
do you make it work for, say, float, double, etc.? Metaprogramming, you
say. You could try implementing it as a template<class number>, but the
parameter here is not the actual type you need to use in the
computations in order to make the result accurate within the precision
of the type number. In C++ you cannot even learn the precision of a
template argument. In Ada at least you can (Ada Reference Manual
3.5.8). But consider:

generic
   type Real is digits <>;  -- To work with any floating-point type
function Generic_Elliptic_Integral (X : Real; K : Real) return Real;

The following implementation is illegal:

function Generic_Elliptic_Integral (X : Real; K : Real) return Real is
   type Internal is digits Real'Digits * 2;  -- You cannot do that!
begin
   ...

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Efficiency of code generated by Ada compilers
  2010-08-06 20:21 Efficiency of code generated by Ada compilers Elias Salomão Helou Neto
                   ` (2 preceding siblings ...)
  2010-08-07  7:53 ` Dmitry A. Kazakov
@ 2010-08-08 14:03 ` Gene
  2010-08-08 15:49   ` Robert A Duff
  2010-08-15 12:32 ` Florian Weimer
  4 siblings, 1 reply; 94+ messages in thread
From: Gene @ 2010-08-08 14:03 UTC (permalink / raw)


On Aug 6, 4:21 pm, Elias Salomão Helou Neto <eshn...@gmail.com> wrote:
> I would like to know how code generated by Ada compilers compares to
> that generated by C++ compilers. I use C++ for numerical software
> implementation, but I am trying to find alternatives. One thing,
> however, that I cannot trade for convenience is efficiency. Will
> Ada-compiled code possibly be as efficient as that generated by C++
> compilers?
>
> Also, I do need something similar to C++ "template metaprogramming"
> techniques. In particular, C++0x will introduce variadic templates,
> which will allow us to write templates that generate efficient,
> type-safe, variable-argument functions. Is there anything like that in
> Ada?
>
> If any of the above questions is answered negatively, I ask: why does
> Ada even exist? And further, is there any language that is _truly_
> better (regarding code maintainability, readability, and ease of
> development) than C++ while being as overhead-free?

My experience is that GNAT produces code very similar to GCC-compiled
C++ for equivalent programs.

In some cases, a built-in Ada construct will generate better code than
a "hand-coded" equivalent in C++.  An example I ran into recently was
incrementing a modular type.  In Ada, you say I := I + 1; and the
compiler takes care of "wrapping" to zero.  In C++, I've frequently
seen people write i = (i + 1) % n; to simulate the same operation in
the absence of modular types.  In this case, GCC/C++ generates a
div/mod instruction, which is very expensive.  GNAT generates the
equivalent of if (i == n - 1) i = 0; else i++;  In the past, I have
seen examples where Ada array indexing was much more efficient than
C++, apparently because GCC could not make the needed inferences about
aliasing.
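The two C++ spellings side by side (a sketch; both assume i is already
in 0 .. n-1):

```cpp
// Wrap by division: the "%" typically costs a div instruction
// when n is not a compile-time power of two.
unsigned next_mod(unsigned i, unsigned n) {
    return (i + 1) % n;
}

// Wrap by comparison: the shape of code GNAT emits for a
// modular-type increment -- a compare and a branch, no division.
unsigned next_cmp(unsigned i, unsigned n) {
    return (i == n - 1) ? 0 : i + 1;
}
```

Both return the same value for every i in 0 .. n-1.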

In some cases, GNAT "compiles in" error checking that you must turn
off to get the equivalent GCC code.  There's no down side in this, as
it means you never had to write the error checking (overflows, array
bounds, etc.) yourself, which is much more expensive and error-prone
than letting GNAT do it for you (and then possibly turning selected
bits off in the 1% of cases where it makes a difference).  And of
course it's even more expensive never to check for errors at all,
adding the risk of an undiscovered bug in production, which an
unfortunate amount of C++ code does.

Finally, there are a few places where GCC/C++ does produce somewhat
better code than GNAT simply because there are more people tweaking
the C++ portion of GCC.  I can't recall ever seeing the reverse
occur.  So it goes.  There will always be differences among compilers.

Design and idiomatic coding in Ada wouldn't benefit from variadic
functions and procedures.  Various combinations of overloading, named
parameters with default values, and aggregates accomplish the same
objectives in a more coherent manner.  Certainly efficient variadic
functions would be useful in C++ because they're specified in the
libraries.
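For reference, the C++0x feature the original poster is asking about
looks roughly like this (a sketch, not production code):

```cpp
// A type-safe, variable-argument sum via C++0x variadic templates.
// Recursion is resolved at compile time, and each argument keeps
// its static type, unlike C-style varargs.
template <typename T>
T sum(T last) {
    return last;
}

template <typename T, typename... Rest>
T sum(T first, Rest... rest) {
    return first + sum(rest...);
}
```

A call such as `sum(1, 2, 3, 4)` expands at compile time into
`1 + (2 + (3 + 4))`, with no run-time argument list.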

You were asking reasonable questions until the last paragraph. So I'll
stop here.  But see comp.lang.c where C zealots frequently make
similar assertions about C with respect to C++.





* Re: Efficiency of code generated by Ada compilers
  2010-08-08 14:03 ` Gene
@ 2010-08-08 15:49   ` Robert A Duff
  2010-08-08 17:13     ` Charles H. Sampson
  2010-08-08 22:35     ` tmoran
  0 siblings, 2 replies; 94+ messages in thread
From: Robert A Duff @ 2010-08-08 15:49 UTC (permalink / raw)


Gene <gene.ressler@gmail.com> writes:

> In some cases, a built-in Ada construct will generate better code than
> a "hand-coded" equivalent in C++.  An example I ran into recently was
> incrementing a modular type.  In Ada, you say I := I + 1;, and the
> compiler takes care of "wrapping" to zero.  In C++, I've frequently
> seen people write i = (i + 1) % n; to simulate the same operation in
> the absence of modular types.

You will see me writing "I := (I + 1) mod N;" (for signed
integer I) in Ada.  ;-)

>...In this case, GCC/C++ generates a div/mod instruction, which is
> very expensive.

That's surprising.  It wouldn't be hard to optimize the
"I := (I + 1) mod N;" case (which would apply to both
C and Ada).

And it would be desirable, because in most cases the explicit
"mod" (or "%") is more readable.

- Bob




* Re: Efficiency of code generated by Ada compilers
  2010-08-08 15:49   ` Robert A Duff
@ 2010-08-08 17:13     ` Charles H. Sampson
  2010-08-08 18:11       ` Dmitry A. Kazakov
  2010-08-08 20:51       ` Robert A Duff
  2010-08-08 22:35     ` tmoran
  1 sibling, 2 replies; 94+ messages in thread
From: Charles H. Sampson @ 2010-08-08 17:13 UTC (permalink / raw)


Robert A Duff <bobduff@shell01.TheWorld.com> wrote:

> Gene <gene.ressler@gmail.com> writes:
> 
> > In some cases, a built-in Ada construct will generate better code than
> > a "hand-coded" equivalent in C++.  An example I ran into recently was
> > incrementing a modular type.  In Ada, you say I := I + 1;, and the
> > compiler takes care of "wrapping" to zero.  In C++, I've frequently
> > seen people write i = (i + 1) % n; to simulate the same operation in
> > the absence of modular types.
> 
> You will see me writing "I := (I + 1) mod N;" (for signed
> integer I) in Ada.  ;-)
>
     I'm surprised, Bob.  Are you saying that you use signed integers
in preference to a modular type for a variable that cycles?  I use
modular-typed variables and, if I've got my engineer's hat on, write

     I := I + 1;  -- Modular variable.  Wraps.

but I have to admit that in the heat of battle the comment might be
omitted.  Even in that case, the variable name probably indicates that
wrapping should be expected.

                        Charlie
-- 
All the world's a stage, and most 
of us are desperately unrehearsed.  Sean O'Casey




* Re: Efficiency of code generated by Ada compilers
  2010-08-08 17:13     ` Charles H. Sampson
@ 2010-08-08 18:11       ` Dmitry A. Kazakov
  2010-08-08 20:51       ` Robert A Duff
  1 sibling, 0 replies; 94+ messages in thread
From: Dmitry A. Kazakov @ 2010-08-08 18:11 UTC (permalink / raw)


On Sun, 8 Aug 2010 10:13:51 -0700, Charles H. Sampson wrote:

> Robert A Duff <bobduff@shell01.TheWorld.com> wrote:
> 
>> Gene <gene.ressler@gmail.com> writes:
>> 
>>> In some cases, a built-in Ada construct will generate better code than
>>> a "hand-coded" equivalent in C++.  An example I ran into recently was
>>> incrementing a modular type.  In Ada, you say I := I + 1;, and the
>>> compiler takes care of "wrapping" to zero.  In C++, I've frequently
>>> seen people write i = (i + 1) % n; to simulate the same operation in
>>> the absence of modular types.
>> 
>> You will see me writing "I := (I + 1) mod N;" (for signed
>> integer I) in Ada.  ;-)
>>
>      I'm surprised, Bob.  Are you saying that you use signed integers
> in preference to a modular type for a variable that cycles?

Sometimes it has to be that way because modular types do not support
non-static bounds. E.g.:

procedure Foo (N : Positive) is
   type M is mod 2**N; -- You cannot do that!

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Efficiency of code generated by Ada compilers
  2010-08-08 17:13     ` Charles H. Sampson
  2010-08-08 18:11       ` Dmitry A. Kazakov
@ 2010-08-08 20:51       ` Robert A Duff
  2010-08-08 22:10         ` (see below)
                           ` (4 more replies)
  1 sibling, 5 replies; 94+ messages in thread
From: Robert A Duff @ 2010-08-08 20:51 UTC (permalink / raw)


csampson@inetworld.net (Charles H. Sampson) writes:

> Robert A Duff <bobduff@shell01.TheWorld.com> wrote:
>
>      I'm surprised, Bob.  Are you saying that you use signed integers
> in preference to a modular type for a variable that cycles?

Yes.  Unless I'm forced to use modular for some other reason
(e.g. I need one extra bit).

>...I use
> modular-typed variables and, if I've got my engineer's hat on, write
>
>      I := I + 1;  -- Modular variable.  Wraps.

You're not alone.  Even Tucker has advocated using modular
types for this sort of thing.

But I think an explicit "mod N" is clearer than a comment.

Variables that cycle are rare, so should be noted explicitly
in the code.  And, as Dmitry noted, modular types only work
when the lower bound is 0.  It's not unreasonable to have
a circular buffer indexed by a range 1..N.

See my point?  Still "surprised"?

> but I have to admit that in the heat of battle the comment might be
> omitted.  Even in that case, the variable name probably indicates that
> wrapping should be expected.

- Bob

P.S. Can anybody recall the definition of "not" on modular
types, when the modulus is not a power of 2, without
looking it up?  Hint: It makes no sense.  The only feature
that's worse than "modular types" is "modular types with a
non-power-of-2 modulus".  ;-)




* Re: Efficiency of code generated by Ada compilers
  2010-08-08 20:51       ` Robert A Duff
@ 2010-08-08 22:10         ` (see below)
  2010-08-08 22:22           ` Robert A Duff
  2010-08-09  4:46         ` Yannick Duchêne (Hibou57)
                           ` (3 subsequent siblings)
  4 siblings, 1 reply; 94+ messages in thread
From: (see below) @ 2010-08-08 22:10 UTC (permalink / raw)


On 08/08/2010 21:51, in article wcchbj4j09y.fsf@shell01.TheWorld.com,
"Robert A Duff" <bobduff@shell01.TheWorld.com> wrote:

> csampson@inetworld.net (Charles H. Sampson) writes:
> 
>> Robert A Duff <bobduff@shell01.TheWorld.com> wrote:
>> 
>>      I'm surprised, Bob.  Are you saying that you use signed integers
>> in preference to a modular type for a variable that cycles?
> 
> Yes.  Unless I'm forced to use modular for some other reason
> (e.g. I need one extra bit).
> 
>> ...I use
>> modular-typed variables and, if I've got my engineer's hat on, write
>> 
>>      I := I + 1;  -- Modular variable.  Wraps.
> 
> You're not alone.  Even Tucker has advocated using modular
> types for this sort of thing.
> 
> But I think an explicit "mod N" is clearer than a comment.
> 
> Variables that cycle are rare, so should be noted explicitly
> in the code.  And, as Dmitry noted, modular types only work
> when the lower bound is 0.  It's not unreasonable to have
> a circular buffer indexed by a range 1..N.
> 
> See my point?  Still "surprised"?
> 
>> but I have to admit that in the heat of battle the comment might be
>> omitted.  Even in that case, the variable name probably indicates that
>> wrapping should be expected.
> 
> - Bob
> 
> P.S. Can anybody recall the definition of "not" on modular
> types, when the modulus is not a power of 2, without
> looking it up?  Hint: It makes no sense.  The only feature
> that's worse than "modular types" is "modular types with a
> non-power-of-2 modulus".  ;-)

I don't use non-power-of-2 mod types, but I write emulators, and I would be
stuffed if I could not declare, e.g.:

   type KDF9_word is mod 2**48;

-- 
Bill Findlay
<surname><forename> chez blueyonder.co.uk






* Re: Efficiency of code generated by Ada compilers
  2010-08-08 22:10         ` (see below)
@ 2010-08-08 22:22           ` Robert A Duff
  0 siblings, 0 replies; 94+ messages in thread
From: Robert A Duff @ 2010-08-08 22:22 UTC (permalink / raw)


"(see below)" <yaldnif.w@blueyonder.co.uk> writes:

> I don't use non-power-of-2 mod types, but I write emulators, and I would be
> stuffed if I could not declare, e.g.:
>
>    type KDF9_word is mod 2**48;

Yes, that's one of the few cases where modular types make sense.

Unfortunately, nothing in the Ada RM requires implementations
to accept that.  GNAT does, on all targets.

But for a circular buffer of (say) 100 items?  No, I wouldn't
use modular types for the index.

- Bob




* Re: Efficiency of code generated by Ada compilers
  2010-08-08 15:49   ` Robert A Duff
  2010-08-08 17:13     ` Charles H. Sampson
@ 2010-08-08 22:35     ` tmoran
  2010-08-09 13:53       ` Robert A Duff
  2010-08-11  7:42       ` Charles H. Sampson
  1 sibling, 2 replies; 94+ messages in thread
From: tmoran @ 2010-08-08 22:35 UTC (permalink / raw)


> And it would be desirable, because in most cases the explicit
> "mod" (or "%") is more readable.

   I := Ring_Indices'succ(I);
vs
   I := (I + 1) mod Ring_Size;
or
   Bearing := Bearing + Turn_Angle;
vs
   Bearing := (Bearing + Turn_Angle) mod 360;




* Re: Efficiency of code generated by Ada compilers
  2010-08-08 20:51       ` Robert A Duff
  2010-08-08 22:10         ` (see below)
@ 2010-08-09  4:46         ` Yannick Duchêne (Hibou57)
  2010-08-09  5:52         ` J-P. Rosen
                           ` (2 subsequent siblings)
  4 siblings, 0 replies; 94+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-08-09  4:46 UTC (permalink / raw)


On Sun, 08 Aug 2010 22:51:37 +0200, Robert A Duff
<bobduff@shell01.theworld.com> wrote:
> The only feature
> that's worse than "modular types" is "modular types with a
> non-power-of-2 modulus".  ;-)
Illegal in SPARK.

(Not that I endorse the idea that non-power-of-2 modular types are ugly
beasts.)




* Re: Efficiency of code generated by Ada compilers
  2010-08-08 20:51       ` Robert A Duff
  2010-08-08 22:10         ` (see below)
  2010-08-09  4:46         ` Yannick Duchêne (Hibou57)
@ 2010-08-09  5:52         ` J-P. Rosen
  2010-08-09 13:28           ` Robert A Duff
  2010-08-10 12:26         ` Phil Clayton
  2010-08-11  6:42         ` Charles H. Sampson
  4 siblings, 1 reply; 94+ messages in thread
From: J-P. Rosen @ 2010-08-09  5:52 UTC (permalink / raw)


Robert A Duff wrote:
> P.S. Can anybody recall the definition of "not" on modular
> types, when the modulus is not a power of 2, without
> looking it up? 
Sure. Like any other operation. First you do the operation in the
mathematical sense, then you take the mathematical result mod T'Modulus.

> Hint: It makes no sense.  
It makes sense because it is consistent with all other operations on
modular types. Whether it is usefule is another story ;-)

> The only feature
> that's worse than "modular types" is "modular types with a
> non-power-of-2 modulus".  ;-)
Hmmm... Hash coded tables?

-- 
---------------------------------------------------------
           J-P. Rosen (rosen@adalog.fr)
Visit Adalog's web site at http://www.adalog.fr




* Re: Efficiency of code generated by Ada compilers
  2010-08-09  5:52         ` J-P. Rosen
@ 2010-08-09 13:28           ` Robert A Duff
  2010-08-09 18:42             ` Jeffrey Carter
  2010-08-09 19:33             ` Yannick Duchêne (Hibou57)
  0 siblings, 2 replies; 94+ messages in thread
From: Robert A Duff @ 2010-08-09 13:28 UTC (permalink / raw)


"J-P. Rosen" <rosen@adalog.fr> writes:

> Robert A Duff wrote:
>> P.S. Can anybody recall the definition of "not" on modular
>> types, when the modulus is not a power of 2, without
>> looking it up? 
> Sure. Like any other operation. First you do the operation in the
> mathematical sense, then you take the mathematical result mod T'Modulus.

"and", "or", and "xor" are defined that way (bit-wise logical operation,
followed by "mod").  "not" is NOT defined as bit-wise "not" followed
by "mod".

    type T is mod 3;
    X : T := 2;
    ...
    X := not X;

Now X = 0.  But 2 = 2#10#, bit-wise "not" of that is 2#01#, and
"mod" that by 3 gives 2#1#, i.e. 1, not 0.
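Spelled out in C++ terms (a sketch; `MOD` is my name for the modulus,
here 3):

```cpp
const unsigned MOD = 3;

// What Ada actually defines (RM 4.5.6): "not X" on a modular type
// is the high bound of the type minus X, i.e. (MOD - 1) - X.
unsigned ada_not(unsigned x) {
    return (MOD - 1) - x;
}

// The "flip the bits, then reduce" reading: mask to the two value
// bits, invert them, and take the result mod 3.
unsigned bitwise_then_mod(unsigned x) {
    return (~x & 0x3u) % MOD;
}
```

For X = 2 the first gives 0 and the second gives 1, which is exactly
the discrepancy above.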

>> Hint: It makes no sense.
> It makes sense because it is consistent with all other operations on
> modular types. Whether it is usefule is another story ;-)

It is neither consistent nor useful.  I'll stick with
"makes no sense".

>> The only feature
>> that's worse than "modular types" is "modular types with a
>> non-power-of-2 modulus".  ;-)
> Hmmm... Hash coded tables?

Signed integers work fine for that.  Except when you need
that one extra bit -- that's the problem.

- Bob




* Re: Efficiency of code generated by Ada compilers
  2010-08-08 22:35     ` tmoran
@ 2010-08-09 13:53       ` Robert A Duff
  2010-08-09 17:59         ` tmoran
  2010-08-11  7:42       ` Charles H. Sampson
  1 sibling, 1 reply; 94+ messages in thread
From: Robert A Duff @ 2010-08-09 13:53 UTC (permalink / raw)


tmoran@acm.org writes:

>> And it would be desirable, because in most cases the explicit
>> "mod" (or "%") is more readable.
>
>    I := Ring_Indices'succ(I);
> vs
>    I := (I + 1) mod Ring_Size;
> or
>    Bearing := Bearing + Turn_Angle;
> vs
>    Bearing := (Bearing + Turn_Angle) mod 360;

The explicit "mod"s are more readable, I think.

For angles, you probably want a fixed-point type, and there's no such
thing as modular fixed-point types in Ada.  There was a proposal to add
that to Ada 2005, but I think ARG decided not to do that, IIRC.

Note that you can always write functions that do a "mod",
if that's what you want -- Next_Ring_Index, or "+" on fixed-point
angles.

- Bob




* Re: Efficiency of code generated by Ada compilers
  2010-08-09 13:53       ` Robert A Duff
@ 2010-08-09 17:59         ` tmoran
  2010-08-09 19:36           ` Yannick Duchêne (Hibou57)
  0 siblings, 1 reply; 94+ messages in thread
From: tmoran @ 2010-08-09 17:59 UTC (permalink / raw)


>>    I := Ring_Indices'succ(I);
>> vs
>>    I := (I + 1) mod Ring_Size;
>> or
>>    Bearing := Bearing + Turn_Angle;
>> vs
>>    Bearing := (Bearing + Turn_Angle) mod 360;
>
>The explicit "mod"s are more readable, I think.

Interesting.  I think the opposite.  The explicit mod versions take a
computer-centric, rather than problem-centric, view, which is the opposite
of the usual "Ada approach".  They are also subject to the possible error
of failing to write the "mod" part, whereas with modular types the
compiler has the responsibility to remember to do the mod operation.




* Re: Efficiency of code generated by Ada compilers
  2010-08-09 13:28           ` Robert A Duff
@ 2010-08-09 18:42             ` Jeffrey Carter
  2010-08-09 19:05               ` Robert A Duff
  2010-08-09 19:33             ` Yannick Duchêne (Hibou57)
  1 sibling, 1 reply; 94+ messages in thread
From: Jeffrey Carter @ 2010-08-09 18:42 UTC (permalink / raw)


On 08/09/2010 06:28 AM, Robert A Duff wrote:
>
> Signed integers work fine for that.  Except when you need
> that one extra bit -- that's the problem.

We've been able to do

type T is range 0 .. 2 ** N - 1;
for T'Size use N;

since Ada 83. So the only problem is when you want the equivalent of

type T is mod System.Max_Binary_Modulus;

since System.Max_Binary_Modulus - 1 > System.Max_Integer.

-- 
Jeff Carter
"Run away! Run away!"
Monty Python and the Holy Grail
58

--- news://freenews.netfront.net/ - complaints: news@netfront.net ---




* Re: Efficiency of code generated by Ada compilers
  2010-08-09 18:42             ` Jeffrey Carter
@ 2010-08-09 19:05               ` Robert A Duff
  2010-08-10 10:00                 ` Jacob Sparre Andersen
  0 siblings, 1 reply; 94+ messages in thread
From: Robert A Duff @ 2010-08-09 19:05 UTC (permalink / raw)


Jeffrey Carter <spam.jrcarter.not@spam.not.acm.org> writes:

> On 08/09/2010 06:28 AM, Robert A Duff wrote:
>>
>> Signed integers work fine for that.  Except when you need
>> that one extra bit -- that's the problem.
>
> We've been able to do
>
> type T is range 0 .. 2 ** N - 1;
> for T'Size use N;
>
> since Ada 83.

Yes.  And T'Size = N by default in Ada >= 95, so I would write:

    pragma Assert(T'Size = N);

instead of the Size clause.

>...So the only problem is when you want the equivalent of
>
> type T is mod System.Max_Binary_Modulus;
>
> since System.Max_Binary_Modulus - 1 > System.Max_Integer.

Exactly.  (Well, I don't think that's required by the RM,
but in all Ada implementations it's probably the case
that Max_Binary_Modulus = 2*(Max_Integer+1).)

- Bob




* Re: Efficiency of code generated by Ada compilers
  2010-08-09 13:28           ` Robert A Duff
  2010-08-09 18:42             ` Jeffrey Carter
@ 2010-08-09 19:33             ` Yannick Duchêne (Hibou57)
  2010-08-09 21:42               ` Robert A Duff
  1 sibling, 1 reply; 94+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-08-09 19:33 UTC (permalink / raw)


On Mon, 09 Aug 2010 15:28:02 +0200, Robert A Duff
<bobduff@shell01.theworld.com> wrote:
>> Hmmm... Hash coded tables?
>
> Signed integers work fine for that.  Except when you need
> that one extra bit -- that's the problem.

All of the hash functions I have seen so far use modular types. Do you
know of any computing on integers?




* Re: Efficiency of code generated by Ada compilers
  2010-08-09 17:59         ` tmoran
@ 2010-08-09 19:36           ` Yannick Duchêne (Hibou57)
  2010-08-09 21:38             ` Robert A Duff
  0 siblings, 1 reply; 94+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-08-09 19:36 UTC (permalink / raw)


On Mon, 09 Aug 2010 19:59:45 +0200, <tmoran@acm.org> wrote:
> They are also subject to the possible error
> of failing to write the "mod" part, whereas with modular types the
> compiler has the responsibility to remember to do the mod operation.
DRY principle in action




* Re: Efficiency of code generated by Ada compilers
  2010-08-09 19:36           ` Yannick Duchêne (Hibou57)
@ 2010-08-09 21:38             ` Robert A Duff
  0 siblings, 0 replies; 94+ messages in thread
From: Robert A Duff @ 2010-08-09 21:38 UTC (permalink / raw)


"Yannick Duchêne (Hibou57)" <yannick_duchene@yahoo.fr> writes:

> On Mon, 09 Aug 2010 19:59:45 +0200, <tmoran@acm.org> wrote:
>> They are also subject to the possible error
>> of failing to write the "mod" part, whereas with modular types the
>> compiler has the responsibility to remember to do the mod operation.
> DRY principle in action

It's a good point.  But you can get DRY without an implicit-mod type.
For example, for a circular buffer, you can write a Next_Index
function that does the explicit "mod", so you don't scatter "mod"
ops all over the code.  You don't need modular types for that.
And you don't want any "*"-with-implicit-mod operator for the index.
Nor do you want "xor" on the index.
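In C++ terms (a sketch; the names and the bound are mine), the same
idea for a 1-based ring of N slots would be:

```cpp
const unsigned N = 100;  // ring capacity; valid indices are 1 .. N

// The one place that knows about wrap-around; callers just write
// i = next_index(i) and never repeat the "mod" themselves.
unsigned next_index(unsigned i) {
    return i % N + 1;  // 1..N-1 advance by one; N wraps back to 1
}
```

This keeps the lower bound at 1, which an implicit-mod type cannot do.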

- Bob




* Re: Efficiency of code generated by Ada compilers
  2010-08-09 19:33             ` Yannick Duchêne (Hibou57)
@ 2010-08-09 21:42               ` Robert A Duff
  0 siblings, 0 replies; 94+ messages in thread
From: Robert A Duff @ 2010-08-09 21:42 UTC (permalink / raw)


"Yannick Duchêne (Hibou57)" <yannick_duchene@yahoo.fr> writes:

> On Mon, 09 Aug 2010 15:28:02 +0200, Robert A Duff
> <bobduff@shell01.theworld.com> wrote:
>>> Hmmm... Hash coded tables?
>>
>> Signed integers work fine for that.  Except when you need
>> that one extra bit -- that's the problem.
>
> All of the hash functions I have seen so far use modular types. Do you
> known ones computing on integers ?

Modular types are integer types in Ada terms.  So that should be
"...computing on SIGNED integers".  Sorry for nitpicking.  ;-)

Well, all the ones I wrote in Ada 83, or Pascal, or... used
signed integers.

But I concede the point -- modular types are appropriate for
hash functions.

- Bob



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-09 19:05               ` Robert A Duff
@ 2010-08-10 10:00                 ` Jacob Sparre Andersen
  2010-08-10 12:39                   ` Robert A Duff
  0 siblings, 1 reply; 94+ messages in thread
From: Jacob Sparre Andersen @ 2010-08-10 10:00 UTC (permalink / raw)


Robert A Duff wrote:
> Jeffrey Carter <spam.jrcarter.not@spam.not.acm.org> writes:

>> type T is range 0 .. 2 ** N - 1;
>> for T'Size use N;

> Yes.  And T'Size = N by default in Ada >= 95, so I would write:
>
>     pragma Assert(T'Size = N);
>
> instead of the Size clause.

But don't you lose the compile-time error in case the compiler
cannot make T'Size = N?

Jacob
-- 
PNG: Pretty Nice Graphics



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-08 20:51       ` Robert A Duff
                           ` (2 preceding siblings ...)
  2010-08-09  5:52         ` J-P. Rosen
@ 2010-08-10 12:26         ` Phil Clayton
  2010-08-10 12:57           ` Yannick Duchêne (Hibou57)
  2010-08-11  6:42         ` Charles H. Sampson
  4 siblings, 1 reply; 94+ messages in thread
From: Phil Clayton @ 2010-08-10 12:26 UTC (permalink / raw)


On Aug 8, 9:51 pm, Robert A Duff <bobd...@shell01.TheWorld.com> wrote:

> P.S. Can anybody recall the definition of "not" on modular
> types, when the modulus is not a power of 2, without
> looking it up?  Hint: It makes no sense.  The only feature
> that's worse than "modular types" is "modular types with a
> non-power-of-2 modulus".  ;-)

I have the identity

  not X + 1 = -X

ingrained in my brain because I wrote too much in assembly language!
I haven't checked (honest!) but I'm sure Ada just uses this identity
to define "not" for modular types, i.e.

  not X = -X - 1  (rhs wraps around)

In my view, this is still useful when the modulus is not a power of
two, it's just that "not" is a terrible name for the operation.

Only yesterday, I was working in another language with arrays starting
at index 0 and needed the elements in reverse: the required effect was
achieved by subscripting from the end rather than the start.  Had I
been working in Ada with an array indexed by a modular type, the
effect of reversing the elements could have been achieved by replacing

  A(I)

with

  A(not I)

which I think is nicer than

  A(-I - 1)

provided people have the intuitive understanding that "not" is
effectively an operator that reverses the elements of a modular type.
In the other language I was using, I effectively had to write

  A(A'Length - I - 1)

(not valid Ada, of course.)

Perhaps "reverse" would be a better name for the operator "not" when
the modulus is not a power of two?

Phil



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-10 10:00                 ` Jacob Sparre Andersen
@ 2010-08-10 12:39                   ` Robert A Duff
  0 siblings, 0 replies; 94+ messages in thread
From: Robert A Duff @ 2010-08-10 12:39 UTC (permalink / raw)


Jacob Sparre Andersen <sparre@nbi.dk> writes:

> Robert A Duff wrote:
>> Jeffrey Carter <spam.jrcarter.not@spam.not.acm.org> writes:
>
>>> type T is range 0 .. 2 ** N - 1;
>>> for T'Size use N;
>
>> Yes.  And T'Size = N by default in Ada >= 95, so I would write:
>>
>>     pragma Assert(T'Size = N);
>>
>> instead of the Size clause.
>
> But don't you loose the compile time error in case the compiler
> cannot make T'Size = N?

No.  Compilers are required to make T'Size = N.  See 13.3(54-55).
The Assert is just a hint that we care about that fact.

Suppose N = 32, and I make a mistake, and write:

    pragma Assert (T'Size = 33);

GNAT will warn that this is False, which is what I want.
(And in -gnatwe mode, that's an error.)

If I instead wrote "for T'Size use 33;", the compiler
will not catch the error, but will instead use a strange
Size.

That's why I don't like to use rep clauses to assert that
the compiler chose the obviously-correct representation --
the compiler will complain if it's too small, but not
if it's too big.

Consider:

    type M is mod 32; -- Oops, I meant 2**32.
    for M'Size use 32; -- No error here.

I've seen this bug in real code.

- Bob



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-10 12:26         ` Phil Clayton
@ 2010-08-10 12:57           ` Yannick Duchêne (Hibou57)
  2010-08-10 14:03             ` Elias Salomão Helou Neto
  0 siblings, 1 reply; 94+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-08-10 12:57 UTC (permalink / raw)


Le Tue, 10 Aug 2010 14:26:25 +0200, Phil Clayton  
<phil.clayton@lineone.net> a écrit:
>   A(A'Length - I - 1)
>
> (not valid Ada, of course.)

A(A'Last - (I - A'First))

> Perhaps "reverse" would be a better name for the operator "not" when
> the modulus is not a power of two?
Or "wrapped" ? But this would only make sens on an array instance basis,  
not on type basis. If the array actual range is not the same as the index  
type range, it fails. So this could not be a type primitive (operator).  
May be an array attribute ? A'From_Last ? A'Last_Based ?

"reverse" is already there for loop statements. I feel this makes for sens  
for loop, as what is reverse, is not the array index: this is the  
iteration which goes reversed order.

-- 
There is even better than a pragma Assert: a SPARK --# check.
--# check C and WhoKnowWhat and YouKnowWho;
--# assert Ada;
--  i.e. forget about previous premises which leads to conclusion
--  and start with new conclusion as premise.



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-07  7:53 ` Dmitry A. Kazakov
@ 2010-08-10 13:52   ` Elias Salomão Helou Neto
  2010-08-10 14:24     ` Shark8
                       ` (5 more replies)
  0 siblings, 6 replies; 94+ messages in thread
From: Elias Salomão Helou Neto @ 2010-08-10 13:52 UTC (permalink / raw)


On Aug 7, 4:53 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
wrote:
> On Fri, 6 Aug 2010 13:21:48 -0700 (PDT), Elias Salomão Helou Neto wrote:
>
> > I would like to know how does code generated by Ada compilers compare
> > to those generated by C++.
>
> The short answer is yes, of course.
>
> The long answer is that such comparisons are quite difficult to perform
> accurately.

Yes, I know that. I am, however, writing code within that 1% of
applications that would be tremendously affected if there is no way to
access arrays with no range checking. So I am asking very precisely:
does Ada allow me to do non range-checked access to arrays?

>
> > I use C++ for numerical software
> > implementation, but I am trying to find alternatives.
>
> Ada is especially good for this, because it has an elaborated system of
> numeric types with specified accuracy and precision (and behavior), which
> C++ lacks.

This is what attracted me, but, as you may guess, I cannot spend
months learning the language if I am not sure about some very specific
issues, such as non-range-checked array indexing.

> > One thing,
> > however, I cannot trade for convenience is efficiency. Will Ada
> > compiled code possibly be as efficient as that generated by C++
> > compilers?
>
> Certainly yes. Potentially Ada code can be more efficient, because in Ada
> your program usually tells the compiler more than in C++. This information
> can be used in optimization.

All right. This one is easy to believe!

> > Also, I do need to have something similar to C++ "templated
> > metaprogramming" techniques.
>
> Ada has generics, which are roughly the same as templates. Unlike in C++,
> generics are contracted and not automatically instantiated.

What exactly does it mean? Is it something like run-time
instantiation?

> > In particular, C++0x will introduce
> > variadic templates, which will allow us to write templates that will
> > generate efficient, type-safe, variable-argument functions. Is there
> > anything like that in Ada?
>
> No.

Hum... I intend to write an efficient n-dimensional matrix. This would
leave me the option of individually writing element-accessing code
for each possible instance of my generic class, if I wish to do it
through a member function (or whatever is equivalent to that in Ada)
that takes as many arguments as there are dimensions, right?

> > If any of the above questions is to be negatively answered, I ask: why
> > does Ada even exist?
>
> Do you mean variadic templates here? Seriously?

I did not mean that, but I am terribly sorry for this paragraph since,
as Gene commented, this is a truly unreasonable question. I am sure
there are plenty of reasons why Ada should exist that go far beyond
the questions I pointed out :) - no I am not a troll.

> > And further, is there any language which is
> > _truly_ better (regarding code maintainability, readability and
> > developing ease) than C++ and as overhead-free as it?
>
> Maintainability, readability and developing ease are sufficiently dependent
> on *not* using things like C++ templates. Even more variadic templates!

Could you elaborate on that? I do not agree and I do have lots of
experience in writing and maintaining template code in C++. They are
as easy to read, maintain and develop as any C++ code, or should I say
as difficult? So the difficulty of C++ lies not so much in templated
code as in everything else in the language :) Even if it is
difficult, I like C++, but I have long been feeling that there _must_
be better options. I am right now looking for the right one.

> Note that for numeric applications templates do not help much. Consider the
> following problem. Suppose you have to implement some mathematical function
> with a known algorithm and put it into a library. (That the latter is not
> possible with templates anyway is beside the point.)

You seem to imply that templated code cannot be part of a library, but
it definitely can. Just consider the possibility of distributing the
source, which is what I wish to do. STL does just that. Even if you do
not want to go open source, it is easier to write the code once and
instantiate it for every type your users are supposed to use, maybe
wrapped within some overloaded function.

> How do you do it to work for say
> float, double etc? Metaprogramming you said. You could try implementing it
> as a template<class number>, but the parameter here is not the actual type
> you need to use in the computations in order to make the result accurate
> within the precision of the type number. In C++ you cannot even learn the
> precision of a template argument.

You can! There is the std::numeric_limits<class number> template that
does allow you to learn nearly everything you need about the numeric
type "number". You can even specialize this template to your own user-
defined type so it will act as a fundamental arithmetic type in most
aspects.

> In Ada at least you can (Ada Reference
> Manual 3.5.8). But considering:
>
> generic
>    type Real is digits <>;  -- To work with any floating-point type
> function Generic_Elliptic_Integral (X : Real; K : Real) return Real;
>
> The following implementation is illegal:
>
> function Generic_Elliptic_Integral (X : Real; K : Real) return Real is
>    type Internal is digits Real'Digits * 2;  -- You cannot do that!
> begin
>    ...

I don't fully understand the code, but it does seem to be very
intuitive. What does

  type Real is digits <>;

mean? Is "digits" a keyword of the language? I guess Ada groups
fundamental types in categories and "digits" mean we must use some
floating point type as the template argument, right? It sounds like a
good idea, especially if things like that could be done for user
defined types, i.e., if I can define my own type that "is digits <>".

> --
> Regards,
> Dmitry A. Kazakovhttp://www.dmitry-kazakov.de




^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-10 12:57           ` Yannick Duchêne (Hibou57)
@ 2010-08-10 14:03             ` Elias Salomão Helou Neto
  2010-08-10 14:27               ` Yannick Duchêne (Hibou57)
                                 ` (2 more replies)
  0 siblings, 3 replies; 94+ messages in thread
From: Elias Salomão Helou Neto @ 2010-08-10 14:03 UTC (permalink / raw)


On Aug 10, 9:57 am, Yannick Duchêne (Hibou57)
<yannick_duch...@yahoo.fr> wrote:
> Le Tue, 10 Aug 2010 14:26:25 +0200, Phil Clayton  
> <phil.clay...@lineone.net> a écrit:
>
> >   A(A'Length - I - 1)
>
> > (not valid Ada, of course.)
>
> A(A'Last - (I - A'First))
>
> > Perhaps "reverse" would be a better name for the operator "not" when
> > the modulus is not a power of two?
>
> Or "wrapped" ? But this would only make sens on an array instance basis,  
> not on type basis. If the array actual range is not the same as the index  
> type range, it fails. So this could not be a type primitive (operator).  
> May be an array attribute ? A'From_Last ? A'Last_Based ?
>
> "reverse" is already there for loop statements. I feel this makes for sens  
> for loop, as what is reverse, is not the array index: this is the  
> iteration which goes reversed order.
>
> --
> There is even better than a pragma Assert: a SPARK --# check.
> --# check C and WhoKnowWhat and YouKnowWho;
> --# assert Ada;
> --  i.e. forget about previous premises which leads to conclusion
> --  and start with new conclusion as premise.

It is a pity that this post became a technical discussion on array
indexing. A simple question that could be asked in a single line is:
can Ada access arrays without range checking? My algorithm needs no
wrapping, nor does it need range checking!

Please, try avoiding pointless discussions on wrapped/non-wrapped
types. Recall that I do not even know Ada's syntax, let alone its
sophisticated type system. Your discussion can only possibly confuse
me.

Elias.



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-10 13:52   ` Elias Salomão Helou Neto
@ 2010-08-10 14:24     ` Shark8
  2010-08-10 14:28     ` Shark8
                       ` (4 subsequent siblings)
  5 siblings, 0 replies; 94+ messages in thread
From: Shark8 @ 2010-08-10 14:24 UTC (permalink / raw)


On Aug 10, 7:52 am, Elias Salomão Helou Neto <eshn...@gmail.com>
wrote:

> > function Generic_Elliptic_Integral (X : Real; K : Real) return Real is
> >    type Internal is digits Real'Digits * 2;  -- You cannot do that!
> > begin
> >    ...
>
> I don't fully understand the code, but it does seem to be very
> intuitive. What does
>
>   type Real is digits <>;
>
> mean? Is "digits" a keyword of the language? I guess Ada groups
> fundamental types in categories and "digits" mean we must use some
> floating point type as the template argument, right? It sounds like a
> good idea, specially if things like that could be done for user
> defined types, i.e., if I can define my own type that "is digits <>".
>
> > --
> > Regards,
> > Dmitry A. Kazakovhttp://www.dmitry-kazakov.de
>
>

"Digits" is a keyword in the language, in particular it is used to
define the number of digits-of-precision in a floating-point type. The
<> in this case is telling the compiler that the type 'Real', which is
the name some floating-point type will be referred to in the generic;
the purpose is to allow the programmer the ability to write code that
is independent of the precision but dependent on the [properties of
the] class of type.



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-10 14:03             ` Elias Salomão Helou Neto
@ 2010-08-10 14:27               ` Yannick Duchêne (Hibou57)
  2010-08-10 22:50                 ` anon
  2010-08-10 14:31               ` Shark8
  2010-08-11  7:14               ` Charles H. Sampson
  2 siblings, 1 reply; 94+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-08-10 14:27 UTC (permalink / raw)


Le Tue, 10 Aug 2010 16:03:13 +0200, Elias Salomão Helou Neto  
<eshneto@gmail.com> a écrit:
> It is a pity that this post became a technical discussion on array
> indexing.
Could I, if you please, talk to Phil without your prior acknowledgment? ;)

While I understand why you reacted (though not the way you did... closing
parenthesis)

> A simple question that could be asked in a single line is:
> can Ada access arrays without range checking?
Sorry, due to recent trouble with my news reader, I've lost the original post.

However, on the basis of this sentence, I would say: if you do not want
range checking, then just disable it with a compiler option. But be warned
that you will then be unable to catch runtime errors. There is, though, a
way to safely drop these checks: validation with the SPARK checker (just
ask if you want to learn about it).

If your concern is just range checking, the answer is as simple as
that.

If you use GNAT, you may insert "-gnatp" in the command line arguments.
http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gnat_ugn_unw/Run_002dTime-Checks.html

If you use GNAT from the GPS environment, you may open the "Project" menu,  
then the "Edit project properties" submenu. Then choose the "Switches"  
tab, then the "Ada" tab and check the "Suppress all check" check box or  
uncheck the "Overflow check" check box.

Provided I did not misunderstand what you meant.

-- 
There is even better than a pragma Assert: a SPARK --# check.
--# check C and WhoKnowWhat and YouKnowWho;
--# assert Ada;
--  i.e. forget about previous premises which leads to conclusion
--  and start with new conclusion as premise.



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-10 13:52   ` Elias Salomão Helou Neto
  2010-08-10 14:24     ` Shark8
@ 2010-08-10 14:28     ` Shark8
  2010-08-10 15:01     ` Robert A Duff
                       ` (3 subsequent siblings)
  5 siblings, 0 replies; 94+ messages in thread
From: Shark8 @ 2010-08-10 14:28 UTC (permalink / raw)


On Aug 10, 7:52 am, Elias Salomão Helou Neto <eshn...@gmail.com>
wrote:

> > function Generic_Elliptic_Integral (X : Real; K : Real) return Real is
> >    type Internal is digits Real'Digits * 2;  -- You cannot do that!
> > begin
> >    ...
>
> I don't fully understand the code, but it does seem to be very
> intuitive. What does
>
>   type Real is digits <>;
>
> mean? Is "digits" a keyword of the language? I guess Ada groups
> fundamental types in categories and "digits" mean we must use some
> floating point type as the template argument, right? It sounds like a
> good idea, specially if things like that could be done for user
> defined types, i.e., if I can define my own type that "is digits <>".
>
> > --
> > Regards,
> > Dmitry A. Kazakovhttp://www.dmitry-kazakov.de
>
>

"Digits" is a keyword in the language, in particular it is used to
define the number of digits-of-precision in a floating-point type. The
<> in this case is telling the compiler that the type 'Real', which is
the name some floating-point type will be referred to in the generic;
the purpose is to allow the programmer the ability to write code that
is independent of the precision but dependent on the [properties of
the] class of type.



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-10 14:03             ` Elias Salomão Helou Neto
  2010-08-10 14:27               ` Yannick Duchêne (Hibou57)
@ 2010-08-10 14:31               ` Shark8
  2010-08-11  7:14               ` Charles H. Sampson
  2 siblings, 0 replies; 94+ messages in thread
From: Shark8 @ 2010-08-10 14:31 UTC (permalink / raw)


On Aug 10, 8:03 am, Elias Salomão Helou Neto <eshn...@gmail.com>
wrote:
> On Aug 10, 9:57 am, Yannick Duchêne (Hibou57)
>
>
>
> <yannick_duch...@yahoo.fr> wrote:
> > Le Tue, 10 Aug 2010 14:26:25 +0200, Phil Clayton  
> > <phil.clay...@lineone.net> a écrit:
>
> > >   A(A'Length - I - 1)
>
> > > (not valid Ada, of course.)
>
> > A(A'Last - (I - A'First))
>
> > > Perhaps "reverse" would be a better name for the operator "not" when
> > > the modulus is not a power of two?
>
> > Or "wrapped" ? But this would only make sens on an array instance basis,  
> > not on type basis. If the array actual range is not the same as the index  
> > type range, it fails. So this could not be a type primitive (operator).  
> > May be an array attribute ? A'From_Last ? A'Last_Based ?
>
> > "reverse" is already there for loop statements. I feel this makes for sens  
> > for loop, as what is reverse, is not the array index: this is the  
> > iteration which goes reversed order.
>
> > --
> > There is even better than a pragma Assert: a SPARK --# check.
> > --# check C and WhoKnowWhat and YouKnowWho;
> > --# assert Ada;
> > --  i.e. forget about previous premises which leads to conclusion
> > --  and start with new conclusion as premise.
>
> It is a pity that this post became a technical discussion on array
> indexing. A simple question that could be asked in a single line is:
> can Ada access arrays without range checking? My algorithm needs not
> wrapping, neither it needs range checking!
>
> Please, try avoiding pointless discussions on wrapped/non-wrapped
> types. Recall that I do not even know Ada's syntax, let alone its
> sophisticated type system. Your discussion can only possibly confuse
> me.
>
> Elias.

Yes, you can turn off run-time checking of array indices.
Pragma Suppress is what you want: http://en.wikibooks.org/wiki/Ada_Programming/Pragmas/Suppress

The example given exactly answers your question:
  My_Array : array (1 .. 100) of Integer;
  pragma Suppress (Index_Check);
  ...
  Some_Variable := My_Array (1000); -- Erroneous execution, here we come!



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-10 13:52   ` Elias Salomão Helou Neto
  2010-08-10 14:24     ` Shark8
  2010-08-10 14:28     ` Shark8
@ 2010-08-10 15:01     ` Robert A Duff
  2010-08-10 15:14       ` Yannick Duchêne (Hibou57)
  2010-08-10 15:10     ` Georg Bauhaus
                       ` (2 subsequent siblings)
  5 siblings, 1 reply; 94+ messages in thread
From: Robert A Duff @ 2010-08-10 15:01 UTC (permalink / raw)


Elias Salomão Helou Neto <eshneto@gmail.com> writes:

> does Ada allow me to do non range-checked access to arrays?

Yes.  Pragma Suppress can be used to turn off some or all run-time
checks, locally or globally.  As far as I know, all Ada compilers
have some way to do the same thing via compiler options
(command-line switches or whatever).

>> Ada has generics, which are roughly the same as templates. Unlike in C++,
>> generics are contracted and not automatically instantiated.
>
> What exactly does it mean? Is it something like run-time
> instantiation?

No, Ada generics work the same way as C++ templates -- the typical
implementation is that each instance gets a separate copy of
the code; instantiation happens at compile time.  There is no
run-time instantiation.

"Contracted" above means that if the instantiation obeys the contract,
then you can't get any compilation errors in the body of the template.
For example, the generic says it wants a type with certain operations,
or with certain properties, and if the instance supplies such a type,
all is well.

"Not automatically instantiated" above means that each instantiation
of a generic appears explicitly in the code, as a separate
declaration.

> I don't fully understand the code, but it does seem to be very
> intuitive. What does
>
>   type Real is digits <>;
>
> mean? Is "digits" a keyword of the language?

Yes.  To declare a floating-point type:

    type My_Float is digits 6;

And the compiler will pick some hardware type with at
least the requested precision.  "My_Float'Digits" queries
the precision.  Or:

    type My_Float is digits 7 range 0.0 .. 1.0;

If you ask for more digits than are supported, the compiler
will complain.

The "digits <>" notation means this is a generic formal type,
and the instantiation should pass in an actual type that
is some floating-point type.  Different instantiations might
pass different types with different 'Digits, and inside the
generic you can query it.  In other words, the "<>" means
roughly "unknown".

If you try to pass a non-floating-point type to Real,
you will get a compile-time error.

- Bob



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-10 13:52   ` Elias Salomão Helou Neto
                       ` (2 preceding siblings ...)
  2010-08-10 15:01     ` Robert A Duff
@ 2010-08-10 15:10     ` Georg Bauhaus
  2010-08-10 15:32     ` Dmitry A. Kazakov
  2010-08-10 22:26     ` Randy Brukardt
  5 siblings, 0 replies; 94+ messages in thread
From: Georg Bauhaus @ 2010-08-10 15:10 UTC (permalink / raw)


On 10.08.10 15:52, Elias Salomão Helou Neto wrote:
> On Aug 7, 4:53 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
> wrote:
>> On Fri, 6 Aug 2010 13:21:48 -0700 (PDT), Elias Salomão Helou Neto wrote:
>>
>>> I would like to know how does code generated by Ada compilers compare
>>> to those generated by C++.
>>
>> The short answer is yes, of course.
>>
>> The long answer is that such comparisons are quite difficult to perform
>> accurately.
> 
> Yes, I know that. I am, however, writing code within that 1% of
> applications that would be tremendously affected if there is no way to
> access arrays with no range checking. So I am asking very precisely:
> does Ada allow me to do non range-checked access to arrays?

Yes.

The language mandates that checks can be turned off.
See LRM 11.5, for example, of how to give permission.

You can document this is source text by placing a language
defined pragma Suppress in some scope.  (There is also
a pragma Unsuppress!) Compilers can be advised to turn
checks on or off, either specific checks, or all checks.

To see the effects, get GNAT or some other compiler;
with GNAT, translate the same source with suitable options
and look at the code generated in each case.


$ gnatmake -c -O2 -funroll-loops -gnatn -gnatp a.adb
vs
$ gnatmake -c -gnato -fstack-check -gnata a.adb

procedure A is
   pragma Suppress (Range_Check); -- or All_Checks

   type My_Index is range 10 .. 55;
   subtype Only_Some is My_Index range 10 .. 12;

   type A_Few_Bucks is digits 4;

   type Data is array (My_Index range <>) of A_Few_Bucks;
   subtype Few_Data is Data (Only_Some);

   X, Y, Z: Few_Data;

begin
   X := Few_Data'(1.2, 3.4, 5.6);
   Y := Few_Data'(3.21, 0.19, 3.3E+12);

   -- hand made sum placed in Z:
   for K in Z'Range loop
      Z(K) := X(K) + Y(K);
   end loop;

   pragma Assert (Z(10) in 4.4 .. 4.5
              and Z(11) in 3.5 .. 3.6
              and Z(12) >= 3.3E+12);

end A;



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-10 15:01     ` Robert A Duff
@ 2010-08-10 15:14       ` Yannick Duchêne (Hibou57)
  2010-08-10 18:32         ` Robert A Duff
  0 siblings, 1 reply; 94+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-08-10 15:14 UTC (permalink / raw)


Le Tue, 10 Aug 2010 17:01:10 +0200, Robert A Duff  
<bobduff@shell01.theworld.com> a écrit:
> No, Ada generics work the same way as C++ templates -- the typical
> implementation is that each instance gets a separate copy of
> the code
I wonder if I've understood you correctly. It seems Ada uses code sharing
(which is the reason for some restrictions on formal parameters). Code
duplication is the C++ template way, not the Ada generic package way.
Or am I mistaken somewhere?



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-10 13:52   ` Elias Salomão Helou Neto
                       ` (3 preceding siblings ...)
  2010-08-10 15:10     ` Georg Bauhaus
@ 2010-08-10 15:32     ` Dmitry A. Kazakov
  2010-08-10 22:26     ` Randy Brukardt
  5 siblings, 0 replies; 94+ messages in thread
From: Dmitry A. Kazakov @ 2010-08-10 15:32 UTC (permalink / raw)


On Tue, 10 Aug 2010 06:52:21 -0700 (PDT), Elias Salomão Helou Neto wrote:

> So I am asking very precisely:
> does Ada allow me to do non range-checked access to arrays?

Yes, in many ways. Others have pointed out how to suppress checks. I would
rather advise you to write programs so that the compiler would omit them.
Consider this:

   type Vector is array (Positive range <>) of Long_Float;
   function Sum (X : Vector) return Long_Float;

Implementation:

   function Sum (X : Vector) return Long_Float is
      Result : Long_Float := 0.0;
   begin
      for Index in X'Range loop
         Result := Result + X (Index);
      end loop;
      return Result;
   end Sum;

Since the compiler knows that Index is in X'Range (you told it that), it
will generate no subscript checks.

> This is what attracted me, but, as you may guess, I cannot spend
> months learning the language if I am not sure about some very specific
> issues, such non RC array indexing.

Ada arrays are simple and intuitive. And there are multidimensional arrays
too. They certainly do not require months to learn.
 
>>> Also, I do need to have something similar to C++ "templated
>>> metaprogramming" techniques.
>>
>> Ada has generics, which are roughly the same as templates. Unlike in C++,
>> generics are contracted and not automatically instantiated.
> 
> What exactly does it mean? Is it something like run-time
> instantiation?

It means that the formal parameters of generics have contracts, i.e. they
are typed. C++ template parameters are untyped: you can substitute anything,
as long as the result compiles.

> Hum... I intend to write an efficient n-dimensional matrix.

What's wrong with an n-dimensional array?

> This would
> leave to me the option to individually write element accessing code
> for each possible instance of my generic class if I wish to make it
> through a member function (or whatever is equivalent to that in Ada)
> that takes as many elements as there are dimensions, right?

I cannot imagine generic/templated code for an nD matrix package, because
it would not allow you to unroll the nested loops over n, if we took an
implementation of "+" as an example. It becomes much, much worse for
multiplication. I doubt it is a realistic goal to have it generic over n;
you would need a more powerful preprocessor than templates.

>>> And further, is there any language which is
>>> _truly_ better (regarding code maintainability, readability and
>>> developing ease) than C++ and as overhead-free as it?
>>
>> Maintainability, readability and developing ease are sufficiently dependent
>> on *not* using things like C++ templates. Even more variadic templates!
> 
> Could you elaborate on that? I do not agree and I do have lots of
> experience in writing and maintaining template code in C++. They are
> as easy to read, maintain and develop as any C++ code, or should I say
> as difficult?

Honestly, you are the first person I have met who considers templates
easy to read and maintain. So I am not really prepared to argue. It is
like someone said
that warm beer is great. How do you test a template class? How do you make
it work for Borland C++, MSVC 3, 5, 8, 10 and gcc? When you get an error
message whom do you ask what's wrong?

>> Note that for numeric applications templates do not help much. Consider the
>> following problem. Suppose you have to implement some mathematical function
>> with a known algorithm and put it into a library. (That the latter is not
>> possible with templates anyway is beside the point.)
> 
> You seem to imply that templated code cannot be part of a library, but
> it definitely can. Just consider the possibility of distributing the
> source, which is what I wish to do. STL does just that.

So, how are you going to test it? We have to maintain our own template
library. It is 10 years old, and errors keep coming because the number
of combinations that would need checking is impossible to cover. So the
maintenance looks like: change the code and commit; if someone gets a
problem upon instantiation, let us know. BTW, Ada generics are much better:
because of their contracts, they are compiled in the true sense of the word.

> Even if you do
> not want to go open source, it is easier to write the code once and
> instantiate it for every type your users are supposed to use, maybe
> wrapped within some overloaded function.

We were talking about an n-D matrix package. You suggest instantiating it
for each possible combination of types, for each n? (:-))

> It sounds like a
> good idea, specially if things like that could be done for user
> defined types, i.e., if I can define my own type that "is digits <>".

Yes you can. E.g.:

   type My_Float is digits 8 range -1.0E10..1.0E10;

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-10 15:14       ` Yannick Duchêne (Hibou57)
@ 2010-08-10 18:32         ` Robert A Duff
  0 siblings, 0 replies; 94+ messages in thread
From: Robert A Duff @ 2010-08-10 18:32 UTC (permalink / raw)


"Yannick Duchêne (Hibou57)" <yannick_duchene@yahoo.fr> writes:

> Le Tue, 10 Aug 2010 17:01:10 +0200, Robert A Duff
> <bobduff@shell01.theworld.com> a écrit:
>> No, Ada generics work the same way as C++ templates -- the typical
>> implementation is that each instance gets a separate copy of
>> the code
> I wonder If I've understood you correctly. It seems Ada use code sharing
> (which is the reason of some restriction on formal parameters). Code
> duplication is the C++'s template way, not the Ada's generic package
> way.  Or am I mistaken somewhere ?

Most Ada compilers, including GNAT, use code duplication.  Some Ada
compilers support code-sharing for instance bodies, either in some
cases, or in all cases.

I don't know of any C++ compilers that support sharing of template
instances.

Code sharing is hard in Ada.  It is substantially harder in C++,
I think.  But note that I am not a C++ compiler expert!

- Bob



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-10 13:52   ` Elias Salomão Helou Neto
                       ` (4 preceding siblings ...)
  2010-08-10 15:32     ` Dmitry A. Kazakov
@ 2010-08-10 22:26     ` Randy Brukardt
  2010-08-20  7:22       ` Yannick Duchêne (Hibou57)
  5 siblings, 1 reply; 94+ messages in thread
From: Randy Brukardt @ 2010-08-10 22:26 UTC (permalink / raw)



"Elias Salomão Helou Neto" <eshneto@gmail.com> wrote in message 
news:8349c981-4dca-49dc-9189-8ea726234de3@f42g2000yqn.googlegroups.com...
...
> Yes, I know that. I am, however, writing code within that 1% of
> applications that would be tremendously affected if there is no way to
> access arrays with no range checking. So I am asking very precisely:
> does Ada allow me to do non range-checked access to arrays?

Several people have already answered your exact question, so I won't bother 
to repeat that. But I'd like to point out that Ada compilers spend a lot of 
effort in eliminating unnecessary range checks. So that if your code is 
well-written, there will be few if any range checks when you access arrays. 
(Dmitry showed you one way that can be accomplished; another is to ensure 
that temporaries have appropriate subtypes.)

So I'm suggesting that you try to avoid premature optimization. I can 
believe that there will be cases where you'll need to suppress range checks, 
but I'd also suggest that those will be far rarer than you are thinking. 
And, of course, the problem is that suppressing range checks is essentially 
the same as not wearing seat belts when driving. Just remember how many 
"security patches" have been caused by buffer overflows, all of which would 
have been detected and prevented by having range checking.

                                    Randy.





^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-10 14:27               ` Yannick Duchêne (Hibou57)
@ 2010-08-10 22:50                 ` anon
  2010-08-10 23:28                   ` Yannick Duchêne (Hibou57)
  0 siblings, 1 reply; 94+ messages in thread
From: anon @ 2010-08-10 22:50 UTC (permalink / raw)


In <op.vg77swroxmjfy8@garhos>, Yannick Duchêne (Hibou57) writes:
>Le Tue, 10 Aug 2010 16:03:13 +0200, Elias Salomão Helou Neto
><eshneto@gmail.com> a écrit:
>> It is a pity that this post became a technical discussion on array
>> indexing.
>Could I, if you please, talk to Phil without your prior acknowledgment
>please ? ;)
>
>While I understand why you reacted (not the way you do... closed
>parenthesis)
>
>> A simple question that could be asked in a single line is:
>> can Ada access arrays without range checking?
>Sorry, due to a recent trouble with my news reader, I've the original post.
>
>However, on this sentence basis, I would say: if you do not want range
>check, then just disable it in the compiler option. But be warned you will
>then be unable to catch runtime error. While there is a way to safely drop
>this compiler option: validation with SPARK checker (just tell if you need
>to learn about it).
>
>If your matter is just about range checking, the answer is as simple as
>that.
>
>If you use GNAT, you may insert "-gnatp" in the command line arguments.
>http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gnat_ugn_unw/Run_002dTime-Checks.html
>
>If you use GNAT from the GPS environment, you may open the "Project" menu,
>then the "Edit project properties" submenu. Then choose the "Switches"
>tab, then the "Ada" tab and check the "Suppress all checks" check box or
>uncheck the "Overflow check" check box.
>
>Providing I did not fail to understand what you meant.
>
>-- 
>There is even better than a pragma Assert: a SPARK --# check.
>--# check C and WhoKnowWhat and YouKnowWho;
>--# assert Ada;
>--  i.e. forget about previous premises which leads to conclusion
>--  and start with new conclusion as premise.
While the main expressions are equal, code generated by Ada compilers 
versus C or C++ is less efficient due to a number of factors.

  1. Elaboration:     Ada compilers can generate run-time elaboration
                      routines that must be executed before starting 
                      the user's main program. C/C++ compilers do not 
                      perform any run-time elaboration, which makes 
                      execution and code generation more efficient 
                      but the program less reliable.

  2. Run-Time Checks: The Ada compiler generates inline run-time checks, 
                      which the C/C++ compiler does not.  Using the pragma 
                      "Suppress" statement can eliminate most checks. The 
                      absence of these checks makes C/C++ less reliable.
                      The following removes all checks; the statement 
                      also has other, more precise options.

             pragma Suppress ( All_Checks ) ;
             -- removes all checks in the program or that package.
                                            
             pragma Suppress ( All_Checks, ON => <name 1>, 
                                               ...
                                           ON => <name n> ) ;
             --  removes all checks on that set of names only. 
             --  A name may be an object such as an array, or a routine.
             --  Also, the "ON =>" symbols are optional.


  3. GCC backend:     Ada, like any language other than C or C++,
                      must translate its core language into the C/C++-like 
                      tree structure, which forces those languages to be 
                      limited to C/C++-style code generation.




^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-10 22:50                 ` anon
@ 2010-08-10 23:28                   ` Yannick Duchêne (Hibou57)
  2010-08-10 23:38                     ` Yannick Duchêne (Hibou57)
  0 siblings, 1 reply; 94+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-08-10 23:28 UTC (permalink / raw)


Le Wed, 11 Aug 2010 00:50:40 +0200, <anon@att.net> a écrit:
> While the main expressions are equal, code generated by Ada compilers
> versus C or C++ is less efficient due to a number of factors.
>
>   1. Elaboration:     Ada compilers can generate run-time elaboration
>                       routines that must be executed before starting
>                       the user's main program. C/C++ compilers do not
>                       perform any run-time elaboration, which makes
>                       execution and code generation more efficient
>                       but the program less reliable.
>
>   2. Run-Time Checks: The Ada compiler generates inline run-time checks,
>                       which the C/C++ compiler does not.  Using the pragma
>                       "Suppress" statement can eliminate most checks. The
>                       absence of these checks makes C/C++ less reliable.

Yes. You have the choice (with the former)

>             pragma Suppress ( All_Checks, ON => <name 1>,
>                                                ...
>                                            ON => <name n> ) ;
>              --  removes all checks on that set of names only.
>              --  Names may be an object such as an array or routine
>              --  Also, the "ON =>" symbols are optional

Interesting point. Is that GNAT or other compiler specific? I do not see  
this variant in the reference.

In “11.5 Suppressing Checks”:
> 3/2    The forms of checking pragmas are as follows:
> 4/2    pragma Suppress(identifier [, [On =>] name]);
> 4.1/2  pragma Unsuppress(identifier);

Standard or not, this seems nice. This may be better for centralized  
management of runtime checks.

-- 
There is even better than a pragma Assert: a SPARK --# check.
--# check C and WhoKnowWhat and YouKnowWho;
--# assert Ada;
--  i.e. forget about previous premises which leads to conclusion
--  and start with new conclusion as premise.



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-10 23:28                   ` Yannick Duchêne (Hibou57)
@ 2010-08-10 23:38                     ` Yannick Duchêne (Hibou57)
  2010-08-11  7:06                       ` Niklas Holsti
  0 siblings, 1 reply; 94+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-08-10 23:38 UTC (permalink / raw)


Le Wed, 11 Aug 2010 01:28:25 +0200, Yannick Duchêne (Hibou57)  
<yannick_duchene@yahoo.fr> a écrit:
> In “11.5 Suppressing Checks”:
> 4/2 pragma Suppress(identifier [, [On =>] name]);
Strange. I noticed it in my reply to you: the “[, [On =>] name]”  
appeared in the message while it is not visible in my reference. Maybe  
some trouble with the HTML version I use.

So this is part of the RM.

Hey... I've just checked the online version at AdaIC; it says:

http://www.adaic.org/standards/05rm/html/RM-11-5.html

3/2 The forms of checking pragmas are as follows:
4/2 pragma Suppress(identifier);
4.1/2 pragma Unsuppress(identifier);

Let me guess: the reference I use is a reworked HTML version of the  
annotated reference, where text originally marked as deleted is hidden.  
So I suppose this “[, [On =>] name]” was part of an old standard and was  
dropped with Ada 2005.

... some time later ...

Yes, got it. The ARM for Ada 95 says:

http://www.adahome.com/rm95/rm9x-11-05.html
pragma Suppress(identifier [, [On =>] name]);

This is no longer valid Ada (well... it is still valid Ada 95, it is  
just no longer valid Ada 2005/2012).

-- 
There is even better than a pragma Assert: a SPARK --# check.
--# check C and WhoKnowWhat and YouKnowWho;
--# assert Ada;
--  i.e. forget about previous premises which leads to conclusion
--  and start with new conclusion as premise.



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-08 20:51       ` Robert A Duff
                           ` (3 preceding siblings ...)
  2010-08-10 12:26         ` Phil Clayton
@ 2010-08-11  6:42         ` Charles H. Sampson
  4 siblings, 0 replies; 94+ messages in thread
From: Charles H. Sampson @ 2010-08-11  6:42 UTC (permalink / raw)


Robert A Duff <bobduff@shell01.TheWorld.com> wrote:

> csampson@inetworld.net (Charles H. Sampson) writes:
> 
> > Robert A Duff <bobduff@shell01.TheWorld.com> wrote:
> >
> >      I'm surprised, Bob.  Are you saying that you signed integers in
> > preference to a modular type for a variable that cycles?
> 
> Yes.  Unless I'm forced to use modular for some other reason
> (e.g. I need one extra bit).
> 
> >...I use
> > modular-typed variables and, if I've got my engineer's hat on, write
> >
> >      I := I + 1;  -- Modular variable.  Wraps.
> 
> You're not alone.  Even Tucker has advocated using modular
> types for this sort of thing.
> 
> But I think an explicit "mod N" is clearer than a comment.
> 
> Variables that cycle are rare, so should be noted explicitly
> in the code.  And, as Dmitry noted, modular types only work
> when the lower bound is 0.  It's not unreasonable to have
> a circular buffer indexed by a range 1..N.
> 
> See my point?  Still "surprised"?
>
     I hope you didn't read my profession of surprise as being the same
as "Wow!  Bob did something really stupid."  I just thought you would be
one of the people (like me) who uses every feature of Ada whenever
there's an opportunity.

     For the record, my uses of modular types have been as indexes into
circular buffers.  I think these buffers have always been my own
invention, in support of a requirement but not a requirement in
themselves.  I'm not sure what I might do if it were natural, or
required, to have a circular structure whose lower index is not 0.  I
learned to program on very slow machines and I've never been able to
shake efficiency concerns.  I've already thought of a couple of pretty
unnatural things to do in order to achieve decent efficiency, efficiency
that, as usual, is totally unnecessary in the problem domain.
> 
> ...  Can anybody recall the definition of "not" on modular
> types, when the modulus is not a power of 2, without
> looking it up?  Hint: It makes no sense.  The only feature
> that's worse than "modular types" is "modular types with a
> non-power-of-2 modulus".  ;-)

     Actually, those circular buffers of mine have often had sizes that
are not powers of two.   The compiler I was using generated pretty
decent code for them.

     Regarding what I read as your main complaint: very late in the Ada
95 effort I submitted a comment that the proposed modular types were an
uncomfortable merger of two ideas, modular arithmetic and bit twiddling.
Since there was no obvious relation between those two ideas (weak
cohesion, as it were), I requested that two distinct types be used
instead.  I have no idea what the ARG thought of the merits of my
comment, but even if they thought it was the most brilliant comment they
had received, I submitted it much too late to have any hope of its being
acted on.

                        Charlie
-- 
All the world's a stage, and most 
of us are desperately unrehearsed.  Sean O'Casey



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-10 23:38                     ` Yannick Duchêne (Hibou57)
@ 2010-08-11  7:06                       ` Niklas Holsti
  2010-08-11 11:58                         ` anon
  2010-08-11 13:13                         ` Robert A Duff
  0 siblings, 2 replies; 94+ messages in thread
From: Niklas Holsti @ 2010-08-11  7:06 UTC (permalink / raw)


Yannick Duchêne (Hibou57) wrote:
> Le Wed, 11 Aug 2010 01:28:25 +0200, Yannick Duchêne (Hibou57) 
> <yannick_duchene@yahoo.fr> a écrit:
>> In “11.5 Suppressing Checks”:
>> 4/2 pragma Suppress(identifier [, [On =>] name]);
> Strange. I've noticed it with the reply to you : the “[, [On =>] name]” 
> appeared in the message while it is not visible in my reference. May be 
> some trouble with the HTML version I use.

   [ snip ]

> This is no more valid Ada (well.... this is still valid Ada 95, this is 
> just not more valid Ada 2005/2012)

In the Ada 2005 RM, see section J.10 (Obsolescent Features: Specific 
Suppression of Checks).

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-10 14:03             ` Elias Salomão Helou Neto
  2010-08-10 14:27               ` Yannick Duchêne (Hibou57)
  2010-08-10 14:31               ` Shark8
@ 2010-08-11  7:14               ` Charles H. Sampson
  2 siblings, 0 replies; 94+ messages in thread
From: Charles H. Sampson @ 2010-08-11  7:14 UTC (permalink / raw)


Elias Salomão Helou Neto <eshneto@gmail.com> wrote:

> It is a pity that this post became a technical discussion on array
> indexing. A simple question that could be asked in a single line is:
> can Ada access arrays without range checking? My algorithm needs not
> wrapping, neither it needs range checking!

     In this newsgroup, as in most others, threads tend to take on lives
of their own and can drift very far from the original post if they live
long enough.  The nice thing about this newsgroup is that threads almost
never drift into noise.

     According to my newsreader, you had not yet asked your above
question at the time you submitted this post.  I have no idea when
USENET posts are timestamped, but even if they appear to be in the wrong
order, these two were submitted very close together.  Your implied
complaint that nobody's answering your simple question is somewhat out
of line.

     Finally, in response to that question, the answer is yes, as others
have pointed out.  Use pragma Suppress (Range_Check, ...) (LRM 11.5).

                        Charlie
-- 
All the world's a stage, and most 
of us are desperately unrehearsed.  Sean O'Casey



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-08 22:35     ` tmoran
  2010-08-09 13:53       ` Robert A Duff
@ 2010-08-11  7:42       ` Charles H. Sampson
  2010-08-11 13:38         ` Robert A Duff
  2010-08-11 18:49         ` Simon Wright
  1 sibling, 2 replies; 94+ messages in thread
From: Charles H. Sampson @ 2010-08-11  7:42 UTC (permalink / raw)


<tmoran@acm.org> wrote:

> > And it would be desirable, because in most cases the explicit
> > "mod" (or "%") is more readable.
> 
>    I := Ring_Indices'succ(I);
> vs
>    I := (I + 1) mod Ring_Size;
> or
>    Bearing := Bearing + Turn_Angle;
> vs
>    Bearing := (Bearing + Turn_Angle) mod 360;

     I'd be interested in hearing reactions to something I did.  About a
year ago I had reason to do something similar to the Bearing/Turn_Angle
example.  To avoid having to do a lot of background, I'll just restate
what I did in terms of Bearing/Turn_Angle.

     Bearing and Turn_Angle as integers are way too inaccurate for the
application I was working on, which involved real-time position
calculations at a high rate.  So Bearing and Turn_Angle were defined as
two (distinct) types derived from Long_Precision.  (They were defined as
types because "Bearing" and "Turn_Angle" are non-specific nouns.  See
the Ada Style Guide.)

     I then overloaded "+" and "-" for (Bearing, Turn_Angle) arguments
and Bearing return value.  In those functions is where the mod 360
occurred.  (Actually, mod 360.0, as it were.)

     There were two advantages to doing that.  The more important was
that previously both of the kinds of values were being represented as
subtypes of Long_Precision and programmers would occasionally
interchange them and cause big debugging problems.  The second was
removing the "mod" from sight, which allowed the programmers to simply
think of taking a bearing, turning an angle, and getting the resulting
bearing, without worrying about all the niceties that might be going on
inside "+".

     Of course, new programmers coming on the project had to be taught
about the overloads and that they should not be writing elaborate
if-then-elses when they wanted to turn the ship.

                        Charlie
-- 
All the world's a stage, and most 
of us are desperately unrehearsed.  Sean O'Casey



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-11  7:06                       ` Niklas Holsti
@ 2010-08-11 11:58                         ` anon
  2010-08-11 12:37                           ` Georg Bauhaus
  2010-08-11 13:13                         ` Robert A Duff
  1 sibling, 1 reply; 94+ messages in thread
From: anon @ 2010-08-11 11:58 UTC (permalink / raw)



In <8cf0fkF60rU1@mid.individual.net>, Niklas Holsti <niklas.holsti@tidorum.invalid> writes:
>Yannick Duchêne (Hibou57) wrote:
>> Le Wed, 11 Aug 2010 01:28:25 +0200, Yannick Duchêne (Hibou57) 
>> <yannick_duchene@yahoo.fr> a écrit:
>>> In “11.5 Suppressing Checks”:
>>> 4/2 pragma Suppress(identifier [, [On =>] name]);
>> Strange. I've noticed it with the reply to you : the “[, [On =>] name]” 
>> appeared in the message while it is not visible in my reference. May be 
>> some trouble with the HTML version I use.
>
>   [ snip ]
>
>> This is no more valid Ada (well.... this is still valid Ada 95, this is 
>> just not more valid Ada 2005/2012)
>
>In the Ada 2005 RM, see section J.10 (Obsolescent Features: Specific 
>Suppression of Checks).
>
>-- 
>Niklas Holsti
>Tidorum Ltd
>niklas holsti tidorum fi
>       .      @       .

Since all Ada compilers except GNAT's and IBM's still use the Ada 83 or 
Ada 95 standard, the "pragma Suppress ( identifier [ , [ On => ] name ] ) ;" 
statement is still valid.

Plus, with GNAT supporting all of the standards within one compiler, GNAT 
should allow this version as well as the new one.






^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-11 11:58                         ` anon
@ 2010-08-11 12:37                           ` Georg Bauhaus
  0 siblings, 0 replies; 94+ messages in thread
From: Georg Bauhaus @ 2010-08-11 12:37 UTC (permalink / raw)


On 11.08.10 13:58, anon@att.net wrote:

> Since, all Ada compilers except for GNAT and IBM are still using Ada 83 or 
> 95 standard the "pragma Suppress ( identifier [ , [ On => ] name ] ) ;" 
> statement is still valid.  

The AppletMagic compiler (using the AdaMagic front end) let me see
some hints of Ada 2005 features a few years ago.  So I think it is safe
to assume that AdaMagic supports Ada 95, and Ada 2005 to some extent.
(Another possible hint is, I think, its being used in a current project
in Britain IIRC, an embedded one. Someone reported here recently.)


Georg



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-11  7:06                       ` Niklas Holsti
  2010-08-11 11:58                         ` anon
@ 2010-08-11 13:13                         ` Robert A Duff
  2010-08-11 23:49                           ` Randy Brukardt
  1 sibling, 1 reply; 94+ messages in thread
From: Robert A Duff @ 2010-08-11 13:13 UTC (permalink / raw)


Niklas Holsti <niklas.holsti@tidorum.invalid> writes:

> Yannick Duchêne (Hibou57) wrote:
>> This is no more valid Ada (well.... this is still valid Ada 95, this
>> is just not more valid Ada 2005/2012)
>
> In the Ada 2005 RM, see section J.10 (Obsolescent Features: Specific
> Suppression of Checks).

Right.  And things in the "Obsolescent Features" annex are perfectly
good Ada, and all Ada compilers are required to support them.
These features are "obsolescing" so slowly that they will never
actually become "obsolete".  ;-)

That's my prediction, given that I've never heard anyone on the
ARG suggest that they should eventually be removed from the
standard.

- Bob



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-11  7:42       ` Charles H. Sampson
@ 2010-08-11 13:38         ` Robert A Duff
  2010-08-12  7:48           ` Charles H. Sampson
  2010-08-11 18:49         ` Simon Wright
  1 sibling, 1 reply; 94+ messages in thread
From: Robert A Duff @ 2010-08-11 13:38 UTC (permalink / raw)


csampson@inetworld.net (Charles H. Sampson) writes:

> <tmoran@acm.org> wrote:

>>    Bearing := (Bearing + Turn_Angle) mod 360;
>
>      I'd be interested in hearing reactions to something I did. ...

>      I then overloaded "+" and "-" for (Bearing, Turn_Angle) arguments
> and Bearing return value.  In those functions is where the mod 360
> occurred.  (Actually, mod 360.0, as it were.)
>
>      There were two advantages to doing that.  The more important was
> that previously both of the kinds of values were being represented as
> subtypes of Long_Precision and programmers would occasionally
> interchange them and cause big debugging problems.  The second was
> removing the "mod" from sight, which allowed the programmers to simply
> think of taking a bearing, turning an angle, and getting the resulting
> bearing, without worrying about all the niceties that might be going on
> inside "+".

Sounds to me like a good way to do things.  It would still be a good
idea if you called it "Turn_Left" or something like that, instead
of "+".  But I don't object to "+".

I object to having built-in support for angle arithmetic in the
language, though.

Did you eliminate meaningless ops like "*"?  You could do that
by making the types private, but then you lose useful things
like literal notation.  Or you could declare "*" to be
abstract, which is an annoyance, but might be worth it.

- Bob



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-11  7:42       ` Charles H. Sampson
  2010-08-11 13:38         ` Robert A Duff
@ 2010-08-11 18:49         ` Simon Wright
  2010-08-12  7:54           ` Charles H. Sampson
  1 sibling, 1 reply; 94+ messages in thread
From: Simon Wright @ 2010-08-11 18:49 UTC (permalink / raw)


csampson@inetworld.net (Charles H. Sampson) writes:

>      I then overloaded "+" and "-" for (Bearing, Turn_Angle) arguments
> and Bearing return value.  In those functions is where the mod 360
> occurred.  (Actually, mod 360.0, as it were.)

Depending on what is doing the turning, in our application that would
in some cases have to be mod 720.0 ...



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-11 13:13                         ` Robert A Duff
@ 2010-08-11 23:49                           ` Randy Brukardt
  0 siblings, 0 replies; 94+ messages in thread
From: Randy Brukardt @ 2010-08-11 23:49 UTC (permalink / raw)



"Robert A Duff" <bobduff@shell01.TheWorld.com> wrote in message 
news:wcczkwtnvhb.fsf@shell01.TheWorld.com...
> Niklas Holsti <niklas.holsti@tidorum.invalid> writes:
>
>> Yannick Duchêne (Hibou57) wrote:
>>> This is no more valid Ada (well.... this is still valid Ada 95, this
>>> is just not more valid Ada 2005/2012)
>>
>> In the Ada 2005 RM, see section J.10 (Obsolescent Features: Specific
>> Suppression of Checks).
>
> Right.  And things in the "Obsolescent Features" annex are perfectly
> good Ada, and all Ada compilers are required to support them.
> These features are "obsolescing" so slowly that they will never
> actually become "obsolete".  ;-)

Somewhat irrelevant in this case, because there is no definition or 
agreement as to what it means to suppress checks on a particular type or 
object. One possibility is to do nothing at all (a compiler is never 
*required* to suppress checks). So while this probably will work on most 
compilers, there is no guarantee that it will work the same way.

To make a more concrete example:

     A : Integer := -1;
     B : Natural := 0;
     pragma Suppress (Range_Check, On => B);

     B := A; -- ??

The language does not say what object or type needs to be suppressed in 
order to suppress the range check on this assignment. It might require the 
check to be suppressed on A or on B or on both or on the subtype Natural or 
on something else. So this feature is best avoided. (Note that this is as 
true for Ada 83 and Ada 95 as it is for Ada 2005.)

                             Randy.





^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-11 13:38         ` Robert A Duff
@ 2010-08-12  7:48           ` Charles H. Sampson
  2010-08-12  8:08             ` Ludovic Brenta
  0 siblings, 1 reply; 94+ messages in thread
From: Charles H. Sampson @ 2010-08-12  7:48 UTC (permalink / raw)


Robert A Duff <bobduff@shell01.TheWorld.com> wrote:

> csampson@inetworld.net (Charles H. Sampson) writes:
> 
> > <tmoran@acm.org> wrote:
> 
> >>    Bearing := (Bearing + Turn_Angle) mod 360;
> >
> >      I'd be interested in hearing reactions to something I did. ...
> 
> >      I then overloaded "+" and "-" for (Bearing, Turn_Angle) arguments
> > and Bearing return value.  In those functions is where the mod 360
> > occurred.  (Actually, mod 360.0, as it were.)
> >
> >      There were two advantages to doing that.  The more important was
> > that previously both of the kinds of values were being represented as
> > subtypes of Long_Precision and programmers would occasionally
> > interchange them and cause big debugging problems.  The second was
> > removing the "mod" from sight, which allowed the programmers to simply
> > think of taking a bearing, turning an angle, and getting the resulting
> > bearing, without worrying about all the niceties that might be going on
> > inside "+".
> 
> Sounds to me like a good way to do things.  It would still be a good
> idea if you called it "Turn_Left" or something like that, instead
> of "+".  But I don't object to "+".

     This was for the U.S. Navy, and "positive is right" is pretty much
universal.  For programmers, that is, not for sailors.  They think port
and starboard.

> I object to having built-in support for angle arithmetic in the
> language, though.

     I don't follow.  What is angle arithmetic?  Whatever it is, it
sounds too specific for a general purpose language.

> Did you eliminate meaningless ops like "*"?  You could do that
> by making the types private, but then you lose useful things
> like literal notation.  Or you could declare "*" to be
> abstract, which is an annoyance, but might be worth it.

     I would have liked to eliminate meaningless ops.  As you noted,
making the types private results in the loss of literals.  I wasn't
ready to go that far.

     What I really want is a way to hide existing visible operations,
particularly the ones from Standard.  I've asked about it before and
there doesn't seem to be too much enthusiasm, not enough to warrant the
time to get the niggling details correct.  The problem is, you can't
write something like

          if Current_Bearing + 0.5 > 180.0

because it's ambiguous.  (Is "+" from Standard or is it the special "+"
for bearings and turn angles?)  So you end up having to qualify literals
when there's only one reasonable "+" in the minds of the programmers.
Of course, you won't have that problem if your style is to always use
typed constants, but I wasn't ready to go that far either.
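     A minimal sketch of the overloading scheme described above (the
names are hypothetical; note that Ada's "mod" is defined only for
integer types, so for a floating-point Bearing the "mod 360.0" has to
be written out, here with the 'Floor attribute):

```ada
--  Sketch only: hypothetical names, not the actual Navy code.
package Navigation is
   type Bearing    is new Long_Float range 0.0 .. 360.0;
   type Turn_Angle is new Long_Float;

   function "+" (B : Bearing; T : Turn_Angle) return Bearing;
   function "-" (B : Bearing; T : Turn_Angle) return Bearing;
end Navigation;

package body Navigation is

   function Wrap (X : Long_Float) return Bearing is
   begin
      --  Reduce into [0.0, 360.0), i.e. "mod 360.0" done by hand.
      return Bearing (X - 360.0 * Long_Float'Floor (X / 360.0));
   end Wrap;

   function "+" (B : Bearing; T : Turn_Angle) return Bearing is
   begin
      return Wrap (Long_Float (B) + Long_Float (T));
   end "+";

   function "-" (B : Bearing; T : Turn_Angle) return Bearing is
   begin
      return Wrap (Long_Float (B) - Long_Float (T));
   end "-";

end Navigation;
```

Because Bearing and Turn_Angle are distinct derived types, accidentally
interchanging them is rejected at compile time, which is the first
advantage mentioned above.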

                        Charlie
-- 
All the world's a stage, and most 
of us are desperately unrehearsed.  Sean O'Casey



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-11 18:49         ` Simon Wright
@ 2010-08-12  7:54           ` Charles H. Sampson
  2010-08-12  8:36             ` Dmitry A. Kazakov
                               ` (2 more replies)
  0 siblings, 3 replies; 94+ messages in thread
From: Charles H. Sampson @ 2010-08-12  7:54 UTC (permalink / raw)


Simon Wright <simon@pushface.org> wrote:

> csampson@inetworld.net (Charles H. Sampson) writes:
> 
> >      I then overloaded "+" and "-" for (Bearing, Turn_Angle) arguments
> > and Bearing return value.  In those functions is where the mod 360
> > occurred.  (Actually, mod 360.0, as it were.)
> 
> Depending on what is doing the turning, in our application that would
> in some cases have to be mod 720.0 ...

     I'm puzzled.  Unless you're very careful, intermediate calculations
could result in a quasi-Bearing of more than 360 degrees but I'm pretty
sure most programmers on my project would have been surprised to see a
real bearing of 360 degrees or more.

                        Charlie
-- 
All the world's a stage, and most 
of us are desperately unrehearsed.  Sean O'Casey



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-12  7:48           ` Charles H. Sampson
@ 2010-08-12  8:08             ` Ludovic Brenta
  2010-08-12 17:10               ` Charles H. Sampson
  0 siblings, 1 reply; 94+ messages in thread
From: Ludovic Brenta @ 2010-08-12  8:08 UTC (permalink / raw)


Charles H. Sampson wrote on comp.lang.ada:
[on defining "+" to add angles in modular arithmetic]
>> Sounds to me like a good way to do things.  It would still be a good
>> idea if you called it "Turn_Left" or something like that, instead
>> of "+".  But I don't object to "+".
>
>      This was for the U. S. Navy, and "positive is right" is pretty much
> universal.  For programmers, that is, not for sailors.  They think port
> and starboard.

Actually, to mathematicians and engineers, "+" is "turn counter-
clockwise" or "turn left", too. Granted, they'd probably use radians
instead of degrees.  So, "+" meaning "turn right" is not as universal
as you might think.

--
Ludovic Brenta.



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-12  7:54           ` Charles H. Sampson
@ 2010-08-12  8:36             ` Dmitry A. Kazakov
  2010-08-12 11:04             ` Brian Drummond
  2010-08-12 19:23             ` Simon Wright
  2 siblings, 0 replies; 94+ messages in thread
From: Dmitry A. Kazakov @ 2010-08-12  8:36 UTC (permalink / raw)


On Thu, 12 Aug 2010 00:54:52 -0700, Charles H. Sampson wrote:

> Simon Wright <simon@pushface.org> wrote:
> 
>> csampson@inetworld.net (Charles H. Sampson) writes:
>> 
>>>      I then overloaded "+" and "-" for (Bearing, Turn_Angle) arguments
>>> and Bearing return value.  In those functions is where the mod 360
>>> occurred.  (Actually, mod 360.0, as it were.)
>> 
>> Depending on what is doing the turning, in our application that would
>> in some cases have to be mod 720.0 ...
> 
>      I'm puzzled.  Unless you're very careful, intermediate calculations
> could result in a quasi-Bearing of more than 360 degrees but I'm pretty
> sure most programmers on my project would have been surprised to see a
> real bearing of 360 degrees or more.

Maybe it was rhumbs? Just could not resist, sorry (:-))

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-12  7:54           ` Charles H. Sampson
  2010-08-12  8:36             ` Dmitry A. Kazakov
@ 2010-08-12 11:04             ` Brian Drummond
  2010-08-12 19:23             ` Simon Wright
  2 siblings, 0 replies; 94+ messages in thread
From: Brian Drummond @ 2010-08-12 11:04 UTC (permalink / raw)


On Thu, 12 Aug 2010 00:54:52 -0700, csampson@inetworld.net (Charles H. Sampson)
wrote:

>Simon Wright <simon@pushface.org> wrote:
>
>> csampson@inetworld.net (Charles H. Sampson) writes:
>> 
>> >      I then overloaded "+" and "-" for (Bearing, Turn_Angle) arguments
>> > and Bearing return value.  In those functions is where the mod 360
>> > occurred.  (Actually, mod 360.0, as it were.)
>> 
>> Depending on what is doing the turning, in our application that would
>> in some cases have to be mod 720.0 ...
>
>     I'm puzzled.  Unless you're very careful, intermediate calculations
>could result in a quasi-Bearing of more than 360 degrees but I'm pretty
>sure most programmers on my project would have been surprised to see a
>real bearing of 360 degrees or more.

Some applications must distinguish between odd and even revolutions.
Get it wrong and they might backfire.

- Brian




^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-12  8:08             ` Ludovic Brenta
@ 2010-08-12 17:10               ` Charles H. Sampson
  2010-08-12 18:06                 ` Jeffrey Carter
  0 siblings, 1 reply; 94+ messages in thread
From: Charles H. Sampson @ 2010-08-12 17:10 UTC (permalink / raw)


Ludovic Brenta <ludovic@ludovic-brenta.org> wrote:

> Charles H. Sampson wrote on comp.lang.ada:
> [on defining "+" to add angles in modular arithmetic]
> >> Sounds to me like a good way to do things.  It would still be a good
> >> idea if you called it "Turn_Left" or something like that, instead
> >> of "+".  But I don't object to "+".
> >
> >      This was for the U. S. Navy, and "positive is right" is pretty much
> > universal.  For programmers, that is, not for sailors.  They think port
> > and starboard.
> 
> Actually, to mathematicians and engineers, "+" is "turn counter-
> clockwise" or "turn left", too. Granted, they'd probably use radians
> instead of degrees.  So, "+" meaning "turn right" is not as universal
> as you might think.

     You're right, of course.  I was being a bit too elliptical.  I
should have said Navy programmers (and Navy mathematicians, for that
matter).  In the U. S. Navy, a bearing of 0 degrees is due North, 90
degrees is East, etc.

                        Charlie
-- 
All the world's a stage, and most 
of us are desperately unrehearsed.  Sean O'Casey



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-12 17:10               ` Charles H. Sampson
@ 2010-08-12 18:06                 ` Jeffrey Carter
  0 siblings, 0 replies; 94+ messages in thread
From: Jeffrey Carter @ 2010-08-12 18:06 UTC (permalink / raw)


On 08/12/2010 10:10 AM, Charles H. Sampson wrote:
>
>       You're right, of course.  I was being a bit too elliptical.  I
> should have said Navy programmers (and Navy mathematicians, for that
> matter).  In the U. S. Navy, a bearing of 0 degrees is due North, 90
> degrees is East, etc.

This is also the convention used in aviation.

-- 
Jeff Carter
"What I wouldn't give for a large sock with horse manure in it."
Annie Hall
42

--- news://freenews.netfront.net/ - complaints: news@netfront.net ---



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-12  7:54           ` Charles H. Sampson
  2010-08-12  8:36             ` Dmitry A. Kazakov
  2010-08-12 11:04             ` Brian Drummond
@ 2010-08-12 19:23             ` Simon Wright
  2010-08-12 20:21               ` (see below)
  2 siblings, 1 reply; 94+ messages in thread
From: Simon Wright @ 2010-08-12 19:23 UTC (permalink / raw)


csampson@inetworld.net (Charles H. Sampson) writes:

> Simon Wright <simon@pushface.org> wrote:
>
>> csampson@inetworld.net (Charles H. Sampson) writes:
>> 
>> >      I then overloaded "+" and "-" for (Bearing, Turn_Angle) arguments
>> > and Bearing return value.  In those functions is where the mod 360
>> > occurred.  (Actually, mod 360.0, as it were.)
>> 
>> Depending on what is doing the turning, in our application that would
>> in some cases have to be mod 720.0 ...
>
>      I'm puzzled.  Unless you're very careful, intermediate calculations
> could result in a quasi-Bearing of more than 360 degrees but I'm pretty
> sure most programmers on my project would have been surprised to see a
> real bearing of 360 degrees or more.

A tracker radar like this one (hope the link will work) might be able to
turn through an unlimited number of revolutions, or (with a more
mechanical design) there might be a limit on how many revs it could
manage. So if it's currently pointing 10 degrees to starboard, how much
further can it go before having to unwind?

I agree that this is Training, not Bearing, of course. 

http://www.artisan3d.co.uk/static/bae_cimg_radar_Fi_latestReleased_bae_cimg_radar_Fi_Web.jpg



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-12 19:23             ` Simon Wright
@ 2010-08-12 20:21               ` (see below)
  2010-08-13 15:08                 ` Elias Salomão Helou Neto
  0 siblings, 1 reply; 94+ messages in thread
From: (see below) @ 2010-08-12 20:21 UTC (permalink / raw)


On 12/08/2010 20:23, in article m28w4bfxdy.fsf@pushface.org, "Simon Wright"
<simon@pushface.org> wrote:

> csampson@inetworld.net (Charles H. Sampson) writes:
> 
>> Simon Wright <simon@pushface.org> wrote:
>> 
>>> csampson@inetworld.net (Charles H. Sampson) writes:
>>> 
>>>>      I then overloaded "+" and "-" for (Bearing, Turn_Angle) arguments
>>>> and Bearing return value.  In those functions is where the mod 360
>>>> occurred.  (Actually, mod 360.0, as it were.)
>>> 
>>> Depending on what is doing the turning, in our application that would
>>> in some cases have to be mod 720.0 ...
>> 
>>      I'm puzzled.  Unless you're very careful, intermediate calculations
>> could result in a quasi-Bearing of more than 360 degrees but I'm pretty
>> sure most programmers on my project would have been surprised to see a
>> real bearing of 360 degrees or more.
> 
> A tracker radar like this one (hope the link will work) might be able to
> turn through an unlimited number of revolutions, or (with a more
> mechanical design) there might be a limit on how many revs it could
> manage. So if it's currently pointing 10 degrees to starboard, how much
> further can it go before having to unwind?
> 
> I agree that this is Training, not Bearing, of course.
> 
> http://www.artisan3d.co.uk/static/bae_cimg_radar_Fi_latestReleased_bae_cimg_ra
> dar_Fi_Web.jpg

I think quantum mechanics codes would require angles mod 720 for spin 1/2
particles. 8-)

-- 
Bill Findlay
<surname><forename> chez blueyonder.co.uk





^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-12 20:21               ` (see below)
@ 2010-08-13 15:08                 ` Elias Salomão Helou Neto
  2010-08-13 15:10                   ` Elias Salomão Helou Neto
                                     ` (3 more replies)
  0 siblings, 4 replies; 94+ messages in thread
From: Elias Salomão Helou Neto @ 2010-08-13 15:08 UTC (permalink / raw)


On Aug 12, 5:21 pm, "(see below)" <yaldni...@blueyonder.co.uk> wrote:
> On 12/08/2010 20:23, in article m28w4bfxdy....@pushface.org, "Simon Wright"
>
>
>
>
>
> <si...@pushface.org> wrote:
> > csamp...@inetworld.net (Charles H. Sampson) writes:
>
> >> Simon Wright <si...@pushface.org> wrote:
>
> >>> csamp...@inetworld.net (Charles H. Sampson) writes:
>
> >>>>      I then overloaded "+" and "-" for (Bearing, Turn_Angle) arguments
> >>>> and Bearing return value.  In those functions is where the mod 360
> >>>> occurred.  (Actually, mod 360.0, as it were.)
>
> >>> Depending on what is doing the turning, in our application that would
> >>> in some cases have to be mod 720.0 ...
>
> >>      I'm puzzled.  Unless you're very careful, intermediate calculations
> >> could result in a quasi-Bearing of more than 360 degrees but I'm pretty
> >> sure most programmers on my project would have been surprised to see a
> >> real bearing of 360 degrees or more.
>
> > A tracker radar like this one (hope the link will work) might be able to
> > turn through an unlimited number of revolutions, or (with a more
> > mechanical design) there might be a limit on how many revs it could
> > manage. So if it's currently pointing 10 degrees to starboard, how much
> > further can it go before having to unwind?
>
> > I agree that this is Training, not Bearing, of course.
>
> >http://www.artisan3d.co.uk/static/bae_cimg_radar_Fi_latestReleased_ba...
> > dar_Fi_Web.jpg
>
> I think quantum mechanics codes would require angles mod 720 for spin 1/2
> particles. 8-)
>
> --
> Bill Findlay
> <surname><forename> chez blueyonder.co.uk

Well, we are definitely drifting away from the original question. So
much so that I am considering giving up on reading the answers to this
post...

Anyway, someone prematurely implied that I may be doing premature
optimization. The fact is that I do know, from profiling, how
important it is _not_ to range-check in my specific application, so I
will try to give you a "bottom line" of what has been said that
really matters to me.

1) You can, in more than one way, tell the compiler to suppress most
(any?) checks, but people advise against doing so. Even if I say that I
do need that :(

2) It is not necessary for the compiler to actually suppress any
checking! It seems that the LRM demands the compiler to allow you to
ask for the suppression, but it does mandate the compiler to actually
skip such checks. Well, this is, to say the least, funny - unless I
misunderstood something here.

I am having a fairly good impression of Ada's features, but the effort
it would take to learn the language may eventually not pay off.

Thank you,
Elias.

P.S.: Multidimensional _arrays_ are not multidimensional _matrices_,
nor are they MD _images_. The latter two are far more specific
classes, and the last case even needs to be templated in, say, pixel
type, etc.



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-13 15:08                 ` Elias Salomão Helou Neto
@ 2010-08-13 15:10                   ` Elias Salomão Helou Neto
  2010-08-13 18:01                     ` Georg Bauhaus
                                       ` (2 more replies)
  2010-08-13 15:37                   ` Dmitry A. Kazakov
                                     ` (2 subsequent siblings)
  3 siblings, 3 replies; 94+ messages in thread
From: Elias Salomão Helou Neto @ 2010-08-13 15:10 UTC (permalink / raw)


On Aug 13, 12:08 pm, Elias Salomão Helou Neto <eshn...@gmail.com>
wrote:
> On Aug 12, 5:21 pm, "(see below)" <yaldni...@blueyonder.co.uk> wrote:
>
>
>
>
>
> > On 12/08/2010 20:23, in article m28w4bfxdy....@pushface.org, "Simon Wright"
>
> > <si...@pushface.org> wrote:
> > > csamp...@inetworld.net (Charles H. Sampson) writes:
>
> > >> Simon Wright <si...@pushface.org> wrote:
>
> > >>> csamp...@inetworld.net (Charles H. Sampson) writes:
>
> > >>>>      I then overloaded "+" and "-" for (Bearing, Turn_Angle) arguments
> > >>>> and Bearing return value.  In those functions is where the mod 360
> > >>>> occurred.  (Actually, mod 360.0, as it were.)
>
> > >>> Depending on what is doing the turning, in our application that would
> > >>> in some cases have to be mod 720.0 ...
>
> > >>      I'm puzzled.  Unless you're very careful, intermediate calculations
> > >> could result in a quasi-Bearing of more than 360 degrees but I'm pretty
> > >> sure most programmers on my project would have been surprised to see a
> > >> real bearing of 360 degrees or more.
>
> > > A tracker radar like this one (hope the link will work) might be able to
> > > turn through an unlimited number of revolutions, or (with a more
> > > mechanical design) there might be a limit on how many revs it could
> > > manage. So if it's currently pointing 10 degrees to starboard, how much
> > > further can it go before having to unwind?
>
> > > I agree that this is Training, not Bearing, of course.
>
> > >http://www.artisan3d.co.uk/static/bae_cimg_radar_Fi_latestReleased_ba...
> > > dar_Fi_Web.jpg
>
> > I think quantum mechanics codes would require angles mod 720 for spin 1/2
> > particles. 8-)
>
> > --
> > Bill Findlay
> > <surname><forename> chez blueyonder.co.uk
>
> Well, we are definitely drifting away from the original question. So
> much that I am considering to give up reading answers from this
> post...
>
> Anyway, someone prematurely implied that I may be doing premature
> optimization. The fact is that I do know, from profiling, how
> important is _not_ to range check in my specific application, so I
> will try to give you a "bottom line" of what have been said that
> really matters to me.
>
> 1) You can, in more than one way, tell the compiler to suppress most
> (any?) checks, but people do not advise to do so. Even if I say that I
> do need that :(
>
> 2) It is not necessary for the compiler to actually suppress any
> checking! It seems that the LRM demands the compiler to allow you to
> misunderstood something here.
>
> I am having a fairly good impression of Ada's features, but the effort
> it would take to learn the language may eventually not pay off.
>
> Thank you,
> Elias.
>
> P.S.: Multidimensional _arrays_ are not multidimensional _matrices_,
> nor are they MD _images_. The latter two are far more specific
> classes, and the last case even needs to be templated in, say, pixel
> type, etc.

Sorry, but where it is written

> ask for the suppression, but it does mandate the compiler to actually
> skip such checks. Well, this is, to say the least, funny - unless I

you should read


ask for the suppression, but it does NOT mandate the compiler to
actually
skip such checks. Well, this is, to say the least, funny - unless I

Sorry again,
Elias.



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-13 15:08                 ` Elias Salomão Helou Neto
  2010-08-13 15:10                   ` Elias Salomão Helou Neto
@ 2010-08-13 15:37                   ` Dmitry A. Kazakov
  2010-08-16 13:29                     ` Elias Salomão Helou Neto
  2010-08-13 16:58                   ` (see below)
  2010-08-14 16:13                   ` Charles H. Sampson
  3 siblings, 1 reply; 94+ messages in thread
From: Dmitry A. Kazakov @ 2010-08-13 15:37 UTC (permalink / raw)


On Fri, 13 Aug 2010 08:08:42 -0700 (PDT), Elias Salomão Helou Neto wrote:

> 1) You can, in more than one way, tell the compiler to suppress most
> (any?) checks, but people do not advise to do so. Even if I say that I
> do need that :(

You think you need that, but you cannot know it, because you haven't
used Ada and have no experience from which to realistically estimate the
number of checks an Ada compiler would introduce into your not yet
developed program.

> P.S: Multidimensional _arrays_ are not multidimensional _matrices_

I don't see any difference, unless specialized non-dense matrices are
considered.

> nor are they MD _images_.

Same here. Multi-channel images, sequences of images, and scenes can be
seen and implemented as arrays. Again, unless some special hardware is in
play.

> The two latter are far more specific
> classes and the last case even needs to be templated in, say, pixel
> type, etc.

I don't see a great need for that. Usually the type of pixel (grayscale,
tricky color models, separate channels, etc.) influences image operations
so heavily that the common denominator might become too thin to be
useful, at least in the case of image processing.
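For what it's worth, the pixel type can in any case be a generic formal
in Ada, roughly the counterpart of a C++ template parameter. A minimal
sketch, with hypothetical names:

```ada
--  Sketch: a generic image package parameterized on the pixel type.
generic
   type Pixel is private;
package Generic_Images is
   type Image is
     array (Positive range <>, Positive range <>) of Pixel;
end Generic_Images;

--  A possible instantiation for grayscale images:
--    package Gray_Images is new Generic_Images (Pixel => Float);
```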

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-13 15:08                 ` Elias Salomão Helou Neto
  2010-08-13 15:10                   ` Elias Salomão Helou Neto
  2010-08-13 15:37                   ` Dmitry A. Kazakov
@ 2010-08-13 16:58                   ` (see below)
  2010-08-14 16:13                   ` Charles H. Sampson
  3 siblings, 0 replies; 94+ messages in thread
From: (see below) @ 2010-08-13 16:58 UTC (permalink / raw)


On 13/08/2010 16:08, in article
61f149b9-00ff-40cd-9698-01e69fdc5c0f@v15g2000yqe.googlegroups.com, "Elias
Salomão Helou Neto" <eshneto@gmail.com> wrote:

> 1) You can, in more than one way, tell the compiler to suppress most
> (any?) checks, but people do not advise to do so. Even if I say that I
> do need that :(
> 
> 2) It is not necessary for the compiler to actually suppress any
> checking! It seems that the LRM demands the compiler to allow you to
> ask for the suppression, but it does mandate the compiler to actually
> skip such checks. Well, this is, to say the least, funny - unless I
> misunderstood something here.

Quite so. You can be confident that the compiler will suppress the checks as
asked, *unless* you are using an architecture (e.g. B6700) where that is
simply impossible. Since Ada is a portable language, the semantics has to
allow for the latter possibility.

-- 
Bill Findlay
<surname><forename> chez blueyonder.co.uk





^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-13 15:10                   ` Elias Salomão Helou Neto
@ 2010-08-13 18:01                     ` Georg Bauhaus
  2010-08-13 19:52                       ` Robert A Duff
  2010-08-13 20:22                     ` Robert A Duff
  2010-08-13 21:57                     ` Jeffrey Carter
  2 siblings, 1 reply; 94+ messages in thread
From: Georg Bauhaus @ 2010-08-13 18:01 UTC (permalink / raw)


On 13.08.10 17:10, Elias Salomão Helou Neto wrote:

>> 1) You can, in more than one way, tell the compiler to suppress most
>> (any?) checks, but people do not advise to do so. Even if I say that I
>> do need that :(

Don't worry. A rule frequently heard is that if your requirements,
either directly or by implication, include the highest possible speed,
the lowest possible overhead, or whatever, then these are requirements,
and Ada (designed to be usable also in hard real-time environments,
as might be mentioned...) must provide for meeting them.
If the LRM did not address the absence of checks, Unchecked_Conversion,
etc., this goal would not be met, nor would Ada's; see below.

Some here have been involved in an almost insane effort to achieve
highest possible speeds, very much at the cost of a clear,
obvious algorithm.  And of course we couldn't tolerate any
checks in a program that we knew wouldn't need them!
A nice result is that the programs still do not need
to resort to CPU-specific macros, say, to achieve highest speed.
They use plain Ada and the compiler produces good code.
(BTW, one can use standard Ada facilities to have the compiler
"try" different object code, via representation clauses, or suitably
defined (and sized) types.)

>> 2) It is not necessary for the compiler to actually suppress any
>> checking! It seems that the LRM demands the compiler to allow you to
>> misunderstood something here.

> ask for the suppression, but it does NOT mandate the compiler to
> actually
> skip such checks. Well, this is, to say the least, funny - unless I

1. An authoritative comment from an ARG member has been (more than
once) that the LRM never requires anything stupid, or funny, as
you have put it.  Range checking can be turned off. (In fact,
the SPARK effort can be understood to mean that a SPARK
program can run without checks because it has been analyzed
and found to be without need for any checks.)

2. By the AS-IF rule, a compiler need not even introduce checks
where it can show at compile time that none are needed.  I think
Dmitry has hinted at this.

3. Even though, pedantically, one might insist on what seems to be
written in the LRM and dismiss the implications of the equally
dependable 1. and 2., it helps to remember that you cannot do so in the
real world:
From the outset Ada was made by and for customers. This is documented
in the archives.  If the LRM had mandated funny things, like
precluding perfectly normal compiler pragmatics, there would have
been little support (and the whole thing wouldn't be consistent
with Ada language requirements).

4. The description (however obsolescent) of pragma Suppress
mentions "permission"; an indication that the programmer allows the
compiler to suppress.  Consequently, and consistent with what
is usually the case in many other languages AFAIK, suppressing
checks is a compiler thing, and the LRM even expressly mentions
this.

In fact, documents have been more explicit.
This is from the June 1978 Requirements

"1D. Efficiency. The language design should aid the production of efficient
object programs. Constructs that have unexpectedly expensive implementations
should be easily recognizable by translators and by users. Features should
be chosen to have a simple and efficient implementation in many object
machines, to avoid execution costs for available generality where it is not
needed, to maximize the number of safe optimizations available to
translators, and to ensure that unused and constant portions of programs
will not add to execution costs. Execution time support packages of the
language shall not be included in object code unless they are called."

http://archive.adaic.com/docs/reports/steelman/steelman.txt

I think you can rely on programmer control over checks.
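For reference, the request itself is short. A minimal sketch (names
hypothetical); GNAT, for instance, additionally offers the -gnatp
switch to suppress all checks across a compilation:

```ada
--  Sketch: asking (per the LRM, granting permission, not commanding)
--  the compiler to omit checks within this scope.
procedure Fast_Kernel is
   pragma Suppress (Index_Check);
   pragma Suppress (Range_Check);

   type Vector is array (1 .. 1_000) of Long_Float;
   V : Vector := (others => 1.0);
begin
   for I in V'Range loop
      V (I) := V (I) * 2.0;
   end loop;
end Fast_Kernel;
```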


Georg



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-13 18:01                     ` Georg Bauhaus
@ 2010-08-13 19:52                       ` Robert A Duff
  2010-08-14  9:44                         ` Georg Bauhaus
  0 siblings, 1 reply; 94+ messages in thread
From: Robert A Duff @ 2010-08-13 19:52 UTC (permalink / raw)


Georg Bauhaus <rm.dash-bauhaus@futureapps.de> writes:

> 4. The description (however obsolescent) of pragma Suppress
>...

Pragma Suppress is not obsolescent.  Just the "On => ..."
feature is.

- Bob



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-13 15:10                   ` Elias Salomão Helou Neto
  2010-08-13 18:01                     ` Georg Bauhaus
@ 2010-08-13 20:22                     ` Robert A Duff
  2010-08-14  1:34                       ` Randy Brukardt
  2010-08-13 21:57                     ` Jeffrey Carter
  2 siblings, 1 reply; 94+ messages in thread
From: Robert A Duff @ 2010-08-13 20:22 UTC (permalink / raw)


Elias Salomão Helou Neto <eshneto@gmail.com> writes:

>> Anyway, someone prematurely implied that I may be doing premature
>> optimization. The fact is that I do know, from profiling, how
>> important is _not_ to range check in my specific application, so I
>> will try to give you a "bottom line" of what have been said that
>> really matters to me.
>>
>> 1) You can, in more than one way, tell the compiler to suppress most
>> (any?) checks, but people do not advise to do so. Even if I say that I
>> do need that :(

Even if it's true that it is "important _not_ to range check",
that does not necessarily mean that you need to suppress checks.
As Dmitry pointed out, many checks will be removed by the
compiler even if you don't suppress them -- it will remove the
ones it can prove will not fail.  The only way to determine
whether that's good enough is to measure it, and/or look
at the machine code.
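A typical case where the compiler can prove a check away without any
pragma (a sketch only; whether a given compiler actually removes the
check must be confirmed by measurement or by inspecting the machine
code, as said above):

```ada
--  Sketch: the loop index is drawn from the array's own range,
--  so the index check on V (I) is provably redundant.
procedure Sum_Demo is
   type Vector is array (Positive range <>) of Long_Float;
   V   : constant Vector (1 .. 100) := (others => 1.0);
   Sum : Long_Float := 0.0;
begin
   for I in V'Range loop
      Sum := Sum + V (I);
   end loop;
end Sum_Demo;
```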

As for the advice you mention, if it said "Never suppress checks",
it's bad advice.

> ask for the suppression, but it does NOT mandate the compiler to
> actually
> skip such checks. ...

Of course!  That's true of all higher-level languages, including C.
Higher-level languages never formally define what machine code gets
generated, nor how efficient it has to be -- there's no way to do so.
A C compiler is allowed by the C standard to do array bounds checking,
and I assure you that if such a C compiler exists, the checks are
FAR more expensive than for Ada.

The Ada RM has some informal rules, called "Implementation Advice"
and "Implementation Requirements", and there's one in there
somewhere saying compilers really ought to skip the checks
when it makes sense.

Don't worry, Ada compiler writers do not deliberately try
to harm their customers -- of course they actually skip the
checks when it makes sense.  Note that it doesn't always make
sense, since some checks are done for free by the hardware.

- Bob



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-13 15:10                   ` Elias Salomão Helou Neto
  2010-08-13 18:01                     ` Georg Bauhaus
  2010-08-13 20:22                     ` Robert A Duff
@ 2010-08-13 21:57                     ` Jeffrey Carter
  2010-08-13 22:37                       ` Yannick Duchêne (Hibou57)
  2 siblings, 1 reply; 94+ messages in thread
From: Jeffrey Carter @ 2010-08-13 21:57 UTC (permalink / raw)


On 08/13/2010 08:10 AM, Elias Salomão Helou Neto wrote:
>
> ask for the suppression, but it does NOT mandate the compiler to
> actually
> skip such checks. Well, this is, to say the least, funny - unless I

Right. As others have pointed out, there are some platforms on which a check 
cannot be suppressed, and since the ARM is platform-independent, it does not 
mandate the impossible.

But economics plays an important role. Customers won't pay for a compiler that 
doesn't behave as expected. As a result, I'm not aware of any compiler that 
doesn't actually suppress checks (that can be suppressed) when requested. (If 
there is one, no doubt someone will set me straight.)

It's also been noted that some hardware performs some checks for free. On such a 
platform, suppressing the check may take more time than leaving it in. So if you 
are not able to meet timing requirements with checks, blindly suppressing checks 
may make the timing worse, not better.

-- 
Jeff Carter
"Have you gone berserk? Can't you see that that man is a ni?"
Blazing Saddles
38

--- news://freenews.netfront.net/ - complaints: news@netfront.net ---



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-13 21:57                     ` Jeffrey Carter
@ 2010-08-13 22:37                       ` Yannick Duchêne (Hibou57)
  2010-08-13 22:43                         ` Yannick Duchêne (Hibou57)
  2010-08-13 23:29                         ` tmoran
  0 siblings, 2 replies; 94+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-08-13 22:37 UTC (permalink / raw)


Le Fri, 13 Aug 2010 23:57:44 +0200, Jeffrey Carter  
<spam.jrcarter.not@spam.not.acm.org> wrote:
> It's also been noted that some hardware performs some checks for free.  
> On such a platform, suppressing the check may take more time than  
> leaving it it. So if you are not able to meet timing requirements with  
> checks, blindly suppressing checks may make the timing worse, not better.
When you are talking about hardware-level checks, do you mean checks for  
null addresses, divide by zero, and so on, or something else? I have never  
heard of a machine with a low-level array type and a hardware check on  
array access indexes. It would be interesting if you could be more  
explicit about these hardware checks.


-- 
There is even better than a pragma Assert: a SPARK --# check.
--# check C and WhoKnowWhat and YouKnowWho;
--# assert Ada;
--  i.e. forget about previous premises which leads to conclusion
--  and start with new conclusion as premise.



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-13 22:37                       ` Yannick Duchêne (Hibou57)
@ 2010-08-13 22:43                         ` Yannick Duchêne (Hibou57)
  2010-08-13 23:29                         ` tmoran
  1 sibling, 0 replies; 94+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-08-13 22:43 UTC (permalink / raw)


> When you are talking about hardware level checks, are you talking about  
> check for null address divide par zero and so on or something else ? I  
> have never heard about a machine with a level array type and an hardware  
> check for array access index. Would be interesting if you could be more  
> explicit about these hardware checks.

Well, I know of some CPUs with low-level capabilities to check for read  
access to unassigned/uninitialized memory locations.



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-13 22:37                       ` Yannick Duchêne (Hibou57)
  2010-08-13 22:43                         ` Yannick Duchêne (Hibou57)
@ 2010-08-13 23:29                         ` tmoran
  2010-08-14  0:02                           ` Yannick Duchêne (Hibou57)
  1 sibling, 1 reply; 94+ messages in thread
From: tmoran @ 2010-08-13 23:29 UTC (permalink / raw)


>have never heard about a machine with a level array type and an hardware
>check for array access index. Would be interesting if you could be more
  The Burroughs machines were like that.  The descriptor pointing to
an array included the size as well as the base address, so index in-range
was automatically checked by the hardware.



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-13 23:29                         ` tmoran
@ 2010-08-14  0:02                           ` Yannick Duchêne (Hibou57)
  2010-08-14  0:16                             ` (see below)
  2010-08-14 10:47                             ` Brian Drummond
  0 siblings, 2 replies; 94+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-08-14  0:02 UTC (permalink / raw)


Le Sat, 14 Aug 2010 01:29:56 +0200, <tmoran@acm.org> a écrit:
>   The Burroughs machines were like that.  The descriptor pointing to
> an array included the size as well as the base address, so index in-range
> was automatically checked by the hardware.
So these old machines were in some way safer than today's machines.

A similar thing exists with Intel CPUs: memory segments are defined using a  
base and a limit. It might be feasible to assign a memory segment to each  
array and get the same check done (a hardware exception would be raised if  
an attempt were ever made to access a byte beyond the limit), although this  
would come at a cost, at least that of having to switch the memory segment  
register all the time (which is more costly than reading or writing any  
other register).

I found a nice page about Burroughs machines and others:
http://www.cs.virginia.edu/brochure/museum.html

A nice historical document.

Near the bottom of the page, there is a photo of a pin labeled “I touched a  
B5000” (which, as the page says, was one of these Burroughs machines).

Further down the page, there is another picture, of an addressable magnetic  
tape containing Niklaus Wirth's Pascal compiler (touching, if I may say so).



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-14  0:02                           ` Yannick Duchêne (Hibou57)
@ 2010-08-14  0:16                             ` (see below)
  2010-08-14 10:47                             ` Brian Drummond
  1 sibling, 0 replies; 94+ messages in thread
From: (see below) @ 2010-08-14  0:16 UTC (permalink / raw)


On 14/08/2010 01:02, in article op.vheielzsule2fv@garhos, "Yannick Duchêne
(Hibou57)" <yannick_duchene@yahoo.fr> wrote:

> Le Sat, 14 Aug 2010 01:29:56 +0200, <tmoran@acm.org> a écrit:
>>   The Burroughs machines were like that.  The descriptor pointing to
>> an array included the size as well as the base address, so index in-range
>> was automatically checked by the hardware.
> So these old machines were in some way safer than ones of nowadays

Burroughs-architecture m/cs should not be referred to in the past tense.
They still sell very profitably.

-- 
Bill Findlay
<surname><forename> chez blueyonder.co.uk





^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-13 20:22                     ` Robert A Duff
@ 2010-08-14  1:34                       ` Randy Brukardt
  2010-08-14  7:18                         ` anon
  0 siblings, 1 reply; 94+ messages in thread
From: Randy Brukardt @ 2010-08-14  1:34 UTC (permalink / raw)



"Robert A Duff" <bobduff@shell01.TheWorld.com> wrote in message 
news:wcchbiytg9e.fsf@shell01.TheWorld.com...
> Elias Salomão Helou Neto <eshneto@gmail.com> writes:
...
>>> Anyway, someone prematurely implied that I may be doing premature
>>> optimization. The fact is that I do know, from profiling, how
>>> important is _not_ to range check in my specific application, so I
>>> will try to give you a "bottom line" of what have been said that
>>> really matters to me.
>>>
>>> 1) You can, in more than one way, tell the compiler to suppress most
>>> (any?) checks, but people do not advise to do so. Even if I say that I
>>> do need that :(
>
> Even if it's true that it is "important _not_ to range check",
> that does not necessarily mean that you need to suppress checks.
> As Dmitry pointed out, many checks will be removed by the
> compiler even if you don't suppress them -- it will remove the
> ones it can prove will not fail.  The only way to determine
> whether that's good enough is to measure it, and/or look
> at the machine code.
>
> As for the advice you mention, if it said "Never suppress checks",
> it's bad advice.

I think it was my comment that he was responding to. Bob has captured my 
intent perfectly; you probably won't need to Suppress range checks, because 
the compiler probably already will have removed them. But if you do need to, 
you can.
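
As a minimal sketch of that point (my example, not from the thread): when the 
loop index comes from the array's own range, a compiler can prove the index 
check away even with checking fully enabled, so no pragma is needed at all.

```ada
--  Illustrative sketch only.  The index I is drawn from A'Range, so a
--  compiler can prove the check on A (I) cannot fail and elide it,
--  with no pragma Suppress anywhere.
package Demo is
   type Vector is array (Positive range <>) of Float;
   function Sum (A : Vector) return Float;
end Demo;

package body Demo is
   function Sum (A : Vector) return Float is
      Total : Float := 0.0;
   begin
      for I in A'Range loop
         Total := Total + A (I);  --  index check provably redundant
      end loop;
      return Total;
   end Sum;
end Demo;
```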

>> ask for the suppression, but it does NOT mandate the compiler to
>> actually
>> skip such checks. ...
>
> Of course!  That's true of all higher-level languages, including C.
> Higher-level language never formally define what machine code gets
> generated, nor how efficient it has to be -- there's no way to do so.
> A C compiler is allowed by the C standard to do array bounds checking,
> and I assure you that if such a C compiler exists, the checks are
> FAR more expensive than for Ada.
>
> The Ada RM has some informal rules, called "Implementation Advice"
> and "Implementation Requirements", and there's one in there
> somewhere saying compilers really ought to skip the checks
> when it makes sense.
>
> Don't worry, Ada compiler writers do not deliberately try
> to harm their customers -- of course they actually skip the
> checks when it makes sense.  Note that it doesn't always make
> sense, since some checks are done for free by the hardware.

Again, I agree with Bob. All Ada compilers that I know of do suppress checks 
when the permission is given. But it is also the case that some checks might 
be made automatically by the underlying machine or in the runtime, and they 
might be very difficult or impossible to remove.

For example, the integer divide instruction on the X86 traps if an attempt 
to divide by zero occurs. Suppressing this check would be rather expensive, 
as the compiler would have to provide some sort of map to the trap handler 
as to whether or not the check was suppressed at the point of the divide. 
Plus you would have to be able to restart after the divide (not all OSes 
provide this capability for trap handlers). So it is impractical on an X86 
machine to suppress the zero divide check.

A similar example occurs when suppressing storage checks on allocators. The 
allocator is going to be implemented by some sort of library routine, and it 
is that routine that makes the check. One could imagine passing a flag 
whether or not you want the check suppressed, but that would slow down all 
allocations -- so typically no way to eliminate the check is provided.

OTOH, range checks take extra code on every processor that I've ever worked 
with, and compilers do everything that they can to avoid generating that 
extra code. So in practice, range checks are always suppressed when it is 
asked for.
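
A hedged sketch of what asking for that looks like in source (my example, not 
Randy's; it assumes a `Vector` type like `type Vector is array (Positive 
range <>) of Float;` declared elsewhere):

```ada
--  Sketch under the assumption above.  X'Range may differ from Y'Range,
--  so the check on X (I) is not provable; pragma Suppress asks the
--  compiler to omit it.  If the check would have failed, execution is
--  erroneous (RM 11.5).
procedure Saxpy (Y : in out Vector; X : Vector; A : Float) is
   pragma Suppress (Index_Check);
begin
   for I in Y'Range loop
      Y (I) := Y (I) + A * X (I);
   end loop;
end Saxpy;
```

Note that even without the pragma, only the check on X (I) costs anything here; 
the check on Y (I) is the provable kind discussed above.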

You have to keep in mind the difference between Ada as defined by the 
standard and particular Ada implementations. As Bob says, implementers don't 
make their compilers generate bad code if they don't have a good reason. But 
the standard (like all standards) gives the implementers some flexibility to 
generate the best code for a given target. So some questions (including 
*all* questions about performance) have to be answered in the context of 
particular implementations (and often, particular code). "Ada" (or "C" for 
that matter) doesn't have performance characteristics; Ada implementations 
like GNAT or Janus or ObjectAda do.

One more point: when you suppress checks in Ada, all you are doing is 
telling the compiler that "I guarantee that these checks will not fail". If 
they do fail, the program is what Ada calls erroneous -- and the code is 
allowed to do anything at all (including, as the saying goes, "erasing the 
hard disk" -- although I'm not aware of that ever actually happening on a 
protected OS like Windows or Linux. More likely, it would cause some sort of 
buffer overflow bug). That includes raising the exception that the check would 
have raised. Since the OP is mostly interested in performance, this doesn't 
matter -- but it might in a safety-critical application, so it has to be 
defined in the Standard.

                                   Randy.








^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-14  1:34                       ` Randy Brukardt
@ 2010-08-14  7:18                         ` anon
  0 siblings, 0 replies; 94+ messages in thread
From: anon @ 2010-08-14  7:18 UTC (permalink / raw)



In <i44rqb$efe$1@munin.nbi.dk>, "Randy Brukardt" <randy@rrsoftware.com> writes:
>"Robert A Duff" <bobduff@shell01.TheWorld.com> wrote in message 
>news:wcchbiytg9e.fsf@shell01.TheWorld.com...
>> Elias Salomão Helou Neto <eshneto@gmail.com> writes:
>....
>>>> Anyway, someone prematurely implied that I may be doing premature
>>>> optimization. The fact is that I do know, from profiling, how
>>>> important is _not_ to range check in my specific application, so I
>>>> will try to give you a "bottom line" of what have been said that
>>>> really matters to me.
>>>>
>>>> 1) You can, in more than one way, tell the compiler to suppress most
>>>> (any?) checks, but people do not advise to do so. Even if I say that I
>>>> do need that :(
>>
>> Even if it's true that it is "important _not_ to range check",
>> that does not necessarily mean that you need to suppress checks.
>> As Dmitry pointed out, many checks will be removed by the
>> compiler even if you don't suppress them -- it will remove the
>> ones it can prove will not fail.  The only way to determine
>> whether that's good enough is to measure it, and/or look
>> at the machine code.
>>
>> As for the advice you mention, if it said "Never suppress checks",
>> it's bad advice.
>
>I think it was my comment that he was responding to. Bob has captured my 
>intent perfectly; you probably won't need to Suppress range checks, because 
>the compiler probably already will have removed them. But if you do need to, 
>you can.
>
>>> ask for the suppression, but it does NOT mandate the compiler to
>>> actually
>>> skip such checks. ...
>>
>> Of course!  That's true of all higher-level languages, including C.
>> Higher-level language never formally define what machine code gets
>> generated, nor how efficient it has to be -- there's no way to do so.
>> A C compiler is allowed by the C standard to do array bounds checking,
>> and I assure you that if such a C compiler exists, the checks are
>> FAR more expensive than for Ada.
>>
>> The Ada RM has some informal rules, called "Implementation Advice"
>> and "Implementation Requirements", and there's one in there
>> somewhere saying compilers really ought to skip the checks
>> when it makes sense.
>>
>> Don't worry, Ada compiler writers do not deliberately try
>> to harm their customers -- of course they actually skip the
>> checks when it makes sense.  Note that it doesn't always make
>> sense, since some checks are done for free by the hardware.
>
>Again, I agree with Bob. All Ada compilers that I know of do suppress checks 
>when the permission is given. But it is also the case that some checks might 
>be made automatically by the underlying machine or in the runtime, and they 
>might be very difficult or impossible to remove.
>
>For example, the integer divide instruction on the X86 traps if an attempt 
>to divide by zero occurs. Suppressing this check would be rather expensive, 
>as the compiler would have to provide some sort of map to the trap handler 
>as to whether or not the check was suppressed at the point of the divide. 
>Plus you would have to be able to restart after the divide (not all OSes 
>provide this capability for trap handlers). So it is impractical on an X86 
>machine to suppress the zero divide check.
>
>A similar example occurs when suppressing storage checks on allocators. The 
>allocator is going to be implemented by some sort of library routine, and it 
>is that routine that makes the check. One could imagine passing a flag 
>whether or not you want the check suppressed, but that would slow down all 
>allocations -- so typically no way to eliminate the check is provided.
>
>OTOH, range checks take extra code on every processor that I've ever worked 
>with, and compilers do everything that they can to avoid generating that 
>extra code. So in practice, range checks are always suppressed when it is 
>asked for.
>
>You have to keep in mind the difference between Ada as defined by the 
>standard and particular Ada implementations. As Bob says, implementers don't 
>make their compilers generate bad code if they don't have a good reason. But 
>the standard (like all standards) give the implementers some flexibility to 
>generate the best code for a given target. So some questions (including 
>*all* questions about performance) have to be answered in the context of 
>particular implementations (and often, particular code). "Ada" (or "C" for 
>that matter) doesn't have performance characteristics; Ada implementations 
>like GNAT or Janus or ObjectAda do.
>
>One more point: when you suppress checks in Ada, all you are doing is 
>telling the compiler that "I guarentee that these checks will not fail". If 
>they do fail, the program is what Ada calls erroneous -- and the code is 
>allowed to do anything at all (including, as the saying goes, "erasing the 
>hard disk" -- although I'm not aware of that ever actually happening on a 
>protected OS like Windows or Linux. More likely, it would cause some sort of 
>buffer overflow bug). That includes raising the exception that the check 
>including. Since the OP is mostly interested in performance, this doesn't 
>matter -- but it might in a safety-critical application, so it has to be 
>defined in the Standard.
>
>                                   Randy.
>


If a program uses pragma Suppress or Restrictions, it just means that the 
author is required to insert data verification where the program needs it.

And in the case of integer math, most CPUs do provide error checking that 
most OSes trap, and in some cases the OS allows the program to provide the 
error handling -- as with divide-by-zero, which most CPUs trap and allow an 
OS like Linux or Windows to handle. These OSes provide hooks for the program 
to bypass the OS default action. Sometimes it is easier to let a secondary 
task handle the hardware check than to insert checks within the program 
code.

Because why should a program verify and re-verify an object, such as a 
number, each time that object is passed into a routine or a package? 
Suppressing checks can reduce code size as well as speed up execution.
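
That idea of verifying once instead of on every call might look like this 
(a hypothetical sketch, mine rather than the poster's; `Vector` is assumed 
to be an unconstrained array type and `Unchecked_Update` a made-up routine 
whose body is compiled with checks suppressed):

```ada
--  Hypothetical sketch: validate the index once, at the API boundary,
--  then call into an inner routine that was compiled with checks
--  suppressed.  Names here are illustrative, not from the post.
procedure Process (Buffer : in out Vector; Index : Integer) is
begin
   if Index not in Buffer'Range then
      raise Constraint_Error with "Index out of range";
   end if;
   --  Safe to skip re-checking inside: the guarantee was established above.
   Unchecked_Update (Buffer, Index);
end Process;
```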

Also, no program or OS can provide every check for any system or program. 
For example, in the rare case where the CPU or memory itself fails, how do 
you check the CPU or the code memory while the program is running? So any 
program can, and in time will, execute erroneously as the hardware fails, 
which is not predictable. The Ada RM allows for this in RM 1.1.5 (9, 10). 
So any and all programs can become erroneous.




^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-13 19:52                       ` Robert A Duff
@ 2010-08-14  9:44                         ` Georg Bauhaus
  0 siblings, 0 replies; 94+ messages in thread
From: Georg Bauhaus @ 2010-08-14  9:44 UTC (permalink / raw)


On 8/13/10 9:52 PM, Robert A Duff wrote:
> Georg Bauhaus<rm.dash-bauhaus@futureapps.de>  writes:
>
>> 4. The description (however obsolescent) of pragma Suppress
>> ...
>
> Pragma Suppress is not obsolescent.  Just the "On =>  ..."
> feature is.

Sorry if I was spreading a misunderstanding.


Georg




^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-14  0:02                           ` Yannick Duchêne (Hibou57)
  2010-08-14  0:16                             ` (see below)
@ 2010-08-14 10:47                             ` Brian Drummond
  2010-08-14 13:58                               ` Yannick Duchêne (Hibou57)
  2010-08-14 14:51                               ` (see below)
  1 sibling, 2 replies; 94+ messages in thread
From: Brian Drummond @ 2010-08-14 10:47 UTC (permalink / raw)


On Sat, 14 Aug 2010 02:02:35 +0200, Yannick Duchêne (Hibou57)
<yannick_duchene@yahoo.fr> wrote:

>Le Sat, 14 Aug 2010 01:29:56 +0200, <tmoran@acm.org> a écrit:
>>   The Burroughs machines were like that.  The descriptor pointing to
>> an array included the size as well as the base address, so index in-range
>> was automatically checked by the hardware.
>So these old machines were in some way safer than ones of nowadays
>
>A similar thing exist with Intel CPUs : memory segment are defined using a  
>base and a limit. This may be feasible to assign a memory segment to each  
>array and then get the same check done (an hardware exception would be  
>raised if ever an attempt to access a byte beyond the limit was maid),  
>while this would come with a cost, at least the one to have to switch  
>memory segment register all the time (this is more costly than reading and  
>writing any other register).

I believe that was the serious intent behind the segments on the 80286, but I
don't know of any software that took full advantage of it (the 8086 segments
were just a hack to address a whole megabyte!). Possibly OS/2 did, at the
process level, but nothing at the object level within application software. 
(The segments also offered privilege levels, so your app couldn't corrupt a
kernel segment). But you're right, there were an extra few cycles penalty
incurred in using 80286 "protected" mode, and at the time, that mattered to
users.

I expect that penalty would be completely hidden by a cache line load nowadays,
if the facility had been used heavily enough to justify improving it, instead of
regarding it as an expensive aberration and letting it quietly atrophy.

>I found a nice page about Burroughs machines and others:
>http://www.cs.virginia.edu/brochure/museum.html

Allegedly the Intel 432 chips pictured there were intended specifically to
target Ada, and the 286 inherited its segments from them (possibly with B5000
influence) as the 432 feature most worth saving, with the smallest
implementation penalty.

I once had my hands on a set of 432 datasheets - its slowness appears to have
largely been the result of pushing a huge microcode word across a 16-bit bus, to
save pins on the very expensive custom packages (before PGA packages).

I was involved with the Linn Rekursiv processor (which I'm sure at least two
other group regulars remember) which used 299-pin packages (2 of them plus a
223-pin) for enough connectivity to overcome most of the 432's problems. There
were some similarities between the two processors otherwise.
In a single cycle, one small corner of the Rekursiv would perform both range
checks and type checks, in parallel with whatever the ALU was doing on the real
data.

At the time it was billed as the most complex processor in existence, but if you
look at the complexity of the hardware needed to support safety and security
without additional execution time, it is pretty much invisible next to a
superscalar scheduler, branch prediction, or L1 cache. The entire Rekursiv was
about 70k gates + memory, next to a contemporary RISC of around 20-30k.

There was much talk then of "bridging the semantic gap" between high level
languages and the low level operations in hardware. It seems we have
subsequently bridged the semantic gap, by leaning to program in C...

- Brian



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-14 10:47                             ` Brian Drummond
@ 2010-08-14 13:58                               ` Yannick Duchêne (Hibou57)
  2010-08-15  0:23                                 ` Brian Drummond
  2010-08-14 14:51                               ` (see below)
  1 sibling, 1 reply; 94+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-08-14 13:58 UTC (permalink / raw)


Le Sat, 14 Aug 2010 12:47:27 +0200, Brian Drummond  
<brian_drummond@btconnect.com> a écrit:
> In a single cycle, one small corner of the Rekursiv would perform both  
> range
> checks and type checks,

Range check... and type check. Just real vs. integer, or was there some kind  
of typing for integers as well? Maybe also character, BCD, and the like?

> The entire  Rekursiv was
> about 70k gates + memory, next to a contemporary RISC of around 20-30k.

70 000 gates, that's light.

> There was much talk then of "bridging the semantic gap" between high  
> level
> languages and the low level operations in hardware.

Yes. This matter seems gone now, or else nobody cares about it or believes  
in it any more.

Thanks for your whole account, it was very interesting :)

-- 
There is even better than a pragma Assert: a SPARK --# check.
--# check C and WhoKnowWhat and YouKnowWho;
--# assert Ada;
--  i.e. forget about previous premises which leads to conclusion
--  and start with new conclusion as premise.



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-14 10:47                             ` Brian Drummond
  2010-08-14 13:58                               ` Yannick Duchêne (Hibou57)
@ 2010-08-14 14:51                               ` (see below)
  2010-08-15  0:58                                 ` Brian Drummond
  1 sibling, 1 reply; 94+ messages in thread
From: (see below) @ 2010-08-14 14:51 UTC (permalink / raw)


On 14/08/2010 11:47, in article jvqc66t42m4ek5rkpj59ctu3r75fb4hln2@4ax.com,
"Brian Drummond" <brian_drummond@btconnect.com> wrote:

> I was involved with the Linn Rekursiv processor (which I'm sure at least two
> other group regulars remember)

Indeed. Though David Harland's ideas on programming language design were
about as far removed from the Ada philosophy as it is possible to get.

> There was much talk then of "bridging the semantic gap" between high level
> languages and the low level operations in hardware. It seems we have
> subsequently bridged the semantic gap, by leaning to program in C...

One of our subject's many self-inflicted tragedies (c.f. the apparently
pervasive ignorance about the Burroughs architecture).

-- 
Bill Findlay
<surname><forename> chez blueyonder.co.uk





^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-13 15:08                 ` Elias Salomão Helou Neto
                                     ` (2 preceding siblings ...)
  2010-08-13 16:58                   ` (see below)
@ 2010-08-14 16:13                   ` Charles H. Sampson
  2010-08-16 13:45                     ` Elias Salomão Helou Neto
  3 siblings, 1 reply; 94+ messages in thread
From: Charles H. Sampson @ 2010-08-14 16:13 UTC (permalink / raw)


Elias Salomão Helou Neto <eshneto@gmail.com> wrote:

> ...
> 
> Well, we are definitely drifting away from the original question. So
> much that I am considering to give up reading answers from this
> post...
> 
> Anyway, someone prematurely implied that I may be doing premature
> optimization. The fact is that I do know, from profiling, how
> important is _not_ to range check in my specific application, so I
> will try to give you a "bottom line" of what have been said that
> really matters to me.
> 
> ...

     I think I'm the guy who implied that you're doing premature
optimization, or at least the guy who you think implied that.

     You appear to have a deep knowledge of your application.  I
certainly don't.  Therefore I have to defer to your judgement on the
question.  But I probably did raise the question that you _might be_
prematurely worrying about optimizing.  What I remember saying is that
almost every time  _I_ make a guess about optimization, I'm wrong.

     A number of people have given you information about optimization of
a number of Ada compilers.  I'll give you an example of the last one I
worked with.  This compiler uses a not uncommon approach to code
generation.  It initially generates pretty mediocre code, in particular
code that contains a lot of unnecessary range checks and even
unnecessary register loads.  Then, if the user wants, it passes that
code through a pretty good optimizer.  You might say that the compiler
has two modes, development and production.

     My last project was a real-time application.  We hit all of our
deadlines, which is all that counts in the real-time world.  (We
achieved this in the crudest way: We beat it about the head and
shoulders with a very fast CPU and gobs of memory.)  Nonetheless, twice
during my time on the project, I compiled the whole application using
the optimizer (but not with range checks, or any checks, explicitly
suppressed).  The generated code improved dramatically.  Checks that the
optimizer could verify weren't needed disappeared.  Code motion did some
pretty things.  And so on.  However, when I ran this much improved
version, I couldn't see any difference between it and the crude version,
including any increase in the amount of time the application was in its
idle state.  In short, whatever improvement there was in execution,
wasn't worth the increase in compilation time.

     That's a pretty standard story, illustrating that you usually need
to analyze before optimizing.  However, as people love to say, YMMV.
(That's twitter-speak for "This might not apply in your case.")

                        Charlie
-- 
All the world's a stage, and most 
of us are desperately unrehearsed.  Sean O'Casey



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-14 13:58                               ` Yannick Duchêne (Hibou57)
@ 2010-08-15  0:23                                 ` Brian Drummond
  0 siblings, 0 replies; 94+ messages in thread
From: Brian Drummond @ 2010-08-15  0:23 UTC (permalink / raw)


On Sat, 14 Aug 2010 15:58:06 +0200, Yannick Duchêne (Hibou57)
<yannick_duchene@yahoo.fr> wrote:

>Le Sat, 14 Aug 2010 12:47:27 +0200, Brian Drummond  
><brian_drummond@btconnect.com> a écrit:
>> In a single cycle, one small corner of the Rekursiv would perform both  
>> range
>> checks and type checks,
>
>Range check... and type check. Just real vs integer or was their some kind  
>of typing in integers as well ? Also may be character, BCD and the like ?

Integer was an object like anything else - you could ask 2 its class, size etc.

The equivalent of fetching a word from memory was "paging" an object, by its
object number, which would simultaneously fetch its type, size in words, address
in memory (if it had one), and the first word of its representation - usually
from a (155-bit wide) cache.

Integers and other simple types encoded the (32-bit) value in the (40-bit)
object number so didn't have a separate body in memory - the paging operation
unpacked all the above data as though it had come from cache, so even microcode
didn't see them as any different. Being an astronomer, David Harland named these
"compact objects"!

You could certainly subclass integers and impose other semantics - subranges,
units, currency exchange rates - on them if desired, but without explicit
microcode support, the subclasses would be memory-based objects and thus slower.

Explicit microcode support was not so far fetched, either - it had a writable
microcode store - though if there were ever a secure Rekursiv, I am convinced
the microcode would have to be in ROM. 

It had a novel approach in many ways - the microcode could be arbitrarily
complex, (recursive; hence the name) so garbage collection could be a single
instruction, or even none - running in the background. In theory you could turn
the nodes of a syntax tree into an instruction set, and execute parse trees
directly.

Page faults didn't interfere with complex instructions - instead of aborting
them, they were suspended on stacks, and resumed after handling the fault. 
(To be fair, the Rekursiv didn't do all that itself; to keep the project size
down, a separate "disk processor" handled the faults. Potentially an embedded
processor, but a Sun-3 workstation in the prototypes) 

>> The entire  Rekursiv was
>> about 70k gates + memory, next to a contemporary RISC of around 20-30k.
>
>70 000 gates, that's light.

Not so light in 1987...

- Brian





^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-14 14:51                               ` (see below)
@ 2010-08-15  0:58                                 ` Brian Drummond
  2010-08-15  1:58                                   ` (see below)
  0 siblings, 1 reply; 94+ messages in thread
From: Brian Drummond @ 2010-08-15  0:58 UTC (permalink / raw)


On Sat, 14 Aug 2010 15:51:15 +0100, "(see below)" <yaldnif.w@blueyonder.co.uk>
wrote:

>On 14/08/2010 11:47, in article jvqc66t42m4ek5rkpj59ctu3r75fb4hln2@4ax.com,
>"Brian Drummond" <brian_drummond@btconnect.com> wrote:
>
>> I was involved with the Linn Rekursiv processor (which I'm sure at least two
>> other group regulars remember)
>
>Indeed. Though David Harland's ideas on programming language design were
>about as far removed from the Ada philosophy as it is possible to get.

I'm not so sure on *design* - the goals were largely to achieve expressive
power, to give the programmer the best abstractions; the best means of
expressing the task, instead of working around the limitations of the language.

I was reminded today of his insistence that there should be no difference
between intrinsic and extrinsic notation, when I made a record "private" and
replaced all its field accesses with accessor functions - and because the syntax
was the same, the client code worked unchanged. 

So I suggest there is at least some common ground.

But on *implementation*, or the route to achieving those goals - absolutely!
Everything was to be done at runtime, by the hardware, rather than at compile
time. Including strict typing, or indeed any typing at all! 

And with _some_ justification - an Ada compiler at the time was not a trivial
task, to write or even to run... while his compilers were simple in the
extreme, ultimately supporting a language (called Lingo, long before Macromedia
borrowed the name) with a very compact grammar. This could be compiled to an
instruction set of eleven instructions. 

I won't claim any great merit in that simplicity, but it did allow a team of six
or seven people at its largest to develop the whole lot; from compilers through
microcode to chipset to circuit board. I wonder how many people worked on Ada...

>> There was much talk then of "bridging the semantic gap" between high level
>> languages and the low level operations in hardware. It seems we have
>> subsequently bridged the semantic gap, by leaning to program in C...
(ack! should have read: learning to program in C)

>One of our subject's many self-inflicted tragedies (c.f. the apparently
>pervasive ignorance about the Burroughs architecture).

I never had the chance to work on those, though I remember lectures on them.
Perhaps we need a Burroughs B5000 emulator; it would probably fit on a
medium-sized Spartan-3 FPGA...

- Brian



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-15  0:58                                 ` Brian Drummond
@ 2010-08-15  1:58                                   ` (see below)
  2010-08-15 10:31                                     ` Brian Drummond
  0 siblings, 1 reply; 94+ messages in thread
From: (see below) @ 2010-08-15  1:58 UTC (permalink / raw)


On 15/08/2010 01:58, in article irce665hsdkil92saql82nl41cc3vpbq1j@4ax.com,
"Brian Drummond" <brian_drummond@btconnect.com> wrote:

> On Sat, 14 Aug 2010 15:51:15 +0100, "(see below)" <yaldnif.w@blueyonder.co.uk>
> wrote:
> 
>> On 14/08/2010 11:47, in article jvqc66t42m4ek5rkpj59ctu3r75fb4hln2@4ax.com,
>> "Brian Drummond" <brian_drummond@btconnect.com> wrote:
>> 
>>> I was involved with the Linn Rekursiv processor (which I'm sure at least two
>>> other group regulars remember)
>> 
>> Indeed. Though David Harland's ideas on programming language design were
>> about as far removed from the Ada philosophy as it is possible to get.
> 
> I'm not so sure on *design* - the goals were largely to achieve expressive
> power, to give the programmer the best abstractions; the best means of
> expressing the task, instead of working around the limitations of the
> language.

Yes, but I fear he was over-optimistic about the ability of most programmers
to design coherent and effective language extensions/features (as opposed to
application-oriented abstractions) for themselves. We had many an argument
about that. I think the history of programming languages supports my
pessimism. Ada stands out, head and shoulders above the other 700K
programming languages.

>>> There was much talk then of "bridging the semantic gap" between high level
>>> languages and the low level operations in hardware. It seems we have
>>> subsequently bridged the semantic gap, by leaning to program in C...
> (ack! should have read: learning to program in C)

"leaning" works too, 8-)

-- 
Bill Findlay
<surname><forename> chez blueyonder.co.uk




^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-15  1:58                                   ` (see below)
@ 2010-08-15 10:31                                     ` Brian Drummond
  0 siblings, 0 replies; 94+ messages in thread
From: Brian Drummond @ 2010-08-15 10:31 UTC (permalink / raw)


On Sun, 15 Aug 2010 02:58:11 +0100, "(see below)" <yaldnif.w@blueyonder.co.uk>
wrote:

>On 15/08/2010 01:58, in article irce665hsdkil92saql82nl41cc3vpbq1j@4ax.com,

>>> Indeed. Though David Harland's ideas on programming language design were
>>> about as far removed from the Ada philosophy as it is possible to get.
>> 
>> I'm not so sure on *design* - the goals were largely to achieve expressive
>> power, to give the programmer the best abstractions; 

>Yes, but I fear he was over-optimistic about the ability of most programmers
>to design coherent and effective language extensions/features (as opposed to
>application-oriented abstractions) for themselves. 

I think that's an interesting distinction - when are you extending the language,
rather than abstracting over application details? Probably not a question for
here and now. But - maybe I shouldn't say this - I see some echoes of David's
arguments when Dmitry Kazakov is expressing frustration at Ada's limitations.

>We had many an argument
>about that. I think the history of programming languages supports my
>pessimism. Ada stands out, head and shoulders above the other 700K
>programming languages.

Yes, there is no doubt that Ada's complexity - though it was one of our drivers
to simplify the software (especially compilers) at the expense of the hardware
- is no longer relevant. Compilers have advanced enormously, and machines can
run them fast enough!  

However I think the way Ada developed in 95 and 2005 brings it much closer to
the expressive power he was seeking, without compromising on its compile time
strengths. 

But, I am told, writing astronomy books is more fun anyway.

Thanks for sharing memories,

- Brian



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-06 20:21 Efficiency of code generated by Ada compilers Elias Salomão Helou Neto
                   ` (3 preceding siblings ...)
  2010-08-08 14:03 ` Gene
@ 2010-08-15 12:32 ` Florian Weimer
  4 siblings, 0 replies; 94+ messages in thread
From: Florian Weimer @ 2010-08-15 12:32 UTC (permalink / raw)


* Elias Salomão Helou Neto:

> I would like to know how does code generated by Ada compilers compare
> to those generated by C++. I use C++ for numerical software
> implementation, but I am trying to find alternatives. One thing,
> however, I cannot trade for convenience is efficiency. Will Ada
> compiled code possibly be as efficient as that generated by C++
> compilers?

For Ada 83 features except tasking, performance will be very similar.

> Also, I do need to have something similar to C++ "templated
> metaprogramming" techniques. In particular, C++0x will introduce
> variadic templates, which will allow us to write templates that will
> generate efficient, type-safe, variable-argument functions. Is there
> anything like that in Ada?

You could try writing a code generator, possibly using ASIS.  Whether
this is practical depends on the task at hand.  A code generator is
typically not more difficult to build than a good template
meta-programming library (which is a code generator, too, but has to
be written in a rather bizarre way).

> If any of the above questions is to be negatively answered, I ask: why
> does Ada even exist?

It's older than (modern) C++, so there is tons of software that uses
it, and it has a relatively safe subset which is difficult to leave.

> And further, is there any language which is _truly_ better
> (regarding code maintainability, readability and developing ease)
> than C++ and as overhead-free as it?

D is often mentioned in this context.

For many tasks, it's difficult to do significantly better than C++
without garbage collection, so the question is whether you can afford
that.  (Few Ada compilers support garbage collection, though.)



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-13 15:37                   ` Dmitry A. Kazakov
@ 2010-08-16 13:29                     ` Elias Salomão Helou Neto
  2010-08-16 14:09                       ` Dmitry A. Kazakov
  0 siblings, 1 reply; 94+ messages in thread
From: Elias Salomão Helou Neto @ 2010-08-16 13:29 UTC (permalink / raw)


On Aug 13, 12:37 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
wrote:
> On Fri, 13 Aug 2010 08:08:42 -0700 (PDT), Elias Salomão Helou Neto wrote:
>
> > 1) You can, in more than one way, tell the compiler to suppress most
> > (any?) checks, but people do not advise to do so. Even if I say that I
> > do need that :(
>
> You think you need that, but you cannot know it, because you didn't used
> Ada and have no experience to realistically estimate the amount of checks
> an Ada compiler would introduce in your, not yet developed program.

This is a good point indeed! But it would take a really smart compiler
to realize that my ray-tracing algorithm is mathematically designed
not to go out of bounds, and so suppress those checks automatically.
Anyway, it would be unwise not to try it out.

>
> > P.S: Multidimensional _arrays_ are not multidimensional _matrices_
>
> I don't see any difference, unless specialized non-dense matrices are
> considered.
>
> > neither are them MD _images_.
>
> Same here. Multi-channel images, sequences of images, scenes can be seen
> and implemented as  arrays. Again, unless some special hardware is in play.

Well, in such a case C could do the job, right? Encapsulating the low-
level array implementation within well-designed classes, possibly
templated ones, would be a much better idea, would it not?

What I meant from the beginning was to possibly write code such as
(sorry for the C++ syntax):

Matrix< 3, double > volumeImage( 100, 100, 100 );
Matrix< 2, char > asciiArt( 50, 75 );

asciiArt( 10, 10 ) = 'a';
volumeImage( 1, 2, 3 ) = 5.7;

where those two- or three-dimensional members/constructors would be
automatically generated by the compiler using "template
metaprogramming". It will be possible with C++0x; it is already
possible using the experimental implementation of the new standard
in GCC.

>
> > The two latter are far more specific
> > classes and the last case even needs to be templated in, say, pixel
> > type, etc.
>
> I don't see a great need in that. Usually the type of pixel (grayscale,
> tricky color models, separate channels etc) influences image operations so
> heavily that the common denominator might become too thin to be useful, at
> least in the case of image processing.

Hmm, I guess you're not quite into the medical image reconstruction
field, right? Granted, you may very likely be a much better and more
experienced programmer than I am (I am a mathematician, not a computer
scientist), but you should realize that non-overlapping pixels are not
the only beast in this world. There are useful spherically symmetric
pixels that may be parametrized in more than one way and for which you
won't find a hardware implementation - recall that it is image
reconstruction, not processing.

As for matrices, you got it right when you mentioned sparsity. But it
is not a truly fundamental issue for me, so let us move right along.

Again, you're right when you say that the common denominator may be
too thin. I am right now struggling to find a good compromise, and for
that matter C++ has template specializations, which are great in this
sense - but far from ideal, of course.

> --
> Regards,
> Dmitry A. Kazakov
> http://www.dmitry-kazakov.de

Thanks,
Elias Salomão Helou Neto.



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-14 16:13                   ` Charles H. Sampson
@ 2010-08-16 13:45                     ` Elias Salomão Helou Neto
  0 siblings, 0 replies; 94+ messages in thread
From: Elias Salomão Helou Neto @ 2010-08-16 13:45 UTC (permalink / raw)


On Aug 14, 1:13 pm, csamp...@inetworld.net (Charles H. Sampson) wrote:
> Elias Salomão Helou Neto <eshn...@gmail.com> wrote:
>
> > ...
>
> > Well, we are definitely drifting away from the original question. So
> > much that I am considering to give up reading answers from this
> > post...
>
> > Anyway, someone prematurely implied that I may be doing premature
> > optimization. The fact is that I do know, from profiling, how
> > important is _not_ to range check in my specific application, so I
> > will try to give you a "bottom line" of what have been said that
> > really matters to me.
>
> > ...
>
>      I think I'm the guy who implied that you're doing premature
> optimization, or at least the guy who you think implied that.
>
>      You appear to have a deep knowledge of your application.  I
> certainly don't.  Therefore I have to defer to your judgement on the
> question.  But I probably did raise the question that you _might be_
> prematurely worrying about optimiziing.  What I remember saying is that
> almost every time  _I_ make a guess about optimization, I'm wrong.

Dmitry pointed out above that the compiler could suppress the checks
by itself, which would mean my optimization was unnecessary anyway. I
think it was a good point indeed, but I kind of doubt that a compiler
could be smart enough to do that in my case anyway. Maybe I am
just not smart enough to think of a good optimizing strategy for
compilers :)

And yes, I know how many times guesses about optimization are just
plain and simply wrong.

>      A number of people have given you information about optimization of
> a number of Ada compilers.  I'll give you an example of the last one I
> worked with.  This compiler uses a not uncommon approach to code
> generation.  It initially generates pretty mediocre code, in particular
> code that contains a lot of unnecessary range checks and even
> unnecessary register loads.  Then, if the user wants, it passes that
> code through a pretty good optimizer.  You might say that the compiler
> has two modes, development and production.
>
>      My last project was a real-time application.  We hit all of our
> deadlines, which is all that counts in the real-time world.  (We
> achieved this in the crudest way: We beat it about the head and
> shoulders with a very fast CPU and gobs of memory.)  Nonetheless, twice
> during my time on the project, I compiled the whole application using
> the optimizer (but not with range checks, or any checks, explicitly
> suppressed).  The generated code improved dramatically.  Checks that the
> optimizer could verify weren't needed disappeared.  Code motion did some
> pretty things.  And so on.  However, when I ran this much improved
> version, I couldn't see any difference between it and the crude version,
> including any increase in the amount of time the application was in its
> idle state.  In short, whatever improvement there was in execution,
> wasn't worth the increase in compilation time.
>
>      That's a pretty standard story, illustrating that you usally need
> to analyze before optimizing.  However, as people love to say, YMMV.
> (That's twitter-speak for "This might not apply in your case.")

In my case, things were somewhat different. For example, manually
telling GCC to inline some functions reduced running times by about
15%. Even with GCC's optimizer fully on, it was not able to inline
those functions automatically. That should have been a simple
matter for the optimizer, since I did not use any pointers to those
functions or anything similar. I guess we are running into different
results here because I am writing numerical software that runs very
tight loops within which I have to address matrix elements in a
_very_ efficient manner.
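
For the record, the GCC-specific way to force such inlining (the helper below
is a hypothetical stand-in for my real functions, not code from my project) is
the always_inline attribute:

```cpp
#include <cstddef>

// GCC/Clang-specific: always_inline overrides the inliner's heuristics,
// even where a plain "inline" hint would be ignored.
__attribute__((always_inline)) inline
double weight(const double* row, std::size_t j) {
    return 0.5 * row[j];  // hypothetical tight-loop helper
}

double sum_row(const double* row, std::size_t n) {
    double s = 0.0;
    for (std::size_t j = 0; j < n; ++j)
        s += weight(row, j);  // GCC is required to inline this call
    return s;
}
```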

>
>                         Charlie
> --
> All the world's a stage, and most
> of us are desperately unrehearsed.  Sean O'Casey

Thank you,
Elias Salomão Helou Neto



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-16 13:29                     ` Elias Salomão Helou Neto
@ 2010-08-16 14:09                       ` Dmitry A. Kazakov
  2010-08-18 14:00                         ` Elias Salomão Helou Neto
  0 siblings, 1 reply; 94+ messages in thread
From: Dmitry A. Kazakov @ 2010-08-16 14:09 UTC (permalink / raw)


On Mon, 16 Aug 2010 06:29:42 -0700 (PDT), Elias Salomão Helou Neto wrote:

> Well, in such a case C could do the job, right?

Everything is Turing-Complete...

> Encapsulating the low-
> level array implementation within well designed classes, possibly
> templated ones, would be a much better idea, would not?
> 
> What I meant from the beginning was to possibly write code such as
> (sorry for the C++ syntax):
> 
> Matrix< 3, double > volumeImage( 100, 100, 100 );
> Matrix< 2, char > asciiArt( 50, 75 );
> 
> asciiArt( 10, 10 ) = 'a';
> volumeImage( 1, 2, 3 ) = 5.7;

Well, maybe, but it is already done in Ada. Furthermore, the compiler
"knows" about arrays and can apply array-specific optimizations or take
advantage of array-specific processor instructions. Not that optimization
is my concern, but it seemingly is yours.

> Where those two or three dimensional members/constructors were
> automatically generated by the compiler by using "template
> metaprogramming". It will be possible with C++0x. It is already
> possible by using the experimental implementation of the new standard
> in GCC.

I don't see any gain, because this is trivial compared to the real
challenges of modeling linear algebraic types. Consider the description of the
types on which matrix multiplication is defined, e.g. NxM * MxK = NxK, where
NxM denotes a matrix with dimensions N and M. Is it one type, or many types, or
many subtypes? Templates give no choice but many different unrelated types when
dimensionality is a parameter. Not my choice. Now you have a combinatorial
explosion of overloaded variants. Generate them: all possible multiplication
operation profiles for 4D, 3D, 2D, 1D matrices. Make the constraint on matrix
size in a given dimension statically enforced, with appropriate
(understandable!) error messages when sizes are statically known. Allow
unconstrained matrices as well. Generate aggregates for each matrix type.
Provide matrix slicing operations for all types of matrices (e.g. column from
matrix, row from matrix, submatrix from matrix, etc.). Have fun!
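
To make the statically-known case concrete: with dimensions as template
parameters, only conforming products (an N-by-M matrix times an M-by-K matrix,
giving N-by-K) exist at all. A minimal C++ sketch, with illustrative types of
my own, covering only this fixed-size fragment of the list above:

```cpp
#include <cstddef>

// Hypothetical statically-sized matrix: dimensions are part of the type,
// so a size mismatch in operator* is a compile-time error.
template <std::size_t Rows, std::size_t Cols>
struct Mat {
    double a[Rows][Cols] = {};  // zero-initialized
};

// Only the N-by-M * M-by-K product is declared; multiplying, say,
// Mat<2,3> by Mat<2,3> simply fails to compile.
template <std::size_t N, std::size_t M, std::size_t K>
Mat<N, K> operator*(const Mat<N, M>& x, const Mat<M, K>& y) {
    Mat<N, K> r;
    for (std::size_t i = 0; i < N; ++i)
        for (std::size_t k = 0; k < K; ++k)
            for (std::size_t j = 0; j < M; ++j)
                r.a[i][k] += x.a[i][j] * y.a[j][k];
    return r;
}
```

The unconstrained, aggregate and slicing requirements are exactly the parts
this sketch does not touch.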

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-16 14:09                       ` Dmitry A. Kazakov
@ 2010-08-18 14:00                         ` Elias Salomão Helou Neto
  2010-08-18 16:38                           ` Dmitry A. Kazakov
  0 siblings, 1 reply; 94+ messages in thread
From: Elias Salomão Helou Neto @ 2010-08-18 14:00 UTC (permalink / raw)


> Well, maybe, but it is already done in Ada.

Really? This is great. Also, Ada's compile-time checking features are
(or seem to be) truly impressive, but I did not feel compelled to learn a
whole new language only because of them. Furthermore, C++ can use so
many numeric libraries, which I am not sure would be available in Ada,
that I tend to stay here in my comfort zone.

> I don't see any gain, because this is trivial compared to the real
> challenges of modeling linear algebraic types. Consider the description of
> the types on which matrix multiplication is defined, e.g. NxM * MxK = NxK,
> where NxM denotes a matrix with dimensions N and M. Is it one type, or many
> types, or many subtypes? Templates give no choice but many different
> unrelated types when dimensionality is a parameter. Not my choice. Now you
> have a combinatorial explosion of overloaded variants. Generate them: all
> possible multiplication operation profiles for 4D, 3D, 2D, 1D matrices.
> Make the constraint on matrix size in a given dimension statically enforced,
> with appropriate (understandable!) error messages when sizes are
> statically known.

Not quite, because you could possibly write a templated operator* that
could make use of inhomogeneous operands (by that I mean different
types on each side of the operator call). To be honest, I am not sure
how I would fill in the details, nor is this my concern now. But
I tend to believe it could possibly be done with compiler-generated
code in C++0x (thanks to specialization, templates are also Turing-
complete in that language!).
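
The Turing-completeness claim is usually illustrated with compile-time
recursion terminated by a specialization; the classic sketch:

```cpp
// Classic compile-time computation: the primary template recurses,
// and the specialization for 0 is the base case that stops it.
template <unsigned N>
struct Factorial {
    static const unsigned long value = N * Factorial<N - 1>::value;
};

template <>
struct Factorial<0> {
    static const unsigned long value = 1;  // base case via specialization
};
```

Factorial<5>::value is evaluated entirely by the compiler; no code runs at
runtime.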

> Allow unconstrained matrices as well. Generate aggregates
> for each matrix type. Provide matrix slicing operations for all types of
> matrices (e.g. column from matrix, row from matrix, submatrix from matrix
> etc). Have fun!

I guess I will! Really, this is fun. Just kidding, but I do not
actually see that amount of trouble if you could write a "slice
indexing" class that could be passed to some slicing operator (this
may not even be needed at all). It would be the proverbial "extra
layer of indirection" required to make the job easier, but let us not
delve into such details.

I really appreciate your interest and would be pleased to go on with
our conversation, even though it has gone off-topic.

Elias Salomão Helou Neto



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-18 14:00                         ` Elias Salomão Helou Neto
@ 2010-08-18 16:38                           ` Dmitry A. Kazakov
  2010-08-19 18:52                             ` Elias Salomão Helou Neto
  0 siblings, 1 reply; 94+ messages in thread
From: Dmitry A. Kazakov @ 2010-08-18 16:38 UTC (permalink / raw)


On Wed, 18 Aug 2010 07:00:56 -0700 (PDT), Elias Salomão Helou Neto wrote:

> but I did not feel compelled to learn a
> whole new language only because of them.

Even if the language is so good? (:-))

> Furthermore, C++ can use so
> many numeric libraries which I am not sure would be available in Ada
> that I tend to stay here in my comfort zone.

You can do it in Ada, no problem actually. Interfacing to C is easy in Ada.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-18 16:38                           ` Dmitry A. Kazakov
@ 2010-08-19 18:52                             ` Elias Salomão Helou Neto
  2010-08-19 19:48                               ` Dmitry A. Kazakov
  0 siblings, 1 reply; 94+ messages in thread
From: Elias Salomão Helou Neto @ 2010-08-19 18:52 UTC (permalink / raw)


On Aug 18, 1:38 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
wrote:
> On Wed, 18 Aug 2010 07:00:56 -0700 (PDT), Elias Salomão Helou Neto wrote:
>
> > but I did not feel compelled to learn a
> > whole new language only because of them.
>
> Even if the language is so good? (:-))

Ok, ok, I just don't think I'll tackle the task right now, but where
do you recommend I start from?

> > Furthermore, C++ can use so
> > many numeric libraries which I am not sure would be available in Ada
> > that I tend to stay here in my comfort zone.
>
> You can do it in Ada, no problem actually. Interfacing to C is easy in Ada.

This is really important to make me feel more confident.

Elias.



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-19 18:52                             ` Elias Salomão Helou Neto
@ 2010-08-19 19:48                               ` Dmitry A. Kazakov
  0 siblings, 0 replies; 94+ messages in thread
From: Dmitry A. Kazakov @ 2010-08-19 19:48 UTC (permalink / raw)


On Thu, 19 Aug 2010 11:52:08 -0700 (PDT), Elias Salomão Helou Neto wrote:

> but where
> do you recommend I start from?

Take something simple.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Efficiency of code generated by Ada compilers
  2010-08-10 22:26     ` Randy Brukardt
@ 2010-08-20  7:22       ` Yannick Duchêne (Hibou57)
  0 siblings, 0 replies; 94+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-08-20  7:22 UTC (permalink / raw)


Le Wed, 11 Aug 2010 00:26:45 +0200, Randy Brukardt <randy@rrsoftware.com>  
a écrit:

> "Elias Salomão Helou Neto" <eshneto@gmail.com> wrote in message
> news:8349c981-4dca-49dc-9189-8ea726234de3@f42g2000yqn.googlegroups.com...
> ...
>> Yes, I know that. I am, however, writing code within that 1% of
>> applications that would be tremendously affected if there is no way to
>> access arrays with no range checking. So I am asking very precisely:
>> does Ada allow me to do non range-checked access to arrays?

> So I'm suggesting that you try to avoid premature optimization. I can
> believe that there will be cases where you'll need to suppress range  
> checks, but I'd also suggest that those will be far rarer than you are  
> thinking.

Sorry for the non-French readers; however, the page I have read makes me think  
about this topic.

It is there, page 28:
http://www.infres.enst.fr/~pautet/sar/fset/LangageAdaTempsReel.pdf

Pragma Suppress is quoted in this document, ... which is a document about  
real-time applications. Pragma Suppress may indeed, most of the time, be  
better suited to that area. At least, this may support the idea that this  
pragma was not primarily intended to save 0.001s of execution time in common  
applications.



^ permalink raw reply	[flat|nested] 94+ messages in thread

end of thread, other threads:[~2010-08-20  7:22 UTC | newest]

Thread overview: 94+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-08-06 20:21 Efficiency of code generated by Ada compilers Elias Salomão Helou Neto
2010-08-06 20:24 ` (see below)
2010-08-06 23:14 ` Shark8
2010-08-07  7:53 ` Dmitry A. Kazakov
2010-08-10 13:52   ` Elias Salomão Helou Neto
2010-08-10 14:24     ` Shark8
2010-08-10 14:28     ` Shark8
2010-08-10 15:01     ` Robert A Duff
2010-08-10 15:14       ` Yannick Duchêne (Hibou57)
2010-08-10 18:32         ` Robert A Duff
2010-08-10 15:10     ` Georg Bauhaus
2010-08-10 15:32     ` Dmitry A. Kazakov
2010-08-10 22:26     ` Randy Brukardt
2010-08-20  7:22       ` Yannick Duchêne (Hibou57)
2010-08-08 14:03 ` Gene
2010-08-08 15:49   ` Robert A Duff
2010-08-08 17:13     ` Charles H. Sampson
2010-08-08 18:11       ` Dmitry A. Kazakov
2010-08-08 20:51       ` Robert A Duff
2010-08-08 22:10         ` (see below)
2010-08-08 22:22           ` Robert A Duff
2010-08-09  4:46         ` Yannick Duchêne (Hibou57)
2010-08-09  5:52         ` J-P. Rosen
2010-08-09 13:28           ` Robert A Duff
2010-08-09 18:42             ` Jeffrey Carter
2010-08-09 19:05               ` Robert A Duff
2010-08-10 10:00                 ` Jacob Sparre Andersen
2010-08-10 12:39                   ` Robert A Duff
2010-08-09 19:33             ` Yannick Duchêne (Hibou57)
2010-08-09 21:42               ` Robert A Duff
2010-08-10 12:26         ` Phil Clayton
2010-08-10 12:57           ` Yannick Duchêne (Hibou57)
2010-08-10 14:03             ` Elias Salomão Helou Neto
2010-08-10 14:27               ` Yannick Duchêne (Hibou57)
2010-08-10 22:50                 ` anon
2010-08-10 23:28                   ` Yannick Duchêne (Hibou57)
2010-08-10 23:38                     ` Yannick Duchêne (Hibou57)
2010-08-11  7:06                       ` Niklas Holsti
2010-08-11 11:58                         ` anon
2010-08-11 12:37                           ` Georg Bauhaus
2010-08-11 13:13                         ` Robert A Duff
2010-08-11 23:49                           ` Randy Brukardt
2010-08-10 14:31               ` Shark8
2010-08-11  7:14               ` Charles H. Sampson
2010-08-11  6:42         ` Charles H. Sampson
2010-08-08 22:35     ` tmoran
2010-08-09 13:53       ` Robert A Duff
2010-08-09 17:59         ` tmoran
2010-08-09 19:36           ` Yannick Duchêne (Hibou57)
2010-08-09 21:38             ` Robert A Duff
2010-08-11  7:42       ` Charles H. Sampson
2010-08-11 13:38         ` Robert A Duff
2010-08-12  7:48           ` Charles H. Sampson
2010-08-12  8:08             ` Ludovic Brenta
2010-08-12 17:10               ` Charles H. Sampson
2010-08-12 18:06                 ` Jeffrey Carter
2010-08-11 18:49         ` Simon Wright
2010-08-12  7:54           ` Charles H. Sampson
2010-08-12  8:36             ` Dmitry A. Kazakov
2010-08-12 11:04             ` Brian Drummond
2010-08-12 19:23             ` Simon Wright
2010-08-12 20:21               ` (see below)
2010-08-13 15:08                 ` Elias Salomão Helou Neto
2010-08-13 15:10                   ` Elias Salomão Helou Neto
2010-08-13 18:01                     ` Georg Bauhaus
2010-08-13 19:52                       ` Robert A Duff
2010-08-14  9:44                         ` Georg Bauhaus
2010-08-13 20:22                     ` Robert A Duff
2010-08-14  1:34                       ` Randy Brukardt
2010-08-14  7:18                         ` anon
2010-08-13 21:57                     ` Jeffrey Carter
2010-08-13 22:37                       ` Yannick Duchêne (Hibou57)
2010-08-13 22:43                         ` Yannick Duchêne (Hibou57)
2010-08-13 23:29                         ` tmoran
2010-08-14  0:02                           ` Yannick Duchêne (Hibou57)
2010-08-14  0:16                             ` (see below)
2010-08-14 10:47                             ` Brian Drummond
2010-08-14 13:58                               ` Yannick Duchêne (Hibou57)
2010-08-15  0:23                                 ` Brian Drummond
2010-08-14 14:51                               ` (see below)
2010-08-15  0:58                                 ` Brian Drummond
2010-08-15  1:58                                   ` (see below)
2010-08-15 10:31                                     ` Brian Drummond
2010-08-13 15:37                   ` Dmitry A. Kazakov
2010-08-16 13:29                     ` Elias Salomão Helou Neto
2010-08-16 14:09                       ` Dmitry A. Kazakov
2010-08-18 14:00                         ` Elias Salomão Helou Neto
2010-08-18 16:38                           ` Dmitry A. Kazakov
2010-08-19 18:52                             ` Elias Salomão Helou Neto
2010-08-19 19:48                               ` Dmitry A. Kazakov
2010-08-13 16:58                   ` (see below)
2010-08-14 16:13                   ` Charles H. Sampson
2010-08-16 13:45                     ` Elias Salomão Helou Neto
2010-08-15 12:32 ` Florian Weimer
