comp.lang.ada
* Overhead for type conversions?
@ 2003-07-25  1:01 Bobby D. Bryant
  2003-07-25  2:36 ` Robert I. Eachus
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: Bobby D. Bryant @ 2003-07-25  1:01 UTC (permalink / raw)



Given these declarations -

   type f1 is new float;
   type f2 is new float;

   function "*"( left : in f1; right : in f2 ) return f1 is
   begin
      return( left * f1( right ) );
   end "*";

and given that run-time range checks are enabled, should I expect my
compiler to execute range checking code for each f1*f2 multiplication,
or will the compiler conclude from the definitions that the two types
are always convertible without error regardless of the specific value,
and thus not generate any code for a range check?

(In practice the compiler is GNAT.)

Thanks,
-- 
Bobby Bryant
Austin, Texas





* Re: Overhead for type conversions?
  2003-07-25  1:01 Overhead for type conversions? Bobby D. Bryant
@ 2003-07-25  2:36 ` Robert I. Eachus
  2003-07-25  7:12   ` Bobby D. Bryant
  2003-07-25 15:34 ` Matthew Heaney
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 9+ messages in thread
From: Robert I. Eachus @ 2003-07-25  2:36 UTC (permalink / raw)


Bobby D. Bryant wrote:

> and given that run-time range checks are enabled, should I expect my
> compiler to execute range checking code for each f1*f2 multiplication,
> or will the compiler conclude from the definitions that the two types
> are always convertible without error regardless of the specific value,
> and thus not generate any code for a range check?
> 
> (In practice the compiler is GNAT.)

GNAT chooses the IEEE representation for Float, with non-signalling NaNs.  In 
other words there are operations which will generate +infinity or 
-infinity but no constraint checks.

Of course this means that if you really want to force a Constraint_Error 
you have to assign a Float to a variable with a constraint.  To 
reference a huge amount of sound and fury going on right now, something 
like:

   BH := Short_Integer(Some_Float_Value);

will do just fine. (But then you have to decide how the exception should 
be handled. ;-)
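
A minimal sketch of both behaviours, assuming an IEEE target where
Float'Machine_Overflows is False (as it is with GNAT); the procedure and
variable names here are invented:

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Overflow_Demo is
      X : Float := Float'Last;
      Y : Float;
      I : Short_Integer;
   begin
      Y := X * 2.0;             -- no constraint check: quietly yields +infinity
      Put_Line (Float'Image (Y));
      I := Short_Integer (Y);   -- conversion to a constrained type: raises
      Put_Line (Short_Integer'Image (I));
   exception
      when Constraint_Error =>
         Put_Line ("Constraint_Error from the integer conversion");
   end Overflow_Demo;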

-- 

                                                        Robert I. Eachus

"In an ally, considerations of house, clan, planet, race are 
insignificant beside two prime questions, which are: 1. Can he shoot? 2. 
Will he aim at your enemy?" -- from the Liaden novels by Sharon Lee and 
Steve Miller.





* Re: Overhead for type conversions?
  2003-07-25  2:36 ` Robert I. Eachus
@ 2003-07-25  7:12   ` Bobby D. Bryant
  0 siblings, 0 replies; 9+ messages in thread
From: Bobby D. Bryant @ 2003-07-25  7:12 UTC (permalink / raw)


On Fri, 25 Jul 2003 02:36:08 +0000, Robert I. Eachus wrote:

> Bobby D. Bryant wrote:
> 
>> and given that run-time range checks are enabled, should I expect my
>> compiler to execute range checking code for each f1*f2 multiplication,
>> or will the compiler conclude from the definitions that the two types
>> are always convertible without error regardless of the specific value,
>> and thus not generate any code for a range check?
>> 
>> (In practice the compiler is GNAT.)
> 
> GNAT chooses the IEEE representation for Float, with non-signalling NaNs.  In
> other words there are operations which will generate +infinity or
> -infinity but no constraint checks.
> 
> Of course this means that if you really want to force a Constraint_Error
> you have to assign a Float to a variable with a constraint.

There's not really any possibility of error, since any fp value is
potentially legitimate for either type.  In fact I have traditionally
gotten by with simply using float for both kinds of variable; I was just
thinking about separating the types for logical clarity and to make
composite structures based on the two types mutually incompatible.  Also I
would like to get away from using float directly, so that I'll have a
single point of change if I decide to start using long floats later on.
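
For what it's worth, a minimal sketch of that single point of change (the
name Real is invented here):

   type Real is new Float;   -- later: change Float to Long_Float, in one place
   type f1 is new Real;
   type f2 is new Real;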

The only reason I'm asking about type conversion overhead in this case is
because my application already does many billions of multiplications over
the course of a run, and within a few years that number might rise to
trillions; and since I can live with float for both types I don't want to
saddle the application with a lot of avoidable overhead.

Thanks,
-- 
Bobby Bryant
Austin, Texas





* Re: Overhead for type conversions?
  2003-07-25  1:01 Overhead for type conversions? Bobby D. Bryant
  2003-07-25  2:36 ` Robert I. Eachus
@ 2003-07-25 15:34 ` Matthew Heaney
  2003-07-25 18:28 ` Randy Brukardt
  2003-07-26 15:54 ` Nick Roberts
  3 siblings, 0 replies; 9+ messages in thread
From: Matthew Heaney @ 2003-07-25 15:34 UTC (permalink / raw)


"Bobby D. Bryant" <bdbryant@mail.utexas.edu> wrote in message news:<pan.2003.07.25.01.01.43.5699@mail.utexas.edu>...
> Given these declarations -
> 
>    type f1 is new float;
>    type f2 is new float;
> 
>    function "*"( left : in f1; right : in f2 ) return f1 is
>    begin
>       return( left * f1( right ) );
>    end "*";
> 
> and given that run-time range checks are enabled, should I expect my
> compiler to execute range checking code for each f1*f2 multiplication,
> or will the compiler conclude from the definitions that the two types
> are always convertible without error regardless of the specific value,
> and thus not generate any code for a range check?

Floating point types don't have constraint checks, unless the type is
declared with a range constraint.

The declaration

  type FT is digits 6;

doesn't have any range constraint, and I think that means that there
are no constraint checks.  I'm not sure about overflow checks.

However, the type declaration

   type FT is digits 6 range X .. Y;

does have a constraint, and so constraint checks apply.

This was one of the changes from Ada83 to Ada95: part of the ARG's
charter was to improve efficiency.

So to answer your question, I don't think there is any overhead for
the type conversion.  There are no constraint checks, because the type
doesn't have any constraints.  If the type had had constraints, then
constraint checks would apply.
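
A minimal sketch of the difference (the type and procedure names are
invented):

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Check_Demo is
      type Unchecked_FT is digits 6;                  -- no constraint: no range checks
      type Checked_FT is digits 6 range 0.0 .. 1.0;   -- constrained: range checks apply
      U : Unchecked_FT := 2.0;
      C : Checked_FT;
   begin
      C := Checked_FT (U);   -- raises Constraint_Error: 2.0 not in 0.0 .. 1.0
      Put_Line (Checked_FT'Image (C));
   exception
      when Constraint_Error =>
         Put_Line ("range check fired");
   end Check_Demo;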

Bob Duff answered a similar question about the floating point base
types, on CLA during 1997/10/03:

http://groups.google.com/groups?q=+%27base+group:comp.lang.ada+author:duff&hl=en&lr=&ie=UTF-8&selm=EHHp4E.AC4%40world.std.com&rnum=4

See also:

http://groups.google.com/groups?q=+%27base+group:comp.lang.ada+author:duff&hl=en&lr=&ie=UTF-8&selm=E4oE3q.1tq%40world.std.com&rnum=1

-Matt




* Re: Overhead for type conversions?
  2003-07-25  1:01 Overhead for type conversions? Bobby D. Bryant
  2003-07-25  2:36 ` Robert I. Eachus
  2003-07-25 15:34 ` Matthew Heaney
@ 2003-07-25 18:28 ` Randy Brukardt
  2003-07-26 15:54 ` Nick Roberts
  3 siblings, 0 replies; 9+ messages in thread
From: Randy Brukardt @ 2003-07-25 18:28 UTC (permalink / raw)


"Bobby D. Bryant" <bdbryant@mail.utexas.edu> wrote in message
news:pan.2003.07.25.01.01.43.5699@mail.utexas.edu...
>
> Given these declarations -
>
>    type f1 is new float;
>    type f2 is new float;
>
>    function "*"( left : in f1; right : in f2 ) return f1 is
>    begin
>       return( left * f1( right ) );
>    end "*";

Matt answered your original question correctly, I think.

But I wanted to point out that you are likely to get into trouble with
expressions containing literals if you actually have such declarations in
your program.

    V1, V2 : F1;

    V2 := V1 * 12.0; -- Ambiguous (12.0 could have type F1 or F2).
    V2 := V1 * F1'(12.0); -- OK.

So you might have to qualify essentially every literal used in such code.

                        Randy.








* Re: Overhead for type conversions?
  2003-07-25  1:01 Overhead for type conversions? Bobby D. Bryant
                   ` (2 preceding siblings ...)
  2003-07-25 18:28 ` Randy Brukardt
@ 2003-07-26 15:54 ` Nick Roberts
  2003-07-26 16:01   ` Warren W. Gay VE3WWG
  3 siblings, 1 reply; 9+ messages in thread
From: Nick Roberts @ 2003-07-26 15:54 UTC (permalink / raw)


"Bobby D. Bryant" <bdbryant@mail.utexas.edu> wrote in message
news:pan.2003.07.25.01.01.43.5699@mail.utexas.edu...

> Given these declarations -
>
>    type f1 is new float;
>    type f2 is new float;
>
>    function "*"( left : in f1; right : in f2 ) return f1 is
>    begin
>       return( left * f1( right ) );
>    end "*";
>
> and given that run-time range checks are enabled, should I expect my
> compiler to execute range checking code for each f1*f2 multiplication,
> or will the compiler conclude from the definitions that the two types
> are always convertible without error regardless of the specific value,
> and thus not generate any code for a range check?

The type Standard.Float is unconstrained (RM95 3.5.7(12)), so the types f1
and f2 derived from it will also be unconstrained (RM95 3.4(6)). Thus, there
will be no range checks (there may be other checks).

I would also suggest that a function like this should be declared inline,
since it is likely that the eliminated call and return code would be bigger
(as well as much slower) than the intrinsic code for the multiplication
anyway.

Furthermore, should f1 and/or f2 become constrained in the future, such
inlining could give an optimising compiler the opportunity to eliminate
range checks (and to perform other optimisations).
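
In the spec that would look something like this (with GNAT, as far as I
know, inlining across compilation units also requires the -gnatn switch):

   function "*"( left : in f1; right : in f2 ) return f1;
   pragma Inline ("*");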

> (In practice the compiler is GNAT.)

I can't help you with the specifics of GNAT; sorry.

--
Nick Roberts
Jabber: debater@charente.de [ICQ: 159718630]







* Re: Overhead for type conversions?
  2003-07-26 15:54 ` Nick Roberts
@ 2003-07-26 16:01   ` Warren W. Gay VE3WWG
  2003-07-26 23:39     ` Bobby D. Bryant
  0 siblings, 1 reply; 9+ messages in thread
From: Warren W. Gay VE3WWG @ 2003-07-26 16:01 UTC (permalink / raw)


"Nick Roberts" <nickroberts@blueyonder.co.uk> wrote in message news:bfu83g$ibodl$1@ID-25716.news.uni-berlin.de...
> "Bobby D. Bryant" <bdbryant@mail.utexas.edu> wrote in message
> news:pan.2003.07.25.01.01.43.5699@mail.utexas.edu...
>
> > Given these declarations -
> >
> >    type f1 is new float;
> >    type f2 is new float;
> >
> >    function "*"( left : in f1; right : in f2 ) return f1 is
> >    begin
> >       return( left * f1( right ) );
> >    end "*";
> >
...
> I would also suggest that a function like this should be declared inline,
> since it is likely that the eliminated call and return code would be bigger
> (as well as much slower) than the intrinsic code for the multiplication
> anyway.

I once suggested something like this to Simon Wright regarding
some procedure/function calls within the Booch Components. He
did some profiling, and discovered that inlining that I suggested
actually worsened the performance slightly under Linux. I don't
know if he did more investigations along this line, but for
the examples that I suggested, it actually made things worse
(due to instruction caching reasons I expect).

So inlining should probably be tested before the conclusion is
made. Furthermore, inlining on one platform may hurt another
platform's performance (which is a difficult portability issue
to deal with in the source code without a preprocessor).

Perhaps a platform sensitive pragma Inline is called for? ;-)

-- 
Warren W. Gay VE3WWG
http://home.cogeco.ca/~ve3wwg






* Re: Overhead for type conversions?
  2003-07-26 16:01   ` Warren W. Gay VE3WWG
@ 2003-07-26 23:39     ` Bobby D. Bryant
  2003-07-27 13:41       ` Loop Optimisation [was: Overhead for type conversions?] Nick Roberts
  0 siblings, 1 reply; 9+ messages in thread
From: Bobby D. Bryant @ 2003-07-26 23:39 UTC (permalink / raw)


On Sat, 26 Jul 2003 12:01:43 -0400, Warren W. Gay VE3WWG wrote:

> "Nick Roberts" <nickroberts@blueyonder.co.uk> wrote in message news:bfu83g$ibodl$1@ID-25716.news.uni-berlin.de...
>> "Bobby D. Bryant" <bdbryant@mail.utexas.edu> wrote in message
>> news:pan.2003.07.25.01.01.43.5699@mail.utexas.edu...
>>
>> > Given these declarations -
>> >
>> >    type f1 is new float;
>> >    type f2 is new float;
>> >
>> >    function "*"( left : in f1; right : in f2 ) return f1 is
>> >    begin
>> >       return( left * f1( right ) );
>> >    end "*";
>> >
> ...
>> I would also suggest that a function like this should be declared inline,
>> since it is likely that the eliminated call and return code would be bigger
>> (as well as much slower) than the intrinsic code for the multiplication
>> anyway.

I actually used the inline pragma in the test program I wrote before
posting, but decided to delete it from the post in order to avoid raising
an issue that I thought irrelevant to the basic question.  (But of course
it isn't really irrelevant, since the question of inlining wouldn't be
raised at all if I just used floats instead of the two derived types.)


> I once suggested something like this to Simon Wright regarding
> some procedure/function calls within the Booch Components. He
> did some profiling, and discovered that the inlining that I suggested
> actually worsened the performance slightly under Linux. I don't
> know if he did more investigations along this line, but for
> the examples that I suggested, it actually made things worse
> (due to instruction caching reasons I expect).
> 
> So inlining should probably be tested before the conclusion is
> made. [...]

Yes, I have profiled some code a time or two and discovered that some
"obvious" inlining didn't help, and in fact actually appeared to hurt a
little in one case.  So I now assert that the desirability of inlining is
an empirical issue, though in fact I still make the decision
heuristically more often than empirically.

-- 
Bobby Bryant
Austin, Texas





* Loop Optimisation [was: Overhead for type conversions?]
  2003-07-26 23:39     ` Bobby D. Bryant
@ 2003-07-27 13:41       ` Nick Roberts
  0 siblings, 0 replies; 9+ messages in thread
From: Nick Roberts @ 2003-07-27 13:41 UTC (permalink / raw)


"Bobby D. Bryant" <bdbryant@mail.utexas.edu> wrote in message
news:pan.2003.07.26.23.39.45.556296@mail.utexas.edu...
> On Sat, 26 Jul 2003 12:01:43 -0400, Warren W. Gay VE3WWG wrote:
> ...
> > I once suggested something like this to Simon Wright regarding
> > some procedure/function calls within the Booch Components. He
> > did some profiling, and discovered that the inlining that I suggested
> > actually worsened the performance slightly under Linux. I don't
> > know if he did more investigations along this line, but for
> > the examples that I suggested, it actually made things worse
> > (due to instruction caching reasons I expect).
> >
> > So inlining should probably be tested before the conclusion is
> > made. [...]

In order to illustrate this perhaps bizarre-seeming phenomenon, consider
the following loop:

   loop
      ...
      if C then
         P(...);
      else
         Q(...);
      end if;
      ...
   end loop;

Typical modern processors have a very limited (first-level) instruction
cache (a few kilobytes), with limited (or no) associativity. Let's assume
that the code for this loop, where the call (in the Ada sense) of procedure
P is compiled to a call instruction (in the machine-code sense), all fits
into the instruction cache. Let's also assume that P will tend to be called
less frequently than Q.

If procedure P is inlined, the call to procedure P is replaced by the code
representing P's body; this could cause the size of the loop code to
increase so much that the overall loop becomes bigger than the instruction
cache. In this case, the slowing down caused by having to retrieve some of
the instructions in the loop from the second-level cache (or main memory)
on each iteration of the loop could outweigh the speeding up achieved by
eliminating the instructions associated with the machine call to and return
from P.

However, look at a possible skeleton assembly expansion of the loop:

L1:
   ...
   [test C, truth in flag x]
   JNx L2
   [call or expand P]
   JMP L3
L2:
   [Q]
L3:
  ...
L4:
  ;end of loop

JMP means 'jump'. JNx means 'jump if not x'. A clever compiler can (and
will!) change this expansion to the following:

L1:
   ...
   [test C, truth in flag x]
   Jx L2
   [Q]
L3:
  ...
L4:
  ;end of loop
   ...
L2:
   [call or expand P]
   JMP L3

In other words, it pushes the code associated with the less frequently
executed branch of the 'if' statement outside the loop, precisely so as to
avoid the cache overflow problem.

This is a classic form of loop optimisation in the literature. I'd be very
interested to know how many Ada compilers are capable of performing it in
practice.

The much-touted modern technique for determining which branch of an
if-then-else is least often executed is profiling. Personally, my preference
would be for the compiler to assume the 'else' part is more often executed
by default, and to provide a pragma to instruct the compiler otherwise.

I might propose a 'straw man' for such a pragma as follows:

----------

Syntax

   pragma Principal_Path;

Static Semantics

The effect defined by this standard applies only when at most one pragma
Principal_Path occurs within an if statement which contains more than one
sequence of statements. In all other cases, the effect of pragma
Principal_Path is implementation-defined.

Pragma Principal_Path advises the implementation that optimization is to be
based on the assumption that the sequence of statements in which it occurs
will be executed more frequently than any of the other sequences of
statements in the same if statement.

In the absence of any pragma Principal_Path within an if statement which
contains more than one sequence of statements, the implementation should
assume that the last sequence of statements in the if statement will be
executed more frequently than any of the other sequences of statements.

An implementation should always assume, for any if statement, that any
sequence of statements within the if statement will be executed less
frequently than the if statement itself.

Implementation Permissions

An implementation may ignore pragma Principal_Path; in particular, it may
override the effect of this pragma (for example, on the basis of a profiling
analysis).

An implementation may provide more sophisticated ways for the programmer to
express relative path importance -- for example by ranking paths
numerically -- possibly by defining the effect of pragma Principal_Path in
places where this standard does not, or by defining parameters for pragma
Principal_Path, or by some other means. Such extensions should be
documented.

----------
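
A usage sketch (hypothetical, of course, since no compiler implements this
pragma; the names are invented). Here the pragma overrides the default
assumption that the last branch is the most frequently executed:

   if Cache_Hit then
      pragma Principal_Path;   -- the common case, despite being the first branch
      Use_Cached_Value(...);
   else
      Recompute(...);
   end if;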

I'd also suggest a pragma for the parallelisation of 'for' loops. Maybe in
another post.

--
Nick Roberts
Jabber: debater@charente.de [ICQ: 159718630]







end of thread

Thread overview: 9+ messages
2003-07-25  1:01 Overhead for type conversions? Bobby D. Bryant
2003-07-25  2:36 ` Robert I. Eachus
2003-07-25  7:12   ` Bobby D. Bryant
2003-07-25 15:34 ` Matthew Heaney
2003-07-25 18:28 ` Randy Brukardt
2003-07-26 15:54 ` Nick Roberts
2003-07-26 16:01   ` Warren W. Gay VE3WWG
2003-07-26 23:39     ` Bobby D. Bryant
2003-07-27 13:41       ` Loop Optimisation [was: Overhead for type conversions?] Nick Roberts
