comp.lang.ada
* Can compilers do this?
@ 1996-02-22  0:00 BWBurnsed
  1996-02-23  0:00 ` Robert Dewar
                   ` (5 more replies)
  0 siblings, 6 replies; 12+ messages in thread
From: BWBurnsed @ 1996-02-22  0:00 UTC (permalink / raw)


I came across some very strange-looking code that someone else wrote,
long, long ago, and (apparently) in a universe far, far away. But before I
make too many critical comments, I want to be sure I'm not missing
something.

Repeatedly in this code (in many files), there are places where a floating
point variable is tested to see if it is negative. However, the way it is
done is:

     if  X * abs(X)  <  0.0  then  ...

Is there (or was there ever) some pathological anomaly about floating
point implementations that would make a conversion (abs) and floating
point multiply preferable to testing a sign bit? Can, and will, optimizing
compilers recognize the real test desired in such constructs, i.e. reduce
it to a sign bit test?

Also, suppose Y and X are floating point variables, and M and B are 
CONSTANT floating point variables (not named numbers) initialized to
1.0  and  0.0  respectively. If one writes

     Y := M * X  +  B ;

can (and will) any compiler reduce this to

     Y := X ;

If not, could it do so if M and B were named numbers?
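
For reference, a minimal sketch of the two kinds of declaration I mean
(the names here are made up):

     M_Const : constant Float := 1.0;   -- constant object: has a type
     B_Const : constant Float := 0.0;
     M_Named : constant := 1.0;         -- named number: universal_real
     B_Named : constant := 0.0;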

Thanks,
BwB





* Re: Can compilers do this?
  1996-02-22  0:00 Can compilers do this? BWBurnsed
  1996-02-23  0:00 ` Robert Dewar
@ 1996-02-23  0:00 ` Mark A Biggar
  1996-02-24  0:00   ` Robert A Duff
  1996-02-25  0:00   ` Robert Dewar
  1996-02-23  0:00 ` Stuart Palin
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 12+ messages in thread
From: Mark A Biggar @ 1996-02-23  0:00 UTC (permalink / raw)


In article <4gjd6g$mfq@newsbf02.news.aol.com> bwburnsed@aol.com (BWBurnsed) writes:
>I came across some very strange looking code that someone else wrote,
>long, long ago, and (apparently) in a universe far, far away. But before I
>make too many critical comments, I want to be sure I'm not missing
>something.
>Repeatedly in this code (in many files), there are places where a floating
>point variable is tested to see if it is negative. However, the way it is
>done is:
>     if  X * abs(X)  <  0.0  then  ...
>Is there (or was there ever) some pathological anomaly about floating
>point implementations that would make a conversion (abs) and floating
>point multiply preferable to testing a sign bit? Can, and will, optimizing
>compilers recognize the real test desired in such constructs, i.e. reduce
>it to a sign bit test?

My guess is that on that machine signed zeros exist and that -0.0 < 0.0
and that -0.0 * 0.0 => 0.0, thus the above gives the right answer when
given a -0.0.
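
For concreteness, a minimal Ada sketch of the two tests (the procedure
and names are mine; on a conforming compiler with IEEE hardware both
lines should print FALSE for a minus zero, the guess being that the
original machine got the plain comparison wrong):

     with Ada.Text_IO; use Ada.Text_IO;

     procedure Zero_Demo is
        Neg_One : Float := -1.0;
        X       : Float := Neg_One * 0.0;  -- -0.0 on IEEE hardware
     begin
        Put_Line (Boolean'Image (X < 0.0));          -- -0.0 must equal +0.0
        Put_Line (Boolean'Image (X * abs X < 0.0));  -- likewise FALSE
     end Zero_Demo;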

>Also, suppose Y and X are floating point variables, and M and B are 
>CONSTANT floating point variables (not named numbers) initialized to
>1.0  and  0.0  respectively. If one writes
>     Y := M * X  +  B ;
>can (and will) any compiler reduce this to
>     Y := X ;
>If not, could it do so if M and B were named numbers?

Yes, a lot of compiler peephole optimizers recognize "add immediate 0" and
"mul immediate 1" instructions as things that can be optimized away.
Other compilers do this as part of the constant folding algorithm
(in Ada terms, the "static expression evaluator") and never even generate
the code.  In most cases, it should not matter whether the constants are
universal or not.

--
Mark Biggar
mab@wdl.loral.com






* Re: Can compilers do this?
  1996-02-22  0:00 Can compilers do this? BWBurnsed
                   ` (3 preceding siblings ...)
  1996-02-23  0:00 ` Robert Dewar
@ 1996-02-23  0:00 ` Cordes MJ
  1996-02-26  0:00 ` Robert I. Eachus
  5 siblings, 0 replies; 12+ messages in thread
From: Cordes MJ @ 1996-02-23  0:00 UTC (permalink / raw)


BWBurnsed (bwburnsed@aol.com) wrote:

   <snip>

: Also, suppose Y and X are floating point variables, and M and B are 
: CONSTANT floating point variables (not named numbers) initialized to
: 1.0  and  0.0  respectively. If one writes

:      Y := M * X  +  B ;

: can (and will) any compiler reduce this to

:      Y := X ;

I can't say that all compilers will perform this optimization, but
the Tartan VMS/1750 Ada compiler does (at its standard optimization
level).

We have seen cases where some compilers will perform the optimization
if the statement is written as:

     Y := (M * X) + B;

but will not perform it when the statement is written the way you
originally posted it.

--
---------------------------------------------------------------------
Michael J Cordes
Phone: (817) 935-3823
Fax:   (817) 935-3800
EMail: CordesMJ@lfwc.lockheed.com
---------------------------------------------------------------------





* Re: Can compilers do this?
  1996-02-22  0:00 Can compilers do this? BWBurnsed
                   ` (2 preceding siblings ...)
  1996-02-23  0:00 ` Stuart Palin
@ 1996-02-23  0:00 ` Robert Dewar
  1996-02-23  0:00 ` Cordes MJ
  1996-02-26  0:00 ` Robert I. Eachus
  5 siblings, 0 replies; 12+ messages in thread
From: Robert Dewar @ 1996-02-23  0:00 UTC (permalink / raw)


To follow up BwB's question

"Also, suppose Y and X are floating point variables, and M and B are
CONSTANT floating point variables (not named numbers) initialized to
1.0  and  0.0  respectively. If one writes

     Y := M * X  +  B ;

can (and will) any compiler reduce this to

     Y := X ;"

They do not even have to be CONSTANT floating point variables; the
compiler may well be able to figure out the current values of
variables without them being CONSTANT.

If indeed they are constant, I cannot imagine any compiler that would
NOT do the optimization you suggest. Any compiler that did not would be
broken, if you ask me!






* Re: Can compilers do this?
  1996-02-22  0:00 Can compilers do this? BWBurnsed
@ 1996-02-23  0:00 ` Robert Dewar
  1996-02-23  0:00 ` Mark A Biggar
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: Robert Dewar @ 1996-02-23  0:00 UTC (permalink / raw)


BwB says

"Also, suppose Y and X are floating point variables, and M and B are
CONSTANT floating point variables (not named numbers) initialized to
1.0  and  0.0  respectively. If one writes

     Y := M * X  +  B ;

can (and will) any compiler reduce this to

     Y := X ;"


Sure, this is just standard constant propagation, a very common
optimization.






* Re: Can compilers do this?
  1996-02-24  0:00   ` Robert A Duff
@ 1996-02-23  0:00     ` Robert Dewar
  0 siblings, 0 replies; 12+ messages in thread
From: Robert Dewar @ 1996-02-23  0:00 UTC (permalink / raw)


Robert Duff said, answering a question

">My guess is that on that machine signed zeros exist and that -0.0 < 0.0
>and that -0.0 * 0.0 => 0.0, thus the above gives the right answer when
>given a -0.0.

Ada 83 doesn't say anything about signed zeros."

Yes, true, but even in Ada 83 if a compiler 

(a) generated -0.0 values as the result of executing correct Ada code

and

(b) gave a result of false for -0.0 = +0.0, or true for -0.0 < +0.0

then this would plainly be a (significant) bug in the compiler.






* Re: Can compilers do this?
  1996-02-22  0:00 Can compilers do this? BWBurnsed
  1996-02-23  0:00 ` Robert Dewar
  1996-02-23  0:00 ` Mark A Biggar
@ 1996-02-23  0:00 ` Stuart Palin
  1996-02-23  0:00 ` Robert Dewar
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: Stuart Palin @ 1996-02-23  0:00 UTC (permalink / raw)


bwburnsed@aol.com (BWBurnsed) wrote:

[snip]
>
>Repeatedly in this code (in many files), there are places where a floating
>point variable is tested to see if it is negative. However, the way it is
>done is:
>
>     if  X * abs(X)  <  0.0  then  ...
>
>Is there (or was there ever) some pathological anomaly about floating
>point implementations that would make a conversion (abs) and floating
>point multiply

A quick guess on this is a concern over negative zero in the IEEE
standard for FP numbers, though using abs and * seems overkill, since
I would have thought  if (X /= 0.0) and then (X < 0.0)  would have
been better.

Stuart Palin
stuart.palin@gecm.com






* Re: Can compilers do this?
  1996-02-23  0:00 ` Mark A Biggar
@ 1996-02-24  0:00   ` Robert A Duff
  1996-02-23  0:00     ` Robert Dewar
  1996-02-25  0:00   ` Robert Dewar
  1 sibling, 1 reply; 12+ messages in thread
From: Robert A Duff @ 1996-02-24  0:00 UTC (permalink / raw)


In article <4gl344$q4j@wdl1.wdl.loral.com>,
Mark A Biggar <mab@dst17.wdl.loral.com> wrote:
>My guess is that on that machine signed zeros exist and that -0.0 < 0.0
>and that -0.0 * 0.0 => 0.0, thus the above gives the right answer when
>given a -0.0.

Ada 83 doesn't say anything about signed zeros.

Ada 95 does, and it says that if the machine has signed zeros, then
minus zero is equal to plus zero, according to the predefined "="
operator.

- Bob





* Re: Can compilers do this?
  1996-02-23  0:00 ` Mark A Biggar
  1996-02-24  0:00   ` Robert A Duff
@ 1996-02-25  0:00   ` Robert Dewar
  1 sibling, 0 replies; 12+ messages in thread
From: Robert Dewar @ 1996-02-25  0:00 UTC (permalink / raw)


Mark said

"My guess is that on that machine signed zeros exist and that -0.0 < 0.0
and that -0.0 * 0.0 => 0.0, thus the above gives the right answer when
given a -0.0."

Mark, I think your guess is probably right (this was in response to the
strange abs code for testing for zero). But note that if this was Ada
code, then the compiler involved had a serious bug. In both Ada 83 and
Ada 95, minus zero and plus zero must compare equal.

The situation in Ada 83 is that minus zero does not exist in the language
at all. A machine that generates minus zeroes is simply exhibiting a case
where one Ada value can have multiple hardware representations. If this
is the case, the compiler must ensure that these hardware representations
are treated as identical in all contexts where the semantics of the value
is well defined (an exception would be unchecked conversion, where in any
case the result has some implementation dependence).

The situation in Ada 95 is that minus zero is a recognized value, and certain
operations in the language can tell the difference between minus zero and
plus zero (and in fact MUST do so correctly if 'Signed_Zeros is true).
However, equality testing is not among these operations, and must work
"correctly", i.e. plus and minus zero must compare equal.

The other guess of course for the original code is that it came out of
uninformed superstition (the same kind of superstition that says never
compare floating-point values for equality), and that the compiler did
things right, but the programmer did not know or believe that this was
the case!






* Re: Can compilers do this?
  1996-02-26  0:00 ` Robert I. Eachus
@ 1996-02-26  0:00   ` BWBurnsed
  0 siblings, 0 replies; 12+ messages in thread
From: BWBurnsed @ 1996-02-26  0:00 UTC (permalink / raw)


I have read with interest the replies to my message. I can see that we
shall have to check with our specific vendor to know whether or not
this (validated!) compiler suffers from the signed-zero syndrome.
The discussion of "clipping" near zero is very interesting. Allow me to
elaborate (so to speak) on more of the surrounding code. Actually, they
provided a function (which I shall refer to as "clip") that was
implemented thus (only the names have been changed, to protect the...):

  function Clip ( x : some_float_type;
                  Z : some_float_type )
     return some_float_type is
  begin
     --  Dead-band: anything within Z of zero is treated as zero.
     if abs(x) <= Z then
        return 0.0;
     else
        return x;
     end if;
  end Clip;

But-- when it came to employing this function, it went something like:

  tmp_x : some_float_type;
  abs_x : some_float_type;
    ...
  tmp_x := {some calculation} ;
  abs_x := clip ( abs(tmp_x), 0.01 );

  if tmp_x * abs_x < 0.0 then
    final_answer := tmp_x;
  else
    final_answer := -tmp_x;
  end if;

There are also examples of simply using

  if X * abs(X) < 0.0 then ...

but the above astounds me more.





* Re: Can compilers do this?
  1996-02-22  0:00 Can compilers do this? BWBurnsed
                   ` (4 preceding siblings ...)
  1996-02-23  0:00 ` Cordes MJ
@ 1996-02-26  0:00 ` Robert I. Eachus
  1996-02-26  0:00   ` BWBurnsed
  5 siblings, 1 reply; 12+ messages in thread
From: Robert I. Eachus @ 1996-02-26  0:00 UTC (permalink / raw)


In article <4gjd6g$mfq@newsbf02.news.aol.com> bwburnsed@aol.com (BWBurnsed) writes:

  > I came across some very strange looking code that someone else wrote,
  > long, long ago, and (apparently) in a universe far, far away. But before I
  > make too many critical comments, I want to be sure I'm not missing
  > something.

  > Repeatedly in this code (in many files), there are places where a floating
  > point variable is tested to see if it is negative. However, the way it is
  > done is:

  >	if  X * abs(X)  <  0.0  then  ...

  > Is there (or was there ever) some pathological anomaly about
  > floating point implementations that would make a conversion (abs)
  > and floating point multiply preferable to testing a sign bit? Can,
  > and will, optimizing compilers recognize the real test desired in
  > such constructs, i.e. reduce it to a sign bit test?

   A lot of answers have shown up here which focused on hardware which
supports negative zeros.  But anyone with a thorough understanding of
floating-point should recognize that there are many legitimate
negative numbers for which this expression is false.  In fact it
becomes much more understandable if written:

   if X < 0.0 and then X * X > Float'Model_Small then  ...

  (Replace Model_Small with Safe_Small in Ada 83, and Float with the
appropriate type name if it isn't Float.)

   But of course this is just a (portable) approximation to the
behavior on your hardware.  In any case it looks as if the author of
the code is trying to avoid treating small deviations from zero,
probably due to floating point arithmetic errors, as negative.

   Of course if this is the intended effect, the "best" portable
source would be:

   Fuzz: constant Float := Sqrt(Float'Model_Small);  -- or whatever
   ...
   if X + Fuzz < 0.0 then ...
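
Spelled out as a compilable sketch (the packaging and function name are
mine; Ada 95 names, with Sqrt from the standard elementary functions):

   with Ada.Numerics.Elementary_Functions;

   function Is_Truly_Negative (X : Float) return Boolean is
      use Ada.Numerics.Elementary_Functions;
      Fuzz : constant Float := Sqrt (Float'Model_Small);  -- or whatever
   begin
      --  Deviations within Fuzz of zero do not count as negative.
      return X + Fuzz < 0.0;
   end Is_Truly_Negative;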

--

					Robert I. Eachus

with Standard_Disclaimer;
use  Standard_Disclaimer;
function Message (Text: in Clever_Ideas) return Better_Ideas is...





* Re: Can compilers do this?
@ 1996-02-26  0:00 Marin David Condic, 407.796.8997, M/S 731-93
  0 siblings, 0 replies; 12+ messages in thread
From: Marin David Condic, 407.796.8997, M/S 731-93 @ 1996-02-26  0:00 UTC (permalink / raw)


Robert Dewar <dewar@CS.NYU.EDU> writes:
>Subject: Re: Can compilers do this?
>
>BwB says
>
>"Also, suppose Y and X are floating point variables, and M and B are
>CONSTANT floating point variables (not named numbers) initialized to
>1.0  and  0.0  respectively. If one writes
>
>     Y := M * X  +  B ;
>
>can (and will) any compiler reduce this to
>
>     Y := X ;
>
>
>Sure, this is just standard constant propagation, a very common
>optimization.
>
    Except that you might not always want to. Constant folding can be
    a "good" optimization from a speed standpoint, but a "bad"
    optimization from a test standpoint.

    When we build control systems, we typically like to keep tables
    and assorted "constants" as "rubber constants". They are declared
    constants so that the program can't change the values, but when
    they're loaded into the box, we can change them with test
    equipment in order to affect "trim" on the control or accomplish
    other test stuff. If they were optimized away, you couldn't do
    this.

    The EDS-Scicon compiler we are working with gives us a "pragma
    VOLATILE" directive to make sure that the constants aren't optimized
    away. I personally think anyone making a compiler for embedded
    systems should provide a similar option.
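
    A sketch of the idea (names made up; this assumes the compiler
    accepts pragma VOLATILE on a constant object, which is how the
    pragma described above is used):

        Trim_Gain : constant Float := 1.0;
        pragma Volatile (Trim_Gain);
        --  The program cannot write Trim_Gain, but test equipment
        --  can patch it in memory, so the compiler must not fold
        --  it away.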

    Pax,
    MDC

Marin David Condic, Senior Computer Engineer    ATT:        407.796.8997
M/S 731-93                                      Technet:    796.8997
Pratt & Whitney, GESP                           Fax:        407.796.4669
P.O. Box 109600                                 Internet:   CONDICMA@PWFL.COM
West Palm Beach, FL 33410-9600                  Internet:   MDCONDIC@AOL.COM
===============================================================================
    "Nobody shot me."

        --  Last words of Frank Gusenberg when asked by police who
            shot him fourteen times with a machine gun in the Saint
            Valentine's Day Massacre.
===============================================================================




