comp.lang.ada
* floating point comparison
@ 1997-07-29  0:00 Matthew Heaney
  1997-07-30  0:00 ` Robert Dewar
                   ` (3 more replies)
  0 siblings, 4 replies; 105+ messages in thread
From: Matthew Heaney @ 1997-07-29  0:00 UTC (permalink / raw)



I need to do some floating point comparisons in Ada 83 (but I'd like Ada 95
advice too), and need someone to interpret the advice in the AQ&S (the Ada
Quality and Style guide).

My immediate problem is, How do I test 2 floating point numbers for equality?

The AQ&S recommends that

if abs (X - Y) <= Float_Type'Small

be used to test for "absolute equality in storage."

Q: What does "absolute equality in storage" mean?

Q: Why exactly is the test

if X = Y

a Bad Thing?  Is it nonportable?  Not guaranteed to work?

Q: Is it ever acceptable to write "if X = Y" ?

Robert Dewar has stated in previous threads (one has the same name as the
subject of this post, so you can go to deja news to look it up) that it's
often OK to test for equality.  Robert: can you enumerate the conditions
under which such a test is appropriate, or refer us to a book explaining
why?


The AQ&S recommends that

if abs (X - Y) <= Float_Type'Base'Small 

be used to test for "absolute equality in computation."

Q: What does "absolute equality in computation" mean?

Q: How is equality in "computation" different from "storage"?


The AQ&S recommends that

if abs (X - Y) <= abs X * Float_Type'Epsilon 

be used to test for "relative equality in storage."

Q: What does "relative equality in storage" mean?

Q: How is "relative" equality different from "absolute" equality?


The AQ&S recommends that

if abs (X - Y) <= abs X * Float_Type'Base'Epsilon 

be used to test for "relative equality in computation."

Q: What does "relative equality in computation" mean?



Suppose I have 2 lines, and I want to make sure they aren't parallel.  How
should I write the predicate to compare the slopes of the lines, e.g.

if M1 = M2 then

or

if abs (M1 - M2) <= Slope'Small

or 

if abs (M1 - M2) <= abs M1 * Slope'Epsilon

or

<one of the forms that uses Slope'Base>

Honestly, the advice in the AQ&S is way over my head.  Does anyone out
there understand it, and care to explain it?

Is there a paper or book that explains Everything You Wanted To Know About
Floating Point Comparisons?

Under what circumstances would one use T'Small vs T'Epsilon?

I looked around in the Ada programming FAQ and couldn't find anything about
floating point comparisons.  Perhaps we can use the discussion this thread
engenders to add something to it.

Oddly enough, none of my Ada books state how to properly write a predicate
to compare floating point numbers.  Even Norm's book just had a vague
"compare the difference to a carefully chosen small value."  Anybody have
any substantive advice?

--------------------------------------------------------------------
Matthew Heaney
Software Development Consultant
<mailto:matthew_heaney@acm.org>
(818) 985-1271





* Re: floating point comparison
  1997-07-30  0:00 ` Robert Dewar
@ 1997-07-30  0:00   ` Matthew Heaney
  1997-07-31  0:00     ` Jim Carr
  1997-07-30  0:00   ` Matthew Heaney
  1997-07-31  0:00   ` Gerald Kasner
  2 siblings, 1 reply; 105+ messages in thread
From: Matthew Heaney @ 1997-07-30  0:00 UTC (permalink / raw)



In article <dewar.870304869@merv>, dewar@merv.cs.nyu.edu (Robert Dewar) wrote:

Another question then.  I was digging around deja news, and Bob Eachus
responded to a similar query with the answer:


>  > So,  How Do I Do It? 
>
>  First, don't.
>
>  Second, if you must, implement your own equality operation that
>allows for a small range of error:
>
>  function Equal(L,R: Float) return Boolean is
>  begin
>    if L = R then return True;
>    elsif abs(L-R) < abs(R) * Float'Epsilon then return True;
>    else return False;
>    end if;
>  end Equal;
>
>  (The first check is both for efficiency and to deal with the case
>where L = R = 0.0...)
>
>--
>
>               Robert I. Eachus


Why didn't he just say, "Compare them directly using the equality operator
for the floating point type."  Why did he bother writing the Equal
function?

--------------------------------------------------------------------
Matthew Heaney
Software Development Consultant
<mailto:matthew_heaney@acm.org>
(818) 985-1271





* Re: floating point comparison
  1997-07-29  0:00 floating point comparison Matthew Heaney
  1997-07-30  0:00 ` Robert Dewar
@ 1997-07-30  0:00 ` Jan Galkowski
  1997-07-31  0:00   ` Don Taylor
  1997-08-02  0:00 ` Michael Sierchio
  1997-08-08  0:00 ` floating point conversions Mark Lusti
  3 siblings, 1 reply; 105+ messages in thread
From: Jan Galkowski @ 1997-07-30  0:00 UTC (permalink / raw)



Matthew Heaney wrote:
> 
> I need to do some floating point comparisons in Ada 83 (but I'd like Ada 95
> advice too), and need someone to interpret the advice in the AQ&S.
> 
> My immediate problem is, How do I test 2 floating point numbers for equality?
> 
[snip]

I'm answering this question with my numerical analyst's hat donned.
The comparison of two flonums for equality needs to be done in terms
of an application-dependent criterion for equality.  Generally, in a 
strongly typed language like Ada, that means the criterion should vary
from subtype to subtype and perhaps the variation should be even finer than
that.  Thus, while the precision of a hypothetical navigation package may
be a maximum of 3 meters (with precision increasing as distance goes smaller),
sometimes the comparison that makes sense may be 5 meters and sometimes it
may be 100 meters.  With statistical quantities, the comparison needs to
be made considering their inherent variability.  

> 
> Suppose I have 2 lines, and I want to make sure they aren't parallel.  How
> should I write the predicate to compare the slopes of the lines, e.g.
> 
> if M1 = M2 then
> 
> or
> 
> if abs (M1 - M2) <= Slope'Small
> 
>

First of all, if you are using slopes, the criterion for comparison will
necessarily be related in a nonlinear way to the slope, because slopes
vary as the tangent of the angle of attack.  The preferred way to test for
perpendicularity or being parallel is to use the inner product of the
measurement space involved -- if it is a typical graphical space, this means
the dot product.  Two lines are parallel if the absolute value of the dot 
product of unit vectors erected parallel to each of them (easily done -- 
the vector divided by its L2-norm) is one.  How close to one this needs
to get is application dependent and a little tricky.  That's because
the distance is non-linearly related to the angle between the lines.  
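
As a sketch of that test (Ada 95 code with hypothetical names; choosing
Tol is the application-dependent part just described):

   --  assumes the enclosing unit has
   --     with Ada.Numerics.Long_Elementary_Functions;
   --     use  Ada.Numerics.Long_Elementary_Functions;
   --  to get Sqrt

   type Vector is record
      X, Y : Long_Float;
   end record;

   --  |cosine| of the angle between the two directions: the absolute
   --  value of the dot product of unit vectors (each vector divided
   --  by its L2-norm)
   function Abs_Cos (U, V : Vector) return Long_Float is
      Nu : constant Long_Float := Sqrt (U.X * U.X + U.Y * U.Y);
      Nv : constant Long_Float := Sqrt (V.X * V.X + V.Y * V.Y);
   begin
      return abs (U.X * V.X + U.Y * V.Y) / (Nu * Nv);
   end Abs_Cos;

   function Parallel (U, V : Vector; Tol : Long_Float) return Boolean is
   begin
      return Abs_Cos (U, V) >= 1.0 - Tol;
   end Parallel;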

In practice, the absolute criterion for being parallel will depend upon
the precision of the representation for the flonums being used -- it's
easy to require "digits 14" -- and the value of the cosine for the
smallest angle you'd like to resolve.  So, to continue the example
above, if one needs to resolve an angle at the Earth's center
equivalent to a separation of 3 meters at its surface, that's
about 2.7e-5 degrees; but since 1 - cos(theta) is about theta**2/2 for
small theta, the cosine differs from one by a tad over 1 part in
10 trillion (1 part in 10**13).

> Honestly, the advice in the AQ&S is way over my head.  Does anyone out
> there understand it, and care to explain it?

To quote Barnes' PROGRAMMING IN ADA95, page 328 in his section on floating
point types: "But care is sometimes needed and the advice of a professional
numerical analyst should be sought when in doubt."  And, no, it isn't
any easier in C or C++: There the mess is merely more disguised.

> 
> Is there a paper or book that explains Everything You Wanted To Know About
> Floating Point Comparisons?

Yes, take an introductory course in numerical analysis.  "Numerical analysis
is a science -- computation is an art."  A couple of shortcuts,
however:

  (1) The introductory chapter of the Press, Flannery, Teukolsky, and Vetterling
      NUMERICAL RECIPES usually has a section which discusses precision, representation,
      and accuracy.

  (2) Francis Scheid wrote a "Numerical Analysis" installment for the Schaum's Outline
      Series.  I don't know if it is still in print, and it is a tad out of date,
      suffering from an excessive focus upon scalar computations, but it is a good
      text for learning on one's own.

  (3) The first couple of chapters of the text J.E.Dennis, Jr., R.B.Schnabel,
      NUMERICAL METHODS FOR UNCONSTRAINED OPTIMIZATION AND NONLINEAR EQUATIONS,
      have a pretty modern treatment of errors and precision, although from a
      particular focus.

  (4) The first chapter of J.J.Dongarra, C.B.Moler, J.R.Bunch, G.W.Stewart, 
      LINPACK USERS' GUIDE (SIAM), has a good introduction to these concerns.

  (5) An older, but still good (IMO) text is Froberg, INTRODUCTION TO NUMERICAL
      ANALYSIS, Addison-Wesley Publishing, 1969, LCC Card No. 73-79592.

  (6) Arguably the best treatment I have seen was by L.Fox and D.F.Mayers in their
      COMPUTING METHODS FOR SCIENTISTS AND ENGINEERS, Clarendon Press, Oxford,
      1968, in its first chapter.

  (7) There may be something published about numerics in Ada-- There was or is
      a working group which studied or studies Ada's numerics.  So there may
      be direct advice available.  

I recall taking a course in Ada a long time ago where they offered the opinion --
with which I happen to agree -- that starting with numbers in the teaching
of programming languages is a mistake.  That is, at least in part -- and this is
my own opinion, not that of those teachers -- because numbers beyond integers
are a lot more complicated than they look.  Ada83's model numbers were a very
good simplification of matters, but no one seemed to understand even those,
particularly commercial compiler writers.  


[snip]

> 
> --------------------------------------------------------------------
> Matthew Heaney
> Software Development Consultant
> <mailto:matthew_heaney@acm.org>
> (818) 985-1271

-- 
 Jan Theodore Galkowski, 
 developer, statistician,

  speaking only for myself,
  
  jan@digicomp.com 
  jtgalkowski@worldnet.att.net

  Member, 

    the American Statistical Association,
    the Union of Concerned Scientists.





* Re: floating point comparison
  1997-07-29  0:00 floating point comparison Matthew Heaney
@ 1997-07-30  0:00 ` Robert Dewar
  1997-07-30  0:00   ` Matthew Heaney
                     ` (2 more replies)
  1997-07-30  0:00 ` Jan Galkowski
                   ` (2 subsequent siblings)
  3 siblings, 3 replies; 105+ messages in thread
From: Robert Dewar @ 1997-07-30  0:00 UTC (permalink / raw)



Matthew Heaney quotes the AQ&S with respect to its advice on comparing
floating-point numbers.

Please don't quote this section; it contains a fair bit of embarrassing
nonsense.

As for when it makes sense to compare floating-point values for equality,
that is a matter for the programmer to understand. Floating-point arithmetic
on modern IEEE machines is not some kind of approximate hocus-pocus, it
is a well defined arithmetic system, with well defined, well behaved
results, in which equality has a perfectly reasonable meaning.






* Re: floating point comparison
  1997-07-30  0:00 ` Robert Dewar
  1997-07-30  0:00   ` Matthew Heaney
@ 1997-07-30  0:00   ` Matthew Heaney
  1997-07-31  0:00     ` Samuel Mize
                       ` (4 more replies)
  1997-07-31  0:00   ` Gerald Kasner
  2 siblings, 5 replies; 105+ messages in thread
From: Matthew Heaney @ 1997-07-30  0:00 UTC (permalink / raw)



In article <dewar.870304869@merv>, dewar@merv.cs.nyu.edu (Robert Dewar) wrote:


>As for when it makes sense to compare floating-point values for equality,
>that is a matter for the programmer to understand. Floating-point arithmetic
>on modern IEEE machines is not some kind of approximate hocus-pocus, it
>is a well defined arithmetic system, with well defined, well behaved
>results, in which equality has a perfectly reasonable meaning.

OK, so if I do this

declare
   m1, m2 : slope;
begin
   if m1 = m2 then

and I'm using IEEE floating point types (I'm on a SUN box), can I assume
that the lines are parallel?

--------------------------------------------------------------------
Matthew Heaney
Software Development Consultant
<mailto:matthew_heaney@acm.org>
(818) 985-1271





* Re: floating point comparison
  1997-07-30  0:00 ` Robert Dewar
  1997-07-30  0:00   ` Matthew Heaney
  1997-07-30  0:00   ` Matthew Heaney
@ 1997-07-31  0:00   ` Gerald Kasner
  1997-07-31  0:00     ` Robert Dewar
  2 siblings, 1 reply; 105+ messages in thread
From: Gerald Kasner @ 1997-07-31  0:00 UTC (permalink / raw)



Robert Dewar wrote:
> 
> Matthew Heaney quotes the AQ&S with respect to its advice on comparing
> floating-point numbers.
> 
> Please don't quote this section; it contains a fair bit of embarrassing
> nonsense.
> 
> As for when it makes sense to compare floating-point values for equality,
> that is a matter for the programmer to understand. Floating-point arithmetic
> on modern IEEE machines is not some kind of approximate hocus-pocus, it
> is a well defined arithmetic system, with well defined, well behaved
> results, in which equality has a perfectly reasonable meaning.


Hm, have you ever tried to understand the first chapters on rounding
errors in some textbook on linear algebra?
(or try Wilkinson, Rounding Errors (very old, but good))

Or try to understand the routines in the well known book of Wilkinson
and Reinsch (Linear Algebra II), which many routines of Linpack/Eispack
or Lapack are based on.

Things you call approximate hocus-pocus make life hard if you try to
implement some algorithms which are exact (in exact arithmetic), but
fail due to rounding errors EVEN ON MODERN IEEE MACHINES.

Comparing floating point numbers for equality should be avoided; the
programmer needs some deeper insight into the problem.

-Gerald


-- 
##############################################
# Dr. Gerald Kasner                          #       
# Gerald.Kasner@Physik.Uni-Magdeburg.de      #
# Tel.: Germany +391 / 67 / 12469            #
# Fax.: Germany +391 / 67 / 11131            #
##############################################





* Re: floating point comparison
  1997-07-30  0:00   ` Matthew Heaney
  1997-07-31  0:00     ` Samuel Mize
@ 1997-07-31  0:00     ` Martin Tom Brown
  1997-07-31  0:00     ` Bob Binder  (remove .mapson to email)
                       ` (2 subsequent siblings)
  4 siblings, 0 replies; 105+ messages in thread
From: Martin Tom Brown @ 1997-07-31  0:00 UTC (permalink / raw)



In article <mheaney-ya023680003007972240570001@news.ni.net>
           mheaney@ni.net "Matthew Heaney" writes:

> In article <dewar.870304869@merv>, dewar@merv.cs.nyu.edu (Robert Dewar) wrote:
> 
> 
> >As for when it makes sense to compare floating-point values for equality,
> >that is a matter for the programmer to understand. Floating-point arithmetic
> >on modern IEEE machines is not some kind of approximate hocus-pocus, it
> >is a well defined arithmetic system, with well defined, well behaved
> >results, in which equality has a perfectly reasonable meaning.
> 
> OK, so if I do this
> 
> declare
>    m1, m2 : slope;
> begin
>    if m1 = m2 then
> 
> and I'm using IEEE floating point types (I'm on a SUN box), can I assume
> that the lines are parallel?

There are two obvious situations where you can get into trouble -

#1 m1 equal to m2 	but the lines are not parallel, and numerical 
			cancellation gives fewer significant bits.

You can try and control this loss of significance.

#2 m1 not equal to m2 	but the lines are actually parallel and the
			small difference is due to rounding errors.

On some systems this may be unavoidable hence you have to accept
numbers as equal if they differ by less than some limit. 

If "slope" is a single floating point number then you are in trouble
when a line becomes parallel to one of the two axes (division by 0).

As Robert has said, IEEE is a well defined floating-point number system
and so for any given calculation it is possible to determine bounds.
How much thought you give to a problem depends on how much it matters - 
for example there is probably no need to worry if a plotter pen is 
a few microns out of position after drawing a 1m line.

As a concrete and I think simpler example of the difference between
algebraic identities and numerical IEEE FP approximations try 
evaluating the following in the explicit order shown:

	f1(x) =	1 + x + x^2 + ... + x^(N-1) + x^N

	f2(x) = x^N + x^(N-1) + x^(N-2) + ... + x + 1

	f3(x) = (1 - x^(N+1))/(1-x)

Barring typos I hope everyone agrees these are algebraic identities
(at least for finite N)

If you now try to evaluate each of them numerically for say N=10 
and x = 10, 3, 2, 1, 1/2, 1/3, 1/10 you should see some differences.
(There is a gotcha in this list)
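
One way to run that experiment (a sketch in Ada, assuming Long_Float is
an IEEE double; the procedure and names are mine):

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Series_Demo is
      N : constant := 10;
      X : Long_Float := 0.5;  -- also try 10.0, 3.0, 2.0, 1.0, 1.0/3.0, 0.1
      F1, F2, F3, Term : Long_Float;
   begin
      --  f1: accumulate from the low-order term upward
      F1   := 0.0;
      Term := 1.0;
      for I in 0 .. N loop
         F1   := F1 + Term;
         Term := Term * X;
      end loop;

      --  f2: accumulate from the high-order term downward
      F2 := 0.0;
      for I in reverse 0 .. N loop
         F2 := F2 + X ** I;
      end loop;

      --  f3: the closed form (the gotcha: undefined at x = 1.0)
      F3 := (1.0 - X ** (N + 1)) / (1.0 - X);

      Put_Line (Long_Float'Image (F1));
      Put_Line (Long_Float'Image (F2));
      Put_Line (Long_Float'Image (F3));
   end Series_Demo;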

Common programmer errors often arise from exact decimal fractions 
which do not have a finite representation in the binary system.

Regards,
-- 
Martin Brown  <martin@nezumi.demon.co.uk>     __                CIS: 71651,470
Scientific Software Consultancy             /^,,)__/






* Re: floating point comparison
  1997-07-30  0:00   ` Matthew Heaney
  1997-07-31  0:00     ` Samuel Mize
  1997-07-31  0:00     ` Martin Tom Brown
@ 1997-07-31  0:00     ` Bob Binder  (remove .mapson to email)
  1997-07-31  0:00       ` Robert Dewar
  1997-08-01  0:00       ` user
  1997-07-31  0:00     ` Robert Dewar
  1997-08-02  0:00     ` Lynn Killingbeck
  4 siblings, 2 replies; 105+ messages in thread
From: Bob Binder  (remove .mapson to email) @ 1997-07-31  0:00 UTC (permalink / raw)



Matthew Heaney <mheaney@ni.net> wrote in article
<mheaney-ya023680003007972240570001@news.ni.net>...
> In article <dewar.870304869@merv>, dewar@merv.cs.nyu.edu (Robert Dewar) wrote:
> 
> 
> >As for when it makes sense to compare floating-point values for equality,
> >that is a matter for the programmer to understand. Floating-point arithmetic
> >on modern IEEE machines is not some kind of approximate hocus-pocus, it
> >is a well defined arithmetic system, with well defined, well behaved
> >results, in which equality has a perfectly reasonable meaning.
> 
> OK, so if I do this
> 
> declare
>    m1, m2 : slope;
> begin
>    if m1 = m2 then
> 
> and I'm using IEEE floating point types (I'm on a SUN box), can I assume
> that the lines are parallel?

Assuming = is the equality operator (not assignment), and something 
has placed *exactly* equal values in m1 and m2 before the compare, 
then they'll compare equal.

But this isn't the source of the problem. 

I haven't followed the IEEE fp standard, but I can't imagine how
(without resorting to some kind of BCD representation) any floating
point scheme using a non-decimal radix gets around the fundamental fact
that there are non-terminating fractions in this radix which are not
non-terminating in a decimal radix.

Try this:

float x, y;

x = 1.0;
y = (x / 10.0) * 10.0;
if (x == y) {
	// not likely
} else {
	// more likely
}

E.g., with a hexadecimal fraction representation, there is no exact 
representation for 0.1, just as there is no exact decimal 
representation of 1/3.  In all the Fortrans I used, the above hack 
would show this.  Unless your compiler designers have compensated
for this, the above (or another suitably chosen constant) will 
demonstrate this mismatch.


-- 
______________________________________________________________________________ 
Bob Binder                http://www.rbsc.com          RBSC Corporation
312 214-3280  tel         Software Engineering         3 First National Plaza 
312 214-3110  fax         Process Improvement          Suite 1400 
rbinder@rbsc.mapson.com   (remove .mapson to mail)     Chicago, IL 60602-4205















* Re: floating point comparison
  1997-07-30  0:00   ` Matthew Heaney
@ 1997-07-31  0:00     ` Samuel Mize
  1997-07-31  0:00     ` Martin Tom Brown
                       ` (3 subsequent siblings)
  4 siblings, 0 replies; 105+ messages in thread
From: Samuel Mize @ 1997-07-31  0:00 UTC (permalink / raw)



[reformatted for line length]
Matthew Heaney wrote:
> 
> In article <dewar.870304869@merv>, dewar@merv.cs.nyu.edu (Robert Dewar) wrote:
> 
> >As for when it makes sense to compare floating-point values
> >for equality, that is a matter for the programmer to understand.
> >Floating-point arithmetic on modern IEEE machines is not some kind
> > of approximate hocus-pocus, it is a well defined arithmetic system,
> >with well defined, well behaved results, in which equality has a
> > perfectly reasonable meaning.
> 
> OK, so if I do this
> 
> declare
>    m1, m2 : slope;
> begin
>    if m1 = m2 then
> 
> and I'm using IEEE floating point types (I'm on a SUN box), can I assume
> that the lines are parallel?

IEEE float calculation is a well-defined arithmetic system, but it
does not always match exact real-number arithmetic.  You need to
read up in one or more of the references other people provided, to
determine how the differences impact your computations.

The short answer is that there is no short answer.

The longer short answer is that usually, the inaccuracy of the data
swamps the imprecision of discrete floating-point arithmetic.  If
you're measuring lengths under 10 yards to the nearest foot, it really
doesn't matter if you compute the rectangular area with 32 or 64 bits.
In this case, you might consider 10.01 and 9.09 equal.  If you're
measuring to the nearest inch, you might not.  In neither case are
you approaching the precision of the machine.

There are cases where the machine precision plays a part.  For instance,
solution by successive approximation may thrash indefinitely if the
value you are hunting is further than your "close enough" criterion
from two "adjacent" approximations.

You might also find that the computations you are doing introduce a
significant level of imprecision due to representation limits, although
usually anything that amplifies imprecision also amplifies the
inaccuracy of your data.

And there are special boundary cases, e.g. trig functions zooming off
to infinity.

There really is no substitute for some knowledge of numerical analysis,
even if self-taught.  You have to at least know enough to tell when you
are safe, and when you need to be careful.  There isn't a set of
cookbook directions that will guide you.

What makes Ada intimidating in this respect is that the standard tries
to make these issues visible and controllable, instead of giving you
whatever's on the machine and ignoring the problem.  (A language like
that wouldn't get a better grade than C.)*

Best,
Sam Mize

* Yes, it was gratuitous.  :-)





* Re: floating point comparison
  1997-07-31  0:00   ` Gerald Kasner
@ 1997-07-31  0:00     ` Robert Dewar
  0 siblings, 0 replies; 105+ messages in thread
From: Robert Dewar @ 1997-07-31  0:00 UTC (permalink / raw)



Gerald says

<<Things you call approximate hocus-pocus make life hard if you try to
implement some algorithms which are exact (in exact arithmetic), but
fail due to rounding errors EVEN ON MODERN IEEE MACHINES.

Comparing floating point numbers for equality should be avoided; the
programmer needs some deeper insight into the problem.
>>


Nothing "fails" due to rounding errors on modern IEEE machines. What is
wrong here is not some kind of failure in the arithemtic, but the
programmers misuse of it. 

Of course if you expect to be able to take an algorithm which is
"exact" using real arithmetic, it is often completely bogus to
simply encode it using floating point arithmetic, that's exactly what
I mean by "approximate hocus pocus".

If you try to store 1001 numbers in a 1000 array, your program
malfunctions, but you immediately understand that the program is
wrong, you do not blame the array for "failing".

Misuse of IEEE floating-arithmetic is the same as any other bug a
careless programmer introduces into their program.

As to whether equality is what you want, it is a well defined
operation, you use it when and only when you want this operation.
It is certainly not the case that if the real arithmetic algorithm
you are using as an implementation model has an equality that you
should blindly use the equality IEEE operation. But then it is
equally foolish to think that you can blinndly translate an
addition to to a floating-point add. Any kind of blind unthinking
behavior is likely to get you into big trouble.

As (one) example of a case where equality is approrpiate, consider
the problem of computing a square root by Newton Raphson, using
round-towards-zero. In this situation, the estimate will converge
accurately to IEEE equality.
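
For concreteness, one such iteration might be sketched as follows
(hypothetical code, assuming A > 0.0; the stopping rule is the point):

   function Approx_Sqrt (A : Long_Float) return Long_Float is
      X    : Long_Float := 0.5 * (A + 1.0);  -- >= Sqrt (A), by AM-GM
      Next : Long_Float;
   begin
      loop
         Next := 0.5 * (X + A / X);
         --  from above the root the estimates decrease; with
         --  round-towards-zero they reach exact floating-point
         --  equality, and the >= also stops a 1-ulp bounce under
         --  round-to-nearest
         exit when Next >= X;
         X := Next;
      end loop;
      return X;
   end Approx_Sqrt;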

Floating-point code is hard stuff; most people who do it shouldn't!






* Re: floating point comparison
  1997-07-30  0:00   ` Matthew Heaney
                       ` (2 preceding siblings ...)
  1997-07-31  0:00     ` Bob Binder  (remove .mapson to email)
@ 1997-07-31  0:00     ` Robert Dewar
  1997-08-02  0:00     ` Lynn Killingbeck
  4 siblings, 0 replies; 105+ messages in thread
From: Robert Dewar @ 1997-07-31  0:00 UTC (permalink / raw)



Matthew says

<<OK, so if I do this

declare
   m1, m2 : slope;
begin
   if m1 = m2 then

and I'm using IEEE floating point types (I'm on a SUN box), can I assume
that the lines are parallel?
>>


This code is obviously erroneous as written, since m1 and m2 are
not initialized.

Presumably there is unknown code to compute m1 and m2 missing here, and
of course the answer to the question depends on this code, and on the
problem requirements, and on the accuracy of the input information.






* Re: floating point comparison
  1997-07-31  0:00     ` Bob Binder  (remove .mapson to email)
@ 1997-07-31  0:00       ` Robert Dewar
  1997-08-01  0:00         ` Dale Stanbrough
  1997-08-04  0:00         ` Paul Eggert
  1997-08-01  0:00       ` user
  1 sibling, 2 replies; 105+ messages in thread
From: Robert Dewar @ 1997-07-31  0:00 UTC (permalink / raw)



Bob Binder says

<<I haven't followed the IEEE fp standard, but I can't imagine how
(without resorting to some kind of BCD representation) any floating
point scheme using a non-decimal radix gets around the fundamental fact
that there are non-terminating fractions in this radix which are not
non-terminating in a decimal radix.
>>

There is nothing to "get around" here. The IEEE floating-point standard
is a well defined arithmetic system, with a well defined set of values
with well defined operations on them.

It is NOT, and never pretends to be, a representation of real arithmetic.
It has its own laws and axioms, but they are of course different from
those of real arithmetic.

There is nothing special about decimal arithmetic here. The laws of
physics are not made up by a deity with ten fingers. Real arithmetic
is of course independent of the chosen base, but so are observations
in the physical world. 

The IEEE fpt standard (IEEE-854) provides binary and decimal variations.
It is of course the binary version that is used on most modern
computers.







* Re: floating point comparison
  1997-07-30  0:00   ` Matthew Heaney
@ 1997-07-31  0:00     ` Jim Carr
  0 siblings, 0 replies; 105+ messages in thread
From: Jim Carr @ 1997-07-31  0:00 UTC (permalink / raw)



mheaney@ni.net (Matthew Heaney) writes:
>
>Why didn't he just say, "Compare them directly using the equality operator
>for the floating point type."  Why did he bother writing the Equal
>function?

 Suppose the floating-point variable x is assigned the value of 
 some floating-point constant.  Would you want your program to 
 return 'true' for the question "Is the sum from 1 to N of x, 
 that is, $\sum_1^N x$ in TeX, equal to N times x?" for any valid 
 floating-point constant and, say, values of N where the sum does 
 not overflow? 
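
 For instance (a sketch; it assumes Float is IEEE single precision):

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Sum_Demo is
      N   : constant := 10;
      X   : Float := 0.1;
      Sum : Float := 0.0;
   begin
      for I in 1 .. N loop
         Sum := Sum + X;   -- N rounded additions
      end loop;
      --  typically FALSE: the running sum and the single rounded
      --  multiplication accumulate different rounding errors
      Put_Line (Boolean'Image (Sum = Float (N) * X));
   end Sum_Demo;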

-- 
 James A. Carr   <jac@scri.fsu.edu>     | Commercial e-mail is _NOT_ 
    http://www.scri.fsu.edu/~jac/       | desired to this or any address 
 Supercomputer Computations Res. Inst.  | that resolves to my account 
 Florida State, Tallahassee FL 32306    | for any reason at any time. 





* Re: floating point comparison
  1997-07-30  0:00 ` Jan Galkowski
@ 1997-07-31  0:00   ` Don Taylor
  1997-07-31  0:00     ` Russ Lyttle
  0 siblings, 1 reply; 105+ messages in thread
From: Don Taylor @ 1997-07-31  0:00 UTC (permalink / raw)



Matthew Heaney wrote:
> I need to do some floating point comparisons in Ada 83 (but I'd like Ada 95
> advice too), and need someone to interpret the advice in the AQ&S.

If anyone has any pointers to well written guidelines for floating point
usage I would dearly love to hear about them.

I have seen the OLD Prentice Hall series (if my brain hasn't failed me)
text "Floating Point Computation".  I have seen the article in the ACM
Computing Reviews (again if the brain hasn't failed) a few years ago.
I have seen the text "Improving Floating Point ?" published recently.

But none of these seem to try to lay down a reasonably comprehensive set
of guidelines that individuals should consider when they begin floating
point computation.  If there were any contributions to such a set I am
almost provoked enough to try to take the best of everything that we know
about this subject and get it into book form.

Many many thanks
please use email if possible
psu04033@odin.cc.pdx.edu





* Re: floating point comparison
  1997-07-31  0:00   ` Don Taylor
@ 1997-07-31  0:00     ` Russ Lyttle
  1997-08-01  0:00       ` W. Wesley Groleau x4923
  0 siblings, 1 reply; 105+ messages in thread
From: Russ Lyttle @ 1997-07-31  0:00 UTC (permalink / raw)



Don Taylor wrote:
> 
> Matthew Heaney wrote:
> > I need to do some floating point comparisons in Ada 83 (but I'd like Ada 95
> > advice too), and need someone to interpret the advice in the AQ&S.
> 
> If anyone has any pointers to well written guidelines for floating point
> usage I would dearly love to hear about them.
> 
> I have seen the OLD Prentice Hall series (if my brain hasn't failed me)
> text "Floating Point Computation".  I have seen the article in the ACM
> Computing Reviews (again if the brain hasn't failed) a few years ago.
> I have seen the text "Improving Floating Point ?" published recently.
> 
> But none of these seem to try to lay down a reasonably comprehensive set
> of guidelines that individuals should consider when they begin floating
> point computation.  If there were any contributions to such a set I am
> almost provoked enough to try to take the best of everything that we know
> about this subject and get it into book form.
> 
> Many many thanks
> please use email if possible
> psu04033@odin.cc.pdx.edu
Great! Write the book. We need it. It seems to me that almost no one
understands computer based (pseudo)floating point computation.  Write if
you need help or would like input.





* Re: floating point comparison
  1997-07-31  0:00       ` Robert Dewar
@ 1997-08-01  0:00         ` Dale Stanbrough
  1997-08-04  0:00         ` Paul Eggert
  1 sibling, 0 replies; 105+ messages in thread
From: Dale Stanbrough @ 1997-08-01  0:00 UTC (permalink / raw)



Robert Dewar writes:

"The laws of physics are not made up by a deity with ten fingers"

This is, of course, supposition on Robert's part - or do you have
inside information?

Dale :-)





* Re: floating point comparison
  1997-07-31  0:00     ` Bob Binder  (remove .mapson to email)
  1997-07-31  0:00       ` Robert Dewar
@ 1997-08-01  0:00       ` user
  1997-08-02  0:00         ` Peter L. Montgomery
  1997-08-02  0:00         ` Lynn Killingbeck
  1 sibling, 2 replies; 105+ messages in thread
From: user @ 1997-08-01  0:00 UTC (permalink / raw)



Bob Binder wrote:

> I haven't followed the IEEE fp standard, but I can't imagine how
> (without resorting to some kind of BCD representation) any floating
> point scheme using a non-decimal radix gets around the fundamental fact
> that there are non-terminating fractions in this radix which are not
> non-terminating in a decimal radix.
> 
> Try this:
> 
> float x, y;
> 
> x = 1.0;
> y = (x / 10.0) * 10.0;
> if (x == y) {
>         // not likely
> } else {
>         // more likely
> }
Although I am reading this in comp.lang.ada, the suggestion to try a
simple example like this in C piqued my interest.

On a Sun SPARC-5, with the built-in compiler supplied by Sun, this
code (fixed for syntax) found (x == y) to be true.  Using 3.0 and
7.0 (neither of which has any chance of yielding an exact
representation after the division) also found (x == y) to be true.

The underlying machine arithmetic escapes me, but a simple mechanism
such as retaining more precision in computation results, and rounding
prior to comparison to some lesser degree of precision that still meets
the spec, would work.  Cheap calculators do this all the time, using
something known as "guard digits," which retain precision that is not
displayed in the (rounded) result.





* Re: floating point comparison
  1997-07-31  0:00     ` Russ Lyttle
@ 1997-08-01  0:00       ` W. Wesley Groleau x4923
  1997-08-02  0:00         ` Robert Dewar
  1997-08-03  0:00         ` Brian Rogoff
  0 siblings, 2 replies; 105+ messages in thread
From: W. Wesley Groleau x4923 @ 1997-08-01  0:00 UTC (permalink / raw)



> > But none of these seem to try to lay down a reasonably comprehensive 
> > set of guidelines that individuals should consider when they begin 
> > floating-point computation.  If there were any contributions to such 
> > a set I am almost provoked enough to try to take the best of 
> > everything that we know about this subject and get it into book 
> > form.

> Great! Write the book. We need it. It seems to me that almost no one
> understands computer based (pseudo)floating point computation.Write if
> you need help or would like input.

I'd be pleased if I could just get my hands on a proposed alternative 
for the "fair bit of embarrassing nonsense" in Ada Quality and Style.

It would not be appropriate for a general-purpose style guide to 
mandate "take an introductory course in numerical analysis."

-- 
----------------------------------------------------------------------
    Wes Groleau, Hughes Defense Communications, Fort Wayne, IN USA
Senior Software Engineer - AFATDS                  Tool-smith Wanna-be

Don't send advertisements to this domain unless asked!  All disk space
on fw.hac.com hosts belongs to either Hughes Defense Communications or 
the United States government.  Using email to store YOUR advertising 
on them is trespassing!
----------------------------------------------------------------------





* Re: floating point comparison
  1997-08-01  0:00       ` user
@ 1997-08-02  0:00         ` Peter L. Montgomery
  1997-08-04  0:00           ` W. Wesley Groleau x4923
  1997-08-02  0:00         ` Lynn Killingbeck
  1 sibling, 1 reply; 105+ messages in thread
From: Peter L. Montgomery @ 1997-08-02  0:00 UTC (permalink / raw)



In article <33E261F7.2EFB@mailgw.sanders.lockheed.com> 
user <user@mailgw.sanders.lockheed.com> writes:
>Bob Binder wrote:

>> I haven't followed the IEEE fp standard, but I can't imagine how
>> (without resorting to some kind of BCD representation) any floating
>> point scheme using a non-decimal radix gets around the fundamental fact
>> that there are non-terminating fractions in this radix which are not
>> non-terminating in a decimal radix.
 
>> Try this:
 
>> float x, y;

>> x = 1.0;
>> y = (x / 10.0) * 10.0;
>> if (x == y) {
>>         // not likely
>> } else {
>>         // more likely
>> }
>Although I am reading this in comp.lang.ada, the suggestion to try a
>simple example like this in C piqued my interest.
>
>On a Sun SPARC-5, with the built in compiler supplied by Sun, this
>code (fixed for syntax) found (x == y) to be true.  Using 3.0 and
>7.0 (neither of which have any chance of yielding an exact
>representation
>after the division) also found (x == y) to be true.
   

     You lucked out due to IEEE rounding to nearest.  In IEEE double
precision, each nonzero floating point value is represented as +-1
times an integer mantissa M in [2^52, 2^53) times a power 2^e,
where e is an integer.
     
     When approximating 1/10 = 2^(-56) * (2^56/10),
we want M to be the integer nearest to 2^56/10.  
This is (2^56 + 4)/10.  The computed value of 1.0/10.0
becomes ((2^56 + 4)/10) * 2^(-56).

     Now multiply back by 10.0.  The exact product
would be (2^56 + 4)*2^(-56), but the mantissa
is required to be in [2^52, 2^53).  The result is scaled
to (2^52 + 0.25)*2^(-52).  Now a rounding gives
2^52 * 2^(-52), or 1.0.  The final result happens to
match the original x, even though intermediate results were inexact.

    On the other hand, try (1.0/49.0)*49.0.
The quotient 1/49 is approximated by ((2^58 - 23)/49) * 2^(-58).
Now multiply back by 49.  The normalized mantissa
2^53 - 23/32 rounds to 2^53 - 1, and the product
is approximated by (2^53 - 1)*2^(-53).

[Yes, I ignored denormals and other special operands above.]
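
A quick way to watch both cases (a sketch; it assumes Long_Float is an
IEEE double, and uses variables so the compiler cannot legally fold the
expressions to anything but the run-time results):

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Round_Trip is
      X    : Long_Float := 1.0;
      Y, Z : Long_Float;
   begin
      Y := (X / 10.0) * 10.0;
      Z := (X / 49.0) * 49.0;
      Put_Line (Boolean'Image (Y = X));  -- TRUE:  rounds back to 1.0
      Put_Line (Boolean'Image (Z = X));  -- FALSE: (2^53 - 1)*2^(-53)
   end Round_Trip;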
-- 
        Peter L. Montgomery    pmontgom@cwi.nl    San Rafael, California

A mathematician whose age has doubled since he last drove an automobile.






* Re: floating point comparison
  1997-08-01  0:00       ` W. Wesley Groleau x4923
@ 1997-08-02  0:00         ` Robert Dewar
  1997-08-02  0:00           ` Matthew Heaney
  1997-08-04  0:00           ` W. Wesley Groleau x4923
  1997-08-03  0:00         ` Brian Rogoff
  1 sibling, 2 replies; 105+ messages in thread
From: Robert Dewar @ 1997-08-02  0:00 UTC (permalink / raw)



Wes asks

<<I'd be pleased if I could just get my hands on a proposed alternative
for the "fair bit of embarrassing nonsense" in Ada Quality and Style.

It would not be appropriate for a general-purpose style guide to
mandate "take an introductory course in numerical analysis.">>


Of *course* it is appropriate for AQ&S to mandate this level of
knowledge. I am dubious as to whether it is possible to produce
a set of guidelines for fpt usage in Ada that are useful to
knowledgeable professionals (the audience at which AQ&S is aimed),
but for sure trying to tell people who know nothing about numerical
analysis how to use fpt is a lost cause.

Indeed, simple rules in this area are almost certain to be misleading
and unhelpful, if not, as are several of the rules in AQ&S as it
stands, downright incorrect.

This is simply not an area where a simple set of rules is likely to be
helpful. After all you do not look in AQ&S to learn how to program, it
is expected that anyone reading this guide already knows how to program.

Similarly, any style guidelines on fpt in AQ&S should reasonably be
directed at those who understand floating-point and numerical analysis
issues. For such an audience, the right level of discourse would probably
be a rather deep discussion of how the Ada floating-point model should
be used. But I doubt this audience is likely to look to AQ&S anyway.

You could I suppose ask the numerics working group to redo this section,
but I am dubious about whether it would be useful, and the idea that the
AQ&S rules could substitute for basic knowledge is unreasonable. Of
*course* the AQ&S can say "you need a level of knowledge at least
equivalent to a basic course in numerical analysis" to read this section.

In my view, no one who does NOT have this level of knowledge should ever
use the keyword digits in Ada (unless associated with delta), or use
the built-in Float types!

I know that may read as arrogant or dismissive, but I have seen so much
chaos produced by programmers messing around with floating-point codes who
have not the slightest idea what they are doing. I really think this viewpoint
is no different than observing that people who have never done any
programming should take a programming course before reading AQ&S.






* Re: floating point comparison
  1997-07-29  0:00 floating point comparison Matthew Heaney
  1997-07-30  0:00 ` Robert Dewar
  1997-07-30  0:00 ` Jan Galkowski
@ 1997-08-02  0:00 ` Michael Sierchio
  1997-08-08  0:00 ` floating point conversions Mark Lusti
  3 siblings, 0 replies; 105+ messages in thread
From: Michael Sierchio @ 1997-08-02  0:00 UTC (permalink / raw)



Matthew Heaney wrote:

> My immediate problem is, How do I test 2 floating point numbers for equality?

Never test floating point numbers for equality! :-)

> The AQ&S recommends that
> 
> if abs (X - Y) <= Float_Type'Small

Numerical analysis is a fun discipline!  For all you know, X and Y are
equal, except that you used different methods to derive them, and
they differ in one bit.  Arrgh.  Checking that they are within some
epsilon is proper,  but deciding upon that epsilon may be hard.  

There are multiple sources of error in a number: imprecision, so that
roundoff error accumulates, and the measurement limits (in the case of
a physical process) are big ones.  The old physics/chem method of
retaining a fixed number of significant figures is naive at best.

Interval arithmetic may help -- you may choose to decide that two fp
nums are equal if their intervals overlap.
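
For example (a minimal sketch; the names are hypothetical, and choosing
the bounds is still the application's problem):

   type Interval is record
      Lo, Hi : Long_Float;   -- a value bracketed by its error bounds
   end record;

   --  treat two values as "equal" when their intervals overlap
   function Equal (A, B : Interval) return Boolean is
   begin
      return A.Lo <= B.Hi and then B.Lo <= A.Hi;
   end Equal;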





* Re: floating point comparison
  1997-08-02  0:00         ` Robert Dewar
@ 1997-08-02  0:00           ` Matthew Heaney
  1997-08-03  0:00             ` Robert Dewar
  1997-08-04  0:00           ` W. Wesley Groleau x4923
  1 sibling, 1 reply; 105+ messages in thread
From: Matthew Heaney @ 1997-08-02  0:00 UTC (permalink / raw)



In article <dewar.870525385@merv>, dewar@merv.cs.nyu.edu (Robert Dewar) wrote:


>In my view, no one who does NOT have this level of knowledge should ever
>use the keyword digits in Ada (unless associated with delta), or use
>the built in Float types!

Perhaps so, but if you're writing an application that uses real numbers,
then I assume your admonishment not to use floating point types means "use
fixed point types" or "use floating point types, but take a class in
numerical analysis first."

You must admit that using fixed point types in Ada 83 was sometimes a
pain, compounded by the fact that often only a T'Small that was a power
of 2 was supported.

Consider a simple 16 bit heading type, say, typical of what you'd read off
a 1553B:

   Ownship_Heading_Delta : constant := 360.0 / 2 ** 16;
   type Ownship_Heading is delta Ownship_Heading_Delta range 0.0 .. 360.0;
   for Ownship_Heading'Small use Ownship_Heading_Delta;

How many Ada 83 or Ada 95 compilers support this declaration?  My bet is
few (if any), despite the fact that this declaration isn't unreasonable,
since it is typical of the needs of applications in the domain for which
Ada was designed.

What motivates the use of floating point is that the fixed point
declaration above doesn't compile.  So here's what an Ada programmer does:


We'll declare a fixed point type with a small that's a power of 2:

   Fixed_16_U_16_Delta : constant := 1.0 / 2 ** 16;
   type Fixed_16_U_16 is delta Fixed_16_U_16_Delta range 0.0 .. 1.0;
   for Fixed_16_U_16'Small use Fixed_16_U_16_Delta;

We'll declare a floating point type with the range we require.  Since there
are 16 bits of precision, we need a floating point type with ceiling (16 /
3.32) = 5 digits of precision.

   type Ownship_Heading is digits 5 range 0.0 .. 360.0;

So he reads the data off the interface using a fixed point type, and
converts it to a floating point type:

declare
   Normalized_Heading : Fixed_16_U_16;
   Heading : Ownship_Heading;
begin
   Read (fd, Normalized_Heading);
   Heading := 360.0 * Ownship_Heading (Normalized_Heading);
end;

Now I'm back to square one: how do I compare values of ownship heading?

Yes, people routinely use floating point types - probably inappropriately -
but fixed point type support has traditionally been weak.  Perhaps this
will change in Ada 95.

I'm not unsympathetic to your view that putting advice about floating point
types in the AQ&S may very well create more problems than it solves, but
there need to be some published guidelines about how to use Ada's real
types to solve typical problems, such as reading a real number from an
external device.

--------------------------------------------------------------------
Matthew Heaney
Software Development Consultant
<mailto:matthew_heaney@acm.org>
(818) 985-1271





* Re: floating point comparison
  1997-08-01  0:00       ` user
  1997-08-02  0:00         ` Peter L. Montgomery
@ 1997-08-02  0:00         ` Lynn Killingbeck
  1997-08-03  0:00           ` Robert Dewar
  1997-08-03  0:00           ` Bob Binder  (remove .mapson to email)
  1 sibling, 2 replies; 105+ messages in thread
From: Lynn Killingbeck @ 1997-08-02  0:00 UTC (permalink / raw)



user wrote:
> 
> Bob Binder wrote:
> 
> > I haven't followed the IEEE fp standard, but I can't imagine how
> > (without resorting to some kind of BCD representation) any floating
> > point scheme using a non-decimal radix gets around the fundamental fact
> > > that there are non-terminating fractions in this radix which are not
> > > non-terminating in a decimal radix.
> >
> > Try this:
> >
> > > float x, y;
> > >
> > > x = 1.0;
> > > y = (x / 10.0) * 10.0;
> > > if (x == y) {
> > >         // not likely
> > > } else {
> > >         // more likely
> > > }
> Although I am reading this in comp.lang.ada, the suggestion to try a
> simple example like this in C piqued my interest.
> 
> On a Sun SPARC-5, with the built in compiler supplied by Sun, this
> code (fixed for syntax) found (x == y) to be true.  Using 3.0 and
> 7.0 (neither of which have any chance of yielding an exact
> representation
> after the division) also found (x == y) to be true.
> 
> The underlying machine arithmetic escapes me, but a simple mechanism
> such as retaining more precision in computation results, and rounding
> prior to comparison to some lesser degree of precision that still meets
> the spec, would work.  Cheap calculators do this all the time, using
> something known as "guard digits," which retain precision that is not
> displayed in the (rounded) result.

I did something like this a few years back, to show someone the problem
- and was very surprised when the result was equal, even though
(x/10.0) has no exact representation on a binary-based computer. Then I
looked at the generated code - and the COMPILER had optimised away all
the operations and just set x=y=1.0. There weren't any computations. I
think it was in someone's C (either Borland or Microsoft), but it's too
far back to remember the language/compiler details.

Lynn Killingbeck





* Re: floating point comparison
  1997-07-30  0:00   ` Matthew Heaney
                       ` (3 preceding siblings ...)
  1997-07-31  0:00     ` Robert Dewar
@ 1997-08-02  0:00     ` Lynn Killingbeck
  4 siblings, 0 replies; 105+ messages in thread
From: Lynn Killingbeck @ 1997-08-02  0:00 UTC (permalink / raw)
  To: Matthew Heaney



Matthew Heaney wrote:
> 
> In article <dewar.870304869@merv>, dewar@merv.cs.nyu.edu (Robert Dewar) wrote:
> 
> >As for when it makes sense to compare floating-point values for equality,
> >that is a matter for the programmer to understand. Floating-point arithmetic
> >on modern IEEE machines is not some kind of approximate hocus-pocus, it
> >is a well defined arithmetic system, with well defined, well behaved
> >results, in which equality has a perfectly reasonable meaning.
> 
> OK, so if I do this
> 
> declare
>    m1, m2 : slope;
> begin
>    if m1 = m2 then
> 
> and I'm using IEEE floating point types (I'm on a SUN box), can I assume
> that the lines are parallel?
> 
> --------------------------------------------------------------------
> Matthew Heaney
> Software Development Consultant
> <mailto:matthew_heaney@acm.org>
> (818) 985-1271

The slopes might compare equal when they are, in reality, different.
Just to be fair, the slopes also might compare unequal when they are, in
reality, the same. The following examples show this in decimal; but the
general problem exists in binary and all other bases, too.

Suppose you compute the slope of the line going through the origin (0,0)
and the point (1,2). The slope is simply 2. But, suppose the two points
are the origin (0,0) and the point (1, 2.00000...0001). To machine
accuracy, the second point (given enough implied zeroes) is
indistinguishable from the first. The distinct lines will appear to be
not just parallel, but also coincident.

Now try specifying the first line, above, by two other points, such as
(1/3, 2/3) and (2/3, 4/3). To machine accuracy, these are
(0.333...333,0.666...667) and (0.666...667,1.333...333). Computing the
slope will give (0.666...666/0.333...334) - watch the final digit
carefully - which is slightly less than a slope of 2, and not going
through the origin. A single line, here, appears to be distinct lines.
In general, depending on the exact two points and the effects of
rounding on all the numbers, you can get any results (e.g., the slope is
less than 2, equal to 2, or greater than 2). The slopes are all
approximately equal, but not identical.

I don't know of any "easy answers" to floating point accuracy. I use
tests for exact equality when appropriate - based on understanding of
whatever problem I'm trying to solve, not rote translation of some
equation or procedure from a pure-math (i.e., infinite precision) setting
to a practical program. Exact equality is often not appropriate, at which
point some combination of absolute difference and relative difference is
generally effective - again, with the numeric tolerances based on
understanding of the problem being solved. I've even been able to use
"... even if off by a factor of a few billion, ..." - which is a
humungous percentage error.

Lynn Killingbeck





* Re: floating point comparison
  1997-08-01  0:00       ` W. Wesley Groleau x4923
  1997-08-02  0:00         ` Robert Dewar
@ 1997-08-03  0:00         ` Brian Rogoff
  1997-08-03  0:00           ` Robert Dewar
  1 sibling, 1 reply; 105+ messages in thread
From: Brian Rogoff @ 1997-08-03  0:00 UTC (permalink / raw)



On Fri, 1 Aug 1997, W. Wesley Groleau x4923 wrote:
> I'd be pleased if I could just get my hands on a proposed alternative 
> for the "fair bit of embarrassing nonsense" in Ada Quality and Style.

There are lots of references; numerical analysis is a well established
field, and the subset that you are interested in, error analysis and 
computer arithmetic, is the most basic part. A good place to start if 
you don't know much would be David Goldberg's article "What Every
Computer Scientist Should Know About Floating-Point Arithmetic" in ACM
Computing Surveys, vol. 23, no. 1, March '91. I think that there is also
a new book on computer arithmetic from SIAM Press by N. Higham but that
is probably overkill for your purposes.

> It would not be appropriate for a general-purpose style guide to 
> mandate "take an introductory course in numerical analysis."

But it is appropriate to assume some prerequisites, including some work in 
numerical analysis. Unfortunately, many "computer science" students don't 
like this material and manage to avoid it. 

-- Brian







* Re: floating point comparison
  1997-08-02  0:00         ` Lynn Killingbeck
  1997-08-03  0:00           ` Robert Dewar
@ 1997-08-03  0:00           ` Bob Binder  (remove .mapson to email)
  1997-08-03  0:00             ` Charles R. Lyttle
  1 sibling, 1 reply; 105+ messages in thread
From: Bob Binder  (remove .mapson to email) @ 1997-08-03  0:00 UTC (permalink / raw)



Lynn Killingbeck <killbeck@phoenix.net> wrote in article
<33E3ED97.4A56@phoenix.net>...
> user wrote:
> > 
> > Bob Binder wrote:
> > 
> > > I haven't followed the IEEE fp standard, but I can't imagine how
> > > (without resorting to some kind of BCD representation) any floating
> > > point scheme using a non-decimal radix gets around the fundamental fact
> > > that there are non-terminating fractions in this radix which are not
> > > non-terminating in a decimal radix.
> > >
> > > Try this:
> > >
> > > float x, y;
> > >
> > > x = 1.0;
> > > y = (x / 10.0) * 10.0;
> > > if (x == y) {
> > >         // not likely
> > > } else {
> > >         // more likely
> > > }
> > Although I am reading this in comp.lang.ada, the suggestion to try a
> > simple example like this in C piqued my interest.
> > 
> > On a Sun SPARC-5, with the built in compiler supplied by Sun, this
> > code (fixed for syntax) found (x == y) to be true.  Using 3.0 and
> > 7.0 (neither of which have any chance of yielding an exact
> > representation
> > after the division) also found (x == y) to be true.
> > 
> > The underlying machine arithmetic escapes me, but a simple mechanism
> > such as retaining more precision in computation results, and rounding
> > prior to comparison to some lesser degree of precision that still meets
> > the spec, would work.  Cheap calculators do this all the time, using
> > something known as "guard digits," which retain precision that is not
> > displayed in the (rounded) result.
> 
> I did something like this a few years back, to show someone the problem
> - and was very surprised when the result was equal, even though
> (x/10.0) has no exact representation on a binary-based computer. Then I
> looked at the generated code - and the COMPILER had optimised away all
> the operations and just set x=y=1.0. There weren't any computations. I
> think it was in someone's C (either Borland or Microsoft), but it's too
> far back to remember the language/compiler details.
> 
> Lynn Killingbeck
> 


Interesting -- things have improved some from the stone age tools I used.
I guess it's a good thing that the simpler pitfalls are prevented -- 
but you'd have to look at the generated code to see just what is 
done by a particular compiler.  The mismatch still lurks.  The only 
way to deal with it is to apply known good numerical design techniques
with a clear understanding of the quirks and limitations of the
particular compiler.  Ironically, as compilers become better, these
quirks become more obscure, and can promote overconfidence.


-- 
______________________________________________________________________________ 
Bob Binder                http://www.rbsc.com          RBSC Corporation
312 214-3280  tel         Software Engineering         3 First National Plaza 
312 214-3110  fax         Process Improvement          Suite 1400 
rbinder@rbsc.mapson.com   (remove .mapson to mail)     Chicago, IL 60602-4205








* Re: floating point comparison
  1997-08-03  0:00         ` Brian Rogoff
@ 1997-08-03  0:00           ` Robert Dewar
  0 siblings, 0 replies; 105+ messages in thread
From: Robert Dewar @ 1997-08-03  0:00 UTC (permalink / raw)



Brian says

<<But it is appropriate to assume some prerequisites, including some work in
numerical analysis. Unfortunately, many "computer science" students don't
like this material and manage to avoid it.
>>

NYU was one of the last schools to require all CS students to take a
numerical analysis course for a CS degree. But there were huge complaints
over a period of many years, and students indeed hated the course. So now
we are like other schools; most of our students graduate knowing nothing
about floating-point. 

I did in my Ada course try to at least make sure they were in the class
of "know not and know that they know not", but how successful I was
I don't know. It is certainly a common experience that people with no
background here keep asking for simple rules to make sure they use
fpt correctly -- sorry there aren't any!





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-02  0:00         ` Lynn Killingbeck
@ 1997-08-03  0:00           ` Robert Dewar
  1997-08-03  0:00           ` Bob Binder  (remove .mapson to email)
  1 sibling, 0 replies; 105+ messages in thread
From: Robert Dewar @ 1997-08-03  0:00 UTC (permalink / raw)



Lynn says

<<I did something like this a few years back, to show someone the problem
- and was very surprised when the result was equal, even though
(x/10.0) has no exact representation on a binary-based computer. Then I
looked at the generated code - and the COMPILER had optimised away all
the operations and just set x=y=1.0. There weren't any computations. I
think it was in someone's C (either Borland or Microsoft), but it's too
far back to remember the language/compiler details.
>>


Note that such an optimization MUST give EXACTLY the same result as
would be obtained at run-time, or the compiler is violating a specific
requirement of the IEEE standard.

However, in Ada, watch out! The rules for static expressions conflict
with this requirement. So be careful using constants in Ada; you may
be requiring over-precise evaluation at compile time.

But if only variables are involved, an optimizer is being naughty if
it violates these rules. Similar optimizations that are verboten are:

    x := y / constant1;   =>   x := y * (1.0 / constant1);

Actually this is an interesting case: the Ada RM allows this transformation
if the model interval of the result is the same, but the IEEE standard
only permits the optimization if the results are bit for bit identical.

GNAT follows the IEEE standard here, or at least intends to; if you find
a counterexample, it is a bug, please submit it (we know of no such bugs,
and indeed the gcc backend goes out of its way to avoid incorrect
optimizations of this type)
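
To make the static-expression point concrete, here is a minimal sketch
(the value 49.0 is borrowed from elsewhere in this thread; whether the
two results actually differ depends on the floating-point type and the
target):

   with Ada.Text_IO;
   procedure Static_Demo is
      --  Static expression: the Ada rules require exact evaluation at
      --  compile time, so S is exactly 1.0.
      S : constant Float := (1.0 / 49.0) * 49.0;
      X : Float := 49.0;   --  a variable forces run-time evaluation
      R : Float;
   begin
      R := (1.0 / X) * X;  --  one rounding per operation at run time
      Ada.Text_IO.Put_Line (Boolean'Image (S = R));
   end Static_Demo;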

Robert Dewar
Ada Core Technologies





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-02  0:00           ` Matthew Heaney
@ 1997-08-03  0:00             ` Robert Dewar
  0 siblings, 0 replies; 105+ messages in thread
From: Robert Dewar @ 1997-08-03  0:00 UTC (permalink / raw)



Matthew said:

  Perhaps so, but if you're writing an application that uses real numbers,
  then I assume your admonishment not to use floating point types means "use
  fixed point types" or "use floating point types, but take a class in
  numerical analysis first."
  
  You must admit that using fixed point types in Ada 83 was sometimes a
  pain, compounded by the fact that often only a T'Small that was a power of 2
  was supported.
  
  Consider a simple 16 bit heading type, say, typical of what you'd read off
  a 1553B:
  
     Ownship_Heading_Delta : constant := 360.0 / 2 ** 16;
     type Ownship_Heading is delta Ownship_Heading_Delta range 0.0 .. 360.0;
     for Ownship_Heading'Small use Ownship_Heading_Delta;
  
  How many Ada 83 or Ada 95 compilers support this declaration?  My bet is
  few (if any), despite the fact that this declaration isn't unreasonable,
  since it is typical of the needs of applications in the domain for which
  Ada was designed.
  
  What motivates the use of floating point is that the fixed point
  declaration above doesn't compile.  So here's what an Ada programmer does:

  We'll declare a fixed point type with a small that's a power of 2:
  
     Fixed_16_U_16_Delta : constant := 1.0 / 2 ** 16;
     type Fixed_16_U_16 is delta Fixed_16_U_16_Delta range 0.0 .. 1.0;
     for Fixed_16_U_16'Small use Fixed_16_U_16_Delta;
  
  We'll declare a floating point type with the range we require.  Since there
  are 16 bits of precision, we need a floating point type with ceiling (16 /
  3.32) = 5 digits of precision.
  
     type Ownship_Heading is digits 5 range 0.0 .. 360.0;
  
  So he reads the data off the interface using a fixed point type, and
  converts it to a floating point type:
  
  declare
     Normalized_Heading : Fixed_16_U_16;
     Heading : Ownship_Heading;
  begin
     Read (fd, Normalized_Heading);
     Heading := 360.0 * Ownship_Heading (Normalized_Heading);
  end;
  
  Now I'm back to square one: how do I compare values of ownship heading?
  
  Yes, people routinely use floating point types - probably inappropriately -
  but fixed point type support has traditionally been weak.  Perhaps this
  will change in Ada 95.

Robert replies

  I will say it again, to use floating-point without knowing anything about it
  makes as much sense as trying to write an Ada program when you know nothing
  about programming. That goes for all human endeavors, if you want to do
  something right, you need to know what you are doing. You are giving the
  impression that you really know *nothing* about fpt, which is fine, but
  then what makes you think you can possibly learn a complex subject by
  virtue of a few simple rules?
  
  Anyone who understands floating-point will be able to deal with scenarios
  such as you mention without any special knowledge of Ada. These are fpt
  problems, not Ada problems!
  
  As to your question, how do you compare values of ownship heading? Impossible
  of course to answer without specs as to the accuracy of the input, and the
  purpose of the comparison.
  
  By the way
  
     Ownship_Heading_Delta : constant := 360.0 / 2 ** 16;
     type Ownship_Heading is delta Ownship_Heading_Delta range 0.0 .. 360.0;
     for Ownship_Heading'Small use Ownship_Heading_Delta;
  
  of course compiles fine in GNAT; what might possibly give you the impression
  that it would not? GNAT supports arbitrary smalls, with a precision of up
  to 64 bits. Remember GNAT is a full language compiler, not a subset!





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-03  0:00           ` Bob Binder  (remove .mapson to email)
@ 1997-08-03  0:00             ` Charles R. Lyttle
  0 siblings, 0 replies; 105+ messages in thread
From: Charles R. Lyttle @ 1997-08-03  0:00 UTC (permalink / raw)



Bob Binder  wrote:
> 
> Lynn Killingbeck <killbeck@phoenix.net> wrote in article
> <33E3ED97.4A56@phoenix.net>...
> > user wrote:
> > >
> > > Bob Binder wrote:
> > >
> > > > I haven't followed the IEEE fp standard, but I can't imagine how
> > > > (without resorting to some kind of BCD representation) any floating
> > > > point scheme using a non-decimal radix gets around the fundamental fact
> > > > that there are non-terminal fractions in this radix which are not
> > > > non-terminal in a decimal radix.
> > > >
> > > > Try this:
> > > >
> > > > x, y float;
> > > >
> > > > x = 1.0;
> > > > y = (x/10.0)*10.0;
> > > > if x == y {
> > > >         // not likely }
> > > > else {
> > > >         // more likely
> > > > };
> > > Although I am reading this in comp.lang.ada, the suggestion to try a
> > > simple example like this in C piqued my interest.
> > >
> > > On a Sun SPARC-5, with the built in compiler supplied by Sun, this
> > > code (fixed for syntax) found (x == y) to be true.  Using 3.0 and
> > > 7.0 (neither of which have any chance of yielding an exact
> > > representation
> > > after the division) also found (x == y) to be true.
> > >
> > > The underlying machine arithmetic escapes me, but a simple mechanism
> > > such as retaining more precision in computation results, and rounding
> > > prior to comparison to some lesser degree of precision that still meets
> > > the spec, would work.  Cheap calculators do this all the time, using
> > > something known as "guard digits," which retain precision that is not
> > > displayed in the (rounded) result.
> >
> > I did something like this a few years back, to show someone the problem
> > - and was very surprised when the result was equal, even though
> > (x/10.0) has no exact representation on a binary-based computer. Then I
> > looked at the generated code - and the COMPILER had optimised away all
> > the operations and just set x=y=1.0. There weren't any computations. I
> > think it was in someone's C (either Borland or Microsoft), but it's too
> > far back to remember the language/compiler details.
> >
> > Lynn Killingbeck
> >
> 
> Interesting -- things have improved some from the stone age tools I used.
> I guess it's a good thing that the simpler pitfalls are prevented --
> but you'd have to look at the generated code to see just what is
> done by a particular compiler.  The mismatch still lurks.  The only
> way to deal with it is to apply known good numerical design technique
> with a clear understanding of the quirks and limitations of a
> particular compiler.  Ironically, as compilers become better, these
> quirks become more obscure, and can promote overconfidence.
> 
> --
> 
The moral is don't rely on the compiler, but on proper program design.
As soon as you learn the quirks of one compiler, a new release will be
adopted that has a new set of quirks. Ada has all the tools to do it
right. Anyone for writing "Numerical Recipes in Ada"? A good on-line GPL
package would be very valuable.




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-07-31  0:00       ` Robert Dewar
  1997-08-01  0:00         ` Dale Stanbrough
@ 1997-08-04  0:00         ` Paul Eggert
  1997-08-06  0:00           ` Robert Dewar
  1 sibling, 1 reply; 105+ messages in thread
From: Paul Eggert @ 1997-08-04  0:00 UTC (permalink / raw)



dewar@merv.cs.nyu.edu (Robert Dewar) writes:

> IEEE floating-point standard
> is a well defined arithmetic system, with a well defined set of values
> with well defined operations on them.

True, but the IEEE standard is only well-defined at the machine level,
and even then some options are left to the implementer (e.g. whether
to round before or after tininess-checking during underflow detection).

The hairiest part about relying on the IEEE standard is that there are
no well-defined methods for mapping from the source language to the
machine level.  For example, can an implementer use an extra-long
80-bit representation instead of a 64-bit representation to implement
double precision floating point?  If you expect your answers to agree
to the bit level, the answer is NO; but if you expect your code to run
fast on the x86 architecture, you'd better relax your requirements and
allow a bit of fuzz in your implementation.

In other words, in practice, just because the hardware conforms to IEEE
doesn't mean that your program will give the exact same answer that it
did on some other IEEE-conforming platform.
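
A sketch of the effect (not portable code -- the outcome depends on the
compiler and on the FPU mode, and the literal is chosen only to straddle
the 64-bit overflow threshold):

   with Ada.Text_IO;
   procedure Extended_Demo is
      X : Long_Float := 1.0e308;
      Y : Long_Float;
   begin
      Y := (X * X) / X;
      --  With 80-bit extended intermediates, X * X is representable and
      --  the division recovers 1.0e308.  With strict 64-bit evaluation,
      --  X * X overflows first, and Y is an infinity (or Constraint_Error
      --  is raised, depending on the implementation).
      Ada.Text_IO.Put_Line (Boolean'Image (Y = X));
   end Extended_Demo;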




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-02  0:00         ` Peter L. Montgomery
@ 1997-08-04  0:00           ` W. Wesley Groleau x4923
  1997-08-05  0:00             ` Bob Binder  (remove .mapson to email)
  0 siblings, 1 reply; 105+ messages in thread
From: W. Wesley Groleau x4923 @ 1997-08-04  0:00 UTC (permalink / raw)



> >> x, y float;
> 
> >> x = 1.0;
> >> y = (x/10.0)*10.0;
> >> if x == y {
> >>         // not likely }
> >> else {
> >>         // more likely
> >> };
> >Although I am reading this in comp.lang.ada, the suggestion to try a
> >simple example like this in C piqued my interest.
>      You lucked out due to IEEE rounding to nearest.
   [snip]
>     On the other hand, try (1.0/49.0)*49.0.

49 "works" in GNAT.  The following gets eight 'no' and 92 'yes' :

with Ada.Text_IO;
procedure FP_Test is
   ONE : constant Float := 1.0;
begin
   for I in 1 .. 100 loop
      if ONE = ( ONE / Float(I) ) * Float(I) then
        Ada.Text_IO.Put_Line ( Integer'Image(I) & ": YES !!" );
      else
        Ada.Text_IO.Put_Line ( Integer'Image(I) & ": no...." );
      end if;
   end loop;
end FP_Test;

-- 
----------------------------------------------------------------------
    Wes Groleau, Hughes Defense Communications, Fort Wayne, IN USA
Senior Software Engineer - AFATDS                  Tool-smith Wanna-be

Don't send advertisements to this domain unless asked!  All disk space
on fw.hac.com hosts belongs to either Hughes Defense Communications or 
the United States government.  Using email to store YOUR advertising 
on them is trespassing!
----------------------------------------------------------------------




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-02  0:00         ` Robert Dewar
  1997-08-02  0:00           ` Matthew Heaney
@ 1997-08-04  0:00           ` W. Wesley Groleau x4923
  1997-08-05  0:00             ` Jan-Christoph Puchta
                               ` (3 more replies)
  1 sibling, 4 replies; 105+ messages in thread
From: W. Wesley Groleau x4923 @ 1997-08-04  0:00 UTC (permalink / raw)



> It would not be appropriate for an general-purpose style guide to
> mandate "take an introductory course in numerical analysis.">>
> 
> Of *course* it is appropriate for AQ&S to mandate this level of
> knowledge. I am dubious as to whether it is possible to produce
> a set of guidelines for fpt usage in Ada that are useful to
> knowlegdable professionals (the audience at which AQ&S is aimed),
> but for sure trying to tell people who know nothing about numerical
> analysis how to use fpt is a lost cause.

How about a compromise?  Something like

  * Use numerical analysis to determine appropriate algorithms when
    using floating point in <list of application areas or types of
    problems>

I understand floating point well enough to compute with pencil &
paper the correct bit pattern for Pi (3.14159.....) in a VAX 
D_FLOAT, but I have never seen IEEE 754.  Yet I almost always 
treat floats "as if" they were "true" real numbers--and I have 
NEVER had a bug traced to this "mistake".

I've worked with a lot of decent programmers whose degrees are in
English, History, EE, ME, .... and most of them could say the same.
I KNOW some of my colleagues have no clue how floating point numbers
are represented, yet still manage to produce working code.

-- 
----------------------------------------------------------------------
    Wes Groleau, Hughes Defense Communications, Fort Wayne, IN USA
Senior Software Engineer - AFATDS                  Tool-smith Wanna-be
                    wwgrol AT pseserv3.fw.hac.com

Don't send advertisements to this domain unless asked!  All disk space
on fw.hac.com hosts belongs to either Hughes Defense Communications or 
the United States government.  Using email to store YOUR advertising 
on them is trespassing!
----------------------------------------------------------------------




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-04  0:00           ` W. Wesley Groleau x4923
@ 1997-08-05  0:00             ` Jan-Christoph Puchta
  1997-08-05  0:00               ` W. Wesley Groleau x4923
  1997-08-06  0:00             ` Robert Dewar
                               ` (2 subsequent siblings)
  3 siblings, 1 reply; 105+ messages in thread
From: Jan-Christoph Puchta @ 1997-08-05  0:00 UTC (permalink / raw)
  To: W. Wesley Groleau x4923


W. Wesley Groleau x4923 wrote:

> I understand floating point well enough to compute with pencil &
> paper the correct bit pattern for Pi (3.14159.....) in a VAX
> D_FLOAT, but I have never seen IEEE 754.  Yet I almost always
> treat floats "as if" they were "true" real numbers--and I have
> NEVER had a bug traced to this "mistake".
> 
> I've worked with a lot of decent programmers whose degrees are in
> English, History, EE, ME, .... and most of them could say the same.
> I KNOW some of my colleagues have no clue how floating point numbers
> are represented, yet still manage to produce working code.

And that's the problem. Programmers are satisfied if they can "produce
working code", i.e. the program does what they expect it to do. But if you
don't care about numerical problems, the program will give you a nice
result, which might be wrong. Just take SAS as an example of widely used
commercial software that gets things wrong.

JCP




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-04  0:00           ` W. Wesley Groleau x4923
@ 1997-08-05  0:00             ` Bob Binder  (remove .mapson to email)
  0 siblings, 0 replies; 105+ messages in thread
From: Bob Binder  (remove .mapson to email) @ 1997-08-05  0:00 UTC (permalink / raw)





W. Wesley Groleau x4923 <wwgrol@pseserv3.fw.hac.com> wrote in article
<33E611E4.2E1A@pseserv3.fw.hac.com>...
> > >> x, y float;
> > 
> > >> x = 1.0;
> > >> y = (x/10.0)*10.0;
> > >> if x == y {
> > >>         // not likely }
> > >> else {
> > >>         // more likely
> > >> };
> > >Although I am reading this in comp.lang.ada, the suggestion to try a
> > >simple example like this in C piqued my interest.
> >      You lucked out due to IEEE rounding to nearest.
>    [snip]
> >     On the other hand, try (1.0/49.0)*49.0.
> 
> 49 "works" in GNAT.  The following gets eight 'no' and 92 'yes' :
> 
> with Ada.Text_IO;
> procedure FP_Test is
>    ONE : constant Float := 1.0;
> begin
>    for I in 1 .. 100 loop
>       if ONE = ( ONE / Float(I) ) * Float(I) then
>         Ada.Text_IO.Put_Line ( Integer'Image(I) & ": YES !!" );
>       else
>         Ada.Text_IO.Put_Line ( Integer'Image(I) & ": no...." );
>       end if;
>    end loop;
> end FP_Test;
> 


Way to go Wes!  



______________________________________________________________________________ 
Bob Binder                http://www.rbsc.com          RBSC Corporation
312 214-3280  tel         Software Engineering         3 First National Plaza 
312 214-3110  fax         Process Improvement          Suite 1400 
rbinder@rbsc.mapson.com   (remove .mapson to mail)     Chicago, IL 60602-4205





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-05  0:00             ` Jan-Christoph Puchta
@ 1997-08-05  0:00               ` W. Wesley Groleau x4923
  1997-08-05  0:00                 ` Samuel Mize
                                   ` (2 more replies)
  0 siblings, 3 replies; 105+ messages in thread
From: W. Wesley Groleau x4923 @ 1997-08-05  0:00 UTC (permalink / raw)



> > I've worked with a lot of decent programmers whose degrees are in
> > English, History, EE, ME, .... and most of them could say the same.
> > I KNOW some of my colleagues have no clue how floating point numbers
> > are represented, yet still manage to produce working code.
> 
> And that's the problem. Programmers are satisfied if they can "produce
> working code", i.e. the program does what they expect it to do. But if 
> you don't care about numerical problems, the program will give you a 
> nice result, which might be wrong. Just take SAS as an example of
> widely used commercial software that gets things wrong.
 
"what I expect it to do" is meet its requirements.  And the requirements
rarely tolerate wrong answers.  So when I say "working code" I mean code
that gives right answers.  To put it another way, we did have a few
people on the project with degrees in mathematics, and I don't recall 
ever hearing one say, "That won't work because xxxx" where xxxx was 
something to do with floating point.
 
What I'm trying to say is that it should be possible to come up with a 
few simple guidelines that cover most cases, and a guideline for 
identifying the cases that require a mathematician--excuse me, a 
numerical analyst.

-- 
----------------------------------------------------------------------
    Wes Groleau, Hughes Defense Communications, Fort Wayne, IN USA
Senior Software Engineer - AFATDS                  Tool-smith Wanna-be

Don't send advertisements to this domain unless asked!  All disk space
on fw.hac.com hosts belongs to either Hughes Defense Communications or 
the United States government.  Using email to store YOUR advertising 
on them is trespassing!
----------------------------------------------------------------------




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-05  0:00               ` W. Wesley Groleau x4923
@ 1997-08-05  0:00                 ` Samuel Mize
  1997-08-06  0:00                 ` Robert Dewar
  1997-08-06  0:00                 ` Chris L. Kuszmaul
  2 siblings, 0 replies; 105+ messages in thread
From: Samuel Mize @ 1997-08-05  0:00 UTC (permalink / raw)



W. Wesley Groleau x4923 wrote:
> What I'm trying to say is that it should be possible to come up with a
> few simple guidelines that cover most cases, and a guideline for
> identifying the cases that require a mathematician--excuse me, a
> numerical analyst.

Well, I'll take a WAG at it.  Let's see how quickly it gets shredded.
It's based on fairly cold memory, and it may miss important points.

***** DRAFT -- POSTED FOR COMMENT ONLY -- DO NOT USE *****

This estimation tells you if you are very safe using floating point.
If not, you need to use numerical analysis to determine the actual
characteristics of your data.

First, let's distinguish between accuracy and precision.  Accuracy
defines how close to the right number you are; precision defines how
finely you distinguish between numbers.  For instance, if we estimate
pi as 854.83049859234875034629348, this is very precise, but wildly
inaccurate.  You cannot be more accurate than you are precise.

Typically you are computing a result from some inputs.  First you
must determine how accurate your inputs are.  Express this as the
largest possible error for each input (its "error margin").

Each computation reduces the accuracy of its result.  If your
result's error margin is well below the accuracy you need for
your answer, you can use floating point.

ADDITION/SUBTRACTION: The error margin of the result is the largest
of the two input error margins.  If the smaller input error is
above 1% of the larger, assume that the larger error margin doubles.

MULTIPLICATION/DIVISION: The result error margin is the sum of the
two input error margins.

Thus, given the accuracies shown in the declarations below:
  declare
    A: Float; -- error is 0.01
    B: Float; -- error is 0.001
    C: Float; -- error is 0.000001
    X, Y, Z: Float;
  begin
    X := A + B; -- error is 0.02 (doubled: smaller margin above 1% of larger)
    Y := X + A; -- error is 0.04
    Z := Y + C; -- error stays 0.04, C does almost nothing to result

    X := Z * B; -- error is 0.041
    Y := X * A; -- error is 0.051

SPECIAL WARNINGS: Don't depend on results from a computation where
one input value is about as big, or smaller than, the other input's
error margin.  Repeated computations that don't appear to affect the
error margin eventually will.  Assume that a loop will execute the
greatest possible number of times.

If you need to compare two numbers and see if they are "about" the
same, test whether their difference is less than the sum of their
error margins.  (If this doesn't give you close enough results,
your inputs aren't accurate enough to support that calculation.)
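
In Ada this test might read as follows (a sketch; the names are invented
for the example):

   --  "About equal" given absolute error margins for each input.
   function About_Equal (X, Y : Float;
                         X_Error, Y_Error : Float) return Boolean is
   begin
      return abs (X - Y) <= X_Error + Y_Error;
   end About_Equal;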

If you can't define an absolute error bound for an input, this
estimation can't help you.  You need to read up on numerical
analysis.  For instance, if your input is correct to five digits,
but it ranges from 0.0001 to 1000000, this estimation has to assume
that the input's error margin is 10.0 (the error of the largest
value). This is not likely to be helpful if it is meaningful to
measure down to 0.0001 with five-digit precision.

Sam Mize






^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-05  0:00               ` W. Wesley Groleau x4923
  1997-08-05  0:00                 ` Samuel Mize
  1997-08-06  0:00                 ` Robert Dewar
@ 1997-08-06  0:00                 ` Chris L. Kuszmaul
  1997-08-07  0:00                   ` Dave Sparks
       [not found]                   ` <5sbb90$qsc@redtail.cruzio.com>
  2 siblings, 2 replies; 105+ messages in thread
From: Chris L. Kuszmaul @ 1997-08-06  0:00 UTC (permalink / raw)



In article <33E74A62.53E2@pseserv3.fw.hac.com> "W. Wesley Groleau x4923" <wwgrol@pseserv3.fw.hac.com> writes:
> 
>What I'm trying to say is that it should be possible to come up with a 
>few simple guidelines that cover most cases, and a guideline for 
>identifying the cases that require a mathematician--excuse me, a 
>numerical analyst.
>


  Fair enough. There are two basic guidelines I follow:

1: Do not test for equality between floating point values. If you do, then
you better get a numerical analyst, or figure out how to avoid using floating
point, or see if you can use a 'close to equal' check.

2: Watch out when you sum or difference floating point numbers of similar
value. You can introduce arbitrarily large errors relative to the resulting
value. ((a-b) - a) + b can turn up as equal to b (instead of the exact
answer, 0) if a is so much larger than b that subtracting b has no effect.

Also

3: If you just do multiplication and division, then it is hard to mess up.


CLK





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-05  0:00               ` W. Wesley Groleau x4923
  1997-08-05  0:00                 ` Samuel Mize
@ 1997-08-06  0:00                 ` Robert Dewar
  1997-08-07  0:00                   ` Shmuel (Seymour J.) Metz
  1997-08-06  0:00                 ` Chris L. Kuszmaul
  2 siblings, 1 reply; 105+ messages in thread
From: Robert Dewar @ 1997-08-06  0:00 UTC (permalink / raw)



Wes says

<<"what I expect it to do" is meet its requirements.  And the requirements
rarely tolerate wrong answers.  So when I say "working code" I mean code
that gives right answers.  To put it another way, we did have a few
people on the project with degrees in mathematics, and I don't recall
ever hearing one say, "That won't work because xxxx" where xxxx was
something to do with floating point.

What I'm trying to say is that it should be possible to come up with a
few simple guidelines that cover most cases, and a guideline for
identifying the cases that require a mathematician--excuse me, a
numerical analyst.
>>


The idea that having a math degree qualifies someone as a numerical
analyst, let alone a competent one, is about on the same level as
assuming anyone with a degree in computer science is an expert in
logic programming systems (or choose any other specialty here).

And no, sorry, it is NOT possible to come up with a few simple
guidelines. Ask anyone who *does* know about numerical analysis
and they will likely agree.

There simply is no substitute for knowing what you are doing when
it comes to floating-point computation.





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-04  0:00           ` W. Wesley Groleau x4923
  1997-08-05  0:00             ` Jan-Christoph Puchta
@ 1997-08-06  0:00             ` Robert Dewar
  1997-08-07  0:00               ` Dr. Rex A. Dwyer
       [not found]               ` <33E8DFF6.6F44@pseserv3.fw.hac.com>
  1997-08-06  0:00             ` Robert Dewar
  1997-08-07  0:00             ` Do-While Jones
  3 siblings, 2 replies; 105+ messages in thread
From: Robert Dewar @ 1997-08-06  0:00 UTC (permalink / raw)



Wes says

<<I understand floating point well enough to compute with pencil &
paper the correct bit pattern for Pi (3.14159.....) in a VAX
D_FLOAT, but I have never seen IEEE 754.  Yet I almost always
treat floats "as if" they were "true" real numbers--and I have
NEVER had a bug traced to this "mistake".

I've worked with a lot of decent programmers whose degrees are in
English, History, EE, ME, .... and most of them could say the same.
I KNOW some of my colleagues have no clue how floating point numbers
are represented, yet still manage to produce working code.
>>


But how would you know it was working as expected? If you have not
done the numerical analysis required to ensure that the accuracy of
the results meets the specification, you are basically in the
hack-away-and-don't-worry-about-whether-it-works-if-it-looks-more-
or-less-reasonable-it-must-be-right mode.

Sure people write code in this mode all the time, and some of it works
by accident. But the cases in which people taking this kind of attitude 
to floating-point have created catastrophes are many!





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-04  0:00           ` W. Wesley Groleau x4923
  1997-08-05  0:00             ` Jan-Christoph Puchta
  1997-08-06  0:00             ` Robert Dewar
@ 1997-08-06  0:00             ` Robert Dewar
       [not found]               ` <33E8E3E1.17EA@pseserv3.fw.hac.com>
  1997-08-07  0:00             ` Do-While Jones
  3 siblings, 1 reply; 105+ messages in thread
From: Robert Dewar @ 1997-08-06  0:00 UTC (permalink / raw)



Wes said:

<<I understand floating point well enough to compute with pencil &
paper the correct bit pattern for Pi (3.14159.....) in a VAX
D_FLOAT, but I have never seen IEEE 754.  Yet I almost always>>

Hmmm! I hardly call a few digits of a decimal approximation a "bit
pattern". In fact computing the properly rounded binary approximation
to pi in VAX D_FLOAT is not trivial at all, but we can clearly see
the thinking here.

Look my number is about pi, since the decimal conversion looks right,
therefore the bit pattern must be correct.

Nope! Sorry, this simply is not a correct conclusion!





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-04  0:00         ` Paul Eggert
@ 1997-08-06  0:00           ` Robert Dewar
  1997-08-14  0:00             ` Paul Eggert
  0 siblings, 1 reply; 105+ messages in thread
From: Robert Dewar @ 1997-08-06  0:00 UTC (permalink / raw)



<<The hairiest part about relying on the IEEE standard is that there are
no well-defined methods for mapping from the source language to the
machine level.  For example, can an implementer use an extra-long
80-bit representation instead of a 64-bit representation to implement
double precision floating point?  If you expect your answers to agree
to the bit level, the answer is NO; but if you expect your code to run
fast on the x86 architecture, you'd better relax your requirements and
allow a bit of fuzz in your implementation.
>>

That is a fair comment (in fact I have a PhD student, Sam Figueroa,
whose thesis precisely addresses the issue of how IEEE 754/854 should
be mapped into high level languages). Note also that this is the issue
that LCAS, recently approved despite the US negative vote, addresses
(or is it LIAS these days -- Language Independent Arithmetic Standard)

But things are not as bad as you suggest in practice, and in particular
your comment on efficiency on the x86 is technically wrong. You can place
the x86 floating-point into 64 bit mode, so that there is no efficiency
penalty, and indeed at least some C compilers on the x86 do this.





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-04  0:00           ` W. Wesley Groleau x4923
                               ` (2 preceding siblings ...)
  1997-08-06  0:00             ` Robert Dewar
@ 1997-08-07  0:00             ` Do-While Jones
  3 siblings, 0 replies; 105+ messages in thread
From: Do-While Jones @ 1997-08-07  0:00 UTC (permalink / raw)



In article <33E61497.33E2@pseserv3.fw.hac.com>,
W. Wesley Groleau x4923 <wwgrol@pseserv3.fw.hac.com> wrote:

[snip]

>I understand floating point well enough to compute with pencil &
>paper the correct bit pattern for Pi (3.14159.....) in a VAX 
>D_FLOAT, but I have never seen IEEE 754.  Yet I almost always 
>treat floats "as if" they were "true" real numbers--and I have 
>NEVER had a bug traced to this "mistake".
>
>I've worked with a lot of decent programmers whose degrees are in
>English, History, EE, ME, .... and most of them could say the same.
>I KNOW some of my colleagues have no clue how floating point numbers
>are represented, yet still manage to produce working code.
>

My degree is in EE, and my floating point code works because an engineer
knows about uncertainty.  When the sensor measures an angle to be 45.87
degrees, it might really be 45.87223 degrees.  If an algorithm will work
when the angle is 45.87 degrees, but not 45.87223 degrees, then the
algorithm is worthless.  Since the limitation of the floating-point
representation is usually negligible compared to the measurement
inaccuracy, we neglect floating-point representation errors.  The
protection we build in for measurement uncertainty also takes care of the
floating-point representation resolution limitations.
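
For instance (a sketch -- the tolerance value is invented, and would come
from the sensor specification):

   Sensor_Tolerance : constant := 0.01;  --  degrees; hypothetical spec value

   --  Measurement uncertainty dominates representation error, so the
   --  comparison tolerance comes from the sensor spec, not from 'Epsilon.
   function Same_Angle (A, B : Float) return Boolean is
   begin
      return abs (A - B) <= Sensor_Tolerance;
   end Same_Angle;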

Do-While Jones

            +--------------------------------+
            |    Know                 Ada    |
            |        [Ada's Portrait]        |
            |    Will              Travel    |
            | wire do_while@ridgecrest.ca.us |
            +--------------------------------+
    





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-06  0:00             ` Robert Dewar
@ 1997-08-07  0:00               ` Dr. Rex A. Dwyer
       [not found]               ` <33E8DFF6.6F44@pseserv3.fw.hac.com>
  1 sibling, 0 replies; 105+ messages in thread
From: Dr. Rex A. Dwyer @ 1997-08-07  0:00 UTC (permalink / raw)



> Wes says
> 
> I KNOW some of my colleagues have no clue how floating point numbers
> are represented, yet still manage to produce working code.
> >>


This reminds me of a possibly apocryphal story I heard about a
programmer assigned to maintain numerical libraries at a well-known but
unnamed scientific installation.  He noticed that the square-root
routine happily processed negative arguments, so he changed it to
raise an exception.  He was then deluged with angry calls from
colleagues whose "working code" relied on this "feature".



*******************************************************************************
Rex A. Dwyer
Associate Professor and Director of Graduate Programs
Computer Science Department
North Carolina State University
Raleigh, NC 27695-8206
Telephone: 919-515-7028  Fax: 919-515-7896
E-mail:  dwyer@csc.ncsu.edu
*******************************************************************************




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-06  0:00                 ` Chris L. Kuszmaul
@ 1997-08-07  0:00                   ` Dave Sparks
  1997-08-08  0:00                     ` Robert Dewar
                                       ` (2 more replies)
       [not found]                   ` <5sbb90$qsc@redtail.cruzio.com>
  1 sibling, 3 replies; 105+ messages in thread
From: Dave Sparks @ 1997-08-07  0:00 UTC (permalink / raw)



>>>>> "CLK" == Chris L Kuszmaul <fyodor@sally.nas.nasa.gov> writes:

  CLK> In article <33E74A62.53E2@pseserv3.fw.hac.com> "W. Wesley Groleau
  CLK> x4923" <wwgrol@pseserv3.fw.hac.com> writes:
  >>  What I'm trying to say is that it should be possible to come up with
  >> a few simple guidelines that cover most cases, and a guideline for
  >> identifying the cases that require a mathematician--excuse me, a
  >> numerical analyst.
  >> 


  CLK>   Fair enough. There are two basic guidelines I follow:

  CLK> ...

  CLK> Also

  CLK> 3: If you just do multiplication and division, then it is hard to mess
  CLK>    up.

But I've seen a program where one of the inputs was a temperature
in degrees Centigrade, in the range 10 to 50 with no fractional
part.  The calculated results were displayed to six decimal places -
very misleading.  The calculation was carried out accurately, but the
results were no more accurate than the inputs.

It's wrong to assume that simple numerical analysis is so easy that
anyone can do it.

-- 
Dave Sparks, Staffordshire, England




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
       [not found]               ` <33E8DFF6.6F44@pseserv3.fw.hac.com>
@ 1997-08-07  0:00                 ` Robert Dewar
       [not found]                 ` <33EA1251.3466@link.com>
  1 sibling, 0 replies; 105+ messages in thread
From: Robert Dewar @ 1997-08-07  0:00 UTC (permalink / raw)



Oh, and by the way, if you think that testing is a sufficient method
for ensuring lack of bugs in floating-point algorithms, you must have
a little chat with Intel sometime about (a) the extensive testing they
did of the floating-point algorithms on the Pentium and (b) the half
billion dollar bug that turned up later :-)





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
       [not found]                 ` <5sbgpk$q0n$1@goanna.cs.rmit.edu.au>
@ 1997-08-07  0:00                   ` Robert Dewar
       [not found]                     ` <33FE4603.1B6B@pseserv3.fw.hac.com>
  1997-08-08  0:00                   ` W. Wesley Groleau x4923
  1 sibling, 1 reply; 105+ messages in thread
From: Robert Dewar @ 1997-08-07  0:00 UTC (permalink / raw)



Richard said

<<if a major component of what you need is already available in a good
free (netlib, Harwell library, others) or commercial (NAG, IMSL, ...)
library, DON'T write your own.>>

Ah ha! Finally, a simple guideline for fpt calculations that I can agree
with :-)





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
       [not found]                     ` <5scugs$jdc$1@cnn.nas.nasa.gov>
@ 1997-08-07  0:00                       ` Robert Dewar
  0 siblings, 0 replies; 105+ messages in thread
From: Robert Dewar @ 1997-08-07  0:00 UTC (permalink / raw)



<<  However, it is true that if you know what is going on, and know what you
want that you might get away with it. But *I* would never write a code to
go into a life-critical system that had the line

if(a == b) return -1;

  Or the like, where a and b are floats.>>


There is zero justification for this statement. The general rule is
that you had better not write any floating-point operations in life
critical systems that have not been extremely carefully analyzed by
competent numerical analysts (many systems of rules for such systems
eliminate the use of floating-point completely for this reason).

If you are using floating-point in such a system, with proper analysis,
then it is entirely possible that this analysis may indicate not only
that an equality operation of the above type is acceptable, but it may
be correct where some kind of attempt at approximate testing would be
incorrect.

I cannot imagine that anyone would think that simple (and incorrect) rules
such as the above would make it safe to employ floating-point
calculations in life critical systems.





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-06  0:00                 ` Robert Dewar
@ 1997-08-07  0:00                   ` Shmuel (Seymour J.) Metz
  1997-08-08  0:00                     ` Peter Shenkin
  0 siblings, 1 reply; 105+ messages in thread
From: Shmuel (Seymour J.) Metz @ 1997-08-07  0:00 UTC (permalink / raw)



Robert Dewar wrote:
> 
> The idea that having a math degree qualifies someone as a numerical
> analyst, let alone a competent one, is about on the same level as
> assuming anyone with a degree in computer science is an expert in
> logic programming systems (or choose any other specialty here).

Absolutely! Most of Mathematics concerns issues other than computation.
In fact, huge chunks of Mathematics don't deal with numbers at all. 
Further, numerical analysis is more of an engineering discipline than a
mathematical one. If I have a visual problem I want an Ophthalmologist; an
ear, nose and throat specialist won't do.

-- 

                        Shmuel (Seymour J.) Metz
                        Senior Software SE

The values in from and reply-to are for the benefit of spammers:
reply to domain eds.com, user msustys1.smetz or to domain gsg.eds.com,
user smetz.




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-07  0:00                   ` Shmuel (Seymour J.) Metz
@ 1997-08-08  0:00                     ` Peter Shenkin
  1997-08-09  0:00                       ` Albert Y.C. Lai
  0 siblings, 1 reply; 105+ messages in thread
From: Peter Shenkin @ 1997-08-08  0:00 UTC (permalink / raw)



Shmuel (Seymour J.) Metz wrote:
> 
> Robert Dewar wrote:
> >
> > The idea that having a math degree qualifies someone as a numerical
> > analyst, let alone a competent one, is about on the same level as
> > assuming anyone with a degree in computer science is an expert in
> > logic programming systems (or choose any other specialty here).

Or assuming that anyone with a degree in English knows how to read
and write....  :-)

	-P.

-- 
"Making Barbie smart is like making GI Joe a conscientious objector"
M.Dowd
* Peter S. Shenkin; Chemistry, Columbia U.; 3000 Broadway, Mail Code 3153 *
** NY, NY  10027;  shenkin@columbia.edu;  (212)854-5143;  FAX: 678-9039 ***
* MacroModel WWW page: http://www.columbia.edu/cu/chemistry/mmod/mmod.html *




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-08  0:00                     ` Gerhard Heinzel
@ 1997-08-08  0:00                       ` Daniel Villeneuve
  1997-08-08  0:00                       ` schlafly
  1997-08-09  0:00                       ` Robert Dewar
  2 siblings, 0 replies; 105+ messages in thread
From: Daniel Villeneuve @ 1997-08-08  0:00 UTC (permalink / raw)




Gerhard Heinzel <ghh@mpq.mpg.de> writes:
> [snip]
> Second, if you want to compare two numbers that come from some algorithm
> (i.e. is an eigenvalue zero? is a pivot zero? is a correlation coefficient
> unity?) In these cases one must NEVER test for exact equality, but always
> take into account rounding errors and devise some test such as
> 
> (1) fabs(x-y)<EPSILON
> 
> or (2) fabs((x-y)/y)<EPSILON
> 
> or (3) something similar, depending on the situation.
[ numbers mine] 

Ok for not testing true equality, but the two cases above are quite
different.  One of the problems I'm faced with is to verify that a point
x in R^n is on one side of a hyperplane, i.e.

  sum_i(a_i * x_i) >= b.

Should I:

a1) compute the sum in x, setting y to b and using absolute test (1),

a2) same as a1), but using relative test (2),

b1) try to avoid numerical cancellation by summing negative terms and
    positive terms separately, setting x to the positive sum and y to the
    negative of the negative sum, then using absolute test (1),

b2) same as b1), but using relative test (2),

Of course, each of the above computed sums is assumed to be computed with
the right algorithm, depending on the precision needed (e.g., using Kahan's
formula (*) if appropriate).

Daniel

(*) from David Goldberg's article "What Every Computer Scientist Should
    Know About Floating-Point Arithmetic"
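
For reference, a minimal Ada sketch of compensated (Kahan) summation --
assuming that is the formula meant above; the array type is invented for
the example:

   type Float_Array is array (Positive range <>) of Float;

   function Kahan_Sum (X : Float_Array) return Float is
      Sum : Float := 0.0;
      C   : Float := 0.0;  --  running compensation for lost low-order bits
   begin
      for I in X'Range loop
         declare
            Y : constant Float := X (I) - C;
            T : constant Float := Sum + Y;
         begin
            C   := (T - Sum) - Y;  --  algebraically zero; in floating
                                   --  point it captures the rounding
                                   --  error of Sum + Y
            Sum := T;
         end;
      end loop;
      return Sum;
   end Kahan_Sum;

Note this only works if the compiler does not algebraically simplify
(T - Sum) - Y, which ties back to the optimization rules discussed
earlier in this thread.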




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-07  0:00                   ` Dave Sparks
  1997-08-08  0:00                     ` Robert Dewar
  1997-08-08  0:00                     ` Jan-Christoph Puchta
@ 1997-08-08  0:00                     ` Mark Eichin
  2 siblings, 0 replies; 105+ messages in thread
From: Mark Eichin @ 1997-08-08  0:00 UTC (permalink / raw)



hmm, is there an Ada package to do error-bounds-propagated arithmetic?
(I recall writing a FORTRAN package for this in a high school
Chemistry class, for doing lab report calculations, a couple of
million years ago :-)  It was an implementation of the simple
intro-to-chemistry rules for error bounds (I don't recall if it
handled significant figures.)  Then you end up with the error-bounds
upon error-bounds issue ("25 +/- 3.013597%" :-) Needless to say this
doesn't handle the whole issue, it was definitely an amateur shot at
the problem - but given that it's a "real world" problem, I'd be
curious to see a real world solution...)
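
For what it's worth, a minimal Ada sketch of what such a package might
look like -- invented here for illustration, not an existing library
(the propagation rules are the simple worst-case ones):

   package Error_Bounds is
      type Bounded is record
         Value : Float;
         Error : Float;  --  absolute error bound, assumed >= 0.0
      end record;
      function "+" (L, R : Bounded) return Bounded;
      function "*" (L, R : Bounded) return Bounded;
   end Error_Bounds;

   package body Error_Bounds is
      function "+" (L, R : Bounded) return Bounded is
      begin
         --  worst case: absolute error bounds add
         return (Value => L.Value + R.Value,
                 Error => L.Error + R.Error);
      end "+";

      function "*" (L, R : Bounded) return Bounded is
      begin
         --  first-order propagation; ignores the tiny Error*Error term
         return (Value => L.Value * R.Value,
                 Error => abs (L.Value) * R.Error
                          + abs (R.Value) * L.Error);
      end "*";
   end Error_Bounds;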





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-08  0:00                     ` Gerhard Heinzel
  1997-08-08  0:00                       ` Daniel Villeneuve
@ 1997-08-08  0:00                       ` schlafly
  1997-08-09  0:00                       ` Robert Dewar
  2 siblings, 0 replies; 105+ messages in thread
From: schlafly @ 1997-08-08  0:00 UTC (permalink / raw)



In article <Pine.SOL.3.91.970808104726.13390B-100000@mpqgrav1.mpq.mpg.de>, Gerhard Heinzel <ghh@mpq.mpg.de> writes:
> On 7 Aug 1997 schlafly@bbs.cruzio.com wrote:
> Second, if you want to compare two numbers that come from some algorithm
> (i.e. is an eigenvalue zero? is a pivot zero? is a correlation coefficient
> unity?) In these cases one must NEVER test for exact equality, but always
> take into account rounding errors and devise some test such as
> 
> fabs(x-y)<EPSILON
> 
> or fabs((x-y)/y)<EPSILON
> 
> or something similar, depending on the situation.

I don't agree.  A program might have to only test for 0 pivots
because it already has an error estimation piece that measures
the effect of small pivots.  Or it might have a lot of
correlation coefficients which are exactly 1, and only want to
compute some auxiliary numbers when the coefficient is not 1.

You seem to be presuming that if someone does not do error
analysis in the style that you usually do it, then no error
analysis is being done.  On the contrary, if someone does a
careful error analysis elsewhere in the program, then a floating
point equality test can be quite harmless.

Roger





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point conversions
  1997-07-29  0:00 floating point comparison Matthew Heaney
                   ` (2 preceding siblings ...)
  1997-08-02  0:00 ` Michael Sierchio
@ 1997-08-08  0:00 ` Mark Lusti
  3 siblings, 0 replies; 105+ messages in thread
From: Mark Lusti @ 1997-08-08  0:00 UTC (permalink / raw)



What about type conversion? I recently tried to calculate the floor after a
floating point calculation, and I was quite surprised by the result.

result : integer;
fval   : float;
  ...
for i in 1 .. 10 loop
   fval   := float (2 * i + 1);
   result := integer (fval / 2.0 - 0.5);
   put (result);
   put_line ("");
end loop;

fval ->  3, 5, .. 21

expected result
1, 2, 3, 4, 5, 6, 7, 8, 9, 10

on the SUN I got the correct result but on the target (SVME160) the result was:
0, 2, 2, 4, 4, 6, 6, 8, 8, 10

Is this a bug? Or am I doing something wrong?
I found in ANSI/MIL-STD1815A under 4.6 Type Conversion
[...]
(a) Numeric types
 [...] The conversion of a real value to an integer type rounds to the nearest
integer; if the operand is halfway between two integers (within the accuracy of
the real subtype) rounding may be either up or down.

Shouldn't "either" mean one or the other?

Does anyone have a workaround without adding some EPSILON?
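
One deterministic alternative, assuming an Ada 95 compiler, is to take
the floor explicitly instead of relying on the rounding of the integer
conversion -- a sketch using the declarations above:

   result := integer (float'floor (fval / 2.0));

float'floor rounds toward negative infinity deterministically, and the
conversion of the resulting whole number to integer is then exact.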

Thanx

mark












^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-07  0:00                   ` Dave Sparks
@ 1997-08-08  0:00                     ` Robert Dewar
  1997-08-08  0:00                     ` Jan-Christoph Puchta
  1997-08-08  0:00                     ` Mark Eichin
  2 siblings, 0 replies; 105+ messages in thread
From: Robert Dewar @ 1997-08-08  0:00 UTC (permalink / raw)



Dave Sparks said

<<It's wrong to assume that simple numerical analysis is so easy that
anyone can do it.
>>

The interesting thing is that most people would agree with this statement
if you substitute almost any other technical aspect of computing for
"floating-point", but when it comes to floating-point, there seems to 
always be a fair number of people who are seduced into believing it
is false for floating-point, probably because:

  a) syntactically, the use of floating-point presents no problems
  b) there is a simple (but wrong) semantic model -- real arithmetic
  c) using this wrong semantic model often works well enough

The trouble is that, as I and others have said in this thread before, 
telling when c) might not be true is very hard. Note that we have not
had contributions in this thread that say:

"I know all about that numerical analysis stuff, but don't worry, you
don't need to know it, just go straight ahead and use floating-point
without worrying about it."

I think that says something .... :-)





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
       [not found]                 ` <5sbgpk$q0n$1@goanna.cs.rmit.edu.au>
  1997-08-07  0:00                   ` Robert Dewar
@ 1997-08-08  0:00                   ` W. Wesley Groleau x4923
  1 sibling, 0 replies; 105+ messages in thread
From: W. Wesley Groleau x4923 @ 1997-08-08  0:00 UTC (permalink / raw)



> >the point is I don't quite believe the sky will fall if
> >programmers don't have dual-degrees in Math and Comp. Sci.
> >(And yes, I'm exaggerating for effect--or for frustration.)
> 
> The sky won't fall, but planes might.
> I still flinch when I remember a commercial statistics package
> (not SAS) that happily `inverted' a singular matrix and kept going...

Even if I thought (I don't) that only one percent of all floating 
point code needs numerical analysis, I'd still recommend that a
numerical analyst at least be offered the chance to look at any
safety-critical code AND its design and requirements documents.
That's not the same as saying only numerical analysts should write
code.

-- 
----------------------------------------------------------------------
    Wes Groleau, Hughes Defense Communications, Fort Wayne, IN USA
Senior Software Engineer - AFATDS                  Tool-smith Wanna-be

Don't send advertisements to this domain unless asked!  All disk space
on fw.hac.com hosts belongs to either Hughes Defense Communications or 
the United States government.  Using email to store YOUR advertising 
on them is trespassing!
----------------------------------------------------------------------




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
       [not found]                   ` <33EA46CC.226@pseserv3.fw.hac.com>
@ 1997-08-08  0:00                     ` Christian Bau
  1997-08-12  0:00                     ` Martin Tom Brown
  1 sibling, 0 replies; 105+ messages in thread
From: Christian Bau @ 1997-08-08  0:00 UTC (permalink / raw)



In article <33EA46CC.226@pseserv3.fw.hac.com>, "W. Wesley Groleau x4923"
<wwgrol@pseserv3.fw.hac.com> wrote:

> Well, the part about "((a-b) - a) + b can turn up as equal to b"
> might use some clarification (as Chris L. Kuszmaul offered) for 
> some readers.  And in fact, it might be expressed as a - b - c + d
> where a & c are close in value and large, and b & d are close in
> value but small.  Because I always try to algebraically simplify
> expressions, and "((a-b) - a) + b" would never appear in my code
> unless a and b were functions that could change values between calls.

One interesting case where you might use exactly the expression "((a-b) -
a) + b": If you are using IEEE 754 arithmetic, and a >= b >= 0, then the
expression

   ((a-b) - a) + b

computes _exactly_ the rounding error made in computing a-b.
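
A small demonstration of this (a sketch assuming IEEE single precision;
A and B are variables rather than constants to avoid Ada's exact
evaluation of static expressions, mentioned earlier in this thread):

   with Ada.Text_IO;
   procedure Rounding_Error_Demo is
      A : Float := 33_554_432.0;  --  2.0**25
      B : Float := 1.0;
   begin
      --  A - B rounds back to A (33_554_431.0 needs 25 bits), so the
      --  expression reports the rounding error of A - B, here 1.0.
      Ada.Text_IO.Put_Line (Float'Image (((A - B) - A) + B));
   end Rounding_Error_Demo;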

-- For email responses, please remove the last emm from my address. 
-- For spams, please send them to whereever.you.want@but.not.here




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-07  0:00                   ` Dave Sparks
  1997-08-08  0:00                     ` Robert Dewar
@ 1997-08-08  0:00                     ` Jan-Christoph Puchta
  1997-08-09  0:00                       ` Robert Dewar
  1997-08-10  0:00                       ` Lynn Killingbeck
  1997-08-08  0:00                     ` Mark Eichin
  2 siblings, 2 replies; 105+ messages in thread
From: Jan-Christoph Puchta @ 1997-08-08  0:00 UTC (permalink / raw)



Dave Sparks wrote:
> But I've seen a program where one of the inputs was a temperature
> in degrees Centigrade, in the range 10 to 50 with no fractional
> part.  The calculated results were displayed to six decimal places -
> very misleading.

Not so misleading. Many people just laugh about it.

As a physicist told me: First digit nice, second digit guessed, third
digit lied.

But try to explain this to someone who is proud of his PhD in medicine
...


JCP




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
       [not found]                   ` <5sbb90$qsc@redtail.cruzio.com>
       [not found]                     ` <5scugs$jdc$1@cnn.nas.nasa.gov>
@ 1997-08-08  0:00                     ` Gerhard Heinzel
  1997-08-08  0:00                       ` Daniel Villeneuve
                                         ` (2 more replies)
  1 sibling, 3 replies; 105+ messages in thread
From: Gerhard Heinzel @ 1997-08-08  0:00 UTC (permalink / raw)



On 7 Aug 1997 schlafly@bbs.cruzio.com wrote:

> In article <5sar4r$t7m$1@cnn.nas.nasa.gov>, fyodor@sally.nas.nasa.gov (Chris L. Kuszmaul) writes:
> >   Fair enough. There are two basic guidelines I follow:
> > 
> > 1: Do not test for equality between floating point values. If you do, then
> > you better get a numerical analyst, or figure out how to avoid using floating
> > point, or see if you can use a 'close to equal' check.
> 
> There is nothing inherently wrong with testing for equality,
> if that is what you really want.  It is a very easily
> understandable and reliable operation, except for some funny
> business involving NANs.
> 

Maybe it should be clarified WHY one would want to test for equality.
I can imagine two distinct situations:

First, to test whether two numbers that have not been computed, but only
entered and maybe stored back and forth, are the same. I rarely have that
case (it is usually much better to mark things with logical or integer
variables in that case, and test the integers). But I assume that testing
for equality may be ok in that case.

Second, if you want to compare two numbers that come from some algorithm
(i.e. is an eigenvalue zero? is a pivot zero? is a correlation coefficient
unity?) In these cases one must NEVER test for exact equality, but always
take into account rounding errors and devise some test such as

fabs(x-y)<EPSILON

or fabs((x-y)/y)<EPSILON

or something similar, depending on the situation.
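
In Ada the same two tests might read (a sketch; Epsilon here is a
problem-specific tolerance chosen by the analysis, not Float'Epsilon):

   Epsilon : constant Float := 1.0E-6;  --  invented for the example

   function Nearly_Equal_Abs (X, Y : Float) return Boolean is
   begin
      return abs (X - Y) < Epsilon;          --  absolute test
   end Nearly_Equal_Abs;

   function Nearly_Equal_Rel (X, Y : Float) return Boolean is
   begin
      --  relative test; assumes Y /= 0.0
      return abs ((X - Y) / Y) < Epsilon;
   end Nearly_Equal_Rel;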

=====================================================================
 Gerhard Heinzel                            E-mail:   ghh@mpq.mpg.de
 Max-Planck-Institut fuer Quantenoptik   Phone: +49(89)32905-268/252
 Hans-Kopfermann-Str. 1            Phone (30m Lab): +49(89)3299-3282
 D-85748 Garching                              Fax: +49(89)32905-200  
 Germany                           http://www.geo600.uni-hannover.de
=====================================================================





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-08  0:00                     ` Gerhard Heinzel
  1997-08-08  0:00                       ` Daniel Villeneuve
  1997-08-08  0:00                       ` schlafly
@ 1997-08-09  0:00                       ` Robert Dewar
  1997-08-09  0:00                         ` David Ullrich
  2 siblings, 1 reply; 105+ messages in thread
From: Robert Dewar @ 1997-08-09  0:00 UTC (permalink / raw)



Gerhard says

<<Maybe it should be clarified WHY one would want to test for equality.
I can imagine two distinct situations:
>>

There are many other situations besides those that you mentioned, the
following immediately come to mind:

  1. A test for zero may be quite reasonable; even with signed zeroes
     and infinities, the case of zero may require special handling
     which is not required for small non-zero values.

  2. Careful analysis may show that an iterative algorithm converges
     to precise equality under the rounding regime being used. In such
     a case, it may be both more accurate and more efficient to check
     for precise convergence.

  3. If you know that the values that are represented are all precisely
     represented (e.g. integers in a representable range), then exact
     equality is perfectly reasonable.

  4. Precise equality comparisons with infinity are often perfectly
     appropriate (and indeed the idea of epsilon testing here is
     completely bogus)

Plus many more ...
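
A small C illustration of case 3 (the example is illustrative, not from
the post): integers of moderate size are exactly representable in a
double, so summing them incurs no rounding at all and "==" is safe.

#include <assert.h>

int main(void)
{
    double total = 0.0;
    int i;

    /* Each partial sum is a small integer, hence exactly
       representable in a double: no rounding ever occurs. */
    for (i = 0; i < 100; i++)
        total += 1.0;

    assert(total == 100.0);   /* exact equality holds */
    return 0;
}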





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-09  0:00                       ` Robert Dewar
@ 1997-08-09  0:00                         ` David Ullrich
  1997-08-10  0:00                           ` Robert Dewar
  0 siblings, 1 reply; 105+ messages in thread
From: David Ullrich @ 1997-08-09  0:00 UTC (permalink / raw)



Robert Dewar wrote:
[...]

	Well, curiously it turns out that the question of the accuracy of
floating-point operations matters to me more than it did when the thread
started. You and a few others have made points about how in some situations
it can be perfectly appropriate to test for equality. If we're talking
about comparing two numbers that we just came up with somehow fine, I
can believe that.

	But what if, say, an integer of some sort is going to be read from
a file, and then converted to another integer by some algorithm? Maybe we
read a byte b from the file, and then we want to use
n:=  Trunc(Log(b + 1) * 64000) later, "or something like that". It doesn't
matter whether n gets exactly the "right" value or not, but it's crucial
that the same b _always_ leads to the same n, even on different systems.
Surely I can't depend on floating-point operations being standard enough
for this, can I? (My plan at present is to use the floating-point to give
a first approximation, and then verify/tweak n by purely integer operations
until it passes some test defined by integer operations. Doesn't have to
be fast, this only happens once or twice.)

	Seems like there must be _some_ error allowed in the standard
standards, and any error at all means I can't trust the floating-point
here. Please tell me I'm wrong; it would save me some work.

-- 
David Ullrich

?his ?s ?avid ?llrich's ?ig ?ile
(Someone undeleted it for me...)




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-08  0:00                     ` Jan-Christoph Puchta
@ 1997-08-09  0:00                       ` Robert Dewar
  1997-08-10  0:00                       ` Lynn Killingbeck
  1 sibling, 0 replies; 105+ messages in thread
From: Robert Dewar @ 1997-08-09  0:00 UTC (permalink / raw)



Dave Sparks wrote:
> But I've seen a program where one of the inputs was a temperature
> in degrees Centigrade, in the range 10 to 50 with no fractional
> part.  The calculated results were displayed to six decimal places -
> very misleading.



This output may or may not be misleading. The question is whether
a one degree difference in the input could affect the result to the
extent of one part in 100,000; if not, then six decimal places
may be quite reasonable -- careful analysis is needed in all such cases.





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-08  0:00                     ` Peter Shenkin
@ 1997-08-09  0:00                       ` Albert Y.C. Lai
  0 siblings, 0 replies; 105+ messages in thread
From: Albert Y.C. Lai @ 1997-08-09  0:00 UTC (permalink / raw)



In article <33EB29EE.446B@still3.chem.columbia.edu>,
Peter Shenkin <shenkin@still3.chem.columbia.edu> wrote:
>Or assuming that anyone with a degree in English knows how to read
>and write....  :-)

Or that anyone capable of switching on an oven knows how to cook, that
anyone knowing how to hold a knife also knows surgery, that anyone who
knows a programming language also knows everything about problem
solving with computers.

Some programmers believe that careful analysis is not necessary in
simple, obvious situations.  This is correct, but I raise the question:
how do you know that it is simple and obvious enough, unless someone
carries out a careful analysis?  Intuition is great only when it is
rigorously justified.

As for non-safety-critical applications, the prevailing attitude among
programmers is that all bugs are fine, not to mention numerical bugs.
Their sloppiness on correctness in general is the real reason behind
their sloppiness on numerical analysis.  Thus they replace logic with
general-purpose programming style guides, thus they replace numerical
analysis with general-purpose numerical style guides.  To this I do not
have much more to say, since they would whine about making a living and
not wanting to invest in education and thinking; but I must remark that
thieves become thieves also because they have to make a living.

-- 
Albert Y.C. Lai   trebla@vex.net   http://www.vex.net/~trebla/




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-09  0:00                         ` David Ullrich
@ 1997-08-10  0:00                           ` Robert Dewar
  1997-08-16  0:00                             ` Andrew V. Nesterov
  0 siblings, 1 reply; 105+ messages in thread
From: Robert Dewar @ 1997-08-10  0:00 UTC (permalink / raw)



David says

<<        Seems like there must be _some_ error allowed in the standard
standards, and any error at all means I can't trust the floating-point
here. Please tell me I'm wrong; it would save me some work.>>


If we are talking about IEEE floating-point, the operations have no errors
at all. There are no rounding errors or anything else in the arithmetic
model defined in this standard.

The results of the operations are exactly well defined to the last bit (*)
and must be the same on all machines.

If you regard floating-point as equivalent to real arithmetic, then of
course the results do not match those of real arithmetic. But to regard
floating-point arithmetic as real arithmetic is fundamentally wrong, just
as regarding integer arithmetic in a language with limited range integers
as an implementation of mathematical natural numbers is misleading.

It would be nice to have real arithmetic on computers. There is a very
nice little program for computing pi to a huge number of places if you
had this capability (See Martin Gardner's column on how to compute pi to
10000 digits on a pocket calculator in Scientific American -- the catch
is your calculator must have real arithmetic :-)

You can trust floating-point completely if the following conditions are
met:

   1. You are using IEEE, or some other well defined standard (many non-IEEE
      floating-point systems do not meet this requirement)

   2. You understand the system you are using, and do not expect it to
      be equivalent to real arithmetic.

   3. The mapping from the language/implementation you are using to the
      floating-point operations is well defined, and you understand it.

1 is little problem today; most modern computer systems support IEEE
arithmetic, although DEC and SGI have introduced unwelcome variations
in their most recent high performance hardware (the Intel implementation
is complete and accurate, as are those of IBM, HP, Sun and most others).

2 is the subject of this thread. It is a big problem, but one made by people
not by computers.

3 is definitely a problem, but a manageable one in practice. I have a thesis
student, Sam Figueroa, who recently moved from Next to Apple :-), who is
working on exactly this problem, and the recent LCAS/LIAS standardization
is a step in the right direction.

I really dislike the term "rounding error". Error is a loaded term which
somehow indicates that something is wrong. When you take two IEEE numbers
and do an addition, you get a precise answer, with no error. Perhaps a
term like "real discrepancy" would be better, and would help avoid
propagating the dangerous impression that floating-point arithmetic *is*
real arithmetic that does not work right.

(*) Yes, for the IEEE experts, I know perfectly well that in marginal 
cases involving denormals (cases that most traditional floating point
systems blow completely with sudden underflow), there is an implementation
defined non-determinacy arising from double rounding. It's a pity that this
undermines the absolute statements that results are always entirely
defined, but in practice, it is very unlikely that this is a problem.
For a complete discussion of this issue, you can look at the paper that
Sam Figueroa wrote on this subject. Email me if you are interested.






^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-08  0:00                     ` Jan-Christoph Puchta
  1997-08-09  0:00                       ` Robert Dewar
@ 1997-08-10  0:00                       ` Lynn Killingbeck
  1 sibling, 0 replies; 105+ messages in thread
From: Lynn Killingbeck @ 1997-08-10  0:00 UTC (permalink / raw)



Jan-Christoph Puchta wrote:
> 
> Dave Sparks wrote:
> > But I've seen a program where one of the inputs was a temperature
> > in degrees Centigrade, in the range 10 to 50 with no fractional
> > part.  The calculated results were displayed to six decimal places -
> > very misleading.
> 
> Not so misleading. Many just laugh about it.
> 
> As a physicist told me: First digit nice, second digit guessed, third
> digit lied.
> 
> But try to explain this to someone who is proud of his PhD in medicine
> ...
> 
> JCP

I like A.K. Dewdney's term from '200% of Nothing', at the end of chapter
3, under 'The 226.8 Gram Canary'. "... dramadigits, ... add a certain
drama ... the more there are ..."

Lynn Killingbeck




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
       [not found]                   ` <33EA46CC.226@pseserv3.fw.hac.com>
  1997-08-08  0:00                     ` Christian Bau
@ 1997-08-12  0:00                     ` Martin Tom Brown
  1997-08-23  0:00                       ` W. Wesley Groleau x4923
  1 sibling, 1 reply; 105+ messages in thread
From: Martin Tom Brown @ 1997-08-12  0:00 UTC (permalink / raw)



In article <33EA46CC.226@pseserv3.fw.hac.com>
           wwgrol@pseserv3.fw.hac.com "W. Wesley Groleau x4923" writes:

> Samuel Mize wrote:
> > Well, all fairness aside, I was trying to quantify what cases DON'T
> > need deeper numerical analysis.  
> 
> I realize that.  What I've been arguing against is the implication
> by some that _all_ cases need deeper numerical analysis.  Perhaps
> I'm just misinterpreting what they are saying.

As a concrete example of a "simple" numerical analysis problem
familiar from high school algebra which has enough traps to be
illustrative try considering the accurate solution of a quadratic

	ax^2 + bx + c = 0	for real coefficients a, b, c

This toy problem was previously discussed last November.

The problem is that for most test data the naive "solution" may 
well appear to work, but for various special cases it fails, and 
without the appropriate numerical analysis you have no idea when.
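
For reference, the textbook fix (a sketch, not from Martin's post, and
assuming the usual copysign() from the math library): when b*b >> 4ac
the naive formula subtracts nearly equal quantities for one root, so
compute the larger-magnitude root first and recover the other from the
product of the roots, c/a.

#include <math.h>

/* Cancellation-avoiding solver for a*x^2 + b*x + c = 0.  Sketch only:
   assumes a != 0 and real roots; a library routine would also guard
   against overflow in b*b and handle the complex case. */
int solve_quadratic(double a, double b, double c,
                    double *x1, double *x2)
{
    double disc, q;

    disc = b * b - 4.0 * a * c;
    if (disc < 0.0)
        return 0;                     /* complex roots: not handled here */

    /* Add sqrt(disc) to b with matching sign, so the subtraction
       that loses digits when b*b >> 4ac never happens. */
    q = -0.5 * (b + copysign(sqrt(disc), b));

    *x1 = q / a;                      /* larger-magnitude root */
    *x2 = (q != 0.0) ? c / q : 0.0;   /* from x1 * x2 = c/a */
    return 1;
}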

Developing robust algorithms is best left to experts, and there are
plenty of well tested libraries, like NAG and Harwell, to choose from.
It does not make sense to keep on reinventing the wheel.

Regards,
-- 
Martin Brown  <martin@nezumi.demon.co.uk>     __                CIS: 71651,470
Scientific Software Consultancy             /^,,)__/





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-06  0:00           ` Robert Dewar
@ 1997-08-14  0:00             ` Paul Eggert
  0 siblings, 0 replies; 105+ messages in thread
From: Paul Eggert @ 1997-08-14  0:00 UTC (permalink / raw)



dewar@merv.cs.nyu.edu (Robert Dewar) writes:

> your comment on efficiency on the x86 is technically wrong. You can place
> the x86 floating-point into 64 bit mode, so that there is no efficiency
> penalty, and indeed at least some C compilers on the x86 do this.

But in 64-bit mode, the x86 doesn't round denormalized numbers properly.
It simply rounds the mantissa at 53 bits, resulting in a double-rounding error.
The proper behavior is to round at fewer bits.

If you know of an efficient workaround for this problem I'd love to
hear about it!  Otherwise, I still think that you'll need to live with
some fuzz in your x86 IEEE double implementation, unless you are
willing to suffer a performance penalty.




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-10  0:00                           ` Robert Dewar
@ 1997-08-16  0:00                             ` Andrew V. Nesterov
  1997-08-18  0:00                               ` Robert Dewar
  0 siblings, 1 reply; 105+ messages in thread
From: Andrew V. Nesterov @ 1997-08-16  0:00 UTC (permalink / raw)



In article <dewar.871221105@merv>, dewar@merv.cs.nyu.edu (Robert Dewar) wrote:
[snip]
>If we are talking about IEEE floating-point, the operations have no errors
>at all. There are no rounding errors or anything else in the arithemtic
>model defined in this standard.

        Strongly disagree. There ARE roundoff errors even in the IEEE 754
arithemtic model. Moreover, the standard clearly specifies rounding models.

>
>The results of the operations are exactly well defined to the last bit (*)
>and must be the same on all machines.
>
        They could differ, due to the unspecified length of registers,
extended precision intermediates, or the rounding mode chosen.

>
[snip]
>
>I really dislike the term "rounding error". Error is a loaded term which
>somehow indicates that something is wrong. When you take two IEEE numbers
>and do an addition, you get a precise answer, with no error. Perhaps a
>term like "real discrepancy" would be better, and would help avoid
>propagating the dangerous impression that floating-point arithmetic *is*
>real arithmetic that does not work right.

        When two representable (i.e. having no roundoff error) IEEE
numbers are added, the result may be rounded; that is, the true result
cannot be stored as an exact IEEE number. Similarly, the results of other
operations (multiplications, square roots, etc.) may be rounded.

Considering the term itself, it sounds reasonably adequate to me. As
far as I can see (and since I am not a native English speaker I could
be wrong), the term means "an error resulting from a roundoff", i.e. a
floating point number is an approximate estimate of a precise number. I
would refer to J. Wilkinson's books; he used the term many times and saw
nothing wrong with it.

[snip]

--
Andrew.




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-16  0:00                             ` Andrew V. Nesterov
@ 1997-08-18  0:00                               ` Robert Dewar
  1997-08-19  0:00                                 ` Jim Carr
                                                   ` (2 more replies)
  0 siblings, 3 replies; 105+ messages in thread
From: Robert Dewar @ 1997-08-18  0:00 UTC (permalink / raw)



Andrew says

<<        Strongly disagree. There ARE roundoff errors even in the IEEE 754
arithmetic model. Moreover, the standard clearly specifies rounding modes.>>

You completely miss the point I am making.

There are no *errors*, the discrepancies between IEEE arithmetic and
real arithmetic are not errors, they are simply differences that come from
two different arithmetic models.

When we have integer arithmetic and we divide 10 by 3 to get 3, we do
not say this is an error. The result is different from the mathematical
value of 10.0/3.0, but there is no error here, just a different arithmetic 
model.

I know perfectly well that the phrase "rounding error" is well established,
but my point is that calling it an error leads people into the naive trap
of thinking of floating-point arithmetic as being real arithmetic.

In fact I received quite a few email messages, from some quite interesting
people :-) saying that they agreed that it was a pity that the term
rounding error had ever got into the literature, but of course
it is much too entrenched to get rid of.

But your response tends to make me think that you are indeed falling
into the trap of thinking of these discrepancies as errors. It's a mistake!

As for your comment about different register lengths etc.: this is a matter
of the binding of the language you are using to IEEE. The set of IEEE
operations contains no such uncertainty, and a decent binding of a high
level language to IEEE (e.g. SANE from Apple) must avoid such uncertainties.





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-18  0:00                               ` Robert Dewar
@ 1997-08-19  0:00                                 ` Jim Carr
  1997-08-21  0:00                                   ` Christian Bau
  1997-08-19  0:00                                 ` Hans Olsson
  1997-08-30  0:00                                 ` Paul Eggert
  2 siblings, 1 reply; 105+ messages in thread
From: Jim Carr @ 1997-08-19  0:00 UTC (permalink / raw)



Andrew says
 
 <<        Strongly disagree. There ARE roundoff errors even in the IEEE 754
  arithmetic model. Moreover, the standard clearly specifies rounding modes.>>
 
dewar@merv.cs.nyu.edu (Robert Dewar) writes:
>
>There are no *errors*, the discrepancies between IEEE arithmetic and
>real arithmetic are not errors, they are simply differences that come from
>two different arithmetic models.

 It could be useful to adopt the terminology of experimental science 
 here, and use "uncertainty" to denote the variance between the 
 results of real arithmetic and floating point arithmetic done 
 'correctly' in a particular model.  The two have much in common, 
 since one propagates those uncertainties in a similar way, and 
 since the confusion associated with calling them "errors" is 
 equally strong.  And both are poorly known.  ;-) 

>When we have integer arithmetic and we divide 10 by 3 to get 3, we do
>not say this is an error. The result is different from the mathematical
>value of 10.0/3.0, but there is no error here, just a different arithmetic 
>model.

 And you _should_ know that 10/3 = 3 could mean 3 +/- 0.5, or it 
 could mean (10 +/- 0.5)/(3 +/- 0.5) depending on the circumstances. 

 Or it could be an _error_ if you are using 10/3 as an exponent when 
 you meant to use 10./3. = 3.33333....
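
 A tiny C illustration of that last pitfall (the example is mine, not 
 from the post): 

 #include <stdio.h>
 #include <math.h>

 int main(void)
 {
     printf("%g\n", pow(2.0, 10 / 3));     /* 10/3 is integer 3: prints 8 */
     printf("%g\n", pow(2.0, 10.0 / 3.0)); /* ~3.3333: prints ~10.0794 */
     return 0;
 }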

>I know perfectly well that the phrase "rounding error" is well established,
>but my point is that calling it an error leads people into the naive trap
>of thinking of floating-point arithmetic as being real arithmetic.

 So is the phrase "experimental error" in physics and other fields, 
 and it causes similar difficulties.  It is not an error to read a 
 ruler with the maximum possible accuracy.  Similarly, it is not an 
 error to store a floating point approximation with the maximum 
 possible bits, correctly rounded per the floating point model. 

 It _is_ an error to treat those numbers as if they were real numbers. 

-- 
 James A. Carr   <jac@scri.fsu.edu>     | Commercial e-mail is _NOT_ 
    http://www.scri.fsu.edu/~jac/       | desired to this or any address 
 Supercomputer Computations Res. Inst.  | that resolves to my account 
 Florida State, Tallahassee FL 32306    | for any reason at any time. 




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-18  0:00                               ` Robert Dewar
  1997-08-19  0:00                                 ` Jim Carr
@ 1997-08-19  0:00                                 ` Hans Olsson
  1997-08-30  0:00                                 ` Paul Eggert
  2 siblings, 0 replies; 105+ messages in thread
From: Hans Olsson @ 1997-08-19  0:00 UTC (permalink / raw)



In article <dewar.871925372@merv>, Robert Dewar <dewar@merv.cs.nyu.edu> wrote:
>Andrew says
>
><<        Strongly disagree. There ARE roundoff errors even in the IEEE 754
>arithmetic model. Moreover, the standard clearly specifies rounding modes.>>
>
>You completely miss the point I am making.
>
>There are no *errors*, the discrepancies between IEEE arithmetic and
>real arithmetic are not errors, they are simply differences that come from
>two different arithmetic models.
>
>When we have integer arithmetic and we divide 10 by 3 to get 3, we do
>not say this is an error. The result is different from the mathematical
>value of 10.0/3.0, but there is no error here, just a different arithmetic 
>model.
>
>I know perfectly well that the phrase "rounding error" is well established,
>but my point is that calling it an error leads people into the naive trap
>of thinking of floating-point arithmetic as being real arithmetic.
>
>In fact I received quite a few email messages, from some quite interesting
>people :-) saying that they agreed that it was a pity that the term
>rounding error had ever got into the literature, but of course
>it is much too entrenched to get rid of.
>
>But your response tends to make me think that you are indeed falling
>into the trap of thinking of these discrepancies as errors. It's a mistake!

No, it's a way of seeing things. By seeing computer arithmetic as
real arithmetic combined with rounding errors obeying some simple rules,
some algorithms can easily be analyzed.

That model of arithmetic is not appropriate for all purposes, but it's
in general appropriate for the numerical analysis I'm interested in.
IEEE makes the rounding predictable, which can help in some cases, and
in _those_cases_ the term rounding error can be misleading.

Note that calling a well-defined and predictable discrepancy an error,
instead of the result of a different arithmetic/formula/model, is very
common in numerical analysis, because it often gives a simple insight.
In some cases the errors are further analyzed, but are still called errors.

Consider discretizations of ODEs/DAEs/PDEs. One could make an equally good
case that the error of a Runge-Kutta method is not an error, but that the
time discretization is a completely different problem and should
be analyzed as such. Seeing the Runge-Kutta discretization as a new problem
is appropriate in some cases, but I'm still happy with the term "global error".

BTW:
Does there even exist error at all in your view of numerical analysis?
(Excluding programming errors).
If so, can you give any examples?
--
// Homepage  http://www.dna.lth.se/home/Hans_Olsson/
// Email To..Hans.Olsson@dna.lth.se [Please no junk e-mail]






^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-19  0:00                                 ` Jim Carr
@ 1997-08-21  0:00                                   ` Christian Bau
  1997-08-21  0:00                                     ` Jim Carr
  0 siblings, 1 reply; 105+ messages in thread
From: Christian Bau @ 1997-08-21  0:00 UTC (permalink / raw)



In article <5tc7kl$a19$1@news.fsu.edu>, jac@ibms48.scri.fsu.edu (Jim Carr)
wrote:

> dewar@merv.cs.nyu.edu (Robert Dewar) writes:
> >There are no *errors*, the discrepancies between IEEE arithmetic and
> >real arithmetic are not errors, they are simply differences that come from
> >two different arithmetic models.

As an example: Given a floating point number with an absolute value less
than 10^15, find an integer number that is close to it. 

Solution in C with IEEE arithmetic: 

double nearby_number (double x)
{
   /* 6.0 * 1024^5 = 1.5 * 2^52; adding it pushes x into a binade
      where the spacing between doubles is 1, so round-to-nearest
      snaps the sum to an integer; subtracting restores the scale. */
   x += 6.0 * 1024.0 * 1024.0 * 1024.0 * 1024.0 * 1024.0;
   x -= 6.0 * 1024.0 * 1024.0 * 1024.0 * 1024.0 * 1024.0;
   return x;
}

Adding 1.5 * 2^52 will produce a result that is rounded to an integer
number. I can't call this a rounding error (or "uncertainty") since it is
exactly the effect that I wanted to achieve. 

> 
>  It could be useful to adopt the terminology of experimental science 
>  here, and use "uncertainty" to denote the variance between the 
>  results of real arithmetic and floating point arithmetic done 
>  'correctly' in a particular model.  The two have much in common, 
>  since one propagates those uncertainties in a similar way, and 
>  since the confusion associated with calling them "errors" is 
>  equally strong.  And both are poorly known.  ;-) 

That term "uncertainty" makes things much much worse. The difference
between well designed floating point arithmetic and real arithmetic is in
no way uncertain. It is absolutely certain and predictable. 

For example, with IEEE arithmetic it is relatively easy to compute the
difference between a+b (using real arithmetic) and a+b (using IEEE
arithmetic) _exactly_. But you can't call something "uncertainty" that can
be computed exactly. 
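
For instance, Knuth's two-sum trick recovers that difference exactly in
a few more operations (a sketch; it assumes round-to-nearest and no
extended-precision intermediates):

/* TwoSum: sum = fl(a + b), err = (a + b) - fl(a + b), exactly. */
void two_sum(double a, double b, double *sum, double *err)
{
   double s  = a + b;
   double bb = s - a;              /* the part of s contributed by b */
   double aa = s - bb;             /* the part of s contributed by a */

   *err = (a - aa) + (b - bb);     /* exact rounding difference */
   *sum = s;
}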

>  It _is_ an error to treat those numbers as if they were real numbers. 

Absolutely right.

-- For email responses, please remove the last emm from my address. 
-- For spams, please send them to whereever.you.want@but.not.here




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-21  0:00                                   ` Christian Bau
@ 1997-08-21  0:00                                     ` Jim Carr
  1997-08-21  0:00                                       ` Robert Dewar
  0 siblings, 1 reply; 105+ messages in thread
From: Jim Carr @ 1997-08-21  0:00 UTC (permalink / raw)



dewar@merv.cs.nyu.edu (Robert Dewar) writes:
} There are no *errors*, the discrepancies between IEEE arithmetic and
} real arithmetic are not errors, they are simply differences that come from
} two different arithmetic models.

christian.bau@isltd.insignia.comm (Christian Bau) writes:
>
>As an example: Given a floating point number with an absolute value less
>than 10^15, find an integer number that is close to it. 

 That is an interesting choice since you are not guaranteed to be 
 able to represent 15 decimal digits in a float data type in C.  
 Casting up to double to use your routine below will not change 
 that.  And don't tell me I pick nits; plenty of C programmers 
 will cast a float to a double to call the intrinsic double 
 routine to get a square root without narry a thought about it. 

>Solution in C with IEEE arithmetic: 
>
>double nearby_number (double x)
>{
>   x += 6.0 * 1024.0 * 1024.0 * 1024.0 * 1024.0 * 1024.0;
>   x -= 6.0 * 1024.0 * 1024.0 * 1024.0 * 1024.0 * 1024.0;
>   return x;
>}
>
>Adding 1.5 * 2^52 will produce a result that is rounded to an integer
>number. I can't call this a rounding error (or "uncertainty") since it is
>exactly the effect that I wanted to achieve. 

 The uncertainty is in the representation of the original real value, 
 say 1/10 or pi (or pi*10^14 in your case), as a float (or a double). 
 
jac@ibms48.scri.fsu.edu (Jim Carr) wrote:
|
| It could be useful to adopt the terminology of experimental science 
| here, and use "uncertainty" to denote the variance between the 
| results of real arithmetic and floating point arithmetic done 
| 'correctly' in a particular model.  The two have much in common, 
| since one propagates those uncertainties in a similar way, and 
| since the confusion associated with calling them "errors" is 
| equally strong.  And both are poorly known.  ;-) 

>That term "uncertainty" makes things much much worse. The difference
>between well designed floating point arithmetic and real arithmetic is in
>no way uncertain. It is absolutely certain and predictable. 

 So is the process of measurement with a device with less precision 
 than necessary to make the measurement.  For example, in your case 
 there is no uncertainty in the integer part of a 14 digit real in 
 a double precision IEEE representation that has been properly 
 rounded, but there is an uncertainty if it was stored as a float. 
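
 A short C demonstration of that last point (the value is mine, chosen 
 only for illustration): a 15-digit integer survives a double's 53-bit 
 significand but not a float's 24 bits. 

 #include <stdio.h>

 int main(void)
 {
     double d = 123456789012345.0;  /* an integer < 2^53: exact in double */
     float  f = (float) d;          /* ulp near 2^47 is 2^23 = 8388608    */

     printf("%.1f\n", d);           /* 123456789012345.0 */
     printf("%.1f\n", (double) f);  /* differs by up to ~4 million */
     return 0;
 }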

>For example, with IEEE arithmetic it is relatively easy to compute the
>difference between a+b (using real arithmetic) and a+b (using IEEE
>arithmetic) _exactly_. But you cant call something "uncertainty" that can
>be computed exactly. 

 The uncertainty is that more than one value of a and b will give 
 the same a+b result on the IEEE side. 

-- 
 James A. Carr   <jac@scri.fsu.edu>     | Commercial e-mail is _NOT_ 
    http://www.scri.fsu.edu/~jac/       | desired to this or any address 
 Supercomputer Computations Res. Inst.  | that resolves to my account 
 Florida State, Tallahassee FL 32306    | for any reason at any time. 




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-21  0:00                                     ` Jim Carr
@ 1997-08-21  0:00                                       ` Robert Dewar
  1997-08-22  0:00                                         ` Jim Carr
  1997-08-23  0:00                                         ` W. Wesley Groleau x4923
  0 siblings, 2 replies; 105+ messages in thread
From: Robert Dewar @ 1997-08-21  0:00 UTC (permalink / raw)



<< So is the process of measurement with a device with less precision
 than necessary to make the measurement.  For example, in your case
 there is no uncertainty in the integer part of a 14 digit real in
 a double precision IEEE representation that has been properly
 rounded, but there is an uncertainty if it was stored as a float.>>


I strongly agree with Christian here, uncertainty is an even WORSE
term than round off error.

A common viewpoint of floating-point arithmetic held by many who don't
know too much about it is that somehow floating-point arithmetic is
inherently (slightly) unreliable and can't be trusted. The term
uncertainty encourages this thoroughly unhelpful attitude.





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-21  0:00                                       ` Robert Dewar
@ 1997-08-22  0:00                                         ` Jim Carr
  1997-08-22  0:00                                           ` Robert Dewar
  1997-08-23  0:00                                         ` W. Wesley Groleau x4923
  1 sibling, 1 reply; 105+ messages in thread
From: Jim Carr @ 1997-08-22  0:00 UTC (permalink / raw)



dewar@merv.cs.nyu.edu (Robert Dewar) writes:
>
><< So is the process of measurement with a device with less precision
> than necessary to make the measurement.  For example, in your case
> there is no uncertainty in the integer part of a 14 digit real in
> a double precision IEEE representation that has been properly
> rounded, but there is an uncertainty if it was stored as a float.>>
>
>I strongly agree with Christian here, uncertainty is an even WORSE
>term than round off error.

 I wrote the above, not Christian. 

 I prefer uncertainty because, in a university context, it is 
 familiar to students from their chemistry and physics labs and 
 the rules for propagating it are the same.  The tradition in 
 numerical analysis has always (?) been to identify two kinds 
 of "error" -- formula and round-off -- that compete in any 
 given kind of calculation.  Depends on the audience. 

>A common viewpoint of floating-point arithmetic held by many who don't
>know too much about it is that somehow floating-point arithmetic is
>inherently (slightly) unreliable and can't be trusted. The term
>uncertainty encourages this totally unuseful attitude.

 Are you saying that the floating point representation is not 
 an approximation to the real number being stored?  I don't 
 think so.  I think you are saying that the results of floating 
 point operations on numbers in that floating point representation 
 are deterministic.  I agree that they are.  The point is that 
 the difference between the real number being represented in the 
 machine and a particular floating point representation will be 
 propagated by the "exact" procedure and sometimes dramatically 
 increase the difference between the result of real arithmetic 
 on the original real number and the result of floating point 
 arithmetic on the original fl-pt representation. 

 The propagation of this uncertainty also follows deterministic 
 (albeit statistical) rules quite familiar from the sciences 
 in which measurement is commonly used.  I do not see how you 
 can claim that the difference between the result of real 
 arithmetic and floating point arithmetic for a range of input 
 values will not show a distribution of the type normally 
 associated with measurement uncertainty.  It does. 

 In the example cited, the uncertainty in one possible representation 
 (float) is greater than in another (double), with the result that 
 the desired integer conversion is not exact in some cases where the 
 original author claimed it would be.  (The example also appeared 
 to ignore the minimum size of int mandated by C and present in 
 many common implementations, but that is irrelevant here.)  This 
 is obvious if one knows the relative uncertainty in those two 
 IEEE representations of real numbers. 

-- 
 James A. Carr   <jac@scri.fsu.edu>     | Commercial e-mail is _NOT_ 
    http://www.scri.fsu.edu/~jac/       | desired to this or any address 
 Supercomputer Computations Res. Inst.  | that resolves to my account 
 Florida State, Tallahassee FL 32306    | for any reason at any time. 




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-22  0:00                                         ` Jim Carr
@ 1997-08-22  0:00                                           ` Robert Dewar
  1997-08-23  0:00                                             ` Jim Carr
  0 siblings, 1 reply; 105+ messages in thread
From: Robert Dewar @ 1997-08-22  0:00 UTC (permalink / raw)



Jim Carr said

<<>I strongly agree with Christian here, uncertainty is an even WORSE
>term than round off error.

 I wrote the above, not Christian.>>



You misread my message. I did not say that Christian recommended the
term uncertainty; on the contrary, he objected to it strongly, and so do I,
which is why I was agreeing with him in disagreeing with you. Sorry for
the confusion, but I still strongly disagree with the term. For the reasons I
gave earlier, it is even worse than error. There is no uncertainty
in the results of IEEE operations.





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-22  0:00                                           ` Robert Dewar
@ 1997-08-23  0:00                                             ` Jim Carr
  1997-08-24  0:00                                               ` Robert Dewar
  0 siblings, 1 reply; 105+ messages in thread
From: Jim Carr @ 1997-08-23  0:00 UTC (permalink / raw)



dewar@merv.cs.nyu.edu (Robert Dewar) writes:
>
> There is no uncertainty in the results of IEEE operations.

 The term "rounding error" and the alternative "rounding uncertainty" 
 both refer to the fact that the precise and deterministic result of 
 the conversion of a real number to a floating point value gives 
 a bit pattern that *also* results from the precise and deterministic 
 conversion of an uncountably infinite number of other real numbers. 

 It implies no prejudice concerning whether this is good or bad, 
 because it is inevitable.  That is what it has in common with 
 random (not systematic) measurement uncertainties, besides the 
 fact that students are familiar with the latter and propagation 
 of those uncertainties, that arise from the intrinsic limitations 
 in the precision of measurement apparatus -- which is why I have 
 found it to be a helpful alternative, a synonym as it were, when 
 introducing the concept to non-math-major (usually CS) students.  

 The difference is inevitable.  It does exist.  It has important 
 consequences when interpreting the result of a calculation that 
 is being used as a substitute for working with real numbers, a 
 major reason computers exist.  If you do not like the name, 
 propose another -- but do not pretend that it does not happen. 

-- 
 James A. Carr   <jac@scri.fsu.edu>     | Commercial e-mail is _NOT_ 
    http://www.scri.fsu.edu/~jac/       | desired to this or any address 
 Supercomputer Computations Res. Inst.  | that resolves to my account 
 Florida State, Tallahassee FL 32306    | for any reason at any time. 




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-21  0:00                                       ` Robert Dewar
  1997-08-22  0:00                                         ` Jim Carr
@ 1997-08-23  0:00                                         ` W. Wesley Groleau x4923
  1997-08-23  0:00                                           ` Robert Dewar
  1 sibling, 1 reply; 105+ messages in thread
From: W. Wesley Groleau x4923 @ 1997-08-23  0:00 UTC (permalink / raw)



> A common viewpoint of floating-point arithmetic held by many who don't
> know too much about it is that somehow floating-point arithmetic is
> inherently (slightly) unreliable and can't be trusted. ....

But you and others seem to say that it IS unreliable because it is
usually written by coders who don't know too much about it.  Are you
now saying that these people hold the view that it is unreliable but
they use it anyway? 

-- 
----------------------------------------------------------------------
    Wes Groleau, Hughes Defense Communications, Fort Wayne, IN USA
Senior Software Engineer - AFATDS                  Tool-smith Wanna-be
                    wwgrol AT pseserv3.fw.hac.com

Don't send advertisements to this domain unless asked!  All disk space
on fw.hac.com hosts belongs to either Hughes Defense Communications or 
the United States government.  Using email to store YOUR advertising 
on them is trespassing!
----------------------------------------------------------------------




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-12  0:00                     ` Martin Tom Brown
@ 1997-08-23  0:00                       ` W. Wesley Groleau x4923
  1997-08-23  0:00                         ` Robert Dewar
  1997-09-05  0:00                         ` Robert I. Eachus
  0 siblings, 2 replies; 105+ messages in thread
From: W. Wesley Groleau x4923 @ 1997-08-23  0:00 UTC (permalink / raw)



> > > Well, all fairness aside, I was trying to quantify what cases 
> > > DON'T need deeper numerical analysis.
> >
> > I realize that.  What I've been arguing against is the implication
> > by some that _all_ cases need deeper numerical analysis.  Perhaps
> > I'm just misinterpreting what they are saying.

Let me put my original question another way.  Can we (as Sam 
suggested above) come up with something _at_the_level_of_ Ada 
Quality and Style that tells the "analysis-challenged" when to 
call for help--rather than telling them to never use floating point?

-- 
----------------------------------------------------------------------
    Wes Groleau, Hughes Defense Communications, Fort Wayne, IN USA
Senior Software Engineer - AFATDS                  Tool-smith Wanna-be
                    wwgrol AT pseserv3.fw.hac.com

Don't send advertisements to this domain unless asked!  All disk space
on fw.hac.com hosts belongs to either Hughes Defense Communications or 
the United States government.  Using email to store YOUR advertising 
on them is trespassing!
----------------------------------------------------------------------




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
       [not found]                     ` <33FE4603.1B6B@pseserv3.fw.hac.com>
@ 1997-08-23  0:00                       ` Robert Dewar
  0 siblings, 0 replies; 105+ messages in thread
From: Robert Dewar @ 1997-08-23  0:00 UTC (permalink / raw)



Wes said

<<There's another long thread about re-use without proper analysis
going right now (which just happens to have a floating point
aspect to it).....

IF the claim is true that no one should use floating point
   without significant training in analysis
THEN
   no one should re-use floating point
   without significant training in analysis

so even though this is good advice, if some people are to be
believed, you have to be able to write your own before you
can use someone else's because whoever wrote it probably was
not qualified to avoid catastrophe.>>

I am not sure whether a smiley was missing, but I am assuming that
Wes is making a serious point here. If so, it is very wrong: there are
well established numerical libraries that can be used with confidence.





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-23  0:00                                         ` W. Wesley Groleau x4923
@ 1997-08-23  0:00                                           ` Robert Dewar
  0 siblings, 0 replies; 105+ messages in thread
From: Robert Dewar @ 1997-08-23  0:00 UTC (permalink / raw)



Wes said

<<But you and others seem to say that it IS unreliable because it is
usually written by coders that don't know too much about it.  Are you
now saying that these people hold the view that it is unreliable but
they use it anyway?
>>

I don't know about "usually", this is your judgment, not mine. There are
lots of people writing floating-point who *do* know what they are doing.
Any feature used by people who do not know what they are doing can
prove unreliable (e.g. tasking in Ada is a good example, but there are
many others).






^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-23  0:00                       ` W. Wesley Groleau x4923
@ 1997-08-23  0:00                         ` Robert Dewar
  1997-09-05  0:00                         ` Robert I. Eachus
  1 sibling, 0 replies; 105+ messages in thread
From: Robert Dewar @ 1997-08-23  0:00 UTC (permalink / raw)



Wes says

<<Let me put my original question another way.  Can we (as Sam
suggested above) come up with something _at_the_level_of_ Ada
Quality and Style that tells the "analysis-challenged" when to
call for help--rather than telling them to never use floating point?>>

The general rule is: don't use features of a language that you do not
fully understand. I don't really see such a guideline as useful in 
AQ&S. And actually I don't think there is much helpful that one can
put in AQ&S. Remember that AQ&S assumes that you *know* Ada. In the
case of floating-point, it should assume that you know the Ada features
in this area, and fully understand the floating-point model and the
associated attributes.

AQ&S is NOT a tutorial in Ada!

My own feeling is that everyone doing any programming should know enough
about floating-point and numerical analysis to answer Wes' question
for themselves. If they don't then they have a gap in their technical
education -- that's not so terrible, we all have such gaps (I personally
wish I had studied mathematical logic more deeply).

But you need to be aware of the gaps you have, and either work to plug
them, or steer clear of situations where they get you into trouble!

I understand that people are still wanting some simple guidelines here,
but isn't it interesting that many of those who might know enough to
write such guidelines are exactly the people saying that they cannot
be usefully written!

Note that AQ&S is about improving quality through use of good style. What
we are talking about here is substance, not style, and as I said earlier,
I don't think that AQ&S is successful when it crosses this line (which it
does occasionally in various areas).





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-23  0:00                                             ` Jim Carr
@ 1997-08-24  0:00                                               ` Robert Dewar
  1997-08-29  0:00                                                 ` Andrew V. Nesterov
                                                                   ` (2 more replies)
  0 siblings, 3 replies; 105+ messages in thread
From: Robert Dewar @ 1997-08-24  0:00 UTC (permalink / raw)



James Carr says

<< The difference is inevitable.  It does exist.  It has important
 consequences when interpreting the result of a calculation that
 is being used as a substitute for working with real numbers, a
 major reason computers exist.  If you do not like the name,
 propose another -- but do not pretend that it does not happen.
>>


I think you should assume that I do understand how floating-point works,
and I have written hundreds of thousands of lines of numerical Fortran
code, carefully analyzed, much of which is still in use today, even
though that was some thirty years ago!

The point, which perhaps you just don't see, because I assume you also
understand floating-point well, is that the discrepancies that occur
when an arbitrary decimal number is converted to binary floating-point
are not errors. An error is something wrong. There
is nothing wrong here: you have made the decision to represent a decimal
value in binary form, and used a method that gives a very well defined
result. If this is indeed an "error", then please don't use methods that
are in error; represent the number some other way. If, on the other hand,
your analysis shows that the requirements of the calculation are met by
the use of this conversion, then there is no error here!

Similarly when we write a := b + c, the result in a is of course not the
mathematical result of adding the two real numbers represented by b and c,
but there is no "error". An error again is something wrong. If an addition
like that is an error, i.e. causes your program to malfunction, then don't
do the addition.

If on the other hand, your analysis shows that it is appropriate to perform
the computation a = b + c, where a is defined as the result required by
IEEE854, then there is no error.

Yes, it is useful to have a term to describe the difference between the
IEEE result and the true real arithmetic result, but it is just a
difference, and perhaps it would be better if this value had been called
something like "rounding difference", i.e. something more neutral than
error.

The trouble with the term error, is that it feeds the impression, seen
more than once in this thread, that floating-point is somehow unreliable
and full of errors etc.

Go back to the original subject line of this thread for a moment.

Is it wrong to say

   if a = b then

where we know exactly how a and b are represented in terms of IEEE854, and
we know the IEEE854 equality test that will be performed?

The answer is that it might be wrong or it might be right. It has a well
defined meaning, and if that meaning is such that your program works, it
is right; if that meaning is such that your program fails, then it is wrong.

But --- we could make the same statement about any possible programming
language statement we might write down.

There is nothing inherently wrong, bad, erroneous, or anything else like
that about floating-point.

Now one caveat is that we often do NOT know how statements we write will
map into IEEE854 operations unless we are operating in a well defined
environment like SANE. As I have noted before, this is a definite gap
in programming language design, and is specifically the topic that my
thesis student Sam Figueroa is working on.





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-24  0:00                                               ` Robert Dewar
@ 1997-08-29  0:00                                                 ` Andrew V. Nesterov
  1997-08-29  0:00                                                   ` Robert Dewar
       [not found]                                                 ` <5u4eq6$30b$1@news.lth.se>
  1997-09-01  0:00                                                 ` Jim Carr
  2 siblings, 1 reply; 105+ messages in thread
From: Andrew V. Nesterov @ 1997-08-29  0:00 UTC (permalink / raw)



In article <dewar.872395110@merv>, dewar@merv.cs.nyu.edu (Robert Dewar) wrote:
>James Carr says
[...]
>The point, which perhaps you just don't see, because I assume you also
>understand floating-point well, is that I think the term "error" for
>the discrepancies that occur when an arbitrary decimal number is converted
>to binary floating-point are not errors. An error is something wrong. There
>is nothing wrong here, you habve made the decision to represent a decimal
>value in binary form, and used a method that gives a very well defined
>result. If this is indeed an "error", then please don't use methods that
>are in error, represent the number some other way. If on the other hand,
>your analysis shows that the requirements of the calculation are met by
>the use of this conversion, then there is no error here!
>
>Similarly when we write a := b + c, the result in a is of course not the
>mathematical result of adding the two real numbers represented by b and c,
>but there is no "error". An error again is something wrong. If an addition
>like that is an error, i.e. causes your program to malfunction, then don't
>do the addition.
        The result in "a" of the above expression could be either exact
or rounded (i.e. inexact, approximate) in the sense of infinite
precision algebra. The question is whether the terms and the result
have a finite representation in the particular floating point model,
i.e. whether they fit completely into the given finite number of binary
(or other radix) digit places.

From a more general standpoint (Hans Olsson has already mentioned this)
there is a much simpler and more robust way of analyzing floating-point
computations, not bound to any particular radix. One says that the
floating point representation fl(x) of a number x is such that

                     fl(x) = x(1 + delta)

where delta is a number of small magnitude (in some definite sense), called
the roundoff error. Indeed, it does not mean that something wrong has been
done; it means that the floating point representation of an exact value is
known imprecisely. And indeed, this is quite similar to a measurement
process, where a result is also usually known imprecisely, or equivalently,
with an error. This analogy can even be extended to systematic
(or directed) and random errors: round to nearest produces
random errors, while truncation toward +-infinity introduces a
systematic error.
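
A small C sketch of the size of delta under round to nearest (an
illustration of the bound, not part of the analysis above): the loop
finds the spacing of doubles just above 1.0; the bound on |delta|, the
unit roundoff, is half of that.

#include <stdio.h>

int main(void)
{
    volatile double one_plus;   /* volatile: defeat extended-precision
                                   registers on x87-style hardware */
    double eps = 1.0;

    do {
        eps /= 2.0;
        one_plus = 1.0 + eps;
    } while (one_plus > 1.0);
    eps *= 2.0;                 /* the last value that made a difference */

    printf("%g\n", eps);        /* 2.22045e-16 = 2^-52 for IEEE double;
                                   so |delta| <= 2^-53 */
    return 0;
}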

I am a little skeptical about the "uncertainty" term that
was proposed by James A. Carr, because in quantum mechanics it refers
to a value that is not only "undetermined", i.e. unknown at
the moment, but moreover has no value at all, as with a coordinate
and momentum pair: if the former is known, the latter could have any
value.

The next step in error analysis is to suggest that a binary (i.e.
involving two terms or factors) operation, the operands of which
are in the floating-point domain, is performed exactly and the final
result is rounded (* stands for any binary operation):

              fl(x*y) = (x*y)(1 + delta)

This view eliminates any difference between so-called "real" numbers
and "fp" numbers, which are simply a subset of the former, wider set,
and are by no means "artificial" or "unreal". A complete
discussion of this method is in chapter 20 of G.E. Forsythe and
C.B. Moler, "Computer Solution of Linear Algebraic Systems", Prentice-
Hall, 1967 -- long before the IEEE standard!

This way of analyzing floating point numbers and operations is invariant
with respect to the radix or precision of the fp arithmetic, and is correct
not just for the IEEE standards. Those were adopted relatively recently,
while many computational programs based on the above analysis have long
been working well on many different architectures.

>
>If on the other hand, your analysis shows that it is appropriate to perform
>the computation a = b + c, where a is defined as the result required by
>IEEE854, then there is no error.
        Once again, as I am sure somebody has already noticed,
the standard was drawn up merely to standardize the parameters of floating
point arithmetic -- ranges, mantissa length, the gradual underflow process,
exception signals, etc. -- because back then in the 70s (as now) there were
plenty of different fp implementations, although they all behave more
or less similarly. By no means will IEEE save computations from roundoff
errors!

>
>Yes, it is useful to have a term to describe the difference between the
>IEEE result and the true real arithmetic result, but it is just a
>difference, and perhaps it would be better if this value had been called
>something like "rounding difference", i.e. something more neutral than
>error.
>
        The term "difference" is already used in other terms of numerical 
computations, e.g. "finite-difference methods", "finite-difference
equations", why make things even more tangled?

>The trouble with the term error, is that it feeds the impression, seen
>more than once in this thread, that floating-point is somehow unreliable
>and full of errors etc.

The strength of the term is that it can be smoothly combined with other
sources of error in any calculation, not only computer
calculations. A model for a calculation can be imprecise, i.e. have
errors; the input data can be imprecise as well; thus the result is
computed imprecisely, that is, with errors from the model and the input
data. All these errors (of model, input data and fp arithmetic) can be
compared and analyzed to estimate how close the computed result is
to a perfect one.

>
>Go back to the original subject line of this thread for a moment.
>
>Is it wrong to say
>
>   if a = b then
>
>where we know exactly how a and b are represnted in terms of IEEE854, and
>we know the IEEE854 equality test that will be performed/
>
>The answer is that it might be wrong or it might be right. it has a well
>defined meaning, and if that meaning is such that your program works, it
>is right, if that meaning is such that your program fails, then it is wrong.
>

Yes, indeed the equality test has a well-defined meaning, although it can
be dangerous in naive usage, because
A and B usually (or possibly) are the results of a great deal of calculation
involving roundoffs. The probability that A and B coincide in ALL
binary (or whatever) places is just very small. On the other hand it can
be very well justified; as an example, the tests for iteration termination
in the EISPACK codes (e.g. TSTURM) can be mentioned. Funnily enough, right in
those tests there is also an excellent example of how roundoff
error works to ensure the iterations converge. Further, one can easily figure
out how some kinds of optimization could corrupt those tests, and how to
prevent the optimizer from doing so.

>But --- we could make the same statement about any possible programming
>language statement we might write down.
>
>There is nothing inherently wrong, bad, erroneous, or anything else like
>that about floating-point.
>
>Now one caveat is that we often do NOT know how statements we write will
>map into IEEE854 operations unless we are operating in a well defined
>environment like SANE. As I have noted before, this is a definite gap
>in programming language design, and is specifically the topic that my
>thesis student Sam Figueroa is working on.
>





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
       [not found]                                                 ` <5u4eq6$30b$1@news.lth.se>
@ 1997-08-29  0:00                                                   ` Robert Dewar
  1997-09-01  0:00                                                     ` Chris RL Morgan
  0 siblings, 1 reply; 105+ messages in thread
From: Robert Dewar @ 1997-08-29  0:00 UTC (permalink / raw)



<<You're using the wrong definition of 'error'. Webster gives:

error

 4 a : the difference between an observed or calculated value and a true
value; specifically : variation in measurements, calculations, or
   observations of a quantity due to mistakes or to uncontrollable factors
---

This explains the use of the term 'error' in numerical analysis and
gives that round-off error can be defined as the difference between
the calculated value and the true mathematical value.>>



It is especially difficult for non-native speakers of a language to
accurately understand the connotation of words, as opposed to the
denotation. Dictionaries tend to concentrate only on the denotation,
especially if you just quote the definition -- you can sometimes get
a better feel for the connotation by looking through the references in
a comprehensive dictionary (really the only one for the English language
is the full edition of the OED).

The problem with the word error is that, while of course everyone knows
its denotation (certainly you don't have to go to a dictionary for that),
all native speakers of English also very much know its connotation, which
is bad, horrible, something-to-avoid, evil, unacceptable ... i.e. uniformly
negative (for a feeling of this connotation have a look at the use of the
word error by Mary Baker Eddy, who essentially uses it as a synonym for
what other religions call sin -- now to be fair it is not as negative as
sin, and that was MBE's reason for arguing for using the word error instead,
but it most assuredly is something to avoid).

Of course the phrase "roundoff error" is denotationally appropriate, and
is not about to mislead people who understand FPT semantics and numerical
analysis.

But as this whole thread has made clear, there are a lot of people who
(a) want to use fpt, and (b) meet neither of these criteria.

For such people, the use of the word error is in practice damaging because
it suggests there is something wrong with floating-point, and it leads to
superstitious mumbo-jumbo like "always use Float'Epsilon when comparing
any two floating-point quantities", to avoid the dreaded ERROR!

As I have repeatedly said, I do not seriously suggest the entire community
change its terminology, that is of course impossible, but I am suggesting
that by concentrating on the effect of this word, and suggesting that a
more neutral word like discrepancy would have been better (note that your
dictionary definition would apply almost unchanged to discrepancy), we might
avoid some of this superstition!

Please understand that my point here is not technical. I perfectly well
understand how fpt works, and have written large numerical codes, properly
and carefully analyzed with respect to "error" propagation, that are still
in wide use. My point is entirely about trying to get people to understand
that the use and implementation of floating-point is well defined in 
computer systems, and does not involve "error" in the informal connotative
sense.





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-29  0:00                                                 ` Andrew V. Nesterov
@ 1997-08-29  0:00                                                   ` Robert Dewar
       [not found]                                                     ` <340DF1DD.2736@iop.com>
  0 siblings, 1 reply; 105+ messages in thread
From: Robert Dewar @ 1997-08-29  0:00 UTC (permalink / raw)



<<The next step in error analysis is to suggest that a binary (i.e.
involving two terms or factors) operation, the operands of which
are in the floating-point domain, is performed exactly and the final
result is rounded (* stands for any binary operation)

              fl(x*y) = (x*y)(1 + delta)

This eliminates any difference between so-called "real" numbers
and "fp" numbers, which are simply a subset of the former, wider set,
and are by no means "artificial" or "unreal". Complete>>


Yes, I think we all know this :-)

I do not think we need an elementary lesson in error analysis here, especially
one so simplistic as the one you gave. That is not the issue that we are
discussing!

As you say, this material can be read in many textbooks (I have used
many of them to teach courses in numerical analysis and floating point
computation, and found good ones, although my awareness of such books
is a bit out of date now; it is a while since I taught NA :-)





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-18  0:00                               ` Robert Dewar
  1997-08-19  0:00                                 ` Jim Carr
  1997-08-19  0:00                                 ` Hans Olsson
@ 1997-08-30  0:00                                 ` Paul Eggert
  2 siblings, 0 replies; 105+ messages in thread
From: Paul Eggert @ 1997-08-30  0:00 UTC (permalink / raw)



dewar@merv.cs.nyu.edu (Robert Dewar) writes:
> This is a matter of binding of the language you are using to IEEE.
> The set of IEEE operations contains no such uncertainty,
> and a decent binding of a high level language
> to IEEE (e.g. SANE from Apple) must avoid such uncertainties.

But SANE never really caught on, nor did LIA-1 (the ISO standard
with weaker goals along this line).  Current Ada, C, Fortran, etc.
programs must therefore cope with this ``binding uncertainty''.

Let me elaborate for a bit.

One of the goals for IEEE floating point was to have floating point
operations be bit-for-bit compatible across hosts.  The sloppiness
of language bindings is blocking us from reaching this goal in
practice.  It's obvious why bindings are sloppy: this lets
implementations (particularly x86 implementations) achieve much
better performance.

For the vast majority of floating point applications, performance is
more important than bit-for-bit compatibility, so it's easy to see why
bit-for-bit compatibility has fallen by the wayside.

Java has tried to fix this problem by insisting on strict
IEEE double->double arithmetic, but unfortunately this is
impossible to implement efficiently on the x86 architecture.
I believe most x86 Java implementers just fudge things:
that is, they don't quite exactly conform to the Java/IEEE standard.

(In an earlier part of this thread, you said that it was possible
to do double->double IEEE arithmetic efficiently on the x86,
but I read the results in your ex-grad-student's rounding paper,
and nice as they are, they still don't solve the problem.
With IEEE double destinations, you'd need a 108-bit fraction in your
long registers to avoid double-rounding problems, since rounding first
to p bits and then to q bits matches direct rounding to q bits only
when p >= 2q + 2, and 2*53 + 2 = 108; the x86 long-double fraction is
much smaller than that.)

We'll have to live with this problem for many, many years.
It's not going away soon.  So it's better to deal with it
than to wish it away with ``it's just a matter of binding''.




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-29  0:00                                                   ` floating point comparison Robert Dewar
@ 1997-09-01  0:00                                                     ` Chris RL Morgan
  0 siblings, 0 replies; 105+ messages in thread
From: Chris RL Morgan @ 1997-09-01  0:00 UTC (permalink / raw)





>The problem with the word error is that, while of course everyone knows
>its denotation (certainly you don't have to go to a dictionary for that),
>all native speakers of English also very much no its connotation, which
>is bad, horrible, something-to-avoid, evil, unacceptable ... i.e. uniformly
>negative (for a feeling of this connotation have a look at the use of the
>word error by Mary Baker Eddy, who essentially uses it as a synonym for
>what other religeons call sin -- now to be fair it is not as negative as
>sin, and that was MBE's reason for arguing for using the word error instead,
>but it most assuredly is something to avoid).

I don't find this connotation that strong when I read the word in this
context.  Thinking about it, though, I am sure that is because the word
is used heavily in systems theory to explain feedback, where it does not
imply anything "bad".  Someone coming upon the word without that kind of
technical background (in my case a Mech Eng degree) would indeed pick up
on the everyday connotations and get the wrong idea.  This had never
occurred to me before.  More than ever I think the guys who invented the
quark property names (strangeness and charm) had the right idea :-)

Any suggestions for Ada technical terms that have unfortunate connotations?

Chris




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-24  0:00                                               ` Robert Dewar
  1997-08-29  0:00                                                 ` Andrew V. Nesterov
       [not found]                                                 ` <5u4eq6$30b$1@news.lth.se>
@ 1997-09-01  0:00                                                 ` Jim Carr
       [not found]                                                   ` <checkerEFx6xI.FCM@netcom.com>
  1997-09-05  0:00                                                   ` Robert Dewar
  2 siblings, 2 replies; 105+ messages in thread
From: Jim Carr @ 1997-09-01  0:00 UTC (permalink / raw)



James Carr says
|
| The difference is inevitable.  It does exist.  It has important
| consequences when interpreting the result of a calculation that
| is being used as a substitute for working with real numbers, a
| major reason computers exist.  If you do not like the name,
| propose another -- but do not pretend that it does not happen.

dewar@merv.cs.nyu.edu (Robert Dewar) writes:
>
>I think you should assume that I do understand how floating-point works,

 Understood (from the beginning). 

>The point, which perhaps you just don't see, because I assume you also
>understand floating-point well, is that I think the discrepancies that
>occur when an arbitrary decimal number is converted to binary
>floating-point are not "errors". An error is something wrong. 

 Yes, in one of its meanings.  That this is the most commonly 
 understood meaning among the general public, even though it 
 is not the meaning used in the sciences, is why I seek good 
 synonyms that will clarify its meaning in numerical computation. 

>There
>is nothing wrong here, you have made the decision to represent a decimal
>value in binary form, and used a method that gives a very well defined
>result. 

 Not quite.  I have made the decision to represent a real number in 
 a way restricted to a finite number of digits.  It does not matter 
 if they are binary or decimal.  (Yes, I know what you probably
 meant to write, but saying "decimal value" carries various meanings 
 as well.)  

>Similarly when we write a := b + c, the result in a is of course not the
>mathematical result of adding the two real numbers represented by b and c,
>but there is no "error". An error again is something wrong. 

 But now something new happens, because a is not the representation 
 of the real number (b+c).  That is the important fact and the 
 one that *demands* that this quantity have a name so it can be 
 discussed and analyzed. 
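
 A small Ada illustration of that difference (assuming Float is a
 binary IEEE-style type, as on most machines today):

    declare
       Tenth : constant Float := 0.1;  -- not exact in binary
       Sum   : Float := 0.0;
    begin
       for I in 1 .. 10 loop
          Sum := Sum + Tenth;          -- each addition is rounded
       end loop;
       --  The real-arithmetic sum is exactly 1.0, yet here the test
       --  Sum = 1.0 will most likely be False.
    end;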

>If on the other hand, your analysis shows that it is appropriate to perform
>the computation a = b + c, where a is defined as the result required by
>IEEE854, then there is no error.

 That analysis must include the effect of the propagation of the 
 quantity known as "roundoff error" in numerical analysis through 
 those calculations, with emphasis on whether it gets bigger or not 
 and whether those changes are acceptable.  

>Yes, it is useful to have a term to describe the difference between the
>IEEE result and the true real arithmetic result, but it is just a
>difference, and perhaps it would be better if this value had been called
>something like "rounding difference", i.e. something more neutral than
>error.

 That is a useful suggestion.  However, the field itself uses only 
 "rounding error" so it is important that programmers learn what 
 numerical analysts mean when they say this and that the term 
 "error" does not convey any value judgement.  

 The value judgement is made when one says that the rounding 
 error is unacceptably large.  After all, there are problems 
 (with the very value-loaded name "ill conditioned") where 
 that rounding difference results in nicely deterministic 
 results that are always the same and always have essentially 
 no relation to the answer found with real arithmetic. 

 My suggestion would be to use difference as part of the arsenal 
 (with the laboratory experience with "uncertainty" as another 
 tool in that arsenal) one might use to teach what rounding error 
 is and when it can become significant. 

-- 
 James A. Carr   <jac@scri.fsu.edu>     | Commercial e-mail is _NOT_ 
    http://www.scri.fsu.edu/~jac/       | desired to this or any address 
 Supercomputer Computations Res. Inst.  | that resolves to my account 
 Florida State, Tallahassee FL 32306    | for any reason at any time. 




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
       [not found]                                                   ` <checkerEFx6xI.FCM@netcom.com>
@ 1997-09-03  0:00                                                     ` Chris L. Kuszmaul
  1997-09-05  0:00                                                       ` Malome Khomo
  1997-09-07  0:00                                                       ` Robert Dewar
  0 siblings, 2 replies; 105+ messages in thread
From: Chris L. Kuszmaul @ 1997-09-03  0:00 UTC (permalink / raw)



In article <checkerEFx6xI.FCM@netcom.com> checker@netcom.com (Chris Hecker) writes:
<snip>
>
>I don't mean to flame here, because I agree that precise terminology is
>very important, but do you guys (or anyone) actually have any useful
>and applicable hints for someone trying to make numerical code work? 


  If you are doing matrix operations, then you can get good
error estimates from commercial packages without doing much work.
If your error margins are large, then you may need to find out the 
condition number of your matrix (or matrices), and see if you can 
reduce this. It may turn out that the nature of the problem you
are trying to solve is fundamentally prone to error.

>
>Completely analyzing a quadratic equation makes for a great Chapter 1
>(as does a mention of the e^-x series and the other classics), but I've
>got the results of a quadratic solve determining some elements of
>vectors that get fed through a gaussian eliminator and they're supposed
>to be compatible systems but the input data is slightly off and...you
>get the point.
>

  I would begin with the matrix you generate with your quadratic solve.
I would get hold of a commercial linear system solver package and
find out the estimated condition number of the matrix you are doing
Gaussian elimination on. If this shows a near-singular matrix, then
one problem is in the creation of the matrix (or something fundamental
about your problem). If the matrix is well conditioned, then you need
to look more closely at errors introduced in the 'quadratic solve'.
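
  A toy Ada sketch of the idea for a 2x2 matrix, forming the 1-norm
condition number directly (in real codes you would take the estimate
from the solver package instead; all names here are illustrative):

   declare
      type Matrix is array (1 .. 2, 1 .. 2) of Float;
      A    : constant Matrix := ((1.0, 2.0), (3.0, 4.0));
      Det  : constant Float  :=
        A (1, 1) * A (2, 2) - A (1, 2) * A (2, 1);
      Inv  : constant Matrix :=        -- explicit 2x2 inverse
        ((A (2, 2) / Det, -A (1, 2) / Det),
         (-A (2, 1) / Det, A (1, 1) / Det));
      Cond : Float;
      function One_Norm (M : Matrix) return Float is
         Best : Float := 0.0;
         Col  : Float;
      begin
         for J in 1 .. 2 loop          -- maximum absolute column sum
            Col := abs M (1, J) + abs M (2, J);
            if Col > Best then
               Best := Col;
            end if;
         end loop;
         return Best;
      end One_Norm;
   begin
      Cond := One_Norm (A) * One_Norm (Inv);
      --  a large Cond flags a nearly singular, error-prone system
   end;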

CLK






^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-08-23  0:00                       ` W. Wesley Groleau x4923
  1997-08-23  0:00                         ` Robert Dewar
@ 1997-09-05  0:00                         ` Robert I. Eachus
  1997-09-06  0:00                           ` schlafly
                                             ` (2 more replies)
  1 sibling, 3 replies; 105+ messages in thread
From: Robert I. Eachus @ 1997-09-05  0:00 UTC (permalink / raw)




In article <33FE4D83.4DCD@pseserv3.fw.hac.com> "W. Wesley Groleau x4923" <wwgrol@pseserv3.fw.hac.com> writes:

  > Let me put my original question another way.  Can we (as Sam 
  > suggested above) come up with something _at_the_level_of_ Ada 
  > Quality and Style that tells the "analysis-challenged" when to 
  > call for help--rather than telling them to never use floating point?

  For many calculations floating point offers convenience while
preserving more than enough numerical accuracy.  On other types of
calculations, fixed point accuracy is required.  (For example,
checking your bank statement can only be done correctly using fixed
point, and knowing the rules for computing interest on that account.)
If you can't tell which you should be using, get help fast.

   In general it is much more work to write fixed point than floating
point code, but especially in Ada, the error analysis is easy.  Errors
are only introduced by explicit conversions never by arithmetic
operations.  The extra work comes from defining the various types
and putting in the conversions.  It used to be difficult to use Ada
fixed point with some compilers, but in Ada 95, most vendors are
supporting the decimal types and 64-bit binary fixed point types.
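
   A small Ada 95 sketch of the bank-statement case (the type, its
delta, and the names are illustrative):

      declare
         type Money is delta 0.01 digits 15;   -- decimal fixed point
         Balance : constant Money := 1234.56;
         Rate    : constant Money := 0.05;     -- exactly representable
         Yearly  : Money;
      begin
         Yearly := Money (Balance * Rate);     -- rounding can occur
      end;                                     -- only at the conversion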

   (If that sounds like I recommend using fixed point everywhere, far
from it.  If you are building flight control system, for example, and
your control laws are stable over the envelope, then go ahead and use
floating point.  But if you are inverting large matrices or solving
linear programming problems, then the real work needs to be done not
only in fixed point, but with appropriate scaling, rebuilding of
bases, etc.)

--

					Robert I. Eachus

with Standard_Disclaimer;
use  Standard_Disclaimer;
function Message (Text: in Clever_Ideas) return Better_Ideas is...




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-09-03  0:00                                                     ` Chris L. Kuszmaul
@ 1997-09-05  0:00                                                       ` Malome Khomo
  1997-09-07  0:00                                                       ` Robert Dewar
  1 sibling, 0 replies; 105+ messages in thread
From: Malome Khomo @ 1997-09-05  0:00 UTC (permalink / raw)




On 3 Sep 1997, Chris L. Kuszmaul wrote:

> In article <checkerEFx6xI.FCM@netcom.com> checker@netcom.com (Chris Hecker) writes:
> <snip>
> >
> >I don't mean to flame here, because I agree that precise terminology is
> >very important, but do you guys (or anyone) actually have any useful
> >and applicable hints for someone trying to make numerical code work? 
> 
> 
>   If you are doing matrix operations, then you can get good
> ...<snip>

More hints for 'scalar' problems:
Conversely, if you are doing scalar operations and/or do not have access
to canned numerical packages, here are a few rules of thumb that can go
a long way:
1) Don't use floats to sequence with, because you then have to compare
   them for your stopping condition (the loop won't stop where you
   expect); an Ada version of this rule appears after the list:
	NEVER this: float seq; for(seq=0.0;seq<2.0;seq+=0.1){...}
        RATHER this: int seq; for(seq=0;seq<20;seq+=1){...}
2) When you must compare floats for equality, use integral values:
        double var; if( floor(var*1000)==ceil(1.23456*200) ) { ... }
	The expression is highly contrived, for illustration purposes.
3) If you are near critical regions, but must divide, consider using
   type systems that support arbitrary precision.  Presumably you do
   not need high precision _all_ the time.  These 'side' computations
   need special handling, usually separate linking and programming to
   integrate the result back into your floating point 'scalar'
   computation.
4) If you really must, you can even use an arithmetic package that
   performs exact arithmetic (+, * and - are always so; exact here
   means zero-remainder division over the rationals).  Do not try this
   on irrationals; if square root or some such thing features in your
   computation, go back and study numerical techniques.
5) The primary goal of all numerical methods is to converge as rapidly
   as possible to the desired solution.  Try to understand what your
   process is achieving and evaluate the benefit/cost of improving the
   algorithm.  As two extreme examples: if you are servicing a joystick
   gamer with graphics, it's not worth the trouble getting an exact
   value for a frame that will flash by and be forgotten.  If on the
   other hand you're to guide the MIR crew home, I'd pull out all the
   stops and get exactitude!
6) I hope you're also aware that anything you do outside the FPU will
   generally slow your computations by _many_ orders of magnitude.
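
An Ada version of rule 1, stepping an integer and deriving the float
from the counter (a minimal sketch; Seq and the bounds are
illustrative):

        declare
           Seq : Float;
        begin
           for Step in 0 .. 19 loop
              --  derive the float from the exact integer counter
              Seq := Float (Step) * 0.1;
              null;  -- loop body goes here
           end loop;
        end;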

The above may be obvious to most, but I hope it's helpful.

Malome Khomo





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-09-01  0:00                                                 ` Jim Carr
       [not found]                                                   ` <checkerEFx6xI.FCM@netcom.com>
@ 1997-09-05  0:00                                                   ` Robert Dewar
  1997-09-10  0:00                                                     ` Jim Carr
  1 sibling, 1 reply; 105+ messages in thread
From: Robert Dewar @ 1997-09-05  0:00 UTC (permalink / raw)



James said

<< Not quite.  I have made the decision to represent a real number in
 a way restricted to a finite number of digits.  It does not matter
 if they are binary or decimal.  (Yes, I know what you probably
 meant to write, but saying "decimal value" carries various meanings
 as well.)>>


Actually I meant decimal, see your original post, you specifically have
given an example of a number representable exactly in decimal, and not
in binary, and this is indeed a common case.

After all input to a program can never be real numbers, it can only be
representations of real numbers!





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-09-05  0:00                         ` Robert I. Eachus
@ 1997-09-06  0:00                           ` schlafly
  1997-09-09  0:00                             ` Robert Dewar
  1997-09-07  0:00                           ` M. J. Saltzman
  1997-09-11  0:00                           ` Robin Rosenberg
  2 siblings, 1 reply; 105+ messages in thread
From: schlafly @ 1997-09-06  0:00 UTC (permalink / raw)



In article <EACHUS.97Sep5165959@spectre.mitre.org>, eachus@spectre.mitre.org (Robert I. Eachus) writes:
>    In general it is much more work to write fixed point than floating
> point code, but especially in Ada, the error analysis is easy.  Errors
> are only introduced by explicit conversions never by arithmetic
> operations.

Not true.  Try computing 1/3 in fixed point.





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Get_Immediate
       [not found]                                                     ` <340DF1DD.2736@iop.com>
@ 1997-09-07  0:00                                                       ` Robert Dewar
  1997-09-07  0:00                                                       ` Get_Immediate Robert Dewar
  1997-09-08  0:00                                                       ` Get_Immediate J Giffen
  2 siblings, 0 replies; 105+ messages in thread
From: Robert Dewar @ 1997-09-07  0:00 UTC (permalink / raw)




<<I tried using Get_Immediate with GNAT (3.05) on Solaris and I didn't get
the results I wanted.

What I got:

My program reads each key, and I don't have to type Enter, but it also
echoes each key. When I type a key that sends an escape sequence, like
an arrow key, all of the codes in the sequence are echoed, e.g. ^[[A,
before my program gets the first character of the sequence. Also, some
keys, like Ctrl-C did not get read by my program.

What I want:

I want to read without any echoing and be able to get all of the codes
the keyboard can generate.

Can you tell me how to get these results?>>


You probably need to give more detail, what machine are you on? (remember
Solaris does not imply Sparc!)

Also, you need to say what interface you are using, since keyboard input
is often filtered in various ways by the environment. For example, the
situation using X directly on a Sun workstation may be radically different
from the situation of using a terminal emulator remotely (in the latter
case, the "codes" from the keyboard may be dependent on the terminal
emulator).

If the problem is echoing, then you may well want to use appropriate
OS dependent routines that are around in your environment to do exactly
what you want, using pragma Interface.
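
A minimal sketch of that approach, assuming a hypothetical C helper
set_raw_mode that puts the terminal in raw, no-echo mode (the helper
and its name are illustrative, not part of any standard library):

   procedure Set_Raw_Mode;
   pragma Interface (C, Set_Raw_Mode);
   --  binds the Ada name to the C routine; call Set_Raw_Mode once
   --  before the first Get_Immediate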

But in any case, update to a more recent version, 3.05 is pretty ancient
at this stage!





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-09-03  0:00                                                     ` Chris L. Kuszmaul
  1997-09-05  0:00                                                       ` Malome Khomo
@ 1997-09-07  0:00                                                       ` Robert Dewar
  1 sibling, 0 replies; 105+ messages in thread
From: Robert Dewar @ 1997-09-07  0:00 UTC (permalink / raw)



<<  If you are doing matrix operations, then you can get good
error estimates from commercial packages without doing much work.
If your error margins are large, then you may need to find out the
condition number of your matrix (or matrices), and see if you can
reduce this. It may turn out that the nature of the problem you
are trying to solve is fundamentally prone to error.>>


This point cannot be overemphasized. Nearly all standard numerical
calculations that most people typically need to perform have been
programmed to death over the years by people who really understand
this stuff (rounding errors, floating-point, etc.), and wherever
possible, you should borrow this expertise, and use standard library
routines, rather than rolling your own.





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Get_Immediate
       [not found]                                                     ` <340DF1DD.2736@iop.com>
  1997-09-07  0:00                                                       ` Get_Immediate Robert Dewar
@ 1997-09-07  0:00                                                       ` Robert Dewar
  1997-09-08  0:00                                                       ` Get_Immediate J Giffen
  2 siblings, 0 replies; 105+ messages in thread
From: Robert Dewar @ 1997-09-07  0:00 UTC (permalink / raw)




Jeff said

<<I tried using Get_Immediate with GNAT (3.05) on Solaris and I didn't get
the results I wanted.

What I got:

My program reads each key, and I don't have to type Enter, but it also
echoes each key. When I type a key that sends an escape sequence, like
an arrow key, all of the codes in the sequence are echoed, e.g. ^[[A,
before my program gets the first character of the sequence. Also, some
keys, like Ctrl-C did not get read by my program.

What I want:

I want to read without any echoing and be able to get all of the codes
the keyboard can generate.

Can you tell me how to get these results?>>


GNAT 3.05 is very far out of date, get a more recent version.





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-09-05  0:00                         ` Robert I. Eachus
  1997-09-06  0:00                           ` schlafly
@ 1997-09-07  0:00                           ` M. J. Saltzman
  1997-09-11  0:00                           ` Robin Rosenberg
  2 siblings, 0 replies; 105+ messages in thread
From: M. J. Saltzman @ 1997-09-07  0:00 UTC (permalink / raw)



eachus@spectre.mitre.org (Robert I. Eachus) writes:
>But if you are inverting large matrices or solving
>linear programming problems, then the real work needs to be done not
>only in fixed point, but with appropriate scaling, rebuilding of
>bases, etc.)

Why fixed point?  Of course, due attention must be paid to scaling,
pivoting, refactoring, tolerances, etc., but all industrial-quality LP
codes that I know about use double-precision floating point for all
calculations.  LAPACK comes in float and double versions, but not
fixed-point.  It's probably one of the most widely used and respected
linear algebra packages available.
-- 
		Matthew Saltzman
		Clemson University Math Sciences
		mjs@clemson.edu




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Get_Immediate
       [not found]                                                     ` <340DF1DD.2736@iop.com>
  1997-09-07  0:00                                                       ` Get_Immediate Robert Dewar
  1997-09-07  0:00                                                       ` Get_Immediate Robert Dewar
@ 1997-09-08  0:00                                                       ` J Giffen
  2 siblings, 0 replies; 105+ messages in thread
From: J Giffen @ 1997-09-08  0:00 UTC (permalink / raw)
  To: Jeff Glenn


Jeff Glenn wrote:
> 
> Mr. Dewar,
> 
> I tried using Get_Immediate with GNAT (3.05) on Solaris and I didn't get
> the results I wanted.
> 
> What I got:
> 
> My program reads each key, and I don't have to type Enter, but it also
> echoes each key. When I type a key that sends an escape sequence, like
> an arrow key, all of the codes in the sequence are echoed, e.g. ^[[A,
> before my program gets the first character of the sequence. Also, some
> keys, like Ctrl-C did not get read by my program.
> 
> What I want:
> 
> I want to read without any echoing and be able to get all of the codes
> the keyboard can generate.
> 
> Can you tell me how to get these results?
> 
> Thanks!
> 
> Jeff Glenn
> 
> jeff@iop.com

It sounds like the computer's sampling rate to the keyboard is faster
than what the program is willing to accommodate.  This might be solved
by going into CMOS Setup and altering the keyboard sampling rate.  I'd
try slowing it down some.

ASCII for cursor right, left, up and down is 1C, 1D, 1E & 1F.  Other
keys have others.  A switch closure for a certain row and column on
the keyboard is picked up when a scanning frequency detects it and
sends the signal to the motherboard.




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-09-06  0:00                           ` schlafly
@ 1997-09-09  0:00                             ` Robert Dewar
  0 siblings, 0 replies; 105+ messages in thread
From: Robert Dewar @ 1997-09-09  0:00 UTC (permalink / raw)



schlafly said, replying to Eachus

<<>    In general it is much more work to write fixed point than floating
> point code, but especially in Ada, the error analysis is easy.  Errors
> are only introduced by explicit conversions never by arithmetic
> operations.

Not true.  Try computing 1/3 in fixed point.>>

This shows a lack of appreciation for how things are done in Ada.  If you
have two fixed-point values of 1.0 and 3.0, then the division 1.0/3.0
gives a semantically precise and exact answer. That is true of any
division of fixed-point values.

BUT, to do anything interesting with this exact value, it must be converted
to a specific fixed-point type, hence Eachus' comment that "errors" [that
word again :-)] are introduced only by conversions.

This approach greatly simplifies analysis of the propagation of these
"errors".





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-09-05  0:00                                                   ` Robert Dewar
@ 1997-09-10  0:00                                                     ` Jim Carr
  1997-09-12  0:00                                                       ` Robert Dewar
  0 siblings, 1 reply; 105+ messages in thread
From: Jim Carr @ 1997-09-10  0:00 UTC (permalink / raw)



James said
|
| Not quite.  I have made the decision to represent a real number in
| a way restricted to a finite number of digits.  It does not matter
| if they are binary or decimal.  (Yes, I know what you probably
| meant to write, but saying "decimal value" carries various meanings
| as well.)

dewar@merv.cs.nyu.edu (Robert Dewar) writes:
>
>Actually I meant decimal, see your original post, you specifically have
>given an example of a number representable exactly in decimal, and not
>in binary, and this is indeed a common case.

 Sure, but I could have given a number like 1./3. that is not 
 exactly represented in decimal.  When exploring whether your 
 calculator uses a binary or decimal representation in its 
 registers, this is also a common case. 

-- 
 James A. Carr   <jac@scri.fsu.edu>     | Commercial e-mail is _NOT_ 
    http://www.scri.fsu.edu/~jac/       | desired to this or any address 
 Supercomputer Computations Res. Inst.  | that resolves to my account 
 Florida State, Tallahassee FL 32306    | for any reason at any time. 




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-09-05  0:00                         ` Robert I. Eachus
  1997-09-06  0:00                           ` schlafly
  1997-09-07  0:00                           ` M. J. Saltzman
@ 1997-09-11  0:00                           ` Robin Rosenberg
  2 siblings, 0 replies; 105+ messages in thread
From: Robin Rosenberg @ 1997-09-11  0:00 UTC (permalink / raw)



eachus@spectre.mitre.org (Robert I. Eachus) writes:

>   For many calculations floating point offers convenience while
> preserving more than enough numerical accuracy.  On other types of
> calculations, fixed point accuracy is required.  (For example,
> checking your bank statement can only be done correctly using fixed
> point, and knowing the rules for computing interest on that account.)
> If you can't tell which you should be using, get help fast.

I used to think so too, until I read Sun's Numerical Computation Guide
(http://docs.sun.com), which states otherwise. In particular, see the
chapter "What Every Computer Scientist Should Know About Floating
Point Arithmetic".

-- 
Robin Rosenberg,  | Voice: +46-8-7036200 | "Any opinions are my own, etc. etc."
Enator Objective  | Fax:  (+46-8-7036283)|      <this line left blank>
Management AB     | Mail:  rrg@funsys.se | 




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-09-10  0:00                                                     ` Jim Carr
@ 1997-09-12  0:00                                                       ` Robert Dewar
  1997-09-15  0:00                                                         ` James Pauley
  0 siblings, 1 reply; 105+ messages in thread
From: Robert Dewar @ 1997-09-12  0:00 UTC (permalink / raw)



James said

<< Sure, but I could have given a number like 1./3. that is not
 exactly represented in decimal.  When exploring whether your
 calculator uses a binary or decimal representation in its
 registers, this is also a common case.>>

It may surprise you, but actually I know that 1./3. cannot be represented
exactly in decimal. I realize that the general readership of CLA may not
have access to such highly technical theoretical mathematical knowledge,
but I seem to have picked up that particular knowledge somewhere.

Now once again, what did we decide on for the "this is irony" symbol :-)





^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-09-12  0:00                                                       ` Robert Dewar
@ 1997-09-15  0:00                                                         ` James Pauley
  1997-09-16  0:00                                                           ` Robert Dewar
  0 siblings, 1 reply; 105+ messages in thread
From: James Pauley @ 1997-09-15  0:00 UTC (permalink / raw)



Robert Dewar wrote:
<GURP!>

> Now once again, what did we decide on for the "this is irony" symbol :-)

IRONY = <FeY>

<Nyuk, Nyuk, Nyuk> 

Now back to our regularly scheduled transmizzzzzzionnnzzzzzz......

----------------------------------------------------------------
 James Pauley <jhpauley@tasc.com> | Certainty is the downfall of
 Principal MTS                    |       true practice.
 TASC - ISG/STBU                  ------------------------------
 Ft. Walton Beach, FL
----------------------------------------------------------------
    <nudge> Return address contains SpamBOT-mask </nudge>




^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: floating point comparison
  1997-09-15  0:00                                                         ` James Pauley
@ 1997-09-16  0:00                                                           ` Robert Dewar
  0 siblings, 0 replies; 105+ messages in thread
From: Robert Dewar @ 1997-09-16  0:00 UTC (permalink / raw)



James,

IRONY = <FeY>

As a chemist (all my degrees are in chemistry), I really appreciate that :-)





^ permalink raw reply	[flat|nested] 105+ messages in thread

end of thread, other threads:[~1997-09-16  0:00 UTC | newest]

Thread overview: 105+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
1997-07-29  0:00 floating point comparison Matthew Heaney
1997-07-30  0:00 ` Robert Dewar
1997-07-30  0:00   ` Matthew Heaney
1997-07-31  0:00     ` Jim Carr
1997-07-30  0:00   ` Matthew Heaney
1997-07-31  0:00     ` Samuel Mize
1997-07-31  0:00     ` Martin Tom Brown
1997-07-31  0:00     ` Bob Binder  (remove .mapson to email)
1997-07-31  0:00       ` Robert Dewar
1997-08-01  0:00         ` Dale Stanbrough
1997-08-04  0:00         ` Paul Eggert
1997-08-06  0:00           ` Robert Dewar
1997-08-14  0:00             ` Paul Eggert
1997-08-01  0:00       ` user
1997-08-02  0:00         ` Peter L. Montgomery
1997-08-04  0:00           ` W. Wesley Groleau x4923
1997-08-05  0:00             ` Bob Binder  (remove .mapson to email)
1997-08-02  0:00         ` Lynn Killingbeck
1997-08-03  0:00           ` Robert Dewar
1997-08-03  0:00           ` Bob Binder  (remove .mapson to email)
1997-08-03  0:00             ` Charles R. Lyttle
1997-07-31  0:00     ` Robert Dewar
1997-08-02  0:00     ` Lynn Killingbeck
1997-07-31  0:00   ` Gerald Kasner
1997-07-31  0:00     ` Robert Dewar
1997-07-30  0:00 ` Jan Galkowski
1997-07-31  0:00   ` Don Taylor
1997-07-31  0:00     ` Russ Lyttle
1997-08-01  0:00       ` W. Wesley Groleau x4923
1997-08-02  0:00         ` Robert Dewar
1997-08-02  0:00           ` Matthew Heaney
1997-08-03  0:00             ` Robert Dewar
1997-08-04  0:00           ` W. Wesley Groleau x4923
1997-08-05  0:00             ` Jan-Christoph Puchta
1997-08-05  0:00               ` W. Wesley Groleau x4923
1997-08-05  0:00                 ` Samuel Mize
1997-08-06  0:00                 ` Robert Dewar
1997-08-07  0:00                   ` Shmuel (Seymour J.) Metz
1997-08-08  0:00                     ` Peter Shenkin
1997-08-09  0:00                       ` Albert Y.C. Lai
1997-08-06  0:00                 ` Chris L. Kuszmaul
1997-08-07  0:00                   ` Dave Sparks
1997-08-08  0:00                     ` Robert Dewar
1997-08-08  0:00                     ` Jan-Christoph Puchta
1997-08-09  0:00                       ` Robert Dewar
1997-08-10  0:00                       ` Lynn Killingbeck
1997-08-08  0:00                     ` Mark Eichin
     [not found]                   ` <5sbb90$qsc@redtail.cruzio.com>
     [not found]                     ` <5scugs$jdc$1@cnn.nas.nasa.gov>
1997-08-07  0:00                       ` Robert Dewar
1997-08-08  0:00                     ` Gerhard Heinzel
1997-08-08  0:00                       ` Daniel Villeneuve
1997-08-08  0:00                       ` schlafly
1997-08-09  0:00                       ` Robert Dewar
1997-08-09  0:00                         ` David Ullrich
1997-08-10  0:00                           ` Robert Dewar
1997-08-16  0:00                             ` Andrew V. Nesterov
1997-08-18  0:00                               ` Robert Dewar
1997-08-19  0:00                                 ` Jim Carr
1997-08-21  0:00                                   ` Christian Bau
1997-08-21  0:00                                     ` Jim Carr
1997-08-21  0:00                                       ` Robert Dewar
1997-08-22  0:00                                         ` Jim Carr
1997-08-22  0:00                                           ` Robert Dewar
1997-08-23  0:00                                             ` Jim Carr
1997-08-24  0:00                                               ` Robert Dewar
1997-08-29  0:00                                                 ` Andrew V. Nesterov
1997-08-29  0:00                                                   ` Robert Dewar
     [not found]                                                     ` <340DF1DD.2736@iop.com>
1997-09-07  0:00                                                       ` Get_Immediate Robert Dewar
1997-09-07  0:00                                                       ` Get_Immediate Robert Dewar
1997-09-08  0:00                                                       ` Get_Immediate J Giffen
     [not found]                                                 ` <5u4eq6$30b$1@news.lth.se>
1997-08-29  0:00                                                   ` floating point comparison Robert Dewar
1997-09-01  0:00                                                     ` Chris RL Morgan
1997-09-01  0:00                                                 ` Jim Carr
     [not found]                                                   ` <checkerEFx6xI.FCM@netcom.com>
1997-09-03  0:00                                                     ` Chris L. Kuszmaul
1997-09-05  0:00                                                       ` Malome Khomo
1997-09-07  0:00                                                       ` Robert Dewar
1997-09-05  0:00                                                   ` Robert Dewar
1997-09-10  0:00                                                     ` Jim Carr
1997-09-12  0:00                                                       ` Robert Dewar
1997-09-15  0:00                                                         ` James Pauley
1997-09-16  0:00                                                           ` Robert Dewar
1997-08-23  0:00                                         ` W. Wesley Groleau x4923
1997-08-23  0:00                                           ` Robert Dewar
1997-08-19  0:00                                 ` Hans Olsson
1997-08-30  0:00                                 ` Paul Eggert
1997-08-06  0:00             ` Robert Dewar
1997-08-07  0:00               ` Dr. Rex A. Dwyer
     [not found]               ` <33E8DFF6.6F44@pseserv3.fw.hac.com>
1997-08-07  0:00                 ` Robert Dewar
     [not found]                 ` <33EA1251.3466@link.com>
     [not found]                   ` <33EA46CC.226@pseserv3.fw.hac.com>
1997-08-08  0:00                     ` Christian Bau
1997-08-12  0:00                     ` Martin Tom Brown
1997-08-23  0:00                       ` W. Wesley Groleau x4923
1997-08-23  0:00                         ` Robert Dewar
1997-09-05  0:00                         ` Robert I. Eachus
1997-09-06  0:00                           ` schlafly
1997-09-09  0:00                             ` Robert Dewar
1997-09-07  0:00                           ` M. J. Saltzman
1997-09-11  0:00                           ` Robin Rosenberg
1997-08-06  0:00             ` Robert Dewar
     [not found]               ` <33E8E3E1.17EA@pseserv3.fw.hac.com>
     [not found]                 ` <5sbgpk$q0n$1@goanna.cs.rmit.edu.au>
1997-08-07  0:00                   ` Robert Dewar
     [not found]                     ` <33FE4603.1B6B@pseserv3.fw.hac.com>
1997-08-23  0:00                       ` Robert Dewar
1997-08-08  0:00                   ` W. Wesley Groleau x4923
1997-08-07  0:00             ` Do-While Jones
1997-08-03  0:00         ` Brian Rogoff
1997-08-03  0:00           ` Robert Dewar
1997-08-02  0:00 ` Michael Sierchio
1997-08-08  0:00 ` floating point conversions Mark Lusti

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox