comp.lang.ada
* fixed point vs floating point
@ 1997-11-22  0:00 Matthew Heaney
  1997-11-22  0:00 ` Tucker Taft
                   ` (4 more replies)
  0 siblings, 5 replies; 64+ messages in thread
From: Matthew Heaney @ 1997-11-22  0:00 UTC (permalink / raw)




Matthew asks:

<<Can you elaborate on that?  I've often observed that floating point is
used when fixed point is clearly called for, e.g. a bearing or latitude.
What are the specific reasons to avoid fixed point?>>

Robert replies:

<<Most obviously, on most architectures, fixed-point is drastically slower 
than floating-point.>>


Matthew asks

<<Well, can you elaborate on that?  Isn't fixed-point arithmetic
implemented using integers?  What are the specific reasons why it would
be slower?  Is it because floating point can be done in parallel on many
machines?

My real reason for all these questions is because I still don't
understand why someone declares heading as

   type Heading_In_Degrees is digits 6 range 0.0 .. 360.0;

when a better fit to the problem seems to be

   type Heading_In_Degrees is delta 0.1 range 0.0 .. 360.0;

The error in my value is the same no matter which way I'm pointing, so
why wouldn't I declare the type as a fixed point type?  However, almost
universally, programmers use a floating point type, but I suspect the
reason has nothing to do with efficiency considerations.  Any ideas?>>

Robert replies:

<<Probably this is no longer GNAT specific enough to be here, this thread
should be moved to CLA. Briefly, fpt is faster than integer because chips
are designed that way (I am specifically thinking of multiplication and
division here). Second, your definition for Heading_In_Degrees is wrong;
a small of 0.0625 seems quite inappropriate. Third, I think you should
look at generated code to better understand fixed-point issues.>>
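The extra work behind the "fpt is faster" point can be sketched in Python (purely illustrative, not Ada semantics; the 6-fractional-bit small is a hypothetical choice): fixed-point addition is a plain integer add, but fixed-point multiplication needs an extra rescaling step that floating-point hardware performs for free in its normalization logic.

```python
# A fixed-point value with small S is stored as an integer n, meaning n*S.
# Multiplying two such representations yields a product scaled by S*S,
# so an extra rescale (a shift when S is a power of two) is required --
# one reason fixed-point * and / lose to hardware floating-point.

SMALL = 2 ** -6  # hypothetical power-of-two small, 0.015625

def to_fixed(x):
    """Integer representation of x as a multiple of SMALL."""
    return round(x / SMALL)

def fixed_mul(na, nb):
    """Multiply two representations; shift right to undo the SMALL*SMALL scaling."""
    return (na * nb) >> 6

a, b = to_fixed(1.5), to_fixed(2.25)
c = fixed_mul(a, b)
print(c * SMALL)  # 3.375
```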

Matthew replies, and then asks:

OK, OK, I was being lazy.  How about this definition of heading:

   type Heading_In_Degrees_Base is 
      delta 512.0 * System.Fine_Delta range -512.0 .. 512.0;

   subtype Heading_In_Degrees is 
      Heading_In_Degrees_Base range 0.0 .. 360.0;

Now I know on GNAT that the delta is considerably smaller than 0.0625. (I
really do wish implementations would give me extra accuracy and not extra
range.  Why do they do that?)
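(The 0.0625 that Robert objected to comes from the usual default: for an ordinary fixed-point type, 'Small defaults to the largest power of two not greater than the declared delta, so "delta 0.1" quietly becomes a small of 0.0625. A sketch of that rule in Python, for illustration only:)

```python
import math

def default_small(delta):
    """Largest power of two not exceeding delta -- the usual default 'Small."""
    return 2.0 ** math.floor(math.log2(delta))

print(default_small(0.1))  # 0.0625
```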

Anyway, I'll accept that fixed point operations are slower, but the
question remains: Why do most programmers insist on using floating point
when a fixed point better models the abstraction?  Is the reason solely
based on efficiency?

By the way, are there off-the-shelf subprograms (in Ada preferably, but I'd
take any language) for computing the elementary functions (sin, cos, etc)
of a fixed point angle?
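(One classic off-the-shelf technique for exactly this job is CORDIC, which computes sin and cos of a fixed-point angle using only integer adds, shifts, and a small arctangent table. A hedged Python sketch, with a hypothetical 16-fractional-bit scaling:)

```python
import math

FRAC = 16
ONE = 1 << FRAC
N = 20  # iterations; roughly one bit of accuracy per iteration

# Arctangent table and CORDIC gain, precomputed once.
ATAN_TABLE = [round(math.atan(2.0 ** -i) * ONE) for i in range(N)]
K = round(ONE * math.prod(math.cos(math.atan(2.0 ** -i)) for i in range(N)))

def cordic_sin_cos(angle_rad):
    """Return (sin, cos) of angle_rad (|angle| <= ~1.74 rad) via shift-add rotations."""
    z = round(angle_rad * ONE)
    x, y = K, 0
    for i in range(N):
        if z >= 0:
            x, y, z = x - (y >> i), y + (x >> i), z - ATAN_TABLE[i]
        else:
            x, y, z = x + (y >> i), y - (x >> i), z + ATAN_TABLE[i]
    return y / ONE, x / ONE

s, c = cordic_sin_cos(math.radians(30.0))
print(s, c)  # ≈ 0.5  ≈ 0.866
```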

--------------------------------------------------------------------
Matthew Heaney
Software Development Consultant
<mailto:matthew_heaney@acm.org>
(818) 985-1271




^ permalink raw reply	[flat|nested] 64+ messages in thread
[parent not found: <9711221603.AA03295@nile.gnat.com>]
* Re: fixed point vs floating point
@ 1997-11-27  0:00 tmoran
  1997-11-27  0:00 ` Robert Dewar
  0 siblings, 1 reply; 64+ messages in thread
From: tmoran @ 1997-11-27  0:00 UTC (permalink / raw)



>Indeed with the modern progression of ever increasing values for the
>ratio (processor-speed / main-memory-speed), table lookup can often
>be quite unattractive. On a really fast processor a second level cache
>miss is enough time to compute LOTS of floating-point stuff.
  So the current crop of systems has fast fpt, slow integer */, and
slow memory.  A few years ago that was not the case.  Things like
MMX instructions start to move back toward fast integer arithmetic.
What's fast this year is not the same as what's fast next year.
As Matthew Heaney pointed out earlier, it would be wise to design
data types, etc, to match the problem, and then, if speed is
an issue, change things to optimize for the particular current target.




* Re: fixed point vs floating point
@ 1997-11-28  0:00 tmoran
  1997-11-28  0:00 ` Robert Dewar
  0 siblings, 1 reply; 64+ messages in thread
From: tmoran @ 1997-11-28  0:00 UTC (permalink / raw)



In <dewar.880661467@merv> Robert Dewar said:
>decent PC has a 266MHz processor (60 times faster clock, and of course
>really the factor is higher), yet memory is only 5-6 times faster.
Comparing main memory speed and ignoring cache is hardly reasonable.

>Matthew's point that because the definition of angle nicely fits the
>fixed-point declaration, that is what should be used, is fundamentally
>flawed. The decision on whether to represent an angle as fixed-point
>or floating-point depends on the operations to be performed on the
>quantity and not much else!
  I agree with the point I understood him to be making: the
program should be designed with an application oriented, not a machine
oriented, view.  It sounds like you are agreeing that matching the
semantics of the type to the application is what's important, and that
optimization decisions should be made only after it's been determined
that the code the compiler at hand generates for the natural type, on
the target machine, costs too much compared to switching to another,
less natural, type.  Or in the case at hand:
does the part of his app that multiplies or divides angles have such a
performance impact that it should force the decision on their
representation?

>By the way, Tom, you keep mentioning MMX (one might almost think you were
>an Intel salesperson), but I see MMX as having zero impact on general
  Perhaps MMX is strictly Intel hype (no, I'm not an Intel employee or
salesperson), but the general issue remains.  MMX is a way to speed up
certain repetitive operations using SIMD parallelism.  But the
opportunity for such parallelism must be recognized, either by the
programmer, who codes in asm or calls an asm library routine, or by the
compiler/optimizer.  It's easy for a programmer to recognize that his
signal filtering loop can be done faster with a library procedure that
uses MMX, but what about something like using PMADD to calculate an
index into a multi-dimensional array?  It's much more practical if the
compiler can recognize that and generate the MMX code.
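The PMADD example can be made concrete (sketch in Python, illustrative only): the flat offset into a row-major multi-dimensional array is a dot product of the index tuple with a stride vector, which is exactly the shape of work a packed multiply-accumulate instruction evaluates in one step.

```python
def flat_index(indices, dims):
    """Flat offset of `indices` into a row-major array of shape `dims`."""
    # Build row-major strides: last dimension varies fastest.
    strides = []
    acc = 1
    for d in reversed(dims):
        strides.append(acc)
        acc *= d
    strides.reverse()
    # The dot product a SIMD multiply-add could compute in parallel.
    return sum(i * s for i, s in zip(indices, strides))

# element [2][3][1] of a 4 x 5 x 6 array
print(flat_index((2, 3, 1), (4, 5, 6)))  # 79
```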
  Perhaps today's MMX operations are so restricted, or have so little
benefit, they can be ignored by a compiler.  But SIMD parallelism may
well be used more in hardware in the future, in which case
compilers/optimizers that can make use of it will have an advantage
over those that don't.
  The original reason to mention MMX in this thread, of course, was as
an illustration that, though hardware designers may have been working
in the last few years to optimize fpt performance, they have also
started more recently to remember that some kinds of integer arithmetic
are extremely important in a growing class of programs, and they've
started to optimize integer performance.  The pendulum swings both ways.




* Re: fixed point vs floating point
@ 1997-12-02  0:00 Robert Dewar
  1997-12-02  0:00 ` Joe Gwinn
  1997-12-03  0:00 ` robin
  0 siblings, 2 replies; 64+ messages in thread
From: Robert Dewar @ 1997-12-02  0:00 UTC (permalink / raw)



Joe says

  <<I neither know nor care if this is an Ada language design issue.  I don't
  buy and use language designs or even languages, I buy and use compilers.
  It doesn't much matter if we are today convinced or not; the decision was
  made then, based on the compilers then available.  It may be different
  today, but I don't see many people using fixed point where floating point
  is available.>>

That to me is an attitude that is undesirable. It is very important for
programmers to understand the difference between the semantics of a
language and the behavior of a specific compiler. Otherwise they will
never be able to successfully write portable code. Portable code is
code that strictly obeys the semantics of the language. This is quite
a different criterion than "works with my compiler." Many, perhaps nearly
all, problems with portability when using supposedly portable languages
(such as Ada) come from not understanding this distinction, and from not
being sufficiently aware of code that uses implementation-dependent features.

It is still quite unclear from your description whether the problems you
report are

  1. Language design problems
  2. Compiler implementation problems
  3. Misunderstandings of the Ada fixed-point semantics

I tend to suspect 3 at this stage.

Robert said

  > I think any attempt to automatically determine the scaling of intermediate
  > results in multiplications and divisions in fixed-point is doomed to failure.
  > This just *has* to be left up to the programmer, since it is highly
  > implementation dependent.

Joe said

  <<No, it isn't hopeless.  It's quite simple and mechanical, actually.  If it
  were impossible, why then did Ada83 attempt it?  It's exactly the same
  rules we were taught in grade school for the handling of decimal points
  after arithmetic on decimals, especially multiplication and division.
  Ada83 should have been able to do it, given the information provided when
  the various variables types were defined.>>

Actually it is this paragraph that makes me pretty certain that the problems
may have been misunderstandings of how Ada works. First of all, it is simply
wrong that Ada83 attempts to determine the proper normalization (to use
Joe's term) of multiplication and division. Both the Ada 83 and Ada 95
design require that the programmer specify the scale and precision of
intermediate results for multiplication and division. (In fact, in Ada 83
this specification is *always* explicit; in some cases in Ada 95 it is
implicit, but only in trivial cases, e.g. where in Ada 83 you have to
write

   X := Type_Of_X (Y * Z);

in Ada 95 this particular conversion can be omitted.)

Secondly, it is not at all true that the rules are obvious or that they
are exactly the same as the "rules we were taught in grade school". If
you write

  x := y * (V1 / V3);

where V1 is 1.0 and V3 is 3.0, then the precision of the intermediate
value is critical, and no, there is no simple grade-school rule that
answers it. I suggest looking at PL/1 for an attempt at
defining this, an attempt which most consider a failure. As I mentioned
earlier, COBOL simply gives up in the COMPUTE statement and says that
the results are implementation dependent.
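The sensitivity Robert describes can be shown numerically (Python sketch, illustrative only; the two intermediate smalls are hypothetical choices): the value of y * (V1 / V3) swings noticeably depending on the small chosen for the intermediate quotient, and nothing in the operand types alone dictates that choice.

```python
from fractions import Fraction

def fixed_div(a, b, small):
    """Exact quotient a/b truncated downward to a multiple of `small`."""
    return (Fraction(a) / Fraction(b)) // small * small

y = 300
coarse = y * fixed_div(1, 3, Fraction(1, 10))    # intermediate small = 0.1
fine   = y * fixed_div(1, 3, Fraction(1, 1000))  # intermediate small = 0.001
print(float(coarse), float(fine))  # 90.0 99.9
```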

Robert Dewar





* fixed point vs floating point
@ 2011-09-29 10:25 RasikaSrinivasan@gmail.com
  2011-09-29 10:49 ` AdaMagica
  2011-09-30 10:17 ` Stephen Leake
  0 siblings, 2 replies; 64+ messages in thread
From: RasikaSrinivasan@gmail.com @ 2011-09-29 10:25 UTC (permalink / raw)


friends

I am investigating the applicability of fixed point to a numerical
problem. I would like to develop the algorithm as a generic and test
with different floating and fixed point types to decide which one to
go with.

Questions:

- ada.numerics family is pretty much floating point only - is this
correct?
- can we design a generic (function or procedure) that can accept
either fixed point or floating point data types, while excluding all
other types?

thanks for hints/pointers,

srini




end of thread, other threads:[~2011-10-02 14:19 UTC | newest]

Thread overview: 64+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
1997-11-22  0:00 fixed point vs floating point Matthew Heaney
1997-11-22  0:00 ` Tucker Taft
1997-11-22  0:00   ` Robert Dewar
1997-11-22  0:00     ` Matthew Heaney
1997-11-23  0:00 ` Geert Bosch
1997-11-23  0:00   ` Tom Moran
1997-11-25  0:00     ` John A. Limpert
1997-11-25  0:00       ` Robert Dewar
1997-11-25  0:00       ` Robert Dewar
1997-11-23  0:00   ` Matthew Heaney
1997-11-23  0:00     ` Robert Dewar
1997-11-24  0:00       ` Herman Rubin
1997-11-24  0:00         ` Robert Dewar
1997-11-25  0:00           ` Joe Gwinn
1997-11-25  0:00             ` Matthew Heaney
1997-11-25  0:00             ` Robert Dewar
1997-11-25  0:00               ` Joe Gwinn
1997-11-25  0:00                 ` Robert Dewar
1997-11-26  0:00                   ` Joe Gwinn
1997-11-26  0:00                     ` Robert Dewar
1997-12-01  0:00                       ` Joe Gwinn
1997-12-01  0:00                         ` Robert Dewar
1997-12-01  0:00                           ` Joe Gwinn
1997-12-03  0:00                           ` robin
1997-11-26  0:00             ` William A Whitaker
1997-11-24  0:00     ` Geert Bosch
1997-11-24  0:00 ` Vince Del Vecchio
1997-11-24  0:00 ` Vince Del Vecchio
1997-12-03  0:00 ` robin
     [not found] <9711221603.AA03295@nile.gnat.com>
1997-11-22  0:00 ` Ken Garlington
  -- strict thread matches above, loose matches on Subject: below --
1997-11-27  0:00 tmoran
1997-11-27  0:00 ` Robert Dewar
1997-11-29  0:00   ` Tarjei T. Jensen
1997-11-28  0:00 tmoran
1997-11-28  0:00 ` Robert Dewar
1997-12-02  0:00 Robert Dewar
1997-12-02  0:00 ` Joe Gwinn
1997-12-02  0:00   ` Ken Garlington
1997-12-03  0:00     ` Joe Gwinn
1997-12-04  0:00       ` Robert Dewar
1997-12-04  0:00         ` Shmuel (Seymour J.) Metz
1997-12-02  0:00   ` Robert Dewar
1997-12-02  0:00     ` Matthew Heaney
1997-12-03  0:00       ` Robert Dewar
1997-12-03  0:00     ` robin
1997-12-03  0:00       ` Robert Dewar
1997-12-03  0:00     ` Shmuel (Seymour J.) Metz
1997-12-03  0:00       ` Robert Dewar
1997-12-03  0:00       ` Matthew Heaney
1997-12-04  0:00         ` Shmuel (Seymour J.) Metz
1997-12-04  0:00           ` Robert Dewar
1997-12-03  0:00       ` Robert Dewar
1997-12-03  0:00 ` robin
2011-09-29 10:25 RasikaSrinivasan@gmail.com
2011-09-29 10:49 ` AdaMagica
2011-09-29 13:38   ` Martin
2011-09-30 10:17 ` Stephen Leake
2011-09-30 16:25   ` tmoran
2011-09-30 16:52     ` Dmitry A. Kazakov
2011-10-01 11:09     ` Stephen Leake
2011-09-30 19:26   ` tmoran
2011-09-30 22:31   ` tmoran
2011-10-01 13:37   ` RasikaSrinivasan@gmail.com
2011-10-02 14:19     ` Stephen Leake
