comp.lang.ada
* fixed point vs floating point
@ 1997-11-22  0:00 Matthew Heaney
  1997-11-22  0:00 ` Tucker Taft
                   ` (4 more replies)
  0 siblings, 5 replies; 64+ messages in thread
From: Matthew Heaney @ 1997-11-22  0:00 UTC (permalink / raw)




Matthew asks:

<<Can you elaborate on that?  I've often observed that floating point is
used when fixed point is clearly called for, i.e., a bearing or latitude.
What are the specific reasons to avoid fixed point?>>

Robert replies:

<<Most obviously, on most architectures, fixed-point is drastically slower 
than floating-point.>>


Matthew asks

<<Well, can you elaborate on that?  Isn't fixed-point arithmetic
implemented using integers?  What are the specific reasons why it would
be slower?  Is it because floating point can be done in parallel on many
machines?

My real reason for all these questions is because I still don't
understand why someone declares heading as

   type Heading_In_Degrees is digits 6 range 0.0 .. 360.0;

when a better fit to the problem seems to be

   type Heading_In_Degrees is delta 0.1 range 0.0 .. 360.0;

The error in my value is the same no matter which way I'm pointing, so
why wouldn't I declare the type as a fixed point type?  However, almost
universally, programmers use a floating point type, but I suspect the
reason has nothing to do with efficiency considerations.  Any ideas?>>

Robert replies:

<<Probably this is no longer GNAT-specific enough to be here; this thread
should be moved to CLA. Briefly, fpt is faster than integer because chips
are designed that way (I am specifically thinking of multiplication and
division here). Second, your definition for Heading_In_Degrees is wrong;
a small of 0.0625 seems quite inappropriate. Third, I think you should
look at generated code to better understand fixed-point issues.>>

Matthew replies, and then asks:

OK, OK, I was being lazy.  How about this definition of heading:

   type Heading_In_Degrees_Base is 
      delta 512.0 * System.Fine_Delta range -512.0 .. 512.0;

   subtype Heading_In_Degrees is 
      Heading_In_Degrees_Base range 0.0 .. 360.0;

Now I know on GNAT that the delta is considerably smaller than 0.0625. (I
really do wish implementations would give me extra accuracy and not extra
range.  Why do they do that?)
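
One can of course pin the small explicitly with a representation clause
instead of taking the default power-of-two small -- a sketch, assuming a
compiler that supports non-binary smalls:

   type Heading_Base is delta 0.1 range -360.0 .. 360.0;
   for Heading_Base'Small use 0.1;
   --  now small = delta = 0.1 exactly, rather than the default
   --  power-of-two small of 2.0**(-4) = 0.0625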

Anyway, I'll accept that fixed point operations are slower, but the
question remains: Why do most programmers insist on using floating point
when a fixed point better models the abstraction?  Is the reason solely
based on efficiency?

By the way, are there off-the-shelf subprograms (in Ada preferably, but I'd
take any language) for computing the elementary functions (sin, cos, etc)
of a fixed point angle?

--------------------------------------------------------------------
Matthew Heaney
Software Development Consultant
<mailto:matthew_heaney@acm.org>
(818) 985-1271




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-22  0:00 Matthew Heaney
@ 1997-11-22  0:00 ` Tucker Taft
  1997-11-22  0:00   ` Robert Dewar
  1997-11-23  0:00 ` Geert Bosch
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 64+ messages in thread
From: Tucker Taft @ 1997-11-22  0:00 UTC (permalink / raw)



Matthew Heaney (mheaney@ni.net) wrote:

: Matthew asks:

: <<Can you elaborate on that?  I've often observed that floating point is
: used when fixed point is clearly called for, i.e., a bearing or latitude.
: What are the specific reasons to avoid fixed point?>>

: Robert replies:

: <<Most obviously, on most architectures, fixed-point is drastically slower 
: than floating-point.>>

I'm not sure to what Robert is referring here.  Fixed + fixed is
as efficient as the normal integer operation, as is Fixed - Fixed, 
Fixed * Integer, and Fixed / Integer.  The only operations that are 
potentially inefficient are Fixed * Fixed and Fixed / Fixed, neither of which
is particularly likely when the fixed-point type represents an angle ;-).

Even if you do use Fixed * Fixed or Fixed / Fixed, the inefficiency
has to do with sometimes having to shift the result or one of the
operands after/before performing the operation (presuming binary
smalls).  If the machine has efficient shifting, this is not a major 
overhead.  However, it may be that on some compilers, the fixed-fixed 
multiplication/division operations are handled out-of-line, and the 
procedure-call overhead is the primary added expense.
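
For instance, with 32-bit operands and a small of 2.0**(-16), the code
for Fixed * Fixed is essentially the following (a sketch only; the names
are illustrative):

   with Interfaces; use Interfaces;

   --  A, B, and the result all carry an implicit scale of 2**(-16)
   function Fixed_Mul (A, B : Integer_32) return Integer_32 is
      --  the double-length product carries a scale of 2**(-32) ...
      Product : constant Integer_64 := Integer_64 (A) * Integer_64 (B);
   begin
      --  ... so rescale by 2**16, an arithmetic shift on most machines
      return Integer_32 (Product / 2**16);
   end Fixed_Mul;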

: ...
: Matthew Heaney
: Software Development Consultant
: <mailto:matthew_heaney@acm.org>
: (818) 985-1271

--
-Tucker Taft   stt@inmet.com   http://www.inmet.com/~stt/
Intermetrics, Inc.  Burlington, MA  USA




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
       [not found] <9711221603.AA03295@nile.gnat.com>
@ 1997-11-22  0:00 ` Ken Garlington
  0 siblings, 0 replies; 64+ messages in thread
From: Ken Garlington @ 1997-11-22  0:00 UTC (permalink / raw)
  To: chat


Robert Dewar wrote:
> 
> <<The primary reason we use floating point and not fixed point is that
> fixed point is, to us, unreliable.  By that I mean the ultimate result
> is not what we want.  We agree that fixed point is much faster, but
> floating point is, almost always, plenty fast enough.>>
> 
> *very odd*
> 
> First, please do some experiments, all of those who believe that fixed-point
> is faster than floating-point. Especially if you include some multiplications,
> and especially if you are using precisions greater than 32 bits on a 32-bit
machine, you are going to be MIGHTY surprised, I would even say shocked
by what you see. Fixed point arithmetic other than very simple addition
> and subtraction of identical types (which is the same speed as fpt on
> typical machines) is drastically slower than floating-point. If you don't
> believe me, try it out and see!

This is probably true if you include your earlier qualifier - on "modern"
architectures. On MIL-STD-1750s, however, most implementations require
far more clock cycles for floating-point multiplies and divides than for
integer shifts -- on the order of 30-50 cycles for the floats vs. 5-8 for
the shifts. There is also a penalty if you're dealing with slow memory,
and having to fetch the 32-bit float vs. the 16-bit integer. So, IF...

- you can live with the precision of 16 bits, AND
- you use power-of-two deltas, AND
- you have an Ada compiler that optimizes well for fixed-point

you could get better performance for fixed-point. I was on a project where
one team used fixed-point extensively, and another used mostly
floating-point. The fixed-point and floating-point appeared to be about
the same performance-wise, mostly because the Ada 83 compiler used did
not do a lot of fixed-point optimization.

> Second, fixed-point is FAR more reliable in practice, because as Bob
> Eachus notes, the error analysis is far simpler. Mars seems to say that
> fixed-point is unreliable because of incompetent programmers and
> unreliable compilers, but this is a comment on a particular set of
> incompetent programmers and unreliable compilers, not on the fundamental
> nature of fixed vs floating-point.

We did notice incorrect object code generated by our compiler for
fixed-point, but I agree that fixed-point is more reliable in the
abstract.

> In my experience programmers are woefully ignorant about floating-point
> (the completely extraordinary thread on CLA), and this leads to a lot of
> serious problems in floating-point code.
> 
> Interestingly, in highly critical systems, the objection to fpt is related
> to Mars' concern about reliability. The general perception is that fpt
> hardware is too complex to be reliable. Intel's widely publicized
> difficulties in this area can only help to confirm this worry!

We've never had anyone raise this objection in our systems, although I
have yet to use a 1750 CPU that didn't have a bug in its floating point
implementation. I guess it's due to the "obvious" fact that hardware is
more reliable than software :)




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-22  0:00   ` Robert Dewar
@ 1997-11-22  0:00     ` Matthew Heaney
  0 siblings, 0 replies; 64+ messages in thread
From: Matthew Heaney @ 1997-11-22  0:00 UTC (permalink / raw)



In article <dewar.880248826@merv>, dewar@merv.cs.nyu.edu (Robert Dewar) wrote:


>Sure if you never do multiplications, then the penalty for fixed-point
>can be close to zero (compared with floating-point), unless of course
>you want more than 32 bits of precision and you are on a 32 bit machine,
>then fixed-point gets very expensive.

Let's change the problem a bit.  What's more expensive: to calculate the
sine of a fixed point with a binary small, or to calculate the sine of a
floating point?  What about making the fixed point 32 bits instead of 64 -
will that make it more efficient?

That processor Intermetrics is building an Ada compiler for (SHARC, or
something like that) comes with "fixed point math in hardware."  Will that
make any difference in efficiency?

BTW: how do I calculate the sine of a fixed point number?

--------------------------------------------------------------------
Matthew Heaney
Software Development Consultant
<mailto:matthew_heaney@acm.org>
(818) 985-1271




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-22  0:00 ` Tucker Taft
@ 1997-11-22  0:00   ` Robert Dewar
  1997-11-22  0:00     ` Matthew Heaney
  0 siblings, 1 reply; 64+ messages in thread
From: Robert Dewar @ 1997-11-22  0:00 UTC (permalink / raw)



Tuck said

<<I'm not sure to what Robert is referring here.  Fixed + fixed is
as efficient as the normal integer operation, as is Fixed - Fixed,
Fixed * Integer, and Fixed / Integer.  The only operations that are
potentially inefficient are Fixed * Fixed and Fixed / Fixed, neither of which
is particularly likely when the fixed-point type represents an angle ;-).

Even if you do use Fixed * Fixed or Fixed / Fixed, the inefficiency
has to do with sometimes having to shift the result or one of the
operands after/before performing the operation (presuming binary
smalls).  If the machine has efficient shifting, this is not a major
overhead.  However, it may be that on some compilers, the fixed-fixed
multiplication/division operations are handled out-of-line, and the
procedure-call overhead is the primary added expense.
>>

Surely, Tuck, you know that on almost all modern machines, integer
multiplication is much slower than floating-point multiplication?

So even without the shift, you are behind.

Sure if you never do multiplications, then the penalty for fixed-point
can be close to zero (compared with floating-point), unless of course
you want more than 32 bits of precision and you are on a 32 bit machine,
then fixed-point gets very expensive.





^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-23  0:00   ` Matthew Heaney
@ 1997-11-23  0:00     ` Robert Dewar
  1997-11-24  0:00       ` Herman Rubin
  1997-11-24  0:00     ` Geert Bosch
  1 sibling, 1 reply; 64+ messages in thread
From: Robert Dewar @ 1997-11-23  0:00 UTC (permalink / raw)



Matthew says

<<Remember that thread a few months back about floating point comparison?  As
Robert reminded us, you have to know what you're doing when you use
floating point arithmetic.  I was trying to avoid floating point types,
because I'm not a numerical analyst, and don't want to just *hope* my
floating point calculations produce a correct result.  I want to *know*
they're correct.  My error analysis is simpler if I use a fixed point type,
right?

If my abstraction calls for a real type having an absolute error and not a
relative error, then clearly a fixed point type is called for, even if it's
less efficient, right?  Isn't it always true that we should code for
correctness first, by using a type that directly models the abstraction,
and then optimize only if necessary?  Are you making an exception to this
rule for fixed point types?>>


When Robert was reminding you that you need to know what you are doing
when using Floating-Point arithmetic, he did not for a moment mean to say
that somehow the problem is trivial in fixed-point arithmetic. Computing
trig functions in limited precision fixed-point and retaining sufficient
accuracy is a VERY hard problem.

Note that neither fixed-point nor floating-point models the abstraction,
since the abstraction is real! The only difference is whether the error
control is relative or absolute. For some purposes relative error is
*easier* to analyze than *absolute* error, but there is no sense in which
fixed-point is somehow a more accurate abstract representation of real
than floating-point!





^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-23  0:00 ` Geert Bosch
@ 1997-11-23  0:00   ` Tom Moran
  1997-11-25  0:00     ` John A. Limpert
  1997-11-23  0:00   ` Matthew Heaney
  1 sibling, 1 reply; 64+ messages in thread
From: Tom Moran @ 1997-11-23  0:00 UTC (permalink / raw)



>Converting a fixed point value to fpt, issuing the sine instruction,
>and converting it back is much faster than even thinking about doing it in
>fixed point. ;-)
But "type degrees is delta 1.0/16 range 0.0 .. 360.0;" has only 5760
possible values, and a table lookup with a table of 5760 entries is
often quite reasonable, and surely faster than converting to fp,
calculating a sin, and converting back.  Not to mention if the
function is not just sin but, say, the 6th power of the cos (as in a
Phong illumination calculation).
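
A minimal sketch of such a lookup (names illustrative; the table is
filled once at startup):

   with Ada.Numerics.Elementary_Functions;

   type Degrees is delta 1.0 / 16 range 0.0 .. 360.0;
   Table_Size : constant := 5760;   -- 360 * 16 sixteenths of a degree
   Sine_Table : array (0 .. Table_Size - 1) of Float;

   procedure Fill_Table is
      use Ada.Numerics, Ada.Numerics.Elementary_Functions;
   begin
      for I in Sine_Table'Range loop
         Sine_Table (I) := Sin (Float (I) * Pi / 2880.0);  -- I/16 deg
      end loop;
   end Fill_Table;

   function Table_Sin (A : Degrees) return Float is
   begin
      --  index in sixteenths of a degree; 360.0 wraps to 0.0
      return Sine_Table (Integer (Float (A) * 16.0) mod Table_Size);
   end Table_Sin;
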
  I can't seem to find my Cody&Waite, but I think they give fixed
point algorithms, and I also think I've seen mention of Ada code for
it in the PAL or some such public place.




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-22  0:00 Matthew Heaney
  1997-11-22  0:00 ` Tucker Taft
@ 1997-11-23  0:00 ` Geert Bosch
  1997-11-23  0:00   ` Tom Moran
  1997-11-23  0:00   ` Matthew Heaney
  1997-11-24  0:00 ` Vince Del Vecchio
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 64+ messages in thread
From: Geert Bosch @ 1997-11-23  0:00 UTC (permalink / raw)



Matthew Heaney <mheaney@ni.net> wrote:

   Anyway, I'll accept that fixed point operations are slower, but the
   question remains: Why do most programmers insist on using floating point
   when a fixed point better models the abstraction?  Is the reason solely
   based on efficiency?

   By the way, are there off-the-shelf subprograms (in Ada preferably, but I'd
   take any language) for computing the elementary functions (sin, cos, etc)
   of a fixed point angle?


Maybe your second question answers the first. It is really useless to
try hard to calculate a sine using integer arithmetic when you have a
perfectly fine and very fast, well tested floating-point unit in your
computer that can calculate the sine with a delta smaller than
0.000000000000001, probably.

Converting a fixed point value to fpt, issuing the sine instruction,
and converting it back is much faster than even thinking about doing it in
fixed point. ;-)
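
In Ada 95 terms the whole round trip is a couple of conversions and
one call (a sketch; the type names are made up):

   with Ada.Numerics.Elementary_Functions;

   type Degrees  is delta 1.0 / 16   range 0.0 .. 360.0;
   type Fraction is delta 2.0**(-15) range -1.0 .. 1.0;

   function Sine (A : Degrees) return Fraction is
      use Ada.Numerics, Ada.Numerics.Elementary_Functions;
   begin
      --  fixed -> float, one fpt sine instruction, float -> fixed
      return Fraction (Sin (Float (A) * Pi / 180.0));
   end Sine;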

Regards,
   Geert




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-23  0:00 ` Geert Bosch
  1997-11-23  0:00   ` Tom Moran
@ 1997-11-23  0:00   ` Matthew Heaney
  1997-11-23  0:00     ` Robert Dewar
  1997-11-24  0:00     ` Geert Bosch
  1 sibling, 2 replies; 64+ messages in thread
From: Matthew Heaney @ 1997-11-23  0:00 UTC (permalink / raw)



In article <65846t$4vq$1@gonzo.sun3.iaf.nl>, Geert Bosch
<geert@gonzo.sun3.iaf.nl> wrote:


>Maybe your second question answers the first. It is really useless to
>try hard to calculate a sine using integer arithmetic when you have a
>perfectly fine and very fast, well tested floating-point unit in your
>computer that can calculate the sine with a delta smaller than
>0.000000000000001, probably.

>Converting a fixed point value to fpt, issuing the sine instruction,
>and converting it back is much faster than even thinking about doing it in
>fixed point. ;-)

Sherman, Peabody: I think it's time for us to step into the way-back machine.

Remember that thread a few months back about floating point comparison?  As
Robert reminded us, you have to know what you're doing when you use
floating point arithmetic.  I was trying to avoid floating point types,
because I'm not a numerical analyst, and don't want to just *hope* my
floating point calculations produce a correct result.  I want to *know*
they're correct.  My error analysis is simpler if I use a fixed point type,
right?

If my abstraction calls for a real type having an absolute error and not a
relative error, then clearly a fixed point type is called for, even if it's
less efficient, right?  Isn't it always true that we should code for
correctness first, by using a type that directly models the abstraction,
and then optimize only if necessary?  Are you making an exception to this
rule for fixed point types?

You seem to be suggesting - in the name of efficiency - that we convert a
fixed point number into a floating point, calculate the sine, and then
convert the result back into fixed point.  OK, fair enough, but what is the
accuracy of the result?  Isn't this exactly the kind of thing Robert was
warning us about?  How much precision does one require for the floating
point type?  And you're sure that the conversion process doesn't add more
expense than just calculating the sine directly in fixed point?

Tell me, Geert, why do we even have fixed point in the language at all, if
it's too inefficient to use?  All my Ada books tell me to use a fixed point
type when my abstraction has absolute error.  But I don't remember them
saying, Do not use fixed point types because they're too inefficient.  None
say "you shouldn't even think about calculating a sine using fixed point." 
Under what circumstances do you use fixed point?  Or are you saying never
to use fixed point?

I still would like someone to give me a reason - unrelated to efficiency -
why you'd model a heading (or any other kind of angle) using a floating
point type instead of a fixed point type.

--------------------------------------------------------------------
Matthew Heaney
Software Development Consultant
<mailto:matthew_heaney@acm.org>
(818) 985-1271




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-23  0:00   ` Matthew Heaney
  1997-11-23  0:00     ` Robert Dewar
@ 1997-11-24  0:00     ` Geert Bosch
  1 sibling, 0 replies; 64+ messages in thread
From: Geert Bosch @ 1997-11-24  0:00 UTC (permalink / raw)



Matthew Heaney <mheaney@ni.net> wrote:
 ``If my abstraction calls for a real type having an absolute error and not a
   relative error, then clearly a fixed point type is called for, even if it's
   less efficient, right?  Isn't it always true that we should code for
   correctness first, by using a type that directly models the abstraction,
   and then optimize only if necessary?  Are you making an exception to this
   rule for fixed point types?

   You seem to be suggesting - in the name of efficiency - that we convert a
   fixed point number into a floating point, calculate the sine, and then
   convert the result back into fixed point.  OK, fair enough, but what is the
   accuracy of the result?  ''

The fixed point type is the abstraction that you do calculations with
when you are calculating with currencies, for example. The heading of a ship
is of course a real value. So any difference between this real value and
the calculated one (whether fixed or float) is error for you.

When converting a real value between 0.0 and 360.0 to a value in radians and
then taking the sine using floating pt, you will have a result that has
a certain relative error. Since you also know that the result
interval is -1.0 .. 1.0, you have a result with a known absolute
precision.

On typical hardware this absolute error will be much smaller than 1.0E-12.

Regards,
   Geert

Disclaimer: I'm not a numerical analyst!




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-22  0:00 Matthew Heaney
  1997-11-22  0:00 ` Tucker Taft
  1997-11-23  0:00 ` Geert Bosch
@ 1997-11-24  0:00 ` Vince Del Vecchio
  1997-11-24  0:00 ` Vince Del Vecchio
  1997-12-03  0:00 ` robin
  4 siblings, 0 replies; 64+ messages in thread
From: Vince Del Vecchio @ 1997-11-24  0:00 UTC (permalink / raw)




mheaney@ni.net (Matthew Heaney) writes:

> Let's change the problem a bit.  What's more expensive: to calculate the
> sine of a fixed point with a binary small, or to calculate the sine of a
> floating point?  What about making the fixed point 32 bits instead of 64 -
> will that make it more efficient?
> 
> That processor Intermetrics is building an Ada compiler for (SHARC, or
> something like that) comes with "fixed point math in hardware."  Will that
> make any difference in efficiency?

The SHARC (and other Analog Devices DSPs) only support fixed-point math
with a fixed binary point.  Specifically, they support
  type Signed_Fixed is delta 2.0**(-Word_Size + 1) range -1.0 .. 1.0;
and
  type Unsigned_Fixed is delta 2.0**(-Word_Size) range 0.0 .. 1.0;

where small == delta, upper endpoint is not a member of the type,
and, for Sharc, Word_Size == 32.

The difference from many other processors is that they have operations for
  Signed_Fixed * Signed_Fixed => Signed_Fixed
  Unsigned_Fixed * Unsigned_Fixed => Unsigned_Fixed
where the shifting you need to do is built into the instruction.

For arbitrary smalls, it probably won't be any faster than on other
processors, but if you restricted yourself to certain smalls, it would
be possible to be as efficient as floating-point.  

On the other hand, for the Sharc specifically, there is no fixed point
sin routine in the C runtime library (or the Ada runtime library, for
that matter).  That is mostly because the processor also has floating
point, and I think most people use the floating point.  Fixed point
transcendental operations are much more prevalent on our fixed-point-only
processors.

-Vince Del Vecchio
vince . delvecchio @ analog . com
Not speaking for Analog Devices




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-24  0:00       ` Herman Rubin
@ 1997-11-24  0:00         ` Robert Dewar
  1997-11-25  0:00           ` Joe Gwinn
  0 siblings, 1 reply; 64+ messages in thread
From: Robert Dewar @ 1997-11-24  0:00 UTC (permalink / raw)



Herman says

<<That fixed point arithmetic is so slow on computers is due to the fact
that the hardware manufacturers, partly influenced by those gurus who
do not see the uses of fixed point, have combined several fixed point
instructions to produce floating point in such a way that it is very
difficult to use it for anything else.  It would not be difficult for
the hardware to allow the fixed point part of the floating unit to be
available to the user, and it would allow considerable speedup.
<<

There is no problem in implementing fast integer or fractional integer
multiply and divide, using the same techniques as are used for floating-point.
However, apart from a few people on soapboxes, there is no demand for such
instructions, let alone fast implementations of them (the Transputer is
one of the very few machines that provides a fractional binary scaled
multiply instruction).





^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-23  0:00     ` Robert Dewar
@ 1997-11-24  0:00       ` Herman Rubin
  1997-11-24  0:00         ` Robert Dewar
  0 siblings, 1 reply; 64+ messages in thread
From: Herman Rubin @ 1997-11-24  0:00 UTC (permalink / raw)



In article <dewar.880302541@merv>, Robert Dewar <dewar@merv.cs.nyu.edu> wrote:
>Matthew says

><<Remember that thread a few months back about floating point comparison?  As
>Robert reminded us, you have to know what you're doing when you use
>floating point arithmetic.  I was trying to avoid floating point types,
>because I'm not a numerical analyst, and don't want to just *hope* my
>floating point calculations produce a correct result.  I want to *know*
>they're correct.  My error analysis is simpler if I use a fixed point type,
>right?

>If my abstraction calls for a real type having an absolute error and not a
>relative error, then clearly a fixed point type is called for, even if it's
>less efficient, right?  Isn't it always true that we should code for
>correctness first, by using a type that directly models the abstraction,
>and then optimize only if necessary?  Are you making an exception to this
>rule for fixed point types?>>

In principle, there is no such thing as floating point arithmetic.
Fixed point arithmetic can be done to arbitrary precision; to do
floating arithmetic to much more than what is provided, it is
necessary to emulate the floating in fixed, while doing fixed point
in floating is quite clumsy.

That fixed point arithmetic is so slow on computers is due to the fact
that the hardware manufacturers, partly influenced by those gurus who
do not see the uses of fixed point, have combined several fixed point
instructions to produce floating point in such a way that it is very
difficult to use it for anything else.  It would not be difficult for
the hardware to allow the fixed point part of the floating unit to be
available to the user, and it would allow considerable speedup.
-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
hrubin@stat.purdue.edu         Phone: (765)494-6054   FAX: (765)494-0558




^ permalink raw reply	[flat|nested] 64+ messages in thread


* Re: fixed point vs floating point
  1997-11-24  0:00         ` Robert Dewar
@ 1997-11-25  0:00           ` Joe Gwinn
  1997-11-25  0:00             ` Robert Dewar
                               ` (2 more replies)
  0 siblings, 3 replies; 64+ messages in thread
From: Joe Gwinn @ 1997-11-25  0:00 UTC (permalink / raw)



In article <dewar.880417337@merv>, dewar@merv.cs.nyu.edu (Robert Dewar) wrote:

> There is no problem in implementing fast integer or fractional integer
> multiply and divide, using the same techniques as are used for floating-point.
> However, apart from a few people on soapboxes, there is no demand for such
> instructions, let alone fast implementations of them (the Transputer is
> one of the very few machines that provides a fractional binary scaled
> multiply instruction).

Robert also said in some other message that floating point is faster than
integer simply because the hardware is designed that way.  This is
certainly true in general, and was true for us.  Performance follows
resources.

I don't know what came over me.  I never agree with Robert.  It won't
happen again.


Until perhaps 5 years ago, we always used fixed point (scaled binary) in
the code, regardless of language (except Ada83, actually; see below),
because we couldn't afford to design good enough floating point hardware
into the computers.  In those days, we built our own purpose-built
single-board computers (SBCs) for major realtime projects.  What we
couldn't afford was SBC real estate, not money for components.  The FP
hardware would always be in the original plan, but soon got pushed off the
board by some other more important hardware components, most often added
on-board memory.  Push come to shove, more memory always won.

Now, the issue is gone, for two reasons.  First, the floating point
hardware is generally built into the processor chips, so there is no real
estate to save.  Second, we no longer build our own SBCs for standard
busses such as VME, we buy them, unless cornered.

The fixed point arithmetic in the Ada83 compilers that we used and tested
in those days didn't work well or even correctly in many cases, so we
never used it.  I don't offhand recall doing an Ada job that didn't use
floating point; Ada83 was mostly used on the larger computers, such as
VAXes and Silicon Graphics boxes, so the issue never came up.   The
exception may have been XD Ada from System Designers (where are they now?)
on the 68020.

So, why do we prefer floating point, speed aside?  Because it's easier to
program with, as scaling issues go away, and so it's much easier to get
right.  Remember, the traditional comparison was with manually-implemented
scaled binary.  The fact that floating point arithmetic is approximate
isn't much of an issue for most physics-based mathematical code, and those
places that are exceptions are still *much* less trouble to deal with than
the traditional scaled binary.

A random note:  When declaring an angle in Ada, it's necessary to declare
the range as something like -0.1 to 360.1 degrees (not 0 to 360 degrees),
to prevent the fuzziness of floating point arithmetic from causing
random-appearing constraint errors.  The fuzz can extend ever so slightly
over the line, causing unnecessary and unexpected constraint errors if one
tries to set the range too tightly.
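
That is, something like:

   --  padded endpoints: a tight 0.0 .. 360.0 invites constraint
   --  errors when a computed heading lands at 360.0 plus some fuzz
   type Degrees is digits 6 range -0.1 .. 360.1;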

With Ada95 (which I assume has fixed the Ada83 problems with fixed-point
arithmetic) on modern machines, use of fixed point has never been
proposed, simply because use of floating point is still simpler to
program, because scaling is handled automatically, and because more
programmers are familiar with floating point than with scaled fixed
point.  One less thing to worry about, and to get wrong.

As for speed, float versus fixed, in practice it seems to depend more on
the compiler and its level of excessive generalization and also level of
code optimization (and level of overly paranoid and/or redundant range
checking) than the raw hardware speed of arithmetic.  This is especially
true of math library elements such as Sine and Cosine and Square Root,
which, being precompiled, don't usually respond to changes in the
optimization settings.  Again, performance follows engineering resources.

In one recent example, Square Root was taking 60 microseconds on a 50-MHz
68060; this should take no more than 2 uS.  Where did the time go?  To
range check after range check.  To internal use of 96-bit extended floats
(which must be done in the 68060 FP instruction emulation library rather
than the CPU hardware), all to implement a 32-bit float function.  To
functions within functions, all checking everything.  Etc.  Same kind of
story with the transcendental functions, only worse.  One needs to read
the generated assembly code to know what a given compiler is up to.

We only need a few functions, specifically Sine, Cosine, Arc Tangent, and
Square Root, speed is very much of the essence, and we need only 10^-4 to
10^-5 (16-bit) accuracy anyway.  So, we will write our own versions of
these functions, in Ada, C, or even assembly, as determined by performance
tests.

Someone asked how to compute the Sine in fixed point; nobody answered. 
There are three traditional methods: polynomial approximations,
brute-force table lookup, and table lookup with interpolation.  These
methods are approximately equal in speed when it's all said and done,
although brute-force table lookup can trade memory for speed better than
polynomials, while polynomials can be faster than table lookup on some
machines because polynomials can avoid float-fix conversions and index
arithmetic and range checks (to prevent accessing beyond the ends of the
table).  Any of these methods can outperform the standard mathlib
functions precisely because they do exactly what's asked, and nothing
more.

As for the polynomial approximations, the bible to this day is
"Approximations for Digital Computers"; Cecil Hastings, Jr.; Princeton
University Press; 1955.  This has been a classic since its publication. 
In those days, the computers were small, so the programmers had to be very
focused on performance.  For computation of polynomials in real systems,
convert the published polynomials into Horner's form.  This is discussed
in many texts on numerical methods.  In short,  a + bX + cX^2 becomes a +
X(b+cX), which is faster to compute, and has better numerical properties
than the standard power-series form.
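
In Ada the general turn-the-crank version is just (a sketch):

   type Coeffs is array (Natural range <>) of Float;

   --  C (C'First) + C (C'First + 1)*X + ... evaluated in Horner form
   function Horner (C : Coeffs; X : Float) return Float is
      R : Float := C (C'Last);
   begin
      for I in reverse C'First .. C'Last - 1 loop
         R := R * X + C (I);
      end loop;
      return R;
   end Horner;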


Joe Gwinn




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-25  0:00           ` Joe Gwinn
@ 1997-11-25  0:00             ` Robert Dewar
  1997-11-25  0:00               ` Joe Gwinn
  1997-11-25  0:00             ` Matthew Heaney
  1997-11-26  0:00             ` William A Whitaker
  2 siblings, 1 reply; 64+ messages in thread
From: Robert Dewar @ 1997-11-25  0:00 UTC (permalink / raw)



Joe says

<<The fixed point arithmetic in the Ada83 compilers that we used and tested
  in those days didn't work well or even correctly in many cases, so we
  never used it.  I don't offhand recall doing an Ada job that didn't use
  floating point; Ada83 was mostly used on the larger computers, such as
  VAXes and Silicon Graphics boxes, so the issue never came up.   The
  exception may have been XD Ada from System Designers (where are they now?)
  on the 68020.>>

While I can't speak for all Ada 83 compilers, I certainly know many of them
that did fixed-point just fine. It has actually been my experience that in
most cases where users thought Ada 83 compilers were doing things wrong, it
was because they did not understand the Ada 83 fixed-point semantics properly,
so it would be interesting to know specifically what Joe is referring to.

<<So, why do we prefer floating point, speed aside?  Because it's easier to
  program with, as scaling issues go away, and so it's much easier to get
  right.  Remember, the traditional comparison was with manually-implemented
  scaled binary.  The fact that floating point arithmetic is approximate
  isn't much of an issue for most physics-based mathematical code, and those
  places that are exceptions are still *much* less trouble to deal with than
  the traditional scaled binary.>>

So how do we square this with Robert Eachus's claim that it is *easier* to
analyze fixed-point code. Simple actually, it depends on what you are doing.
If you are writing code casually, without careful error analysis, then it is
indeed true that floating-point is easier, but if you are doing careful
error analysis, then fixed-point is usually easier. Joe's offhand comment
about errors not being an issue for most physics-based mathematical code
(I do not agree!) clearly indicates that we are dealing with the casual
approach here, so in that context Joe's comment makes good sense.

<<With Ada95 (which I assume has fixed the Ada83 problems with fixed-point
  arithmetic)>>

I really don't know what problems might be fixed, since I don't know of
problems in Ada 83, at least not ones that are likely to be what Joe is
referring to, so I can't say whether these problems have been fixed!
Certainly to the extent that Joe was seeing compiler bugs in compilers
I am unfamiliar with, this seems to have no relationship to Ada 95.






^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-23  0:00   ` Tom Moran
@ 1997-11-25  0:00     ` John A. Limpert
  1997-11-25  0:00       ` Robert Dewar
  1997-11-25  0:00       ` Robert Dewar
  0 siblings, 2 replies; 64+ messages in thread
From: John A. Limpert @ 1997-11-25  0:00 UTC (permalink / raw)



tmoran@bix.com (Tom Moran) wrote:

>But "type degrees is delta 1.0/16 range 0.0 .. 360.0;" has only 5760
>possible values, and a table lookup with a table of 5760 entries is
>often quite reasonable, and surely faster than converting to fp,
>calculating a sin, and converting back.  Not to mention if the
>function is not just sin, but, say the 6th power of the cos (in a
>Phong illumination calculation, say).

Table lookup has been an important speed optimization for many years,
but does it still work well on newer processors? My experience is that
I don't get the speed improvements that I expect, probably due to the
increasing penalty for cache misses on faster processors. The trend
seems to be that branch-free pipelined code will run faster than table
lookups. Have optimizing compilers kept up with the shifting
tradeoffs?





^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-25  0:00             ` Robert Dewar
@ 1997-11-25  0:00               ` Joe Gwinn
  1997-11-25  0:00                 ` Robert Dewar
  0 siblings, 1 reply; 64+ messages in thread
From: Joe Gwinn @ 1997-11-25  0:00 UTC (permalink / raw)



In article <dewar.880487594@merv>, dewar@merv.cs.nyu.edu (Robert Dewar) wrote:

> Joe says
> 
> <<The fixed point arithmetic in the Ada83 compilers that we used and tested
>   in those days didn't work well or even correctly in many cases, so we
>   never used it.  I don't offhand recall doing an Ada job that didn't use
>   floating point; Ada83 was mostly used on the larger computers, such as
>   VAXes and Silicon Graphics boxes, so the issue never came up.   The
>   exception may have been XD Ada from System Designers (where are they now?)
>   on the 68020.>>
> 
> While I can't speak for all Ada 83 compilers, I certainly know many of them
> that did fixed-point just fine. It has actually been my experience that in
> most cases where users thought Ada 83 compilers were doing things wrong, it
> was because they did not understand the Ada 83 fixed-point semantics properly,
> so it would be interesting to know specifically what Joe is referring to.

It's been at least ten years since use of fixed point in Ada83 came up,
and I no longer recall the details.  I don't think it was lack of
understanding, as we had lots of oldtimers experienced with scaled binary
(most often in assembly, but also in fortran etc) and lots of Ada folk,
some quite good.  Anyway, we gave up on it.  It may have been an
early-compiler effect, later fixed, but the damage was already done.


> <<So, why do we prefer floating point, speed aside?  Because it's easier to
>   program with, as scaling issues go away, and so it's much easier to get
>   right.  Remember, the traditional comparison was with manually-implemented
>   scaled binary.  The fact that floating point arithmetic is approximate
>   isn't much of an issue for most physics-based mathematical code, and those
>   places that are exceptions are still *much* less trouble to deal with than
>   the traditional scaled binary.>>
> 
> So how do we square this with Robert Eachus claim that it is *easier* to
> analyze fixed-point code. Simple actually, it depends on what you are doing.

I think Robert Eachus was doing safety critical code.


> If you are writing code casually, without careful error analysis, then it is
> indeed true that floating-point is easier, but if you are doing careful
> error analysis, then fixed-point is usually easier. Joe's offhand comment
> about errors not being an issue for most physics-based mathematical code
> (I do not agree!) clearly indicates that we are dealing with the casual
> approach here, so in that context Joe's comment makes good sense.

Not so fast.  I was obviously talking about the numerical noise caused by
use of floating-point arithmetic.  The noise caused by one's choice of
mathematical algorithm and its formulation, always an issue, is with us in
either case, float or fixed.  We used scaled binary (fixed-point)
arithmetic most often with 16-bit words, so every bit counted, and
numerical noise was very much an issue.  One 1980s-era radar system I
worked on (in fortran 77) used 16-bit mantissas and 16-bit "block"
exponents (one exponent covered multiple mantissa words), to handle the
scaling issues without requiring floating point hardware.  Yes, it *was*
ugly to program, but given the machines of the day, there was little
choice.  A single-precision (32-bit) float has a 23-bit mantissa, so the
numerical noise will be much less than with a 16-bit mantissa (one word),
by a factor of about 2^(23-16)= 128.  A 32-bit fixed point number can
approach this, if and only if one doesn't have to spend too many bits on
the dynamic range.

In all truth, in most systems, the input data streams aren't all that
clean, and numerical noise in the arithmetic is by far the least of it. 
This is the basic reason we could use 16-bit arithmetic in the first
place.  A badly-formulated algorithm will come up against this inherent
measurement noise first.  And, there is always 64-bit double precision,
with 53 bits of mantissa.  Only a few things require such precision.


> <<With Ada95 (which I assume has fixed the Ada83 problems with fixed-point
>   arithmetic)>>
> 
> I really don't know what problems might be fixed, since I don't know of
> problems in Ada 83, at least not ones that are likely to be what Joe is
> referring to, so I can't say whether these problems have been fixed!
> Certainly to the extent that Joe was seeing compiler bugs in compilers
> I am unfamiliar with, this seems to have no relationship to Ada 95.

I do recall the Ada83 fixed point issues being discussed while Ada95
was being developed.  Again, I don't recall the
details, this time because it wasn't an issue that I followed all that
closely, because we had no plans to ever use fixed point.

Anyway, I didn't claim that Ada95 had these problems, only that with
present-day hardware, we will generally use floating point, so I probably
never will find out for myself if the Ada95 fixed point arithmetic works. 
Others may be more likely to use fixed point arithmetic; I can report our
experience and outlook.

Joe Gwinn




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-25  0:00     ` John A. Limpert
@ 1997-11-25  0:00       ` Robert Dewar
  1997-11-25  0:00       ` Robert Dewar
  1 sibling, 0 replies; 64+ messages in thread
From: Robert Dewar @ 1997-11-25  0:00 UTC (permalink / raw)



<<Table lookup has been an important speed optimization for many years,
but does it still work well on newer processors? My experience is that
I don't get the speed improvements that I expect, probably due to the
increasing penalty for cache misses on faster processors. The trend
seems to be that branch-free pipelined code will run faster than table
lookups. Have optimizing compilers kept up with the shifting
tradeoffs?
>>

By the way, it is not branches that are expensive, but mispredicted
branches. With good branch prediction, branches can be almost free.





^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-25  0:00     ` John A. Limpert
  1997-11-25  0:00       ` Robert Dewar
@ 1997-11-25  0:00       ` Robert Dewar
  1 sibling, 0 replies; 64+ messages in thread
From: Robert Dewar @ 1997-11-25  0:00 UTC (permalink / raw)



John says

<<Table lookup has been an important speed optimization for many years,
but does it still work well on newer processors? My experience is that
I don't get the speed improvements that I expect, probably due to the
increasing penalty for cache misses on faster processors. The trend
seems to be that branch-free pipelined code will run faster than table
lookups. Have optimizing compilers kept up with the shifting
tradeoffs?
>>

Indeed with the modern progression of ever increasing values for the
ratio (processor-speed / main-memory-speed), table lookup can often
be quite unattractive. On a really fast processor a second level cache
miss is enough time to compute LOTS of floating-point stuff.

I am not sure this particular point is an issue for optimization, but
the general point is that the study and implementation of optimization
techniques these days is *VERY* much concerned with cache performance!





^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-25  0:00           ` Joe Gwinn
  1997-11-25  0:00             ` Robert Dewar
@ 1997-11-25  0:00             ` Matthew Heaney
  1997-11-26  0:00             ` William A Whitaker
  2 siblings, 0 replies; 64+ messages in thread
From: Matthew Heaney @ 1997-11-25  0:00 UTC (permalink / raw)



In article <gwinn-2511971025020001@dh5055083.res.ray.com>,
gwinn@res.ray.com (Joe Gwinn) wrote:


>In one recent example, Square Root was taking 60 microseconds on a 50-MHz
>68060; this should take no more than 2 uS.  Where did the time go?  To
>range check after range check.  To internal use of 96-bit extended floats
>(which must be done in the 68060 FP instruction emulation library rather
>than the CPU hardware), all to implement a 32-bit float function.  To
>functions within functions, all checking everything.  Etc.  Same kind of
>story with the transcendental functions, only worse.  One needs to read
>the generated assembly code to know what a given compiler is up to.
>
>We only need a few functions, specifically Sine, Cosine, Arc Tangent, and
>Square Root, speed is very much of the essence, and we need only 10e-4 to
>10e-5 (16-bit) accuracy anyway.  So, we will write our own versions of
>these functions, in Ada, C, or even assembly, as determined by performance
>tests.

If you want to turn off constraint checking for intermediate values, be
sure to use T'Base, i.e.

O : My_Float'Base;

That way only overflow checks (done in hardware) will occur.  This will (I
think) be a better comparison to another language, such as C, which doesn't
perform any range checks.

The selective use of T'Base gives the Ada 95 programmer control of when
range checks occur.
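
For example (a sketch; the names are made up):

   with Ada.Numerics.Generic_Elementary_Functions;

   type My_Float is digits 6 range 0.0 .. 1_000.0;

   package Funcs is
      new Ada.Numerics.Generic_Elementary_Functions (My_Float'Base);

   function Length (X, Y : My_Float) return My_Float is
      Sum : constant My_Float'Base := X * X + Y * Y;  -- no range check
   begin
      return My_Float (Funcs.Sqrt (Sum));  -- one range check, at the end
   end Length;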

Thanks for the other info too,
Matt

--------------------------------------------------------------------
Matthew Heaney
Software Development Consultant
<mailto:matthew_heaney@acm.org>
(818) 985-1271




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-25  0:00               ` Joe Gwinn
@ 1997-11-25  0:00                 ` Robert Dewar
  1997-11-26  0:00                   ` Joe Gwinn
  0 siblings, 1 reply; 64+ messages in thread
From: Robert Dewar @ 1997-11-25  0:00 UTC (permalink / raw)



Joe said

<<It's been at least ten years since use of fixed point in Ada83 came up,
and I no longer recall the details.  I don't think it was lack of
understanding, as we had lots of oldtimers experienced with scaled binary
(most often in assembly, but also in fortran etc) and lots of Ada folk,
some quite good.  Anyway, we gave up on it.  It may have been an
early-compiler effect, later fixed, but the damage was already done.
>>

Actually that's probably a recipe for such misunderstanding. If you come
to the fixed-point semantics in Ada with preconceptions, you can often
be surprised. For example, people do not realize that delta does not
specify the small, or they don't understand the issue with end points
fudged by delta, or they don't understand the role of universal fixed
in multiplication and division, or they don't understand the accuracy
requirements, etc.
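
The delta/small one alone trips up nearly everyone:

   type T is delta 0.1 range 0.0 .. 1.0;
   --  T'Small is NOT 0.1: by default it is a power of two no greater
   --  than delta, typically 2.0**(-4) = 0.0625, so the machine numbers
   --  are 0.0, 0.0625, 0.125, ... and not 0.0, 0.1, 0.2, ...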

So it would not surprise me *at all* if this "damage" were
self-inflicted. Using fixed-point in Ada is not like using scaled
binary in Fortran!





^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-25  0:00                 ` Robert Dewar
@ 1997-11-26  0:00                   ` Joe Gwinn
  1997-11-26  0:00                     ` Robert Dewar
  0 siblings, 1 reply; 64+ messages in thread
From: Joe Gwinn @ 1997-11-26  0:00 UTC (permalink / raw)



In article <dewar.880518760@merv>, dewar@merv.cs.nyu.edu (Robert Dewar) wrote:

> Joe said
> 
> <<It's been at least ten years since use of fixed point in Ada83 came up,
> and I no longer recall the details.  I don't think it was lack of
> understanding, as we had lots of oldtimers experienced with scaled binary
> (most often in assembly, but also in fortran etc) and lots of Ada folk,
> some quite good.  Anyway, we gave up on it.  It may have been an
> early-compiler effect, later fixed, but the damage was already done.
> >>
> 
> Actually that's probably a recipe for such misunderstanding. If you come
> to the fixed-point semantics in Ada with preconceptions, you can often
> be surprised. For example, people do not realize that delta does not
> specify the small, or they don't understand the issue with end points
> fudged by delta, or they don't understand the role of universal fixed
> in multiplication and division, or they don't understand the accuracy
> requirements, etc.
> 
> So it would not surprise me *at all* if this "damage" were self
> inflicted. Using fixed-point in Ada is not like using scaled binary
> in Fortran!

Well, in those days many people had lots of trouble getting scaled binary
exactly right too.  It wasn't always obvious how to renormalize the
results of multiplications and divisions on various machines.  One of my
best-selling memos of that era gave a simple turn-the-crank algorithm to
convert a function from mathematical (i.e., real arithmetic) form to scaled
binary form, and back.  

The great hope of Ada83 fixed-point arithmetic was that it would make such
memos unnecessary.  This would have been a good thing, but it had to wait
for the wide use of floating point.

As for the differences in Ada's model, I don't recall complaints about
that, or that they couldn't understand how it worked.  What I do recall
was that the Ada experts couldn't get it to work reliably or well using
the compilers of the day.

In the large-scale development world, under the usual schedule and budget
pressures, tools and technologies don't get so many chances.  People will
try something.  If it works, and doesn't cost too much blood or treasure,
they keep on using it, and it will spread.  If it fails, or is too much
trouble, or bites them too often, they stop using it, and they tell all
their friends.  It's rare that something that failed for them will ever be
tried again.  Bad news travels faster than good news.  And, full-scale
engineering development (FSED) labs are not research universities.  For
better or worse, the FSED labs are paid to build something in no more than
two or perhaps three years, and push it out the door, so there's little
time or money to fiddle with things.


Joe Gwinn




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-26  0:00                   ` Joe Gwinn
@ 1997-11-26  0:00                     ` Robert Dewar
  1997-12-01  0:00                       ` Joe Gwinn
  0 siblings, 1 reply; 64+ messages in thread
From: Robert Dewar @ 1997-11-26  0:00 UTC (permalink / raw)



Joe said

<<The great hope of Ada83 fixed-point arithmetic was that it would make such
memos unnecessary.  This would have been a good thing, but it had to wait
for the wide use of floating point.
>>

It would also be a good thing if Ada could magically write your program
for you :-)

There is no free lunch when it comes to being careful with fixed-point
scaling. To expect Ada83 to somehow solve this problem automatically
sounds a bit naive, or let's at least say over-optimistic.

<<As for the differences in Ada's model, I don't recall complaints about
that, or that they couldn't understand how it worked.  What I do recall
was that the Ada experts couldn't get it to work reliably or well using
the compilers of the day.
>>

Sounds like you had the wrong "Ada experts" or the wrong compilers, or both.
Many people used fixed-point in Ada very successfully in the early days
(I am really thinking specifically of the Alsys compiler here, since I was
working for Alsys at the time).





^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-25  0:00           ` Joe Gwinn
  1997-11-25  0:00             ` Robert Dewar
  1997-11-25  0:00             ` Matthew Heaney
@ 1997-11-26  0:00             ` William A Whitaker
  2 siblings, 0 replies; 64+ messages in thread
From: William A Whitaker @ 1997-11-26  0:00 UTC (permalink / raw)



Joe Gwinn wrote:
> 
snip
> 
> In one recent example, Square Root was taking 60 microseconds on a 50-MHz
> 68060; this should take no more than 2 uS.  Where did the time go? 

  Etc.  Same kind of
> story with the transcendental functions, only worse.  One needs to read
> the generated assembly code to know what a given compiler is up to.
> 
> We only need a few functions, specifically Sine, Cosine, Arc Tangent, and
> Square Root, speed is very much of the essence, and we need only 10e-4 to
> 10e-5 (16-bit) accuracy anyway.  So, we will write our own versions of
> these functions, in Ada, C, or even assembly, as determined by performance
> tests.
> 
> 
> 
> As for the polynomial approximations, the bible is to this day is
> "Approximations for Digital Computers"; Cecil Hastings, Jr.; Princeton
> University Press; 1955.  This has been a classic since its publication.
> In those days, the computers were small, so the programmers had to be very
> focused on performance.  For computation of polynomials in real systems,
> convert the published polynomials into Horner's form.  This is discussed
> in many texts on numerical methods.  In short,  a + bX + cX^2 becomes a +
> X(b+cX), which is faster to compute, and has better numerical properties
> than the standard power-series form.
> 
> Joe Gwinn

I remember working hard to get an exponential in 3.7 microseconds, using
a self-generated "Hasty Approximation" (in honor of Hastings who was the
father of them all).  This was 30 years ago.

Bill Whitaker




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
@ 1997-11-27  0:00 tmoran
  1997-11-27  0:00 ` Robert Dewar
  0 siblings, 1 reply; 64+ messages in thread
From: tmoran @ 1997-11-27  0:00 UTC (permalink / raw)



>Indeed with the modern progression of ever increasing values for the
>ratio (processor-speed / main-memory-speed), table lookup can often
>be quite unattractive. On a really fast processor a second level cache
>miss is enough time to compute LOTS of floating-point stuff.
  So the current crop of systems has fast fpt, slow integer */, and
slow memory.  A few years ago that was not the case.  Things like
MMX instructions start to move back toward fast integer arithmetic.
What's fast this year is not the same as what's fast next year.
As Matthew Heaney pointed out earlier, it would be wise to design
data types, etc, to match the problem, and then, if speed is
an issue, change things to optimize for the particular current target.




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-27  0:00 tmoran
@ 1997-11-27  0:00 ` Robert Dewar
  1997-11-29  0:00   ` Tarjei T. Jensen
  0 siblings, 1 reply; 64+ messages in thread
From: Robert Dewar @ 1997-11-27  0:00 UTC (permalink / raw)



tmoran says

<<  So the current crop of systems has fast fpt, slow integer */, and
slow memory.  A few years ago that was not the case.  Things like
MMX instructions start to move back toward fast integer arithmetic.
What's fast this year is not the same as what's fast next year.
As Matthew Heaney pointed out earlier, it would be wise to design
data types, etc, to match the problem, and then, if speed is
an issue, change things to optimize for the particular current target.
>>

That's true only if you think "a few" means 5-10 years. This pattern
is fairly old by now, you have to go back a long way (really to the
early 80's to find fast memory -- i.e. main memory comparably fast
to the processors). When I bought my first PC in 1981, the memory
was 250ns, and the processor was 4.77MHz (= approx 210 ns), a reasonable
match (remember the visibility of "wait states" in those days?) Now a
decent PC has a 266MHz processor (60 times faster clock, and of course
really the factor is higher), yet memory is only 5-6 times faster.

By all means one should choose floating-point or fixed-point in response
to the problem these days. But remember in this thread that a number of
people were in the mindset that one of the possible reasons for choosing
fixed-point was greater speed, and it seems that quite a few people were
unaware that with modern processors this reason, to the extent it is
valid at all, points in the other direction (of course specialized
processors, e.g. those with no fpt, may point in the opposite direction).

However, it is not simply a matter of optimization to flip between
fixed- and floating-point formats, they have radically different
semantics, and you choose the one that matches your semantic requirement.
Matthew's point that because the definition of angle nicely fits the
fixed-point declaration, that is what should be used, is fundamentally
flawed. The decision on whether to represent an angle as fixed-point
or floating-point depends on the operations to be performed on the
quantity and not much else!
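
To make that concrete in Ada terms (the types and deltas here are
invented for the example): within one fixed-point type, addition and
subtraction need no ceremony, but multiplying two fixed-point
quantities forces an explicit decision about the scale and precision
of the result:

   type Heading   is delta 2.0**(-16) range 0.0 .. 360.0;
   type Deg_Per_S is delta 2.0**(-16) range -32.0 .. 32.0;
   type Seconds   is delta 2.0**(-10) range 0.0 .. 60.0;

   Rate : Deg_Per_S := 1.5;
   T    : Seconds   := 2.0;
   Turn : Heading;

   ...

   Turn := Heading (Rate * T);  --  the conversion states the scale
                                --  and precision of the product

If the abstraction calls mostly for such multiplications and divisions,
that weighs in favor of floating-point, whatever the declared range may
suggest.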

By the way, Tom, you keep mentioning MMX (one might almost think you were
an Intel salesperson), but I see MMX as having zero impact on general
purpose computing and general purpose compilation. It is intended for
very specialized hand-written code, and it seems very unlikely that it
is good for anything else.

(I partly base this estimate on the experience with the i860. Remember that
these kinds of instructions are not new, they were on the i860 7 years ago,
but in practice were not useful for general purpose computing).





^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
@ 1997-11-28  0:00 tmoran
  1997-11-28  0:00 ` Robert Dewar
  0 siblings, 1 reply; 64+ messages in thread
From: tmoran @ 1997-11-28  0:00 UTC (permalink / raw)



In <dewar.880661467@merv> Robert Dewar said:
>decent PC has a 266MHz processor (60 times faster clock, and of course
>really the factor is higher), yet memory is only 5-6 times faster.
Comparing main memory speed and ignoring cache is hardly reasonable.

>Matthew's point that because the definition of angle nicely fits the
>fixed-point declaration, that is what should be used, is fundamentally
>flawed. The decision on whether to represent an angle as fixed-point
>or floating-point depends on the operations to be performed on the
>quantity and not much else!
  I agree with the point I understood him to be making, that the
program should be designed with an application oriented, not a machine
oriented, view.  It sounds like you are agreeing that matching the
semantics of the type to the application is what's important, and
optimization decisions should be made after it's been determined that
there is an excessive cost in using the code the compiler at hand
generates for the type in use, on the target machine, as compared to
switching to another, less natural, type.  Or in the case at hand:
does the part of his app that multiplies or divides angles have such a
performance impact that it should force the decision on their
representation?

>By the way, Tom, you keep mentioning MMX (one might almost think you were
>an Intel salesperson), but I see MMX as having zero impact on general
  Perhaps MMX is strictly Intel hype (no, I'm not an Intel employee or
salesperson), but the general issue remains.  MMX is a way to speed up
certain repetitive operations using SIMD parallelism.  But the
opportunity for such parallelism must be recognized, either by the
programmer, who codes in asm or calls an asm library routine, or by the
compiler/optimizer.  It's easy for a programmer to recognize that his
signal filtering loop can be done faster with a library procedure that
uses MMX, but what about something like using PMADD to calculate an
index into a multi-dimensional array?  It's much more practical if the
compiler can recognize that and generate the MMX code.
  Perhaps today's MMX operations are so restricted, or have so little
benefit, they can be ignored by a compiler.  But SIMD parallelism may
well be used more in hardware in the future, in which case
compilers/optimizers that can make use of it will have an advantage
over those that don't.
  The original reason to mention MMX in this thread, of course, was as
an illustration that, though hardware designers may have been working
in the last few years to optimize fpt performance, they have also
started more recently to remember that some kinds of integer arithmetic
are extremely important in a growing class of programs, and they've
started to optimize integer performance.  The pendulum swings both ways.




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-28  0:00 tmoran
@ 1997-11-28  0:00 ` Robert Dewar
  0 siblings, 0 replies; 64+ messages in thread
From: Robert Dewar @ 1997-11-28  0:00 UTC (permalink / raw)



Tom says

<<Perhaps MMX is strictly Intel hype (no, I'm not an Intel employee or
  salesperson), but the general issue remains.  MMX is a way to speed up
  certain repetitive operations using SIMD parallelism.  But the
  opportunity for such parallelism must be recognized, either by the
  programmer, who codes in asm or calls an asm library routine, or by the
  compiler/optimizer.>>

I am beginning to get the idea that you have not actually looked at the MMX
instruction set in detail. I think you should; you would find that it is not
what you think it is!

<<The original reason to mention MMX in this thread, of course, was as
  an illustration that, though hardware designers may have been working
  in the last few years to optimize fpt performance, they have also
  started more recently to remember that some kinds of integer arithmetic
  are extremely important in a growing class of programs, and they've
  started to optimize integer performance.  The pendulum swings both ways.>>

There is nothing at all new about the MMX style of instructions. Such
instructions have been an integral part of graphics processors for a long
long time. Even the appearance of such instructions on a general purpose
microprocessor is not new, the i860 had similar instructions seven years
ago. These instructions are very specifically designed for certain graphics
operations, they are not some kind of general lets-make-things-run-faster
SIMD magic, which is the idea you seem to have. I really think you should
grab a copy of the MMX instruction manual and look at it!

There is no pendulum swinging here, except perhaps in the images generated by
the Intel PR machine and those who react to it without looking at details :-)





^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-27  0:00 ` Robert Dewar
@ 1997-11-29  0:00   ` Tarjei T. Jensen
  0 siblings, 0 replies; 64+ messages in thread
From: Tarjei T. Jensen @ 1997-11-29  0:00 UTC (permalink / raw)



Robert Dewar wrote:
> 
> tmoran says
><<  So the current crop of systems has fast fpt, slow integer */, and
> slow memory.  A few years ago that was not the case.  Things like
> MMX instructions start to move back toward fast integer arithmetic.
> What's fast this year is not the same as what's fast next year.
> As Matthew Heaney pointed out earlier, it would be wise to design
> data types, etc, to match the problem, and then, if speed is
> an issue, change things to optimize for the particular current target.
> >>
>
> By all means one should choose floating-point or fixed-point in response
> to the problem these days. But remember in this thread that a number of
> people were in the mindset that one of the possible reasons for choosing
> fixed-point was greater speed, and it seems that quite a few people were
> unaware that with modern processors this reason, to the extent it is
> valid at all, points in the other direction (of course specialized
> processors, e.g. those with no fpt, may point in the opposite direction).

Whether floating point performs better than fixed point would depend on
the application. If you have an interrupt-intensive application that
uses float or fixed point, it is conceivable that the processor would
have a hard time getting value from a long floating-point pipeline. I
suspect one would get the same problem if there are a lot of
floating-point-intensive processes (threads) and lots of context
switches. I don't know if fixed point would do better, since there
might be some pipelining there as well. Shorter, if I remember right,
but there.

It's a long time since I studied these things. I'm not up to date with
the latest and greatest processors.

> By the way, Tom, you keep mentioning MMX (one might almost think you were
> an Intel salesperson), but I see MMX as having zero impact on general
> purpose computing and general purpose compilation. It is intended for
> very specialized hand-written code, and it seems very unlikely that it
> is good for anything else.
> 
> (I partly base this estimate on the experience with the i860. Remember that
> these kinds of instructions are not new, they were on the i860 7 years ago,
> but in practice were not useful for general purpose computing).

The main problem with the i860 as I remember it was that it didn't cope
very well with interrupts. Its long pipelines take time to fill, and an
interrupt can devastate them in almost no time.

The answer to this was to make i860 co-processor boards where the front
processor handled interrupt-intensive things like I/O.

The instruction set and performance of the i860 made it very popular
with high performance graphics companies. To be honest I only know of
one company using it in their graphics accelerator: SGI.

Greetings,

-- 
// Tarjei T. Jensen 
//    tarjei@online.no || voice +47 51 62 85 58
//   Support you local rescue centre: GET LOST!




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-26  0:00                     ` Robert Dewar
@ 1997-12-01  0:00                       ` Joe Gwinn
  1997-12-01  0:00                         ` Robert Dewar
  0 siblings, 1 reply; 64+ messages in thread
From: Joe Gwinn @ 1997-12-01  0:00 UTC (permalink / raw)



In article <dewar.880598918@merv>, dewar@merv.cs.nyu.edu (Robert Dewar) wrote:

> Joe said
> 
> <<The great hope of Ada83 fixed-point arithmetic was that it would make such
> memos unnecessary.  This would have been a good thing, but it had to wait
> for the wide use of floating point.
> >>
> 
> It would also be a good thing if Ada could magically write your program
> for you :-)
> 
> There is no free lunch when it comes to being careful with fixed-point
> scaling. To expect Ada83 to somehow solve this problem automatically
> sounds a bit naive, or let's at least say over-optimistic.

Who said we expected Ada83 fixed point types to solve all problems?  What
we hoped for was that conversion between scalings and normalizations after
arithmetic would be handled, and they weren't.  This, one could expect a
language to do.  

I think the reason it didn't was that in my experience most compiler
experts don't understand computer math functions all that well.  I used to
run a small compiler group (fortran and C), and I must say that that group
didn't understand such things.  I recall one incident when our fortran
compiler failed some math function accuracy tests, and they convinced
themselves that the hardware floating point multiply instruction was
wrong.  Wrong.  It was clear that their expertise was in computer language
grammars and compiler internals, not mathematics per se.  Numerical
Methods is not Compiler Design.

Programmers still have to figure out what the variables shall mean, their
ranges, and their accuracies (resolutions, really).

 
> <<As for the differences in Ada's model, I don't recall complaints about
> that, or that they couldn't understand how it worked.  What I do recall
> was that the Ada experts couldn't get it to work reliably or well using
> the compilers of the day.
> >>
> 
> Sounds like you had the wrong "Ada experts" or the wrong compilers, or both.
> Many people used fixed-point in Ada very successfully in the early days
> (I am really thinking specifically of the Alsys compiler here, since I was
> working for Alsys at the time).

Well, we had the Ada experts we had, but not a lot of them, or time to
fiddle.  Almost by definition, the real Ada experts work for Ada vendors,
not their customers.  And something that requires the level of expertise
you seem to imply cannot be widely used in the context of full-scale
engineering development (FSED) projects, with 50 or 100 programmers, most
of whom are not language gurus in any language.  Most are experts in the
problem domain, not the language of the day.  It cannot be otherwise; we
are not in the language business.

As for choice of compiler, it's a big decision, made using a matrix of
weighted numerical scores covering all manner of issues, and I don't
recall that Alsys was ever chosen here.  I don't know (or recall) why, but
there are lots of bigger issues than the handling of fixed point types. 
Verdix (pre-Rational) was the usual winner, as was XD Ada to a lesser
extent.  Handling of the usual embedded realtime issues, plus toolpath
issues, dominated the decisions then, and still do.

Joe Gwinn




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-12-01  0:00                       ` Joe Gwinn
@ 1997-12-01  0:00                         ` Robert Dewar
  1997-12-01  0:00                           ` Joe Gwinn
  1997-12-03  0:00                           ` robin
  0 siblings, 2 replies; 64+ messages in thread
From: Robert Dewar @ 1997-12-01  0:00 UTC (permalink / raw)



Joe said

<<Who said we expected Ada83 fixed point types to solve all problems?  What
we hoped for was that conversion between scalings and normalizations after
arithmetic would be handled, and they weren't.  This, one could expect a
language to do.>>

Well we are still unclear as to what are language issues here, and what
are issues of bad implementation (i.e. bugs or bad implementation choices
in the compiler), and so are you :-) In fact we still have no convincing
evidence that there were *any* compiler problems from what you have recalled
so far.

As to expecting the language to do scaling automatically, I think this is
a mistake for fixed-point. PL/1 tried and failed, and COBOL certainly
does not succeed (a common coding rule in COBOL is never to use the
COMPUTE verb, precisely because the scaling is not well defined).

I think any attempt to automatically determine the scaling of intermediate
results in multiplications and divisions in fixed-point is doomed to failure.
This just *has* to be left up to the programmer, since it is highly
implementation dependent.

I have no idea what you mean by "normalizations". This term makes no sense
to me at all in a fixed-point context.





^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-12-01  0:00                         ` Robert Dewar
@ 1997-12-01  0:00                           ` Joe Gwinn
  1997-12-03  0:00                           ` robin
  1 sibling, 0 replies; 64+ messages in thread
From: Joe Gwinn @ 1997-12-01  0:00 UTC (permalink / raw)



In article <dewar.881006262@merv>, dewar@merv.cs.nyu.edu (Robert Dewar) wrote:

> Joe said
> 
> <<Who said we expected Ada83 fixed point types to solve all problems?  What
> we hoped for was that conversion between scalings and normalizations after
> arithmetic would be handled, and they weren't.  This, one could expect a
> language to do.>>
> 
> Well we are still unclear as to what are language issues here, and what
> are issues of bad implementation (i.e. bugs or bad implemenation choices
> in the compiler), and so are you :-) In fact we still have no convincing
> evidence that there were *any* compiler problems from what you have recalled
> so far.

I neither know nor care if this is an Ada language design issue.  I don't
buy and use language designs or even languages, I buy and use compilers. 
It doesn't much matter if we are today convinced or not; the decision was
made then, based on the compilers then available.  It may be different
today, but I don't see many people using fixed point where floating point
is available.


> As to expecting the language to do scaling automatically, I think this is
> a mistake for fixed-point. PL/1 tried and failed, and COBOL certainly
> does not succeed (a common coding rule in COBOL is never to use the
> COMPUTE verb, precisely because the scaling is not well defined).

Whoa.  Read my words again.  "Scalings" are nouns, not verbs.  In this
context, a "scaling" is the (human) decision of where to put the binary
point, and what the value of the least significant bit shall be.  Scalings
are static, done as part of code design, and documented in Interface
Control Documents and Interface Design Documents.  Remember, this is 1960s
era stuff.  

This information can in theory be declared to a suitably designed
compiler, which can then deal with the details of normalization, a boring
mechanical process that humans aren't very good at.  


> I think any attempt to automatically determine the scaling of intermediate
> results in multiplications and divisions in fixed-point is doomed to failure.
> This just *has* to be left up to the programmer, since it is highly
> implementation dependent.

No, it isn't hopeless.  It's quite simple and mechanical, actually.  If it
were impossible, why then did Ada83 attempt it?  It's exactly the same
rules we were taught in grade school for the handling of decimal points
after arithmetic on decimals, especially multiplication and division. 
Ada83 should have been able to do it, given the information provided when
the various variables types were defined.


> I have no idea what you mean by "normalizations". This term makes no sense
> to me at all in a fixed-point context.

Normalization is also known as rescaling, typically done by right-shifting
the result of a multiplication to put the binary point back where it
belongs in the desired output variable's scaling.  A rounding may also be
done.
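
In code terms, a minimal sketch of that rescaling (the Q24 format, the
64-bit intermediate, and the name Mul_Q24 are invented for
illustration):

   with Interfaces; use Interfaces;

   --  Operands hold Q24 values: 32-bit words with 24 fraction bits.
   --  The raw 64-bit product carries 48 fraction bits; dividing by
   --  2**24 (a right shift) puts the binary point back at Q24.
   function Mul_Q24 (A, B : Integer_32) return Integer_32 is
      Product : constant Integer_64 := Integer_64 (A) * Integer_64 (B);
   begin
      --  Truncating rescale; a production routine would round here,
      --  as noted above.
      return Integer_32 (Product / 2**24);
   end Mul_Q24;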

Joe Gwinn




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-12-02  0:00 Robert Dewar
@ 1997-12-02  0:00 ` Joe Gwinn
  1997-12-02  0:00   ` Ken Garlington
  1997-12-02  0:00   ` Robert Dewar
  1997-12-03  0:00 ` robin
  1 sibling, 2 replies; 64+ messages in thread
From: Joe Gwinn @ 1997-12-02  0:00 UTC (permalink / raw)



In article <dewar.881039951@merv>, dewar@merv.cs.nyu.edu (Robert Dewar) wrote:

> Joe says
> 
>   <<I neither know nor care if this is an Ada language design issue.  I don't
>   buy and use language designs or even languages, I buy and use compilers.
>   It doesn't much matter if we are today convinced or not; the decision was
>   made then, based on the compilers then available.  It may be different
>   today, but I don't see many people using fixed point where floating point
>   is available.>>
> 
> That to me is an attitude that is undesirable. It is very important for
> programmers to understand the difference between the semantics of a
> language and the behavior of a specific compiler. Otherwise they will
> never be able to successfully write portable code. Portable code is
> code that strictly obeys the semantics of the language. This is quite
> a different criterion than "works with my compiler." Many, perhaps nearly
all problems with portability when using supposedly portable languages
> (such as Ada) come from not understanding this distinction, and not being
> sufficiently aware of code that uses implementation dependent features.

Portability was not an issue in the relevant projects, and so was never raised. 

However, now that you have raised the issue, some comments:  Portability
is less important than workability.  And, we were always assured that use
of Ada guaranteed portability.  Apparently that was a lie; we also need to
be Ada experts to achieve portability?  Just like all other languages?


> It is still quite unclear from your description whether the problems you
> report are
> 
>   1. Language design problems
>   2. Compiler implementation problems
>   3. Misunderstandings of the Ada fixed-point semantics
> 
> I tend to suspect 3 at this stage.

Well, too many people failed to make it work for #3 to be a likely cause,
and the compilers of the day are known to have been a work in progress.  

This has degenerated into a pointless yes-you-did, no-I-didn't cycle. 
Suffice it to say that we disagree on a minor issue too far in the past to
remember the details clearly enough to have a useful debate, assuming of
course that the limitations of 1980s compilers have any relevance
whatsoever in the 1990s.


> Robert said
> 
>   > I think any attempt to automatically determine the scaling of intermediate
>   > results in multiplications and divisions in fixed-point is doomed to failure.
>   > This just *has* to be left up to the programmer, since it is highly
>   > implementation dependent.
> 
> Joe said
> 
>   <<No, it isn't hopeless.  It's quite simple and mechanical, actually.  If it
>   were impossible, why then did Ada83 attempt it?  It's exactly the same
>   rules we were taught in grade school for the handling of decimal points
>   after arithmetic on decimals, especially multiplication and division.
>   Ada83 should have been able to do it, given the information provided when
>   the various variables types were defined.>>
> 
> Actually it is this paragraph that makes me pretty certain that the problems
> may have been misunderstandings of how Ada works. First of all, it is simply
> wrong that Ada83 attempts to determine the proper normalization (to use
> Joe's term) of multiplication and division. Both the Ada 83 and Ada 95
> design require that the programmer specify the scale and precision of
> intermediate results for multiplication and division (in fact in Ada 83,
> this specification is *always* explicit, in some cases in Ada 95 it is
> implicit, but only in trivial cases, e.g. where in Ada 83 you have to
> write
> 
>    X := Type_Of_X (Y * Z);
> 
> In Ada 95, this particular conversion can be omitted.
> 
> Secondly, it is not at all true that the rules are obvious or that they
> are exactly the same as the "rules we were taught in grade school". If
> you write
> 
>   x := y * (V1 / V3);
> 
> where V1 is 1.0 and V3 is 3.0, then the issue of the precision of the
> intermediate value is critical, and no, there are no simple grade school
> answers to this question. I suggest looking at PL/1 for an attempt at
> defining this, an attempt which most consider a failure. As I mentioned
> earlier, COBOL simply gives up in the COMPUTE statement and says that
> the results are implementation dependent.

Well, you are obsessing on the fact that I don't recall ten years later
all the details of how to do fixed point arithmetic in Ada83, which we
never used except to test.  And, you are proving my basic point, that with
all that user-supplied information, Ada83 should be able to handle fixed
point arithmetic.  This is just some added information, clearly not
absolutely required information, because Ada95 no longer always requires
it.  

But, the compilers we used in those days couldn't get it *exactly* right,
so we didn't use fixed point arithmetic in Ada83.  If it had worked, we
probably would have used it, and I might well be able to remember the
details today.


Again, we are in a pointless circular discussion.  

And, I really don't see why it's necessary to deny the failings of
compilers that have been obsolete for at least a decade, for a now
superseded language.  Surely we can find something more current and
relevant to worry about, to argue about.

Joe Gwinn




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-12-02  0:00 ` Joe Gwinn
@ 1997-12-02  0:00   ` Ken Garlington
  1997-12-03  0:00     ` Joe Gwinn
  1997-12-02  0:00   ` Robert Dewar
  1 sibling, 1 reply; 64+ messages in thread
From: Ken Garlington @ 1997-12-02  0:00 UTC (permalink / raw)



Joe Gwinn wrote:
> 
> Portability is less important than workability.  And, we were always assured that use
> of Ada guaranteed portability.  Apparently that was a lie; we also need to
> be Ada experts to achieve portability?  Just like all other languages?

In my experience - absolutely. If you assume that X is portable just
because it happens to work when processed with a particular vendor's
implementation of X, you are probably going to be disappointed.

(Just to save you grief in the future, feel free to plug in any of the
following for X: Ada, C++, Java, POSIX, CORBA, etc.)

> Well, you are obsessing on the fact that I don't recall ten years later
> all the details of how to do fixed point arithmetic in Ada83, which we
> never used except to test.  And, you are proving my basic point, that with
> all that user-supplied information, Ada83 should be able to handle fixed
> point arithmetic.  This is just some added information, clearly not
> absolutely required information, because Ada95 no longer always requires
> it.

Ada no longer requires it for trivial cases. For many intermediate
calculations of this type, you still have to explicitly specify the
resulting type.

> And, I really don't see why it's necessary to deny the failings of
> compilers that have been obsolete for at least a decade, for a now
> superceeded language.  Surely we can find something more current and
> relevant to worry about, to argue about.

I'm confused. If the behavior you described is not relevant, why did you
bring it up?




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-12-02  0:00 ` Joe Gwinn
  1997-12-02  0:00   ` Ken Garlington
@ 1997-12-02  0:00   ` Robert Dewar
  1997-12-02  0:00     ` Matthew Heaney
                       ` (2 more replies)
  1 sibling, 3 replies; 64+ messages in thread
From: Robert Dewar @ 1997-12-02  0:00 UTC (permalink / raw)



Joe said:

<<Portability was not an issue .. and so was never raised.

  However, now that you have raised the issue, some comments:  Portability
  is less important than workability.  And, we were always assured that use
  of Ada guaranteed portability.  Apparently that was a lie; we also need to
  be Ada experts to achieve portability?  Just like all other languages?>>

I always find it amazing that anyone would ever have, even for a moment,
thought that using an Ada compiler would somehow guarantee that their code
was portable. And it is even more amazing that someone with this strange
viewpoint would turn around later and complain that they were lied to.

Really Joe, surely you know that you cannot achieve portability without
some care on the programmer's part. Yes, it is true that programs can be
written in Ada that are portable. BUT, and this is a big BUT, you do have
to know the language. No, you do not have to be an Ada expert, but you DO
have to know the language, and you do have to know the language well enough
so that you know the difference between Ada the language, and what your
compiler happens to implement for implementation dependent constructs.
If you don't know Ada, and you don't care about portability (as was
apparently the case in your project -- by don't know Ada here, I mean
that you do not bother to carefully study the definition of the language),
then you definitely CANNOT write portable code in Ada.

If anyone ever promised you that using an Ada compiler would guarantee
that your programs were portable, they were an ignorant fool, and it
surprises me that anyone
would believe such nonsense. Certainly you never heard anything of the kind
from anyone who knew anything about the language!

<<Well, too many people failed to make it work for #3 to be a likely cause,
  and the compilers of the day are known to have been a work in progress.

  This has degenerated into a pointless yes-you-did, no-I-didn't cycle.
  Suffice it to say that we disagree on a minor issue too far in the past to
  remember the details clearly enough to have a useful debate, assuming of
  course that the limitations of 1980s compilers have any relevance
  whatsoever in the 1990s.>>

I have NO clear idea what you did or didn't do. You made certain claims in
your original post, and I was interested to see if you could substantiate
them. It seems that really you can't, which I don't find too surprising,
because they did not make much sense to me. But I did not want to jump to
that conclusion, which is why I was trying to find out more details.

<<Well, you are obsessing on the fact that I don't recall ten years later
  all the details of how to do fixed point arithmetic in Ada83, which we
  never used except to test.  And, you are proving my basic point, that with
  all that user-supplied information, Ada83 should be able to handle fixed
  point arithmetic.  This is just some added information, clearly not
  absolutely required information, because Ada95 no longer always requires
  it.>>

No, you did not read carefully. Ada 95 absolutely DOES require programmers
to provide intermediate precision precisely. It is just that in certain
cases, this can be implicit in Ada 95. For example

   x := y(A * B) + y(C * D);

the conversions to type y here, specifying the intermediate scale and
precision, are required in Ada 83 and Ada 95, and those are the important
cases. The case which has changed is relatively unimportant:

   x := y(A * B);

Ada 83 required the explicit conversion here, but since the most likely case
is that y is the type of x, Ada 95 allows a shorthand IN THIS SITUATION ONLY
of leaving out the type conversion.
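
Spelled out with invented declarations, the contrast looks like this:

   type Y is delta 0.01 range -10.0 .. 10.0;

   A, B, C, D, X : Y;

   ...

   X := Y (A * B) + Y (C * D);  --  conversions required, Ada 83 and Ada 95
   X := Y (A * B);              --  as required in Ada 83
   X := A * B;                  --  legal shorthand in Ada 95 only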

So it is not at all the case that Ada83 should be able to handle fixed point
arithmetic automatically. Neither Ada 83 nor Ada 95 attempts this. As I noted,
PL/1 tried, but most people (not all, Robin characteristically dissents :-)
would agree that the attempt was a failure, for example the rule that requires

   22 + 5/3

to overflow in PL/1, where

   22 + 05/3

succeeds, is hardly a masterpiece of programming language design, but to be
fair, PL/1 was really trying to solve an impossible problem, so it is not
surprising that its solution has glitches. Indeed the reason that 22 + 5/3
overflows bears looking at, since it is a nice example of good decisions
combining to have unintended consequences.

Question 1: What should be the precision of A/B?
Answer: The maximum precision possible guaranteeing no overflow.

Question 2: What happens when two numbers with different scales are added?
Answer: One is shifted left to line up decimal points (we do not want to
         lose pennies!)

Now 5/3 is a case of one digit divided by one digit, which can have at most
one digit before the decimal point (for the case of 9/1), so the result is

   1.6666666666666666666666666667

Where by definition the number of digits is the maximum number of digits that
can be handled.

Now we add the 22 and

   1.6666666666666666666666666667
  22.0000000000000000000000000000

oops, the 2 fell out the left hand side.
But if the literal is 05 (with a leading zero), then it is two digits, and we
get:

  01.666666666666666666666666667
  22.000000000000000000000000000

It is really hard to get these rules to behave well without anomalies of this
type. To my taste no one has succeeded, and the decision in Ada 83 and Ada 95
to make the programmer specify intermediate precision and scaling seems
exactly right.

<<But, the compilers we used in those days couldn't get it *exactly* right,
  so we didn't use fixed point arithmetic in Ada83.>>

There goes that claim again, but we still have seen no evidence or data to
back up this claim, other than "that must have been it, otherwise we would
not have thought it was the case". NOT very convincing!

Incidentally, if you want to read more about fixed-point issues in Ada,
a good starting point is the special issues report on Ada fixed-point
that I did for the Ada 9X project.





^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-12-02  0:00   ` Robert Dewar
@ 1997-12-02  0:00     ` Matthew Heaney
  1997-12-03  0:00       ` Robert Dewar
  1997-12-03  0:00     ` robin
  1997-12-03  0:00     ` Shmuel (Seymour J.) Metz
  2 siblings, 1 reply; 64+ messages in thread
From: Matthew Heaney @ 1997-12-02  0:00 UTC (permalink / raw)



In article <dewar.881120537@merv>, dewar@merv.cs.nyu.edu (Robert Dewar) wrote:

>Incidentally, if you want to read more about fixed-point issues in Ada,
>a good starting point is the special issues report on Ada fixed-point
>that I did for the Ada 9X project.

I just ordered this paper yesterday.

The Fixed Point Facility In Ada
Robert Dewar
SEI-90-SR-2

No, you can't ftp this document from SEI directly, but you can order this
from NTIS.

National Technical Information Service
800/553-6847
Specify document #ADA221374
US$24.50

You can try to get it cheaper at

Science Application International Corporation
304/284-9000
<mailto:sei@asset.com>
<http://www.asset.com/sei.html>

Defense Technical Information Center
800/225-3842
703/767-8222

You may also want to read Robert's other paper (I ordered that one too):

Shared Variables and Ada 9X Issues
Robert Dewar
SEI-90-SR-1
NTIS #ADA221662
US$24.50

--------------------------------------------------------------------
Matthew Heaney
Software Development Consultant
<mailto:matthew_heaney@acm.org>
(818) 985-1271




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
@ 1997-12-02  0:00 Robert Dewar
  1997-12-02  0:00 ` Joe Gwinn
  1997-12-03  0:00 ` robin
  0 siblings, 2 replies; 64+ messages in thread
From: Robert Dewar @ 1997-12-02  0:00 UTC (permalink / raw)



Joe says

  <<I neither know nor care if this is an Ada language design issue.  I don't
  buy and use language designs or even languages, I buy and use compilers.
  It doesn't much matter if we are today convinced or not; the decision was
  made then, based on the compilers then available.  It may be different
  today, but I don't see many people using fixed point where floating point
  is available.>>

That to me is an attitude that is undesirable. It is very important for
programmers to understand the difference between the semantics of a
language and the behavior of a specific compiler. Otherwise they will
never be able to successfully write portable code. Portable code is
code that strictly obeys the semantics of the language. This is quite
a different criterion than "works with my compiler." Many, perhaps nearly
all problems with portability when using supposedly portable languages
(such as Ada) come from not understanding this distinction, and not being
sufficiently aware of code that uses implementation dependent features.

It is still quite unclear from your description whether the problems you
report are

  1. Language design problems
  2. Compiler implementation problems
  3. Misunderstandings of the Ada fixed-point semantics

I tend to suspect 3 at this stage.

Robert said

  > I think any attempt to automatically determine the scaling of intermediate
  > results in multiplications and divisions in fixed-point is doomed to failure.
  > This just *has* to be left up to the programmer, since it is highly
  > implementation dependent.

Joe said

  <<No, it isn't hopeless.  It's quite simple and mechanical, actually.  If it
  were impossible, why then did Ada83 attempt it?  It's exactly the same
  rules we were taught in grade school for the handling of decimal points
  after arithmetic on decimals, especially multiplication and division.
  Ada83 should have been able to do it, given the information provided when
  the various variables types were defined.>>

Actually it is this paragraph that makes me pretty certain that the problems
may have been misunderstandings of how Ada works. First of all, it is simply
wrong that Ada83 attempts to determine the proper normalization (to use
Joe's term) of multiplication and division. Both the Ada 83 and Ada 95
design require that the programmer specify the scale and precision of
intermediate results for multiplication and division (in fact in Ada 83,
this specification is *always* explicit, in some cases in Ada 95 it is
implicit, but only in trivial cases, e.g. where in Ada 83 you have to
write

   X := Type_Of_X (Y * Z);

In Ada 95, this particular conversion can be omitted.

Secondly, it is not at all true that the rules are obvious or that they
are exactly the same as the "rules we were taught in grade school". If
you write

  x := y * (V1 / V3);

where V1 is 1.0 and V3 is 3.0, then the issue of the precision of the
intermediate value is critical, and no, there are no simple grade school
answers to this question. I suggest looking at PL/1 for an attempt at
defining this, an attempt which most consider a failure. As I mentioned
earlier, COBOL simply gives up in the COMPUTE statement and says that
the results are implementation dependent.

Robert Dewar





^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-12-02  0:00     ` Matthew Heaney
@ 1997-12-03  0:00       ` Robert Dewar
  0 siblings, 0 replies; 64+ messages in thread
From: Robert Dewar @ 1997-12-03  0:00 UTC (permalink / raw)



Matthew said

<<The Fixed Point Facility In Ada
Robert Dewar
SEI-90-SR-2

No, you can't ftp this document from SEI directly, but you can order this
from NTIS.
>>

Someone told me by email recently that this was available online somewhere,
but I forget where. If anyone knows, can they tell people where?





^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-12-03  0:00     ` robin
@ 1997-12-03  0:00       ` Robert Dewar
  0 siblings, 0 replies; 64+ messages in thread
From: Robert Dewar @ 1997-12-03  0:00 UTC (permalink / raw)



Robin says

<<Rubbish.  If evaluation of this expression fails, it
is because of exceeding the capacity of the fixed-point
multiplier.  It has -- incidentally -- nothing to do
with the division.

The above is the WRONG way to compute an expression
using fixed-point facilities.

You would write it x = (y * V1) / V3;
>>

No that's quite wrong, the fixed point expressions

   y * (V1 / V3)

and

   (y * V1) / V3

are quite different expressions (to substitute one for the other, for
example in the computation of interest in a bond, would not only be
an error, but might actually be a violation of the law, since the
details of such calculations are often contained in the bond instruments
and mandated).

These two expressions give quite different values; it is wrong to say
that one is right and one is wrong. Which one is right depends on the
problem at hand, and the intermediate precision required in both cases
must be carefully controlled.
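
For a concrete instance of the difference, take a two-decimal-place
type (an Ada 95 decimal fixed-point sketch, assuming a compiler that
supports decimal types; the values are invented):

   type Money is delta 0.01 digits 10;

   Y  : constant Money := 300.00;
   V1 : constant Money := 1.00;
   V3 : constant Money := 3.00;

   R1 : constant Money := Money (Y * Money (V1 / V3));
   --  V1/V3 becomes 0.33, so R1 = 99.00

   R2 : constant Money := Money (Money (Y * V1) / V3);
   --  300.00/3.00 is exact, so R2 = 100.00

A whole dollar's difference, from nothing but the choice of
intermediate precision.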

I can easily see how the fixed-point facilities in PL/1 would seduce
PL/1 enthusiasts into such errors. It is one of the reasons that the
fixed-point design in PL/1 is generally considered a failure. Please
note I am talking a very specific language design point here. I quite
realize that most language designers consider the whole of PL/1 a
dismal failure, but I don't go that far, PL/1 is not nearly as bad
as its reputation, but it did make a number of serious mistakes,
and the fixed-point design was one of them.





^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-12-02  0:00   ` Robert Dewar
  1997-12-02  0:00     ` Matthew Heaney
  1997-12-03  0:00     ` robin
@ 1997-12-03  0:00     ` Shmuel (Seymour J.) Metz
  1997-12-03  0:00       ` Matthew Heaney
                         ` (2 more replies)
  2 siblings, 3 replies; 64+ messages in thread
From: Shmuel (Seymour J.) Metz @ 1997-12-03  0:00 UTC (permalink / raw)



Robert Dewar wrote:
 
> So it is not at all the case that Ada83 should be able to handle fixed point
> arithmetic automatically. Neither Ada 83 nor Ada 95 attempts this. As I noted,
> PL/1 tried, but most people (not all, Robin characteristically dissents :-)
> would agree that the attempt was a failure, 

Most working PL/I programmers would disagree. Strangely enough, PL/I is
like Ada (or any other programming language) in that you will get
unexpected results if you try to program before learning the language.

Periodically the PL/I people have put together task groups to redefine
the way PL/I handles precision and conversion. Invariably they have come
to the conclussion that the current way is better than any of the
proposed alternative.

BTW, very few real programs have constructs like
constant+constant/constant; usually at least one term will be a
variable. Use default precision in the statement

	A = 1 + B/3;

and the "problem" disappears?

Is it perfect? No. But it's no worse than the anomalies in Ada fixed
point, and those are manageable.

-- 

                        Shmuel (Seymour J.) Metz
                        Senior Software SE

The values in from and reply-to are for the benefit of spammers:
reply to domain eds.com, user msustys1.smetz or to domain gsg.eds.com,
user smetz. Do not reply to spamtrap@library.lspace.org




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-12-02  0:00   ` Ken Garlington
@ 1997-12-03  0:00     ` Joe Gwinn
  1997-12-04  0:00       ` Robert Dewar
  0 siblings, 1 reply; 64+ messages in thread
From: Joe Gwinn @ 1997-12-03  0:00 UTC (permalink / raw)



In article <3484D37E.E68@nospam.flash.net>,
Ken.Garlington@nospam.computer.org wrote:

> Joe Gwinn wrote:
> > 
> > Portability is less important than workability.  And, we were always assured that use
> > of Ada guaranteed portability.  Apparently that was a lie; we also need to
> > be Ada experts to achieve portability?  Just like all other languages?
> 
> In my experience - absolutely. If you assume that X is portable just
> because it happens to work when processed with a particular vendor's 
> implementation of X, you are probably going to be disappointed.

Yep.  Absolutely.  I think I made just this point three or four times, but
it's become hard to follow.

In truth, I was just twisting Robert's tail.  With great success, too. 
Snapped the hook right out of my hands.  He didn't catch the irony at all,
answering with yards of riposte text.   After all that abuse, it was time
to give as well as to get.


> (Just to save you grief in the future, feel free to plug in any of the
> following for
> X: Ada, C++, Java, POSIX, CORBA, etc.)

Yep.  Don't forget fortran and assembly, or COBOL.


> > Well, you are obsessing on the fact that I don't recall ten years later
> > all the details of how to do fixed point arithmetic in Ada83, which we
> > never used except to test.  And, you are proving my basic point, that with
> > all that user-supplied information, Ada83 should be able to handle fixed
> > point arithmetic.  This is just some added information, clearly not
> > absolutely required information, because Ada95 no longer always requires
> > it.
> 
> Ada no longer requires it for trivial cases. For many intermediate
> calculations
> of this type, you still have to explicitly specify the resulting type.

The original point of departure was my comment that the die was cast in
the bad old days of Ada83 compilers that didn't handle fixed point
arithmetic well enough for the ordinary programmers to use without greater
difficulty and danger than they were willing to undertake.  Perhaps, if
they were only smarter, or in less of a hurry, or better educated, they
would have seen the error of their ways.  Sadly, it was not to be.  Thus
have many technologies died.

Now, with floating point hardware almost universal, the issue is moot.


> > And, I really don't see why it's necessary to deny the failings of
> > compilers that have been obsolete for at least a decade, for a now
> > superceeded language.  Surely we can find something more current and
> > relevant to worry about, to argue about.
> 
> I'm confused. If the behavior you described is not relevant, why did you
> bring it up?

Because Robert attacked me simply for saying that there was a problem back
then.  My point here is that I don't know why he cared that much about
such an ancient issue.  Never mind the now fuzzy details.  It very much
has the flavor of refighting yesterday's war.  Tiresome, and quite
pointless.  

It's not that we can't find something current to argue about, one where
the outcome could have a material effect on something.


Joe Gwinn




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-12-03  0:00     ` Shmuel (Seymour J.) Metz
  1997-12-03  0:00       ` Matthew Heaney
  1997-12-03  0:00       ` Robert Dewar
@ 1997-12-03  0:00       ` Robert Dewar
  2 siblings, 0 replies; 64+ messages in thread
From: Robert Dewar @ 1997-12-03  0:00 UTC (permalink / raw)



Seymour J says

<<BTW, very few real programs have constructs like
constant+constant/constant; usually at least one term will be a
variable. Use default precision in the statement

        A = 1 + B/3;

and the "problem" disappears?
>>

Maybe and maybe not. Depending on the scale and precision of B, you may
prefer to write

        A = 1 + B/03;

which has different semantics. I find changing the number of leading zeroes
in a decimal constant to be a non-intuitive way of controlling the scale
and precision of the intermediate result, and I think it is FAR safer to
make the programmer think explicitly about what scale and precision is
required, define a type that encapsulates this decision, and write

        A := 1.0 + Inttype (B / 3.0);

Yes, it requires more work from the programmer, but only work that really
is quite necessary. Fixed-point requires manual scaling, which is why
people prefer floating-point. To think that fixed-point semantics can
be effectively automated so it really functions as a poor man's floating-
point where everything is done right automatically is, I am afraid, naive.
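
A sketch of that style (the type name and scale are invented, decimal
fixed-point support is assumed, and the divisor is written as a typed
constant to keep the division's operand types explicit):

   type Inttype is delta 0.001 digits 12;  --  the chosen intermediate scale

   Three : constant Inttype := 3.0;
   A, B  : Inttype;

   ...

   A := 1.0 + Inttype (B / Three);
   --  the conversion pins down the scale and precision of the
   --  quotient; nothing is left to default-precision rules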





^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-12-03  0:00     ` Shmuel (Seymour J.) Metz
  1997-12-03  0:00       ` Matthew Heaney
@ 1997-12-03  0:00       ` Robert Dewar
  1997-12-03  0:00       ` Robert Dewar
  2 siblings, 0 replies; 64+ messages in thread
From: Robert Dewar @ 1997-12-03  0:00 UTC (permalink / raw)



Seymour says

<<Most working PL/I programmers would disagree. Strangely enough, PL/I is
like Ada (or any other programming language) in that you will get
unexpected results if you try to program before learning the language.

Periodically the PL/I people have put together task groups to redefine
the way PL/I handles precision and conversion. Invariably they have come
to the conclussion that the current way is better than any of the
proposed alternative.
>>

The fact that such task groups are periodically put together is as clear
an indication as one could get that there are problems. That they do not
succeed in improving things, especially given the burden that compatibility
imposes in this case, is hardly surprising.

What is quite significant here is that despite a significant desire in the
COBOL community to solve the same problem (and better define the semantics
of COMPUTE, now left implementation dependent), the COBOL community has
NOT decided to follow the PL/1 direction here, and I think that is definitely
wise, since I find the COBOL semantics preferable to those in PL/1.

Basically, when you avoid COMPUTE, as most COBOL programmers do, then you
must give intermediate scales and precisions explicitly, which, as I have
noted before, clearly seems the right approach to me. Ada reached a nice
compromise, where the presence of type abstractions allows the basic
expression notation to be used, while retaining the requirement for
providing intermediate precisions and scales where they cannot be
reliably deduced by the compiler (rather than adopting the PL/1 approach
of resolving such situations with arbitrary rules that can have surprising
consequences).





^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-12-03  0:00     ` Shmuel (Seymour J.) Metz
@ 1997-12-03  0:00       ` Matthew Heaney
  1997-12-04  0:00         ` Shmuel (Seymour J.) Metz
  1997-12-03  0:00       ` Robert Dewar
  1997-12-03  0:00       ` Robert Dewar
  2 siblings, 1 reply; 64+ messages in thread
From: Matthew Heaney @ 1997-12-03  0:00 UTC (permalink / raw)



In article <3485A850.3A92@gsg.eds.com>, nospam@gsg.eds.com wrote:

>Is it perfect? No. But it's no worse than the anomalies in Ada fixed
>point, and those are manageable.

I'm confused.  What fixed point anomalies in Ada are you referring to?  Can
you elaborate?

--------------------------------------------------------------------
Matthew Heaney
Software Development Consultant
<mailto:matthew_heaney@acm.org>
(818) 985-1271




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-12-01  0:00                         ` Robert Dewar
  1997-12-01  0:00                           ` Joe Gwinn
@ 1997-12-03  0:00                           ` robin
  1 sibling, 0 replies; 64+ messages in thread
From: robin @ 1997-12-03  0:00 UTC (permalink / raw)



	dewar@merv.cs.nyu.edu (Robert Dewar) writes:


	>As to expecting the language to do scaling automatically, I think this is
	>a mistake for fixed-point. PL/1 tried and failed,

PL/I did not fail.  Automatic scaling of fixed-point intermediate
results is highly successful.  It works on the principle
of preserving as many digits after the binary/decimal point
as possible.

	> and COBOL certainly
	>does not succeed (a common coding rule in COBOL is never to use the
	>COMPUTE verb, precisely because the scaling is not well defined).

	>I think any attempt to automatically determine the scaling of intermediate
	>results in multiplications and divisions in fixed-point is doomed to failure.

It isn't really, but the user needs to understand how fixed-point
arithmetic -- with a fractional part -- works.

	>This just *has* to be left up to the programmer, since it is highly
	>implementation dependent.




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-11-22  0:00 Matthew Heaney
                   ` (3 preceding siblings ...)
  1997-11-24  0:00 ` Vince Del Vecchio
@ 1997-12-03  0:00 ` robin
  4 siblings, 0 replies; 64+ messages in thread
From: robin @ 1997-12-03  0:00 UTC (permalink / raw)




	>Matthew Heaney <mheaney@ni.net> wrote:

	>Anyway, I'll accept that fixed point operations are slower, but the
	>question remains: Why do most programmers insist on using floating point
	>when a fixed point better models the abstraction?  Is the reason solely
	>based on efficiency?

	>By the way, are there off-the-shelf subprograms (in Ada preferably, but I'd
	>take any language) for computing the elementary functions (sin, cos, etc)
	>of a fixed point angle?

The following evaluates sine using fixed point, for a limited range of
angles.
_____________________________________________________

/* Copyright (c) 1996, 1997 by R. A. Vowels. */
/* This subroutine computes the sine of an angle using fixed-point binary arithmetic. */
SINE:
	PROCEDURE(XX) OPTIONS (REORDER) RETURNS (FIXED BINARY(31,28));
		DECLARE XX FIXED BINARY(31,24);
		DECLARE X		FIXED BINARY (31,28);
		DECLARE Sum		FIXED BINARY (31,28);
		DECLARE Term		FIXED BINARY (31,28);
		DECLARE (J, K)		FIXED BINARY;
		DECLARE Neg		BIT(1);

   DECLARE (HiX, HiY) FIXED BINARY (31,14);
   DECLARE (LoX, LoY) FIXED BINARY (31,28);
   DECLARE (ProdHiXLoY, ProdHiYLoX, Prod) FIXED BINARY (31,28);

/* This macro procedure forms the product of the fixed-point arguments X1 and Y1. */
%MULT: PROCEDURE(LHS, X1, Y1);
		DECLARE (LHS, X1, Y1) CHARACTER;
			ANSWER ('HiX = ' || X1 || ';') SKIP; /* Extract the upper 14 bits of X. */
			ANSWER ('HiY = ' || Y1 || ';') SKIP; /* Extract the upper 14 bits of Y. */
			ANSWER ('LoX = ' || X1 || ' - HiX;' ) SKIP; /* Extract the lower 14 bits of X. */
			ANSWER ('LoY = ' || Y1 || ' - HiY;' ) SKIP; /* Extract the lower 14 bits of Y. */

			ANSWER ('Prod = HiX * HiY;') SKIP;  /* the product of the high bits. */
			ANSWER ('ProdHiXLoY = HiX * LoY;') SKIP; /* Product of the upper bits of X, lower bits of Y. */
			ANSWER ('ProdHiYLoX = HiY * LoX;') SKIP; /* Product of the upper bits of Y, lower bits of X. */
			ANSWER (LHS || ' = Prod + ProdHiXLoY + ProdHiYLoX') SKIP; /* Sum them all. */

%END MULT;
%ACTIVATE MULT NORESCAN;


			/* The following code segment evaluates the sine of an angle x using the   */
			/* first 6 terms of the Taylor series:                                     */
			/* sine x = x - x**3/3! + x**5/5! - x**7/7! + x**9/9! - x**11/11!          */
			/* Range: -pi <= x <= pi                                                   */
			/* Accuracy: 8 decimal places (typically better than 7 significant digits).*/
			X = XX;

			Sum = X;
			Neg = X < 0;
			IF Neg THEN X = -X;
			IF X > 1.5707963 THEN /*sin(x) = sin(180-x). */
				X = 3.1415927 - X;

			/* For small angles, the radian angle is closer to the sine than this approximation. */
			/* Therefore, we only execute the following DO group for larger angles. */
			IF X > 0.002 THEN
				DO;


					Sum = X;
					Term = X / 6;
					MULT(Term, Term, X);
					MULT(Term, Term, X);   /* To give x**3/3! */
					Sum = Sum - Term;

					Term = Term / 20;
					MULT(Term, Term, X);
					MULT(Term, Term, X);   /* To give x**5/5! */
					Sum = Sum + Term;

					Term = Term / 42;
					MULT(Term, Term, X);
					MULT(Term, Term, X);   /* To give x**7/7! */
					Sum = Sum - Term;

					Term = Term / 72;
					MULT(Term, Term, X);
					MULT(Term, Term, X);   /* To give x**9/9! */
					Sum = Sum + Term;

					Term = Term / 110;
					HiX = X;
					Term = Term * HiX;
					Term = Term * HiX;   /* To give x**11/11! */
					Sum = Sum - Term;

					IF Neg THEN Sum = -Sum;
				END;
   RETURN (Sum);
END SINE;
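
For comparison, the same Taylor-series idea using an Ada 95 fixed-point
type, as a rough sketch (the type, its delta, and the number of terms
are illustrative; it assumes the caller has already reduced the angle
to -pi/2 .. pi/2, as the PL/I code above does):

   type Radians is delta 2.0**(-24) range -4.0 .. 4.0;

   function Sine (X : Radians) return Radians is
      X2   : constant Radians := Radians (X * X);  --  at most ~2.47
      Term : Radians := X;
      Sum  : Radians := X;
   begin
      Term := Radians (Term * X2) / 6;   --  now x**3/3!
      Sum  := Sum - Term;
      Term := Radians (Term * X2) / 20;  --  now x**5/5!
      Sum  := Sum + Term;
      Term := Radians (Term * X2) / 42;  --  now x**7/7!
      return Sum - Term;
   end Sine;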




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-12-02  0:00 Robert Dewar
  1997-12-02  0:00 ` Joe Gwinn
@ 1997-12-03  0:00 ` robin
  1 sibling, 0 replies; 64+ messages in thread
From: robin @ 1997-12-03  0:00 UTC (permalink / raw)



	dewar@merv.cs.nyu.edu (Robert Dewar) writes:

	>  > I think any attempt to automatically determine the scaling of intermediate
	>  > results in multiplications and divisions in fixed-point is doomed to failure.
	>  > This just *has* to be left up to the programmer, since it is highly
	>  > implementation dependent.

	>Joe said

	>  <<No, it isn't hopeless.  It's quite simple and mechanical, actually.  If it
	>  were imposssible, why then did Ada83 attempt it?  It's exactly the same
	>  rules we were taught in grade school for the handling of decimal points
	>  after arithmetic on decimals, especially multiplication and division.
	>  Ada83 should have been able to do it, given the information provided when
	>  the various variables types were defined.>>

	>Actually it is this paragraph that makes me pretty certain that the problems
	>may have been misunderstandings of how Ada works. First of all, it is simply
	>wrong that Ada83 attempts to determine the proper normalization (to use
	>Joe's term) of multiplication and division. Both the Ada 83 and Ada 95
	>design require that the programmer specify the scale and precision of
	>intermediate results for multiplication and division (in fact in Ada 83,
	>this specification is *always* explicit, in some cases in Ada 95 it is
	>implicit, but only in trivial cases, e.g. where in Ada 83 you have to
	>write

	>   X := Type_Of_X (Y * Z);

	>In Ada 95, this particular conversion can be omitted.

	>Secondly, it is not at all true that the rules are obvious or that they
	>are exactly the same as the "rules we were taught in grade school". If
	>you write

	>  x := y * (V1 / V3);

	>where V1 is 1.0 and V3 is 3.0, then the issue of the precision of the
	>intermediate value is critical, and no, there are no simple grade school
	>answers to answer this. I suggest looking at PL/1 for an attempt at
	>defining this, an attempt which most consider a failure.

Rubbish.  If evaluation of this expression fails, it
is because of exceeding the capacity of the fixed-point
multiplier.  It has -- incidentally -- nothing to do
with the division.

The above is the WRONG way to compute an expression
using fixed-point facilities.

You would write it x = (y * V1) / V3;

This way gives the fullest accuracy; the way suggested,
x = y * (V1/V3);
succumbs to truncation error, which is duly
magnified by the multiplication!
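
The same point can be made in Ada; a small sketch (names are
illustrative, and the exact truncated value depends on the small the
compiler picks for a delta of 0.001):

   with Ada.Text_IO;

   procedure Order_Demo is
      type F is delta 0.001 range -100.0 .. 100.0;
      Y  : constant F := 9.0;
      V1 : constant F := 1.0;
      V3 : constant F := 3.0;
   begin
      --  Multiply first, divide last: one rounding, at the end.
      Ada.Text_IO.Put_Line (F'Image (F (F (Y * V1) / V3)));  --  3.000
      --  Divide first: V1/V3 is cut to a multiple of F'Small, and the
      --  error is then magnified by the multiplication.
      Ada.Text_IO.Put_Line (F'Image (F (Y * F (V1 / V3))));  --  about 2.997
   end Order_Demo;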

Incidentally, in PL/I, if the evaluation of the multiplication
overflows, it is because the intermediate precision
would be too great for the hardware unit.
The same problem would occur if you wrote:

x = y * 0.33333333333333;

If y has 1 or more decimal places, overflow may occur
because the number of places (in the product) after
the point exceeds the capacity of the hardware unit.

In PL/I, in such situations, when one or both operands
have a large number of digits, you specify the precision of
the intermediate product using the MULTIPLY built-in
function.  In this manner, you avoid fixed-point overflow.

	>As I mentioned
	>earlier, COBOL simply gives up in the COMPUTE statement and says that
	>the results are implementation dependent.

	>Robert Dewar




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-12-02  0:00   ` Robert Dewar
  1997-12-02  0:00     ` Matthew Heaney
@ 1997-12-03  0:00     ` robin
  1997-12-03  0:00       ` Robert Dewar
  1997-12-03  0:00     ` Shmuel (Seymour J.) Metz
  2 siblings, 1 reply; 64+ messages in thread
From: robin @ 1997-12-03  0:00 UTC (permalink / raw)



	dewar@merv.cs.nyu.edu (Robert Dewar) writes:

	>Joe said:

	><<Portability was not an issue .. and so was never raised.

	>  However, now that you have raised the issue, some comments:  Portability
	>  is less important than workability.  And, we were always assured that use
	>  of Ada guaranteed portability.  Apparently that was a lie; we also need to
	>  be Ada experts to achieve portability?  Just like all other languages?

	>I always find it amazing that anyone would ever have, even for a moment,
	>thought that using an Ada compiler would somehow guarantee that their code
	>was portable. And it is even more amazing that someone with this strange
	>viewpoint would turn around later and complain that they were lied to.

	>Really Joe, surely you know that you cannot achieve portability without
	>some care on the programmer's part. Yes, it is true that programs can be
	>written in Ada that are portable. BUT, and this is a big BUT, you do have
	>to know the language. No, you do not have to be an Ada expert, but you DO
	>have to know the language, and you do have to know the language well enough
	>so that you know the difference between Ada the language, and what your
	>compiler happens to implement for implementation dependent constructs.
	>If you don't know Ada, and you don't care about portability (as was
	>apparently the case in your project -- by don't know Ada here, I mean
	>that you do not bother to carefully study the definition of the language),
	>then you definitely CANNOT write portable code in Ada.

	>If anyone ever promised you that using an Ada compiler would guarantee
	>that your programs were portable, they were an ignorant fool, and it
	>surprises me that anyone would believe such nonsense. Certainly you never
	>heard anything of the kind from anyone who knew anything about the language!

	><<Well, too many people failed to make it work for #3 to be a likely cause,
	>  and the compilers of the day are known to have been a work in progress.

	>  This has degenerated into a pointless yes-you-did, no-I-didn't cycle.
	>  Suffice it to say that we disagree on a minor issue too far in the past to
	>  remember the details clearly enough to have a useful debate, assuming of
	>  course that the limitations of 1980s compilers have any relevance
	>  whatsoever in the 1990s.>>

	>I have NO clear idea what you did or didn't do. You made certain claims in
	>your original post, and I was interested to see if you could substantiate
	>them. It seems that really you can't, which I don't find too surprising,
	>because they did not make much sense to me. But I did not want to jump to
	>that conclusion, which is why I was trying to find out more details.

	> [snip]

	>No, you did not read carefully. Ada 95 absolutely DOES require programmers
	>to provide intermediate precision precisely. It is just that in certain
	>cases, this can be implicit in Ada 95. For example

	>   x := y(A * B) + y(C * D);

	>the conversions to type y here, specifying the intermediate scale and
	>precision are required in Ada 83 and Ada 95, and those are the important
	>cases. The case which has changed is relatively unimportant:

	>   x := y(A * B);

	>Ada 83 required the explicit conversion here, but since the most likely case
	>is that y is the type of x, Ada 95 allows a shorthand IN THIS SITUATION ONLY
	>of leaving out the type conversion.

	>So it is not at all the case that Ada83 should be able to handle fixed point
	>arithmetic automatically. Neither Ada 83 nor Ada 95 attempts this. As I noted,
	>PL/1 tried, but most people (not all, Robin characteristically dissents :-)
	>would agree that the attempt was a failure, for example the rule that requires

	>   22 + 5/3

	>to overflow in PL/1, where

	>   22 + 05/3

	>succeeds, is hardly a masterpiece of programming language
	>design,

Do you have something better?

This silly example has been bandied about on quite a few
occasions to "show" that the conversion
rules for fixed-point working in PL/I are silly or not
optimal.

In fact, what you are doing is adding two very large numbers,
both of which are at the capacity of the decimal arithmetic
unit.

i.e. You are asking, what is 22.00000000000000 + 1.66666666666667 ?
on a machine with 15 digits of precision.

Of course it will overflow.  That's the nature of fixed-point
working.

The fix is to reduce the precision of one operand
so as to have at least one fewer digit after the point.  (There's
a standard function for this.)
And BTW, PL/I detects the overflow, should it occur,
and tells you.


	> but to be fair, PL/1 was really trying to solve an
	>impossible problem, so it is not surprising that its
	>solution has glitches. Indeed the reason that 22 + 5/3
	>overflows bears looking at, since it is a nice example
	>of good decisions
	>combining to have unintended consequences.

	>Question 1: What should be the precision of A/B?
	>Answer: The maximum precision possible guaranteeing no overflow.

	>Question 2: What happens when two numbers with different scales are added?
	>Answer: One is shifted left to line up decimal points (we do not want to
	>         lose pennies!)

	>Now 5/3 is a case of one digit divided by one digit, which can have at most
	>one digit before the decimal point (for the case of 9/1), so the result is

	>   1.6666666666666666666666666667

PL/I doesn't round up.

	>Where by definition the number of digits is the maximum number of digits that
	>can be handled.

	>Now we add the 22 and

	>   1.6666666666666666666666666667
	>  22.0000000000000000000000000000

	>oops, the 2 fell out the left hand side.
	>But if the literal is 05 (with a leading zero), then it is two digits, and we
	>get:

	>  01.666666666666666666666666667
	>  22.000000000000000000000000000

	>It is really hard to get these rules to behave well without anomalies of this
	>type. To my taste no one has succeeded, and the decision in Ada 83 and Ada 95
	>to make the programmer specify intermediate precision and scaling seems
	>exactly right.

	><<But, the compilers we used in those days couldn't get it *exactly* right,
	>  so we didn't use fixed point arithmetic in Ada83.>>

	>There goes that claim again, but we still have seen no evidence or data to
	>back up this claim, other than "that must have been it, otherwise we would
	>not have thought it was the case". NOT very convincing!

	>Incidentally, if you want to read more about fixed-point issues in Ada,
	>a good starting point is the special issues report on Ada fixed-point
	>that I did for the Ada 9X project.




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-12-03  0:00     ` Joe Gwinn
@ 1997-12-04  0:00       ` Robert Dewar
  1997-12-04  0:00         ` Shmuel (Seymour J.) Metz
  0 siblings, 1 reply; 64+ messages in thread
From: Robert Dewar @ 1997-12-04  0:00 UTC (permalink / raw)



Joe said

<<Because Robert attacked me simply for saying that there was a problem back
then.  My point here is that I don't know why he cared that much about
such an ancient issue.  Never mind the now fuzzy details.  It very much
has the flavor of refighting yesterday's war.  Tiresome, and quite
pointless.
>>

I am really puzzled by this response. Joe said there were problems back
then. Fine, I wanted to find out what they were, doesn't seem like an
attack to me! But I must say, my attempts to get more detail on Joe's
claims have been entirely in vain, not for want of trying.

It is fine for people to complain that they had problems because of
incorrect implementation of fixed-point in early compilers, but only
if that really is the case, and I was dubious. I must say this entire
discussion has not removed the doubt I have that this claim is real.

I know Ada fixed-point pretty well, and I have followed its use quite
closely. I am aware of some difficulties in implementations, particularly
restrictions on non-binary small values, but I must say I am not aware
of any difficulties that would correspond to Joe's vague recollections,
so given Joe's inability to recall any details at all, I am inclined
to think that the best guess is that the problems came from not
understanding the language properly at this point (I certainly am
very aware of the level of confusion that people have over fixed-point,
which is one of the reasons I wrote the Ada 9X special report -- Joe
I recommend reading this for a better perspective on the issues).

As for your comments on portability being deliberate irony, I have no way
of telling which of your dubious statements are intended as irony and
which are serious :-) :-)





^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-12-04  0:00       ` Robert Dewar
@ 1997-12-04  0:00         ` Shmuel (Seymour J.) Metz
  0 siblings, 0 replies; 64+ messages in thread
From: Shmuel (Seymour J.) Metz @ 1997-12-04  0:00 UTC (permalink / raw)



Robert Dewar wrote:

> I know Ada fixed-point pretty well, and I have followed its use quite
> closely. I am aware of some difficulties in implementations, particularly
> restrictions on non-binary small values, but I must say I am not aware
> of any difficulties that would correspond to Joe's vague recollections,
> so given Joe's inability to recall any details at all, I am inclined
> to think that the best guess is that the problems came from not
> understanding the language properly at this point (I certainly am
> very aware of the level of confusion that people have over fixed-point,
> which is one of the reasons I wrote the Ada 9X special report -- Joe
> I recommend reading this for a better perspective on the issues).

While I found the Ada 83 rules for fixed point to be more complicated
than those of PL/I, they were understandable and the Alsys compiler that
we used for the Intel 80286 didn't seem to have any trouble handling
them. Not that we didn't run into compiler bugs, but they were mostly
minor and were in other areas. Maybe we just lucked out, but my
experience doesn't match Joe's.

-- 

                        Shmuel (Seymour J.) Metz
                        Senior Software SE

The values in from and reply-to are for the benefit of spammers:
reply to domain eds.com, user msustys1.smetz or to domain gsg.eds.com,
user smetz. Do not reply to spamtrap@library.lspace.org




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-12-03  0:00       ` Matthew Heaney
@ 1997-12-04  0:00         ` Shmuel (Seymour J.) Metz
  1997-12-04  0:00           ` Robert Dewar
  0 siblings, 1 reply; 64+ messages in thread
From: Shmuel (Seymour J.) Metz @ 1997-12-04  0:00 UTC (permalink / raw)



I'd have to double check whether it's been fixed in Ada 95, but it used
to be that you had to define fixed point types with a range slightly
larger than the intended range in order to ensure that the arithmetic
model mapped properly into the problem domain.

Matthew Heaney wrote:
> 
> In article <3485A850.3A92@gsg.eds.com>, nospam@gsg.eds.com wrote:
> 
> >Is it perfect? No. But it's no worse than the anomalies in Ada fixed
> >point, and those are manageable.
> 
> I'm confused.  What fixed point anomalies in Ada are you refering to?  Can
> you elaborate?
> 
> --------------------------------------------------------------------
> Matthew Heaney
> Software Development Consultant
> <mailto:matthew_heaney@acm.org>
> (818) 985-1271

-- 

                        Shmuel (Seymour J.) Metz
                        Senior Software SE

The values in from and reply-to are for the benefit of spammers:
reply to domain eds.com, user msustys1.smetz or to domain gsg.eds.com,
user smetz. Do not reply to spamtrap@library.lspace.org




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  1997-12-04  0:00         ` Shmuel (Seymour J.) Metz
@ 1997-12-04  0:00           ` Robert Dewar
  0 siblings, 0 replies; 64+ messages in thread
From: Robert Dewar @ 1997-12-04  0:00 UTC (permalink / raw)



Seymour says

<<I'd have to double check whether it's been fixed in Ada 95, but it used
to be that you had to define fixed point types with a range slightly
larger than the intended range in order to ensure that the arithmetic
model mapped properly into the problem domain.
>>

Seymour is referring to the permission, present in both Ada 83 and Ada 95,
to shrink end-points by delta. This allows the convenience of such
declarations as

   type x is delta 2 ** -31 range -1.0 .. +1.0;

whereas if this freedom did not exist, one would have to say

   type x is delta 2 ** -31 range -1.0 .. +1.0 - 2 ** (-31);

(assuming a 2's complement machine ...)

Of course in practice if you write a declaration like

   type x is delta 0.25 range -10.0 .. +10.0;

the endpoints *will* be included.

This was simply a choice of which case to require the explicit fudging
in, and does indeed cause a fair amount of confusion.
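
A quick way to see which choice a given compiler made is to print the
bounds and the small; a sketch (the commented values are typical for a
32-bit two's complement representation, not guaranteed):

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Show_Bounds is
      type X is delta 2.0 ** (-31) range -1.0 .. 1.0;
   begin
      Put_Line ("First = " & X'Image (X'First));  --  typically -1.0
      Put_Line ("Last  = " & X'Image (X'Last));   --  typically 1.0 - 2.0**(-31)
      Put_Line ("Small = " & Long_Float'Image (Long_Float (X'Small)));
   end Show_Bounds;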





^ permalink raw reply	[flat|nested] 64+ messages in thread

* fixed point vs floating point
@ 2011-09-29 10:25 RasikaSrinivasan@gmail.com
  2011-09-29 10:49 ` AdaMagica
  2011-09-30 10:17 ` Stephen Leake
  0 siblings, 2 replies; 64+ messages in thread
From: RasikaSrinivasan@gmail.com @ 2011-09-29 10:25 UTC (permalink / raw)


friends

I am investigating the applicability of fixed point to a numerical
problem. I would like to develop the algorithm as a generic and test
with different floating and fixed point types to decide which one to
go with.

Questions:

- ada.numerics family is pretty much floating point only - is this
correct?
- can we design a generic (function or procedure) that can accept
either fixed point or floating point data types at the same time
excluding other types

thanks for hints/pointers,

srini



^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  2011-09-29 10:25 fixed point vs floating point RasikaSrinivasan@gmail.com
@ 2011-09-29 10:49 ` AdaMagica
  2011-09-29 13:38   ` Martin
  2011-09-30 10:17 ` Stephen Leake
  1 sibling, 1 reply; 64+ messages in thread
From: AdaMagica @ 2011-09-29 10:49 UTC (permalink / raw)


Unfortunately, fixed and floating point are separate categories of
real types, so there is no generic formal that can serve both.

You have to make the type private and supply all numeric operations
like this:

generic
  type Real is private;
  with function "+" (Left, right: Real) return Real;
  ...
package Numerics is
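
A fuller sketch of the same pattern (names are illustrative). The "is <>"
defaults pick up the actual type's own operators at instantiation; note
that "+", "-" and "<" match directly for both categories, whereas the
predefined fixed-point "*" and "/" return universal_fixed and so would
need wrapper functions:

   generic
      type Real is private;
      with function "+" (Left, Right : Real) return Real is <>;
      with function "-" (Left, Right : Real) return Real is <>;
      with function "<" (Left, Right : Real) return Boolean is <>;
   package Numerics is
      function Clamp (Value, Lo, Hi : Real) return Real;
   end Numerics;

   package body Numerics is
      function Clamp (Value, Lo, Hi : Real) return Real is
      begin
         if Value < Lo then
            return Lo;
         elsif Hi < Value then
            return Hi;
         else
            return Value;
         end if;
      end Clamp;
   end Numerics;

   --  Both categories instantiate the same generic:
   --  package Float_Num is new Numerics (Float);
   --  type Frac is delta 2.0 ** (-15) range -1.0 .. 1.0;
   --  package Fixed_Num is new Numerics (Frac);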



^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  2011-09-29 10:49 ` AdaMagica
@ 2011-09-29 13:38   ` Martin
  0 siblings, 0 replies; 64+ messages in thread
From: Martin @ 2011-09-29 13:38 UTC (permalink / raw)


On Sep 29, 11:49 am, AdaMagica <christ-usch.gr...@t-online.de> wrote:
> Unfortunately, fixed and floating point are separate categories of
> real types, so there is no generic formal that can serve both.
>
> You have to make the type private and supply all numeric operations
> like this:
>
> generic
>   type Real is private;
>   with function "+" (Left, right: Real) return Real;
>   ...
> package Numerics is

Is there a means of determining that 'Real' isn't an integer type?...I
can't think of one...other than using ASIS.

Could AdaControl spot this?

-- Martin



^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  2011-09-29 10:25 fixed point vs floating point RasikaSrinivasan@gmail.com
  2011-09-29 10:49 ` AdaMagica
@ 2011-09-30 10:17 ` Stephen Leake
  2011-09-30 16:25   ` tmoran
                     ` (3 more replies)
  1 sibling, 4 replies; 64+ messages in thread
From: Stephen Leake @ 2011-09-30 10:17 UTC (permalink / raw)


"RasikaSrinivasan@gmail.com" <rasikasrinivasan@gmail.com> writes:

> friends
>
> I am investigating the applicability of fixed point to a numerical
> problem. I would like to develop the algorithm as a generic and test
> with different floating and fixed point types to decide which one to
> go with.

What sort of criteria are you using to make the decision?

If it's just speed, then the answer will depend more on the hardware and
the level of compiler optimization than on this choice.

The major algorithmic difference between fixed and floating is the
handling of small differences; floating point allows arbitrarily small
differences (down to the exponent limit, of course), while fixed point
has a fixed small difference.

So the choice should be determined by the application, not by
experiment.

The only place I have found fixed point to be useful is for time;
everything else ends up needing to be scaled, so it might as well be
floating point from the beginning.

The other thing that can determine the choice is the hardware; if you
have no floating point hardware, you will most likely need fixed point.
But even then, it depends on your speed requirement. You can do floating
point in software; it's just slower than fixed point on the same hardware.

> - can we design a generic (function or procedure) that can accept
> either fixed point or floating point data types at the same time
> excluding other types

No.

-- 
-- Stephe



^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  2011-09-30 10:17 ` Stephen Leake
@ 2011-09-30 16:25   ` tmoran
  2011-09-30 16:52     ` Dmitry A. Kazakov
  2011-10-01 11:09     ` Stephen Leake
  2011-09-30 19:26   ` tmoran
                     ` (2 subsequent siblings)
  3 siblings, 2 replies; 64+ messages in thread
From: tmoran @ 2011-09-30 16:25 UTC (permalink / raw)


> The only place I have found fixed point to be useful is for time;
> everything else ends up needing to be scaled, so it might as well be
> floating point from the beginning.

  Also for matching instrument or control values, formatting output,
saving memory, interfacing to C stuff, or future proofing.

In embedded devices measurements usually come in implicitly scaled
integers, not float, as do output control values.

If Degrees is fixed point, Degrees'image is much more readable than
if it's in floating point.

Usually real world physical values don't need 32 or more bits of float
for either their range or precision.  If memory size (or IO time) is
an issue, they can be stored in much smaller fixed point format.

Very often values passed to C et al are scaled, eg durations are
milliseconds or seconds or hundredths of seconds, represented as integers,
angles are tenths of a degree integers, and so forth.  Trying to do
calculations remembering the proper scaling is error-prone, but the
compiler will do it correctly if you use fixed point.

Intensities (eg color, sound) are always fractions, but they are usually
represented as if they were integers ranging from 0 ..  15, or 0 ..  255,
or 0 ..  65535.  Code like
  Is_Bright := (Color > 128);
is much more tedious and error-prone to change than
  Is_Bright := (Color > 0.5);
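
A sketch of that reading (names illustrative; a 'Size clause could pack
the type into 8 bits where the compiler supports an unsigned
representation):

   with Ada.Text_IO;

   procedure Intensity_Demo is
      --  A fraction of full scale rather than a raw 0 .. 255 integer.
      type Intensity is delta 1.0 / 256.0 range 0.0 .. 1.0 - 1.0 / 256.0;

      Color     : constant Intensity := 0.75;
      Is_Bright : constant Boolean   := Color > 0.5;
   begin
      Ada.Text_IO.Put_Line (Boolean'Image (Is_Bright));  --  TRUE
   end Intensity_Demo;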



^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  2011-09-30 16:25   ` tmoran
@ 2011-09-30 16:52     ` Dmitry A. Kazakov
  2011-10-01 11:09     ` Stephen Leake
  1 sibling, 0 replies; 64+ messages in thread
From: Dmitry A. Kazakov @ 2011-09-30 16:52 UTC (permalink / raw)


On Fri, 30 Sep 2011 16:25:22 +0000 (UTC), tmoran@acm.org wrote:

> Intensities (eg color, sound) are always fractions, but they are usually
> represented as if they were integers ranging from 0 ..  15, or 0 ..  255,
> or 0 ..  65535.  Code like
>   Is_Bright := (Color > 128);
> is much more tedious and error-prone to change then
>   Is_Bright := (Color > 0.5);

Right, but the arithmetic of color-model intensities is not linear, so
although a fixed point type would be far more convenient for color stimuli,
it would still require redefinition of the operations.

An addition to your list: screen units (horizontal, vertical coordinates).
Traditionally rendering frameworks are using floating point for them, but I
think that fixed point could be more suitable with regard of anti-aliasing
issues etc.

To the OP: an integer type is a special case of decimal fixed point, so I
don't understand your desire to single out signed integer types. For the
modular ones, however, it would indeed make sense.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 64+ messages in thread


* Re: fixed point vs floating point
  2011-09-30 16:25   ` tmoran
  2011-09-30 16:52     ` Dmitry A. Kazakov
@ 2011-10-01 11:09     ` Stephen Leake
  1 sibling, 0 replies; 64+ messages in thread
From: Stephen Leake @ 2011-10-01 11:09 UTC (permalink / raw)


tmoran@acm.org writes:

>> The only place I have found fixed point to be useful is for time;
>> everything else ends up needing to be scaled, so it might as well be
>> floating point from the beginning.
>
>   Also for matching instrument or control values, formatting output,
> saving memory, interfacing to C stuff, or future proofing.
>
> In embedded devices measurements usually come in implicitly scaled
> integers, not float, as do output control values.

Well, yes. I do declare fixed point types that match hardware values.
But they immediately get turned into float (or time fixed point); they
are not used in computations.

> If Degrees is fixed point, Degrees'image is much more readable than
> if it's in floating point.

Put (item, fore, aft, exp) gives the same control.
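
For instance (a sketch; Fore/Aft/Exp are the standard Ada.Text_IO
formatting parameters, and Degrees is an illustrative type):

   with Ada.Text_IO;

   procedure Put_Demo is
      type Degrees is delta 0.1 range 0.0 .. 360.0;
      package Deg_IO is new Ada.Text_IO.Fixed_IO (Degrees);
      package Flt_IO is new Ada.Text_IO.Float_IO (Float);
   begin
      Deg_IO.Put (123.4);  --  " 123.4", using the type's own Fore and Aft
      Ada.Text_IO.New_Line;
      Flt_IO.Put (123.4, Fore => 4, Aft => 1, Exp => 0);  --  " 123.4"
      Ada.Text_IO.New_Line;
   end Put_Demo;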

> Usually real world physical values don't need 32 or more bits of float
> for either their range or precision.  If memory size (or IO time) is
> an issue, they can be stored in much smaller fixed point format.

Yes; these are reasonable criteria.

> Very often values passed to C et al are scaled, eg durations are
> milliseconds or seconds or hundreths of seconds, represented as integers,
> angles are tenths of a degree integers, and so forth.  Trying to do
> calculations remembering the proper scaling is error-prone, but the
> compiler will do it correctly if you use fixed point.

Yes. Luckily, I don't have to do that often at all :). 

-- 
-- Stephe



^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  2011-09-30 10:17 ` Stephen Leake
                     ` (2 preceding siblings ...)
  2011-09-30 22:31   ` tmoran
@ 2011-10-01 13:37   ` RasikaSrinivasan@gmail.com
  2011-10-02 14:19     ` Stephen Leake
  3 siblings, 1 reply; 64+ messages in thread
From: RasikaSrinivasan@gmail.com @ 2011-10-01 13:37 UTC (permalink / raw)


on embedded platforms, it is not often that we have a floating point
processor (or it may come at a price we cannot afford!), so floating
point has to be emulated. fixed point arithmetic may do the job in a
certain class of problems. in my class of problems, i am not sure that
fixed point arithmetic will be sufficient. the experiments are to
understand how the fixed point solutions may diverge from the floating
point solutions.

but the basic answer appears to be that ada generics make this a tiny
bit harder - and unfortunately render Ada.Numerics.* not a viable
option for fixed point data types.

thanks for the insights, srini

On Sep 30, 6:17 am, Stephen Leake <stephen_le...@stephe-leake.org>
wrote:
> "RasikaSriniva...@gmail.com" <rasikasriniva...@gmail.com> writes:
> > friends
>
> > I am investigating the applicability of fixed point to a numerical
> > problem. I would like to develop the algorithm as a generic and test
> > with different floating and fixed point types to decide which one to
> > go with.
>
> What sort of criteria are you using to make the decision?
>
> If it's just speed, then the answer will depend more on the hardware and
> the level of compiler optimization than on this choice.
>
> The major algorithmic difference between fixed and floating is the
> handling of small differences; floating point allows arbitrarily small
> differences (down to the exponent limit, of course), while fixed point
> has a fixed small difference.
>
> So the choice should be determined by the application, not by
> experiment.
>
> The only place I have found fixed point to be useful is for time;
> everything else ends up needing to be scaled, so it might as well be
> floating point from the beginning.
>
> The other thing that can determine the choice is the hardware; if you
> have no floating point hardware, you will most likely need fixed point.
> But even then, it depends on your speed requirement. You can do floating
> point in software; it's just slower than fixed point on the same hardware.
>
> > - can we design a generic (function or procedure) that can accept
> > either fixed point or floating point data types at the same time
> > excluding other types
>
> No.
>
> --
> -- Stephe




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: fixed point vs floating point
  2011-10-01 13:37   ` RasikaSrinivasan@gmail.com
@ 2011-10-02 14:19     ` Stephen Leake
  0 siblings, 0 replies; 64+ messages in thread
From: Stephen Leake @ 2011-10-02 14:19 UTC (permalink / raw)


"RasikaSrinivasan@gmail.com" <rasikasrinivasan@gmail.com> writes:

> in embedded platforms, it is not often we have a floating point
> processor (or it may come at a price which we cannot afford!) and they
> have to be emulated. fixed point arithmetic may do the job in certain
> class of problems. in my class of problems, i am not sure the fixed
> point arithmetic will be sufficient. the experiments are to understand
> how the fixed point solutions may diverge from the floating point
> solutions.

It is not likely that experiment will answer that question, unless the
only issue is speed.

If you need to worry about data range and precision, analysis of the
inputs and algorithms is necessary. It might be possible to develop a
truly representative set of data for testing this, but that requires the
same analysis!

Speed has to be measured, on a representative set of data; it's somewhat
easier to develop a data set that covers all speed issues.

-- 
-- Stephe



^ permalink raw reply	[flat|nested] 64+ messages in thread

end of thread, other threads:[~2011-10-02 14:19 UTC | newest]

Thread overview: 64+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-09-29 10:25 fixed point vs floating point RasikaSrinivasan@gmail.com
2011-09-29 10:49 ` AdaMagica
2011-09-29 13:38   ` Martin
2011-09-30 10:17 ` Stephen Leake
2011-09-30 16:25   ` tmoran
2011-09-30 16:52     ` Dmitry A. Kazakov
2011-10-01 11:09     ` Stephen Leake
2011-09-30 19:26   ` tmoran
2011-09-30 22:31   ` tmoran
2011-10-01 13:37   ` RasikaSrinivasan@gmail.com
2011-10-02 14:19     ` Stephen Leake
  -- strict thread matches above, loose matches on Subject: below --
1997-12-02  0:00 Robert Dewar
1997-12-02  0:00 ` Joe Gwinn
1997-12-02  0:00   ` Ken Garlington
1997-12-03  0:00     ` Joe Gwinn
1997-12-04  0:00       ` Robert Dewar
1997-12-04  0:00         ` Shmuel (Seymour J.) Metz
1997-12-02  0:00   ` Robert Dewar
1997-12-02  0:00     ` Matthew Heaney
1997-12-03  0:00       ` Robert Dewar
1997-12-03  0:00     ` robin
1997-12-03  0:00       ` Robert Dewar
1997-12-03  0:00     ` Shmuel (Seymour J.) Metz
1997-12-03  0:00       ` Matthew Heaney
1997-12-04  0:00         ` Shmuel (Seymour J.) Metz
1997-12-04  0:00           ` Robert Dewar
1997-12-03  0:00       ` Robert Dewar
1997-12-03  0:00       ` Robert Dewar
1997-12-03  0:00 ` robin
1997-11-28  0:00 tmoran
1997-11-28  0:00 ` Robert Dewar
1997-11-27  0:00 tmoran
1997-11-27  0:00 ` Robert Dewar
1997-11-29  0:00   ` Tarjei T. Jensen
     [not found] <9711221603.AA03295@nile.gnat.com>
1997-11-22  0:00 ` Ken Garlington
1997-11-22  0:00 Matthew Heaney
1997-11-22  0:00 ` Tucker Taft
1997-11-22  0:00   ` Robert Dewar
1997-11-22  0:00     ` Matthew Heaney
1997-11-23  0:00 ` Geert Bosch
1997-11-23  0:00   ` Tom Moran
1997-11-25  0:00     ` John A. Limpert
1997-11-25  0:00       ` Robert Dewar
1997-11-25  0:00       ` Robert Dewar
1997-11-23  0:00   ` Matthew Heaney
1997-11-23  0:00     ` Robert Dewar
1997-11-24  0:00       ` Herman Rubin
1997-11-24  0:00         ` Robert Dewar
1997-11-25  0:00           ` Joe Gwinn
1997-11-25  0:00             ` Robert Dewar
1997-11-25  0:00               ` Joe Gwinn
1997-11-25  0:00                 ` Robert Dewar
1997-11-26  0:00                   ` Joe Gwinn
1997-11-26  0:00                     ` Robert Dewar
1997-12-01  0:00                       ` Joe Gwinn
1997-12-01  0:00                         ` Robert Dewar
1997-12-01  0:00                           ` Joe Gwinn
1997-12-03  0:00                           ` robin
1997-11-25  0:00             ` Matthew Heaney
1997-11-26  0:00             ` William A Whitaker
1997-11-24  0:00     ` Geert Bosch
1997-11-24  0:00 ` Vince Del Vecchio
1997-11-24  0:00 ` Vince Del Vecchio
1997-12-03  0:00 ` robin

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox