From: dkurfis@enet.net (Dan Kurfis)
Subject: Re: Higher precision and generics
Date: Fri, 17 Mar 95 00:36:28 PST
Message-ID: <3kbhv0$jcl@maple.enet.net>
In-Reply-To: 3k6ljm$bhp@nef.ens.fr
In article <3k6ljm$bhp@nef.ens.fr>, sands@clipper.ens.fr says...
>
>|>:I am writing a package of matrix routines (in Ada!) in which some
>|>:intermediate results should be calculated in higher precision than
>|>:normal. The type for normal precision is called Real and is a generic
>|>:parameter of my package:
>|>:generic
>|>:   type Real is digits <>;
>|>:package Matrix_Stuff is
>|>:   ...
>|>:end;
>|>:
>|>:In the body I would like to be able to declare Double to be a higher
>|>:precision floating point type, something like:
>|>:package body Matrix_Stuff is
>|>:   type Double is digits 2*Real'Digits;
>|>:   ...
>|>:end;
>|>:
>|>:Unfortunately, this does not compile: apparently formal types like
>|>:Real are not considered static, so Real'Digits is not considered
>|>:static, and only static expressions are allowed in a "type X is
>|>:digits expression" declaration. (If I write "type Double is digits
>|>:2*Float'Digits" then it compiles fine).
>|>:
>|>:What can I do to get type Double to be of higher precision than Real?
>|>:Can anyone please help? And does anyone know why formal types like
>|>:Real are not considered static?
>|>:
>|>:Thanks a lot,
>|>:
>|>:Duncan Sands.
>|>
>|> Probably the best thing to do in this situation is to declare a type
>|> within the package body which has the highest precision possible:
>|>
>|>    type Max_Precision_Type is digits System.Max_Digits;
>|>
>|> and to use that internally. Even if there were a way of overcoming
>|> the need for the expression after digits to be static, the concept of
>|> your internal one always being double the precision of the parameter
>|> type falls down if an instance of the package is declared using any
>|> type where the number of digits is greater than half the maximum.
>|>
>|> Hope this helps,
>|> --
>|> David Arno
>
>Yes, and as Keith Thompson (The_Other_Keith: kst@thomsoft.com) pointed
>out to me, most Ada compilers only support either 32 or 64 bit
>floating point numbers, so doubling the number of digits would always
>require at least the maximum machine precision anyway.
>Dan Kurfis (dkurfis@enet.net) suggested taking the higher precision type
>as an additional generic parameter, which would increase the portability
>and flexibility of the package.
>
>Many thanks to you all for your help,
>
>Duncan Sands.
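[The additional-generic-parameter idea mentioned above might look
something like the following sketch; the names and the elaboration-time
precision check are my own additions, not from the original posts:

    generic
       type Real is digits <>;
       type Double is digits <>;   --  caller supplies a wider type
    package Matrix_Stuff is
       ...
    end Matrix_Stuff;

    package body Matrix_Stuff is
       ...
    begin
       --  sanity check at instantiation: Double must be at least as
       --  precise as Real, otherwise refuse to elaborate
       if Double'Digits < Real'Digits then
          raise Constraint_Error;
       end if;
    end Matrix_Stuff;

A caller then picks both precisions explicitly, e.g.

    package My_Matrices is new Matrix_Stuff (Real   => Float,
                                             Double => Long_Float);

which keeps the package portable even on compilers that only offer
32- and 64-bit floats. -Ed.]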
Duncan,
I've thought about your question a little more and have read some
of the other replies (my other responses to you didn't get posted for
some reason; hopefully this one will...).
First, both the Motorola and Intel FP Coprocessor chips handle
32, 64, and 80 bit floats. The compiler I use (ICC) always seems
to choose the 80-bit format for intermediate results, although it
usually uses the 64-bit format for the final result.
Next, I think I can now explain what I was trying to say earlier
about the use of the optional range clause. A typical application for me
might involve radar range data. Most of the radars I work with have
range accuracies no better than a few meters. Therefore, I might create
the following type:
type Radar_Range_Meters is digits 1;
This type would be more than adequate for my needs, since 1/10 of a
meter far exceeds the accuracy of my radar. However, my compiler will
still use the largest floating point types available because it has no
idea what the MAXIMUM range is. If I qualify the type declaration:
type Radar_Range_Meters is digits 1 range 0.0..37000.0;
I now have a type which covers a maximum range of 20 nautical miles.
This type can easily fit inside the 32-bit format. Even more precise
intermediate results (digits 2 or 3) will also fit in 32 bits. When
moving objects back and forth between memory and the coprocessor, I cut
the number of memory transactions in half (the coprocessor itself may not
realize any performance improvement, depending on its internal
architecture). This may not be important to non-realtime systems, unless
a large number of floating point objects are to be resident in memory.
Also, in order for this concept to REALLY work, the compiler may have to
treat such quantities as if they were scaled and (if not symmetrical
about zero) biased. This treatment will create more overhead. Maybe
today's compilers aren't mature enough to do this; I really don't know.
Further, I have to believe that somebody out there has done tradeoff
studies between the cost of always using (unnecessarily) large floating
point formats versus the cost of implementing scaling and biasing.
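[Whether the constrained type really fits in 32 bits is easy to check
on any particular compiler with the 'Size attribute. A sketch, with the
program name my own; the sizes printed are implementation-defined, so
no particular output is guaranteed:

    with Ada.Text_IO;
    procedure Check_Radar_Size is
       --  unconstrained: compiler may pick its widest base type
       type Unbounded_Meters is digits 1;
       --  constrained: range lets the compiler choose a narrow format
       type Radar_Range_Meters is digits 1 range 0.0 .. 37000.0;
    begin
       Ada.Text_IO.Put_Line
         ("Unbounded'Size =" & Integer'Image (Unbounded_Meters'Size));
       Ada.Text_IO.Put_Line
         ("Bounded'Size   =" & Integer'Image (Radar_Range_Meters'Size));
    end Check_Radar_Size;

-Ed.]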
If you (or anybody out there) are planning to attend the Software
Technology Conference in Salt Lake City next month, let me know. I'll be
there all week and would enjoy discussing issues like this one. As I've
said before, numerics are one of my weakest areas, so I would welcome any
comments/discussion.
Finally, I think it's important for all of us to remember that
even though the current state of the art in compilers and target hardware
may not support concepts such as the one just presented (or maybe they do
and I just don't know it), that's no reason for us to limit our use of
the language. I wouldn't mind seeing the hardware play "catch-up" with
us for a change.
Dan Kurfis