* Matrix Multiplication
@ 1999-12-14 0:00 William Dale
1999-12-14 0:00 ` Gautier
` (5 more replies)
0 siblings, 6 replies; 41+ messages in thread
From: William Dale @ 1999-12-14 0:00 UTC (permalink / raw)
OK,
I was challenged by one of my co-workers - a control and guidance type -
not a real software engineer.
His claim : "Fortran is faster doing floating point multiplication than
Ada"
I could not get any other specifications such as hardware, particular
compiler, version, vendor, special math libraries or any other
qualifications. He just claims the above holds in all cases.
So could I get some help getting times for a Matrix inversion on a 50X50
floating point matrix in both languages? Has anyone already done these
types of tests? Any suggestions on special libraries that he may not know
about to do such things in Ada? Obviously he has the Fortran side nailed.
I know it is silly but this is the kind of FUD that gets thrown around
all the time. Either language could win - it depends on many of the
above issues.
Thanks!
Bill Dale
mailto:william.dale.jr@lmco.com
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: Matrix Multiplication
1999-12-14 0:00 Matrix Multiplication William Dale
@ 1999-12-14 0:00 ` Gautier
[not found] ` <5l1f5s4kck891a2s6o8bhvkirm4q79hm6c@4ax.com>
1999-12-15 0:00 ` Gautier
1999-12-14 0:00 ` David C. Hoos, Sr.
` (4 subsequent siblings)
5 siblings, 2 replies; 41+ messages in thread
From: Gautier @ 1999-12-14 0:00 UTC (permalink / raw)
> OK,
> I was challenged by one of my co-workers - a control and guidance type -
> not a real software engineer.
>
> His claim : "Fortran is faster doing floating point multiplication than
> Ada"
>
> I could not get any other specifications such as hardware, particular
> compiler, version, vendor, special math libraries or any other
> qualifications. He just claims the above holds in all cases.
>
> So could I get some help getting times for a Matrix inversion on a 50X50
> floating point matrix in both languages? Has anyone already done these
> types of tests? Any suggestions on special libraries that he may not know
> about to do such things in Ada? Obviously he has the Fortran side nailed.
I have done a comparison, for sparse matrix computations, where
matrix multiplications do not depend on library code, so you really
compare Ada and Fortran. Compiled with DEC Ada & DEC Fortran, both
give almost the same timing (within a few percent) - even though generic
code is used in Ada to match single and double precision, and
the Ada code is not optimal.
This is an equivalent code comparison.
For a large-scale numerics project, procedure encapsulation,
subtyping, and exceptions in Ada save a lot of parameter passing,
error codes, and redimensioning everything everywhere (with errors...)
compared to Fortran - and this is beside the floating-point part
strictly speaking. This has yet to be proven, but my conjecture
is that one can produce Ada code much friendlier to the processor cache,
hence faster than Fortran. Well-placed "renames" or "inline" pragmas
produce astonishing effects, too.
> I know it is silly but this is the kind of FUD that gets thrown around
> all the time. Either language could win - it depends on many of the
> above issues.
Claims of this sort are never verified, and the rule is to compare apples
with potatoes... That is the problem with computing (and humans...). Maybe
things were more `visual' in the steam / electricity choices - although...
NB: for a fair comparison don't forget to suppress all the standard Ada
checks at compile time...
--
Gautier
_____\\________________\_______\
http://members.xoom.com/gdemont/
[parent not found: <5l1f5s4kck891a2s6o8bhvkirm4q79hm6c@4ax.com>]
* Re: Matrix Multiplication
[not found] ` <5l1f5s4kck891a2s6o8bhvkirm4q79hm6c@4ax.com>
@ 1999-12-15 0:00 ` Gautier
1999-12-15 0:00 ` Marin D. Condic
0 siblings, 1 reply; 41+ messages in thread
From: Gautier @ 1999-12-15 0:00 UTC (permalink / raw)
> Intrigued about the 'renames' bit. I thought the renames was just a
> compiler overhead and had no run-time effect at all.
IIRC the renames `aliases' its target in its present state.
E.g.
declare
x: thing renames complicated(i,j).k;
begin
-- i,j could change, doesn't affect x
The ARM gurus will comment better...
Anyway it is a way to obtain a direct pointer to `complicated(i,j).k',
not just a syntactic alias. In my tests with sparse matrices,
`renames' in the matrix-vector multiplication simply *halves* the computation
time of the whole BiCGStab algorithm (compiled with Compaq (DEC) Ada)!
The `renames' are
procedure Mult( A: in out CRS_matrix; u: vector; w: in out vector ) is
  deb, fin, jaj: index;
  ui, wi, wi_sum: real;
  rows: constant index:= A.rows;
  val: vector renames A.val;
  col_ind: index_array renames A.col_ind;
  row_ptr: index_array renames A.row_ptr;
begin
  if not A.symmetric then
    -- *** the matrix is stored in non-symmetric form
    for i in 1..rows loop
      deb := row_ptr(i);
      fin := row_ptr(i + 1) - 1;
      wi_sum := 0.0;
      for j in deb .. fin loop
        wi_sum := wi_sum + val(j) * u(col_ind(j));
      end loop;
      w(i) := wi_sum;
    end loop;
...
The GNAT optimizer (-O2) is smarter about memorizing these addresses,
but sometimes one has to help it too with a small `renames'
(tests with textured 3D)...
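[Editor's note: for readers who don't know the Compressed Row Storage layout, a rough C analogue of the non-symmetric branch of `Mult' above may help. This is an illustrative sketch, not code from the thread; the field names mirror the Ada but are otherwise my own.]

```c
#include <stddef.h>

/* Compressed Row Storage: val holds the nonzeros row by row,
   col_ind gives the column of each nonzero, and row_ptr[i] is the
   index in val where row i starts (row_ptr[rows] is one past the end). */
typedef struct {
    size_t rows;
    const double *val;
    const size_t *col_ind;
    const size_t *row_ptr;
} crs_matrix;

/* w := A * u for a non-symmetric CRS matrix. */
void crs_mult(const crs_matrix *A, const double *u, double *w)
{
    for (size_t i = 0; i < A->rows; ++i) {
        double wi_sum = 0.0;  /* accumulate row i locally, like wi_sum above */
        for (size_t j = A->row_ptr[i]; j < A->row_ptr[i + 1]; ++j)
            wi_sum += A->val[j] * u[A->col_ind[j]];
        w[i] = wi_sum;
    }
}
```

The local accumulator plays the same role as the Ada `wi_sum': it keeps the inner loop free of stores to `w', just as the `renames' keeps it free of repeated record-component address computations.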
--
Gautier
_____\\________________\_______\_________
http://members.xoom.com/gdemont/gsoft.htm
* Re: Matrix Multiplication
1999-12-15 0:00 ` Gautier
@ 1999-12-15 0:00 ` Marin D. Condic
1999-12-15 0:00 ` Ted Dennison
1999-12-15 0:00 ` Gautier
0 siblings, 2 replies; 41+ messages in thread
From: Marin D. Condic @ 1999-12-15 0:00 UTC (permalink / raw)
Gautier wrote:
>
> > Intrigued about the 'renames' bit. I thought the renames was just a
> > compiler overhead and had no run-time effect at all.
>
> IIRC the renames `aliases' its target in its present state.
> E.g.
> declare
> x: thing renames complicated(i,j).k;
> begin
> -- i,j could change, doesn't affect x
>
> The ARM gurus will comment better...
It would seem that if the compiler were smart, it would compute I and J
once and find the position once, then reuse it throughout the loop. (I'm
presuming this would be done in nested "for" loops.) I'm not an expert
in compiler theory, but I recall seeing output from more than one
compiler/language doing this sort of thing. (loop invariant code? code
hoisting? some terminology I've long since archived to tape.) So while
the "renames" may help the compiler along in this respect, I think that
it could/should get there without the help.
Of course, I've seen lots of things where a compiler "ought" to do
something, but won't unless you trick it into doing it with syntactic
deceptions. That's more a comment on the quality of the compiler. In this
case, I'd think that Fortran could produce the optimization without a
"renames" so Ada ought to be able to do the same.
The original question in this thread had to do with "Ada can't do
floating point math as fast as Fortran" - which is slightly different
from Matrix Math in Ada/Fortran. Maybe someone can correct me if I'm
wrong, (or just plain call me an ignoramus! :-) but I don't see much
syntactic or semantic difference between Ada arithmetic and Fortran
arithmetic. For that matter, there isn't much apparent difference in
array processing. So once you disable the runtime checks, (O.K., maybe
that's part of the semantics) the differences don't amount to a warm
bucket of spit. Any performance difference should be attributable to the
quality of the compilers in question. (This is probably the most often
misunderstood thing about language efficiency. Most of the "unwashed
masses" seem to be incapable of distinguishing between the quality of a
language and the quality of a specific compiler. So the compiler writers
have a special responsibility to put out good quality products lest they
besmirch the language in the minds of many! :-)
MDC
--
=============================================================
Marin David Condic - Quadrus Corporation - 1.800.555.3393
1015-116 Atlantic Boulevard, Atlantic Beach, FL 32233
http://www.quadruscorp.com/
Visit my web site at: http://www.mcondic.com/
"Capitalism without failure is like religion without sin."
-- Allan Meltzer, Economist
=============================================================
* Re: Matrix Multiplication
1999-12-15 0:00 ` Marin D. Condic
@ 1999-12-15 0:00 ` Ted Dennison
1999-12-15 0:00 ` Gautier
1999-12-15 0:00 ` Gautier
1 sibling, 1 reply; 41+ messages in thread
From: Ted Dennison @ 1999-12-15 0:00 UTC (permalink / raw)
In article <3857D640.C1991F0C@quadruscorp.com>,
"Marin D. Condic" <mcondic-nospam@quadruscorp.com> wrote:
> language and the quality of a specific compiler. So the compiler
> writers have a special responsibility to put out good quality products
> lest they besmirch the language in the minds of many! :-)
Hmmm. I think that's my cue to point out that ObjectAda has no loop
optimization options whatsoever. :-)
(nudge, nudge)
--
T.E.D.
Sent via Deja.com http://www.deja.com/
Before you buy.
* Re: Matrix Multiplication
1999-12-15 0:00 ` Ted Dennison
@ 1999-12-15 0:00 ` Gautier
1999-12-15 0:00 ` Tucker Taft
` (2 more replies)
0 siblings, 3 replies; 41+ messages in thread
From: Gautier @ 1999-12-15 0:00 UTC (permalink / raw)
> Hmmm. I think that's my cue to point out that ObjectAda has no loop
> optimization options whatsoever. :-)
Is there optimization at all in OA?
I didn't find the switches for that...
Maybe a hidden panel in the interface - a bit like...
http://lsewww.epfl.ch/wiesmann/jokes/errors/word.gif
(just a bit)
--
Gautier
_____\\________________\_______\
http://members.xoom.com/gdemont/
* Re: Matrix Multiplication
1999-12-15 0:00 ` Gautier
@ 1999-12-15 0:00 ` Tucker Taft
1999-12-16 0:00 ` Ted Dennison
1999-12-16 0:00 ` Ted Dennison
2 siblings, 0 replies; 41+ messages in thread
From: Tucker Taft @ 1999-12-15 0:00 UTC (permalink / raw)
Gautier wrote:
>
> > Hmmm. I think that's my cue to point out that ObjectAda has no loop
> > optimization options whatsoever. :-)
>
> Is there optimization at all in OA?
I certainly presume so, since our front end generates
information specifically for the ObjectAda optimizer.
Try something like "-O3" or "-opt". Usually if you
run the compiler with no arguments it will dump out
a list of available switches.
One possibility is that the optimizer is only in some of their
targets. Historically, the x86 back end has been pretty different
from the other "risc-ish" backends. I have some memory that their
x86 backend did not do as many register optimizations as the
risc-ish backends (which is not terribly surprising given the
rather un-general register set on the x86 -- note: not trying to
start a CISC vs. RISC war here ;-).
> I didn't find the switches for that...
> Maybe a hidden panel in the interface - a bit like...
> http://lsewww.epfl.ch/wiesmann/jokes/errors/word.gif
> (just a bit)
>
> --
> Gautier
--
-Tucker Taft stt@averstar.com http://www.averstar.com/~stt/
Technical Director, Distributed IT Solutions (www.averstar.com/tools)
AverStar (formerly Intermetrics, Inc.) Burlington, MA USA
* Re: Matrix Multiplication
1999-12-15 0:00 ` Gautier
1999-12-15 0:00 ` Tucker Taft
@ 1999-12-16 0:00 ` Ted Dennison
1999-12-16 0:00 ` Ted Dennison
2 siblings, 0 replies; 41+ messages in thread
From: Ted Dennison @ 1999-12-16 0:00 UTC (permalink / raw)
In article <38581366.9078A2E8@maths.unine.ch>,
Gautier <Gautier.deMontmollin@maths.unine.ch> wrote:
> Is there optimization at all in OA?
>
> I didn't find the switches for that...
In the Win32 OA (as of 6 months ago when last I checked) there are no
optimization options available. That does not mean that no optimization
is going on. But it does appear to mean that not much optimization is
going on.
I suspect the situation is quite different with their embedded targets.
--
T.E.D.
Sent via Deja.com http://www.deja.com/
Before you buy.
* Re: Matrix Multiplication
1999-12-15 0:00 ` Gautier
1999-12-15 0:00 ` Tucker Taft
1999-12-16 0:00 ` Ted Dennison
@ 1999-12-16 0:00 ` Ted Dennison
2 siblings, 0 replies; 41+ messages in thread
From: Ted Dennison @ 1999-12-16 0:00 UTC (permalink / raw)
In article <38581366.9078A2E8@maths.unine.ch>,
Gautier <Gautier.deMontmollin@maths.unine.ch> wrote:
> > Hmmm. I think that's my cue to point out that ObjectAda has no loop
> > optimization options whatsoever. :-)
>
> Is there optimization at all in OA?
>
> I didn't find the switches for that...
> Maybe a hidden panel in the interface - a bit like...
> http://lsewww.epfl.ch/wiesmann/jokes/errors/word.gif
> (just a bit)
>
> --
> Gautier
>
> _____\\________________\_______\
> http://members.xoom.com/gdemont/
>
--
T.E.D.
Sent via Deja.com http://www.deja.com/
Before you buy.
* Re: Matrix Multiplication
1999-12-15 0:00 ` Marin D. Condic
1999-12-15 0:00 ` Ted Dennison
@ 1999-12-15 0:00 ` Gautier
1999-12-16 0:00 ` Marin D. Condic
1 sibling, 1 reply; 41+ messages in thread
From: Gautier @ 1999-12-15 0:00 UTC (permalink / raw)
> It would seem that if the compiler were smart, it would compute I and J
> once and find the position once, then reuse it throughout the loop.
GNAT does reuse addresses. A nice proof in c.l.a (by Samuel Tardieu):
<< [a(b(c,d+e(f,g)).h(i,j)) := a(b(c,d+e(f,g)).h(i,j)) + 1;]
Where did you get the impression that the optimizer would miss this?
For example, using GNAT, the following expression generates: (ix86 code)
t__pXb: movl t__jXb,%edx |
decl %edx |
movl t__iXb,%eax |
decl %eax |
leal (%eax,%eax,4),%eax |
sall $3,%eax |
leal (%eax,%edx,4),%edx |
movl t__gXb,%ecx |
decl %ecx | Address computation
movl t__fXb,%eax |
decl %eax |
leal (%eax,%eax,4),%eax |
sall $3,%eax |
movl t__eXb(%eax,%ecx,4),%eax |
addl t__dXb,%eax |
imull $400,%eax,%eax |
leal -400(%edx,%eax),%eax |
imull $4000,t__cXb,%edx |
movl t__bXb-4000(%eax,%edx),%eax |
decl %eax |
incl t__aXb(,%eax,4) <--- Increment done here!
ret
The code used to generate this was: (-O3 -fomit-frame-pointer -gnatp)
package T is
   pragma Elaborate_Body;
end T;

package body T is
   pragma Warnings (Off); -- Uninitialized variables

   type Two_Ints is array (Integer range <>, Integer range <>) of Integer;

   type Rec is record
      H : Two_Ints (1 .. 10, 1 .. 10);
   end record;

   type Two_Recs is array (Integer range <>, Integer range <>) of Rec;

   A : array (1 .. 10) of Integer;
   B : Two_Recs (1 .. 10, 1 .. 10);
   C : Integer;
   D : Integer;
   E : Two_Ints (1 .. 10, 1 .. 10);
   F : Integer;
   G : Integer;
   I : Integer;
   J : Integer;

   procedure P is
   begin
      a(b(c,d+e(f,g)).h(i,j)) := a(b(c,d+e(f,g)).h(i,j)) + 1;
   end P;
end T;
>>
But there are *rare* cases where such "invariants" are not provable as such,
and a renames helps, even with GNAT. On the other hand, some Ada 83 compilers
do not seem to optimize these things (Compaq Ada, Alsys).
--
Gautier
_____\\________________\_______\_________
http://members.xoom.com/gdemont/gsoft.htm
* Re: Matrix Multiplication
1999-12-15 0:00 ` Gautier
@ 1999-12-16 0:00 ` Marin D. Condic
1999-12-27 0:00 ` Jeffrey L Straszheim
0 siblings, 1 reply; 41+ messages in thread
From: Marin D. Condic @ 1999-12-16 0:00 UTC (permalink / raw)
Gautier wrote:
> Where did you get the impression that the optimizer would miss this?
From you. :-) The claim that the "renames" would help, sort of presumes
that the optimizer might miss it.
>
> But there are *rare* cases where such "invariants" are not provable as such,
> and a renames helps, even with GNAT. On the other hand, some Ada 83 compilers
> do not seem to optimize these things (Compaq Ada, Alsys).
>
I'll take your word for that. Still, it looks like in *most*
cases Ada and Fortran ought to be able to generate equally efficient
code for the garden-variety floating point math and array reference
operations. The rest is arguing about the relative quality of different
compilers. That is the grand misunderstanding of so many people who say
"Ada makes slower code than Fortran for XYZ..."
MDC
--
=============================================================
Marin David Condic - Quadrus Corporation - 1.800.555.3393
1015-116 Atlantic Boulevard, Atlantic Beach, FL 32233
http://www.quadruscorp.com/
Visit my web site at: http://www.mcondic.com/
"Capitalism without failure is like religion without sin."
-- Allan Meltzer, Economist
=============================================================
* Re: Matrix Multiplication
1999-12-16 0:00 ` Marin D. Condic
@ 1999-12-27 0:00 ` Jeffrey L Straszheim
0 siblings, 0 replies; 41+ messages in thread
From: Jeffrey L Straszheim @ 1999-12-27 0:00 UTC (permalink / raw)
Marin D. Condic wrote:
> I'll take your word for that. Still, it looks like in *most*
> cases Ada and Fortran ought to be able to generate equally efficient
> code for the garden-variety floating point math and array reference
> operations. The rest is arguing about the relative quality of different
> compilers. That is the grand misunderstanding of so many people who say
> "Ada makes slower code than Fortran for XYZ..."
I probably don't know what I'm talking about here, but one big
advantage Fortran seems to have over languages in the C family
is its lack of aliasing. That is, in Fortran the compiler can
usually assume that an array element will only be accessed from
within that array, and can aggressively optimize.
For instance in C:
void some_function (int x[], int length, int *something)
{
    int k;
    for (k = 1; k < length; ++k) {
        *something = x[k] + x[k-1];
    }
}
For this function, the compiler cannot:
1. Assume that 'length' is constant -- which might have allowed
some optimization.
2. Remember, in a register, the current iteration's 'x[k]', which
could be used as the next iteration's 'x[k-1]'.
Fortunately, 'k' is local. Had it been global (yuck) then the
compiler couldn't even make any assumptions about 'k'.
Given Ada's more restrictive use of pointers and its stricter typing, it
should do better than C, but perhaps not as well as Fortran for some such
loops.
All this being said, I understand that newer Fortrans allow some limited
types of aliasing (or maybe they don't). If so, perhaps even this
advantage is lost.
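[Editor's note: the new C99 standard addresses exactly this situation with the `restrict' qualifier. A sketch of the same loop with the no-aliasing promise made explicit - my illustration, not part of the original post:]

```c
#include <stddef.h>

/* With 'restrict', the programmer promises that *something is not an
   alias of any x[k].  The compiler may then keep x[k] in a register
   across iterations and reuse it as the next iteration's x[k-1],
   and sink the store to *something out of the loop entirely. */
void some_function_restrict(const int *restrict x, int length,
                            int *restrict something)
{
    int k;
    for (k = 1; k < length; ++k) {
        *something = x[k] + x[k - 1];
    }
}
```

This recovers, by annotation, the guarantee Fortran gets by rule: dummy arguments are assumed not to alias.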
-- Jeffrey Straszheim
-- Systems Engineer, Programmer
-- http://www.shadow.net/~stimuli
-- stimuli AT shadow DOT net
* Re: Matrix Multiplication
1999-12-14 0:00 ` Gautier
[not found] ` <5l1f5s4kck891a2s6o8bhvkirm4q79hm6c@4ax.com>
@ 1999-12-15 0:00 ` Gautier
1 sibling, 0 replies; 41+ messages in thread
From: Gautier @ 1999-12-15 0:00 UTC (permalink / raw)
> NB: for a fair comparison don't forget to suppress all the standard Ada
> checkings at compile-time...
To avoid misunderstanding, I mean by that: suppress, when compiling,
all the standard Ada run-time checks. ;-) G.
* Re: Matrix Multiplication
1999-12-14 0:00 Matrix Multiplication William Dale
1999-12-14 0:00 ` Gautier
@ 1999-12-14 0:00 ` David C. Hoos, Sr.
1999-12-15 0:00 ` Greg Lindahl
` (3 subsequent siblings)
5 siblings, 0 replies; 41+ messages in thread
From: David C. Hoos, Sr. @ 1999-12-14 0:00 UTC (permalink / raw)
William Dale <william.dale.jr@lmco.com> wrote in message
news:385699B5.59C14D03@lmco.com...
> OK,
> I was challenged by one of my co-workers - a control and guidance type -
> not a real software engineer.
>
> His claim : "Fortran is faster doing floating point multiplication than
> Ada"
>
> I could not get any other specifications such as hardware, particular
> compiler, version, vendor, special math libraries or any other
> qualifications. He just claims the above holds in all cases.
>
> So could I get some help getting times for a Matrix inversion on a 50X50
> floating point matrix in both languages? Has anyone already done these
> types of tests? Any suggestions on special libraries that he may not know
> about to do such things in Ada? Obviously he has the Fortran side nailed.
>
> I know it is silly but this is the kind of FUD that gets thrown around
> all the time. Either language could win - it depends on many of the
> above issues.
Well... any meaningful comparison could only be drawn if both subprograms
were implementing the same algorithm.
* Re: Matrix Multiplication
1999-12-14 0:00 Matrix Multiplication William Dale
1999-12-14 0:00 ` Gautier
1999-12-14 0:00 ` David C. Hoos, Sr.
@ 1999-12-15 0:00 ` Greg Lindahl
1999-12-15 0:00 ` Preben Randhol
1999-12-15 0:00 ` E. Robert Tisdale
` (2 subsequent siblings)
5 siblings, 1 reply; 41+ messages in thread
From: Greg Lindahl @ 1999-12-15 0:00 UTC (permalink / raw)
William Dale <william.dale.jr@lmco.com> writes:
> So could I get some help getting times for a Matrix inversion on a 50X50
> floating point matrix in both languages.
This is bad for two reasons. The first is that people do operations like
that using subroutine libraries written in assembly, no matter what
language you are using. The second is that matrix inversion is usually
a bad idea, and you should use LU decomposition instead.
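[Editor's note: to make the LU suggestion concrete, here is a minimal Doolittle factorization without pivoting, in C. This is an illustrative sketch only; real code would pivot for stability or call a tested library such as LAPACK.]

```c
#include <stddef.h>

/* In-place Doolittle LU factorization without pivoting: afterwards
   the strict lower triangle of a holds L (unit diagonal implied) and
   the upper triangle holds U, with a stored row-major as an n-by-n
   matrix.  Solving A*x = b then costs two cheap triangular solves,
   instead of forming the numerically worse explicit inverse. */
void lu_decompose(double *a, size_t n)
{
    for (size_t k = 0; k < n; ++k) {
        for (size_t i = k + 1; i < n; ++i) {
            a[i*n + k] /= a[k*n + k];          /* multiplier L(i,k) */
            for (size_t j = k + 1; j < n; ++j)
                a[i*n + j] -= a[i*n + k] * a[k*n + j];
        }
    }
}
```

For the 50x50 benchmark in question, timing this factorization plus the triangular solves is the fair comparison, in either language.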
-- g
* Re: Matrix Multiplication
1999-12-14 0:00 Matrix Multiplication William Dale
` (2 preceding siblings ...)
1999-12-15 0:00 ` Greg Lindahl
@ 1999-12-15 0:00 ` E. Robert Tisdale
[not found] ` <3856FD3F.8291A71C@ucar.edu>
1999-12-15 0:00 ` Ted Dennison
[not found] ` <01bf4708$99ef98f0$022a6282@dieppe>
5 siblings, 1 reply; 41+ messages in thread
From: E. Robert Tisdale @ 1999-12-15 0:00 UTC (permalink / raw)
William Dale wrote:
> I was challenged by one of my co-workers -
> a control and guidance type - not a real software engineer.
>
> His claim:
> "Fortran is faster doing floating point multiplication than Ada"
>
> I could not get any other specifications such as hardware,
> particular compiler, version, vendor, special math libraries
> or any other qualifications. He just claims the above holds in all cases.
>
> So could I get some help getting times
> for a Matrix inversion on a 50X50 floating point matrix in both languages.
> Anyone already done these types of tests?
> Any suggestions on special libraries that he may not know about
> to do such things in Ada. Obviously he has the Fortran side nailed.
>
> I know it is silly but this is the kind of FUD
> that gets thrown around all the time.
> Either language could win - it depends on many of the above issues.
Performance, in general, has nothing to do with the programming language.
There may be considerable differences between optimizing compilers.
Fortran compilers are typically better at optimizing numerical codes
than other compilers because that is what Fortran programmers do.
Just get a copy of your co-worker's Fortran source code
and convert it into an Ada program,
then compile his code with the GNU Fortran compiler g77
and compile your code with the GNU Ada compiler gnat.
Both compilers should emit exactly the same code
unless you do something wrong.
Hope this helps,
E. Robert Tisdale <edwin@netwood.net>
* Re: Matrix Multiplication
1999-12-14 0:00 Matrix Multiplication William Dale
` (3 preceding siblings ...)
1999-12-15 0:00 ` E. Robert Tisdale
@ 1999-12-15 0:00 ` Ted Dennison
1999-12-15 0:00 ` William B. Clodius
[not found] ` <01bf4708$99ef98f0$022a6282@dieppe>
5 siblings, 1 reply; 41+ messages in thread
From: Ted Dennison @ 1999-12-15 0:00 UTC (permalink / raw)
In article <385699B5.59C14D03@lmco.com>,
fud@fud.com wrote:
> OK,
> I was challenged by one of my co-workers - a control and guidance type
> - not a real software engineer.
>
> His claim : "Fortran is faster doing floating point multiplication
> than Ada"
I hope you meant "matrix multiplication", as in the subject. This
statement makes absolutely no sense.
> I could not get any other specifications such as hardware, particular
> compiler, version, vendor, special math libraries or any other
> qualifications. He just claims the above holds in all cases.
So he admits that he basically doesn't know what he's talking about. Nice.
A couple of years ago I took a graduate course in compiler optimizations
for parallel architectures. As you can imagine, the number crunchers
that use high-end parallel machines want the ability to use their
parallel capabilities to speed up their code, without having to rewrite it
all in another language. The language in question is almost always
Fortran (from what I can tell, for historical reasons).
Anyway, this need has driven a lot of research into optimization of
Fortran code on parallel architectures. It has also created a new
variant of the language : HPF (High-Performance Fortran). HPF has 3 new
looping constructs that correspond to parallel operations. The big issue
for optimizing these kinds of things is figuring out the lifetimes of
objects and values. The teacher of course was a big Fortran fan, so I
didn't mention Ada (I wanted a good grade. I'm not that stupid). But all
during the class, I'd keep raising my hand and saying "wouldn't it be
easier to perform this optimization if you had {insert Ada rule here}".
Generally, she'd agree.
In fact, I think it would be quite possible to create an Ada compiler
that could optimize matrix operations *better* than the best Fortran
compiler, assuming you also devise loop pragmas to match the HPF
parallel looping constructs. You certainly could do it better with the
same amount of effort.
But "effort" is the key. How much effort a compiler vendor puts into
optimization doesn't directly have anything to do with the language the
compiler recognizes. It has a lot to do with the compiler's maturity and
its target market. However, I wouldn't be shocked to discover that your
average Fortran developer cares immensely more about the efficiency of
looping matrix operations than your average Ada developer.
In any case, claiming "compiled language X is faster than compiled
language Y" is just flat-out wrong.
--
T.E.D.
Sent via Deja.com http://www.deja.com/
Before you buy.
* Re: Matrix Multiplication
1999-12-15 0:00 ` Ted Dennison
@ 1999-12-15 0:00 ` William B. Clodius
1999-12-15 0:00 ` Ted Dennison
0 siblings, 1 reply; 41+ messages in thread
From: William B. Clodius @ 1999-12-15 0:00 UTC (permalink / raw)
There are a number of aspects of a language design that influence the
performance of language implementations. By far the biggest influence is
the amount of dynamism the language allows: if the types associated with
the expressions of a statement can change during multiple passes over a
statement, then I believe that full optimization is equivalent to the
halting problem. At least some scripting languages suffer from this
problem to an extreme extent, while dynamic dispatch in statically
compiled object orientation is a solution that attempts to minimize, but
not eliminate, this problem.
However, this problem does not generally pertain to the Fortran/Ada
numerics question, where the types are determined statically and
generally map to the equivalent hardware operations. In this case a
common performance difference between languages is the extent to which
the language can rely on local analyses for its optimizations, which in
turn mostly depends on the aliasing properties of entities in the
language, which in turn is most strongly influenced by the properties of
the arguments to procedures. Here Ada has stricter rules
than some other languages, e.g., C/C++, but looser rules than
Fortran, so that Ada is in principle harder to optimize than Fortran.
However, the difference between Fortran and Ada in this regard is small
enough that it might be washed out in the noise of variations in
efficiency of compiler implementations.
Note that aliasing properties mostly affect the efficiency of "library"
routines. A "simple" coding of matmul in a "main" procedure is often
amenable to local analysis in any statically compiled language.
* Re: Matrix Multiplication
1999-12-15 0:00 ` William B. Clodius
@ 1999-12-15 0:00 ` Ted Dennison
1999-12-15 0:00 ` William B. Clodius
0 siblings, 1 reply; 41+ messages in thread
From: Ted Dennison @ 1999-12-15 0:00 UTC (permalink / raw)
In article <3857E220.26AE90BB@lanl.gov>,
"William B. Clodius" <wclodius@lanl.gov> wrote:
> numerics question where the types are determined statically and
> generally map to the equivalent hardware operations. In this case a
> common performance difference between languages is the extent to which
> the language can rely on local analyses for its optimizations, which
> in turn mostly depends on the aliasing properties of entities in the
> language, which in turn is most strongly influenced by the properties
> of the arguments to procedures. Here Ada has stricter
> rules than some other languages, e.g., C/C++, but looser
> rules than Fortran, so that Ada is in principle harder to optimize
> than Fortran. However, the difference between Fortran and Ada in this
That is a fairly accurate principle. So why, you may ask, did I end up
finding so many "Ada could do better" situations? Well, there are two issues
you missed in the above:
1) In general, the more information a compiler has about the possible
values of objects, the better it can optimize. That gives strongly typed
languages an advantage. You may turn off the runtime checking, but the
compiler can still assume the values will never exceed that range for
code generation purposes.
For instance, a big issue is how much information about the possible
values of the iteration variables the compiler has access to. The
ability of an Ada developer to constrain his index types using subtypes
potentially gives the compiler a tremendous boost here.
2) You ignore non-local optimization issues. That's fair enough. But
in their quest for more speed, researchers are delving into non-local
optimizations. Ada's strong typing drastically reduces the possible access
paths the optimizer has to worry about for a location or value. Also, a
lot of the language was written to allow compilers leeway in optimizing.
For example, users aren't allowed to make assumptions about the
parameter passing method used for subroutines except in certain
situations. Generally the more information and freedom a compiler has at
its disposal, the easier it will be for it to optimize.
> regard is small enough that it might be washed out in the noise of
> variations in efficiency of compiler implementations.
That's the big point, and we are in complete agreement here.
--
T.E.D.
Sent via Deja.com http://www.deja.com/
Before you buy.
* Re: Matrix Multiplication
1999-12-15 0:00 ` Ted Dennison
@ 1999-12-15 0:00 ` William B. Clodius
1999-12-16 0:00 ` Robert A Duff
0 siblings, 1 reply; 41+ messages in thread
From: William B. Clodius @ 1999-12-15 0:00 UTC (permalink / raw)
Ted Dennison wrote:
>
> In article <3857E220.26AE90BB@lanl.gov>,
> "William B. Clodius" <wclodius@lanl.gov> wrote:
> > numerics question where the types are determined statically and
> > generally map to the equivalent hardware operations. In this case a
> > common performance difference between languages is the extent to which
> > the language can rely on local analyses for its optimizations, which
> > in turn mostly depends on the aliasing properties of entities in the
> > language, which in turn is most strongly influenced by the properties
> > of the arguments to procedures. Here Ada has stricter
> > rules than some other languages, e.g., C/C++, but looser
> > rules than Fortran, so that Ada is in principle harder to optimize
> > than Fortran. However, the difference between Fortran and Ada in this
>
> That is a fairly accurate principle. So why, you may ask, did I end up
> finding so many "Ada could do better" situations? Well, there are two
> issues you missed in the above:
> 1) In general, the more information a compiler has about the possible
> values of objects, the better it can optimize. That gives strongly typed
> languages an advantage. You may turn off the runtime checking, but the
> compiler can still assume the values will never exceed that range for
> code generation purposes.
>
> For instance, a big issue is how much information about the possible
> values of the iteration variables the compiler has access to. The
> ability of an Ada developer to constrain his index types using subtypes
> potentially gives the compiler a tremendous boost here.
Yes it is a win, but I don't know how big a win it is compared to the
requirement in both Ada and Fortran that indexing outside the visible
indices of the array is illegal. I would expect that in the vast
majority of cases, both assumptions are equivalent. Of course what I
expect might not be what I would find.
>
> 2) You ignore non-local optimization issues. That's fair enough. But
> in their quest for more speed, researchers are delving into non-local
> optimizations. Ada's strong typing drastically reduces the possible access
> paths the optimizer has to worry about for a location or value. Also, a
> lot of the language was written to allow compilers leeway in optimizing.
> For example, users aren't allowed to make assumptions about the
> parameter passing method used for subroutines except in certain
> situations. Generally the more information and freedom a compiler has at
> its disposal, the easier it will be for it to optimize.
I did not discuss non-local optimization issues, but one of your
arguments in favor of such optimizations for Ada, "users aren't allowed
to make assumptions about the parameter passing method used for
subroutines except in certain situations" is even more an argument in
favor of Fortran, which allows even fewer assumptions. That is precisely
what I meant by Ada having even stronger aliasing rules than Fortran for
arguments to its procedures. If I remember correctly, Ada requires
either copy-in/copy-out or call by reference semantics by the compiler
(without specifying the choice made) in most contexts, while Fortran
requires that code allow either copy-in/copy-out or call by reference
implementation, but does not require that the actual implementation be
consistent with either semantics, i.e. at arbitrary points in the
procedure arguments can be copied-in/copied-out without synchronizing
with other copy-ins/copy-outs. The other argument about strong typing is
not a strong argument against Fortran, as by almost any definition
Fortran is strongly typed (albeit some aspects of its typing are error
prone, e.g., implicit typing).
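For illustration, here is a minimal hypothetical sketch (names made up) of the kind of program whose result depends on whether the compiler picks by-reference or copy-in/copy-out for a composite parameter — exactly the freedom being discussed:

```ada
procedure Aliasing_Demo is
   type Pair is record
      A, B : Integer;
   end record;

   Global : Pair := (A => 0, B => 0);

   procedure Bump (X : in out Pair) is
   begin
      Global.A := 1;    --  write through the global name ...
      X.B := X.A;       --  ... then read through the parameter
   end Bump;
begin
   Bump (Global);
   --  By reference: X.A is Global.A = 1, so Global ends up (1, 1).
   --  By copy: X's copy was taken as (0, 0) at the call, so the
   --  copy-back leaves Global = (0, 0).  Depending on the mechanism
   --  this way is an error under RM 6.2; the point is only that the
   --  language leaves the compiler the choice.
end Aliasing_Demo;
```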
A better argument might be that Ada has better dataflow control
constructs than Fortran 77 and 66 (and in some aspects better than
Fortran 90, though WHERE and the more recent FORALL have their
advantages), but I don't know how strong an argument that is given that
reputedly most compilers use intermediate forms that replace those
constructs by their unstructured equivalents.
One reason I did not bring up non-local optimization is that for some
languages important cases of such optimizations are known to be NP hard
in the general case, e.g., C argument aliasing in the absence of C99's
restrict. I do not know how pertinent that is for Ada vs Fortran.
An additional problem in this discussion is that some compilers may have
a switch that in effect implements: this code has additional semantics
not specified by the standard.
> <snip>
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: Matrix Multiplication
1999-12-15 0:00 ` William B. Clodius
@ 1999-12-16 0:00 ` Robert A Duff
1999-12-16 0:00 ` William B. Clodius
0 siblings, 1 reply; 41+ messages in thread
From: Robert A Duff @ 1999-12-16 0:00 UTC (permalink / raw)
"William B. Clodius" <wclodius@lanl.gov> writes:
>...If I remember correctly, Ada requires
> either copy-in/copy-out or call by reference semantics by the compiler
> (without specifying the choice made) in most contexts, ...
I believe RM-6.2(12) allows substantially more freedom to the compiler
than that. But you are correct that it's not *quite* as much freedom as
Fortran allows. Pretty close, though, I think.
>...while Fortran
> requires that code allow either copy-in/copy-out or call by reference
> implementation, but does not require that the actual implementation be
> consistent with either semantics, i.e. at arbitrary points in the
> procedure arguments can be copied-in/copied-out without synchronizing
> with other copy-ins/copy-outs. ...
Anyway, there's an important non-technical issue: How much money are
people willing to expend on making optimizing compilers.
- Bob
^ permalink raw reply [flat|nested] 41+ messages in thread
[parent not found: <01bf4708$99ef98f0$022a6282@dieppe>]
* Re: Matrix Multiplication
[not found] ` <01bf4708$99ef98f0$022a6282@dieppe>
@ 1999-12-15 0:00 ` Robert A Duff
1999-12-15 0:00 ` Marin D. Condic
` (2 more replies)
1999-12-15 0:00 ` Gautier
1 sibling, 3 replies; 41+ messages in thread
From: Robert A Duff @ 1999-12-15 0:00 UTC (permalink / raw)
"Pascal Obry" <pascal_obry@csi.com> writes:
> Well I don't know what is a fast or slow language !!
I do. A fast language is one for which it is feasible to build
compilers that generate fast code. A slow language is one for which
that is not feasible.
Also I prefer to put the burden of proof on the language advocates --
that is, a language should be considered "slow" until proven "fast" by
the existence of at least one good production-quality compiler.
By this definition, Smalltalk, for example, is slow -- I've never seen a
Smalltalk compiler that can generate fast code. Furthermore, it seems
impossible, without doing all code generation at link time, which I
claim is not feasible in many cases.
I don't know whether Fortran is faster than Ada at matrix multiplies,
but it does seem like a meaningful question to ask. If you measured
lots of compilers, you could learn something useful.
- Bob
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: Matrix Multiplication
1999-12-15 0:00 ` Robert A Duff
@ 1999-12-15 0:00 ` Marin D. Condic
1999-12-16 0:00 ` Dieter Britz
1999-12-16 0:00 ` Pascal Obry
2 siblings, 0 replies; 41+ messages in thread
From: Marin D. Condic @ 1999-12-15 0:00 UTC (permalink / raw)
Robert A Duff wrote:
>
> "Pascal Obry" <pascal_obry@csi.com> writes:
>
> > Well I don't know what is a fast or slow language !!
>
> I do. A fast language is one for which it is feasible to build
> compilers that generate fast code. A slow language is one for which
> that is not feasible.
If, for example, Ada required runtime range checks which were illegal to
disable or no mechanism was available to remove them, then you'd have a
"slow language" - right? Or if the definition of "floating point" in Ada
were such that it was impossible to map to machine hardware (some
bizarre requirement to use packed decimal representation, or to use so
many bits that nobody's hardware would support it, or something
similar), then the language itself would be slow.
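Ada of course does provide the escape hatch Marin alludes to: checks can be suppressed per unit or per check kind with pragma Suppress (and GNAT also offers the global -gnatp switch). A small sketch:

```ada
procedure Fast_Loop is
   pragma Suppress (Index_Check);
   pragma Suppress (Range_Check);
   type Vector is array (1 .. 1_000) of Float;
   V : Vector;
begin
   --  With the pragmas above (or a compiler switch), no index or
   --  range checks are emitted for this loop.  Here a good compiler
   --  could prove the accesses safe anyway, since I ranges over
   --  V'Range; the pragmas matter in the cases it cannot prove.
   for I in V'Range loop
      V (I) := Float (I);
   end loop;
end Fast_Loop;
```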
There are lots of ways I can think of to make a language slow. Maybe we
need to have a "slowest language construct" contest? Everyone submits
their favorite "slow" feature (From Ada or any other language) and we
challenge the compiler writers to find the fastest possible
implementation? What does the winner get - besides the extra work of
implementing the optimized feature? :-)
MDC
--
=============================================================
Marin David Condic - Quadrus Corporation - 1.800.555.3393
1015-116 Atlantic Boulevard, Atlantic Beach, FL 32233
http://www.quadruscorp.com/
Visit my web site at: http://www.mcondic.com/
"Capitalism without failure is like religion without sin."
-- Allan Meltzer, Economist
=============================================================
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: Matrix Multiplication
1999-12-15 0:00 ` Robert A Duff
1999-12-15 0:00 ` Marin D. Condic
@ 1999-12-16 0:00 ` Dieter Britz
1999-12-16 0:00 ` Pascal Obry
2 siblings, 0 replies; 41+ messages in thread
From: Dieter Britz @ 1999-12-16 0:00 UTC (permalink / raw)
On Wed, 15 Dec 1999, Robert A Duff wrote:
> "Pascal Obry" <pascal_obry@csi.com> writes:
>
> > Well I don't know what is a fast or slow language !!
>
> I do. A fast language is one for which it is feasible to build
> compilers that generate fast code. A slow language is one for which
> that is not feasible.
>
> Also I prefer to put the burden of proof on the language advocates --
> that is, a language should be considered "slow" until proven "fast" by
> the existence of at least one good production-quality compiler.
>
> By this definition, Smalltalk, for example, is slow -- I've never seen a
> Smalltalk compiler that can generate fast code. Furthermore, it seems
> impossible, without doing all code generation at link time, which I
> claim is not feasible in many cases.
>
> I don't know whether Fortran is faster than Ada at matrix multiplies,
> but it does seem like a meaningful question to ask. If you measured
> lots of compilers, you could learn something useful.
This must depend on the specific compiler. Compilers have become better at
optimising code over the last couple of decades. Years ago, I often needed
to shift large array sections, and (on a PDP11, under RT11) wrote
myself an assembler-code subroutine to do the shift; that turned out
to run about 100 times as fast as the equivalent Fortran code. I feel
sure that now, there would not be so much difference, if any (but I
don't have an assembler anymore). Later, I compared Pascal and Fortran 77
on a VAX machine, and Fortran was, on average, about twice as fast. It
might depend on what sort of operations you normally program.
-- Dieter Britz alias db@kemi.aau.dk; http://www.kemi.aau.dk/~db
*** Echelon, bomb, sneakers, GRU: swamp the snoops with trivia! ***
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: Matrix Multiplication
1999-12-15 0:00 ` Robert A Duff
1999-12-15 0:00 ` Marin D. Condic
1999-12-16 0:00 ` Dieter Britz
@ 1999-12-16 0:00 ` Pascal Obry
1999-12-16 0:00 ` Greg Martin
1999-12-16 0:00 ` Brian Rogoff
2 siblings, 2 replies; 41+ messages in thread
From: Pascal Obry @ 1999-12-16 0:00 UTC (permalink / raw)
Robert A Duff <bobduff@world.std.com> wrote in message:
wcchfhkxjeg.fsf@world.std.com...
> I do. A fast language is one for which it is feasible to build
> compilers that generate fast code. A slow language is one for which
> that is not feasible.
I pretty well understand that, but we are then talking about implementation
in a compiler.
>
> Also I prefer to put the burden of proof on the language advocates --
> that is, a language should be considered "slow" until proven "fast" by
> the existence of at least one good production-quality compiler.
>
> By this definition, Smalltalk, for example, is slow -- I've never seen a
> Smalltalk compiler that can generate fast code. Furthermore, it seems
Ok, we have never seen one, but is it really impossible? I do remember
something about IBM creating a Smalltalk compiler; is that true? How
fast was it?
> impossible, without doing all code generation at link time, which I
> claim is not feasible in many cases.
>
> I don't know whether Fortran is faster than Ada at matrix multiplies,
> but it does seem like a meaningful question to ask. If you measured
This still seems a strange question to me! Certainly the algorithm used in
the implementation has more to do with the speed than the language itself.
Pascal.
--
--|------------------------------------------------------------
--| Pascal Obry Team-Ada Member |
--| |
--| EDF-DER-IPN-SID- T T I |
--| Intranet: http://cln46gb |
--| Bureau N-023 e-mail: pascal.obry@edf.fr |
--| 1 Av Général de Gaulle voice : +33-1-47.65.50.91 |
--| 92141 Clamart CEDEX fax : +33-1-47.65.50.07 |
--| FRANCE |
--|------------------------------------------------------------
--|
--| http://ourworld.compuserve.com/homepages/pascal_obry
--|
--| "The best way to travel is by means of imagination"
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: Matrix Multiplication
1999-12-16 0:00 ` Pascal Obry
@ 1999-12-16 0:00 ` Greg Martin
1999-12-16 0:00 ` Brian Rogoff
1 sibling, 0 replies; 41+ messages in thread
From: Greg Martin @ 1999-12-16 0:00 UTC (permalink / raw)
On Thu, 16 Dec 1999 09:32:45 +0100, "Pascal Obry" <p.obry@der.edf.fr>
wrote:
>Ok, we have never seen one, but is it really impossible ? I do remember
>something about IBM creating a Smalltalk compiler, is that true ? How
>fast was it ?
>
I know nothing about it but there is a Visual Age Smalltalk.
http://www-4.ibm.com/software/ad/smalltalk/
regards,
Greg Martin
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: Matrix Multiplication
1999-12-16 0:00 ` Pascal Obry
1999-12-16 0:00 ` Greg Martin
@ 1999-12-16 0:00 ` Brian Rogoff
1 sibling, 0 replies; 41+ messages in thread
From: Brian Rogoff @ 1999-12-16 0:00 UTC (permalink / raw)
On Thu, 16 Dec 1999, Pascal Obry wrote:
> Robert A Duff <bobduff@world.std.com> a écrit dans le message :
> wcchfhkxjeg.fsf@world.std.com...
> > I do. A fast language is one for which it is feasible to build
> > compilers that generate fast code. A slow language is one for which
> > that is not feasible.
>
> I pretty well understand that, but we are then talking about implementation
> in a compiler.
>
> > Also I prefer to put the burden of proof on the language advocates --
> > that is, a language should be considered "slow" until proven "fast" by
> > the existence of at least one good production-quality compiler.
> >
> > By this definition, Smalltalk, for example, is slow -- I've never seen a
> > Smalltalk compiler that can generate fast code. Furthermore, it seems
I like Bob Duff's definition, though it probably needs a bit more
elaboration. A few years ago Richard O'Keefe posted some interesting
microbenchmarks to comp.lang.ada comparing Pascal, Scheme, and Ada
programs for two dimensional integration using downward funargs, with the
(GNAT) Ada code using generics to simulate this. The Scheme code was the
fastest. The Scheme compiler was an aggressive whole-program optimizer
called Stalin. Despite the success of this microbenchmark, and of Stalin, I
suspect that Scheme would be classified as a "slow" language, like
Smalltalk, since that compiler has not yet proven itself production quality.
So what's the litmus test for being production quality?
> Ok, we have never seen one, but is it really impossible ? I do remember
> something about IBM creating a Smalltalk compiler, is that true ? How
> fast was it ?
You can create compilers for any language. However, Smalltalk keeps a lot
of info around at runtime, so for example, when doing integer arithmetic
you get transparent overflow into bignums if you need them. That costs!
If your Smalltalk were to make each element of a float or complex array a
self identifying (tagged) object then I guarantee you Smalltalk numerical
linear algebra will run much slower than Fortran or Ada!
-- Brian
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: Matrix Multiplication
[not found] ` <01bf4708$99ef98f0$022a6282@dieppe>
1999-12-15 0:00 ` Robert A Duff
@ 1999-12-15 0:00 ` Gautier
1 sibling, 0 replies; 41+ messages in thread
From: Gautier @ 1999-12-15 0:00 UTC (permalink / raw)
> Well I don't know what is a fast or slow language !!
An example: Swiss French is slower than Parisian French.
Ah, no, sorry - it's a difference in compilers...
G.
^ permalink raw reply [flat|nested] 41+ messages in thread
* RE: Matrix Multiplication
@ 1999-12-15 0:00 Carlisle, Martin
1999-12-15 0:00 ` Mario Klebsch
` (2 more replies)
0 siblings, 3 replies; 41+ messages in thread
From: Carlisle, Martin @ 1999-12-15 0:00 UTC (permalink / raw)
To: comp.lang.ada
The idea that matrix multiplication would always be inlined seems absurd to
me. The simple implementation has O(n^3) running time. Any enhancement
requires a fair amount of code, which I surely wouldn't want replicated
every time I did a matrix multiply. The overhead of a function call, in
this case, would be so small as to be negligible.
--Martin
----------------------------------------------
Martin C. Carlisle, PhD
Assistant Professor of Computer Science
US Air Force Academy
DISCLAIMER: This message represents the author's opinions, and not
necessarily those of the US Air Force Academy or the US Air Force.
-----Original Message-----
From: Richard Maine [mailto:maine@qnet.com]
Sent: Tuesday, December 14, 1999 8:25 PM
To: comp.lang.ada@ada.eu.org
Subject: Re: Matrix Multiplication
"E. Robert Tisdale" <edwin@netwood.net> writes:
> That's even easier. The Ada version could call the f90 intrinsic too.
> All you would need to do is link the f90 library which contains matmul.
I will not get into the language comparison or benchmarking aspects of
this thread.
I just note that "linking the f90 library that contains matmul" isn't
necessarily that straightforward. Matmul is an intrinsic. It is not
at all given that there even *IS* a library that contains it; a compiler
is quite free to always do it inline. And even if much of the work is
in a library routine, the interface to it isn't necessarily known
outside of the compiler.
Intrinsics are basically part of the compiler internals. They *MAY* be
implemented with callable library routines, but there is no guarantee of
that.
And it certainly isn't a portable way to call them.
--
Richard Maine
maine@qnet.com
_______________________________________________
comp.lang.ada mailing list
comp.lang.ada@ada.eu.org
http://ada.eu.org/cgi-bin/mailman/listinfo/comp.lang.ada
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: Matrix Multiplication
1999-12-15 0:00 Carlisle, Martin
@ 1999-12-15 0:00 ` Mario Klebsch
1999-12-19 0:00 ` Robert Dewar
1999-12-19 0:00 ` Robert Dewar
1999-12-19 0:00 ` Robert Dewar
2 siblings, 1 reply; 41+ messages in thread
From: Mario Klebsch @ 1999-12-15 0:00 UTC (permalink / raw)
"Carlisle, Martin" <Martin.Carlisle@usafa.af.mil> writes:
>The idea that matrix multiplication would always be inlined seems absurd to
>me. The simple implementation has O(n^3) running time.
What about compiling for a CPU that has an instruction for
matrix multiplication? It seems absurd to me not to use that
instruction, if it is available.
73, Mario
--
Mario Klebsch mario@klebsch.de
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: Matrix Multiplication
1999-12-15 0:00 ` Mario Klebsch
@ 1999-12-19 0:00 ` Robert Dewar
0 siblings, 0 replies; 41+ messages in thread
From: Robert Dewar @ 1999-12-19 0:00 UTC (permalink / raw)
In article <60l838.ocd.ln@ds9.klebsch.de>,
mario@klebsch.de (Mario Klebsch) wrote:
> "Carlisle, Martin" <Martin.Carlisle@usafa.af.mil> writes:
>
> >The idea that matrix multiplication would always be inlined seems absurd to
> >me. The simple implementation has O(n^3) running time.
>
> What about compiling for a CPU that has an instruction for
> matrix multiplication? It seems absurd to me not to use that
> instruction, if it is available.
It may seem absurd, but it is likely the case that if you DO
have such an instruction it should not be used in many cases.
The secret of reasonably efficient code for modern CISC machines
is often to ignore many of the junk instructions (this is for
sure true on the Pentium for example!)
^ permalink raw reply [flat|nested] 41+ messages in thread
* RE: Matrix Multiplication
1999-12-15 0:00 Carlisle, Martin
1999-12-15 0:00 ` Mario Klebsch
@ 1999-12-19 0:00 ` Robert Dewar
1999-12-19 0:00 ` Robert Dewar
2 siblings, 0 replies; 41+ messages in thread
From: Robert Dewar @ 1999-12-19 0:00 UTC (permalink / raw)
In article
<9BBB0C9AF506D311A68E00902745A537C236B1@fsxqpz04.usafa.af.mil>,
comp.lang.ada@ada.eu.org wrote:
> The idea that matrix multiplication would always be inlined seems absurd to
> me. The simple implementation has O(n^3) running time. Any enhancement
> requires a fair amount of code, which I surely wouldn't want replicated
> every time I did a matrix multiply. The overhead of a function call, in
> this case, would be so small as to be negligible.
Of course the naive n**3 implementation is appallingly slow for
large matrices because of cache effects; no one would use it for
large arrays, and indeed the rather complex algorithms that WOULD
be used are quite unsuitable for inlining.
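The standard cure for those cache effects is blocking (tiling). A hedged Ada sketch — the tile size is a guess, and since the language does not mandate a storage order, the best loop nesting is implementation-dependent:

```ada
procedure Blocked_Demo is
   N   : constant := 256;
   Blk : constant := 32;   --  tile size; tune to the cache
   type Matrix is array (1 .. N, 1 .. N) of Float;
   A, B : Matrix := (others => (others => 1.0));
   C    : Matrix := (others => (others => 0.0));
begin
   --  Walk the matrices in Blk x Blk tiles so the working set of
   --  the three innermost loops stays cache-resident.
   for II in 0 .. N / Blk - 1 loop
      for KK in 0 .. N / Blk - 1 loop
         for JJ in 0 .. N / Blk - 1 loop
            for I in II * Blk + 1 .. II * Blk + Blk loop
               for K in KK * Blk + 1 .. KK * Blk + Blk loop
                  for J in JJ * Blk + 1 .. JJ * Blk + Blk loop
                     C (I, J) := C (I, J) + A (I, K) * B (K, J);
                  end loop;
               end loop;
            end loop;
         end loop;
      end loop;
   end loop;
end Blocked_Demo;
```

As the post says, none of this is something you would want inlined at every call site.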
^ permalink raw reply [flat|nested] 41+ messages in thread
* RE: Matrix Multiplication
1999-12-15 0:00 Carlisle, Martin
1999-12-15 0:00 ` Mario Klebsch
1999-12-19 0:00 ` Robert Dewar
@ 1999-12-19 0:00 ` Robert Dewar
2 siblings, 0 replies; 41+ messages in thread
From: Robert Dewar @ 1999-12-19 0:00 UTC (permalink / raw)
In article
<9BBB0C9AF506D311A68E00902745A537C236B1@fsxqpz04.usafa.af.mil>,
comp.lang.ada@ada.eu.org wrote:
> The idea that matrix multiplication would always be inlined seems absurd to
> me. The simple implementation has O(n^3) running time. Any enhancement
> requires a fair amount of code, which I surely wouldn't want replicated
> every time I did a matrix multiply. The overhead of a function call, in
> this case, would be so small as to be negligible.
Of course the naive n**3 implementation is appallingly slow for
large matrices because of cache effects; no one would use it for
large arrays, and indeed the rather complex algorithms that WOULD
be used are quite unsuitable for inlining.
^ permalink raw reply [flat|nested] 41+ messages in thread
* RE: Matrix Multiplication
@ 1999-12-16 0:00 Carlisle, Martin
1999-12-16 0:00 ` Howard W. LUDWIG
0 siblings, 1 reply; 41+ messages in thread
From: Carlisle, Martin @ 1999-12-16 0:00 UTC (permalink / raw)
To: comp.lang.ada
Suppose you have such a machine (wow! talk about complex instruction sets!).
I then grant that any good Fortran compiler would use it. However, it also
follows that any Ada compiler could create a small function to use it (e.g.
using Machine code insertions), and then pragma Inline that function.
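A hedged sketch of what that could look like with GNAT-style machine-code insertion. The MATMUL4 mnemonic and its operand constraints are entirely made up for this hypothetical CPU; only the general shape (System.Machine_Code plus pragma Inline) is real:

```ada
with System;
package Machine_Matmul is
   --  Multiply two 4x4 matrices at the given addresses; hypothetical.
   procedure Mat_Mul_4x4 (A, B, C : System.Address);
   pragma Inline (Mat_Mul_4x4);
end Machine_Matmul;

with System.Machine_Code; use System.Machine_Code;
package body Machine_Matmul is
   procedure Mat_Mul_4x4 (A, B, C : System.Address) is
   begin
      --  "matmul4" is an invented mnemonic; real register constraints
      --  would depend on the target and the compiler's Asm facility.
      Asm ("matmul4 %0, %1, %2",
           Inputs   => (System.Address'Asm_Input ("r", A),
                        System.Address'Asm_Input ("r", B),
                        System.Address'Asm_Input ("r", C)),
           Volatile => True);
   end Mat_Mul_4x4;
end Machine_Matmul;
```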
--Martin
-----Original Message-----
From: mario@klebsch.de [mailto:mario@klebsch.de]
Sent: Wednesday, December 15, 1999 11:01 AM
To: comp.lang.ada@ada.eu.org
Subject: Re: Matrix Multiplication
"Carlisle, Martin" <Martin.Carlisle@usafa.af.mil> writes:
>The idea that matrix multiplication would always be inlined seems absurd to
>me. The simple implementation has O(n^3) running time.
What about compiling for a CPU, that does have an instruction for
matrix multiplication? It seems absurd to me, not to use that
instruction, if it is available.
73, Mario
--
Mario Klebsch mario@klebsch.de
_______________________________________________
comp.lang.ada mailing list
comp.lang.ada@ada.eu.org
http://ada.eu.org/cgi-bin/mailman/listinfo/comp.lang.ada
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: Matrix Multiplication
1999-12-16 0:00 Carlisle, Martin
@ 1999-12-16 0:00 ` Howard W. LUDWIG
0 siblings, 0 replies; 41+ messages in thread
From: Howard W. LUDWIG @ 1999-12-16 0:00 UTC (permalink / raw)
Well, I've worked on a processor which has a rather complex instruction set,
although it did not include matrix multiplication as an instruction. My
favorite instruction was the one that allowed me to do FFTs on a power-of-2
number of complex data items, up to 64. The Ada compiler vendor had lots of
fun with this machine--they created a package of Machine-Specific Procedures
which we could call and would be compiled/translated into the corresponding
assembly instruction inline. About 90% of the code looked like assembly
language with Ada syntax (semicolon at the end of the statement, parentheses
around the set of operands, ...).
Now you might ask why we did such a thing--why not just write in assembly
language in the first place? Let's just say--overzealous bureaucrats
enforcing the "single language policy" (also commonly known as "the Ada
Mandate").
Howard W. LUDWIG
"Carlisle, Martin" wrote:
>
> Suppose you have such a machine (wow! talk about complex instruction sets!).
> I then grant that any good Fortran compiler would use it. However, it also
> follows that any Ada compiler could create a small function to use it (e.g.
> using Machine code insertions), and then pragma Inline that function.
>
> --Martin
^ permalink raw reply [flat|nested] 41+ messages in thread
[parent not found: <637de084-0e71-4077-a1c5-fc4200cad3cf@googlegroups.com>]
* Re: Does Ada need elemental functions to make it suitable for scientific work?
[not found] <637de084-0e71-4077-a1c5-fc4200cad3cf@googlegroups.com>
@ 2012-07-10 8:39 ` Nasser M. Abbasi
2012-07-10 9:07 ` Dmitry A. Kazakov
0 siblings, 1 reply; 41+ messages in thread
From: Nasser M. Abbasi @ 2012-07-10 8:39 UTC (permalink / raw)
On 7/10/2012 3:02 AM, gautier.de.montmollin@gmail.com wrote:
> On Tuesday 10 July 2012 07:22:58 UTC+2, Ada novice wrote:
>
>> Believe me, for someone who does numerical computations on a daily basis, an operation
>> like V**2 is a NECESSITY. It's a pity that Ada does not offer such a facility.
>> I do not blame those who stick to Matlab (or Fortran?) for doing numerical computations.
>
> Wait... There are *such* facilities and several built-in operations *like* that:
> http://www.ada-auth.org/standards/12rm/html/RM-G-3-1.html
>
> Apparently it is not enough. But it is possible to make a proposal for
> the next standard (and short term for the next GNAT version).
>
That is a start. But not enough by any means. All functions should be
vectorized. Even user defined ones.
In Ada, I can't even take the sin of a vector
----------------------
with Ada.Numerics.Elementary_Functions;
use Ada.Numerics.Elementary_Functions;
procedure foo3 is
type array_t is array(1..5) OF float;
D : constant array_t :=(0.1,0.2,0.3,0.4,0.5);
V : array_t;
begin
-- V := sin(D); -- ERROR
for I in D'Range loop -- must use a LOOP
V(I) := sin(D(I));
end loop;
end foo3;
--------------------
I can of course overload sin, hide the loop inside my own sine
and then do
V := myPackage.sin(D);
All of this is possible in Ada. It is just a little (a lot?)
more effort compared to what is out there.
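That overload is only a few lines; a sketch (the package and names are illustrative, and the two compilation units are shown together here):

```ada
package Vector_Math is
   type Vector is array (Positive range <>) of Float;
   function Sin (V : Vector) return Vector;
end Vector_Math;

with Ada.Numerics.Elementary_Functions;
package body Vector_Math is
   function Sin (V : Vector) return Vector is
      use Ada.Numerics.Elementary_Functions;
      R : Vector (V'Range);
   begin
      for I in V'Range loop
         R (I) := Sin (V (I));  --  resolves to the scalar Sin on Float
      end loop;
      return R;
   end Sin;
end Vector_Math;
```

With this, `V := Vector_Math.Sin (D);` works for any D declared of the unconstrained Vector type.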
octave/Matlab:
--------------------------
D=[0.1,0.2,0.3,0.4,0.5];
sin(D)
ans =
0.0998 0.1987 0.2955 0.3894 0.4794
---------------------------
> The snag with that *specific* operation, v**2 meaning "v(i)**2 for each i",
>is that it is counter-intuitive from a linear algebra point of view. You
>would rather expect v**2 = v*v. Now another snag is should "*" be the
>dot or the cross product ?...
>
> These kind of things are thought-out carefully before landing into an Ada standard...
>
But these issues were solved a long time ago (maybe 30 years?)
and are well defined for all cases:
http://www.mathworks.com/help/techdoc/ref/arithmeticoperators.html
There are element-wise operators, and operators that work
on the whole vector or matrix.
In octave/Matlab world, V.^2 means element-wise. (notice
the ".")
-------------------------
octave:3> V=[1 2 3 4];
octave:4> V.^2
ans =
1 4 9 16
--------------------------
Otherwise, it becomes standard matrix operation.
A^2 means A*A
------------------------
octave:17> A=[1 2;3 4]
A =
1 2
3 4
octave:18> A^2
ans =
7 10
15 22
octave:19> A*A
ans =
7 10
15 22
---------------------
To do a dot product, it has its own function, dot()!
----------------------
octave:5> dot(V,V)
ans = 30
-------------------------
To do a cross product, it has its own function, cross()!
-----------------------
octave:9> D=[1 2 3]; V=[3 4 5]; cross(D,V)
ans =
-2 4 -2
-------------------
BTW, in Mathematica it is the other way around: V*V does
element-by-element multiplication, and V.V does the matrix product.
It also has Dot[] and Cross[], all functions are vectorized,
and user-defined functions can be vectorized by simply
giving them the Listable attribute.
--Nasser
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: Does Ada need elemental functions to make it suitable for scientific work?
2012-07-10 8:39 ` Does Ada need elemental functions to make it suitable for scientific work? Nasser M. Abbasi
@ 2012-07-10 9:07 ` Dmitry A. Kazakov
2012-07-12 0:31 ` robin.vowels
0 siblings, 1 reply; 41+ messages in thread
From: Dmitry A. Kazakov @ 2012-07-10 9:07 UTC (permalink / raw)
On Tue, 10 Jul 2012 03:39:58 -0500, Nasser M. Abbasi wrote:
> That is a start. But not enough by any means. All functions should be
> vectorized.
It is ill-defined. E.g. exp(A), where A is a matrix: is exp(A) a matrix of
element-wise exponentials or the matrix exponential? To me it is the second.
A similar example for vectors: abs V. Is it a vector of absolute values, or maybe ||V||?
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: Does Ada need elemental functions to make it suitable for scientific work?
2012-07-10 9:07 ` Dmitry A. Kazakov
@ 2012-07-12 0:31 ` robin.vowels
2012-07-12 7:12 ` Dmitry A. Kazakov
0 siblings, 1 reply; 41+ messages in thread
From: robin.vowels @ 2012-07-12 0:31 UTC (permalink / raw)
Cc: mailbox
On Tuesday, 10 July 2012 19:07:05 UTC+10, Dmitry A. Kazakov wrote:
> On Tue, 10 Jul 2012 03:39:58 -0500, Nasser M. Abbasi wrote:
>
> > That is a start. But not enough by any means. All functions should be
> > vectorized.
>
> It is ill-defined. E.g. exp(A), where A is a matrix. Is exp(A) a matrix of
> exponents or exponent matrix?
In languages that provide whole array operations (i.e.,
element-by-element operations -- such as PL/I and Fortran),
it is the former.
BTW., I think you mean matrix exponential, which is a far
less common operation than exp(A) or e**A(i) for i = 1 to n,
and would be written MATRIX_EXPONENTIAL or some such,
just as matrix multiplication would be written MATRIX_MULT
or some such, to distinguish it from the more common
element-by-element product.
> To me it is the second. A similar example for
> vectors: abs V. Is it a vector of absolute values, or maybe ||V||?
For element-by-element operations, it means the first,
as in PL/I and Fortran.
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: Does Ada need elemental functions to make it suitable for scientific work?
2012-07-12 0:31 ` robin.vowels
@ 2012-07-12 7:12 ` Dmitry A. Kazakov
2012-07-29 13:39 ` Robin Vowels
0 siblings, 1 reply; 41+ messages in thread
From: Dmitry A. Kazakov @ 2012-07-12 7:12 UTC (permalink / raw)
On Wed, 11 Jul 2012 17:31:33 -0700 (PDT), robin.vowels@gmail.com wrote:
>> It is ill-defined. E.g. exp(A), where A is a matrix. Is exp(A) a matrix of
>> exponents or exponent matrix?
>
> In languages that provide whole array operations (i.e.,
> element-by-element operations -- such as PL/I and Fortran),
> it is the former.
>
> BTW., I think you mean matrix exponential, which is a far
> less common operation than exp(A) or e**A(i) for i = 1 to n,
In linear algebra, provided matrices mean matrices, a per-element operation
just does not make any sense. Exp(A), as well as power series, is fairly
common in spectral analysis.
> and would be written MATRIX_EXPONENTIAL or some such,
> just as matrix multiplication would be written MATRIX_MULT
> or some such, to distinguish it from the more common
> element-by-element product.
My FORTRAN and PL/I are quite rusty, but even these incredibly poor
languages did not define multiplication for matrices that way. In fact they
simply had no matrices the last time I used either.
I remember one library for sparse matrices in FORTRAN-IV, doing LU
decomposition and other stuff. It was quite fun. The library was in fact
very well designed, but since FORTRAN-IV lacked even elementary data types,
they did all the memory management required by using INTEGER*4 values as
indices into one huge REAL*4 array serving as a memory pool.
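The pool-in-one-array idiom described above can be sketched as follows. This is an illustrative Python rendering of the FORTRAN-IV trick, with all names and sizes hypothetical:

```python
# One big REAL array serves as the memory pool; INTEGER indices play the
# role of pointers, since the language has no allocator of its own.
POOL_SIZE = 10_000
pool = [0.0] * POOL_SIZE   # the single huge REAL*4 array
free_ptr = 0               # next unused slot (an INTEGER*4 "pointer")

def alloc(n):
    """Hand out n consecutive slots from the pool; return the base index."""
    global free_ptr
    base = free_ptr
    if base + n > POOL_SIZE:
        raise MemoryError("pool exhausted")
    free_ptr += n
    return base

# A sparse-matrix row stored as (value, column) pairs inside the pool;
# even the column numbers live in the REAL array, as FORTRAN-IV code
# of that era often forced.
row = alloc(6)
for k, (val, col) in enumerate([(2.5, 0), (1.0, 3), (4.0, 7)]):
    pool[row + 2 * k] = val
    pool[row + 2 * k + 1] = float(col)
print(pool[row:row + 6])   # [2.5, 0.0, 1.0, 3.0, 4.0, 7.0]
```

Freeing and compaction are left out; real libraries of that kind tracked free lists with more integer indices into the same array.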
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
* Re: Does Ada need elemental functions to make it suitable for scientific work?
2012-07-12 7:12 ` Dmitry A. Kazakov
@ 2012-07-29 13:39 ` Robin Vowels
2012-07-29 14:22 ` Dmitry A. Kazakov
0 siblings, 1 reply; 41+ messages in thread
From: Robin Vowels @ 2012-07-29 13:39 UTC (permalink / raw)
On Jul 12, 5:12 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
wrote:
> On Wed, 11 Jul 2012 17:31:33 -0700 (PDT), rrrrrrr@gmail.com wrote:
> >> It is ill-defined. E.g. exp(A), where A is a matrix. Is exp(A) a matrix of
> >> exponents or exponent matrix?
>
> > In languages that provide whole array operations (i.e.,
> > element-by-element operations -- such as PL/I and Fortran),
> > it is the former.
>
> > BTW., I think you mean matrix exponential, which is a far
> > less common operation than exp(A) or e**A(i) for i = 1 to n,
>
> In linear algebra, provided matrices mean matrices, per-element operation
> just does not make any sense.
Element-by-element operations are required routinely in numerical
work. They have been demanded and have been available since
at least 1955.
> Exp(A), as well as power series are fairly
> common in spectral analysis.
>
> > and would be written MATRIX_EXPONENTIAL or some such,
> > just as matrix multiplication would be written MATRIX_MULT
> > or some such, to distinguish it from the more common
> > element-by-element product.
>
> My FORTRAN and PL/I are quite rusty, but even these incredibly poor
> languages did not define multiplication for matrices that way.
PL/I defined multiplication for matrices as an element-by-element
product, as I said before.
Back then, FORTRAN did not offer such operations on matrices;
however, it now does, and has done so since Fortran 90.
> In fact they
> just had no matrices last time I used either.
Back then, both PL/I and FORTRAN offered matrices (and still do).
For that matter, both offered multi-dimensional arrays (and still do).
> I remember one library for sparse matrices in FORTRAN-IV, doing LU
> decomposition and other stuff. It was quite fun. The library was in fact
> very well-designed, but since FORTRAN-IV lacked even elementary data types,
FORTRAN has always had elementary data types.
> they did all memory management required using INTEGER*4 as an index in one
> huge REAL*4 array, serving as a memory pool.
That's because FORTRAN IV did not have dynamic arrays.
PL/I did, of course, from the first compilers in c. 1966.
* Re: Does Ada need elemental functions to make it suitable for scientific work?
2012-07-29 13:39 ` Robin Vowels
@ 2012-07-29 14:22 ` Dmitry A. Kazakov
2012-07-29 20:54 ` glen herrmannsfeldt
0 siblings, 1 reply; 41+ messages in thread
From: Dmitry A. Kazakov @ 2012-07-29 14:22 UTC (permalink / raw)
On Sun, 29 Jul 2012 06:39:58 -0700 (PDT), Robin Vowels wrote:
> On Jul 12, 5:12 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
> wrote:
>> On Wed, 11 Jul 2012 17:31:33 -0700 (PDT), rrrrrrr@gmail.com wrote:
>>>> It is ill-defined. E.g. exp(A), where A is a matrix. Is exp(A) a matrix of
>>>> exponents or exponent matrix?
>>
>>> In languages that provide whole array operations (i.e.,
>>> element-by-element operations -- such as PL/I and Fortran),
>>> it is the former.
>>
>>> BTW., I think you mean matrix exponential, which is a far
>>> less common operation than exp(A) or e**A(i) for i = 1 to n,
>>
>> In linear algebra, provided matrices mean matrices, per-element operation
>> just does not make any sense.
>
> Element-by-element operations are required routinely in numerical
> work.
I wrote specifically about matrices as known in linear algebra.
> PL/I defined multiplication for matrices as an element-by-element
> product, as I said before.
Sorry for PL/I!
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
* Re: Does Ada need elemental functions to make it suitable for scientific work?
2012-07-29 14:22 ` Dmitry A. Kazakov
@ 2012-07-29 20:54 ` glen herrmannsfeldt
[not found] ` <apib1897s56dkultqmfl3emvk1os3tfdak@invalid.netcom.com>
0 siblings, 1 reply; 41+ messages in thread
From: glen herrmannsfeldt @ 2012-07-29 20:54 UTC (permalink / raw)
In comp.lang.pl1 Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote:
(snip, someone wrote)
>>>>> It is ill-defined. E.g. exp(A), where A is a matrix.
>>>>> Is exp(A) a matrix of exponents or exponent matrix?
I can see general-purpose (more or less) high-level languages
offering a matrix-multiply operator, but this seems a little
less common.
The only language I know of that offers matrix multiply and
not element-by-element multiply is BASIC.
>>>> In languages that provide whole array operations (i.e.,
>>>> element-by-element operations -- such as PL/I and Fortran),
>>>> it is the former.
Fortran now has the MATMUL intrinsic to do matrix multiplication.
The * operator does element-by-element multiplication.
One could ask for a MATEXP intrinsic, though it isn't likely
to be added soon.
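The distinction glen draws between Fortran's MATMUL intrinsic and its element-wise * can be shown in a minimal sketch (Python standing in for the Fortran semantics):

```python
def matmul(a, b):
    """True matrix product, what Fortran's MATMUL intrinsic computes."""
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def elementwise(a, b):
    """Element-by-element product, what Fortran's A * B computes."""
    return [[x * y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
print(matmul(A, B))       # [[19.0, 22.0], [43.0, 50.0]]
print(elementwise(A, B))  # [[5.0, 12.0], [21.0, 32.0]]
```

Same operands, entirely different results, which is exactly why the thread keeps insisting the two operations need visibly different spellings.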
-- glen
end of thread, other threads:[~2012-08-07 7:35 UTC | newest]
Thread overview: 41+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
1999-12-14 0:00 Matrix Multiplication William Dale
1999-12-14 0:00 ` Gautier
[not found] ` <5l1f5s4kck891a2s6o8bhvkirm4q79hm6c@4ax.com>
1999-12-15 0:00 ` Gautier
1999-12-15 0:00 ` Marin D. Condic
1999-12-15 0:00 ` Ted Dennison
1999-12-15 0:00 ` Gautier
1999-12-15 0:00 ` Tucker Taft
1999-12-16 0:00 ` Ted Dennison
1999-12-16 0:00 ` Ted Dennison
1999-12-15 0:00 ` Gautier
1999-12-16 0:00 ` Marin D. Condic
1999-12-27 0:00 ` Jeffrey L Straszheim
1999-12-15 0:00 ` Gautier
1999-12-14 0:00 ` David C. Hoos, Sr.
1999-12-15 0:00 ` Greg Lindahl
1999-12-15 0:00 ` Preben Randhol
1999-12-15 0:00 ` E. Robert Tisdale
[not found] ` <3856FD3F.8291A71C@ucar.edu>
1999-12-15 0:00 ` E. Robert Tisdale
1999-12-14 0:00 ` Richard Maine
1999-12-15 0:00 ` Ted Dennison
1999-12-15 0:00 ` William B. Clodius
1999-12-15 0:00 ` Ted Dennison
1999-12-15 0:00 ` William B. Clodius
1999-12-16 0:00 ` Robert A Duff
1999-12-16 0:00 ` William B. Clodius
[not found] ` <01bf4708$99ef98f0$022a6282@dieppe>
1999-12-15 0:00 ` Robert A Duff
1999-12-15 0:00 ` Marin D. Condic
1999-12-16 0:00 ` Dieter Britz
1999-12-16 0:00 ` Pascal Obry
1999-12-16 0:00 ` Greg Martin
1999-12-16 0:00 ` Brian Rogoff
1999-12-15 0:00 ` Gautier
-- strict thread matches above, loose matches on Subject: below --
1999-12-15 0:00 Carlisle, Martin
1999-12-15 0:00 ` Mario Klebsch
1999-12-19 0:00 ` Robert Dewar
1999-12-19 0:00 ` Robert Dewar
1999-12-19 0:00 ` Robert Dewar
1999-12-16 0:00 Carlisle, Martin
1999-12-16 0:00 ` Howard W. LUDWIG
[not found] <637de084-0e71-4077-a1c5-fc4200cad3cf@googlegroups.com>
2012-07-10 8:39 ` Does Ada need elemental functions to make it suitable for scientific work? Nasser M. Abbasi
2012-07-10 9:07 ` Dmitry A. Kazakov
2012-07-12 0:31 ` robin.vowels
2012-07-12 7:12 ` Dmitry A. Kazakov
2012-07-29 13:39 ` Robin Vowels
2012-07-29 14:22 ` Dmitry A. Kazakov
2012-07-29 20:54 ` glen herrmannsfeldt
[not found] ` <apib1897s56dkultqmfl3emvk1os3tfdak@invalid.netcom.com>
2012-07-30 4:15 ` glen herrmannsfeldt
[not found] ` <nfhd181tv9u87mcqfb7rgd8lm48ihr9f4r@invalid.netcom.com>
2012-07-31 8:53 ` MATRIX MULTIPLICATION Robin Vowels
2012-07-31 9:05 ` Robin Vowels