From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ted Dennison
Subject: Re: Matrix Multiplication
Date: 1999/12/15
Message-ID: <838koc$b6d$1@nnrp1.deja.com>
References: <385699B5.59C14D03@lmco.com>
Newsgroups: comp.lang.ada,comp.lang.fortran
Organization: Deja.com - Before you buy.

In article <385699B5.59C14D03@lmco.com>, fud@fud.com wrote:
> OK,
> I was challenged by one of my co-workers - a control and guidance type
> - not a real software engineer.
>
> His claim : "Fortran is faster doing floating point multiplication
> than Ada"

I hope you meant "matrix multiplication", as in the subject. That statement makes absolutely no sense otherwise.

> I could not get any other specifications such as hardware, particular
> compiler, version, vendor, special math libraries or any other
> equivocations. He just claims the above in all cases.

So he admits that he basically doesn't know what he's talking about. Nice.

A couple of years ago I took a graduate course in compiler optimizations for parallel architectures. As you can imagine, the number crunchers who use high-end parallel machines want to exploit those parallel capabilities to speed up their code without having to rewrite it all in another language. The language in question is almost always Fortran (from what I can tell, for historical reasons).
Anyway, this need has driven a lot of research into optimizing Fortran code on parallel architectures. It has also produced a new variant of the language: HPF (High-Performance Fortran). HPF has three new looping constructs that correspond to parallel operations. The big issue in optimizing these kinds of things is figuring out the lifetimes of objects and values.

The teacher, of course, was a big Fortran fan, so I didn't mention Ada (I wanted a good grade; I'm not that stupid). But all during the class I kept raising my hand and asking, "Wouldn't it be easier to perform this optimization if you had {insert Ada rule here}?" Generally, she'd agree. In fact, I think it would be quite possible to create an Ada compiler that could optimize matrix operations *better* than the best Fortran compiler, assuming you also devise loop pragmas to match the HPF parallel looping constructs. You could certainly do it as well with the same amount of effort.

But "effort" is the key. How much effort a compiler vendor puts into optimization has nothing directly to do with the language the compiler recognizes. It has a lot to do with the compiler's maturity and its target market. However, I wouldn't be shocked to discover that your average Fortran developer cares immensely more about the efficiency of looped matrix operations than your average Ada developer does.

In any case, claiming "compiled language X is faster than compiled language Y" is just flat-out wrong.

--
T.E.D.

Sent via Deja.com http://www.deja.com/
Before you buy.