From: "Marin D. Condic"
Subject: Re: Matrix Multiplication
Date: 15 Dec 1999 17:55:15 GMT
Organization: Quadrus Corporation
Newsgroups: comp.lang.ada
Message-ID: <3857D640.C1991F0C@quadruscorp.com>
References: <385699B5.59C14D03@lmco.com> <3856C9A1.F89EFD8@maths.unine.ch> <5l1f5s4kck891a2s6o8bhvkirm4q79hm6c@4ax.com> <3857B51F.4B1E0F1E@maths.unine.ch>

Gautier wrote:
>
> > Intrigued about the 'renames' bit. I thought the renames was just a
> > compiler overhead and had no run-time effect at all.
>
> IIRC the renames `aliases' its target in its present state.
> E.g.
>   declare
>     x : thing renames complicated(i,j).k;
>   begin
>     -- i,j could change, doesn't affect x
>
> The ARM gurus will comment better...

It would seem that if the compiler were smart, it would compute I and J
once, find the position once, and then reuse it throughout the loop. (I'm
presuming this would be done in nested "for" loops.) I'm not an expert in
compiler theory, but I recall seeing output from more than one
compiler/language doing this sort of thing. (Loop-invariant code motion?
Code hoisting? Some terminology I've long since archived to tape.) So
while the "renames" may help the compiler along in this respect, I think
that it could/should get there without the help. Of course, I've seen lots
of things where a compiler "ought" to do something, but won't unless you
trick it into doing it with syntactic deceptions.
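To make the "renames" idea concrete, here is a minimal sketch (the Matrix
type and procedure name are my own, purely illustrative) of a matrix
multiply where the rename gives the compiler an explicit handle on the
loop-invariant element C (I, J) -- the address is computed once when the
declare block is elaborated, rather than on every access, which is exactly
the hoisting a good optimizer could do unaided:

    type Matrix is array (Positive range <>, Positive range <>) of Float;

    procedure Multiply (A, B : in Matrix; C : out Matrix) is
    begin
       for I in C'Range (1) loop
          for J in C'Range (2) loop
             declare
                --  Index computation for C (I, J) happens once, here.
                Element : Float renames C (I, J);
             begin
                Element := 0.0;
                for K in A'Range (2) loop
                   Element := Element + A (I, K) * B (K, J);
                end loop;
             end;
          end loop;
       end loop;
    end Multiply;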
That's more a comment on the quality of the compiler. In this case, I'd
think that if Fortran could produce the optimization without a "renames",
then Ada ought to be able to do the same.

The original question in this thread had to do with "Ada can't do floating
point math as fast as Fortran" - which is slightly different from matrix
math in Ada/Fortran. Maybe someone can correct me if I'm wrong (or just
plain call me an ignoramus! :-) but I don't see much syntactic or semantic
difference between Ada arithmetic and Fortran arithmetic. For that matter,
there isn't much apparent difference in array processing. So once you
disable the runtime checks (O.K., maybe that's part of the semantics), the
differences don't amount to a warm bucket of spit. Any performance
difference should be attributable to the quality of the compilers in
question.

(This is probably the most often misunderstood thing about language
efficiency. Most of the "unwashed masses" seem to be incapable of
distinguishing between the quality of a language and the quality of a
specific compiler. So the compiler writers have a special responsibility
to put out good quality products, lest they besmirch the language in the
minds of many! :-)

MDC
--
=============================================================
Marin David Condic - Quadrus Corporation - 1.800.555.3393
1015-116 Atlantic Boulevard, Atlantic Beach, FL 32233
http://www.quadruscorp.com/
Visit my web site at: http://www.mcondic.com/

"Capitalism without failure is like religion without sin."
        -- Allan Meltzer, Economist
=============================================================