Date: 4 Aug 93 17:46:51 GMT
From: agate!library.ucla.edu!ddsw1!news.kei.com!sol.ctr.columbia.edu!math.ohio-state.edu!darwin.sura.net!seas.gwu.edu!mfeldman@ucbvax.Berkeley.EDU (Michael Feldman)
Subject: Re: storing arrays for Fortran (was: QUERY ABOUT MONITOR)
Message-ID: <1993Aug4.174651.2765@seas.gwu.edu>

In article <1993Aug4.090104.20732@sei.cmu.edu> ae@sei.cmu.edu (Arthur Evans) writes:

>michael.hagerty@nitelog.com (Michael Hagerty) complains about the
>inability to direct an Ada compiler to store arrays so as to be
>compatible with Fortran.
>
>However, there is nothing that I can see to keep an Ada vendor from
>introducing some of the functionality of Section M.3 of the 9X document
>into an Ada-83 compiler by use of pragmas. Go poke your favorite
>vendor!
>
That's just why I brought it up a few days ago. I'm poking _all_ of them
at once. IMHO they are just imitating each other anyway. They see only
each other as the competition, while the rest of the world leaves them
behind. There are a lot of aspects of Ada that are not getting exploited,
because the folks who do the compilers seem to do _only_ what the next
contract's customer wants. It's the Beltway Bandit vs. the entrepreneur
mentality.

Another suggestion, unwelcome as it may be. Why don't these guys get
together with the innovative hardware houses and get Ada compilers out
simultaneously with new machines, especially parallel ones? Somehow these
guys always manage to get the Fortran and C dialects written; with Ada,
because of the common front ends, all they need to do is write code
generators that really exploit the machines. The language constructs are
there already.

How 'bout a math library (OK, so it's vendor-dependent) that uses
overloaded operators to REALLY do vector and matrix stuff on parallel
(vector) machines? What are you waiting for? They've already written the
implementations, in C and Fortran; all you need to do is interface 'em
nicely to Ada specs.

I heard a neat story the other week about an Ada compiler for some vector
machine or other that took a loop like

   FOR I IN 1..10 LOOP
      A(I) := B(I) + C(I);
   END LOOP;

and vectorized it. Nice, eh? But they took

   FOR I IN Index LOOP   -- Index is a subtype with range 1..10
      A(I) := B(I) + C(I);
   END LOOP;

and compiled it as a straight sequential loop. Didn't they ever hear of
subtypes?

How many compilers out there will compile an array assignment like

   A := B;   -- who cares what the typedefs are

into a _minimum_ number of block-move instructions for that target? Or do
they compile it as an element-by-element loop? That can make a big
performance difference, can't it?

Sheesh. This is what Ada's high-level constructs (tasking of course, but
also universal array/record assignment, operator overloading, etc.) were
supposed to be ABOUT. NOT one more me-too compiler for Sun SPARC.

Mike Feldman
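
P.S. For the array-storage question that started this thread, here is one
rough sketch of what a vendor pragma for Fortran-compatible layout might
look like. The pragma name and form are hypothetical -- borrowed from the
9X Convention idea, not any existing vendor's syntax:

   TYPE Real_Matrix IS ARRAY (1 .. 100, 1 .. 100) OF Float;

   PRAGMA Convention (Fortran, Real_Matrix);
   -- hypothetical vendor pragma: lay Real_Matrix out in column-major
   -- order, so objects of it can be handed straight to a Fortran routine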
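
P.P.S. And a bare-bones sketch of the overloaded-operator idea -- a
Float-only vector type I made up for illustration, with a plain loop
standing in for whatever the vectorizing code generator (or an interfaced
Fortran/C routine) would really do:

   TYPE Vector IS ARRAY (Positive RANGE <>) OF Float;

   FUNCTION "+" (Left, Right : Vector) RETURN Vector IS
      -- assumes Left and Right have the same bounds
      Result : Vector (Left'Range);
   BEGIN
      FOR I IN Left'Range LOOP
         -- element-by-element add; a smart back end could emit
         -- one vector-add instruction for the whole loop
         Result (I) := Left (I) + Right (I);
      END LOOP;
      RETURN Result;
   END "+";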