From: Kim Whitelaw
Subject: Re: Ada compilers for parallel platforms
Date: 1996/04/29
Message-ID: <96-04-145@comp.compilers>#1/1
Sender: johnl@iecc.com
References: <96-04-091@comp.compilers>
Organization: JRS Research Labs
Keywords: Ada, parallel
Newsgroups: comp.compilers,comp.lang.ada

Samir N. Muhammad wrote:
>
> I am interested in Ada compilers for parallel machines. Does anybody
> know whether Ada was able of entering the mainstream of parallel
> programming? Specifically speaking, has there been an implementation
> of Ada to run on a parallel platform and exploit parallelism at
> different levels (not only task levels). Your help will be highly
> appreciated.

We retargeted an Ada-to-microcode compiler to a SIMD 8-by-8 Systolic
Cellular Array Processor. We handled the parallelism by defining planar
types to represent 8-by-8 arrays of scalar types, with one element per
processor. The arithmetic operators were overloaded to work with the
planar types; e.g., A*B represents 64 multiplications (one per
processor) if either A or B is a planar type.

For example, given the declarations

    A : planar_real;
    B : planar_real;

the statements

    where (A > B);
      B := B * 2.0;
    endwhere;

would double B[i] in those processors i for which A[i] is greater than
B[i]. Because planar operations were used, the test and multiply would
be done simultaneously in all 64 processors.
All of the planar operations (arithmetic, selection, and systolic
movement) were implemented as builtin functions that translate to
inline microcode during compilation. The optimizer overlapped
data-independent operations, so simultaneous add, multiply, and data
movement could be achieved. For a carefully written 20-tap FIR filter,
we achieved an effective throughput of 2.6 gigaflops, close to the
processor's peak of 3.2 gigaflops.

Because Ada allows data abstraction through the use of packages,
overloaded operators, and generic procedures, we were able to develop
an Ada package that allowed parallel algorithms to be developed and
tested using a conventional Ada compiler, and then cross-compiled to
the Systolic Cellular Array Processor. This would have been impossible
in C (unless a preprocessor is used), since operator overloading is not
supported. It would have been difficult in C++, since operators can
only be overloaded for classes, which are hard to generate
zero-overhead code for.

Ada allows distinct scalar types to be created using the "new" keyword,
with full support for operator overloading. For example, you can
declare "type planar_integer is new integer;" and then redefine "+",
"-", "*", ... on the type planar_integer to map to different microcode.
In the host model used for emulating the parallel operations, you
instead declare "type planar_integer is array (0..7, 0..7) of integer;"
and overload the operators on this type.

Note that the planar data type enhancements were supported using
standard Ada 83; no extensions to the language were needed.

Hope this helps,

Kim Whitelaw
JRS Research Labs
--
Send compilers articles to compilers@iecc.com,
meta-mail to compilers-request@iecc.com.
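As a rough sketch of what such a host-model package could look like in
standard Ada 83 (the package and subprogram names here, such as
Planar_Types and Masked_Scale, are hypothetical illustrations, not the
identifiers from the JRS package):

```ada
-- Hypothetical host-model emulation of planar types in standard Ada 83.
-- On the host, each operator loops over all 64 elements sequentially;
-- on the target, the same operators map to inline microcode that runs
-- in all 64 processors at once.
package Planar_Types is
   type Planar_Real is array (0 .. 7, 0 .. 7) of Float;
   type Planar_Mask is array (0 .. 7, 0 .. 7) of Boolean;

   function "*" (Left, Right : Planar_Real) return Planar_Real;
   function ">" (Left, Right : Planar_Real) return Planar_Mask;

   -- Host-model stand-in for the where/endwhere construct:
   -- scale Target only in the elements where Mask is true.
   procedure Masked_Scale (Target : in out Planar_Real;
                           Mask   : in Planar_Mask;
                           Factor : in Float);
end Planar_Types;

package body Planar_Types is
   function "*" (Left, Right : Planar_Real) return Planar_Real is
      Result : Planar_Real;
   begin
      for I in 0 .. 7 loop
         for J in 0 .. 7 loop
            Result (I, J) := Left (I, J) * Right (I, J);
         end loop;
      end loop;
      return Result;
   end "*";

   function ">" (Left, Right : Planar_Real) return Planar_Mask is
      Result : Planar_Mask;
   begin
      for I in 0 .. 7 loop
         for J in 0 .. 7 loop
            Result (I, J) := Left (I, J) > Right (I, J);
         end loop;
      end loop;
      return Result;
   end ">";

   procedure Masked_Scale (Target : in out Planar_Real;
                           Mask   : in Planar_Mask;
                           Factor : in Float) is
   begin
      for I in 0 .. 7 loop
         for J in 0 .. 7 loop
            if Mask (I, J) then
               Target (I, J) := Target (I, J) * Factor;
            end if;
         end loop;
      end loop;
   end Masked_Scale;
end Planar_Types;
```

With declarations A, B : Planar_Real, a call such as
Masked_Scale (B, A > B, 2.0) plays the role of the
where (A > B); B := B * 2.0; endwhere; example: the algorithm can be
debugged with any conventional Ada compiler on the host, then
cross-compiled so the same source exploits all 64 processors.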