From: Pascal Obry
Newsgroups: comp.lang.ada,comp.parallel.mpi
Subject: Re: Ada-friendly MPI
Date: 23 Jun 2005 22:59:15 +0200
Organization: Home - http://www.obry.net

jtg writes:

> In a supercomputer environment many problems arise when you want to use Ada.
> There are very fast communication links between processors, and MPI libraries
> are especially optimized for the hardware. You should have Ada libraries for
> the hardware, and I haven't seen any such library even mentioned.

Have you done any testing? I know people using the Ada distributed annex
on an embedded platform with something like a hundred nodes for HPC
(neutronics) without problems.

> Another problem is infrastructure. You need libraries, you need running host
> processes at every node (for starting your subtasks), you need a task spooling
> system, etc. You need proper configuration. But everything is suited for MPI.
> For example: when you put your parallel task on the queue, the spooler
> automatically

What do you call a parallel task? A process? A thread?
Are you talking about an executable (process) and using some batch
system like LSF/PBS? If so, I don't see the difference with Ada.

> starts subtasks on some nodes as they become available,
> establishes MPI communication between them and assigns an MPI number to
> every subtask. Now, how to begin computations using Ada built-in parallelism?

I'm not talking about built-in parallelism, which is more like OpenMP
(shared memory) than MPI (distributed memory). I'm talking about the Ada
distributed annex.

> I don't even know how to establish Ada-style communication between them.

That's automatic. A program is a set of partitions (executables); each
partition communicates with the others using the Ada built-in Partition
Communication Subsystem.

> And all the problems vanish when you use a simple C binding for the MPI
> library...

Then use the same simple binding from Ada. If the binding is simple you
should be able to create the Ada counterpart without problem.

You did not answer my question: which OS are you using?

Pascal.

--
--|------------------------------------------------------
--| Pascal Obry                           Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--|              http://www.obry.net
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver wwwkeys.pgp.net --recv-key C1082595
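The distributed annex setup described above (a program as a set of
partitions talking through the PCS) can be sketched roughly as follows.
The package and subprogram names here are made up for illustration, and
building such a program requires an Annex E implementation (for example
GLADE or PolyORB with GNAT) plus a partitioning tool such as gnatdist:

```ada
--  Sketch of an Annex E (Distributed Systems) Remote Call Interface.
--  Any partition of the program can call Compute.Sum; the Partition
--  Communication Subsystem transparently carries the call and the
--  result between partitions. Names here are hypothetical.
package Compute is
   pragma Remote_Call_Interface;

   function Sum (A, B : Integer) return Integer;
end Compute;

--  A client partition simply with's the RCI package and calls it;
--  whether the call is local or remote is decided at partitioning
--  time, not in the source:
--
--     with Compute;
--     procedure Client is
--        R : constant Integer := Compute.Sum (2, 3);  --  possibly remote
--     begin
--        null;
--     end Client;
```

The mapping of packages to partitions (which executable hosts Compute,
which hosts Client) is given to the partitioning tool in a separate
configuration, not in the Ada sources themselves.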
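As for the "Ada counterpart" of a simple C MPI binding, a thin binding
is mostly a set of `pragma Import (C, ...)` declarations. The sketch
below covers only a few entry points and makes simplifying assumptions
it flags in comments (in particular, MPI handle representations differ
between MPI implementations, so check your mpi.h):

```ada
with Interfaces.C;
with System;

--  Minimal thin-binding sketch to a C MPI library (MPICH, Open MPI...).
--  The C entry-point names and signatures follow the MPI C API; the
--  Ada declarations are illustrative, not a complete binding.
package MPI_Thin is

   use Interfaces.C;

   --  Caution: the representation of MPI_Comm is implementation
   --  defined (an integer in MPICH, a pointer in Open MPI). A real
   --  binding must match the mpi.h of the linked library.

   --  int MPI_Init(int *argc, char ***argv);
   --  MPI >= 2 accepts NULL for both arguments.
   function MPI_Init
     (Argc : System.Address := System.Null_Address;
      Argv : System.Address := System.Null_Address) return int;
   pragma Import (C, MPI_Init, "MPI_Init");

   --  int MPI_Comm_rank(MPI_Comm comm, int *rank);
   function MPI_Comm_Rank
     (Comm : System.Address; Rank : access int) return int;
   pragma Import (C, MPI_Comm_Rank, "MPI_Comm_rank");

   --  int MPI_Finalize(void);
   function MPI_Finalize return int;
   pragma Import (C, MPI_Finalize, "MPI_Finalize");

end MPI_Thin;
```

Link the resulting program against the MPI library (e.g. -lmpi) and
launch it with the implementation's launcher (mpirun/mpiexec), exactly
as you would a C MPI program.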