From: jonathan
Newsgroups: comp.lang.ada
Subject: Re: Need some light on using Ada or not
Date: Sun, 20 Feb 2011 14:18:51 -0800 (PST)

On Feb 20, 9:15 pm, Pascal Obry wrote:
> Jonathan,
>
> > My preferred plan B would link to MPI, which distributes mpi-tasks
> > over the available cores.  I don't know if that will work either!
>
> That won't be better. MPI is for non-shared memory, as used in clusters.
> Tasking on a single machine is the best option AFAIK.
>
> --
> --|------------------------------------------------------
> --| Pascal Obry                             Team-Ada Member
> --| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
> --|------------------------------------------------------
> --|    http://www.obry.net - http://v2p.fr.eu.org
> --| "The best way to travel is by means of imagination"
> --|
> --| gpg --keyserver keys.gnupg.net --recv-key F949BD3B

Hi Pascal,

MPI is more versatile than you think. When I run 8 MPI tasks on a
single machine with 8 cores, it works beautifully. When I run 48 MPI
tasks on six 8-core machines, it works fine, but you notice the
degraded bandwidth from the network. All of this is transparent to the
user .. all they see is the 8 cores or the 48 cores.

The basic MPI communications model is a remote rendezvous, which I use
extensively. I have nothing but praise for it. Asynchronous
communication is available, but it has never been of much use in my
applications.

But actually, I'd be surprised if MPI worked well in the present
application (benchmarking "new" and unchecked_deallocation).

I am *still* not planning to do any work on this benchmark.

J.
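P.S. For anyone following along: the "rendezvous" idea also exists natively in Ada tasking (the single-machine option Pascal recommends). Here is a minimal sketch, not from either of our posts; the names (Rendezvous_Demo, Worker, Send) are purely illustrative. The entry call blocks until the worker task reaches its accept statement, much like a rendezvous-style point-to-point send in MPI blocks until the matching receive is posted.

```ada
--  Minimal illustrative sketch of a task rendezvous in plain Ada.
--  The main procedure's entry call Worker.Send (42) does not complete
--  until the Worker task executes the matching "accept Send" -- the
--  two tasks meet (rendezvous) at that point and the value is copied.
with Ada.Text_IO; use Ada.Text_IO;

procedure Rendezvous_Demo is

   task Worker is
      entry Send (Value : in Integer);
   end Worker;

   task body Worker is
      Received : Integer;
   begin
      --  Block here until some task calls the Send entry.
      accept Send (Value : in Integer) do
         Received := Value;
      end Send;
      Put_Line ("Worker received" & Integer'Image (Received));
   end Worker;

begin
   Worker.Send (42);  --  Blocks until Worker accepts: the rendezvous.
end Rendezvous_Demo;
```

Compiled with GNAT (gnatmake rendezvous_demo.adb), this prints the received value once the rendezvous completes; the analogy to MPI is loose, of course, since MPI ranks are separate processes rather than tasks in one program.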