From: "Randy Brukardt"
Newsgroups: comp.lang.ada
Subject: Re: Ada and OpenMP
Date: Thu, 7 Mar 2013 17:42:59 -0600
Organization: Jacob Sparre Andersen Research & Innovation
References: <87k3pjht79.fsf@ludovic-brenta.org>

"Peter C. Chapin" wrote in message
news:hr-dnULuncyRjqTM4p2dnAA@giganews.com...
> OpenMP is a different animal than Ada tasks. It provides fine-grained
> parallelism where, for example, it is possible to have the compiler
> automatically parallelize a loop. In C:
>
> #pragma omp parallel for
> for( i = 0; i < MAX; ++i ) {
>     array[i]++;
> }
>
> The compiler automatically splits the loop iterations over an
> "appropriate" number of threads (probably based on the number of cores).

Isn't OpenMP aimed at SIMD-type machines (as in video processors), as
opposed to generalized cores as in typical Intel and ARM designs?
Fine-grained parallelism doesn't make much sense on the latter, because
cache coherence and core scheduling issues will eat up the gains in almost
all circumstances. Ada tasks are a much better model.

> In Ada one might write, perhaps
>
> pragma OMP(Parallel_For);
> for I in 1 .. MAX loop
>     A(I) := A(I) + 1;
> end loop;
>
> Doing this with Ada tasks in such a way that it uses an optimal number of
> threads on each execution (based on core count) would be much more
> complicated, I should imagine. Please correct me if I'm wrong!

Well, this doesn't make much sense. If the pragma doesn't change the
semantics of the loop, then it's not necessary at all (the compiler can and
ought to do the optimization when it makes sense, possibly under the
control of global flags). Programmers are lousy at determining where and
how the best use of machine resources can be made. (Pragma Inline is a
similar thing that should never have existed and certainly shouldn't be
necessary.)

If the pragma does change the semantics, then it violates "good taste in
pragmas". It would be much better for the change to be indicated by syntax
or by an aspect. Pragmas, IMHO, are the worst way to do anything.
Compiler writers tend to use them because they can do so without appearing
to modify the language, but it's all an illusion: the program probably
won't work right without the pragma, so you're still locked into that
particular vendor. Might as well have done it right in the first place (and
made a proposal to the ARG, backed with practice, so it can get done right
in the next version of Ada).

Randy Brukardt, President, Anti-Pragma Society. :-)

> OpenMP has various other features, some of which could be done naturally
> with tasks, but much of what OpenMP is about is semi-automatic
> fine-grained parallelization. It is to Ada tasking what Ada tasking is to
> the explicit handling of locks, etc.
>
> Peter
>
> On 03/07/2013 03:04 PM, Ludovic Brenta wrote:
>> Rego, P. writes on comp.lang.ada:
>>> I'm trying some exercises in parallel computing using the pragmas from
>>> OpenMP in C, but it would be good to use them in Ada as well. Is it
>>> possible to use the OpenMP pragmas in Ada? And... does GNAT GPL
>>> support them?
>>
>> Why would you use pragmas when Ada supports tasking directly in the
>> language?
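
For reference, a minimal sketch (not from the thread) of the plain-tasks
version Peter asks about: one worker task per core, with the core count
taken from System.Multiprocessors (Ada 2012) and the index range split into
contiguous chunks. The procedure, task, and constant names are illustrative
only, and MAX is given an arbitrary value here.

with System.Multiprocessors;

procedure Parallel_Increment is
   MAX : constant := 100_000;
   A   : array (1 .. MAX) of Integer := (others => 0);

   Workers : constant Positive :=
     Positive (System.Multiprocessors.Number_Of_CPUs);
   Chunk   : constant Positive := (MAX + Workers - 1) / Workers;

   task type Worker is
      entry Start (First, Last : in Positive);
   end Worker;

   task body Worker is
      F, L : Positive;
   begin
      --  Wait to be told which slice of the array to handle.
      accept Start (First, Last : in Positive) do
         F := First;
         L := Last;
      end Start;
      for I in F .. L loop
         A (I) := A (I) + 1;
      end loop;
   end Worker;

   Pool : array (1 .. Workers) of Worker;
begin
   --  Hand each worker a contiguous slice; the last slice is clipped to MAX.
   for W in Pool'Range loop
      Pool (W).Start (First => (W - 1) * Chunk + 1,
                      Last  => Positive'Min (W * Chunk, MAX));
   end loop;
   --  The procedure only returns after every worker task has terminated.
end Parallel_Increment;

It is certainly more code than the one-line pragma, which is Peter's point;
whether the explicit form is better or worse is exactly what the rest of
the thread argues about.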