From: Waldek Hebisch
Newsgroups: comp.lang.ada
Subject: Re: Ada and OpenMP
Date: Sun, 10 Mar 2013 18:00:04 +0000 (UTC)
Organization: Politechnika Wroclawska
References: <87k3pjht79.fsf@ludovic-brenta.org>

Randy Brukardt wrote:
> "Peter C. Chapin" wrote in message
> news:hr-dnULuncyRjqTM4p2dnAA@giganews.com...
> > OpenMP is a different animal than Ada tasks. It provides fine-grained
> > parallelism where, for example, it is possible to have the compiler
> > automatically parallelize a loop. In C:
> >
> > #pragma omp parallel for
> > for( i = 0; i < MAX; ++i ) {
> >     array[i]++;
> > }
> >
> > The compiler automatically splits the loop iterations over an
> > "appropriate" number of threads (probably based on the number of cores).
>
> Isn't OpenMP aimed at SIMD-type machines (as in video processors), as
> opposed to generalized cores as in typical Intel and ARM designs?
> Fine-grained parallelism doesn't make much sense on the latter, because
> cache coherence and core scheduling issues will eat up gains in almost all
> circumstances. Ada tasks are a much better model.

Actually, OpenMP only looks like fine-grained parallelism, but is not:
OpenMP creates (and destroys) tasks as needed. The main advantage of
OpenMP is that it automates some common parallel patterns, so the code
stays much closer to the sequential version. It is very hard to get a
similar effect in a fully automatic way, without pragmas: automatic
parallelization simply loses on fine-grained cases, and the pragmas tell
the compiler that the code is coarse-grained enough to be worth splitting
over several tasks. Also, OMP pragmas control memory consistency --
without them the compiler would have to assume the worst case and
generate slower code.

--
                              Waldek Hebisch
hebisch@math.uni.wroc.pl