From: "Peter C. Chapin"
Newsgroups: comp.lang.ada
Subject: Re: Ada and OpenMP
Date: Thu, 07 Mar 2013 17:22:03 -0500

OpenMP is a different animal from Ada tasks. It provides fine-grained
parallelism where, for example, it is possible to have the compiler
automatically parallelize a loop. In C:

#pragma omp parallel for
for( i = 0; i < MAX; ++i ) {
    array[i]++;
}

The compiler automatically splits the loop iterations over an
"appropriate" number of threads (probably based on the number of
cores). In Ada one might write, perhaps,

pragma OMP(Parallel_For);
for I in 1 .. MAX loop
   A(I) := A(I) + 1;
end loop;

Doing this with Ada tasks in such a way that it uses an optimal number
of threads on each execution (based on core count) would be much more
complicated, I should imagine. Please correct me if I'm wrong! (A rough
sketch of a task-based version appears at the end of this message.)

OpenMP has various other features, some of which could be done
naturally with tasks, but much of what OpenMP is about is
semi-automatic fine-grained parallelization. It is to Ada tasking what
Ada tasking is to the explicit handling of locks, etc.

Peter

On 03/07/2013 03:04 PM, Ludovic Brenta wrote:
> Rego, P. writes on comp.lang.ada:
>> I'm trying some exercises of parallel computing using that pragmas
>> from OpenMP in C, but it would be good to use it also in Ada. Is it
>> possible to use that pragmas from OpenMP in Ada? And...does gnat gpl
>> supports it?
>
> Why would you use pragmas when Ada supports tasking directly in the
> language?
>
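For what it's worth, here is a rough, untested sketch of how the loop
above might be split over tasks by hand. The array size, the chunking
scheme, and the Parallel_Increment wrapper are invented purely for
illustration; Number_Of_CPUs from System.Multiprocessors (Ada 2012) is
one way to ask for the core count.

--  Untested sketch: split A(1 .. Max) over one worker task per core.
with System.Multiprocessors;

procedure Parallel_Increment is
   Max : constant := 1_000;
   A   : array (1 .. Max) of Integer := (others => 0);

   Workers : constant Positive :=
     Positive (System.Multiprocessors.Number_Of_CPUs);
   Chunk   : constant Positive := (Max + Workers - 1) / Workers;

   --  Each worker increments one contiguous slice of A.
   task type Worker is
      entry Start (First, Last : in Positive);
   end Worker;

   task body Worker is
      Lo, Hi : Positive;
   begin
      accept Start (First, Last : in Positive) do
         Lo := First;
         Hi := Last;
      end Start;
      for I in Lo .. Hi loop
         A (I) := A (I) + 1;
      end loop;
   end Worker;

   Pool : array (1 .. Workers) of Worker;
begin
   --  Hand each task its slice; the last slice may be shorter.
   for W in Pool'Range loop
      Pool (W).Start
        (First => (W - 1) * Chunk + 1,
         Last  => Positive'Min (W * Chunk, Max));
   end loop;
   --  Parallel_Increment does not return until every worker finishes.
end Parallel_Increment;

Even this simple version has to do the chunk bookkeeping and task
startup by hand, which is exactly the sort of thing the OpenMP pragma
hides.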