From mboxrd@z Thu Jan 1 00:00:00 1970
Newsgroups: comp.lang.ada
Date: Fri, 8 Mar 2013 06:24:59 -0800 (PST)
References: <87k3pjht79.fsf@ludovic-brenta.org>
Message-ID: <1f08b9a5-8b9e-4831-b1f2-fdcc4eef666d@googlegroups.com>
Subject: Re: Ada and OpenMP
From: "Rego, P."
Content-Type: text/plain; charset=ISO-8859-1

On Thursday, March 7, 2013, at 19:22:03 UTC-3, Peter C. Chapin wrote:

> OpenMP is a different animal than Ada tasks. It provides fine grained
> parallelism where, for example, it is possible to have the compiler
> automatically parallelize a loop. In C:
>
>     #pragma omp parallel for
>     for( i = 0; i < MAX; ++i ) {
>         array[i]++;
>     }
>
> The compiler automatically splits the loop iterations over an
> "appropriate" number of threads (probably based on the number of cores).
>
> In Ada one might write, perhaps:
>
>     pragma OMP(Parallel_For);
>     for I in 1 .. MAX loop
>        A(I) := A(I) + 1;
>     end loop;
>
> Doing this with Ada tasks in such a way that it uses an optimal number
> of threads on each execution (based on core count) would be much more
> complicated, I should imagine. Please correct me if I'm wrong!
>
> OpenMP has various other features, some of which could be done naturally
> with tasks, but much of what OpenMP is about is semi-automatic fine
> grained parallelization. It is to Ada tasking what Ada tasking is to the
> explicit handling of locks, etc.
>
> Peter

Yes, that's the idea.
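
For what it's worth, here is a rough sketch of how that same loop could be
spread over one task per core using Ada 2012's
System.Multiprocessors.Number_Of_CPUs. This is my own illustration, not
something from Peter's post or from OpenMP: the procedure name, array
bounds and chunking scheme are all made up, and it assumes Max is at least
the number of cores.

with System.Multiprocessors;

procedure Parallel_Increment is
   --  Illustrative size; assumes Max >= number of cores.
   Max : constant := 1_000;

   type Int_Array is array (1 .. Max) of Integer;
   A : Int_Array := (others => 0);

   Cores : constant Positive :=
     Positive (System.Multiprocessors.Number_Of_CPUs);
   Chunk : constant Positive := Max / Cores;

   task type Worker is
      entry Start (First, Last : Positive);
   end Worker;

   task body Worker is
      F, L : Positive;
   begin
      --  Receive the slice bounds, then update that slice only.
      accept Start (First, Last : Positive) do
         F := First;
         L := Last;
      end Start;
      for I in F .. L loop
         A (I) := A (I) + 1;
      end loop;
   end Worker;

   Workers : array (1 .. Cores) of Worker;
begin
   --  Hand each worker its slice; the last worker absorbs the remainder.
   for W in Workers'Range loop
      Workers (W).Start
        (First => (W - 1) * Chunk + 1,
         Last  => (if W = Cores then Max else W * Chunk));
   end loop;
   --  The procedure does not return until all worker tasks terminate.
end Parallel_Increment;

Each task touches a disjoint slice of A, so no extra locking is needed, but
all of this bookkeeping (core count, chunk bounds, task startup and
termination) is exactly what a pragma like OMP(Parallel_For) would have to
generate for you, which is Peter's point about the convenience of OpenMP.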