From: "Randy Brukardt"
Newsgroups: comp.lang.ada
Subject: Re: GNAT and Tasklets
Date: Fri, 19 Dec 2014 17:51:59 -0600
Organization: Jacob Sparre Andersen Research & Innovation

"Dmitry A. Kazakov" wrote in message
news:yjrnhk0w8gjd.k6ht3uh7raiw.dlg@40tude.net...
> On Fri, 19 Dec 2014 10:40:03 +0000 (UTC), Georg Bauhaus wrote:
>
>> wrote:
>>
>>> It would be interesting to do a little survey on existing code using
>>> tasking. I have the impression that only tasks at library level do
>>> rendez-vous and protected-object synchronisation, and that local
>>> tasks, most of the time, are limited to a rendez-vous with their
>>> parent task at the beginning or at the end. So maybe we should put
>>> restrictions on local tasks, so that we can map them to jobs.
>>
>> Won't the parallel loop feature be providing for this kind of mini
>> job:
>
> Parallel loop is useless for practical purposes. I wonder why people
> are wasting time with this.

Because it's about medium-grained parallelism, rather than either
impractically fine-grained parallelism or the heavyweight machinery of
Ada tasks.

> They could start with logical operations instead:
>
>    X and Y
>
> is already parallel by default. AFAIK nothing in the RM forbids
> concurrent evaluation of X and Y if they are independent.

Not really true in practice -- it's rare that a compiler can prove
independence of anything but the simplest entities (which are too cheap
to benefit from optimization). Common-subexpression elimination is hard
and rarely does anything.

Additionally, the "arbitrary order" of 1.1.4(18) is taken to mean any
possible sequential order, but NOT a parallel order. Thus parallel
execution is only allowed in cases that (almost) never happen in
practice (large, independent expressions).

One of the reasons for having parallel blocks and loops is to signal to
the compiler and to the reader that parallel execution IS allowed, even
though the result might not correspond to any allowed sequential order.
(A sketch of what that signal costs to write with today's tasks follows
below.)

> Same with Ada arithmetic. E.g.
>
>    A + B + C + D
>
> So far no compiler evaluates arguments concurrently or vectorizes
> sub-expressions like:
>
>    A
>    B +
>    C +
>    D +
>
> Because if they did, the result would be slower than sequential code.
> It simply isn't worth the effort with existing machine architectures.

I agree with you vis-a-vis fine-grained parallelism. The overhead would
kill you.
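Here is the sketch promised above: a minimal, hedged illustration of
what it takes today to tell both the compiler and the reader that two
pieces of work may overlap -- one task apiece, plus a block to wait on
them. Expensive_One, Expensive_Two and their placeholder bodies are
invented purely for illustration.

   --  Hedged sketch only; all names and bodies are made up.
   procedure Signal_Parallelism_Today is
      function Expensive_One return Integer is (42);  --  stand-in work
      function Expensive_Two return Integer is (17);  --  stand-in work
      X, Y : Integer;
   begin
      --  Today, giving each computation its own task is the only way
      --  to say "these are independent; feel free to overlap them".
      declare
         task Left;
         task Right;

         task body Left is
         begin
            X := Expensive_One;
         end Left;

         task body Right is
         begin
            Y := Expensive_Two;
         end Right;
      begin
         null;  --  waits here until Left and Right both terminate
      end;

      --  Both results are now safe to read.
      pragma Assert (X + Y = 59);
   end Signal_Parallelism_Today;

A parallel block would express the same intent in a handful of lines,
without the reader having to reverse-engineer it from the tasking
scaffolding.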
But the reason for parallel loops and blocks is to run expensive
subprogram calls (or other decent-sized chunks of code) in parallel
without writing a massive amount of overhead by hand. That is not
allowed in Ada today because of issues with exception handling and
rules like 1.1.4(18). While

   for I in parallel 1 .. 1000 loop
      A := A + I;
   end loop;

would be madness to execute in parallel, the more realistic

   for I in parallel 1 .. 1000 loop
      A := A + Expensive_Function (I);
   end loop;

probably makes sense. It's the latter that we're interested in. (A
hand-written task version of that second loop is sketched below, to
show the overhead being avoided.)

Randy.
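For a sense of how much boilerplate that second loop replaces, here is
a rough, hedged sketch of one way to write it with today's tasks and a
protected accumulator. The body of Expensive_Function, the fixed
four-way split, and all of the names are assumptions invented for
illustration, and the sketch still ignores the exception-propagation
problems mentioned above.

   --  Hedged sketch only; names, chunking and the stand-in function
   --  body are all invented for illustration.
   with Ada.Text_IO;

   procedure Parallel_Loop_By_Hand is

      function Expensive_Function (I : Integer) return Integer is
      begin
         return I * I;  --  placeholder for genuinely costly work
      end Expensive_Function;

      --  Shared result, protected against concurrent updates.
      protected Accumulator is
         procedure Add (X : Integer);
         function Total return Integer;
      private
         Sum : Integer := 0;
      end Accumulator;

      protected body Accumulator is
         procedure Add (X : Integer) is
         begin
            Sum := Sum + X;
         end Add;

         function Total return Integer is
         begin
            return Sum;
         end Total;
      end Accumulator;

      --  One worker task per chunk of the iteration range.
      task type Worker is
         entry Start (First, Last : Integer);
      end Worker;

      task body Worker is
         Lo, Hi  : Integer;
         Partial : Integer := 0;
      begin
         accept Start (First, Last : Integer) do
            Lo := First;
            Hi := Last;
         end Start;
         for I in Lo .. Hi loop
            Partial := Partial + Expensive_Function (I);
         end loop;
         Accumulator.Add (Partial);
      end Worker;

      Num_Workers : constant := 4;
      N           : constant := 1_000;
      Chunk       : constant := N / Num_Workers;

   begin
      declare
         Workers : array (1 .. Num_Workers) of Worker;
      begin
         for W in Workers'Range loop
            Workers (W).Start
              (First => (W - 1) * Chunk + 1,
               Last  => (if W = Num_Workers then N else W * Chunk));
         end loop;
      end;  --  waits here until every Worker has terminated

      Ada.Text_IO.Put_Line ("A =" & Integer'Image (Accumulator.Total));
   end Parallel_Loop_By_Hand;

Several dozen lines of scaffolding versus three for the proposed loop;
that is the overhead the parallel loop feature is meant to have the
compiler write for us.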