From: "Randy Brukardt"
Newsgroups: comp.lang.ada
Subject: Re: Ada lacks lighterweight-than-task parallelism
Date: Fri, 29 Jun 2018 17:47:42 -0500
Organization: JSA Research & Innovation

"Dmitry A. Kazakov" wrote in message news:ph6b2l$1cta$1@gioia.aioe.org...
> On 2018-06-30 00:01, Randy Brukardt wrote:
...
>> "Parallel" is mainly a hint to the compiler that parallelism can be used
>> (there's almost no case today where a compiler could automatically use
>> parallel code, as there would be a substantial risk of that code being
>> slower than sequential code would be).
>
> Why does it need such hints?

One reason is that a loop is defined to execute its iterations in a
specific order. For instance, in

   for I in 1 .. 5 loop
      -- I takes on the values 1, 2, 3, 4, and 5, in that order.
      ...
   end loop;

The ability of a compiler to parallelize such a loop then requires
determining all of the following:

(1) That there are enough iterations to make parallelizing worthwhile;
(2) That there is enough code in each iteration to make parallelizing
    worthwhile;
(3) That there is no interaction between iterations;
(4) That there is no use of global variables.

With "parallel", the compiler knows that the programmer has requested
unordered, parallel iteration, so it only needs to make the safety checks
and create the appropriate implementation. Otherwise, it can't make the
optimization without knowing all of the above (having simple loops run
very slowly because they were parallelized unnecessarily is not going to
be accepted by many). Indeed, you pretty much can only do it when the
number of iterations is static, the body of the iteration doesn't contain
any external calls, and the number of iterations is relatively large.

It's clearly possible that sometime in the future we'll have new hardware
and operating systems that make the overhead small enough for such
optimizations to be done automatically in enough cases to be worth it.
But that's fine; the situation then would be similar to that for "inline"
-- not 100% necessary, but still a useful hint to the compiler.

                                      Randy.
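
For reference, a minimal sketch of a loop using the "parallel" keyword in
the form proposed for Ada 202x (since included in Ada 2022). The procedure
name, array type, and bounds are invented for illustration, and compiler
support for the syntax is still limited:

   procedure Scale is
      type Real_Array is array (1 .. 100_000) of Float;
      A : Real_Array := (others => 1.0);
   begin
      parallel for I in A'Range loop
         -- Each iteration touches only A (I): there is no interaction
         -- between iterations and no global state, so the safety checks
         -- described above can succeed and the iterations may run in
         -- any order.
         A (I) := A (I) * 2.0;
      end loop;
   end Scale;

The keyword only asserts that unordered execution is acceptable; whether
the compiler actually splits the range into chunks and runs them on
multiple cores remains an implementation decision.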