From: "Dmitry A. Kazakov"
Newsgroups: comp.lang.ada
Subject: Re: Generators/coroutines in future Ada?
Date: Tue, 18 Jul 2017 09:38:11 +0200

On 18/07/2017 01:54, Randy Brukardt wrote:
> No, I think there is very little chance of it becoming possible. When
> I was a young buck at the University of Wisconsin, there was a lot of
> optimism and research into automatic parallelization of programs
> (particularly Fortran programs). I recall reading a number of papers
> in this area for classes.
>
> Fast forward (gulp!) about 38 years, and most of the optimism is
> gone, and there has not been a lot of progress in the area of
> automatic parallelization. Certainly, compilers can recognize certain
> patterns and parallelize them (generally in cases where all of the
> "tasklets" run the same code), but general-purpose parallelization
> has defied solution (at least for conventional languages -- there
> might be more luck with a purpose-built language like ParaSail -- but
> this is an Ada forum and such languages are irrelevant for Ada).
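(The "tasklets all running the same code" case is essentially a data-parallel map: it is easy to parallelize precisely because every worker executes the same pure function on its own element of the data. A minimal sketch -- in Python for brevity, with the parallelism stated explicitly rather than discovered by a compiler; the names are illustrative only:)

```python
# The easy case for parallelization: every "tasklet" runs the same code
# on a different element of the input, with no shared mutable state.
# This is a hand-written sketch, not compiler output.
from concurrent.futures import ThreadPoolExecutor

def tasklet(x: int) -> int:
    # A pure function: no side effects, so the iterations may run in
    # any order, or all at once, without changing the result.
    return x * x

def parallel_map(xs):
    # Executor.map preserves the input order of results.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(tasklet, xs))

print(parallel_map(range(8)))  # -> [0, 1, 4, 9, 16, 25, 36, 49]
```

(The general case defies this treatment exactly because real sequential code has side effects and ordering constraints that no pool of identical workers can honor.)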
>
> It should be clear that fine-grained parallelism (the kind that a
> compiler could conceivably do) is likely to be a failure, as
> scheduling takes a significant amount of overhead, on top of which a
> lot of conventional sequential code doesn't have much parallelism.
> (Remember that things like exceptions, assignment semantics, and the
> like add sequential requirements that probably aren't necessary to
> solve the actual problem.)
>
> I would never want to say never, but I think it is far more likely
> that programming will cease to be a human activity at all (perhaps
> because everything is replaced by neural nets) than automatic
> parallelization of "classic" languages become possible.

Huh, when I was young everybody believed AI was just round the corner.
However, already then -- long, long before that, actually -- it was
well known that neural networks were a dead end. The algorithm (the
perceptron) was invented in the 50's, and it was shown already then
that it can separate only linearly separable classes, i.e. classes
divisible by a flat surface in n-D space. Imagine how that works with
two concentric circles. Even a hen can tell them apart, and probably an
insect too. Fast forward: people forgot the 50's and the 80's, and
incidentally the very skill of researching (reading?) scientific
publications ... and then the perceptron was reborn!

> If you believe, like I do, that automatic parallelization is likely
> to be impossible in the reasonable term, the next best thing is to
> give the compiler hints. Moreover, nothing about "parallel" blocks
> and loops actually requires them to be run in parallel; it just
> suggests the possibility. It also includes checking that the
> "parallel" code is actually possible to execute in parallel, so you
> are actually giving an optimizer more help by ensuring that the code
> can really be run that way.

I fully agree.

> My goal for these constructs is that they are essentially
> semantically identical to the non-parallel versions, just with
> checking that parallel execution is possible.
> (I have no idea how close or far from that goal we will end up, as
> these constructs change a lot from meeting to meeting.)

IMO, it is pure time-wasting while so many problems stay unresolved.
Modern multi-core shared-memory architectures are clearly a dead end.
When new architectures emerge we will see what language support or even
paradigms will be needed to handle them, not before.

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de