From: "Randy Brukardt"
Newsgroups: comp.lang.ada
Subject: Re: Generators/coroutines in future Ada?
Date: Mon, 17 Jul 2017 18:54:53 -0500
Organization: JSA Research & Innovation

"Shark8" wrote in message
news:b6e36917-63f0-4c13-aeee-1a1a6f44d495@googlegroups.com...
On Tuesday, July 11, 2017 at 11:35:19 PM UTC-6, Randy Brukardt wrote:

>> The coming problem with "classic" programming languages like Ada is
>> that they don't map well to architectures with a lot of parallelism.
>> The more constructs that require sequential execution, the worse that
>> Ada programs will perform. ...

> Yes, I know that "automatic parallelization" of high-level languages is
> considered fairly hard right now, but less than 50 years ago an
> optimizing compiler was considered hard... and we gained a LOT from
> allowing optimizing compilers + bootstrapping.

No, I think there is very little chance of it ever becoming possible. When I
was a young buck at the University of Wisconsin, there was a lot of optimism
about, and research into, automatic parallelization of programs (particularly
Fortran programs); I recall reading a number of papers in that area for
classes. Fast forward (gulp!) about 38 years: most of the optimism is gone,
and there has not been much progress in automatic parallelization. Certainly,
compilers can recognize certain patterns and parallelize them (generally in
cases where all of the "tasklets" run the same code), but general-purpose
parallelization has defied solution -- at least for conventional languages.
There might be more luck with a purpose-built language like ParaSail, but
this is an Ada forum, and such languages are irrelevant here.

It should be clear that fine-grained parallelism (the kind a compiler could
conceivably extract on its own) is likely to be a failure: scheduling takes a
significant amount of overhead, and on top of that, a lot of conventional
sequential code doesn't have much parallelism to extract in the first place.
(Remember that things like exceptions, assignment semantics, and the like add
sequential requirements that probably aren't necessary to solve the actual
problem.)

I would never want to say never, but I think it is far more likely that
programming will cease to be a human activity altogether (perhaps because
everything is replaced by neural nets) than that automatic parallelization of
"classic" languages will become possible.

> That's also why I'm a bit dubious about the proposed PARALLEL / AND /
> END blocks: it seems to me that the mistake here is to delve too far
> into the minutiae (the above "parallel assembly" idea) so as to make it
> difficult or impossible to automatically optimize parallel code because
> of the more low-level view imposed by the language... much like C's
> notion of arrays is fundamentally broken and undermines the benefits
> of C++.

If you believe, as I do, that automatic parallelization is likely to remain
impossible for the foreseeable future, the next best thing is to give the
compiler hints. Moreover, nothing about "parallel" blocks and loops actually
requires them to be run in parallel; the syntax merely suggests the
possibility. It also adds checking that the "parallel" code really can be
executed in parallel, so you are giving an optimizer more help by
guaranteeing that the code can be run that way.

My goal for these constructs is that they be semantically identical to the
non-parallel versions, just with checking that parallel execution is
possible. (I have no idea how close to or far from that goal we will end up,
as these constructs change a lot from meeting to meeting.) To make the shape
concrete, a sketch follows below.
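This is an illustration only -- I'm inventing the exact bracketing and all
of the names (Left, Right, Slow_1, Slow_2, Data, A, F), since the real
syntax is still in flux:

   parallel
      Left  := Slow_1 (Data);    --  arm 1
   and
      Right := Slow_2 (Data);    --  arm 2
   end parallel;

   --  A compiler may run the two arms in parallel or sequentially;
   --  either way, the meaning is that of the sequential version. The
   --  new compile-time check is that the arms don't conflict (say,
   --  both assigning Left, or one reading what the other writes),
   --  which is exactly what makes the two executions interchangeable.
   --  Loops would get the same treatment:

   parallel for I in A'Range loop
      A (I) := F (A (I));   --  iterations must not conflict
   end loop;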
> Now, I admit I could very well be wrong here, but there's a gut-feeling
> that this is not the route we want to go down.

You're wrong here. ;-)

I think it is highly unlikely that the majority of Ada code could ever be
parallelized automatically; assignments (needed for results like reductions)
and the handling of exceptions would prevent that. There is also the need (at
least in the foreseeable future) to make your "chunks" of execution large
enough that scheduling overhead doesn't eat the parallelism gains. That is
best handled with explicit code of some kind. Brad's libraries are a step in
the right direction, but one would also want checking for conflicts (one of
the things that is extremely hard to do with existing Ada tasks) and more
integration with the syntax. The hand-chunked example below shows both what
"large enough chunks" means and what such checking would have to catch.
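Here is a rough sketch in plain Ada 2012 tasking -- not Brad's libraries;
every name in it is made up for the example. Four workers each sum one large
contiguous slice of an array and synchronize exactly once at the end, so the
scheduling and synchronization cost is paid per chunk, not per element. Note
that nothing stops a buggy worker body from writing to shared state:
ordinary tasks get no conflict checking, which is precisely the gap the
parallel constructs are meant to fill.

   with Ada.Text_IO;

   procedure Chunked_Sum is

      --  Illustrative only: plain Ada tasking, all names invented.
      Num_Items   : constant := 100_000;
      Num_Workers : constant := 4;   --  a few big chunks, not one per item
      Chunk_Size  : constant := Num_Items / Num_Workers;

      type Vector is array (1 .. Num_Items) of Long_Integer;
      Data : constant Vector := (others => 1);

      --  Combines the partial results; this is the only synchronization
      --  point, so its cost is paid once per chunk.
      protected Accumulator is
         procedure Add (Part : in Long_Integer);
         function Total return Long_Integer;
      private
         Sum : Long_Integer := 0;
      end Accumulator;

      protected body Accumulator is
         procedure Add (Part : in Long_Integer) is
         begin
            Sum := Sum + Part;
         end Add;

         function Total return Long_Integer is
         begin
            return Sum;
         end Total;
      end Accumulator;

      task type Worker is
         entry Start (First, Last : in Positive);
      end Worker;

      task body Worker is
         Lo, Hi : Positive;
         Part   : Long_Integer := 0;
      begin
         accept Start (First, Last : in Positive) do
            Lo := First;
            Hi := Last;
         end Start;

         --  The sequential "chunk": no synchronization inside the loop.
         for I in Lo .. Hi loop
            Part := Part + Data (I);
         end loop;

         Accumulator.Add (Part);
      end Worker;

   begin
      declare
         Pool : array (1 .. Num_Workers) of Worker;
      begin
         for W in Pool'Range loop
            Pool (W).Start (First => (W - 1) * Chunk_Size + 1,
                            Last  => W * Chunk_Size);
         end loop;
      end;  --  leaving the block waits for all of the workers

      Ada.Text_IO.Put_Line (Long_Integer'Image (Accumulator.Total));
   end Chunked_Sum;

Randy.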