Newsgroups: comp.lang.ada
Subject: Re: Generators/coroutines in future Ada?
From: Shark8
Date: Wed, 12 Jul 2017 15:47:17 -0700 (PDT)

On Tuesday, July 11, 2017 at 11:35:19 PM UTC-6, Randy Brukardt wrote:
>
> The coming problem with "classic" programming languages like Ada is that
> they don't map well to architectures with a lot of parallelism. The more
> constructs that require sequential execution, the worse that Ada programs
> will perform.

I agree, but on the flip side, NVidia's CUDA and OpenMP are weird, horrific amalgamations of an [arguably] high-level language and what is essentially "parallel assembly" via pragma, which does nothing at all for the reputation of parallelism in general except MAKE it both difficult and error-prone.

I do wonder how different things would be if, instead of the "parallel-pragma-enabled C" compiler they released for CUDA, they had gone with an Ada compiler having a simple, single pragma Parallel that could be applied to task or [perhaps] protected-object constructs. (There's a sketch of what I mean at the end of this post.)

Yes, I know that "automatic parallelization" of a high-level language is considered fairly hard right now, but less than 50 years ago an optimizing compiler was considered hard... and we gained a LOT from allowing optimizing compilers + bootstrapping.

That's also why I'm a bit dubious about the proposed PARALLEL / AND / END blocks: it seems to me that the mistake here is to delve too far into the minutiae (the "parallel assembly" idea above), so as to make it difficult or impossible to automatically optimize parallel code because of the more low-level view imposed by the language... much like C's notion of arrays is fundamentally broken and undermines the benefits of C++. (A sketch of the proposed block form is also at the end of this post.)

Now, I admit I could very well be wrong here, but there's a gut feeling that this is not the route we want to go down.
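To make the pragma idea concrete, here's a sketch. It's purely hypothetical: pragma Parallel below is invented, and no compiler implements it (an implementation that doesn't recognize it would just warn and ignore it); everything else is ordinary Ada tasking:

   procedure Scale is
      type Vector is array (Positive range <>) of Float;
      Data : Vector (1 .. 100_000) := (others => 1.0);

      --  Each Worker handles one slice of the array.
      task type Worker (First, Last : Positive);
      --  The single, HYPOTHETICAL pragma: "map instances of this
      --  task type onto whatever parallel hardware is available."
      pragma Parallel (Worker);

      task body Worker is
      begin
         for I in First .. Last loop
            Data (I) := Data (I) * 2.0;
         end loop;
      end Worker;

      Lower : Worker (1, 50_000);
      Upper : Worker (50_001, 100_000);
   begin
      null;  --  Scale waits here until both workers complete.
   end Scale;

The point being: the parallelism lives in a construct the language already understands (tasks), and the compiler is free to decide how to map it onto the hardware, rather than the programmer hand-writing "parallel assembly" in pragmas.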
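And for reference, here's roughly what the proposed PARALLEL / AND / END block form looks like, as I understand it -- again, not valid Ada today, and the exact syntax may well differ from whatever the ARG finally adopts (Vector as declared in the sketch above):

   procedure Sum_Halves (A : in Vector; Total : out Float) is
      Mid         : constant Natural := A'First + A'Length / 2 - 1;
      Left, Right : Float := 0.0;
   begin
      --  Each arm is an explicit, programmer-drawn unit of
      --  parallelism; the arms assign to distinct variables.
      parallel
         for I in A'First .. Mid loop
            Left := Left + A (I);
         end loop;
      and
         for I in Mid + 1 .. A'Last loop
            Right := Right + A (I);
         end loop;
      end parallel;
      Total := Left + Right;
   end Sum_Halves;

Notice that the two-way split is frozen right there in the source text: a compiler targeting hardware with a thousand lanes can't easily re-chunk those arms, which is exactly the low-level view I'm worried about.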