From mboxrd@z Thu Jan 1 00:00:00 1970
Newsgroups: comp.lang.ada
Date: Tue, 19 Jun 2018 18:12:01 -0700 (PDT)
Message-ID: <993f28de-6a64-480b-9c6e-d9714bcdef0d@googlegroups.com>
Subject: Re: Ada lacks lighterweight-than-task parallelism
From: Shark8
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"

On Tuesday, June 19, 2018 at 4:14:17 PM UTC-6, Dan'l Miller wrote:
> Ada for the Cold War in 1983 would focus on tasks as the only vehicle for
> parallelism. Ada for the 21st century would also embrace/facilitate slices
> somehow, via some sort of locality of reference or via some sort of
> demarcation of independence.

Don't hate on TASK! TASK is a great construct, and particularly good for:
1) Isolating and/or interfacing both subsystems and jobs, with the
   possibility of taking advantage of multiprocessor capability;
2) Implementing protocols, via the ACCEPT construct (see the toy buffer at
   the end of this post); and
3) Expressing parallelism at a high level rather than the "annotated GPU
   assembly" we get with (e.g.) CUDA/C.

As for something lightweight, we're working on that in the ARG right now:
* PARALLEL DO blocks,
* parallel LOOPs [IIRC, it might be just FOR], and
* some other things like operators, blocking-detection, etc.
Rough sketches of both points are below.
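
To make #2 concrete, here's a toy single-slot buffer (my own sketch, not
anything from the quoted post): the textual order of the ACCEPTs is the
protocol, so a Get can never be taken ahead of its matching Put:

   task type Buffer is
      entry Put (Item : in  Integer);
      entry Get (Item : out Integer);
   end Buffer;

   task body Buffer is
      Value : Integer;
   begin
      loop
         select
            --  A rendezvous on Put must happen first,
            accept Put (Item : in Integer) do
               Value := Item;
            end Put;
            --  and only then is a rendezvous on Get possible.
            accept Get (Item : out Integer) do
               Item := Value;
            end Get;
         or
            terminate;
         end select;
      end loop;
   end Buffer;

And for the lightweight stuff, what's being kicked around looks roughly
like the following. Don't hold me to the exact syntax; it's still moving
through the ARG, no compiler supports it yet, and the names here
(Render_Left_Half, Scene, Data) are just placeholders of mine:

   --  Parallel block: the arms may run on separate logical threads.
   parallel do
      Render_Left_Half (Scene);
   and
      Render_Right_Half (Scene);
   end do;

   --  Parallel loop: iterations may be chunked across cores.
   parallel for I in Data'Range loop
      Data (I) := Data (I) * 2.0;
   end loop;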