From: "Randy Brukardt"
Newsgroups: comp.lang.ada
Subject: Re: PAR (Was: Embedded languages based on early Ada)
Date: Wed, 7 Mar 2007 23:45:23 -0600
Organization: Jacob's private Usenet server
Message-ID:
References: <1172192349.419694.274670@k78g2000cwa.googlegroups.com> <113ls6wugt43q$.cwaeexcj166j$.dlg@40tude.net> <1i3drcyut9aaw.isde6utlv6iq.dlg@40tude.net> <1c61jqeqo68w$.2irtg70stnsa.dlg@40tude.net> <1vdieyr16h7ct$.1vuvfmghy8dzo$.dlg@40tude.net> <1l5727owshrjf$.uuylbc4ek430.dlg@40tude.net> <45EF1E2B.2020703@obry.net>

"Dr. Adrian Wrigley" wrote in message
news:pan.2007.03.07.20.42.03.883636@linuxchip.demon.co.uk.uk.uk...
...
> The contribution is the very lightweight syntax and semantics.
> One extra keyword, to permit a wide choice of implementations,
> parallel, serial, hardware, pipeline, threaded etc.
>
> PAR is ideal for the common case of running absolutely independent code!

But only if the code is guaranteed to be independent! Otherwise, all of
those different implementation techniques will give different results.
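To make the hazard concrete, here is a two-block sketch in a Wrigley-style
"par" notation (hypothetical syntax, not legal Ada; Slow_Function is an
invented name). The first block is safe under any implementation strategy;
the second is not:

   --  Sketch only: hypothetical PAR notation, not legal Ada.
   --  Independent arms: serial, threaded, pipelined, or hardware
   --  implementations all yield the same A and B.
   par
      A := Slow_Function (1);
      B := Slow_Function (2);
   end par;

   --  Not independent: Total is read and written by both arms, so
   --  the final value depends on how the implementation interleaves
   --  (or overlaps) them.
   par
      Total := Total + Slow_Function (1);
      Total := Total + Slow_Function (2);
   end par;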
And relying on programmers to understand the many ways that this could go
wrong is not going to help any (or give this technique a good reputation,
for that matter).

If I were designing this sort of language feature, I would start with a
very strong set of restrictions known to prevent most trouble, and then
would look for additional things that could be allowed. The starting point
would be something like a parallel block:

   [declare in parallel
      declarative_part]
   begin in parallel
      sequence_of_statements;
   end in parallel;

where the declarative part could only include object declarations
initialized by parallel functions (new, see below), and the
sequence_of_statements could only be parallel procedure calls.

A parallel subprogram would be defined by the keyword "parallel". It would
be like a normal Ada subprogram, except:

* Access to global variables is prohibited, other than protected objects,
  atomic objects, and objects of a declared parallel type. Note that this
  also includes global storage pools!

* Parameters and results can only be by-copy types, protected objects,
  and objects of a declared parallel type.

These restrictions would be somewhat like those of a pure package, but
would be aimed at ensuring that only objects that are safe to access
concurrently could be accessed. (It would still be possible to get into
trouble by having objects accessed in different orders in different
subprograms - which could matter in parallel execution - but I don't
think it is practical to prevent that. It's likely that the operations
will need to access some common data store, and rules that did not allow
that could not go anywhere.)

A "declared parallel type" would be a private type that had a pragma
declaring that all of its operations are task-safe. (That's needed to
provide containers that could be used in parallel subprograms, for
instance.) Precisely what that would mean, I'll leave for some other time.
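Putting the pieces together, a sketch of what a use of this might look
like (hypothetical syntax again; Count_Matches, Report, Chunk, and the
half-array names are all invented for illustration):

   --  Sketch only: "parallel" subprograms and the parallel block
   --  are the proposed extension, not legal Ada.

   parallel function Count_Matches (Data : Chunk) return Natural;
   --  Legal under the restrictions: Chunk and Natural are passed and
   --  returned by copy, and the body touches no globals other than
   --  protected objects, atomic objects, and declared parallel types.

   parallel procedure Report (Count : Natural);

   declare in parallel
      Left  : Natural := Count_Matches (Lower_Half);  -- parallel
      Right : Natural := Count_Matches (Upper_Half);  -- function calls
   begin in parallel
      Report (Left);    -- parallel procedure calls only
      Report (Right);
   end in parallel;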
(It would be possible to survive without this, as you could try to use
protected objects and interfaces for everything. That sounds like a pain
to me...)

Humm, I think you'd actually need two separate blocks: "declare in
parallel" (with the statements executed sequentially) and "begin in
parallel" (with sequential declarations). There seems to be a sequential
(combining) stage that follows the parallel part in most of these
algorithms.

Anyway, food for thought...

                         Randy.
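The first of those two variants might look like this (hypothetical syntax
once more; Sum_Chunk and the chunk names are invented): the declarations
are evaluated in parallel, and the statement part is the ordinary
sequential combining stage.

   --  Sketch only: parallel initialization of the declarations,
   --  followed by sequential statements that combine the results.
   declare in parallel
      S1 : Float := Sum_Chunk (Chunk_1);
      S2 : Float := Sum_Chunk (Chunk_2);
      S3 : Float := Sum_Chunk (Chunk_3);
   begin
      Grand_Total := S1 + S2 + S3;  -- sequential combining stage
   end;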