From: "Dr. Adrian Wrigley"
Subject: Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
Newsgroups: comp.lang.ada,comp.lang.vhdl
Date: Sat, 03 Mar 2007 00:00:52 GMT

On Fri, 02 Mar 2007 17:32:26 +0100, Dmitry A. Kazakov wrote:

> On Fri, 02 Mar 2007 11:36:22 GMT, Dr. Adrian Wrigley wrote:
>
>> On Thu, 01 Mar 2007 14:57:01 +0100, Dmitry A. Kazakov wrote:
>>
>>> On Thu, 01 Mar 2007 11:22:32 GMT, Dr. Adrian Wrigley wrote:
>>>
>>>> If you don't have multiple processors, lightweight threading is
>>>> less attractive than if you do?  Inmos/Occam/Transputer was founded
>>>> on the basis that lightweight threading was highly relevant to
>>>> multiple processors.
>>>>
>>>> Ada has no means of saying "Do these bits concurrently, if you like,
>>>> because I don't care what the order of execution is".  And a compiler
>>>> can't work it out from the source.  If your CPU has loads of threads,
>>>> compiling code with "PAR" style language concurrency is rather useful
>>>> and easy.
>>>
>>> But par is quite low-level. What would be the semantics of:
>>>
>>>    declare
>>>       Thing : X;
>>>    begin
>>>       par
>>>          Foo (Thing);
>>>       and
>>>          Bar (Thing);
>>>       and
>>>          Baz (Thing);
>>>       end par;
>>>    end;
>>
>> Do Foo, Bar and Baz in any order or concurrently, all accessing Thing.
>
> That's the question. If they just have an arbitrary execution order being
> mutually exclusive then the above is a kind of select with anonymous
> accepts invoking Foo, Bar, Baz. The semantics is clean.
>
>> Roughly equivalent to doing the same operations in three separate
>> tasks.  Thing could be a protected object, if concurrent writes
>> are prohibited.  Seems simple enough!
>
> This is a very different variant:
>
>    declare
>       Thing : X;
>    begin
>       declare -- par
>          task Alt_1;  task Alt_2;  task Alt_3;
>          task body Alt_1 is
>          begin
>             Foo (Thing);
>          end Alt_1;
>          task body Alt_2 is
>          begin
>             Bar (Thing);
>          end Alt_2;
>          task body Alt_3 is
>          begin
>             Baz (Thing);
>          end Alt_3;
>       begin
>          null;
>       end; -- par
>
> If par is a sugar for this, then Thing might easily get corrupted. The
> problem with such par is that the rules of nesting and visibility for the
> statements, which are otherwise safe, become very dangerous in the case of
> par.

This is what I was thinking.  Syntax might be even simpler:

   declare
      Thing : X;
   begin
      par
         Foo (Thing);
         Bar (Thing);
         Baz (Thing);
      end par;
   end;

Thing won't get corrupted if the programmer knows what they're doing!
In the case of pure functions, there is "obviously" no problem:

   declare
      Thing : X := InitThing;
   begin
      par
         A1 := Foo (Thing);
         A2 := Bar (Thing);
         A3 := Baz (Thing);
      end par;
      return A1 + A2 + A3;
   end;

In the case of procedures, there are numerous reasonable uses.
Perhaps the three procedures read Thing, and output three separate
files.  Or maybe they write different parts of Thing.  Maybe they
validate different properties of Thing, and raise an exception if a
fault is found.  Perhaps they update statistics stored in a protected
object, not shown.

The most obvious case is if the procedures are called on different
objects.  Next most likely is if they are pure functions.

> Another problem is that Thing cannot be a protected object. Clearly Foo,
> Bar and Baz resynchronize themselves on Thing after updating its parts. But
> the compiler cannot know this. It also does not know that the updates do
> not influence each other. It does not know that the state of Thing is
> invalid until resynchronization. So it will serialize alternatives on write
> access to Thing. (I cannot imagine a use case where Foo, Bar and Baz would
> be pure. There seems to always be a shared outcome which would block them.)
> Further Thing should be locked for the outer world while Foo, Bar, Baz are
> running. So the standard functionality of protected objects looks totally
> wrong here.

Couldn't Thing be composed of protected objects?  That way updates
would be serialised, but wouldn't necessarily block the other
procedures.  Maybe the procedures are very slow, but only touch Thing
at the end?  Couldn't they run concurrently, and be serialised in an
arbitrary order at the end?

Nothing in this problem is different from the issues of doing it with
separate tasks.  So why is this any more problematic?

The semantics I want permit serial execution in any order.  And permit
operation even with a very large number of parallel statements in
effect.  Imagine a recursive call with each level having many parallel
statements.
Creating a task for each directly would probably break.  Something
like an FFT, for example: FFT the upper and lower halves of Thing in
parallel, then combine serially.

Exception semantics would probably differ.  Any statement raising an
exception would presumably stop all the other par statements(?)

The compiler should be able to generate code which creates a
reasonable number of threads, depending on the hardware being used.

>> I'm looking for something like Cilk, but even the concurrent loop
>> (JPR's "for I in all 1 .. n loop"?) would be a help.
>
> Maybe, just a guess, the functional decomposition rather than statements
> could be more appropriate here. The alternatives would access their
> arguments by copy-in and resynchronize by copy-out.

Maybe you're right.  But I can't see how to glue this in with Ada (or
VHDL) semantics.
--
Adrian