From: "Dmitry A. Kazakov"
Reply-To: mailbox@dmitry-kazakov.de
Subject: Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
Newsgroups: comp.lang.ada,comp.lang.vhdl
Organization: cbb software GmbH
Date: Sat, 3 Mar 2007 12:00:08 +0100
Message-ID: <1j0a3kevqhqal.riuhe88py2tq$.dlg@40tude.net>

On Sat, 03 Mar 2007 00:00:52 GMT, Dr. Adrian Wrigley wrote:

> On Fri, 02 Mar 2007 17:32:26 +0100, Dmitry A. Kazakov wrote:
>
>> On Fri, 02 Mar 2007 11:36:22 GMT, Dr. Adrian Wrigley wrote:
>>
>>> On Thu, 01 Mar 2007 14:57:01 +0100, Dmitry A. Kazakov wrote:
>>>
>>>> On Thu, 01 Mar 2007 11:22:32 GMT, Dr. Adrian Wrigley wrote:
>>>>
>>> Roughly equivalent to doing the same operations in three separate
>>> tasks. Thing could be a protected object, if concurrent writes
>>> are prohibited. Seems simple enough!
>>
>> This is a very different variant:
>>
>>    declare
>>       Thing : X;
>>    begin
>>       declare -- par
>>          task Alt_1;
>>          task Alt_2;
>>          task Alt_3;
>>          task body Alt_1 is
>>          begin
>>             Foo (Thing);
>>          end Alt_1;
>>          task body Alt_2 is
>>          begin
>>             Bar (Thing);
>>          end Alt_2;
>>          task body Alt_3 is
>>          begin
>>             Baz (Thing);
>>          end Alt_3;
>>       begin
>>          null;
>>       end; -- par
>>    end;
>>
>> If par is a sugar for this, then Thing might easily get corrupted. The
>> problem with such par is that the rules of nesting and visibility for
>> the statements, which are otherwise safe, become very dangerous in the
>> case of par.
>
> This is what I was thinking.
>
> Syntax might be even simpler:
>
>    declare
>       Thing : X;
>    begin par
>       Foo (Thing);
>       Bar (Thing);
>       Baz (Thing);
>    end par;
>
> Thing won't get corrupted if the programmer knows what they're doing!

Surely, but it becomes a pitfall for those who don't. The construct is
inherently unsafe, because it makes no sense without some mutable Thing or
equivalent. Either this mutable thing is accessed unsafely, or else
concurrency gets killed.

> In the case of pure functions, there is "obviously" no problem:
>
>    declare
>       Thing : X := InitThing;
>    begin par
>       A1 := Foo (Thing);
>       A2 := Bar (Thing);
>       A3 := Baz (Thing);
>    end par;
>    return A1 + A2 + A3;
>
> In the case of procedures, there are numerous reasonable uses.
> Perhaps the three procedures read Thing, and output three separate files.
> Or maybe they write different parts of Thing. Maybe they validate
> different properties of Thing, and raise an exception if a fault is
> found. Perhaps they update statistics stored in a protected object, not
> shown.
>
> The most obvious case is if the procedures are called on different
> objects. Next most likely is if they are pure functions.

The problem is that there always exists the final "A1 + A2 + A3", whose
semantics is in question. The alternatives resynchronize on "A1 + A2 + A3"
and I see no obvious way to express this. A PAR statement would not really
help to decompose it. (What you have done is replace the mutable Thing with
the mutable set {A1, A2, A3}. Rename {A1, A2, A3} to Thing and the problem
is still there.)
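To make the hidden resynchronization visible, here is roughly what the
pure-function variant has to expand into with today's Ada. This is only a
sketch: I keep your undeclared X, InitThing, Foo, Bar and Baz, and assume
for illustration that the functions return Integer.

   declare
      Thing : X := InitThing;
      Sum   : Integer;

      protected Join is
         procedure Deliver (Value : Integer);
         entry Wait_All (Total : out Integer);
      private
         Partial : Integer := 0;
         Count   : Natural := 0;
      end Join;

      task Alt_1;
      task Alt_2;
      task Alt_3;

      protected body Join is
         procedure Deliver (Value : Integer) is
         begin
            Partial := Partial + Value;
            Count   := Count + 1;
         end Deliver;

         entry Wait_All (Total : out Integer) when Count = 3 is
         begin
            Total := Partial;
         end Wait_All;
      end Join;

      task body Alt_1 is
      begin
         Join.Deliver (Foo (Thing));
      end Alt_1;

      task body Alt_2 is
      begin
         Join.Deliver (Bar (Thing));
      end Alt_2;

      task body Alt_3 is
      begin
         Join.Deliver (Baz (Thing));
      end Alt_3;

   begin
      Join.Wait_All (Sum);
      --  Sum = A1 + A2 + A3, but only after all three alternatives have
      --  delivered. The shared mutable state has not disappeared, it has
      --  merely moved from Thing into Join.
   end;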
>> Another problem is that Thing cannot be a protected object. Clearly Foo,
>> Bar and Baz resynchronize themselves on Thing after updating its parts.
>> But the compiler cannot know this. It also does not know that the
>> updates do not influence each other. It does not know that the state of
>> Thing is invalid until resynchronization. So it will serialize the
>> alternatives on write access to Thing. (I cannot imagine a use case
>> where Foo, Bar and Baz would be pure. There seems to always be a shared
>> outcome which would block them.) Further, Thing should be locked for
>> the outer world while Foo, Bar and Baz are running. So the standard
>> functionality of protected objects looks totally wrong here.
>
> Could Thing be composed of protected objects? That way updates
> would be serialised but wouldn't necessarily block the other procedures.

That could be a "hierarchical" mutex. But mutexes are themselves very
low-level. The unsafety would still be there; it would just show itself as
deadlocks rather than as corrupted data.

> Maybe the procedures are very slow, but only touch Thing at the end?
> Couldn't they run concurrently, and be serialised in an arbitrary order
> at the end?

That is the key issue, IMO. The ability to chop a large chunk of work, in
which the procedures run independently most of the time, into independent
and serialized parts is what the decomposition is all about...

> Nothing in this problem is different from the issues of doing it with
> separate tasks. So why is this any more problematic?

Because tasks additionally have safe synchronization and data exchange
mechanisms, while PAR would have to rely on inherently unsafe memory
sharing.

> The semantics I want permit serial execution in any order. And permit
> operation even with a very large number of parallel statements in
> effect. Imagine a recursive call with each level having many parallel
> statements. Creating a task for each directly would probably break.
> Something like an FFT, for example. FFT the upper and lower halves
> of Thing in parallel. Combine serially.

Yes, and the run-time could assign the worker tasks from some pool, fully
transparently to the program. That would be very cool.

> Exception semantics would probably differ. Any statement excepting
> would stop all other par statements(?)

But not by abort; rather, it should wait for the next synchronization point
and propagate an exception out of there, so that the alternatives can clean
up the temporary objects they create. (The synchronization points could be
explicit, for example when an alternative calls an entry or a procedure of
a shared thing.)
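Something of that kind can be approximated with a library even now. A very
rough sketch, nothing more (all the names are mine; the pool management is
reduced to a single hand-over slot, and shutdown of the pool is left out):

   with Ada.Exceptions;  use Ada.Exceptions;

   package Par_Pool is

      type Work is access procedure;
      --  Each alternative is wrapped into a parameterless, library-level
      --  procedure and handed over to the pool:
      --
      --     Par_Pool.Submit (Alternative_1'Access);
      --     Par_Pool.Submit (Alternative_2'Access);
      --     Par_Pool.Wait;  --  the join; re-raises the first failure

      procedure Submit (Item : Work);
      procedure Wait;

   end Par_Pool;

   package body Par_Pool is

      protected Dispatcher is
         entry Put (Item : Work);      --  blocks while the slot is occupied
         entry Get (Item : out Work);  --  idle workers wait here
         procedure Done;
         procedure Fail (Fault : Exception_Occurrence);
         entry Wait_All;               --  opens when nothing is pending
      private
         Slot        : Work := null;
         Pending     : Natural := 0;
         Failed      : Boolean := False;
         First_Fault : Exception_Occurrence;
      end Dispatcher;

      task type Worker;
      Pool : array (1 .. 4) of Worker;  --  the size would really come from
                                        --  the hardware

      protected body Dispatcher is
         entry Put (Item : Work) when Slot = null is
         begin
            Slot    := Item;
            Pending := Pending + 1;
         end Put;

         entry Get (Item : out Work) when Slot /= null is
         begin
            Item := Slot;
            Slot := null;
         end Get;

         procedure Done is
         begin
            Pending := Pending - 1;
         end Done;

         procedure Fail (Fault : Exception_Occurrence) is
         begin
            Pending := Pending - 1;
            if not Failed then
               Failed := True;
               Save_Occurrence (First_Fault, Fault);  --  first failure only
            end if;
         end Fail;

         entry Wait_All when Pending = 0 is
         begin
            if Failed then
               Failed := False;
               Reraise_Occurrence (First_Fault);  --  surfaces at the join
            end if;
         end Wait_All;
      end Dispatcher;

      task body Worker is
         Job : Work;
      begin
         loop  --  shutdown of the pool is omitted here
            Dispatcher.Get (Job);
            begin
               Job.all;
               Dispatcher.Done;
            exception
               when Error : others =>
                  Dispatcher.Fail (Error);  --  deferred; siblings keep going
            end;
         end loop;
      end Worker;

      procedure Submit (Item : Work) is
      begin
         Dispatcher.Put (Item);
      end Submit;

      procedure Wait is
      begin
         Dispatcher.Wait_All;
      end Wait;

   end Par_Pool;

The point is that a failed alternative does not abort its siblings. Its
exception is saved and re-raised where the sequential code joins.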
> The compiler should be able to generate code which generates a
> reasonable number of threads, depending on the hardware being used.

Yes.

>>> I'm looking for something like Cilk, but even the concurrent loop
>>> (JPR's for I in all 1 .. n loop?) would be a help.
>>
>> Maybe, just a guess, the functional decomposition rather than statements
>> could be more appropriate here. The alternatives would access their
>> arguments by copy-in and resynchronize by copy-out.
>
> Maybe you're right. But I can't see how to glue this in with
> Ada (or VHDL) semantics.

That is the most difficult part! (:-))

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
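P.S. A rough sketch of what I mean by copy-in/copy-out, using your
FFT-halves example (all the names are invented, and Transform merely stands
for the real, pure computation on one half):

   package Halves is
      type Samples is array (Positive range <>) of Float;
      procedure Process (Thing : in out Samples);
   end Halves;

   package body Halves is

      function Transform (Input : Samples) return Samples is
      begin
         return Input;  --  placeholder for the real, pure computation
      end Transform;

      procedure Process (Thing : in out Samples) is
         Mid : constant Natural := Thing'First + Thing'Length / 2 - 1;

         task Lower_Half is
            entry Result (Values : out Samples);
         end Lower_Half;

         task body Lower_Half is
            --  Copy-in: the alternative computes a value of its own
            Local : constant Samples :=
                       Transform (Thing (Thing'First .. Mid));
         begin
            accept Result (Values : out Samples) do
               Values := Local;  --  copy-out at the rendezvous
            end Result;
         end Lower_Half;

      begin
         declare
            --  The upper half is done by the calling task meanwhile
            Upper : constant Samples :=
                       Transform (Thing (Mid + 1 .. Thing'Last));
         begin
            --  Resynchronization: the results are copied back sequentially
            Lower_Half.Result (Thing (Thing'First .. Mid));
            Thing (Mid + 1 .. Thing'Last) := Upper;
         end;
      end Process;

   end Halves;

The lower half is computed in a task of its own, the upper half in the
calling task, and both results are copied back into Thing strictly after
the rendezvous, so the resynchronization is explicit in the source text.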