From: "Dr. Adrian Wrigley"
Subject: Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
Newsgroups: comp.lang.ada,comp.lang.vhdl
Date: Sat, 03 Mar 2007 13:40:16 GMT
Organization: NTL

On Sat, 03 Mar 2007 11:27:50 +0000, Jonathan Bromley wrote:

> On Sat, 3 Mar 2007 12:00:08 +0100, "Dmitry A. Kazakov" wrote:
>
>> Because tasks additionally have safe synchronization and data exchange
>> mechanisms, while PAR should rely on inherently unsafe memory sharing.
>
> The PAR that I'm familiar with (from CSP/occam) most certainly does
> *not* have "inherently unsafe memory sharing". There seems to be
> an absurd amount of wheel-reinvention going on in this thread.
I think reinvention is necessary. Whatever "par" semantics Occam had
is not available in Ada (or C, C++, Perl or whatever). It was
considered useful then - bring it back!

>>> The semantics I want permit serial execution in any order. And permit
>>> operation even with a very large number of parallel statements in
>>> effect. Imagine a recursive call with each level having many parallel
>>> statements. Creating a task for each directly would probably break.
>>> Something like an FFT, for example. FFT the upper and lower halves
>>> of Thing in parallel. Combine serially.
>>
>> Yes, and the run-time could assign the worker tasks from some pool,
>> fully transparently to the program. That would be very cool.
>
> And easy to do, and done many times before.

How do you do this in Ada? Or VHDL? It's been done many times before,
yes, but not delivered in any currently usable form for the general
programmer :( It's not in any mainstream language I know.

>>> The compiler should be able to generate code which generates a
>>> reasonable number of threads, depending on the hardware being used.
>>
>> Yes
>
> For heaven's sake... You have a statically-determinable number of
> processors. It's your (or your compiler's) choice whether each of
> those processors runs a single thread, or somehow runs multiple
> threads. If each processor is entitled to run multiple threads, then
> there's no reason why the number and structure of cooperating
> threads should not be dynamically variable. If you choose to run
> one thread on each processor, your thread structure is similarly

Of course. But how do I make this choice with the OSs and languages of
today? "nice" doesn't seem to be able to control this when code is
written in Ada or VHDL. Nor is it defined anywhere in the source code.

> static. Hardware people have been obliged to think about this
> kind of thing for decades.
> Software people seem to have a
> pretty good grip on it too, if the textbooks and papers I've read
> are anything to go by. Why is it suddenly such a big deal?

It's been a big deal for a long time as far as I'm concerned. It's not
mostly a matter of "invention", but one of availability and standards.
There is no means in Ada to say "run this in a separate task, if
appropriate". Only a few academic and experimental tools offer that
flexibility. Papers /= practice.

>>> Maybe you're right. But I can't see how to glue this in with
>>> Ada (or VHDL) semantics.
>
> In VHDL, a process represents a single statically-constructed
> thread. It talks to its peers in an inherently safe way through
> signals. With this mechanism, together with dynamic memory
> allocation, you can easily fake up whatever threading regime
> takes your fancy. You probably wouldn't bother, because
> there are more convenient tools to do such things in software
> land, but it can be done.

I'm not sure what you're talking about here. Do you mean like any/all
of Split-C, Cilk, C*, ZPL, HPF, F, data-parallel C, MPI-1, MPI-2,
OpenMP, ViVA, MOSIX, PVM, SVM, Paderborn BSP, the Oxford BSP toolset
and IBM's TSpaces?

Specifying and using fine-grain parallelism requires language,
compiler and hardware support, I think. Consider:

   begin par
      x := sin(theta);
      y := cos(theta);
   end par;

You probably *do* want to create a new thread here, if thread creation
and destruction is much faster than the function calls. You can't know
this at compile time, because it depends on the library in use and on
the actual parameters. Maybe X and Y are of dynamically allocated
length (multi-precision). You can't justify designing hardware with
very short thread creation/destruction times unless the software can
be written to take advantage of them. But none of the mainstream
languages allow fine-grain reordering and concurrency to be specified.
That's the Catch-22 that Inmos/Occam solved. Technically.
The need is emerging again, now that more threads on a chip are easier
to provide than a higher sequential instruction rate.

> In hardware you can do exactly the
> same thing, but one (or more) of your processes must then
> take responsibility for emulating the dynamic memory allocation,
> carving up some real static physical memory according to
> whatever strategy you choose to implement.
>
>> That is the most difficult part! (:-))
>
> Maybe. But then again, maybe organising the structure of
> the actual application is the most difficult part, and this

This is sometimes true.

> vapid rambling about things that are already well-understood
> is actually rather straightforward.

Somewhere our models don't mesh. What is "straightforward" to you is
"impossible" for me. What syntax do I use, and which compiler, OS and
processor do I need, to specify and exploit fine-grain concurrency?
In 1987, the answers were "par", Occam and the Transputer. Twenty
years later, Ada (or VHDL, C++, C#), Linux (or Windows) and Niagara
(or Tukwila, XinC, ClearSpeed, Cell) do not offer us anything remotely
similar. In fact, in twenty years, things have got worse :(
--
Adrian