From: "Dr. Adrian Wrigley"
Subject: Re: question about tasks, multithreading and multi-cpu machines
Newsgroups: comp.lang.ada
Date: Thu, 16 Mar 2006 20:06:02 GMT

On Thu, 16 Mar 2006 19:03:00 +0100, Jean-Pierre Rosen wrote:

> Maciej Sobczak wrote:
>
>> The difference between sequential and parallel execution of two
>> instructions (or blocks of instructions, but that's obvious) should be
>> addressed at the level of control structures, not types (nor objects).
>> This means that if I want to have these:
>>
>> A := 7;
>> B := 8;
>>
>> executed sequentially, then I write them sequentially - just like above
>> (I already see posts claiming that the compiler or CPU can reorder these
>> instructions, but that's not the point; if the instructions have
>> side-effects, then the sequence is really a sequence and I'm talking
>> about expressing the programmer's intents here).
> Actually, in Occam you would write (syntax not guaranteed):
> cobegin
>    A:=7;
> and
>    B:=8;
> coend;

Don't you use a "par" statement here?

   PAR
     A := 7
     B := 8

I have been thinking about the design of programming languages with
regard to efficient execution on parallel hardware.  I find that most
languages are virtually useless for addressing any major form of
parallelism.  Ada *is* helpful in addressing distributed and MIMD/SMP
architectures, but there is no lightweight syntax for simple
concurrency :(

Occam had the right idea.  VHDL is good here too.  In VHDL, we simply
write:

   A <= 7;
   B <= 8;
   C <= FancyFunction (Z);

and all the assignments run concurrently (and continuously!).

I would like to have seen Ada 2005 address parallel programming
better, but the lack of real experience on highly parallel hardware is
both cause and effect of the poor state of the (parallel) hardware
industry.  This vicious circle is a deadlock to industry progress.

In addition to lightweight concurrent statements (like Occam's and
VHDL's), I'd like to see a decent data-parallel syntax and paradigm
(like ZPL, Data Parallel C (the ANSI X3J11.1 report), or HPF).
Finally, I'd like to see the ability to mark subprograms as "pure" to
permit memoization (caching), elimination of duplicated calls,
reordering, and concurrent execution.  This would provide semantics to
facilitate Cilk-like multithreading.

A major part of programming in most languages is the analysis and
transcription of a problem into fully sequential steps.  But a main
objective of hardware design is to turn those sequential steps back
into concurrent activity - whether by hardware pipelining, speculative
execution, multithreading (SMT), or multiprocessing (SMP).  The same
objective is shared by compiler writers trying to un-sequentialize
programs and extract vector operations.  What a waste!
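For comparison, here is roughly how the PAR example above comes out in
today's Ada (a sketch only; the nearest analogue to PAR is a block
with local tasks, which the block awaits at its end - and the
verbosity is precisely the complaint):

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Par_Demo is
   A, B : Integer := 0;
begin
   --  The two tasks run concurrently; the enclosing block does not
   --  exit until both have terminated, so this mimics cobegin/coend.
   declare
      task T1;
      task T2;

      task body T1 is
      begin
         A := 7;
      end T1;

      task body T2 is
      begin
         B := 8;
      end T2;
   begin
      null;  --  implicit wait for T1 and T2 here
   end;
   Put_Line (Integer'Image (A) & Integer'Image (B));
end Par_Demo;
```

Compare the dozen-odd lines of task machinery with the three lines of
Occam above - the semantics are available, but not the lightweight
syntax.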
Surely the failure of modern programming languages to expose problem
parallelism directly is one of the main causes of the "heroics" going
on in processor design today, and of the failure of supercomputer
design to innovate with efficient, cost-effective solutions?

Ada offers a sound technical basis for the language innovations
necessary.  Unfortunately, parallel computing has become a
"Cinderella" subject over decades of commercial and technical
failures - always five years away from ubiquity.

How can the deadlock be broken?
--
Adrian