From: Jonathan Bromley
Newsgroups: comp.lang.ada,comp.lang.vhdl
Subject: Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
Date: Sat, 03 Mar 2007 15:26:35 +0000
Organization: Doulos Ltd
Reply-To: jonathan.bromley@MYCOMPANY.com

On Sat, 03 Mar 2007 13:40:16 GMT, "Dr. Adrian Wrigley" wrote:

>What syntax do I use, and which
>compiler, OS and processor do I need to specify and exploit
>fine-grain concurrency?
>
>In 1987, the answers were "par", Occam, Transputer.
>Twenty years later, Ada (or VHDL, C++, C#), Linux (or Windows),
>Niagara (or Tukwila, XinC, ClearSpeed, Cell) do not offer us
>anything remotely similar. In fact, in twenty years, things have
>got worse :(

Absolutely right.  And whose fault is that?  Not the academics,
who have understood this for decades.  Not the hardware people
like me, who of necessity must understand and exploit massive
fine-grained parallelism (albeit with a static structure).  No,
it's the programmer weenies with their silly nonsense about
threads being inefficient.

Glad to have got that off my chest :-)  But it's pretty
frustrating to be told that parallel programming's time has come,
when I spent a decade and a half trying to persuade people that
it was worth even thinking about, and being told that it was
irrelevant.

For the numerical-algorithms people, I suspect the problem of
inferring opportunities for parallelism is nearer to being solved
than some might imagine.  There are tools around that can convert
DSP-type algorithms (such as the FFT that's already been
mentioned) into hardware that's inherently parallel; there are
behavioural synthesis tools that allow you to explore the various
parallel-vs.-serial tradeoffs when scheduling a computation on
heterogeneous hardware.  It's surely a small step from that to
distributing such a computation across multiple threads or CPUs.
All that's needed is the will.
-- 
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.