From: "Dmitry A. Kazakov"
Reply-To: mailbox@dmitry-kazakov.de
Subject: Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
Newsgroups: comp.lang.ada,comp.lang.vhdl
Organization: cbb software GmbH
Date: Sat, 3 Mar 2007 19:11:49 +0100
Message-ID: <1gu2cq68c8hr$.1264r2leozvow.dlg@40tude.net>
References: <113ls6wugt43q$.cwaeexcj166j$.dlg@40tude.net> <1i3drcyut9aaw.isde6utlv6iq.dlg@40tude.net> <1j0a3kevqhqal.riuhe88py2tq$.dlg@40tude.net> <45E9B032.60502@obry.net>

On Sat, 03 Mar 2007 18:28:18 +0100, Pascal Obry wrote:

> Dr. Adrian Wrigley a écrit :
>> Numerous algorithms in simulation are "embarrassingly parallel",
>> but this fact is completely and deliberately obscured from compilers.
>
> Not a big problem. If the algorithms are "embarrassingly parallel", then
> the jobs are fully independent. In that case it is quite simple:
> create as many tasks as you have processors. No big deal. Each task
> will compute a specific job.
> Ada has no problem with "embarrassingly parallel" jobs.
>
> What I have not yet understood is why people try, in all cases, to
> solve the parallelism at the lowest level. Trying to parallelize an
> algorithm in an "embarrassingly parallel" context is losing precious
> time. Many real-world simulations have billions of such algorithms to
> compute on multiple data; just create a set of tasks, each computing
> one of those algorithms in parallel. Easier and just as effective.
>
> In other words, what I am saying is that in some cases ("embarrassingly
> parallel" computation is one of them) it is easier to do n computations
> in n tasks than n x (1 parallel computation in n tasks), and the overall
> performance is better.

The idea (of PAR etc.) is IMO quite the opposite. It is about treating
parallelism as a compiler optimization problem rather than as a part of
the problem domain. In its simplest form this can be illustrated with
Ada's "or" and "or else". While the former is potentially parallel (the
compiler is free to evaluate both operands in any order, even
concurrently), it has zero overhead compared to the sequential
"or else" (not counting the time required to evaluate the operands
themselves). If we compare that with the overhead of creating tasks, we
see a huge difference, both in CPU cycles and in mental effort.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
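P.S. For the archive, Pascal's task-per-job approach can be sketched in
a few lines of Ada. This is only a minimal illustration, not code from
the thread: the names (Worker, Counter), the job count N, and the dummy
"computation" (a Put_Line) are all made up for the example.

```ada
with Ada.Text_IO;

procedure Embarrassing is
   N : constant := 4;  -- e.g. the number of processors (assumption)

   --  A protected object hands out independent job numbers, so the
   --  workers need no further coordination.
   protected Counter is
      procedure Next (Job : out Natural);
   private
      Current : Natural := 0;
   end Counter;

   protected body Counter is
      procedure Next (Job : out Natural) is
      begin
         Current := Current + 1;
         Job := Current;
      end Next;
   end Counter;

   task type Worker;
   task body Worker is
      Job : Natural;
   begin
      Counter.Next (Job);
      --  Each task computes one fully independent job; a real worker
      --  would loop, pulling job numbers until none remain.
      Ada.Text_IO.Put_Line ("Computing job" & Natural'Image (Job));
   end Worker;

   Workers : array (1 .. N) of Worker;  -- tasks activate at "begin"
begin
   null;  -- the main procedure waits here for all workers to finish
end Embarrassing;
```

No data decomposition, no low-level parallelization of the algorithm
itself: one ordinary task per job, which is exactly the point above.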