From: Shark8
Newsgroups: comp.lang.ada
Subject: Re: basic question on Ada tasks and running on different cores
Date: Tue, 15 May 2012 15:54:05 -0700 (PDT)

On Tuesday, May 15, 2012 1:55:01 AM UTC-5, Randy Brukardt wrote:
> clearly, there are many applications where the performance isn't that
> critical or even where the added cost might actually provide some speed-up.

Indeed, though this is really just a recapitulation of the theme "work smarter, not harder".
If we might draw a parallel, this is to parallelism what basic techniques were to data structures (i.e. the very early days of CS). One day somebody realized, "if I sort the things in this list, then finding them will be faster," and so we got all sorts of sort algorithms and studies on their properties and the like.

And that's where we are WRT parallelism: we've got people looking into everything from the instruction level to the task/program level and trying to figure out the best way to do things. (But here the problem is more difficult because, in addition to the amount of parallelization, the different granularities interact with the evaluation metrics, and we're only just beginning to see to what extent.)

But, in the end, I think you're right: intuitively, the fine-grained approach should be at a disadvantage because there will likely be more overhead due to the synchronization/data-payload ratio. {We may be wrong, though; sometimes intuition leads astray, and even slight improvements in such fields may lead to better compilers and/or OSes.}
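To make the granularity point concrete, here's a rough sketch of my own (names and numbers are made up, not from the thread): a coarse-grained parallel summation where each task accumulates locally and touches the shared protected object exactly once. The fine-grained alternative, noted in the comments, would pay one protected call per element, which is where the synchronization/data-payload ratio bites.

```ada
--  Hypothetical sketch: coarse-grained summation in which each task
--  synchronizes once, versus the fine-grained alternative (see the
--  loop comment) that would synchronize on every element.
with Ada.Text_IO;

procedure Granularity_Sketch is

   N : constant := 1_000_000;

   --  Shared accumulator; each protected call is a synchronization point.
   protected Accumulator is
      procedure Add (X : in Long_Long_Integer);
      function Value return Long_Long_Integer;
   private
      Sum : Long_Long_Integer := 0;
   end Accumulator;

   protected body Accumulator is
      procedure Add (X : in Long_Long_Integer) is
      begin
         Sum := Sum + X;
      end Add;

      function Value return Long_Long_Integer is
      begin
         return Sum;
      end Value;
   end Accumulator;

   task type Worker (First, Last : Natural);

   task body Worker is
      Local : Long_Long_Integer := 0;
   begin
      for I in First .. Last loop
         --  Coarse-grained: accumulate locally, no synchronization here.
         --  A fine-grained version would instead call
         --  Accumulator.Add (Long_Long_Integer (I)) inside this loop,
         --  paying one protected call per element.
         Local := Local + Long_Long_Integer (I);
      end loop;
      Accumulator.Add (Local);  --  one synchronization per task
   end Worker;

begin
   declare
      A : Worker (1, N / 2);
      B : Worker (N / 2 + 1, N);
   begin
      null;  --  the master waits here for A and B to complete
   end;
   Ada.Text_IO.Put_Line
     (Long_Long_Integer'Image (Accumulator.Value));  --  N*(N+1)/2
end Granularity_Sketch;
```

Whether the runtime actually spreads those two tasks across cores is up to the compiler/OS, of course; the point is only that the coarse version's synchronization cost is O(tasks) while the fine version's is O(elements).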