From mboxrd@z Thu Jan  1 00:00:00 1970
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on polar.synack.me
X-Spam-Level:
X-Spam-Status: No, score=-0.3 required=5.0 tests=BAYES_00,
	REPLYTO_WITHOUT_TO_CC autolearn=no autolearn_force=no version=3.4.4
X-Google-Thread: 103376,292c095d622af1d0
X-Google-NewGroupId: yes
X-Google-Attributes: gida07f3367d7,domainid0,public,usenet
X-Google-Language: ENGLISH,ASCII-7-bit
Received: by 10.68.191.225 with SMTP id hb1mr8917762pbc.5.1337237425898;
	Wed, 16 May 2012 23:50:25 -0700 (PDT)
Path: pr3ni8134pbb.0!nntp.google.com!news2.google.com!news.glorb.com!aioe.org!.POSTED!not-for-mail
From: "Dmitry A. Kazakov"
Newsgroups: comp.lang.ada
Subject: Re: basic question on Ada tasks and running on different cores
Date: Thu, 17 May 2012 08:50:18 +0200
Organization: cbb software GmbH
Message-ID:
References: <30585369.219.1336470732142.JavaMail.geo-discussion-forums@ynbq3>
Reply-To: mailbox@dmitry-kazakov.de
NNTP-Posting-Host: 4RFYTQ6jM/dAKFJoI0fUkg.user.speranza.aioe.org
Mime-Version: 1.0
X-Complaints-To: abuse@aioe.org
User-Agent: 40tude_Dialog/2.0.15.1
X-Notice: Filtered by postfilter v. 0.8.2
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Date: 2012-05-17T08:50:18+02:00
List-Id:

On Wed, 16 May 2012 18:06:49 -0700, Jeffrey Carter wrote:

> On 05/16/2012 05:11 PM, Randy Brukardt wrote:
>>
>> The problem is, if you're trying to implement fine-grained parallelism,
>> you have to surround that code with some sort of scheduling mechanism,
>> and that overhead means you aren't going to get anywhere near 100% of
>> the CPU at any point.
>
> The assumption here is that there are more tasks/threads/parallel things
> than there are processors/cores. That's generally true now, but the way
> things are going, it may not be true in the not-too-distant future
> (1-atom transistors could make for a lot of cores). When you have a core
> per task you no longer need scheduling.

You will still have synchronization issues to handle: to real time (e.g.
delay until) and to other tasks. I assume that memory will not be shared,
because it would evidently become a bottleneck with thousands of cores.

Now, if processors (not cores) are synchronized by data exchange over some
topology of connections, that would be the transputer reborn. In the heyday
of the transputer there were switches to connect transputer links
dynamically, but they were slow and expensive. Very likely the same
problems, and worse, would emerge with architectures of thousands of
processors. The topology of the mesh will be largely fixed, with only a few
links left to reconnect dynamically. Scheduling will remain, but the
objective will be to choose a processor whose physical connections satisfy
most of the synchronization constraints of the task it is to run. That
might be a very different kind of parallel programming from the one we know
today.

But I agree with Randy that fine-grained parallelism will never make it,
whatever architecture comes.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
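
A minimal sketch of the "core per task" idea discussed above, assuming Ada
2012: System.Multiprocessors and the CPU aspect pin one task to each
available core, and "delay until" provides the synchronization to real time
mentioned in the reply. The unit and task names (Core_Per_Task, Worker,
Period) are illustrative only, not taken from the thread.

--  Sketch only: one task per core, pinned with the CPU aspect,
--  paced against real time with "delay until" (Ada 2012).

with Ada.Real_Time;          use Ada.Real_Time;
with Ada.Text_IO;
with System.Multiprocessors; use System.Multiprocessors;

procedure Core_Per_Task is

   Period : constant Time_Span := Milliseconds (100);

   --  One task per core; the CPU aspect may refer to a discriminant
   task type Worker (Core : CPU) with CPU => Core;

   task body Worker is
      Next : Time := Clock + Period;
   begin
      for Step in 1 .. 3 loop
         delay until Next;   --  synchronization to real time
         --  Output from different cores may interleave; this is a demo only
         Ada.Text_IO.Put_Line
           ("CPU" & CPU'Image (Core) & ", step" & Integer'Image (Step));
         Next := Next + Period;
      end loop;
   end Worker;

   --  As many workers as there are cores: no scheduling decisions left,
   --  only synchronization
   Workers : array (1 .. Number_Of_CPUs) of access Worker;

begin
   for N in Workers'Range loop
      Workers (N) := new Worker (Core => N);
   end loop;
end Core_Per_Task;

With one task per core there is indeed nothing left to schedule in this
sketch; what remains is exactly the synchronization to real time and, via
protected objects or rendezvous, to other tasks.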