comp.lang.ada
From: "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de>
Subject: Re: Loops and parallel execution
Date: Wed, 26 Jan 2011 10:04:07 +0100
Message-ID: <18u0kz3rdeadl.19kvrw0a9kqd2$.dlg@40tude.net>
In-Reply-To: 4d3f4985$0$6774$9b4e6d93@newsspool3.arcor-online.net

On Tue, 25 Jan 2011 23:07:01 +0100, Georg Bauhaus wrote:

> On 1/25/11 10:32 PM, Dmitry A. Kazakov wrote:

>> Occam's par-statement could be a better candidate, but I don't see how
>> this could be useful under a modern general-purpose OS with their
>> "vertical" parallelism, when each task is assigned to one core.  The thing
>> you propose is "horizontal" parallelism, when a task/process would run on
>> all cores simultaneously. Inmos' Occam ran under no true OS, and the
>> processor architecture was well suited for such ad-hoc parallelism. Modern
>> processors are very different from T805 and I doubt that they would allow
>> an efficient implementation of this.
> 
> I have recently seen small boards carrying one processor each
> that could be connected to one another on all sides, IIRC.
> What matters then is, I think, the efficiency of
> (a) the distribution of small computations, and
> (b) the delivery of results at some nodes.

The Parix OS (actually a monitor) did that. E.g. if you called, say,
"printf" on a node which didn't have a direct link to the server (the
server was an MS-DOS PC or a Solaris workstation), the output was
routed to the node connected to the server and from there to the server,
which printed it.

> Is it therefore so unthinkable to have something like a transputer
> these days?

I saw them too. BTW, they are in some sense a step back compared with the
level Inmos had reached before its fall. Inmos introduced a programmable
TP-link switch, so that you could reconnect the network of transputers on
the fly.

But the problem is that I really see no use for the par-statement or
anything like it. The main argument against par is that using threads
causes too much overhead. If that argument stands, i.e. if you don't have
very long code alternatives running in parallel for seconds, then using a
mesh of processors would only make things worse. The overhead of
distributing the code and data over the mesh of processors is much bigger
than doing the same on a machine with shared memory (multi-core). There
certainly exist examples of long independent code alternatives, but I
would say that most of them are contrived or marginal.
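
Just to make the overhead argument concrete, here is a minimal sketch
(all names are made up for the example) of what a par-like construct
boils down to in plain Ada today: each alternative becomes a task
declared in a block, and the block cannot complete before both tasks
terminate, so every alternative pays the full cost of creating and
finalizing a thread.

   with Ada.Text_IO;

   procedure Par_Sketch is
   begin
      declare
         --  Each alternative of the would-be "par" becomes its own task;
         --  the names are purely illustrative.
         task Alternative_1;
         task Alternative_2;

         task body Alternative_1 is
         begin
            Ada.Text_IO.Put_Line ("first alternative");
         end Alternative_1;

         task body Alternative_2 is
         begin
            Ada.Text_IO.Put_Line ("second alternative");
         end Alternative_2;
      begin
         null;  --  the block waits here until both tasks terminate
      end;
   end Par_Sketch;

Unless the two alternatives run for a long time, that per-task cost, and
on a mesh of processors the cost of shipping code and data to the nodes,
dominates whatever is gained by running them concurrently.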

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



Thread overview: 22+ messages
2011-01-25 15:40 Loops and parallel execution Georg Bauhaus
2011-01-25 16:37 ` Dmitry A. Kazakov
2011-01-25 17:36   ` Georg Bauhaus
2011-01-25 17:38     ` Georg Bauhaus
2011-01-25 21:32     ` Dmitry A. Kazakov
2011-01-25 22:07       ` Georg Bauhaus
2011-01-26  1:31         ` Yannick Duchêne (Hibou57)
2011-01-26  9:04         ` Dmitry A. Kazakov [this message]
2011-01-26  1:06       ` Yannick Duchêne (Hibou57)
2011-01-26 10:08         ` Dmitry A. Kazakov
2011-01-31 13:01         ` Paul Colin Gloster
2011-02-06 20:06           ` Yannick Duchêne (Hibou57)
2011-02-07 11:43             ` Nicholas Paul Collin Gloster
2011-01-26  8:46 ` Egil Høvik
2011-01-26 10:47   ` Georg Bauhaus
2011-02-14 23:27     ` Tuck
2011-02-15 21:10       ` Georg Bauhaus
2011-01-26 11:29 ` Peter C. Chapin
2011-01-26 21:57 ` Randy Brukardt
2011-01-27 23:01   ` tmoran
2011-01-29  0:23     ` Randy Brukardt
2011-02-06 20:10       ` Yannick Duchêne (Hibou57)