comp.lang.ada
From: Niklas Holsti <niklas.holsti@tidorum.invalid>
Subject: Re: asynchronous task communication
Date: Thu, 03 Jan 2013 00:43:58 +0200
Message-ID: <akjrheF1fqhU1@mid.individual.net>
In-Reply-To: <7cudnYloBfQDw3_NnZ2dnUVZ_rKdnZ2d@earthlink.com>

On 13-01-01 05:51 , Charles Hixson wrote:
> On 12/31/2012 01:09 PM, Niklas Holsti wrote:
>> On 12-12-31 20:52 , Charles Hixson wrote:
>>> Thank you for your answer.
>>>
>>> The problem with that solution is that I've been advised that I
>>> shouldn't have many more tasks than available cores,
>>
>> IMO that advice is useful only if your application logically needs fewer
>> tasks than you have cores, but you want to increase the number of tasks
>> in order to have more cores running in parallel
   ....
>> But if your application logically needs more tasks than there are cores,
>> because of the way tasks communicate and interact, there is absolutely
>> no reason to limit the number of tasks to the number of cores, or
>> anything related to the number of cores.
>>
> My application logically "needs" thousands or millions of tasks.

Hmm. I'm not sure that I agree with you; more below.

> But this is clearly impractical, so I'm trying to work around it.

A few thousand tasks could be practical, but I agree that a million is
not. Even if the scheduling could cope, the stacks would need a huge
amount of RAM.

> The application is clearly going to be "communication bound".

I'm not sure about that, either...

> OTOH, even
> with millions of separate cores, I'd still need asynchronous communication.
> 
> FWIW, you can think of what I'm doing as a neural network simulation.
> There are reasons why that isn't quite accurate, but it's close.  And
> each message is likely (but not certain) to get a reply and may also
> cause further messages to be sent on.

That sounds more like discrete event simulation, simulating a network of
interacting automata with discrete states and state transitions. It
doesn't sound so much like an analog neural network.

> Occasionally new nodes will be
> added to the net.  So logically, each "cell" should be a separate task,

There is always a choice of two representations for an automaton: as an
algorithm (i.e. a task body), or as a data object. The former represents
the automaton's state as a point in the algorithm (i.e. program counter
+ call stack + local variables), the latter as a state identifier (an
enumeration) and perhaps some additional variables (if the automaton
model has this concept).

If you have several automata, and represent them as algorithms, and the
automata change states at different, scattered times, then it is almost
necessary to implement them as tasks -- independently progressing
threads of control.

In your case, I suspect (but of course I'm not sure) that the logical
states of an individual automaton (a network node) are not complex
enough to merit representing as an algorithm, so the representation as a
data object seems equally valid. Which is why I'm not sure that your
application "needs" millions of tasks, even logically (but this logic is
subjective).

> but even so asynchronous communication is needed to avoid deadlock.

Very likely, if the network nodes would be implemented as tasks. But
this depends on how your simulation handles time. Do you have a global
concept of time-step or cycle, in which all network nodes progress by
one elementary step or action, or perhaps stay in the same state if they
receive no messages? Or do all nodes run "in parallel" with arbitrary
relative speeds?

Another question: do you expect the whole network to converge to some
stable state (fixed point), as for example in the iterative solution of
a partial differential equation? Or do you expect the network to
continue working and changing, and the interesting result is this
dynamic activity rather than some stable final state?

In the former case (iteration to stability), I would expect (by analogy
with PDEs) that the precise relative execution speeds of the network
nodes or cells are less important, and could even be pseudo-random, as
long as all nodes make positive progress. In the latter case (iteration
to simulate dynamic behaviour) I assume you need some global notion of
time and cycles.

> Since I can't model cells as tasks, I'm planning on modeling them as
> protected variables.  Even this may be too limiting, because of the
> requirement for asynchronous communication.  Particularly since
> protected variables are passive, rather than active, so each one is
> going to need to wait for a task to activate it.

It sounds like each cell implements more or less the same automaton
(state transitions), assuming that all the protected objects (variables)
are instances of the same protected type.

If so, it seems to me that this system is a good candidate for the
"worker task pool" approach. A "job" would be the execution of one state
transition (message reception?) for one cell.

Would this kind of design make some sense:

- Define a record type to represent the state of one cell (including its
connections to other cells). This does not have to be a protected type.

- Create as many worker tasks as there are processor cores.

- Allocate a subset of the cells to each worker task.

- Each worker task executes the state transitions and actions of all the
cells allocated to this task. The task maintains a queue of messages to
these cells (a "work list").

- When a cell sends a message to another cell allocated to the same
task, the message is simply appended to the message queue of that task.
The queue does not need to be protected against concurrent access,
because this queue is accessed only by this task.

- When a cell sends a message to another cell allocated to another task,
the message is appended to a queue of "incoming" messages, which is a
protected object associated with that other task.

- Each worker task takes and processes messages from its "incoming"
queue at suitable times, depending on the presence or absence of a
global time cycle.

Since all communication is internal, the only reason for a worker task
to be idle is if it happens that none of its cells has received any
message, which seems unlikely to happen if there are millions of cells.
So this design would be computation-bound, not communication-bound.

If the incoming-message queues become a contention bottleneck, there
could be a separate incoming-message queue for each pair of (sender
task, receiver task), or perhaps a lock-free queue implementation would
help.

> But this appears to be
> quite likely to interfere with normal execution.  The classic answer is
> to have two copies of the network and to cycle through one, copying it
> and state changes into the second.  Then, at the end of a cycle, reverse
> it.  This requires the whole thing to be copied in each cycle, even the
> parts that haven't changed in this cycle.

You can put the two copies of the state in each record object that
represents a cell, together with a per-cell indication of which copy is
"current". If the cell does not change its state, the current-state
indicator is unchanged; nothing need be copied. If the cell changes its
state, some data are changed, but at least the old and new states are
close to each other, and perhaps in the same cache lines, or at least in
the same VM page, which should reduce the cost of the copy.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
      .      @       .


