comp.lang.ada
From: Charles Hixson <charleshixsn@earthlink.net>
Subject: Re: asynchronous task communication
Date: Wed, 02 Jan 2013 17:30:02 -0800
Message-ID: <gs-dnZVkvaocfXnNnZ2dnUVZ_sydnZ2d@earthlink.com>
In-Reply-To: <akjrheF1fqhU1@mid.individual.net>

On 01/02/2013 02:43 PM, Niklas Holsti wrote:
> On 13-01-01 05:51 , Charles Hixson wrote:
>> On 12/31/2012 01:09 PM, Niklas Holsti wrote:
>>> On 12-12-31 20:52 , Charles Hixson wrote:
>>>> Thank you for your answer.
>>>>
>>>> The problem with that solution is that I've been advised that I
>>>> shouldn't have many more tasks than available cores,
>>>
>>> IMO that advice is useful only if your application logically needs fewer
>>> tasks than you have cores, but you want to increase the number of tasks
>>> in order to have more cores running in parallel
>     ....
>>> But if your application logically needs more tasks than there are cores,
>>> because of the way tasks communicate and interact, there is absolutely
>>> no reason to limit the number of tasks to the number of cores, or
>>> anything related to the number of cores.
>>>
>> My application logically "needs" thousands or millions of tasks.
>
> Hmm. I'm not sure that I agree with you, more below.
>
>> But this is clearly impractical, so I'm trying to work around it.
>
> A few thousands could be practical, but I agree that a million tasks is
> not. Even if the scheduling could work, the stacks will need a huge
> amount of RAM.
>
>> The application is clearly going to be "communication bound".
>
> I'm not sure about that, either...
>
>> OTOH, even
>> with millions of separate cores, I'd still need asynchronous communication.
>>
>> FWIW, you can think of what I'm doing as a neural network simulation.
>> There are reasons why that isn't quite accurate, but it's close.  And
>> each message is likely (but not certain) to get a reply and may also
>> cause further messages to be sent on.
>
> That sounds more like discrete event simulation, simulating a network of
> interacting automata with discrete states and state transitions. It
> doesn't sound so much like an analog neural network.
It's clearly discrete simulation, but what is being simulated are cells 
that are acting vaguely like neurons.  (But the accent *is* on vaguely).
>
>> Occasionally new nodes will be
>> added to the net.  So logically, each "cell" should be a separate task,
>
> There is always a choice of two representations for an automaton: as an
> algorithm (i.e. a task body), or as a data object. The former represents
> the automaton's state as a point in the algorithm (i.e. program counter
> + call stack + local variables), the latter as a state identifier (an
> enumeration) and perhaps some additional variables (if the automaton
> model has this concept).
Yes, but what I'm doing is modeling the interactions of the cells.  I 
expect them to act in a certain way. (Though there are enough "gosh 
numbers" that I'm going to be doing a whole lot of "try and find out". 
Probably enough that I'll need to set up an environment where different 
versions can compete for the best solution, and breed the best to get a 
better answer.)  OTOH, each cell is going to be maintaining a complex 
state, so that may not work out either.
>
> If you have several automata, and represent them as algorithms, and the
> automata change states at different, scattered times, then it is almost
> necessary to implement them as tasks -- independently progressing
> threads of control.
>
> In your case, I suspect (but of course I'm not sure) that the logical
> states of an individual automaton (a network node) are not complex
> enough to merit representing as an algorithm, so the representation as a
> data object seems equally valid. Which is why I'm not sure that your
> application "needs" millions of tasks, even logically (but this logic is
> subjective).
They probably aren't that complex.  Close, though.  I expect them to use 
a common algorithm, but they'll have different input histories, which 
will create a complex state.  Actually, one set of them will be 
"directly connected to external stimuli", and the goal is to predict 
what the next stimulus will be.
>
>> but even so asynchronous communication is needed to avoid deadlock.
>
> Very likely, if the network nodes would be implemented as tasks. But
> this depends on how your simulation handles time. Do you have a global
> concept of time-step or cycle, in which all network nodes progress by
> one elementary step or action, or perhaps stay in the same state if they
> receive no messages? Or do all nodes run "in parallel" with arbitrary
> relative speeds?
The mailbox design sort of coerces a semi-global concept of time.  Each 
thread will run at whatever speed it can manage, but it's going to be 
cycling through a subset of the cells, and different cells will process 
different inputs at different speeds.  So with the current design there 
will be a thread level concept of time.  I don't see any advantage in 
slowing down the fast threads to synchronize with the slow ones, though 
I may try it by implementing a "cycle number", where everyone needs to 
agree on the cycle before they can start a new one.  That would produce 
a global synchronization as a sort of time.
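If I do try the cycle-number idea, a minimal sketch of such a barrier as a protected object might look like this (N_Workers and all other names are illustrative, not anything from the discussion; note that Ada 2012 also provides Ada.Synchronous_Barriers for exactly this):

```ada
N_Workers : constant := 4;  --  illustrative

protected Cycle_Barrier is
   entry Wait;   --  each worker calls this at the end of its cycle
private
   entry Release;
   Arrived : Natural := 0;
   Open    : Boolean := False;
end Cycle_Barrier;

protected body Cycle_Barrier is

   entry Wait when not Open is
   begin
      Arrived := Arrived + 1;
      if Arrived < N_Workers then
         requeue Release;          --  park until the last worker arrives
      else
         Arrived := Arrived - 1;   --  last arrival passes straight through
         Open    := Arrived > 0;   --  and opens the gate for the rest
      end if;
   end Wait;

   entry Release when Open is
   begin
      Arrived := Arrived - 1;
      Open    := Arrived > 0;      --  last one out closes the gate
   end Release;

end Cycle_Barrier;
```

Each worker would call Cycle_Barrier.Wait after stepping all of its cells, so nobody starts cycle N+1 until everyone has finished cycle N.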
>
> Another question: do you expect the whole network to converge to some
> stable state (fixed point), as for example in the iterative solution of
> a partial differential equation? Or do you expect the network to
> continue working and changing, and the interesting result is this
> dynamic activity rather than some stable final state?
The intent is to provide it with a continuing set of inputs that it will 
continually try to predict.  It will be basically impossible to achieve 
real accuracy, but only probabilistic accuracy.  (I'm planning on 
feeding it strings of text segmented into "words". [Note the quotes. 
They'll be wordish sort of things, but I may merge trailing punctuation 
onto the words, I haven't decided.])
>
> In the former case (iteration to stability), I would expect (by analogy
> with PDEs) that the precise relative execution speeds of the network
> nodes or cells is less important, and could even be pseudo-random, as
> long as all nodes make positive progress. In the latter case (iteration
> to simulate dynamic behaviour) I assume you need some global notion of
> time and cycles.
I think I may need a direction of time, but I don't think I *need* 
anything more explicit.  The implementation I'm contemplating, however, 
provides that automatically.
>
>> Since I can't model cells as tasks, I'm planning on modeling them as
>> protected variables.  Even this may be too limiting, because of the
>> requirement for asynchronous communication.  Particularly since
>> protected variables are passive, rather than active, so each one is
>> going to need to wait for a task to activate it.
>
> It sounds like each cell implements more or less the same automaton
> (state transitions), assuming that all the protected objects (variables)
> are instances of the same protected type.
>
> If so, it seems to me that this system is a good candidate for the
> "worker task pool" approach. A "job" would be the execution of one state
> transition (message reception?) for one cell.
>
> Would this kind of design make some sense:
>
> - Define a record type to represent the state of one cell (including its
> connections to other cells). This does not have to be a protected type.
That's interesting.  I had supposed that it did need to be protected. 
But now that I think about it, all interactions with other cells are 
going to be via its mailbox (given my current design), so I guess that 
it really wouldn't need to be protected.  That's probably just a 
holdover from an earlier design that didn't work out.
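As a sketch of what such an unprotected cell record might look like (all names and state components are illustrative placeholders, not a real design):

```ada
package Cells is

   Max_Links : constant := 16;  --  illustrative bound

   type Cell_Id    is new Positive;
   type Link_Count is range 0 .. Max_Links;
   type Link_List  is array (1 .. Max_Links) of Cell_Id;

   --  A plain, unprotected record: only the owning worker task
   --  reads or writes it directly; all cross-task traffic goes
   --  through the cell's mailbox instead.
   type Cell_State is record
      Activation : Float      := 0.0;
      N_Links    : Link_Count := 0;
      Links      : Link_List;
   end record;

end Cells;
```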
>
> - Create as many worker tasks as there are processor cores.
Check.
>
> - Allocate a subset of the cells to each worker task.
Check.
>
> - Each worker task executes the state transitions and actions of all the
> cells allocated to this task. The task maintains a queue of messages to
> these cells (a "work list").
Is there some reason the mailbox shouldn't be protected, and attached to 
each cell?  I want each cell to have its own mailbox, which is a 
protected structure.  Cells from other tasks, as well as from this task, 
will be posting messages there.
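A per-cell mailbox along those lines could be sketched as a bounded protected type (all names illustrative; the point is that Post is a protected *procedure*, so a sender only contends briefly for the lock rather than waiting on the owning task):

```ada
package Cell_Mailboxes is

   Capacity : constant := 64;  --  illustrative bound

   type Message is record
      Sender : Positive := 1;
      Signal : Integer  := 0;
      --  no recipient field: the mailbox itself identifies the cell
   end record;

   type Message_Array is array (1 .. Capacity) of Message;

   protected type Mailbox is
      --  Never blocks for long: a protected procedure only waits
      --  for the object's lock, not for a rendezvous.
      procedure Post (M : Message; Accepted : out Boolean);
      --  The owning worker empties the box when it activates the cell.
      procedure Drain (Into : out Message_Array; Count : out Natural);
   private
      Box  : Message_Array;
      Used : Natural := 0;
   end Mailbox;

end Cell_Mailboxes;

package body Cell_Mailboxes is

   protected body Mailbox is

      procedure Post (M : Message; Accepted : out Boolean) is
      begin
         Accepted := Used < Capacity;
         if Accepted then
            Used := Used + 1;
            Box (Used) := M;
         end if;
      end Post;

      procedure Drain (Into : out Message_Array; Count : out Natural) is
      begin
         Count := Used;
         Into (1 .. Used) := Box (1 .. Used);
         Used := 0;
      end Drain;

   end Mailbox;

end Cell_Mailboxes;
```

Making Post reject when full (rather than an entry that blocks) also gives a natural hook for limiting how many messages a cell accepts per cycle.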
>
> - When a cell sends a message to another cell allocated to the same
> task, the message is simply appended to the message queue of that task.
> The queue does not need to be protected against concurrent access,
> because this queue is accessed only by this task.
Check.  But cells from the same task can still send messages to the cell.
>
> - When a cell sends a message to another cell allocated to another task,
> the message is appended to a queue of "incoming" messages, which is a
> protected object associated with that other task.
The problem is that the task that contains the cell may be busy, at 
which point a task-attached mailbox would block, even though posting the 
message would be a very quick action.  And if the thread were busy for 
very long, the entire system might slow down considerably.

>
> - Each worker task takes and processes messages from its "incoming"
> queue at suitable times, depending on the presence or absence of a
> global time cycle.
It's my intention that when the thread activates a cell, the first thing 
the cell should do is read its incoming mail.
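That "read mail first, then act" order would give a worker skeleton roughly like this (Cell_Index, Drain_Mail, Step_Cell, and Shutdown_Requested are assumed helpers, purely illustrative):

```ada
task type Worker (First, Last : Cell_Index);

task body Worker is
begin
   while not Shutdown_Requested loop
      for C in First .. Last loop
         Drain_Mail (C);   --  empty the cell's mailbox first
         Step_Cell  (C);   --  then fire, possibly spontaneously
      end loop;
   end loop;
end Worker;
```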
>
> Since all communication is internal, the only reason for a worker task
> to be idle is if it happens that none of its cells has received any
> message, which seems unlikely to happen if there are millions of cells.
> So this design would be computation-bound, not communication-bound.
Even if a cell hasn't received any input, there is a possibility (random 
chance) that it might spontaneously activate.  This will be influenced 
by its current activation level.
>
> If the incoming-message queues become a contention bottleneck, there
> could be a separate incoming-message queue for each pair of (sender
> task, receiver task), or perhaps a lock-free queue implementation would
> help.
>
>> But this appears to be
>> quite likely to interfere with normal execution.  The classic answer is
>> to have two copies of the network and to cycle through one, copying it
>> and state changes into the second.  Then, at the end of a cycle, reverse
>> it.  This requires the whole thing to be copied in each cycle, even the
>> parts that haven't changed in this cycle.
>
> You can put the two copies of the state in each record object that
> represents a cell, together with a per-cell indication of which copy is
> "current". If the cell does not change its state, the current-state
> indicator is unchanged; nothing need be copied. If the cell changes its
> state, some data are changed, but at least the old and new states are
> close to each other, and perhaps in the same cache lines, or at least in
> the same VM page, which should reduce the cost of the copy.
>
The mailbox solution occurred to me after that was written.  If it would 
work, as it seems to me that it should, then this isn't needed, as the 
mailbox would contain all "change the state" information.  (Now I just 
need to design a message format that will handle all cases.)

OTOH, if the mailbox needs to be attached per thread rather than per 
cell, then this may well still be necessary.  (But the state is most of 
what *is* the cell, so that's almost like copying everything...except, 
of course, for the ones that didn't change.  But perhaps a thread- 
attached mailbox could also hold only the state deltas.)
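If I do fall back to Niklas's two-copies-in-one-record scheme, it might look like this (illustrative names; both copies sit side by side in the record, which is his cache-locality point):

```ada
type Copy_Index is mod 2;

type Cell_Core is record
   Activation : Float := 0.0;
   --  ...whatever else makes up the state (illustrative)
end record;

type Core_Pair is array (Copy_Index) of Cell_Core;

type Cell is record
   States  : Core_Pair;        --  old and new copies, adjacent in memory
   Current : Copy_Index := 0;  --  which copy is "current"
end record;

--  To change state: write into C.States (C.Current + 1), then flip
--  the indicator; an unchanged cell copies nothing at all.
procedure Commit (C : in out Cell) is
begin
   C.Current := C.Current + 1;  --  mod 2 wraps, so this toggles
end Commit;
```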

I guess a part of the question is "What is the overhead on a protected 
variable?".  If it's small, then I'd prefer to have one attached to 
every cell, as it eliminates one centralized spot of contention, and 
means that slightly simpler messages can be passed.  (They don't need to 
hold the intended recipient.  It's only one access variable and one 
integer/message, but that would add up.)

Also, if I decide to limit the number of messages that a cell can 
receive in a cycle, then a cell-attached mailbox would make this easier.


