comp.lang.ada
* Competing Servers
@ 2019-03-26 10:42 hnptz
  2019-03-26 13:11 ` Dmitry A. Kazakov
  0 siblings, 1 reply; 6+ messages in thread
From: hnptz @ 2019-03-26 10:42 UTC (permalink / raw)


Hi,
I want to consider problems that can be viewed as a search, for which only one solution is required, and which are suitable for a data-parallel approach, provided that the computation time is large enough to make the communication time negligible.

Assume we have s servers and n tasks; I may start with one server and n tasks. After a simple domain decomposition, each task should search only in its allocated sub-domain. When one of the tasks has found a solution, it should report it, all tasks should stop immediately, and the initiating program should terminate.
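To make this first scheme concrete, I picture something like the following minimal sketch (all names here are mine and hypothetical, and a toy predicate stands in for the real search):

```ada
with Ada.Text_IO;

procedure Competing_Search is

   --  First task to find a solution deposits it here; the others
   --  poll Stop_Requested and quit.
   protected Result is
      procedure Report (Value : Integer);      --  first caller wins
      function Stop_Requested return Boolean;
      entry Wait (Value : out Integer);        --  main blocks here
   private
      Found  : Boolean := False;
      Answer : Integer;
   end Result;

   protected body Result is
      procedure Report (Value : Integer) is
      begin
         if not Found then
            Found  := True;
            Answer := Value;
         end if;
      end Report;

      function Stop_Requested return Boolean is
      begin
         return Found;
      end Stop_Requested;

      entry Wait (Value : out Integer) when Found is
      begin
         Value := Answer;
      end Wait;
   end Result;

   --  Each worker searches its own sub-domain From .. To.
   task type Worker (From, To : Integer);

   task body Worker is
   begin
      for X in From .. To loop
         exit when Result.Stop_Requested;      --  another task won
         if X * X = 1764 then                  --  toy "solution" test
            Result.Report (X);
            exit;
         end if;
      end loop;
   end Worker;

   W1 : Worker (1, 100);                       --  domain decomposition
   W2 : Worker (101, 200);

   Solution : Integer;
begin
   Result.Wait (Solution);
   Ada.Text_IO.Put_Line ("Found:" & Integer'Image (Solution));
end Competing_Search;
```

Here the losing workers poll a flag and stop cooperatively rather than being aborted; `abort` or asynchronous transfer of control (`select ... then abort`) would be the heavier alternatives.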

A variant of the above would be to add a monitoring task which, after receiving a success message from one of the tasks, reports the result, stops all tasks immediately, and terminates.

An extended approach would be to use different tasks - e.g. with different search methods - in each of the sub-domains. I would then like to define an array of tasks (one for each search method) working on one sub-domain. All these tasks would then be connected to a collector task.

Is there any experience in this group with this competing-tasks aspect? Please comment and/or give hints for a solution in Ada.

montgrimpulo



* Re: Competing Servers
  2019-03-26 10:42 Competing Servers hnptz
@ 2019-03-26 13:11 ` Dmitry A. Kazakov
  2019-03-26 15:50   ` Anh Vo
  0 siblings, 1 reply; 6+ messages in thread
From: Dmitry A. Kazakov @ 2019-03-26 13:11 UTC (permalink / raw)


On 2019-03-26 11:42, hnptz@yahoo.de wrote:

> I want to consider problems that can be viewed as a search, for which only one solution is required, and which are suitable for a data-parallel approach, provided that the computation time is large enough to make the communication time negligible.
> 
> Assume we have s servers and n tasks; I may start with one server and n tasks. After a simple domain decomposition, each task should search only in its allocated sub-domain. When one of the tasks has found a solution, it should report it, all tasks should stop immediately, and the initiating program should terminate.
> 
> A variant of the above would be to add a monitoring task which, after receiving a success message from one of the tasks, reports the result, stops all tasks immediately, and terminates.

The usual design is a pool of worker tasks. A worker task takes jobs 
from a queue controlled by a protected object. The task never 
terminates; it just waits for another job. Cancellation of a job is 
likewise done through a protected object: the worker simply checks 
periodically whether its current job has been cancelled. The check 
propagates an exception, which unwinds the stack, finalizing all local 
objects, back to the loop in the task body where the task accepts the 
next job.
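A minimal sketch of this pattern (all names hypothetical; a single-slot 
"queue" and a poison-pill job keep it short):

```ada
with Ada.Text_IO;

procedure Worker_Pool_Sketch is

   Job_Cancelled : exception;

   --  Protected object controlling the job queue and cancellation.
   protected Control is
      entry Put (Job : Integer);           --  enqueue a job
      entry Get (Job : out Integer);       --  worker blocks here
      procedure Cancel;                    --  revoke the running job
      procedure Check;                     --  raises if job revoked
   private
      Slot      : Integer;
      Full      : Boolean := False;
      Cancelled : Boolean := False;
   end Control;

   protected body Control is
      entry Put (Job : Integer) when not Full is
      begin
         Slot := Job;
         Full := True;
      end Put;

      entry Get (Job : out Integer) when Full is
      begin
         Job       := Slot;
         Full      := False;
         Cancelled := False;               --  fresh job, fresh state
      end Get;

      procedure Cancel is
      begin
         Cancelled := True;
      end Cancel;

      procedure Check is
      begin
         if Cancelled then
            raise Job_Cancelled;
         end if;
      end Check;
   end Control;

   task Worker;

   task body Worker is
      Job : Integer;
   begin
      loop
         Control.Get (Job);
         exit when Job = 0;                --  poison pill: shut down
         begin
            loop                           --  stand-in for the solver
               Control.Check;              --  periodic cancellation check
               delay 0.01;                 --  stand-in for real work
            end loop;
         exception
            when Job_Cancelled =>          --  stack unwound to here
               Ada.Text_IO.Put_Line
                 ("Job" & Integer'Image (Job) & " cancelled");
         end;
      end loop;
   end Worker;

begin
   Control.Put (1);       --  hand the worker a job
   delay 0.1;
   Control.Cancel;        --  revoke it; the worker's Check will raise
   Control.Put (0);       --  poison pill terminates the worker
end Worker_Pool_Sketch;
```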

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


* Re: Competing Servers
  2019-03-26 13:11 ` Dmitry A. Kazakov
@ 2019-03-26 15:50   ` Anh Vo
  2019-03-26 17:03     ` Dmitry A. Kazakov
  0 siblings, 1 reply; 6+ messages in thread
From: Anh Vo @ 2019-03-26 15:50 UTC (permalink / raw)


On Tuesday, March 26, 2019 at 6:11:12 AM UTC-7, Dmitry A. Kazakov wrote:
> On 2019-03-26 11:42, hnptz@yahoo.de wrote:
> 
> > [...]
> 
> The usual design is a pool of worker tasks. A worker task takes jobs 
> from a queue controlled by a protected object. The task never 
> terminates; it just waits for another job. Cancellation of a job is 
> likewise done through a protected object: the worker simply checks 
> periodically whether its current job has been cancelled. The check 
> propagates an exception, which unwinds the stack, finalizing all local 
> objects, back to the loop in the task body where the task accepts the 
> next job.

Why is an exception involved in the checking?

Anh Vo



* Re: Competing Servers
  2019-03-26 15:50   ` Anh Vo
@ 2019-03-26 17:03     ` Dmitry A. Kazakov
  2019-04-01 19:21       ` Anh Vo
  0 siblings, 1 reply; 6+ messages in thread
From: Dmitry A. Kazakov @ 2019-03-26 17:03 UTC (permalink / raw)


On 2019-03-26 16:50, Anh Vo wrote:
> On Tuesday, March 26, 2019 at 6:11:12 AM UTC-7, Dmitry A. Kazakov wrote:
>> [...]
> 
> Why is an exception involved in the checking?

It is an easy way to roll back from a deeply nested call. Considering 
that a job solver may loop through many iterations, possibly making 
recursive calls, threading a return value through all of them could be 
quite complicated.
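A toy illustration of the difference (hypothetical names; a simple 
counter stands in for the external cancellation request):

```ada
with Ada.Text_IO;

procedure Unwind_Demo is

   Cancelled_Error : exception;

   Budget : Natural := 5;     --  stand-in for an external cancel request

   procedure Check is
   begin
      if Budget = 0 then
         raise Cancelled_Error;   --  unwinds every frame below at once
      end if;
      Budget := Budget - 1;
   end Check;

   procedure Solve (Depth : Positive) is
   begin
      Check;                      --  checkpoint: raises once cancelled
      --  With a status code instead, this call and every loop around
      --  it would need "if not Solve (...) then return False; end if".
      if Depth < 100 then         --  stand-in for a deep recursive search
         Solve (Depth + 1);
      end if;
   end Solve;

begin
   Solve (1);
exception
   when Cancelled_Error =>
      Ada.Text_IO.Put_Line ("rolled back through all recursive calls");
end Unwind_Demo;
```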

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



* Re: Competing Servers
  2019-03-26 17:03     ` Dmitry A. Kazakov
@ 2019-04-01 19:21       ` Anh Vo
  2019-04-01 19:58         ` Dmitry A. Kazakov
  0 siblings, 1 reply; 6+ messages in thread
From: Anh Vo @ 2019-04-01 19:21 UTC (permalink / raw)


On Tuesday, March 26, 2019 at 10:03:33 AM UTC-7, Dmitry A. Kazakov wrote:
> On 2019-03-26 16:50, Anh Vo wrote:
> >> [...]
> > 
> > Why is an exception involved in the checking?
> 
> It is an easy way to roll back from a deeply nested call. Considering 
> that a job solver may loop through many iterations, possibly making 
> recursive calls, threading a return value through all of them could be 
> quite complicated.

I am very curious about this complication, so I would like to see an actual example, if possible.



* Re: Competing Servers
  2019-04-01 19:21       ` Anh Vo
@ 2019-04-01 19:58         ` Dmitry A. Kazakov
  0 siblings, 0 replies; 6+ messages in thread
From: Dmitry A. Kazakov @ 2019-04-01 19:58 UTC (permalink / raw)


On 2019-04-01 21:21, Anh Vo wrote:
> On Tuesday, March 26, 2019 at 10:03:33 AM UTC-7, Dmitry A. Kazakov wrote:
>> On 2019-03-26 16:50, Anh Vo wrote:
>>> [...]
>>>
>>> Why is an exception involved in the checking?
>>
>> It is an easy way to roll back from a deeply nested call. Considering 
>> that a job solver may loop through many iterations, possibly making 
>> recursive calls, threading a return value through all of them could be 
>> quite complicated.
> 
> I am very curious about this complication, so I would like to see an actual example, if possible.

Since it is complicated, there is no simple example. You can find how 
the pattern is used here:

    http://www.dmitry-kazakov.de/ada/fuzzy_ml_api.htm#2.8

There is an object passed down to each operation. Operations can take 
several hours to compute. The object is an "indicator" that serves both 
progress indication and cancellation purposes. An outside task (viewer) 
monitors the object, e.g. to update a progress bar. It can also request 
cancellation when the user presses the "abort" button. Once the solver 
task reaches a checkpoint and tries to update the indicator object, it 
gets an exception (End_Error), which then propagates up through all 
recursive calls and nested loops.
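A hypothetical reconstruction of the indicator idea, much simplified 
from the library above (names and behavior here are my assumptions, not 
the library's actual API):

```ada
with Ada.Text_IO;
with Ada.IO_Exceptions;

procedure Indicator_Sketch is

   --  The solver reports progress through a protected object; once a
   --  cancellation was requested, the next progress update raises
   --  End_Error, which unwinds the solver's call stack.
   protected Indicator is
      procedure Set (Percent : Natural);   --  checkpoint: may raise
      procedure Cancel;                    --  called by the viewer
      function Current return Natural;     --  e.g. for a progress bar
   private
      Progress  : Natural := 0;
      Cancelled : Boolean := False;
   end Indicator;

   protected body Indicator is
      procedure Set (Percent : Natural) is
      begin
         if Cancelled then
            raise Ada.IO_Exceptions.End_Error;
         end if;
         Progress := Percent;
      end Set;

      procedure Cancel is
      begin
         Cancelled := True;
      end Cancel;

      function Current return Natural is
      begin
         return Progress;
      end Current;
   end Indicator;

begin
   Indicator.Set (10);      --  checkpoints pass while not cancelled
   Indicator.Cancel;        --  e.g. the user pressed "abort"
   Indicator.Set (20);      --  this checkpoint raises End_Error
exception
   when Ada.IO_Exceptions.End_Error =>
      Ada.Text_IO.Put_Line ("solver unwound after cancellation");
end Indicator_Sketch;
```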

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

