comp.lang.ada
From: "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de>
Subject: Re: GNAT and Tasklets
Date: Fri, 19 Dec 2014 18:28:45 +0100
Message-ID: <1r2ziulc78imb$.ad6zx5upic6s$.dlg@40tude.net> (raw)
In-Reply-To: gAYkw.944188$Fo3.593534@fx09.iad

On Fri, 19 Dec 2014 09:42:52 -0700, Brad Moore wrote:

> On 14-12-19 04:01 AM, Dmitry A. Kazakov wrote:
>> On Fri, 19 Dec 2014 10:40:03 +0000 (UTC), Georg Bauhaus wrote:
>>
>>> <vincent.diemunsch@gmail.com> wrote:
>>>
>>>> It would be interesting to do a little survey on existing code using tasking.
>>>> I have the impression that only tasks at Library level do rendez-vous and
>>>> protected object synchronisation, and local tasks, most of the time, are
>>>> limited to a rendez-vous with their parent task at the beginning or at
>>>> the end. So maybe we should put restrictions on local tasks, so that we
>>>> can map them to jobs.
>>>
>>> Won't the parallel loop feature be providing
>>> for this kind of mini job:
>>
>> A parallel loop is useless for practical purposes. I wonder why people
>> waste time with this.
> 
> For multicore, the idea is to make better use of the cores when doing so 
> will improve performance.

I don't think multi-core would bring any advantage. Starting / activating /
reusing / feeding / re-synchronizing threads is too expensive.

Parallel loops could be useful on some massively vectorized architectures
in some very specialized form, or on architectures with a practically
infinite number of cores (e.g. molecular computers). Even then, feeding
threads with inputs and gathering their outputs may still cost more in
overhead than any gain.

> Also, the number of iterations does not need to be large to see 
> parallelism benefits.
> 
>      for I in parallel 1 .. 10 loop
>          Lengthy_Processing_Of_Image (I);
>      end loop;

Certainly not for image processing. When an image is processed by
segments, practically all algorithms require sewing the segments together
along their borders. That makes the parallelization far too complicated to
be handled by such a blunt instrument as a parallel loop.
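For illustration, here is roughly what such a segmented loop looks like
when written by hand with today's tasks (a sketch only; Process_Row,
Sew_Seams and the constants are invented placeholders). The sequential
seam pass at the end is exactly the part a parallel loop cannot express:

```ada
--  Sketch: a hand-rolled "parallel loop" over image rows in today's
--  Ada.  Process_Row and Sew_Seams are invented placeholders.
procedure Parallel_Filter is
   Height  : constant := 1024;
   Workers : constant := 4;
   Chunk   : constant := Height / Workers;

   procedure Process_Row (Row : Positive) is null;  --  per-row work
   procedure Sew_Seams is null;  --  fix up the segment borders

begin
   declare
      task type Worker is
         entry Start (First, Last : Positive);
      end Worker;

      task body Worker is
         Lo, Hi : Positive;
      begin
         accept Start (First, Last : Positive) do
            Lo := First;
            Hi := Last;
         end Start;
         for Row in Lo .. Hi loop
            Process_Row (Row);  --  rows within a segment are independent
         end loop;
      end Worker;

      Pool : array (1 .. Workers) of Worker;
   begin
      for I in Pool'Range loop
         Pool (I).Start ((I - 1) * Chunk + 1, I * Chunk);
      end loop;
   end;  --  the block is left only when all workers have terminated

   Sew_Seams;  --  the sequential pass a parallel loop does not express
end Parallel_Filter;
```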

>> They could start with logical operations instead:
>>
>>      X and Y
>>
>> is already parallel by default. AFAIK nothing in RM forbids concurrent
>> evaluation of X and Y if they are independent. Same with Ada arithmetic.
>> E.g.
>>
>>     A + B + C + D
>>
>> So far no compiler evaluates arguments concurrently or vectorizes
>> sub-expressions like:
>>
>>     A \
>>        + \
>>     B /   \
>>            +
>>     C \   /
>>        + /
>>     D /
>>
>> Because if they did, the result would run slower than the sequential
>> code. It is simply not worth the effort on existing machine
>> architectures.
> 
> The compiler should be able to make the decision to parallelize these if 
> there is any benefit to doing so. Likely the decision would be to *not* 
> parallelize these, if A, B, C, and D are objects of some elementary type.
> 
> But it depends on the datatype of A, B, C, and D.
> 
> Also A, B, C, and D might be function calls, not simple data references, 
> and these calls might involve lengthy processing, in which case, adding 
> parallelism might make sense.

Yes, provided the language has means to describe the side effects of such
computations in a way that makes the decision safe.

> Or, if these are objects of a Big Number library with infinite 
> precision, you might have an irrational number with pages of digits each 
> for numerator and denominator. Performing math on such values might very 
> well benefit from parallelism.

It won't, because a big-number library will use the heap or a shared part
of the stack, which requires interlocking. Its operations will therefore
either be marked as "impure", so that the compiler will not try to
parallelize them, or else force the compiler to use locks, which will
effectively kill the parallelism.
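To make that concrete, here is a sketch of the locking such a library
would hide inside every operation (all names here are invented for
illustration): since each arithmetic operation has to pass through the
same protected object guarding the shared storage, concurrent evaluation
of the operands serializes there anyway.

```ada
--  Sketch: a big-number package whose shared storage sits behind a
--  protected object.  All names are invented for illustration.
package Big_Numbers is
   type Big_Number is private;
   function "+" (Left, Right : Big_Number) return Big_Number;
private
   type Digits_Ref is access String;  --  stand-in for digit storage
   type Big_Number is record
      Value : Digits_Ref;
   end record;
end Big_Numbers;

package body Big_Numbers is

   protected Pool_Lock is
      procedure Allocate (Result : out Digits_Ref);
   end Pool_Lock;

   protected body Pool_Lock is
      procedure Allocate (Result : out Digits_Ref) is
      begin
         Result := new String'("0");  --  shared-heap allocation, serialized
      end Allocate;
   end Pool_Lock;

   function "+" (Left, Right : Big_Number) return Big_Number is
      pragma Unreferenced (Left, Right);  --  digit arithmetic elided
      R : Digits_Ref;
   begin
      Pool_Lock.Allocate (R);  --  every operation takes the same lock,
                               --  so concurrent "+" calls queue up here
      return (Value => R);
   end "+";

end Big_Numbers;
```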

> We looked at being able to explicitly state parallelism for subprograms, 
> (parallel subprograms), but found that syntax was messy, and there were 
> too many other problems.
> 
> We are currently thinking a parallel block syntax better provides this 
> capability, if the programmer wants to explicitly indicate where 
> parallelism is desired.
> 
> eg.
> 
>       Left, Right, Total : Integer := 0;
> 
>       parallel
>            Left := A + B;
>       and
>            Right := C + D;
>       end parallel;
> 
>       Total := Left + Right;
> 
> or possibly allow some automatic reduction
> 
> 
>     Total : Integer with Reduce := 0;
> 
>     parallel
>          Total := A + B;
>     and
>          Total := C + D;
>     end parallel;
> 
> Here, two "tasklets" would get created that can execute in parallel, 
> each with their local instance of the Total result (i.e. thread local 
> storage), and at the end of the parallel block, the two results are 
> reduced into one and assigned to the actual Total.
> 
> The reduction operation and identity value used to initialize the local 
> instances of the Total could be defaulted by the compiler for simple 
> data types, but could be explicitly stated if desired.
> 
> eg.
> 
>     Total : Integer with Reduce => (Reducer => "+", Identity => 0) := 0;
> 
>     parallel
>          Total := A + B;
>     and
>          Total := C + D;
>     end parallel;

The semantics are not clear. What happens if:

   parallel
      Total := Total + 1;
      Total := A + B;
   and
      Total := C + D;
   end parallel;

And, of course, there is the question of exceptions raised within the
concurrent paths.
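For comparison, spelling the reduction variant out with today's tasks
shows both the intended expansion and the ambiguity (a sketch, with
made-up operand values): each tasklet writes only its own copy,
initialized to the identity, so a statement like Total := Total + 1
inside one path could only see that local copy, never the contribution
of the other path.

```ada
--  Sketch of what the proposed reducing parallel block would expand to.
procedure Reduce_Demo is
   A, B, C, D  : constant Integer := 1;  --  made-up operands
   Left, Right : Integer := 0;  --  per-tasklet copies (identity = 0)
   Total       : Integer := 0;
begin
   declare
      task T1;
      task T2;

      task body T1 is
      begin
         Left := A + B;   --  each tasklet updates only its own copy
      end T1;

      task body T2 is
      begin
         Right := C + D;
      end T2;
   begin
      null;   --  the block waits here for T1 and T2 to complete
   end;
   Total := Left + Right;   --  the reduction step, "+" as Reducer
end Reduce_Demo;
```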

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
