comp.lang.ada
From: Brad Moore <brad.moore@shaw.ca>
Subject: Re: GNAT and Tasklets
Date: Fri, 19 Dec 2014 11:35:05 -0700
Message-ID: <td_kw.949255$FX2.779836@fx18.iad>
In-Reply-To: <1r2ziulc78imb$.ad6zx5upic6s$.dlg@40tude.net>

On 14-12-19 10:28 AM, Dmitry A. Kazakov wrote:
> On Fri, 19 Dec 2014 09:42:52 -0700, Brad Moore wrote:
>
>> On 14-12-19 04:01 AM, Dmitry A. Kazakov wrote:
>>> On Fri, 19 Dec 2014 10:40:03 +0000 (UTC), Georg Bauhaus wrote:
>>>
>>>> <vincent.diemunsch@gmail.com> wrote:
>>>>
>>>>> It would be interesting to do a little survey on existing code using tasking.
>>>>> I have the impression that only tasks at Library level do rendez-vous and
>>>>> protected object synchronisation, and local tasks, most of the time, are
>>>>> limited to a rendez-vous with their parent task at the beginning or at
>>>>> the end. So maybe we should put restrictions on local tasks, so that we
>>>>> can map them to jobs.
>>>>
>>>> Won't the parallel loop feature be providing
>>>> for this kind of mini job:
>>>
>>> Parallel loops are useless for practical purposes. It puzzles me why
>>> people are wasting time with this.
>>
>> For multicore, the idea is to make better use of the cores when doing so
>> will improve performance.
>
> I don't think multi-core would bring any advantage. Starting / activating /
> reusing / feeding / re-synchronizing threads is too expensive.

Yes, there is overhead, but if the amount of overhead is small relative 
to the work being performed, and the work can be divided up among 
multiple cores, then there is an obvious performance advantage.
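
To put rough numbers on it (purely illustrative figures): suppose each 
iteration of a loop does 10 ms of real work, we have 4 cores, and 
starting, feeding, and joining the workers costs 0.5 ms in total. For 8 
iterations:

    sequential:  8 * 10 ms                 = 80.0 ms
    parallel:    (8 / 4) * 10 ms + 0.5 ms  = 20.5 ms

The overhead is real, but it is dwarfed by the work, so the parallel 
version wins by nearly a factor of four. If each iteration took 10 
microseconds instead, the same overhead would eat the entire gain, 
which is your point.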


>
> Parallel loops could be useful on some massively vectorized architectures
> in some very specialized form, or on architectures with practically
> infinite number of cores (e.g. molecular computers). Anyway feeding threads
> with inputs and gathering outputs may still mean more overhead than any
> gain.
>
>> Also, the number of iterations does not need to be large to see
>> parallelism benefits.
>>
>>       for I in parallel 1 .. 10 loop
>>           Lengthy_Processing_Of_Image (I);
>>       end loop;
>
> Certainly not for image processing. In image processing when doing it by
> segments, you need to sew the segments along their borders, practically in
> all algorithms. That makes parallelization far more complicated to be
> handled by such a blunt thing as parallel loop.


If you have special hardware, such as vectorization support, that is 
another form of parallelism. Not all platforms have such hardware 
support though. A compiler might decide to take advantage of both 
hardware and software parallelism for a particular problem.

The example here was meant to be a trivial one, standing in for 
manipulation of a large data structure such as a bit array. Another 
example might be solving a large system of differential equations.
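
To make that concrete, here is roughly what one could write by hand 
today to get the effect of the parallel loop above. This is only a 
sketch: the iteration count, worker count, and chunking are placeholder 
choices, Lengthy_Processing_Of_Image is assumed from the earlier 
example, and a compiler would presumably generate something 
lighter-weight than full tasks.

    declare
       N           : constant := 1_000;
       Num_Workers : constant := 4;               -- e.g. one per core
       Chunk       : constant := N / Num_Workers; -- assume even division

       task type Worker is
          entry Start (First, Last : Positive);
       end Worker;

       task body Worker is
          Lo, Hi : Positive;
       begin
          --  Copy the bounds during the rendezvous, then release the
          --  caller and do the real work concurrently.
          accept Start (First, Last : Positive) do
             Lo := First;
             Hi := Last;
          end Start;
          for I in Lo .. Hi loop
             Lengthy_Processing_Of_Image (I);
          end loop;
       end Worker;

       Workers : array (1 .. Num_Workers) of Worker;
    begin
       for W in Workers'Range loop
          Workers (W).Start (First => (W - 1) * Chunk + 1,
                             Last  => W * Chunk);
       end loop;
       --  The block does not exit until every Worker has terminated.
    end;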

>
>>> They could start with logical operations instead:
>>>
>>>       X and Y
>>>
>>> is already parallel by default. AFAIK nothing in RM forbids concurrent
>>> evaluation of X and Y if they are independent. Same with Ada arithmetic.
>>> E.g.
>>>
>>>      A + B + C + D
>>>
>>> So far no compiler evaluates arguments concurrently or vectorizes
>>> sub-expressions like:
>>>
>>>      A
>>>      B  +
>>>      C      +
>>>      D  +
>>>
>>> Because if they did, the result would run slower than sequential code. It
>>> simply is not worth the effort on existing machine architectures.
>>
>> The compiler should be able to make the decision to parallelize these if
>> there is any benefit to doing so. Likely the decision would be to *not*
>> parallelize these, if A, B, C, and D are objects of some elementary type.
>>
>> But it depends on the datatype of A, B, C, and D.
>>
>> Also A, B, C, and D might be function calls, not simple data references,
>> and these calls might involve lengthy processing, in which case, adding
>> parallelism might make sense.
>
> Yes, provided the language has means to describe side effects of such
> computations in a way making the decision safe.
>
>> Or, if these are objects of a Big Number library with infinite
>> precision, you might have a rational number with pages of digits each
>> for numerator and denominator. Performing math on such values might very
>> well benefit from parallelism.
>
> It won't, because a big number library will use the heap or a shared part
> of the stack, which will require interlocking, and thus will either be
> marked as "impure", so that the compiler will not try to parallelize, or
> else will make the compiler use locks, which will effectively kill
> parallelism.

A while back I wrote an example tackling the problem associated with the 
first computer program (the one written by Ada Lovelace), which is to 
generate Bernoulli numbers. I originally wanted to use an algorithm that 
closely matched the processing of Babbage's Analytical Engine, but found 
that that algorithm couldn't be parallelized, since the results of each 
iteration depended on the results of previous iterations.

However, I did find another algorithm to generate Bernoulli numbers that 
could be parallelized.

For that implementation, I wrote a Big Number library, and found indeed 
that the parallelism did work, and was beneficial.

At the time I believe I was just using the default memory pool, which 
would have had the interlocking you mention.

If I were to try again, I would use a pool such as my Deepend storage 
pools, where each worker thread has its own local pool, and thus no 
interlocking is needed. I would expect to see even more performance 
gains with this approach.
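
As a rough illustration of the idea, here is a toy task-local pool. To 
be clear, this is something I am making up for the example, not 
Deepend's actual interface: a naive bump allocator that never needs a 
lock, because each worker task owns its own Arena. It assumes the 
backing array is at least as aligned as any allocation will require.

    with System;                  use System;
    with System.Storage_Elements; use System.Storage_Elements;
    with System.Storage_Pools;

    package Task_Local_Pools is

       --  Each worker task declares its own Arena, so Allocate is
       --  never called concurrently on the same pool object.
       type Arena (Size : Storage_Count) is
         new System.Storage_Pools.Root_Storage_Pool with record
          Data : Storage_Array (1 .. Size);
          Next : Storage_Offset := 1;
       end record;

       overriding procedure Allocate
         (Pool                     : in out Arena;
          Storage_Address          : out Address;
          Size_In_Storage_Elements : Storage_Count;
          Alignment                : Storage_Count);

       --  Individual deallocations are no-ops; all storage is
       --  reclaimed at once when the Arena goes away with its task.
       overriding procedure Deallocate
         (Pool                     : in out Arena;
          Storage_Address          : Address;
          Size_In_Storage_Elements : Storage_Count;
          Alignment                : Storage_Count) is null;

       overriding function Storage_Size
         (Pool : Arena) return Storage_Count is (Pool.Size);

    end Task_Local_Pools;

    package body Task_Local_Pools is

       overriding procedure Allocate
         (Pool                     : in out Arena;
          Storage_Address          : out Address;
          Size_In_Storage_Elements : Storage_Count;
          Alignment                : Storage_Count)
       is
          --  Round the next free slot up to the requested alignment.
          Align : constant Storage_Count  :=
            Storage_Count'Max (Alignment, 1);
          Skew  : constant Storage_Offset := (Pool.Next - 1) mod Align;
          Start : constant Storage_Offset :=
            (if Skew = 0 then Pool.Next else Pool.Next + Align - Skew);
       begin
          if Start + Size_In_Storage_Elements - 1 > Pool.Size then
             raise Storage_Error;  -- this toy arena never grows
          end if;
          Storage_Address := Pool.Data (Start)'Address;
          Pool.Next       := Start + Size_In_Storage_Elements;
       end Allocate;

    end Task_Local_Pools;

A worker could then allocate through an access type tied to its own 
arena, along these lines (a fragment only; Digits_Array stands in for 
whatever representation the Big Number library uses):

    task body Worker is
       Pool : Task_Local_Pools.Arena (Size => 64 * 1024);
       type Digits_Array is array (Positive range <>) of Natural;
       type Digits_Access is access Digits_Array;
       for Digits_Access'Storage_Pool use Pool;
       V : Digits_Access;
    begin
       V := new Digits_Array (1 .. 100);  -- comes from this task's arena
       V.all := (others => 0);            -- no locks touched anywhere
    end Worker;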


>
>> We looked at being able to explicitly state parallelism for subprograms,
>> (parallel subprograms), but found that syntax was messy, and there were
>> too many other problems.
>>
>> We are currently thinking a parallel block syntax better provides this
>> capability, if the programmer wants to explicitly indicate where
>> parallelism is desired.
>>
>> eg.
>>
>>        Left, Right, Total : Integer := 0;
>>
>>        parallel
>>             Left := A + B;
>>        and
>>             Right := C + D;
>>        end parallel;
>>
>>        Total := Left + Right;
>>
>> or possibly allow some automatic reduction
>>
>>
>>      Total : Integer with Reduce := 0;
>>
>>      parallel
>>           Total := A + B;
>>      and
>>           Total := C + D;
>>      end parallel;
>>
>> Here, two "tasklets" would get created that can execute in parallel,
>> each with their local instance of the Total result (i.e. thread local
>> storage), and at the end of the parallel block, the two results are
>> reduced into one and assigned to the actual Total.
>>
>> The reduction operation and identity value used to initialize the local
>> instances of the Total could be defaulted by the compiler for simple
>> data types, but could be explicitly stated if desired.
>>
>> eg.
>>
>>      Total : Integer with Reduce => (Reducer => "+", Identity => 0) := 0;
>>
>>      parallel
>>           Total := A + B;
>>      and
>>           Total := C + D;
>>      end parallel;
>
> The semantics is not clear. What happens if:
>
>     parallel
>        Total := Total + 1;
>        Total := A + B;
>     and
>        Total := C + D;
>     end parallel;
>
> and of course the question of exceptions raised within concurrent paths.
>


I haven't spelled out all the semantics, but to answer your questions: 
each arm of the parallel block is a separate thread of execution (which 
we have been calling a tasklet).

Each tasklet starts off with its own local declaration of Total, 
initialized to 0, which is the Identity value for the reduction.

So, for the top Total, you end up with {Top_}Total := 1 + A + B;
for the bottom Total, you end up with {Bottom_}Total := C + D;

Then during the reduction phase, those two results get reduced using the 
reduction operation, which in this case is "+".

So the end result is Total = 1 + A + B + C + D;
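
In today's Ada, that net effect could be written out by hand along 
these lines. This is purely illustrative: the operand values are made 
up, each arm's assignments are folded into its local accumulator per 
the semantics just described, and a real implementation would use 
lighter-weight tasklets rather than full tasks.

    with Ada.Text_IO;

    procedure Reduce_Demo is
       A : constant Integer := 10;  -- stand-in operand values
       B : constant Integer := 20;
       C : constant Integer := 30;
       D : constant Integer := 40;

       Total : Integer := 0;

       --  One local instance of Total per tasklet, initialized to the
       --  Identity value for the reduction.
       Top_Total    : Integer := 0;
       Bottom_Total : Integer := 0;
    begin
       declare
          task Top;
          task Bottom;

          task body Top is
          begin
             Top_Total := Top_Total + 1;        --  Total := Total + 1;
             Top_Total := Top_Total + (A + B);  --  Total := A + B;
          end Top;

          task body Bottom is
          begin
             Bottom_Total := Bottom_Total + (C + D);  --  Total := C + D;
          end Bottom;
       begin
          null;  -- block waits here for Top and Bottom to terminate
       end;

       --  Reduction phase: combine the local instances with Reducer "+".
       Total := Total + Top_Total + Bottom_Total;

       Ada.Text_IO.Put_Line (Integer'Image (Total));
       --  prints 101, i.e. 1 + A + B + C + D
    end Reduce_Demo;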

As for exceptions, we are thinking that execution does not continue past 
the parallel block until all branches have finished executing. If 
exceptions are raised in multiple branches of the parallel block, then 
only one of those exceptions would be selected, and only that one 
exception would get propagated outside the block.
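
One way to get that effect with today's building blocks is to have each 
tasklet catch and save its exception occurrence, and have the master 
reraise one of the saved occurrences after the join. A sketch, where 
the two raise statements stand in for arbitrary work in each branch:

    with Ada.Exceptions; use Ada.Exceptions;
    with Ada.Text_IO;

    procedure Parallel_Exceptions_Demo is
       Occ_1, Occ_2 : Exception_Occurrence;
    begin
       declare
          task T1;
          task T2;

          task body T1 is
          begin
             Save_Occurrence (Occ_1, Null_Occurrence);  -- assume success
             raise Constraint_Error;  -- stand-in for branch one's work
          exception
             when E : others => Save_Occurrence (Occ_1, E);
          end T1;

          task body T2 is
          begin
             Save_Occurrence (Occ_2, Null_Occurrence);
             raise Program_Error;     -- stand-in for branch two's work
          exception
             when E : others => Save_Occurrence (Occ_2, E);
          end T2;
       begin
          null;  -- wait for both branches to finish, exception or not
       end;

       --  Select just one exception to propagate.  Reraise_Occurrence
       --  of a null occurrence does nothing, so the first branch that
       --  failed wins, and its exception escapes the "parallel block".
       Reraise_Occurrence (Occ_1);
       Reraise_Occurrence (Occ_2);
       Ada.Text_IO.Put_Line ("neither branch raised an exception");
    end Parallel_Exceptions_Demo;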

Brad

