comp.lang.ada
From: Brad Moore <brad.moore@shaw.ca>
Subject: Re: GNAT and Tasklets
Date: Mon, 15 Dec 2014 22:09:39 -0700
Message-ID: <n8Pjw.147675$Sn4.43191@fx10.iad>
In-Reply-To: <529f18b8-0612-4236-8aac-efd641f677cd@googlegroups.com>

On 2014-12-14 2:29 PM, vincent.diemunsch@gmail.com wrote:
> On Sunday, December 14, 2014 at 01:18:42 UTC+1, Hubert wrote:
>
>> The result of my research was that, depending on the OS the Ada
>> program was running on, you could get several hundred OS threads, or
>> maybe 1-2K on Linux, but there is an upper limit, because every OS
>> thread that runs a task has a stack associated with it, so mostly the
>> available memory is the limit, I think.
>> [...]
>> My solution was to implement my own pre-emptive Job system on top of the
>> OS threads. I allocate as many threads (or Tasks in Ada) as there are
>> processor cores and then assign a number of Jobs to each.
>> [...]
>> Depending on what your requirements are (great number of parallel Jobs),
>> this may very well be your only reliable solution.
>>
>
> Yes, I think you are completely right. Is your library private, or do
> you plan to release it as open source?

As another alternative, you could look at the Paraffin libraries, which 
can be found at

https://sourceforge.net/projects/paraffin/

These libraries are a set of open source generics that provide several
different strategies for parallel loops, parallel recursion, and
parallel blocks. You can choose between different parallelism
strategies, such as static load balancing (work sharing), dynamic load
balancing using a work-stealing approach for loops, or what I call work
seeking, which is another variation of load balancing.

You can also choose between using task pools, or creating worker tasks 
dynamically on the fly.

Generally I found results similar to those reported by Hubert: the
optimal number of workers typically matches the number of available
cores in the system. Adding more workers than that typically does not
improve performance, and eventually degrades it, since each worker
introduces some overhead; once the cores are fully loaded with work,
extra workers only add overhead without any performance benefit.

>
> This clearly shows that the compiler wasn't able to produce an adequate
> solution, even though the case of many little local tasks is quite simple
> and has become a standard way of using multicore computers (see, for
> instance, Grand Central Dispatch on Mac OS X).

The compiler is already allowed to use implicit parallelism when it sees 
fit, if it can achieve the same semantic effects that would result from 
sequential execution.

RM 9.11 "Concurrent task execution may be implemented on multicomputers, 
multiprocessors, or with interleaved execution on a single physical 
processor. On the other hand, whenever an implementation can determine 
that the required semantic effects can be achieved when parts of the 
execution of a given task are performed by different physical processors 
acting in parallel, it may choose to perform them in this way"

However, there are limits to what the compiler can do implicitly. For
instance, it cannot determine whether the following loop can safely be
executed in parallel.

   Sum : Integer := 0;

   for I in 1 .. 1000 loop
      Sum := Sum + Foo (I);
   end loop;

For one thing, there is a data race on the variable Sum. If the loop
were broken up into multiple tasklets executing in parallel, the
compiler would need to structure the implementation of the loop very
differently from what is written, and the semantics of execution would
not be the same as in the sequential case, particularly if an exception
is raised inside the loop. Secondly, if Foo is a third-party library
call, the compiler cannot know whether the Foo function itself modifies
global variables, which would make the loop unsafe to parallelize.
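
To make the restructuring concrete, here is roughly what would have to
be written by hand (a sketch only: it assumes Foo is free of side
effects, and names such as Parallel_Sum_Sketch and Accumulator are made
up for illustration). Each worker sums its own slice of the range into
a local partial result, and the partial results are combined through a
protected object, so the single shared Sum, and its data race,
disappear. Note also how differently an exception raised in Foo would
propagate compared with the sequential loop.

   with Ada.Text_IO;

   procedure Parallel_Sum_Sketch is

      Num_Workers : constant := 4;  --  would normally match the core count

      --  Stand-in for the real Foo; assumed to be free of side effects.
      function Foo (I : Integer) return Integer is (I);

      --  Combines the workers' partial sums without a data race.
      protected Accumulator is
         procedure Add (Partial : Integer);
         function Total return Integer;
      private
         Sum : Integer := 0;
      end Accumulator;

      protected body Accumulator is
         procedure Add (Partial : Integer) is
         begin
            Sum := Sum + Partial;
         end Add;

         function Total return Integer is
         begin
            return Sum;
         end Total;
      end Accumulator;

      task type Worker (First, Last : Integer);

      task body Worker is
         Partial : Integer := 0;
      begin
         for I in First .. Last loop
            Partial := Partial + Foo (I);
         end loop;
         Accumulator.Add (Partial);  --  one synchronized update per worker
      end Worker;

   begin
      declare
         type Worker_Access is access Worker;
         Chunk   : constant Integer := 1000 / Num_Workers;
         Workers : array (1 .. Num_Workers) of Worker_Access;
      begin
         for W in Workers'Range loop
            Workers (W) := new Worker
              (First => (W - 1) * Chunk + 1,
               Last  => (if W = Num_Workers then 1000 else W * Chunk));
         end loop;
      end;  --  the block waits here until all workers have terminated

      Ada.Text_IO.Put_Line ("Sum =" & Integer'Image (Accumulator.Total));
   end Parallel_Sum_Sketch;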


>
> I really hope that Ada 202X will limit new features to a few real
> improvements, and try hard to improve compilers.

In order for the compiler to generate implicit parallelism for code such
as the example above, it needs to be given additional semantic
information so that it can guarantee that the parallel transformation
can be done safely. We are looking at ways of providing such information
to the compiler via new aspects that it can check statically. Whether
such proposals will actually become part of Ada 202x is another
question. It depends on the demand for such features, and on how well
they can be worked out without adding too much complexity to the
language or implementation burden to the compiler vendors. I think the
general goal at this point will be to limit Ada 202x in terms of new
features, but that is the future, and the future is unknown.
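
Purely as an illustration of the kind of annotation involved (this
borrows the Global aspect from SPARK 2014; the actual Ada 202x form, if
any, is not settled), a contract like the following would tell the
compiler that Foo touches no global state, which is exactly what it
needs to know to parallelize the loop above safely:

   package Pure_Functions is

      --  Global is a SPARK 2014 aspect, used here only to show the idea.
      function Foo (I : Integer) return Integer
        with Global => null;  --  Foo neither reads nor writes globals

   end Pure_Functions;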

Brad

>
> Kind regards,
>
> Vincent
>
