comp.lang.ada
From: Niklas Holsti <niklas.holsti@tidorum.invalid>
Subject: Re: Ada 2012 automatic concurrency?
Date: Thu, 27 Dec 2012 21:06:29 +0200
Message-ID: <ak3khlFe84jU1@mid.individual.net>
In-Reply-To: <cdf1f6a2-8773-4aa4-95a6-29fb810167d1@googlegroups.com>

On 12-12-27 07:57 , charleshixsn@earthlink.net wrote:
> On Wednesday, December 26, 2012 2:55:11 PM UTC-8, Niklas Holsti wrote:
>> On 12-12-25 19:26 , charleshixsn@earthlink.net wrote:
>>
>>> In the RM I read (page 209): NOTES 1 Concurrent task ex...
>>
>>
>> I believe that the second sentence in your quote has a more general 
>> meaning: that an Ada compiler is allowed to generate code that uses 
>> several processors or cores in parallel just to implement a logically 
>> sequential (single-task) computation. For example, if your program 
>> contains the two statements:
>>  
>>    x := a + b;
>>    y := a - b;
>>
>> and the target computer has two processors or cores capable of addition
>> and subtraction, the Ada compiler may generate code that uses these two
>> processors in parallel, one to compute a+b and the other to compute a-b.
>> This is allowed, since the semantic effects are the same as when using
>> one processor for both computations (assuming that none of these
>> variables is declared "volatile").
>>  
>> So I don't think that your use or non-use of protected types (or tasks)
>> is relevant to this RM sentence.
>>
> The use of protected types would be to ensure that the sections were independent.

I don't see how protected operations would help; in fact, I think they
hinder automatic parallelization.

A protected operation is allowed to access and modify global variables,
as well as its parameters and the components of the protected object.
For example, consider two operations, A and B, and a task that calls
them in sequence, like this:

   A;
   B;

Whether A and B are protected operations or not, the compiler would have
to look into their internals to discover if they access and/or modify
the same variables.

If A and B are ordinary operations, not protected ones, the compiler can
(at least conceptually) in-line the calls into the calling task, and
could then analyze the data dependencies and possibly arrange for
parallel execution of independent parts of the operations, or even of
the whole calls.
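
As an illustration of what such analysis could discover (the names and
bodies of A and B here are invented for the example, not taken from the
earlier discussion):

   --  Hypothetical A and B that touch disjoint variables. A compiler
   --  that in-lines both calls can see there is no data dependency
   --  between them, so it may schedule the two calls in parallel
   --  without changing the sequential semantics.
   procedure Demo is
      X, Y : Integer := 0;

      procedure A is
      begin
         X := X + 1;   --  reads and writes only X
      end A;

      procedure B is
      begin
         Y := Y - 1;   --  reads and writes only Y
      end B;

   begin
      A;   --  independent of the call to B below,
      B;   --  so both calls are candidates for parallel execution
   end Demo;

If A and B instead shared a variable, the in-lined code would show the
dependency and force sequential execution of (at least) those parts.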

However, if A and B are protected operations, while they can (in
principle) be in-lined in the same way, the in-lined code will include
the PO locking and unlocking code, which is very likely to access
variables that the compiler should consider as "volatile", which means
that it would be difficult for the compiler to interleave any code
in-lined from A with code in-lined from B (because it must preserve the
order of volatile accesses). The compiler would then have fewer
opportunities for parallelization than when the operations are not
protected.

>> For large-grain parallelization, you should use tasks. I don't know of
>> any Ada compiler, including GNAT, that does automatic large-grain
>> parallelization.

> Thank you.  That was the answer I was expecting, if not the one
> I was hoping for.  Tasks it is.
> 
> IIUC, there shouldn't be more than about twice as many tasks as
> cores.  Is this about right?

That depends... If the tasks are logically different, I would use
whatever number of tasks makes the design clearest. This would be using
concurrency or parallelism for logical purposes, not (mainly) for
performance increases.

If the tasks are all the same, as for example when you have a number of
identical "worker" tasks that grab and execute "jobs" from some pool of
"jobs to do", I would use a total number of tasks that makes the average
number of "ready" tasks about equal to the number of cores (or a bit
more, for margin). For example, if each worker task typically spends
half the time working, and half the time waiting for something, the
number of workers should be about twice the number of cores. (You should
not, of course, count as "waiting" the time that worker tasks spend
waiting for new jobs to work on.)
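
A minimal sketch of such a worker pool (the type Job, the single-slot
"queue", and the worker count are placeholders for illustration, not a
production design):

   with Ada.Text_IO;

   procedure Worker_Pool is

      type Job is new Integer;   --  placeholder job representation

      protected Job_Queue is
         procedure Add (J : in Job);
         entry Take (J : out Job);   --  workers block here when idle
      private
         Pending : Job := 0;
         Count   : Natural := 0;
      end Job_Queue;

      protected body Job_Queue is
         procedure Add (J : in Job) is
         begin
            Pending := J;        --  a real pool would keep a queue,
            Count   := Count + 1;--  not a single slot
         end Add;

         entry Take (J : out Job) when Count > 0 is
         begin
            J     := Pending;
            Count := Count - 1;
         end Take;
      end Job_Queue;

      task type Worker;

      task body Worker is
         J : Job;
      begin
         loop
            Job_Queue.Take (J);   --  time blocked here is the "waiting
                                  --  for new jobs" excluded above
            Ada.Text_IO.Put_Line (Job'Image (J));  --  stand-in for work
         end loop;
      end Worker;

      --  Per the rule of thumb: if each busy worker is ready about
      --  half the time, use about twice as many workers as cores,
      --  e.g. 8 workers on a 4-core machine.
      Workers : array (1 .. 8) of Worker;

   begin
      for I in 1 .. 8 loop
         Job_Queue.Add (Job (I));
      end loop;
   end Worker_Pool;

Note that these workers loop forever; a real pool would also need some
termination protocol (for example, a "shut down" job or a separate
protected flag the workers poll between jobs).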

Usually, the amount of waiting in tasks is variable and dynamic, which
means that the number of "ready" tasks fluctuates and will, with the
above rule of thumb, sometimes be less than the number of cores, so some
cores will be idle when that happens. If you are very anxious to avoid
occasional idle cores, you can of course increase the number of tasks to
skew the distribution of the number of "ready" tasks to higher numbers,
making idle cores less likely. But this comes at the cost of more
overhead in task scheduling, task stack space consumption, and perhaps
in data-cache turnover.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
      .      @       .



Thread overview: 10+ messages
2012-12-25 17:26 Ada 2012 automatic concurrency? charleshixsn
2012-12-26 22:02 ` John B. Matthews
2012-12-26 22:55 ` Niklas Holsti
2012-12-27  5:57   ` charleshixsn
2012-12-27  8:59     ` Dirk Heinrichs
2012-12-27 11:22       ` Brian Drummond
2012-12-27 22:13       ` Randy Brukardt
2012-12-27 14:27     ` Jeffrey Carter
2012-12-27 20:13       ` tmoran
2012-12-27 19:06     ` Niklas Holsti [this message]