comp.lang.ada
From: "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de>
Subject: Re: question about tasks, multithreading and multi-cpu machines
Date: Fri, 17 Mar 2006 20:10:17 +0100
Message-ID: <k8jyv9mvluby.109brt60q5uc8$.dlg@40tude.net>
In-Reply-To: dvegp9$im3$1@sunnews.cern.ch

On Fri, 17 Mar 2006 15:23:37 +0100, Maciej Sobczak wrote:

> Dmitry A. Kazakov wrote:
> 
>> If you
>> don't or don't care, then it is an optimization issue.
> 
> In which case someone somewhere needs to translate it into source code, 
> unless you have a very smart parallelizing compiler. I'm addressing the 
> source-code aspect of the whole.

I prefer to leave that to the compiler. After all, you don't manually
optimize programs for the pipelines of a concrete CPU. At some point you
say: OK, that's my abstraction level, I don't care what goes on beneath,
because it is economically unreasonable.
 
>> I don't think that Occam's style of concurrency could be a viable
>> alternative. Mapping concurrency onto control flow statements is OK, and
>> Ada has this as accept and select statements
> 
> Yes, but accept and select are statements that operate with the 
> concurrency that is already there. The problem is how to introduce this 
> concurrency in the first place, without resorting to polluting the final 
> solution with irrelevant entities.

I'm not sure what you mean. But, fundamentally, concurrency cannot be
described in terms of a Turing machine; it is not computable. You have to
postulate it.

>> Code like the above is very difficult to reuse. Suppose you want to
>> extend it. How can you add something into each of two execution paths?
> 
> In the same way as you add something into each branch of the If statement.

Absolutely, because both are too low-level.

You can extend a polymorphic subprogram.
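A minimal Ada sketch of what extending a polymorphic subprogram means here (the package and operation names are illustrative, not from the thread): the derived type adds behaviour to the inherited execution path without touching the original code.

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Extend_Demo is
   package Base_Pkg is
      type Base is tagged null record;
      procedure Run (X : Base);        -- the original execution path
   end Base_Pkg;

   package body Base_Pkg is
      procedure Run (X : Base) is
      begin
         Put_Line ("original behaviour");
      end Run;
   end Base_Pkg;
   use Base_Pkg;

   package Derived_Pkg is
      type Extended is new Base with null record;
      procedure Run (X : Extended);    -- extension, no modification
   end Derived_Pkg;

   package body Derived_Pkg is
      procedure Run (X : Extended) is
      begin
         Run (Base (X));               -- reuse the original path
         Put_Line ("added behaviour"); -- then add to it
      end Run;
   end Derived_Pkg;

   Obj : Derived_Pkg.Extended;
begin
   Run (Base'Class (Obj));  -- dispatches to the extended version
end Extend_Demo;
```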

>> Without code modification?
> 
> So what about the If statement? :)
> Can you extend the branches of If without code modification? :|

One way is how constructors and destructors are extended.
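In Ada this pattern is controlled types: Initialize and Finalize are inherited and can be extended by a derived type that calls the parent's version. A sketch (the type names are hypothetical):

```ada
with Ada.Finalization;

package Resources is
   type Base_Resource is new Ada.Finalization.Controlled with null record;
   procedure Initialize (R : in out Base_Resource);
   procedure Finalize   (R : in out Base_Resource);

   --  The extension adds to both paths without modifying the original.
   type Extended_Resource is new Base_Resource with null record;
   procedure Initialize (R : in out Extended_Resource);
   procedure Finalize   (R : in out Extended_Resource);
end Resources;

package body Resources is
   procedure Initialize (R : in out Base_Resource) is
   begin
      null;  -- acquire the base resource here
   end Initialize;

   procedure Finalize (R : in out Base_Resource) is
   begin
      null;  -- release the base resource here
   end Finalize;

   procedure Initialize (R : in out Extended_Resource) is
   begin
      Initialize (Base_Resource (R));  -- original part first
      null;                            -- then the added part
   end Initialize;

   procedure Finalize (R : in out Extended_Resource) is
   begin
      null;                            -- added cleanup first
      Finalize (Base_Resource (R));    -- then the original part
   end Finalize;
end Resources;
```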

>> Suppose I call some subprograms from each of the
>> execution paths; how could they communicate?
> 
> Using objects that are dedicated for this, ehem, task.

OK, but then the concept of light-weight concurrency you have described is
incomplete. You need something else to make it usable. Ada now has
protected objects as well, though tasks were complete without them.

>> What about safety?
> 
> I don't see anything that would make it less safe than two separate task 
> bodies. Take the code already presented by Jean-Pierre Rosen:
> 
> declare
>     task T1;
>     task body T1 is
>     begin
>        A := 7;
>     end T1;
> 
>     task T2;
>     task body T2 is
>     begin
>        B := 8;
>     end T2;
> begin
>     null;
> end;
> 
> My *equivalent* example would be:
> 
> begin
>     A := 7;
> with
>     B := 8;
> end;
> 
> (where's the Ada readability when you need it? ;) )

Well, it is like claiming that *ptr++ is more readable. Yes, it is, if the
program is three lines long. Ada's variant fares better when the concurrent
paths become more complicated. A further advantage is that Ada's way is
modular from the start. Your model will sooner or later end up with:

begin
   Do_This;
with
   Do_That;
end;

not much different from:

   X : Do_This;  -- Task
   Y : Do_That; -- Another task
begin
   null;
end;

> From the safety point of view, what makes my example worse than the one 
> above it? What makes it less safe?

Concurrent bodies are visually decoupled. They don't share a common context.
Nothing forces them to be in the same package. It feels safer. (:-))

>> They would
>> definitely access some common data?
> 
> Probably yes, probably not - depends on the actual problem to be solved.
> Let's say that yes, there is some shared data. What makes the data 
> sharing more difficult/unsafe than in the case of two separate task bodies?

Tasks have contracted interfaces, I mean the rendezvous. This is the normal
way for tasks to communicate with each other. It can to some extent be
analyzed statically, provided programmers do not misuse it. For an
Occam-like paradigm no contracts are stated. Most likely programmers will
use shared variables.

>> If they spin for a lock, then what was
>> the gain of concurrency?
> 
> What is the gain of concurrency in the presence of locks with two 
> separate task bodies?
> This issue is completely orthogonal to the way the concurrency is 
> expressed. You have the same problems and the same solutions no matter 
> what is the syntax used to introduce concurrency.

I meant whether the construct begin ... with ... end is atomic. I vaguely
remember that Occam had channels to communicate between scheduling items.

>> If they don't; how to maintain such sort of code
>> in a large complex system?
> 
> The difference between separate task bodies and the support for 
> concurrency on the level of control statement is that the former can 
> *always* be built on top of the latter.

I don't think so. Consider: T1 starts T2 and then completes before T2.

> The other way round is not true.

If you allow functional decomposition, it will be.

>> How can I rewrite it for n-processors (n is
>> unknown in advance)?
> 
> With the help of asynchronous blocks, for example (I've mentioned them 
> in the article on my web page).
> As said above, more structured solutions can always be built on top of 
> less structured ones. In particular, it would be quite straightforward 
> to build a (generic?) package for this purpose, that would internally be 
> implemented with the help of concurrent control structures - this is 
> always possible. What I want is the possibility to express concurrency 
> on the level that does not require me to use any new entity, if this new 
> entity does not emerge by itself in the problem analysis.

That is OK, but it is only one side of the truth. Also, for the case where
the application domain is orthogonal to concurrency, I'd prefer no
statements at all. I'd like to specify some constraints and let the
compiler derive the threads it wants.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


