comp.lang.ada
* question about tasks, multithreading and multi-cpu machines
@ 2006-03-14 16:26 Norbert Caspari
  2006-03-14 16:51 ` Pascal Obry
                   ` (5 more replies)
  0 siblings, 6 replies; 29+ messages in thread
From: Norbert Caspari @ 2006-03-14 16:26 UTC (permalink / raw)


In Ada it is possible to declare multiple parallel running "tasks". But in 
my opinion the keyword "task" is somewhat misleading because in fact, those 
"tasks" are really threads.

If I run such a program on a multi-cpu machine, the process itself will use 
only one cpu, even though I create several "tasks".

I tested this with gnat v3.15p under HPUX 11 on a multi-cpu server.

How can I write my code to utilize all CPUs on such a machine? Is there a 
different way in Ada to perform multi-tasking instead of multi-threading?

Thank you for your help!

Best regards, Norbert




^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-14 16:26 question about tasks, multithreading and multi-cpu machines Norbert Caspari
@ 2006-03-14 16:51 ` Pascal Obry
  2006-03-16  4:27   ` Norbert Caspari
  2006-03-14 17:18 ` Jean-Pierre Rosen
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 29+ messages in thread
From: Pascal Obry @ 2006-03-14 16:51 UTC (permalink / raw)
  To: Norbert Caspari

Norbert Caspari wrote:
> In Ada it is possible to declare multiple parallel running "tasks". But in 
> my opinion the keyword "task" is somewhat misleading because in fact, those 
> "tasks" are really threads.

Tasks are higher level than threads. For example, they come with
rendez-vous. Tasks can indeed be implemented using threads on some OSes.
This is the case for GNAT on GNU/Linux and Windows, for example.
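For readers unfamiliar with the rendez-vous mentioned above, here is a minimal sketch (the task name, entry name, and message are invented for this illustration; they come from no post in the thread):

```ada
with Ada.Text_IO;

procedure Rendezvous_Demo is

   task Printer is
      entry Put (S : String);  --  callers synchronize here
   end Printer;

   task body Printer is
   begin
      accept Put (S : String) do  --  rendez-vous: caller blocks until accepted
         Ada.Text_IO.Put_Line (S);
      end Put;
   end Printer;                   --  task terminates after one rendez-vous

begin
   Printer.Put ("hello from the main task");
end Rendezvous_Demo;
```

The entry call and the accept statement meet in time: whichever side arrives first waits for the other, which is exactly the synchronization that plain threads do not give you for free.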

> If I run such a program on a multi-cpu machine, the process itself will use 
> only one cpu, even though I create several "tasks".

Strange, threads should be properly scheduled by the OS to use multiple
CPUs. Looks like an OS issue to me... Or a runtime issue; I don't
remember how tasking was implemented on HPUX in GNAT 3.15p.

> I tested this with gnat v3.15p under HPUX 11 on a multi-cpu server.
> 
> How can I write my code to utilize all CPUs on such a machine? Is there a 
> different way in Ada to perform multi-tasking instead of multi-threading?

Use Tasks.

> Thank you for your help!

You're welcome.

Pascal.

-- 

--|------------------------------------------------------
--| Pascal Obry                           Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--|              http://www.obry.net
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver wwwkeys.pgp.net --recv-key C1082595




* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-14 16:26 question about tasks, multithreading and multi-cpu machines Norbert Caspari
  2006-03-14 16:51 ` Pascal Obry
@ 2006-03-14 17:18 ` Jean-Pierre Rosen
  2006-03-16  4:22   ` Norbert Caspari
  2006-03-14 18:49 ` Martin Krischik
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 29+ messages in thread
From: Jean-Pierre Rosen @ 2006-03-14 17:18 UTC (permalink / raw)


Norbert Caspari wrote:
> In Ada it is possible to declare multiple parallel running "tasks". But in 
> my opinion the keyword "task" is somewhat misleading because in fact, those 
> "tasks" are really threads.
"task" was the name commonly used for this at the time Ada was designed.
Only (much) later light-weight concurrency was introduced in Unix, and 
for some mysterious reason they called that "threads" instead of "tasks".

> If I run such a program on a multi-cpu machine, the process itself will use 
> only one cpu, even though I create several "tasks".
> 
It depends on which run-time you use. If you use FSU threads, you'll use 
only one CPU, but if you use native threads, you should get full use of 
your CPUs. Check the documentation to learn how to switch run-times (it 
is just a symbolic link you need to change).

-- 
---------------------------------------------------------
            J-P. Rosen (rosen@adalog.fr)
Visit Adalog's web site at http://www.adalog.fr




* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-14 16:26 question about tasks, multithreading and multi-cpu machines Norbert Caspari
  2006-03-14 16:51 ` Pascal Obry
  2006-03-14 17:18 ` Jean-Pierre Rosen
@ 2006-03-14 18:49 ` Martin Krischik
  2006-03-14 18:56 ` tmoran
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 29+ messages in thread
From: Martin Krischik @ 2006-03-14 18:49 UTC (permalink / raw)


Norbert Caspari wrote:

> If I run such a program on a multi-cpu machine, the process itself will
> use only one cpu, even though I create several "tasks".

That's not a (missing) language feature, it's a GNAT bug. Or perhaps not a
bug as such. See below.

> I tested this with gnat v3.15p under HPUX 11 on a multi-cpu server.

When the GNAT compiler is compiled, a specific thread model can be selected. The
default for GNAT is "--enable-threads=gnat". This thread model has the
advantage that it is compatible with all gcc target operating systems, but
it won't have all the features available.

If you were a paying AdaCore customer you could just file an error report
with AdaCore, and you would probably get a new compiler with a better thread
model activated pretty soon.

But since you use 3.15p I guess you are not a paying customer, and here comes
the bitter news: you have to do it yourself. That is, you need to get the
gcc sources (e.g. from http://gnuada.sf.net) and compile your own compiler.
For Unix systems I would suggest "--enable-threads=posix".
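For illustration, a hedged sketch of such a rebuild. The source directory
and install prefix below are hypothetical; the configure flags are the
standard gcc options of that era, which you should double-check against your
gcc version's installation documentation:

```shell
# Assumed layout: gcc sources (with Ada enabled) unpacked in ../gcc-src.
# Build in a separate directory, as gcc's docs recommend.
mkdir build
cd build
../gcc-src/configure --prefix=/opt/gnat \
                     --enable-languages=c,ada \
                     --enable-threads=posix
make bootstrap
make install
```

Building gcc/GNAT this way also requires an existing Ada compiler on the
build machine, since the Ada front end is itself written in Ada.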

If you are successful: how about uploading the finished compiler to The GNU
Ada Project?

Martin
-- 
mailto://krischik@users.sourceforge.net
Ada programming at: http://ada.krischik.com




* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-14 16:26 question about tasks, multithreading and multi-cpu machines Norbert Caspari
                   ` (2 preceding siblings ...)
  2006-03-14 18:49 ` Martin Krischik
@ 2006-03-14 18:56 ` tmoran
  2006-03-14 23:01 ` Jeffrey Creem
  2006-03-15  6:46 ` Simon Wright
  5 siblings, 0 replies; 29+ messages in thread
From: tmoran @ 2006-03-14 18:56 UTC (permalink / raw)


> If I run such a program on a multi-cpu machine, the process itself will use
> only one cpu, even though I create several "tasks".
   On the configuration you tried, perhaps.  Both GNAT 3.15p and ObjectAda
7.2.2 on a Windows 2000 hyperthreaded-CPU machine use both "cpus" for
multiple Ada tasks.




* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-14 16:26 question about tasks, multithreading and multi-cpu machines Norbert Caspari
                   ` (3 preceding siblings ...)
  2006-03-14 18:56 ` tmoran
@ 2006-03-14 23:01 ` Jeffrey Creem
  2006-03-15  1:15   ` Jeffrey R. Carter
  2006-03-16  8:06   ` Maciej Sobczak
  2006-03-15  6:46 ` Simon Wright
  5 siblings, 2 replies; 29+ messages in thread
From: Jeffrey Creem @ 2006-03-14 23:01 UTC (permalink / raw)


Norbert Caspari wrote:
> In Ada it is possible to declare multiple parallel running "tasks". But in 
> my opinion the keyword "task" is somewhat misleading because in fact, those 
> "tasks" are really threads.


Others answered the real question but I'll comment on this. Actually, 
"threads" is misleading, since threads are just tasks minus rendezvous. It 
all just depends on your perspective. Since Ada had tasking before 
(most) Unix had threads, I think Unix got the name wrong :)

Also, under VxWorks, the things you think of as threads are called tasks, and 
Ada tasks are layered on top of them.

So, the only thing I am trying to say here is that people seem to assume 
that task, thread and process have some formal definition that is all 
powerful. This is not true.




* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-14 23:01 ` Jeffrey Creem
@ 2006-03-15  1:15   ` Jeffrey R. Carter
  2006-03-16  8:06   ` Maciej Sobczak
  1 sibling, 0 replies; 29+ messages in thread
From: Jeffrey R. Carter @ 2006-03-15  1:15 UTC (permalink / raw)


Jeffrey Creem wrote:
> 
> Others answered the real question but I'll comment on this. Actually, 
> threads is misleading since threads are just tasks minus rendezvous. It 
> all just depends on your perspective. Since Ada had tasking before 
> (most) unix had threads I think Unix got the name wrong :)

At the Ada Launch (1980 Dec 10), when Ichbiah, Barnes, and Firth presented their 
new language, one of them said that they tended to prefer shorter words for 
reserved words, and task is shorter than thread or process. Of course, that 
doesn't explain why they picked package instead of module.

-- 
Jeff Carter
"My brain hurts!"
Monty Python's Flying Circus
21




* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-14 16:26 question about tasks, multithreading and multi-cpu machines Norbert Caspari
                   ` (4 preceding siblings ...)
  2006-03-14 23:01 ` Jeffrey Creem
@ 2006-03-15  6:46 ` Simon Wright
  5 siblings, 0 replies; 29+ messages in thread
From: Simon Wright @ 2006-03-15  6:46 UTC (permalink / raw)


Norbert Caspari <nnc@gmx.li> writes:

> In Ada it is possible to declare multiple parallel running "tasks". But in 
> my opinion the keyword "task" is somewhat misleading because in fact, those 
> "tasks" are really threads.

If you used VxWorks you would find that the right word to use was
'task' throughout. Threads/tasks mean different things to different
folks using different languages; get used to it.

On several platforms GNAT uses POSIX threads to implement tasking;
I don't know about HPUX.

> If I run such a program on a multi-cpu machine, the process itself will use 
> only one cpu, even though I create several "tasks".
>
> I tested this with gnat v3.15p under HPUX 11 on a multi-cpu server.
>
> How can I write my code to utilize all CPUs on such a machine? Is there a 
> different way in Ada to perform multi-tasking instead of multi-threading?

I have had an SMP Linux x86 box for years, runs tasks on both CPUs
with no trouble (though I haven't used 3.15p for a long time).




* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-14 17:18 ` Jean-Pierre Rosen
@ 2006-03-16  4:22   ` Norbert Caspari
  2006-03-16  6:58     ` Jean-Pierre Rosen
  0 siblings, 1 reply; 29+ messages in thread
From: Norbert Caspari @ 2006-03-16  4:22 UTC (permalink / raw)


Jean-Pierre Rosen wrote:

> It depends on which run-time you use. If you use FSU threads, you'll use
> only one CPU, but if you use native threads, you should get full use of
> your CPUs. Check the documentation to learn how to switch run-times (it
> is just a symbolic link you need to change).

This sounds interesting. Maybe a solution for me.

Could you please tell me more about this feature?

Thanks a lot for your help.

Best Regards, Norbert





* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-14 16:51 ` Pascal Obry
@ 2006-03-16  4:27   ` Norbert Caspari
  2006-03-16 10:04     ` Alex R. Mosteo
  0 siblings, 1 reply; 29+ messages in thread
From: Norbert Caspari @ 2006-03-16  4:27 UTC (permalink / raw)


Pascal Obry wrote:

> Use Tasks.

Thank you very much for answering my questions. But how can I use
multitasking with Ada? There is no such thing as "fork" available in
this programming language.

Best Regards, Norbert





* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-16  4:22   ` Norbert Caspari
@ 2006-03-16  6:58     ` Jean-Pierre Rosen
  0 siblings, 0 replies; 29+ messages in thread
From: Jean-Pierre Rosen @ 2006-03-16  6:58 UTC (permalink / raw)


Norbert Caspari wrote:
> Jean-Pierre Rosen wrote:
> 
>> It depends on which run-time you use. If you use FSU threads, you'll use
>> only one CPU, but if you use native threads, you should get full use of
>> your CPUs. Check the documentation to learn how to switch run-times (it
>> is just a symbolic link you need to change).
> 
> This sounds interesting. Maybe a solution for me.
> 
> Could you please tell me more about this feature?
> 
It's been a long time since I uninstalled my last 3.15p. Moreover, it 
was on Linux.

But there were two RTLs provided, in two different directories, with a 
symbolic link to choose which one was used. FSU threads provided full 
conformance to all annexes, but native threads provided more 
asynchrony. Please check your GNAT User's Guide for more details.
-- 
---------------------------------------------------------
            J-P. Rosen (rosen@adalog.fr)
Visit Adalog's web site at http://www.adalog.fr




* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-14 23:01 ` Jeffrey Creem
  2006-03-15  1:15   ` Jeffrey R. Carter
@ 2006-03-16  8:06   ` Maciej Sobczak
  2006-03-16 10:23     ` Ole-Hjalmar Kristensen
                       ` (3 more replies)
  1 sibling, 4 replies; 29+ messages in thread
From: Maciej Sobczak @ 2006-03-16  8:06 UTC (permalink / raw)


Jeffrey Creem wrote:

> Actually, 
> threads is misleading since threads are just tasks minus rendezvous. It 
> all just depends on your perspective. Since Ada had tasking before 
> (most) unix had threads I think Unix got the name wrong :)

In my humble opinion, Ada got it wrong in the first place.

{Pre-flame note: below is a very informal description of how I think 
about these terms while designing/programming/etc., not how they are 
defined in any language/system/etc. standard.}

What's funny, the terminology is used correctly in even the simplest 
programs for project planning, organizers, and such.

The "task" is something that needs to be done. It's an objective, or a 
goal to achieve. (I'm not a native English speaker, but my dictionaries 
seem to confirm this.)

To actually perform any given work, some kind of "resource" is needed. 
Whether it is a human resource or a computing resource does not matter - 
what's important is that this "resource" is treated as an *ability* to 
perform work. What's even more important is that there might be unused 
resources lying around, which can be just used to perform some work if 
they are available.

Now, "thread" is a software resource that represents the ability of the 
system to do some work.

The difference between "task" and "thread" is that "task" is something 
to do, whereas "thread" is a way to do it.

The tasks in Ada do not match any of the above. Other languages (for 
example Java) also got it wrong, by treating threads as objects or even 
by allowing a programmer to "subclass" or "specialize" a thread class. 
None of the languages that I'm aware of allows me to treat tasks and 
threads as they really are, which means that everybody got it wrong in 
one way or another. :)

> So, the only thing I am trying to say here is that people seem to assume 
> that task, thread and process have some formal definition that is all 
> powerful. This is not true.

Right.

-- 
Maciej Sobczak : http://www.msobczak.com/
Programming    : http://www.msobczak.com/prog/




* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-16  4:27   ` Norbert Caspari
@ 2006-03-16 10:04     ` Alex R. Mosteo
  0 siblings, 0 replies; 29+ messages in thread
From: Alex R. Mosteo @ 2006-03-16 10:04 UTC (permalink / raw)


Norbert Caspari wrote:
> Pascal Obry wrote:
> 
> 
>>Use Tasks.
> 
> 
> Thank you very much for answering my questions. But how can I use
> multitasking with Ada? There is no such thing as "fork" available in
> this programming language.

Multitasking is built into the language. Basically, each object of a 
task type is a new thread of execution. Check some tutorials on Ada tasking 
for details; sorry, I don't have references handy. Tasking is an 
important feature of Ada and so can't be properly explained in a single 
post.

Maybe:

http://en.wikibooks.org/wiki/Ada_Programming/Tasking

So, in Ada, instead of "forking" the current process into two, you simply 
create a new task. You'll see that Ada's tasking features are much more 
comfortable than low-level APIs for multithreading.

Two examples:

procedure Main is

    task Second;

    task body Second is
    begin
       loop
          null;
       end loop;
    end Second;

begin

    --  whatever

end;

Here you have your main task, whose code is in the body of the main 
procedure, and a second "thread" which is the code inside the body of 
Second. The Second task starts running during the elaboration (or 
"loading", if you're not familiar with the elaboration concept) of the 
main program.

Second example:

procedure Main is

    task type Child; -- Note the "type" keyword here, missing in 1st ex.

    task body Child is
    begin
       --  whatever
    end;

    type Access_Child is access all Child;

    C : Access_Child;

begin

    --  some code

    C := new Child;

    --  more code

end;

Here, you have your main task/thread embodied, as before, in the Main 
procedure. You create a new task/thread with the line

C := new Child;

and the code inside Child starts running at that point. The difference 
with the 1st example is that we are declaring a "task type", so no task 
is created until some object of that type is created. In the first 
example we directly declared a task object (not type), so the task 
started to exist right there.

Each task has its own stack, but all tasks share the global variables. 
You can then communicate amongst them with rendez-vous or protected objects.
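To round off the examples above, here is a small sketch of the second communication mechanism just mentioned, a protected object guarding a shared counter. The names are invented for this illustration, and the declaration belongs in a declarative region (a package or subprogram):

```ada
--  A protected object: calls to Increment get exclusive read-write
--  access; calls to Value may run concurrently (read-only access).
protected Counter is
   procedure Increment;
   function  Value return Natural;
private
   Count : Natural := 0;  --  the shared state being protected
end Counter;

protected body Counter is
   procedure Increment is
   begin
      Count := Count + 1;
   end Increment;

   function Value return Natural is
   begin
      return Count;
   end Value;
end Counter;
```

Any number of tasks can call Counter.Increment without a data race; the run-time system provides the mutual exclusion, so there is no explicit lock to forget.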




* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-16  8:06   ` Maciej Sobczak
@ 2006-03-16 10:23     ` Ole-Hjalmar Kristensen
  2006-03-16 12:59     ` Dmitry A. Kazakov
                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 29+ messages in thread
From: Ole-Hjalmar Kristensen @ 2006-03-16 10:23 UTC (permalink / raw)


Actually, I think the Ada task concept matches very closely the way the
term task is used in programs for project planning and organizers.  An
Ada task definition really says what needs to be done; it does not
consider the mapping to OS processes or threads, only the steps to get
the job done and the interfaces to other tasks. A task may not
terminate before its sub-tasks terminate. Much like the planning stage
of a project, I would say. Luckily, you do not have to hunt for
resources or map tasks to threads or processes; the compiler and run-time
system do it for you. Also observe that the specification of
protected types is such that it is possible to run code which is
related to two different tasks in the same thread. So I would say that
really, Ada tasks *are* logical tasks, not threads or processes. If you
think of Ada tasks as communicating sequential (logical) processes,
it is perhaps easier to see what I mean.

If you would like more interesting food for thought, have a look at the
Occam language. You have to explicitly tell the compiler if you want
things to be done sequentially, otherwise it is free to run your
statements in parallel :-)

>>>>> "MS" == Maciej Sobczak <no.spam@no.spam.com> writes:

    MS> Jeffrey Creem wrote:
    >> Actually, threads  is misleading since threads are  just tasks minus
    >> rendezvous. It  all just depends  on your perspective. Since  Ada had
    >> tasking before  (most) unix  had threads I  think Unix got  the name
    >> wrong :)


    MS> In my humble opinion, Ada got it wrong in the first place.

    MS> {Pre-flame note: below  is a very informal description  of how I think
    MS> about these  terms while designing/programming/etc., not  how they are
    MS> defined in any language/system/etc. standard.}


    MS> What's  funny, the  terminology  is correctly  used  in even  simplest
    MS> programs for project planning, organizers, and such.


    MS> The "task" is something that needs to be done. It's an objective, or a
    MS> goal  to  achieve.  (I'm  not   an  English  native  speaker,  but  my
    MS> dictionaries seem to confirm this.)


    MS> To  actually  perform any  given  work,  some  kind of  "resource"  is
    MS> needed. Whether  it is a human  resource or a  computing resource does
    MS> not matter -

    MS> what's important is that this "resource" is treated as an *ability* to
    MS> perform work. What's even more important is that there might be unused
    MS> resources lying around, which can be just used to perform some work if
    MS> they are available.


    MS> Now, "thread"  is a software  resource that represents the  ability of
    MS> the system to do some work.


    MS> The difference between "task" and "thread" is that "task" is something
    MS> to do, whereas "thread" is a way to do it.


    MS> The tasks in  Ada do not match any of the  above. Other languages (for
    MS> example Java)  also got  it wrong, by  treating threads as  objects or
    MS> even by allowing  a programmer to "subclass" or  "specialize" a thread
    MS> class. None  of the  languages that  I'm aware of  allows me  to treat
    MS> tasks and threads  as they really are, which  means that everybody got
    MS> it wrong in one way or another. :)


    >> So, the only  thing I am trying  to say here is that  people seem to
    >> assume  that task, thread  and process  have some  formal definition
    >> that is all powerful. This is not true.


    MS> Right.

    MS> -- 
    MS> Maciej Sobczak : http://www.msobczak.com/
    MS> Programming    : http://www.msobczak.com/prog/

-- 
   C++: The power, elegance and simplicity of a hand grenade.




* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-16  8:06   ` Maciej Sobczak
  2006-03-16 10:23     ` Ole-Hjalmar Kristensen
@ 2006-03-16 12:59     ` Dmitry A. Kazakov
  2006-03-16 15:11       ` Larry Kilgallen
                         ` (2 more replies)
  2006-03-16 20:06     ` Jeffrey R. Carter
  2006-03-25 21:28     ` Robert A Duff
  3 siblings, 3 replies; 29+ messages in thread
From: Dmitry A. Kazakov @ 2006-03-16 12:59 UTC (permalink / raw)


On Thu, 16 Mar 2006 09:06:12 +0100, Maciej Sobczak wrote:

> The tasks in Ada do not match any of the above. Other languages (for 
> example Java) also got it wrong, by treating threads as objects or even 
> by allowing a programmer to "subclass" or "specialize" a thread class. 

Hmm, if a task type is a type then it should obey the operations of the
type algebra, which include inheritance and aggregation. So I vote for tagged
task types.

What could be an alternative? Honestly, I can't think of anything useful.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-16 12:59     ` Dmitry A. Kazakov
@ 2006-03-16 15:11       ` Larry Kilgallen
  2006-03-16 15:50       ` Maciej Sobczak
  2006-03-17  3:26       ` Randy Brukardt
  2 siblings, 0 replies; 29+ messages in thread
From: Larry Kilgallen @ 2006-03-16 15:11 UTC (permalink / raw)


In article <1b9zea3ok5znb.13wc5k9as88ix.dlg@40tude.net>, "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
> On Thu, 16 Mar 2006 09:06:12 +0100, Maciej Sobczak wrote:
> 
>> The tasks in Ada do not match any of the above. Other languages (for 
>> example Java) also got it wrong, by treating threads as objects or even 
>> by allowing a programmer to "subclass" or "specialize" a thread class. 
> 
> Hmm, if task type is a type then it should obey operations of the types
> algebra, which includes inheritance, aggregation. So I vote for tagged task
> types.

Does Ada 2005 include tagged numeric types ?

How about arithmetic on record types ?




* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-16 12:59     ` Dmitry A. Kazakov
  2006-03-16 15:11       ` Larry Kilgallen
@ 2006-03-16 15:50       ` Maciej Sobczak
  2006-03-16 18:03         ` Jean-Pierre Rosen
  2006-03-17  3:26       ` Randy Brukardt
  2 siblings, 1 reply; 29+ messages in thread
From: Maciej Sobczak @ 2006-03-16 15:50 UTC (permalink / raw)


Dmitry A. Kazakov wrote:

> Hmm, if task type is a type [...]

Why *should* it be a type in the first place? :)

> What could be an alternative?

The difference between sequential and parallel execution of two 
instructions (or blocks of instructions, but that's obvious) should be 
addressed at the level of control structures, not types (nor objects).
This means that if I want to have these:

A := 7;
B := 8;

executed sequentially, then I write them sequentially - just like above 
(I already see posts claiming that the compiler or CPU can reorder these 
instructions, but that's not the point; if the instructions have 
side-effects, then the sequence really is a sequence, and I'm talking 
about expressing the programmer's intent here).
Executing those two instructions in parallel should not require me 
to define new types, allocate new objects, or do anything else like 
this. Think about it in terms of UML activity diagrams. The difference 
between sequential and parallel execution is just a plain syntax issue, 
without introducing any additional dedicated entities. The same 
should be possible at the level of source code - especially if we take 
into account that diagrams and source code should be just two ways to 
say the same thing.

I have no serious thoughts about Ada in this aspect - for the simple 
reason that I'm learning Ada and I don't know where my reasoning could 
possibly break. However, for your amusement, you might take a look at 
the small article that I wrote about the same thing but w.r.t. C++:

http://www.msobczak.com/prog/articles/threadscpp.html

Just changing curly brackets to "begin" and "end" might not be enough, 
but you should get the idea.

In short, the point is that concurrency should be handled at the level 
of control statements, without requiring any additional entities (type, 
object, etc.) for reasons other than communication and synchronization.

Only then am I ready to admit that the given language has *built-in* 
support for concurrency. :)


-- 
Maciej Sobczak : http://www.msobczak.com/
Programming    : http://www.msobczak.com/prog/




* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-16 15:50       ` Maciej Sobczak
@ 2006-03-16 18:03         ` Jean-Pierre Rosen
  2006-03-16 20:06           ` Dr. Adrian Wrigley
  0 siblings, 1 reply; 29+ messages in thread
From: Jean-Pierre Rosen @ 2006-03-16 18:03 UTC (permalink / raw)


Maciej Sobczak wrote:

> The difference between sequential and parallel execution of two 
> instructions (or blocks of instructions, but that's obvious) should be 
> addressed at the level of control structures, not types (nor objects).
> This means that if I want to have these:
> 
> A := 7;
> B := 8;
> 
> executed sequentially, then I write them sequentially - just like above 
> (I already see posts claiming that the compiler or CPU can reorder these 
> instructions, but that's not the point; if the instructions have 
> side-effects, then the sequence is really a sequence and I'm talking 
> about expressing the programmer's intents here).
Actually, in Occam you would write (syntax not guaranteed):
cobegin
    A:=7;
and
    B:=8;
coend;

> Executing those two instructions in parallel should not require me
> to define new types, allocate new objects, nor any other thing like 
> this. 
And why? That's your opinion, fine, but what support do you have for it?

The truth is that there is more than one concurrency paradigm. What you 
say is true for concurrent evaluation. You can have it in Ada (although 
it requires more typing) as in the following:

declare
    task T1;
    task body T1 is
    begin
       A := 7;
    end T1;

    task T2;
    task body T2 is
    begin
       B := 8;
    end T2;
begin
    null;
end;

Another model of concurrency is for performing concurrent tasks (this 
last word in the usual sense). For example, imagine you want to display 
the time in the upper left corner of your screen. This can be done 
easily with something like:

task Time_Display;
task body Time_Display is
begin
    loop
       Display (Clock);
       delay 1.0;
    end loop;
end Time_Display;

You cannot achieve this with any model of concurrent evaluation!
-- 
---------------------------------------------------------
            J-P. Rosen (rosen@adalog.fr)
Visit Adalog's web site at http://www.adalog.fr




* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-16  8:06   ` Maciej Sobczak
  2006-03-16 10:23     ` Ole-Hjalmar Kristensen
  2006-03-16 12:59     ` Dmitry A. Kazakov
@ 2006-03-16 20:06     ` Jeffrey R. Carter
  2006-03-17  8:22       ` Maciej Sobczak
  2006-03-25 21:28     ` Robert A Duff
  3 siblings, 1 reply; 29+ messages in thread
From: Jeffrey R. Carter @ 2006-03-16 20:06 UTC (permalink / raw)


Maciej Sobczak wrote:
> 
> The "task" is something that needs to be done. It's an objective, or a 
> goal to achieve. (I'm not an English native speaker, but my dictionaries 
> seem to confirm this.)
> 
> To actually perform any given work, some kind of "resource" is needed. 
> Whether it is a human resource or a computing resource does not matter - 
> what's important is that this "resource" is treated as an *ability* to 
> perform work. What's even more important is that there might be unused 
> resources lying around, which can be just used to perform some work if 
> they are available.
> 
> Now, "thread" is a software resource that represents the ability of the 
> system to do some work.
> 
> The difference between "task" and "thread" is that "task" is something 
> to do, whereas "thread" is a way to do it.

The way to design SW in a decent language such as Ada is to create in SW a model 
of the problem. If your problem includes multiple tasks to be completed 
concurrently, then your SW model of the problem should include multiple objects 
which are models of these tasks and which execute concurrently.

A good name for such objects in general would be "task", since that is their 
general name in the problem space.

"Thread" is an implementation detail which should be hidden by the much 
higher-level abstraction of your model.

-- 
Jeff Carter
"Now go away or I shall taunt you a second time."
Monty Python & the Holy Grail
07




* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-16 18:03         ` Jean-Pierre Rosen
@ 2006-03-16 20:06           ` Dr. Adrian Wrigley
  0 siblings, 0 replies; 29+ messages in thread
From: Dr. Adrian Wrigley @ 2006-03-16 20:06 UTC (permalink / raw)


On Thu, 16 Mar 2006 19:03:00 +0100, Jean-Pierre Rosen wrote:

> Maciej Sobczak wrote:
> 
>> The difference between sequential and parallel execution of two 
>> instructions (or blocks of instructions, but that's obvious) should be 
>> addressed at the level of control structures, not types (nor objects).
>> This means that if I want to have these:
>> 
>> A := 7;
>> B := 8;
>> 
>> executed sequentially, then I write them sequentially - just like above 
>> (I already see posts claiming that the compiler or CPU can reorder these 
>> instructions, but that's not the point; if the instructions have 
>> side-effects, then the sequence is really a sequence and I'm talking 
>> about expressing the programmer's intents here).
> Actually, in Occam you would write (syntax not guaranteed):
> cobegin
>     A:=7;
> and
>     B:=8;
> coend;

Don't you use a "par" statement here?

PAR
   A := 7
   B := 8

<rant>
I have been thinking about the design of programming languages with
regard to efficient execution on parallel hardware.  I find that
most languages are virtually useless for addressing any major
form of parallelism.  Ada *is* helpful in addressing distributed
and MIMD/SMP architectures.  But there is no lightweight syntax
for simple concurrency :(  Occam had the right idea.  VHDL is good
here too.

In VHDL, we simply write:

   A <= 7;
   B <= 8;
   C <= FancyFunction (Z);

And all the assignments run concurrently (and continuously!).

I would like to have seen Ada 2005 address parallel programming better,
but the lack of real experience on highly parallel hardware is the
cause (and effect) of the poor state of the (parallel) hardware industry. 
This vicious circle is a deadlock to industry progress.

In addition to lightweight concurrent statements (like Occam, VHDL),
I'd like to see a decent data parallel syntax and paradigm (like ZPL,
Data Parallel C (ANSI X3J11.1 report), or HPF).

Finally, I'd like to see the ability to mark subprograms as "pure"
to permit memoization (caching), elimination of duplicated calls,
reordering, and concurrent execution.
This would provide semantics to facilitate Cilk-like multithreading.

A major part of programming in most programming languages is the
analysis and transcription of a problem into fully sequential
steps.  But the main objective of hardware design is to turn
the sequential steps back into concurrent activity - whether it is
by hardware pipelining, speculative execution, multithreading (SMT),
or multiprocessing (SMP).  The objective is also shared by compiler
writers trying to un-sequentialize programs, and extract vector
operations.  What a waste!

Surely the failure of modern programming languages to expose
problem parallelism directly is one of the main causes of the "heroics"
going on in processor design today, and of the failure of supercomputer
design to innovate with efficient, cost-effective solutions?

Ada offers a sound technical basis for the language innovations necessary.

Unfortunately, parallel computing has become a "Cinderella" subject
over the decades of commercial and technical failures.  Always
five years away from ubiquity.  How can the deadlock be broken?
</rant>
--
Adrian



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-16 12:59     ` Dmitry A. Kazakov
  2006-03-16 15:11       ` Larry Kilgallen
  2006-03-16 15:50       ` Maciej Sobczak
@ 2006-03-17  3:26       ` Randy Brukardt
  2 siblings, 0 replies; 29+ messages in thread
From: Randy Brukardt @ 2006-03-17  3:26 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
news:1b9zea3ok5znb.13wc5k9as88ix.dlg@40tude.net...
> On Thu, 16 Mar 2006 09:06:12 +0100, Maciej Sobczak wrote:
>
> > The tasks in Ada do not match any of the above. Other languages (for
> > example Java) also got it wrong, by treating threads as objects or even
> > by allowing a programmer to "subclass" or "specialize" a thread class.
>
> Hmm, if a task type is a type then it should obey the operations of the
> type algebra, which include inheritance and aggregation. So I vote for
> tagged task types.

Sounds like you're ready for Ada 200Y, which has tagged task types. :-)

No extension, though, because no one has a clear model for what it could
mean to extend a chunk of code (especially to compose synchronization in a
meaningful way).

                            Randy.





^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-16 20:06     ` Jeffrey R. Carter
@ 2006-03-17  8:22       ` Maciej Sobczak
  2006-03-17 11:36         ` Dmitry A. Kazakov
  2006-03-17 19:42         ` Jeffrey R. Carter
  0 siblings, 2 replies; 29+ messages in thread
From: Maciej Sobczak @ 2006-03-17  8:22 UTC (permalink / raw)


Jeffrey R. Carter wrote:

>> The difference between "task" and "thread" is that "task" is something 
>> to do, whereas "thread" is a way to do it.
> 
> The way to design SW in a decent language such as Ada is to create in SW 
> a model of the problem. If your problem includes multiple tasks to be 
> completed concurrently, then your SW model of the problem should include 
> multiple objects which are models of these tasks and which execute 
> concurrently.

Makes sense, but it is biased towards a single programming paradigm.

Consider a simple example of two long vectors that need to be added. In 
the simplest case you do this:

    for I in Vector'Range loop
       V3(I) := V1(I) + V2(I);
    end loop;

and you're done.

Now, assume that you want to target dual-CPU machine and you *know* that 
you could greatly benefit from making things in paraller.

Do you see any need to model the above sentence using additional objects?
I don't, because in the *application domain* no additional type nor 
object was created by just computing things in paraller.

That's why the "lightweight concurrency", like in Occam, would be more 
appropriate here. Something like this:

declare
    procedure Add_Range(First, Last : in Index_Range) is
    begin
       for I in First .. Last loop
          V3(I) := V1(I) + V2(I);
       end loop;
    end Add_Range;
begin
    Add_Range(Vector'First, Vector'Last / 2);
with
    Add_Range(Vector'Last / 2 + 1, Vector'Last);
end;

(syntax taken from the top of my head)

I see the above as better expressing what is really done.
The same example using additional types or objects would not make the 
code more readable for me.
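
For comparison, a sketch of the same split using the tasks we have today
(assuming V1, V2 and V3 are visible at this point, that the index is an
Integer subtype as in the loop above, and computing the midpoint from
Vector'First so it also works for ranges not starting at 1):

```ada
declare
   Mid : constant Integer :=
     Vector'First + (Vector'Last - Vector'First) / 2;

   task Lower_Half;
   task body Lower_Half is
   begin
      for I in Vector'First .. Mid loop
         V3 (I) := V1 (I) + V2 (I);
      end loop;
   end Lower_Half;
begin
   --  The enclosing flow of control adds the upper half while
   --  Lower_Half runs; the block waits for Lower_Half to complete.
   for I in Mid + 1 .. Vector'Last loop
      V3 (I) := V1 (I) + V2 (I);
   end loop;
end;
```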


-- 
Maciej Sobczak : http://www.msobczak.com/
Programming    : http://www.msobczak.com/prog/



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-17  8:22       ` Maciej Sobczak
@ 2006-03-17 11:36         ` Dmitry A. Kazakov
  2006-03-17 14:23           ` Maciej Sobczak
  2006-03-17 19:42         ` Jeffrey R. Carter
  1 sibling, 1 reply; 29+ messages in thread
From: Dmitry A. Kazakov @ 2006-03-17 11:36 UTC (permalink / raw)


On Fri, 17 Mar 2006 09:22:21 +0100, Maciej Sobczak wrote:

> Consider a simple example of two long vectors that need to be added. In 
> the simplest case you do this:
> 
>     for I in Vector'Range loop
>        V3(I) := V1(I) + V2(I);
>     end loop;
> 
> and you're done.
> 
> Now, assume that you want to target dual-CPU machine and you *know* that 
> you could greatly benefit from making things in paraller.
> 
> Do you see any need to model the above sentence using additional objects?
> I don't, because in the *application domain* no additional type nor 
> object was created by just computing things in paraller.

No. If you *know* that the application has to be executed on a
multi-processor machine then it is within the application domain. If you
don't or don't care, then it is an optimization issue.

So if it is in the domain, then you should invest a little bit more in
careful design, and consider the performance issue on 1-, 2-, and
n-processor machines. Concurrency will then be a significant part of the design.

> That's why the "lightweight concurrency", like in Occam, would be more 
> appropriate here. Something like this:
> 
> declare
>     procedure Add_Range(First, Last : in Index_Range) is
>     begin
>        for I in First .. Last loop
>           V3(I) := V1(I) + V2(I);
>        end loop;
>     end Add_Range;
> begin
>     Add_Range(Vector'First, Vector'Last / 2);
> with
>     Add_Range(Vector'Last / 2 + 1, Vector'Last);
> end;
> 
> (syntax taken from the top of my head)

I don't think that Occam's style of concurrency could be a viable
alternative. Mapping concurrency onto control flow statements is OK, and
Ada has this as accept and select statements [AFAIK, Occam was one of Ada's
precursors.]  But the real problem is that it isn't scalable or
composable. Code like the above is very difficult to reuse. Say you want to
extend it. How can you add something into each of the two execution paths?
Without code modification? Say I call some subprograms from each of the
execution paths, how could they communicate? What about safety? They would
definitely access some common data. If they spin for a lock, then what was
the gain of concurrency? If they don't, how do you maintain that sort of code
in a large complex system? How can I rewrite it for n-processors (n is
unknown in advance)?

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-17 11:36         ` Dmitry A. Kazakov
@ 2006-03-17 14:23           ` Maciej Sobczak
  2006-03-17 19:10             ` Dmitry A. Kazakov
  0 siblings, 1 reply; 29+ messages in thread
From: Maciej Sobczak @ 2006-03-17 14:23 UTC (permalink / raw)


Dmitry A. Kazakov wrote:

> No. If you *know* that the application has to be executed on a
> multi-processor machine then it is within the application domain.

That's a valid point of view.

> If you
> don't or don't care, then it is an optimization issue.

In which case someone somewhere needs to translate it into source code, 
unless you have a very smart parallerizing compiler. I'm addressing the 
source code aspect of the whole.


> I don't think that Occam's style of concurrency could be a viable
> alternative. Mapping concurrency onto control flow statements is OK, and
> Ada has this as accept and select statements

Yes, but accept and select are statements that operate with the 
concurrency that is already there. The problem is how to introduce this 
concurrency in the first place, without resorting to polluting the final 
solution with irrelevant entities.


> Code like the above is very difficult to reuse. Say you want to
> extend it. How can you add something into each of the two execution paths?

In the same way as you add something into each branch of the If statement.

> Without code modification?

So what about the If statement? :)
Can you extend the branches of If without code modification? :|

> Say I call some subprograms from each of the
> execution paths, how could they communicate?

Using objects that are dedicated for this, ehem, task.

> What about safety?

I don't see anything that would make it less safe than two separate task 
bodies. Take the code already presented by Jean-Pierre Rosen:

declare
    task T1;
    task body T1 is
    begin
       A := 7;
    end T1;

    task T2;
    task body T2 is
    begin
       B := 8;
    end T2;
begin
    null;
end;

My *equivalent* example would be:

begin
    A := 7;
with
    B := 8;
end;

(where's the Ada readability where you need it? ;) )

From the safety point of view, what makes my example worse than the one 
above it? What makes it less safe?

> They would
> definitely access some common data?

Probably yes, probably not - depends on the actual problem to be solved.
Let's say that yes, there is some shared data. What makes the data 
sharing more difficult/unsafe than in the case of two separate task bodies?

> If they spin for a lock, then what was
> the gain of concurrency?

What is the gain of concurrency in the presence of locks with two 
separate task bodies?
This issue is completely orthogonal to the way the concurrency is 
expressed. You have the same problems and the same solutions no matter 
what is the syntax used to introduce concurrency.

> If they don't; how to maintain such sort of code
> in a large complex system?

The difference between separate task bodies and support for 
concurrency at the level of control statements is that the former can 
*always* be built on top of the latter. The other way round is not true.

> How can I rewrite it for n-processors (n is
> unknown in advance)?

With the help of asynchronous blocks, for example (I've mentioned them 
in the article on my web page).
As said above, more structured solutions can always be built on top of 
less structured ones. In particular, it would be quite straightforward 
to build a (generic?) package for this purpose that would internally be 
implemented with the help of concurrent control structures - this is 
always possible. What I want is the ability to express concurrency 
at a level that does not require me to use any new entity, if this new 
entity does not emerge by itself in the problem analysis.
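
Such a (generic?) package could be sketched roughly like this (all names
invented; assumes Last >= First and that Process treats a slice with
First > Last as empty):

```ada
generic
   Worker_Count : Positive;
   with procedure Process (First, Last : Positive);
procedure Parallel_For (First, Last : Positive);

procedure Parallel_For (First, Last : Positive) is
   --  Ceiling division: each worker gets at most Chunk elements.
   Chunk : constant Positive :=
     (Last - First + Worker_Count) / Worker_Count;

   task type Worker is
      entry Start (F, L : Positive);
   end Worker;

   task body Worker is
      From, To : Positive;
   begin
      accept Start (F, L : Positive) do
         From := F;
         To   := L;
      end Start;
      Process (From, To);  --  An empty slice (From > To) does nothing.
   end Worker;

   Workers : array (1 .. Worker_Count) of Worker;
   Lo      : Positive := First;
begin
   for W in Workers'Range loop
      Workers (W).Start (Lo, Positive'Min (Lo + Chunk - 1, Last));
      Lo := Lo + Chunk;
   end loop;
   --  The master does not leave this block until every Worker has
   --  terminated, so all slices are done when Parallel_For returns.
end Parallel_For;
```

For the vector sum, an instantiation with Process => Add_Range and a
single call replaces the loop, and Worker_Count is chosen to match the
machine.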

And the problem is that all popular languages (and libraries in the case 
of C++, for example) force me to work with entities which are irrelevant 
to the concept of concurrency itself.

-- 
Maciej Sobczak : http://www.msobczak.com/
Programming    : http://www.msobczak.com/prog/



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-17 14:23           ` Maciej Sobczak
@ 2006-03-17 19:10             ` Dmitry A. Kazakov
  0 siblings, 0 replies; 29+ messages in thread
From: Dmitry A. Kazakov @ 2006-03-17 19:10 UTC (permalink / raw)


On Fri, 17 Mar 2006 15:23:37 +0100, Maciej Sobczak wrote:

> Dmitry A. Kazakov wrote:
> 
>> If you
>> don't or don't care, then it is an optimization issue.
> 
> In which case someone somewhere needs to translate it into source code, 
> unless you have a very smart parallerizing compiler. I'm addressing the 
> source code aspect of the whole.

I prefer to leave that to the compiler. After all, you don't manually
optimize programs for the pipelines of a concrete CPU. At some point you
say: OK, that's my abstraction level, I don't care what's going on beneath,
because it is economically unreasonable.
 
>> I don't think that Occam's style of concurrency could be a viable
>> alternative. Mapping concurrency onto control flow statements is OK, and
>> Ada has this as accept and select statements
> 
> Yes, but accept and select are statements that operate with the 
> concurrency that is already there. The problem is how to introduce this 
> concurrency in the first place, without resorting to polluting the final 
> solution with irrelevant entities.

I'm not sure what you mean. But, fundamentally, concurrency cannot be
described in terms of a Turing Machine. It is incomputable. You have to
postulate it.

>> Code like the above is very difficult to reuse. Say you want to
>> extend it. How can you add something into each of the two execution paths?
> 
> In the same way as you add something into each branch of the If statement.

Absolutely, because both are too low-level.

You can extend a polymorphic subprogram.

>> Without code modification?
> 
> So what about the If statement? :)
> Can you extend the branches of If without code modification? :|

One way is how constructors and destructors are extended.

>> Say I call some subprograms from each of the
>> execution paths, how could they communicate?
> 
> Using objects that are dedicated for this, ehem, task.

OK, but then the concept of light-weight concurrency you have described
is incomplete. You need something else to make it usable. Ada now has
protected objects as well, though tasks were complete without them. 

>> What about safety?
> 
> I don't see anything that would make it less safe than two separate task 
> bodies. Take the code already presented by Jean-Pierre Rosen:
> 
> declare
>     task T1;
>     task body T1 is
>     begin
>        A := 7;
>     end T1;
> 
>     task T2;
>     task body T2 is
>     begin
>        B := 8;
>     end T2;
> begin
>     null;
> end;
> 
> My *equivalent* example would be:
> 
> begin
>     A := 7;
> with
>     B := 8;
> end;
> 
> (where's the Ada readability where you need it? ;) )

Well, it is like claiming that *ptr++ is more readable. Yes it is, if the
program is three lines long. Ada's variant fares better when concurrent
paths become more complicated. A further advantage is that Ada's way is
modular from the start. Your model will sooner or later end up with:

begin
   Do_This;
with
   Do_That;
end;

not much different from:

   X : Do_This;  -- Task
   Y : Do_That; -- Another task
begin
   null;
end;

> From the safety point of view, what makes my example worse than the one 
> above it? What makes it less safe?

Concurrent bodies are visually decoupled. They don't share a common context.
Nothing forces them to be in the same package. It feels safer. (:-))

>> They would
>> definitely access some common data?
> 
> Probably yes, probably not - depends on the actual problem to be solved.
> Let's say that yes, there is some shared data. What makes the data 
> sharing more difficult/unsafe than in the case of two separate task bodies?

Tasks have contracted interfaces, I mean rendezvous. This is the normal way
for tasks to communicate with each other. It can to some extent be analyzed
statically, provided programmers do not misuse it. For an Occam-like
paradigm no contracts are stated. Most likely programmers will use
shared variables.

>> If they spin for a lock, then what was
>> the gain of concurrency?
> 
> What is the gain of concurrency in the presence of locks with two 
> separate task bodies?
> This issue is completely orthogonal to the way the concurrency is 
> expressed. You have the same problems and the same solutions no matter 
> what is the syntax used to introduce concurrency.

I meant whether the construct begin ... with ... end is atomic. I vaguely
remember that Occam had channels to communicate between scheduling items.

>> If they don't; how to maintain such sort of code
>> in a large complex system?
> 
> The difference between separate task bodies and support for 
> concurrency at the level of control statements is that the former can 
> *always* be built on top of the latter.

I don't think so. Consider: T1 starts T2 and then completes before T2.

> The other way round is not true.

If you allow functional decomposition it will.

>> How can I rewrite it for n-processors (n is
>> unknown in advance)?
> 
> With the help of asynchronous blocks, for example (I've mentioned them 
> in the article on my web page).
> As said above, more structured solutions can always be built on top of 
> less structured ones. In particular, it would be quite straightforward 
> to build a (generic?) package for this purpose that would internally be 
> implemented with the help of concurrent control structures - this is 
> always possible. What I want is the ability to express concurrency 
> at a level that does not require me to use any new entity, if this new 
> entity does not emerge by itself in the problem analysis.

It is OK, but it is only one side of the truth. Also, in the case where
the application domain is orthogonal to concurrency, I'd prefer no
statements at all. I'd like to specify some constraints and let the compiler
derive the threads it wants.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-17  8:22       ` Maciej Sobczak
  2006-03-17 11:36         ` Dmitry A. Kazakov
@ 2006-03-17 19:42         ` Jeffrey R. Carter
  2006-03-18  0:27           ` tmoran
  1 sibling, 1 reply; 29+ messages in thread
From: Jeffrey R. Carter @ 2006-03-17 19:42 UTC (permalink / raw)


Maciej Sobczak wrote:
> 
> Consider a simple example of two long vectors that need to be added. In 
> the simplest case you do this:
> 
>    for I in Vector'Range loop
>       V3(I) := V1(I) + V2(I);
>    end loop;
> 
> and you're done.

No, in all cases you should do

V3 := V1 + V2;

with an appropriate definition of "+" (which may be similar to your example).
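
A sketch of such a definition (assuming both operands have the same
bounds; any parallel split then hides inside "+"):

```ada
type Vector is array (Positive range <>) of Float;

function "+" (Left, Right : Vector) return Vector is
   Result : Vector (Left'Range);
begin
   --  Sequential body; a parallel implementation could split
   --  Left'Range across tasks here without changing any caller.
   for I in Left'Range loop
      Result (I) := Left (I) + Right (I);
   end loop;
   return Result;
end "+";
```

With this in place, "V3 := V1 + V2;" reads the same whether the body is
sequential or parallel.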

> Now, assume that you want to target dual-CPU machine and you *know* that 
> you could greatly benefit from making things in paraller.

Then this is part of your requirements, and should be reflected in your design 
and implementation.

Incidentally, the English word is "parallel".

The interesting problem is writing portable code that takes advantage of N 
processors (N = 1, 2, 3, ...), with N unknown until run time.

-- 
Jeff Carter
"The time has come to act, and act fast. I'm leaving."
Blazing Saddles
36



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-17 19:42         ` Jeffrey R. Carter
@ 2006-03-18  0:27           ` tmoran
  0 siblings, 0 replies; 29+ messages in thread
From: tmoran @ 2006-03-18  0:27 UTC (permalink / raw)


There's rather a big difference between statement level granularity
(like the Occam examples) and module level (like Ada tasks).  It's
merely confusing to mix them together.

>The interesting problem is writing portable code that takes advantage of N
>processors (N = 1, 2, 3, ...), with N unknown until run time.
   A multitasking quicksort on my hyperthreaded machine runs faster than
the single tasking version IF the "extra" CPU isn't already busy doing
something else and IF the data set to sort isn't so big that breaking it
up results in cache thrashing.  Writing such code portably is indeed an
"interesting problem".



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: question about tasks, multithreading and multi-cpu machines
  2006-03-16  8:06   ` Maciej Sobczak
                       ` (2 preceding siblings ...)
  2006-03-16 20:06     ` Jeffrey R. Carter
@ 2006-03-25 21:28     ` Robert A Duff
       [not found]       ` <43gb22h4811ojjh308r2lqf5qqrujijjok@4ax.com>
  3 siblings, 1 reply; 29+ messages in thread
From: Robert A Duff @ 2006-03-25 21:28 UTC (permalink / raw)


Maciej Sobczak <no.spam@no.spam.com> writes:

> Jeffrey Creem wrote:
> 
> > Actually, threads is misleading since threads are just tasks minus
> > rendezvous. It all just depends on your perspective. Since Ada had
> > tasking before (most) unix had threads I think Unix got the name wrong
> > :)
> 
> In my humble opinion, Ada got it wrong in the first place.

I tend to agree that "thread" is a better term than "task".
But I think you get it wrong when you say "wrong".  ;-)
It's wrong to call names-for-things "wrong" -- they are just
conventions.  See Lewis Carroll.

> The tasks in Ada do not match any of the above. Other languages (for
> example Java) also got it wrong, by treating threads as objects or even
> by allowing a programmer to "subclass" or "specialize" a thread
> class. None of the languages that I'm aware of allows me to treat tasks
> and threads as they really are, which means that everybody got it wrong
> in one way or another. :)

I don't understand why "threads as objects" seems wrong to you.

> > So, the only thing I am trying to say here is that people seem to
> > assume that task, thread and process have some formal definition that
> > is all powerful. This is not true.
> 
> Right.

Right.  Task, thread, and process have all been used interchangeably in
the past.  These days, "process" seems to imply a separate memory
context, though.

- Bob



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: question about tasks, multithreading and multi-cpu machines
       [not found]       ` <43gb22h4811ojjh308r2lqf5qqrujijjok@4ax.com>
@ 2006-03-26  0:44         ` Robert A Duff
  0 siblings, 0 replies; 29+ messages in thread
From: Robert A Duff @ 2006-03-26  0:44 UTC (permalink / raw)


Dennis Lee Bieber <wlfraed@ix.netcom.com> writes:

> 	<G> Don't you mean Charles Dodgson <G>

:-) :-) 

- Bob



^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread, other threads:[~2006-03-26  0:44 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2006-03-14 16:26 question about tasks, multithreading and multi-cpu machines Norbert Caspari
2006-03-14 16:51 ` Pascal Obry
2006-03-16  4:27   ` Norbert Caspari
2006-03-16 10:04     ` Alex R. Mosteo
2006-03-14 17:18 ` Jean-Pierre Rosen
2006-03-16  4:22   ` Norbert Caspari
2006-03-16  6:58     ` Jean-Pierre Rosen
2006-03-14 18:49 ` Martin Krischik
2006-03-14 18:56 ` tmoran
2006-03-14 23:01 ` Jeffrey Creem
2006-03-15  1:15   ` Jeffrey R. Carter
2006-03-16  8:06   ` Maciej Sobczak
2006-03-16 10:23     ` Ole-Hjalmar Kristensen
2006-03-16 12:59     ` Dmitry A. Kazakov
2006-03-16 15:11       ` Larry Kilgallen
2006-03-16 15:50       ` Maciej Sobczak
2006-03-16 18:03         ` Jean-Pierre Rosen
2006-03-16 20:06           ` Dr. Adrian Wrigley
2006-03-17  3:26       ` Randy Brukardt
2006-03-16 20:06     ` Jeffrey R. Carter
2006-03-17  8:22       ` Maciej Sobczak
2006-03-17 11:36         ` Dmitry A. Kazakov
2006-03-17 14:23           ` Maciej Sobczak
2006-03-17 19:10             ` Dmitry A. Kazakov
2006-03-17 19:42         ` Jeffrey R. Carter
2006-03-18  0:27           ` tmoran
2006-03-25 21:28     ` Robert A Duff
     [not found]       ` <43gb22h4811ojjh308r2lqf5qqrujijjok@4ax.com>
2006-03-26  0:44         ` Robert A Duff
2006-03-15  6:46 ` Simon Wright

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox