comp.lang.ada
* basic question on Ada tasks and running on different cores
@ 2012-05-06  7:38 Nasser M. Abbasi
  2012-05-06  7:59 ` Gautier write-only
                   ` (3 more replies)
  0 siblings, 4 replies; 28+ messages in thread
From: Nasser M. Abbasi @ 2012-05-06  7:38 UTC (permalink / raw)



Assume I am using a PC with, say, 1,000 cores (maybe
in a few short years).

If I use Ada and create many, many tasks, will these
tasks automatically be scheduled to run on as many
different cores as possible, so as to spread the load and
achieve the most parallelism possible?

Is this something that is controlled by the Ada run-time
automatically, or is it the OS that is in charge here
of which task (i.e. thread) runs on which core?

Will the programmer have to do anything other than just
create the tasks (and of course protect any critical
sections as needed), without worrying about the mapping
of tasks to cores and all the other scheduling issues?

Sorry for such a basic question, but I have not looked
at this for a long time.

thanks,
--Nasser




* Re: basic question on Ada tasks and running on different cores
  2012-05-06  7:38 basic question on Ada tasks and running on different cores Nasser M. Abbasi
@ 2012-05-06  7:59 ` Gautier write-only
  2012-05-06 10:02 ` Simon Wright
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 28+ messages in thread
From: Gautier write-only @ 2012-05-06  7:59 UTC (permalink / raw)


On 6 mai, 09:38, "Nasser M. Abbasi" <n...@12000.org> wrote:

> Assume I am using a PC with, say, 1,000 cores (maybe
> in a few short years).
>
> If I use Ada and create many, many tasks, will these
> tasks automatically be scheduled to run on as many
> different cores as possible, so as to spread the load and
> achieve the most parallelism possible?

Yes. Actually, the operating system does it for you, automatically.
The difficulty would be in *not* using as many cores or CPUs as are
available.
_________________________
Gautier's Ada programming
http://freecode.com/users/gdemont




* Re: basic question on Ada tasks and running on different cores
  2012-05-06  7:38 basic question on Ada tasks and running on different cores Nasser M. Abbasi
  2012-05-06  7:59 ` Gautier write-only
@ 2012-05-06 10:02 ` Simon Wright
  2012-05-06 10:31   ` Ludovic Brenta
  2012-05-06 14:13 ` Robert A Duff
  2012-05-07  7:36 ` anon
  3 siblings, 1 reply; 28+ messages in thread
From: Simon Wright @ 2012-05-06 10:02 UTC (permalink / raw)


"Nasser M. Abbasi" <nma@12000.org> writes:

> If I use Ada and create many, many tasks, will these tasks
> automatically be scheduled to run on as many different cores as
> possible, so as to spread the load and achieve the most parallelism
> possible?
>
> Is this something that is controlled by the Ada run-time automatically,
> or is it the OS that is in charge here of which task (i.e. thread) runs
> on which core?
>
> Will the programmer have to do anything other than just create the
> tasks (and of course protect any critical sections as needed), without
> worrying about the mapping of tasks to cores and all the other
> scheduling issues?

One imagines that Ada RTSs will offer the ability to map tasks to
cores, building on the facilities offered by OSes to specify
processor affinities. Whether Ada 2020 will standardise an interface to
this remains to be determined! (ada-auth.org isn't responding at the
moment, so I can't tell whether there's anything in Ada 2012.)




* Re: basic question on Ada tasks and running on different cores
  2012-05-06 10:02 ` Simon Wright
@ 2012-05-06 10:31   ` Ludovic Brenta
  2012-05-06 14:14     ` Robert A Duff
  0 siblings, 1 reply; 28+ messages in thread
From: Ludovic Brenta @ 2012-05-06 10:31 UTC (permalink / raw)


Simon Wright writes on comp.lang.ada:
> One imagines that Ada RTS's will offer the ability to map tasks to
> cores, building on the facilities offered by the OS's to specify
> processor affinities. Whether Ada 2020 will standardise an interface
> to this is to be determined! (ada-auth.org isn't responding at the
> moment, so I can't tell whether there's anything in Ada 2012).

Ada 2012 has an entire clause, D.16, "Multiprocessor Implementation",
devoted to this.  The salient part is:

7/3 For a task type (including the anonymous type of a
single_task_declaration) or subprogram, the following language-defined
representation aspect may be specified:

8/3 CPU

               The aspect CPU is an expression, which shall be of type
               System.Multiprocessors.CPU_Range.

This allows assigning a single CPU to a task; the problem is that the
value of the aspect_specification must be static.  In subclause D.16.1
"Dispatching Domains" we see:

16/3
The type Dispatching_Domain represents a series of processors on which
a task may execute. Each processor is contained within exactly one
Dispatching_Domain. System_Dispatching_Domain contains the processor or
processors on which the environment task executes. At program start-up
all processors are contained within System_Dispatching_Domain.

17/3
For a task type (including the anonymous type of a
single_task_declaration), the following language-defined representation
aspect may be specified:

18/3
Dispatching_Domain

               The value of aspect Dispatching_Domain is an expression,
               which shall be of type
               Dispatching_Domains.Dispatching_Domain. This aspect is
               the domain to which the task (or all objects of the task
               type) are assigned.

This allows assigning any one of a range of CPUs to a task.
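
For illustration, a minimal sketch of how both aspects might be used,
assuming an Ada 2012 compiler that implements D.16 and a machine with at
least four CPUs (the package and task names are invented):

with System.Multiprocessors;
with System.Multiprocessors.Dispatching_Domains;

package Pinned_Tasks is

   use System.Multiprocessors;

   --  Every object of this task type is pinned to CPU 2 via the CPU aspect.
   task type Worker with CPU => 2;

   --  A dispatching domain covering CPUs 3 .. Number_Of_CPUs.  Create
   --  must run before the main subprogram starts, hence a library-level
   --  package; it raises Dispatching_Domain_Error if that range is not
   --  available.
   Crunchers : Dispatching_Domains.Dispatching_Domain :=
     Dispatching_Domains.Create (3, Number_Of_CPUs);

   --  Every object of this task type may run on any CPU in Crunchers.
   task type Cruncher with Dispatching_Domain => Crunchers;

end Pinned_Tasks;

package body Pinned_Tasks is

   task body Worker is
   begin
      null;  --  real work goes here
   end Worker;

   task body Cruncher is
   begin
      null;  --  real work goes here
   end Cruncher;

end Pinned_Tasks;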

-- 
Ludovic Brenta.
An enhanced, consistent, best practice culturally prioritizes an
emotional intelligence.




* Re: basic question on Ada tasks and running on different cores
  2012-05-06  7:38 basic question on Ada tasks and running on different cores Nasser M. Abbasi
  2012-05-06  7:59 ` Gautier write-only
  2012-05-06 10:02 ` Simon Wright
@ 2012-05-06 14:13 ` Robert A Duff
  2012-05-07  7:36 ` anon
  3 siblings, 0 replies; 28+ messages in thread
From: Robert A Duff @ 2012-05-06 14:13 UTC (permalink / raw)


"Nasser M. Abbasi" <nma@12000.org> writes:

> If I use Ada and create many, many tasks, will these
> tasks automatically be scheduled to run on as many
> different cores as possible, so as to spread the load and
> achieve the most parallelism possible?

Yes.  Unless you have particular requirements, you don't
have to do anything special.

> Is this something that is controlled by Ada run-time
> automatically, or is it the OS that that is in charge here
> with which task (i.e. thread) runs on which core?

The way GNAT works, it's the OS.  But it doesn't have
to be that way.

- Bob




* Re: basic question on Ada tasks and running on different cores
  2012-05-06 10:31   ` Ludovic Brenta
@ 2012-05-06 14:14     ` Robert A Duff
  2012-05-06 16:07       ` Vinzent Hoefler
  2012-05-06 19:43       ` Ludovic Brenta
  0 siblings, 2 replies; 28+ messages in thread
From: Robert A Duff @ 2012-05-06 14:14 UTC (permalink / raw)


Ludovic Brenta <ludovic@ludovic-brenta.org> writes:

> 8/3 CPU
>
>                The aspect CPU is an expression, which shall be of type
>                System.Multiprocessors.CPU_Range.

If there's more than one of them, how can they be "central"?

- Bob




* Re: basic question on Ada tasks and running on different cores
  2012-05-06 14:14     ` Robert A Duff
@ 2012-05-06 16:07       ` Vinzent Hoefler
  2012-05-06 19:43       ` Ludovic Brenta
  1 sibling, 0 replies; 28+ messages in thread
From: Vinzent Hoefler @ 2012-05-06 16:07 UTC (permalink / raw)


Robert A Duff wrote:

> Ludovic Brenta <ludovic@ludovic-brenta.org> writes:
>
>> 8/3 CPU
>>
>>                The aspect CPU is an expression, which shall be of type
>>                System.Multiprocessors.CPU_Range.
>
> If there's more than one of them, how can they be "central"?

In sufficiently local scope, they are. Well, "it is". ;)


Vinzent.

-- 
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents.
     -- Nathaniel Borenstein




* Re: basic question on Ada tasks and running on different cores
  2012-05-06 14:14     ` Robert A Duff
  2012-05-06 16:07       ` Vinzent Hoefler
@ 2012-05-06 19:43       ` Ludovic Brenta
       [not found]         ` <15qdq7df9cji7htp52i9d5f8sqsgmisc3b@invalid.netcom.com>
  1 sibling, 1 reply; 28+ messages in thread
From: Ludovic Brenta @ 2012-05-06 19:43 UTC (permalink / raw)


Robert A Duff writes:
> Ludovic Brenta <ludovic@ludovic-brenta.org> writes:
>
>> 8/3 CPU
>>
>>                The aspect CPU is an expression, which shall be of type
>>                System.Multiprocessors.CPU_Range.
>
> If there's more than one of them, how can they be "central"?

Mainframes have many "central" processing units and dozens of peripheral
processing units, e.g. embedded in disk and network controllers.  They
also have "peripheral" processors dedicated to system monitoring and
maintenance, i.e. the operator console or its modern equivalent.

-- 
Ludovic Brenta.




* Re: basic question on Ada tasks and running on different cores
       [not found]         ` <15qdq7df9cji7htp52i9d5f8sqsgmisc3b@invalid.netcom.com>
@ 2012-05-06 21:24           ` Ludovic Brenta
  0 siblings, 0 replies; 28+ messages in thread
From: Ludovic Brenta @ 2012-05-06 21:24 UTC (permalink / raw)


Dennis Lee Bieber writes:
> On Sun, 06 May 2012 21:43:53 +0200, Ludovic Brenta
> <ludovic@ludovic-brenta.org> declaimed the following in comp.lang.ada:
>
>> Mainframes have many "central" processing units and dozens of peripheral
>> processing units, e.g. embedded in disk and network controllers.  They
>> also have "peripheral" processors dedicated to system monitoring and
>> maintenance, i.e. the operator console or its modern equivalent.
>
> 	They still make "mainframes" <G>

Yes. Mostly to run thousands of Linux virtual machines, and because
their disk subsystems are unmatched in performance and reliability.

-- 
Ludovic Brenta.




* Re: basic question on Ada tasks and running on different cores
  2012-05-06  7:38 basic question on Ada tasks and running on different cores Nasser M. Abbasi
                   ` (2 preceding siblings ...)
  2012-05-06 14:13 ` Robert A Duff
@ 2012-05-07  7:36 ` anon
  2012-05-08  7:08   ` Maciej Sobczak
  3 siblings, 1 reply; 28+ messages in thread
From: anon @ 2012-05-07  7:36 UTC (permalink / raw)


At the moment, when anyone uses the keywords "threads", "tasks", or
"multiple cores", what they are asking for is a parallel system. And at
this time no standard language comes set up for doing parallelism.  You
have to modify a number of compiler sub-systems as well as rewrite the
run-time libraries, and this includes GCC. The main problem now is that
most if not all open-source parallel compilers have disappeared off the
net since Intel introduced the Pentium D and Core 2.

And Ada's run-time libraries are still merely concurrent.  Ada 2012 does
introduce some CPU control, but when those features will be developed
enough to allow Ada's run-time libraries to be converted to parallel
operation is anyone's guess.


History of parallel Ada.

Back in the 1980s and early 1990s there were transputers running Ada,
which allowed some parallel coding. To use parallelism, one or more
transputer daughter boards had to be added to the system design. In
this design, the PC housed the compiler and the transputer program
loader, and while the Ada code was running the PC handled all hardware
I/O required for the Ada program.

Then in the early 1990s, when dual- and quad-processor motherboards
were developed, almost every language except Ada had a parallel
version. The ARG and the DoD had not fully developed the concept of
Ada being parallel.  Even so, back in the mid-1990s there was a
compiler patch and "back end" replacement for GNAT that did allow some
parallelism, but due to AdaCore's design style for GNAT, the system
was not fully parallel.



In <jo59po$s06$1@speranza.aioe.org>, "Nasser M. Abbasi" <nma@12000.org> writes:
>
>Assume I am using a PC with, say, 1,000 cores (maybe
>in a few short years).
>
>If I use Ada and create many, many tasks, will these
>tasks automatically be scheduled to run on as many
>different cores as possible, so as to spread the load and
>achieve the most parallelism possible?
>
>Is this something that is controlled by the Ada run-time
>automatically, or is it the OS that is in charge here
>of which task (i.e. thread) runs on which core?
>
>Will the programmer have to do anything other than just
>create the tasks (and of course protect any critical
>sections as needed), without worrying about the mapping
>of tasks to cores and all the other scheduling issues?
>
>Sorry for such a basic question, but I have not looked
>at this for a long time.
>
>thanks,
>--Nasser





* Re: basic question on Ada tasks and running on different cores
  2012-05-07  7:36 ` anon
@ 2012-05-08  7:08   ` Maciej Sobczak
  2012-05-08  9:02     ` anon
  0 siblings, 1 reply; 28+ messages in thread
From: Maciej Sobczak @ 2012-05-08  7:08 UTC (permalink / raw)


On May 7, 9:36 am, a...@att.net wrote:

> At the moment, when anyone uses the keywords "threads", "tasks", or
> "multiple cores", what they are asking for is a parallel system.

No, from the thread (sic!) subject it is pretty clear that the poster
is asking about tasks and cores.

> And at this time
> no standard language comes set up for doing parallelism.

It is not the first time you have spread misinformation related to
this subject, even though you have already been corrected on it.
Please take into account that publishing incorrect or misleading
information about Ada's abilities in this area is a disservice to the
community.

--
Maciej Sobczak * http://www.msobczak.com * http://www.inspirel.com




* Re: basic question on Ada tasks and running on different cores
  2012-05-08  7:08   ` Maciej Sobczak
@ 2012-05-08  9:02     ` anon
  2012-05-08  9:52       ` Ludovic Brenta
  0 siblings, 1 reply; 28+ messages in thread
From: anon @ 2012-05-08  9:02 UTC (permalink / raw)


Where is your proof that I am wrong?  You have none!!!
The proof must be credible, that is, from someone not associated
with AdaCore or the ARG: someone like the DoD, IBM, Sun, or SGI.
And it must be posted on their web site.

In <822367e8-28c2-4289-a59c-88799d68bc9e@j16g2000vbl.googlegroups.com>, Maciej Sobczak <see.my.homepage@gmail.com> writes:
>On May 7, 9:36 am, a...@att.net wrote:
>
>> At the moment, when anyone uses the keywords "threads", "tasks", or
>> "multiple cores", what they are asking for is a parallel system.
>
>No, from the thread (sic!) subject it is pretty clear that the poster
>is asking about tasks and cores.
>
>> And at this time
>> no standard language comes set up for doing parallelism.
>
>It is not the first time you have spread misinformation related to
>this subject, even though you have already been corrected on it.
>Please take into account that publishing incorrect or misleading
>information about Ada's abilities in this area is a disservice to the
>community.
>
>--
>Maciej Sobczak * http://www.msobczak.com * http://www.inspirel.com





* Re: basic question on Ada tasks and running on different cores
  2012-05-08  9:02     ` anon
@ 2012-05-08  9:52       ` Ludovic Brenta
  2012-05-09 12:28         ` anon
  0 siblings, 1 reply; 28+ messages in thread
From: Ludovic Brenta @ 2012-05-08  9:52 UTC (permalink / raw)
  Cc: anon

anon wrote on comp.lang.ada:
> Where is your proof that I am wrong?  You have none!!!
> The proof must be credible, that is, from someone not associated
> with AdaCore or the ARG: someone like the DoD, IBM, Sun, or SGI.
> And it must be posted on their web site.

You must have a case of schizophrenia.  This is not a joke;
I know first-hand what schizophrenia looks like and I know it
when I see it.  Get medical counsel.  If you are mentally ill,
*this is not your fault*.

Symptoms: you live in a fantasy world where your word must be
taken for granted but AdaCore and the ARG are liars.  This
world does not exist. In the real world, *you* must prove that
you are correct; the ARG is the final authority on the
language definition and AdaCore is the final authority on the
implementation of this language in GNAT (and AdaCore is a
member of the ARG, too).  In the real world, the DoD has
nothing to do with Ada anymore except as a customer of AdaCore
and IBM; IBM is, just like AdaCore, the final authority on
their implementations of the language, and SUN and SGI have
both delegated all their Ada activities to... AdaCore.

-- 
Ludovic Brenta.
Communications prioritize transparent matrices.




* Re: basic question on Ada tasks and running on different cores
  2012-05-08  9:52       ` Ludovic Brenta
@ 2012-05-09 12:28         ` anon
  2012-05-10  2:20           ` Randy Brukardt
  0 siblings, 1 reply; 28+ messages in thread
From: anon @ 2012-05-09 12:28 UTC (permalink / raw)


I am talking about standard languages, i.e. all languages including Ada.
IBM maintains a number of languages, not just its version of Ada, and
hardware/software companies like IBM, Sun, and SGI have been using
parallelism for a number of years. Also, the web sites of IBM, Sun, SGI,
and we might as well include Microsoft, are not easily edited the way
Wikipedia is just to prove a point.

Now, most of the languages that were parallelized in the late 1980s and
'90s disappeared about the time that Intel introduced the Pentium D and
Core 2. And ever since then, programmers and some professors have been
looking for one or more parallel languages.

For GCC, the deal is that just downloading and compiling all the
languages in the GCC suite will not give you a set of parallel
languages. You must modify the existing compiler code to add parallel
constructs and controls, and/or link the code against special run-time
libraries (compiled for parallelism).

Some of these special libraries have links that can either temporarily
lock or replace the OS's existing job scheduler, which is not possible
on a normal Home version of Windows.

There are currently three major designs for parallelism. First is the
OS design, where the OS loads each program but does nothing to make a
single program execute in parallel. The OS also keeps hogging one or
more cores with its normal system functions, such as I/O and video
updates as well as job scheduling. The OS may also at times limit the
resources a program needs to complete its job. So, to many programmers
and for many algorithms, this design is not even parallel.

Second is the run-time design, which uses library routines to allow
multiple executions. In GNAT, as of GNAT 2011, this is still in the
initial stages of getting these kinds of routines working. A major
problem here is using the standard linker and system job loader, which
can kill any performance increase that could be obtained from parallel
code. Take a 2 MB program, for example: if the parallel routine to be
executed is only 25 KB, it is very inefficient to load a 2 MB program
image for each core, as well as reloading it on every swap. So the
answer is to use a special binder and linker, plus special parallel
support loader routines, to load only the routine that needs to be
executed. Also, routines like Ada's system finalization routine need
to be preloaded as part of this parallel support system. This would
decrease the load time and increase performance. But in the case of
GNAT, the run-time system requires too many soft links, which
basically forces the whole program to be loaded each time. Also, for
the programmer this version is a little too much work, because they
still have to make the algorithm parallel and take the time to make
the code efficient.

Third is the compiler design. In this case Ada would either have to be
extended to include a pragma like "pragma Parallel ( <routine> ) ;",
with a modified compiler then building a parallel version of the
routine, or have a "parallelizer" which would rewrite the source code
into a more parallel design before running it through the normal
compiler with some additional libraries. Most programmers like this
design, but there are major problems, debugging the algorithm for one.
Also, this design requires system utilization to be stable for the
code to be efficient.

In the run-time and compiler designs there is a side effect of message
passing which can decrease performance. And all of the designs can
have the problem of shared memory; in other words, if all cores access
memory at the same time, which core has its request answered first or
last? In the OS design this is not a major problem, but in the other
two designs this kind of memory bottleneck in numerical calculations
may result in calculation errors.

And this does not even include the parallel paradigms, such as data
parallelism, message passing, and shared memory, that must be
considered.



Note: an exception to the disappearing parallel languages is UC
Berkeley's Unified Parallel C (UPC), with the current version working
on Microsoft Windows and Apple OS X. It has been tested on other OSes
like Linux, but only Windows and Apple are fully supported with
binaries. The problem with UPC starts when a simple two-statement
"Hello World" C program is converted into 120 lines of source code for
parallelism before being compiled and linked. But if you look at the
original code below, you can see the compiler should have defaulted to
a non-parallel design.



#include <upc.h>
#include <stdio.h>
#include <stdlib.h>
int main ( ) 
  {
    printf ( "Hello parallel World" ) ;
    return 0;
  }





--  A final note: lowering yourself to name calling just proves that
--  you do not know what you are talking about concerning "parallel".



In <30585369.219.1336470732142.JavaMail.geo-discussion-forums@ynbq3>, Ludovic Brenta <ludovic@ludovic-brenta.org> writes:
>anon wrote on comp.lang.ada:
>> Where is your proof that I am wrong?  You have none!!!
>> The proof must be credible, that is, from someone not associated
>> with AdaCore or the ARG: someone like the DoD, IBM, Sun, or SGI.
>> And it must be posted on their web site.
>
>You must have a case of schizophrenia.  This is not a joke;
>I know first-hand what schizophrenia looks like and I know it
>when I see it.  Get medical counsel.  If you are mentally ill,
>*this is not your fault*.
>
>Symptoms: you live in a fantasy world where your word must be
>taken for granted but AdaCore and the ARG are liars.  This
>world does not exist. In the real world, *you* must prove that
>you are correct; the ARG is the final authority on the
>language definition and AdaCore is the final authority on the
>implementation of this language in GNAT (and AdaCore is a
>member of the ARG, too).  In the real world, the DoD has
>nothing to do with Ada anymore except as a customer of AdaCore
>and IBM; IBM is, just like AdaCore, the final authority on
>their implementations of the language, and SUN and SGI have
>both delegated all their Ada activities to... AdaCore.
>
>-- 
>Ludovic Brenta.
>Communications prioritize transparent matrices.





* Re: basic question on Ada tasks and running on different cores
  2012-05-09 12:28         ` anon
@ 2012-05-10  2:20           ` Randy Brukardt
  2012-05-11  2:38             ` NatarovVI
  0 siblings, 1 reply; 28+ messages in thread
From: Randy Brukardt @ 2012-05-10  2:20 UTC (permalink / raw)


<anon@att.net> wrote in message news:jodnse$c29$1@speranza.aioe.org...
>I am talking about standard languages aka all languages including Ada.

Ada always has supported parallelism.

I think you are confused by various research languages that are trying to 
*automatically* make programs execute in parallel. It's not surprising that 
these disappear; people have been trying to design them since the 1960s 
(some of the research papers I read while in college - in the 1970s - were 
ancient even then).

My personal opinion is that these "holy grail" languages will never really 
work out. If a reasonably sensible solution existed, someone would have 
found it in the last 50 years of looking. I'm sure some progress will be 
made, but only by increasing non-determinism elsewhere in the language --  
which will make debugging (and thus the languages) impractical.

People who are looking for research grants have a lot to gain by 
spreading FUD about what is and is not possible, so be careful about 
depending on them.

OTOH, *manual* parallelism, as exhibited by Ada's tasks, threads in Windows 
and Linux, and constructs in many other languages, works and has been in use 
for decades. (Ada makes no claim to having invented that.) To claim 
otherwise is just plain wrong.

And you don't have to take my word for it. You can see it for yourself. You 
can download the very useful Process Explorer tool for Windows, which lets 
you see what each process on your computer is doing. Install it on any 
modern Windows system. Then compile a program that uses a number of 
CPU-intensive tasks with most Windows Ada compilers (any that claim to use 
threads; I know from personal experience that GNAT, ObjectAda, and IBM's 
Ada all do). I'm sure someone else here could recommend a short Ada program.

If you run such a compiled Ada program and examine the process with Process 
Explorer on a multicore machine, you'll see that several or all of the cores 
will be in use for that single process. (Yes, I've done this on Windows XP. 
Supposedly, Windows 7 is better at it, but I haven't tested that myself.) 
Typically, more CPU than a single core could provide will be consumed by the 
program. That means that the cores are running the tasks *in parallel*, 
because they obviously aren't being run sequentially and there is no other 
choice.

I suppose Microsoft could be lying about the parallel execution evidenced by 
this commonly used tool, but I have no idea what gain that would have for 
Microsoft.

Anyway, do the experiment with an appropriate program (just make sure that 
the program doesn't require sequential execution). See if you still think 
that Ada programs can't run parallel threads.
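
For instance, a minimal sketch of such a test program (purely
illustrative; the name, task count, and loop bound are invented, and each
task just does useless floating-point work to keep a core busy):

with Ada.Text_IO;

procedure Burn_Cores is

   --  Each task spins on a CPU-bound loop; on a multicore machine the
   --  OS should spread these tasks across the available cores.
   task type Burner;

   task body Burner is
      X : Long_Float := 0.0;
   begin
      for I in 1 .. 500_000_000 loop
         X := X + 1.0 / Long_Float (I);
      end loop;
      Ada.Text_IO.Put_Line ("done:" & Long_Float'Image (X));
   end Burner;

   Workers : array (1 .. 8) of Burner;  --  eight CPU-bound tasks

begin
   null;  --  the main program simply waits for the tasks to terminate
end Burn_Cores;

Run it and watch the process in Process Explorer; several cores should be
busy at once while the tasks run.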

                                                         Randy.






* Re: basic question on Ada tasks and running on different cores
  2012-05-10  2:20           ` Randy Brukardt
@ 2012-05-11  2:38             ` NatarovVI
  2012-05-11  8:30               ` Georg Bauhaus
  2012-05-12  0:33               ` Randy Brukardt
  0 siblings, 2 replies; 28+ messages in thread
From: NatarovVI @ 2012-05-11  2:38 UTC (permalink / raw)


> I think you are confused by various research languages that are trying
> to *automatically* make programs execute in parallel. It's not
> surprising that these disappear, people have been trying to design these
> since the 1960s (some of the research papers I read while in college -
> in the 1970s - were ancient even then).
> My personal opinion is that these "holy grail" languages will never
> really work out. If a reasonably sensible solution existed, someone
> would have found it in the last 50 years of looking. I'm sure some
> progress will be made, but only by increasing non-determinism elsewhere
> in the language -- which will make debugging (and thus the languages)
> impractical.

Read existentialtypes.wordpress.com, or the comments on its Russian
translation. Imperative programming and ephemeral data structures were
born as just an "optimization trick" for executing true (fully parallel,
functional) programs, working with fully parallel persistent data, on the
cheap sequential hardware of the time. The trick is to do parallel actions
sequentially in time, on one processor. Another trick is to work with
ephemeral copies of parts of persistent data, again sequentially in time.
It is just these two tricks that form all of modern IT and many of its
artifacts.

So, sure, you cannot easily deduce the true, parallel program from its
purposely sequential variant, a projection onto the hardware. But if you
think in parallel from the start, you CAN get automatic parallelization
(the program will work without modification on any hardware, with any
number of processors).

You just cannot do it in an imperative, intertwined-with-dependencies,
lobotomized manner. In functional languages, you can.




* Re: basic question on Ada tasks and running on different cores
  2012-05-11  2:38             ` NatarovVI
@ 2012-05-11  8:30               ` Georg Bauhaus
  2012-05-16 15:40                 ` NatarovVI
  2012-05-12  0:33               ` Randy Brukardt
  1 sibling, 1 reply; 28+ messages in thread
From: Georg Bauhaus @ 2012-05-11  8:30 UTC (permalink / raw)


On 11.05.12 04:38, NatarovVI wrote:

> You just cannot do it in an imperative, intertwined-with-dependencies,
> lobotomized manner. In functional languages, you can.

Do they address predictability of I/O, both local and
communicating,  in reactive systems?




* Re: basic question on Ada tasks and running on different cores
  2012-05-11  2:38             ` NatarovVI
  2012-05-11  8:30               ` Georg Bauhaus
@ 2012-05-12  0:33               ` Randy Brukardt
  2012-05-12 10:57                 ` Stephen Leake
  2012-05-16 15:54                 ` NatarovVI
  1 sibling, 2 replies; 28+ messages in thread
From: Randy Brukardt @ 2012-05-12  0:33 UTC (permalink / raw)


"NatarovVI" <4KCheshireCat@gmail.com> wrote in message 
news:johu2d$9vj$1@speranza.aioe.org...
...
> You just cannot do it in an imperative, intertwined-with-dependencies,
> lobotomized manner. In functional languages, you can.

Sure, but I'm skeptical that the vast majority of programmers can deal with 
any programming language that has the level of non-determinism needed to 
support useful parallelism. Functional programming languages make this 
worse, if anything.

Secondly, I'm skeptical that any language attempting fine-grained 
parallelism can ever perform anywhere near as well as a language using 
coarse parallelism (like Ada) and deterministic sequential semantics for the 
rest. Any parallelism requires scheduling overhead, and that overhead is 
going to be a lot higher for the fine-grained case, simply because there is 
a lot more of it needed (even if it is conceptually simpler).
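
(For concreteness, a minimal sketch of the coarse-grained style in Ada:
one big array split into a few chunks, one task per chunk, and a single
protected call per task to combine the results. All names and sizes here
are invented.)

with Ada.Text_IO;

procedure Chunked_Sum is

   type Real_Array is array (Positive range <>) of Long_Float;
   type Real_Array_Access is access Real_Array;

   --  Allocate the data on the heap; 4 million Long_Floats would be
   --  too large for a default task stack.
   Data : constant Real_Array_Access :=
     new Real_Array'(1 .. 4_000_000 => 1.0);

   protected Accumulator is
      procedure Add (X : in Long_Float);
      function Total return Long_Float;
   private
      Sum : Long_Float := 0.0;
   end Accumulator;

   protected body Accumulator is
      procedure Add (X : in Long_Float) is
      begin
         Sum := Sum + X;
      end Add;
      function Total return Long_Float is
      begin
         return Sum;
      end Total;
   end Accumulator;

   --  Each task sums one slice of the array and makes a single
   --  protected call at the end, so scheduling and synchronization
   --  overhead is paid once per chunk, not once per element.
   task type Summer (First, Last : Positive);

   task body Summer is
      Partial : Long_Float := 0.0;
   begin
      for I in First .. Last loop
         Partial := Partial + Data (I);
      end loop;
      Accumulator.Add (Partial);
   end Summer;

begin
   declare
      S1 : Summer (        1, 1_000_000);
      S2 : Summer (1_000_001, 2_000_000);
      S3 : Summer (2_000_001, 3_000_000);
      S4 : Summer (3_000_001, 4_000_000);
   begin
      null;  --  the block does not exit until all four tasks terminate
   end;
   Ada.Text_IO.Put_Line ("Sum =" & Long_Float'Image (Accumulator.Total));
end Chunked_Sum;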

There are of course special cases where neither issue is a problem or can be 
worked around, but for general purpose programming (especially embedded 
programming, which is pretty much the only *real* programming going on today 
:-), I don't see it happening. I wouldn't mind being wrong (even though it 
would mean the end of my programming career; I already know that I'm not 
productive in non-Ada languages - I cannot tolerate debugging very often).

                                         Randy.








* Re: basic question on Ada tasks and running on different cores
  2012-05-12  0:33               ` Randy Brukardt
@ 2012-05-12 10:57                 ` Stephen Leake
  2012-05-15  6:55                   ` Randy Brukardt
  2012-05-16 15:54                 ` NatarovVI
  1 sibling, 1 reply; 28+ messages in thread
From: Stephen Leake @ 2012-05-12 10:57 UTC (permalink / raw)


"Randy Brukardt" <randy@rrsoftware.com> writes:

> Secondly, I'm skeptical that any language attempting fine-grained 
> parallelism can ever perform anywhere near as well as a language using 
> coarse parallelism (like Ada) and deterministic sequential semantics for the 
> rest. Any parallelism requires scheduling overhead, and that overhead is 
> going to be a lot higher for the fine-grained case, simply because there is 
> a lot more of it needed (even if it is conceptually simpler).

Have you been following Parasail?
http://parasail-programming-language.blogspot.com/

Not fully implemented yet, but it sounds very interesting.

-- 
-- Stephe




* Re: basic question on Ada tasks and running on different cores
  2012-05-12 10:57                 ` Stephen Leake
@ 2012-05-15  6:55                   ` Randy Brukardt
  2012-05-15 22:54                     ` Shark8
  0 siblings, 1 reply; 28+ messages in thread
From: Randy Brukardt @ 2012-05-15  6:55 UTC (permalink / raw)


"Stephen Leake" <stephen_leake@stephe-leake.org> wrote in message 
news:82d369u0nc.fsf@stephe-leake.org...
> "Randy Brukardt" <randy@rrsoftware.com> writes:
>
>> Secondly, I'm skeptical that any language attempting fine-grained
>> parallelism can ever perform anywhere near as well as a language using
>> coarse parallelism (like Ada) and deterministic sequential semantics for 
>> the
>> rest. Any parallelism requires scheduling overhead, and that overhead is
>> going to be a lot higher for the fine-grained case, simply because there 
>> is
>> a lot more of it needed (even if it is conceptually simpler).
>
> Have you been following Parasail?
> http://parasail-programming-language.blogspot.com/
>
> Not fully implemented yet, but it sounds very interesting.

A little bit, but I don't think even Tucker can do scheduling with no cost. 
(So far as I know, there is a significant penalty to running in the Parasail 
virtual machine, and I doubt that he'll be able to get rid of it.) Languages 
like this assume that it is OK to waste some significant fraction of the CPU 
in order to make it up on volume, er parallelism. As I said, I'm skeptical, 
but that doesn't mean that people shouldn't try -- and clearly, there are 
many applications where the performance isn't that critical or even where 
the added cost might actually provide some speed-up.

                                Randy.








* Re: basic question on Ada tasks and running on different cores
  2012-05-15  6:55                   ` Randy Brukardt
@ 2012-05-15 22:54                     ` Shark8
  0 siblings, 0 replies; 28+ messages in thread
From: Shark8 @ 2012-05-15 22:54 UTC (permalink / raw)


On Tuesday, May 15, 2012 1:55:01 AM UTC-5, Randy Brukardt wrote:
>
> clearly, there are 
> many applications where the performance isn't that critical or even where 
> the added cost might actually provide some speed-up.

Indeed, though this is really just a recapitulation of the theme "work
smarter, not harder". If we might draw a parallel, this is to parallelism
what basic techniques were to data structures (i.e. the very early days
of CS).

One day somebody realized "if I sort the things in this list, then finding
them will be faster", and so we got all sorts of sorting algorithms and
studies of their properties and the like.

And that's where we're at WRT parallelism; we've got people looking into
everything from the instruction level to the task/program level and trying
to figure out the best way to do things. (But here the problem is more
difficult because, in addition to the amount of parallelization, the
different granularities interact with the evaluation metrics, and we're
only just beginning to see to what extent.)

But, in the end, I think you're right: intuitively, the fine-grained
approach should be at a disadvantage because there will likely be more
overhead due to the synchronization/data-payload ratio. {We may be wrong,
though; sometimes intuition leads us astray, and even slight improvements
in such fields may lead to better compilers and/or OSes.}




* Re: basic question on Ada tasks and running on different cores
  2012-05-11  8:30               ` Georg Bauhaus
@ 2012-05-16 15:40                 ` NatarovVI
  2012-05-16 18:03                   ` Georg Bauhaus
  0 siblings, 1 reply; 28+ messages in thread
From: NatarovVI @ 2012-05-16 15:40 UTC (permalink / raw)


> Do they address predictability of I/O, both local and communicating,  in
> reactive systems?

Again: "parallelism is not concurrency".

Parallelism is not about predictability, it's about computational
efficiency: a fast way to get some correct result. It can be implemented
via concurrency, the hard way, but other ways exist as well.

The programmer writes the dependencies, and the runtime schedules them
automatically. SISAL and NESL work this way.




* Re: basic question on Ada tasks and running on different cores
  2012-05-12  0:33               ` Randy Brukardt
  2012-05-12 10:57                 ` Stephen Leake
@ 2012-05-16 15:54                 ` NatarovVI
  2012-05-17  0:11                   ` Randy Brukardt
  1 sibling, 1 reply; 28+ messages in thread
From: NatarovVI @ 2012-05-16 15:54 UTC (permalink / raw)


> Sure, but I'm skeptical that the vast majority of programmers can deal
> with any programming language that has the level of non-determinism
> needed to support useful parallelism. Functional programming languages
> make this worse, if anything.

And again: "parallelism is not concurrency".
If first-year CMU students can write correct parallel programs,
everybody can. The right abstractions are the key.
Read about Robert Harper's experience at existentialtypes.wordpress.com

> Secondly, I'm skeptical that any language attempting fine-grained
> parallelism can ever perform anywhere near as well as a language using
> coarse parallelism (like Ada) and deterministic sequential semantics for
> the rest. Any parallelism requires scheduling overhead, and that
> overhead is going to be a lot higher for the fine-grained case, simply
> because there is a lot more of it needed (even if it is conceptually
> simpler).

Do you really need 100% of the performance? Maybe you like to write in
asm? ;) Seriously, SISAL and NESL can automatically extract a good part
of the data parallelism, no magic here, and this will be enough for most
programmers. (Higher-order functions will be a requirement here,
extending data parallelism to operations on functions.)

P.S. The best scheduling is no scheduling...

> going on today :-), I don't see it happening. I wouldn't mind being

It will happen. ;) And Ada is not the only language without a need for
debugging; Standard ML is another.




* Re: basic question on Ada tasks and running on different cores
  2012-05-16 15:40                 ` NatarovVI
@ 2012-05-16 18:03                   ` Georg Bauhaus
  0 siblings, 0 replies; 28+ messages in thread
From: Georg Bauhaus @ 2012-05-16 18:03 UTC (permalink / raw)


On 16.05.12 17:40, NatarovVI wrote:
>> Do they address predictability of I/O, both local and communicating,  in
>> reactive systems?
> 
> Again: "parallelism is not concurrency".
> 
> Parallelism is not about predictability, it's about computational
> efficiency: a fast way to get some correct result. It can be implemented
> via concurrency, the hard way, but other ways exist as well.

What other way is there that is also efficient?

> The programmer writes the dependencies, and the runtime schedules them
> automatically. SISAL and NESL work this way.

OK. I have ranted about this in the other post. Sorry.




* Re: basic question on Ada tasks and running on different cores
  2012-05-16 15:54                 ` NatarovVI
@ 2012-05-17  0:11                   ` Randy Brukardt
  2012-05-17  1:06                     ` Jeffrey Carter
  0 siblings, 1 reply; 28+ messages in thread
From: Randy Brukardt @ 2012-05-17  0:11 UTC (permalink / raw)


"NatarovVI" <4KCheshireCat@gmail.com> wrote in message 
news:jp0ikh$b7q$1@speranza.aioe.org...
>> Sure, but I'm skeptical that the vast majority of programmers can deal
>> with any programming language that has the level of non-determinism
>> needed to support useful parallelism. Functional programming languages
>> make this worse, if anything.
>
> And again: "parallelism is not concurrency".
> If first-year CMU students can write correct parallel programs,
> everybody can.

Baloney. First-year students don't have to worry about any real-world 
constraints. It's easy to write any kind of program in that environment (it 
was much easier to write programs of all sorts when I was a student).

> The right abstractions are the key.
> Read about Robert Harper's experience at existentialtypes.wordpress.com

I'm dubious of the pronouncements of anyone whose income (in this case, 
research grants) depends on people believing those pronouncements. In any 
case, those who ignore history are doomed to repeat it, and there has been 
research into these areas for 50 years. No one yet has found the "holy 
grail" of "free parallelism", and moreover nothing here is new.

Besides, 90% of most applications is I/O. The computation part of most 
programs is quite small. If all you can do is speed those things up, it's 
not going to make a significant difference to most programs. There are, of 
course, a few parts of programs that do extensive computations, but often 
these are in libraries where it makes sense to spend special efforts.

>> Secondly, I'm skeptical that any language attempting fine-grained
>> parallelism can ever perform anywhere near as well as a language using
>> coarse parallelism (like Ada) and deterministic sequential semantics for
>> the rest. Any parallelism requires scheduling overhead, and that
>> overhead is going to be a lot higher for the fine-grained case, simply
>> because there is a lot more of it needed (even if it is conceptually
>> simpler).
>
> Do you really need 100% of the performance?

Of course, for the most critical computations. If you don't need 100% of 
performance, then you surely don't need parallel execution - what could 
possibly be the point?? Using two cores instead of one to do a computation 
that is not critical is insane -- you end up using more power to do the same 
work in the same time -- meaning more heat, lower battery life, etc.

> Maybe you like to write in asm? ;)

If necessary. But high-quality compilers can (or should) generate better 
code than you can write by hand, because they can take into account many 
more variables than a hand-programmer can. So the vast majority of the time, 
writing critical code in Ada is enough. (But, yes, Janus/Ada does have a few 
hand-coded ASM lookup loops; those cut compilation time by 25% when they 
were implemented. [No idea if they're needed anymore, of course, the CPU and 
I/O balance has changed a lot in the last 30 years.])

The problem is, if you're trying to implement fine-grained parallelism, you 
have to surround that code with some sort of scheduling mechanism, and that 
overhead means you aren't going to get anywhere near 100% of the CPU at any 
point. That means you'll have to find a

> Seriously, SISAL and NESL can automatically extract a good part of the data
> parallelism, no magic here, and this will be enough for most programmers.

Most programmers need no parallelism at all; they just need a good graphics 
library that uses parallelism to do their drawing. (And possibly a few 
other libraries.) The point is, most programmers are writing I/O-intensive 
applications that do little computation (think of the apps on your phone, 
for example).

> (Higher-order functions will be a requirement here, extending data
> parallelism to operations on functions.)

Certain special cases surely exist, and those can be implemented in a 
compiler for almost any language. (At the very least, by well-defined 
libraries.) These things surely could be done in Ada, either via libraries 
(look at Brad Moore's work) or via language-defined primitives. As I've said 
all along, I'm very skeptical that anything further will prove practical.

> P.S. The best scheduling is no scheduling...

Right. But that's only possible using vector instructions and the like; 
that's a *very* small set of all computing. (I doubt that I have ever 
written a program -- in 30+ years -- that could have been helped by a vector 
instruction.) Any other sort of parallelism will have to be implemented by 
some sort of concurrency, and that requires some sort of scheduling.

>> going on today :-), I don't see it happening. I wouldn't mind being
>
> It will happen. ;) And Ada is not the only language without a need for
> debugging; Standard ML is another.

I can believe that; no one has a clue what an ML program means, so there is 
no need to debug it - discarding makes more sense. ;-) Ada is the only true 
syntax. :-)

                                      Randy.






* Re: basic question on Ada tasks and running on different cores
  2012-05-17  0:11                   ` Randy Brukardt
@ 2012-05-17  1:06                     ` Jeffrey Carter
  2012-05-17  6:50                       ` Dmitry A. Kazakov
  2012-05-18  4:12                       ` Randy Brukardt
  0 siblings, 2 replies; 28+ messages in thread
From: Jeffrey Carter @ 2012-05-17  1:06 UTC (permalink / raw)


On 05/16/2012 05:11 PM, Randy Brukardt wrote:
>
> The problem is, if you're trying to implement fine-grained parallelism, you
> have to surround that code with some sort of scheduling mechanism, and that
> overhead means you aren't going to get anywhere near 100% of the CPU at any
> point.

The assumption here is that there are more tasks/threads/parallel things than 
there are processors/cores. That's generally true now, but the way things are 
going, it may not be true in the not-too-distant future (1-atom transistors 
could make for a lot of cores). When you have a core per task, you no longer 
need scheduling.

-- 
Jeff Carter
"I blow my nose on you."
Monty Python & the Holy Grail
03

--- Posted via news://freenews.netfront.net/ - Complaints to news@netfront.net ---




* Re: basic question on Ada tasks and running on different cores
  2012-05-17  1:06                     ` Jeffrey Carter
@ 2012-05-17  6:50                       ` Dmitry A. Kazakov
  2012-05-18  4:12                       ` Randy Brukardt
  1 sibling, 0 replies; 28+ messages in thread
From: Dmitry A. Kazakov @ 2012-05-17  6:50 UTC (permalink / raw)


On Wed, 16 May 2012 18:06:49 -0700, Jeffrey Carter wrote:

> On 05/16/2012 05:11 PM, Randy Brukardt wrote:
>>
>> The problem is, if you're trying to implement fine-grained parallelism, you
>> have to surround that code with some sort of scheduling mechanism, and that
>> overhead means you aren't going to get anywhere near 100% of the CPU at any
>> point.
> 
> The assumption here is that there are more tasks/threads/parallel things than 
> there are processors/cores. That's generally true now, but they way things are 
> going, it may not be true in the not-too-distant future (1-atom transistors 
> could make for a lot of cores). When you have a core per task you no longer need 
> scheduling.

You will still have synchronization issues to handle (to real time, e.g.
delay until, and to other tasks).

I assume that memory will not be shared, because evidently that would be a
bottleneck for thousands of cores. Now, if processors (not cores) are
synchronized by data exchange using some topology of connections, that will
be transputers reborn. In the heyday of transputers there existed switches
to connect transputer links dynamically, but that was slow and expensive.
Very likely the same problem, and worse, would emerge if we had architectures
with thousands of processors. The topology of the mesh will be rather fixed,
with some few links left to reconnect dynamically.

Scheduling will remain, but the objective will be to choose a processor with
physical connections satisfying most of the synchronization constraints of
the task it will have to run. That might be a very different kind of parallel
programming than we know today.

But I agree with Randy that fine grained parallelism will never make it,
whatever architecture comes.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: basic question on Ada tasks and running on different cores
  2012-05-17  1:06                     ` Jeffrey Carter
  2012-05-17  6:50                       ` Dmitry A. Kazakov
@ 2012-05-18  4:12                       ` Randy Brukardt
  1 sibling, 0 replies; 28+ messages in thread
From: Randy Brukardt @ 2012-05-18  4:12 UTC (permalink / raw)


"Jeffrey Carter" <spam.jrcarter.not@spam.not.acm.org> wrote in message 
news:jp1ivb$o6m$1@adenine.netfront.net...
> On 05/16/2012 05:11 PM, Randy Brukardt wrote:
>>
>> The problem is, if you're trying to implement fine-grained parallelism, 
>> you
>> have to surround that code with some sort of scheduling mechanism, and 
>> that
>> overhead means you aren't going to get anywhere near 100% of the CPU at 
>> any
>> point.
>
> The assumption here is that there are more tasks/threads/parallel things 
> than there are processors/cores. That's generally true now, but they way 
> things are going, it may not be true in the not-too-distant future (1-atom 
> transistors could make for a lot of cores). When you have a core per task 
> you no longer need scheduling.

Scheduling includes deciding *what* to run, as well as *when* to run it. If 
there are enough cores, *when* becomes trivial, but *what* still requires 
overhead (and data communication, as Dmitry noted). There are special cases 
where *what* is pre-determined (SIMD machines), but for general workloads, 
determining what to run will still add costs.

                                           Randy.





