comp.lang.ada
From: anon@att.net
Subject: Re: basic question on Ada tasks and running on different cores
Date: Wed, 9 May 2012 12:28:00 +0000 (UTC)
Message-ID: <jodnse$c29$1@speranza.aioe.org>
In-Reply-To: 30585369.219.1336470732142.JavaMail.geo-discussion-forums@ynbq3

I am talking about standard languages, that is, all languages 
including Ada. IBM maintains a number of languages, not just its 
version of Ada, and hardware/software companies like IBM, SUN, and 
SGI have been using parallelism for a number of years. Also, what 
IBM, SUN, SGI, and, we might as well include, Microsoft publish is 
not easily edited the way Wikipedia is just to prove a point. 

Now, most of the languages that were parallelized in the late 1980s 
and 90s disappeared about the time that Intel introduced the Pentium 
D and Core 2. And ever since then, programmers and some professors 
have been looking for one or more parallel languages.

For GCC, the deal is that just downloading and compiling all the 
languages in the GCC suite will not give you a set of parallel 
languages. You must modify the existing compiler code to add parallel 
constructs and controls, and/or link the code against special 
run-time libraries (compiled for parallelism).

Some of these special libraries have hooks that can either 
temporarily lock or replace the OS's existing job scheduler; that is 
not possible on a normal Home version of Windows. 

Then there are three major current designs for parallelism. First is 
the OS design, where the OS loads each program but does nothing to 
allow a single program to execute in parallel. The OS also keeps 
hogging one or more cores with its normal system functions, such as 
I/O and video updates as well as job scheduling, and may at times 
limit the resources a program needs to complete its job. So, to many 
programmers and algorithms, this design is not even parallel.
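
As an aside, Ada 2012 does at least let a program ask the OS what it 
has to work with. A minimal sketch, assuming a GNAT recent enough to 
provide the Ada 2012 package System.Multiprocessors; the procedure 
name is mine:

with Ada.Text_IO;
with System.Multiprocessors;

procedure Show_Cores is
   --  Under the OS design the program cannot control placement;
   --  it can only ask how many CPUs the OS reports.
   Cores : constant System.Multiprocessors.CPU_Range :=
     System.Multiprocessors.Number_Of_CPUs;
begin
   Ada.Text_IO.Put_Line
     ("CPUs reported:" &
      System.Multiprocessors.CPU_Range'Image (Cores));
end Show_Cores;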

Second is the run-time design, which uses library routines to allow 
multiple executions. In GNAT, as of GNAT 2011, getting these types of 
routines working is still in its initial stages. A major problem here 
is the use of the standard linker and system job loader, which can 
kill any performance increase that could be obtained from parallel 
code. Take a 2 MB program, for example: if the parallel routine to be 
executed is only 25 KB, it is very inefficient to load the 2 MB 
program image for each core, let alone reload it on every swap. So 
the answer is to use a special binder and linker, plus special 
parallel-support loader routines, to load only the routine that needs 
to be executed. Routines like Ada's system finalization routine also 
need to be preloaded as part of this parallel support system. This 
would decrease load time and increase performance. But in the case of 
GNAT, the run-time system requires too many soft links, which 
basically force the whole program to be loaded each time. Also, for 
the programmer this version is a little too much work, because they 
still have to make the algorithm parallel and take the time to make 
the code efficient.
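
To make the run-time design concrete: in plain Ada the programmer 
only declares tasks, and it is the run-time library (with the OS 
underneath) that maps them onto cores. A minimal sketch; the names 
are mine:

with Ada.Text_IO;

procedure Runtime_Demo is

   task type Worker (Id : Natural);

   task body Worker is
   begin
      --  Where this body executes is decided by the run-time
      --  library and the OS scheduler, not by this source code.
      Ada.Text_IO.Put_Line ("Worker" & Natural'Image (Id) & " running");
   end Worker;

   W1 : Worker (1);
   W2 : Worker (2);

begin
   null;  --  nothing to do; the procedure cannot complete
          --  until W1 and W2 terminate
end Runtime_Demo;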

Third is the compiler design. In this case Ada would either have to 
be extended with a pragma like "pragma Parallel ( <routine> ) ;", 
with a modified compiler then building a parallel version of the 
routine, or have a "Parallelizer" which would rewrite the source code 
into a more parallel design before the normal compiler is used with 
some additional libraries. Most programmers like this design, but 
there are major problems, debugging the algorithm being one. Also, 
this design requires system utilization to be stable for the code to 
be efficient.
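
For instance, a "Parallelizer" might take a sequential summation loop 
and rewrite it into something like the following before handing it to 
the normal compiler. This is a hand-written sketch of the kind of 
code such a tool could emit, not output from any real tool:

with Ada.Text_IO;

procedure Parallel_Sum is
   Data : array (1 .. 1_000) of Integer := (others => 1);
   Part : array (1 .. 2) of Integer := (others => 0);

   task type Summer (Half : Integer);

   task body Summer is
      S     : Integer := 0;
      First : constant Integer := (Half - 1) * 500 + 1;
   begin
      for I in First .. First + 499 loop
         S := S + Data (I);
      end loop;
      Part (Half) := S;  --  each task writes only its own slot
   end Summer;

begin
   declare
      S1 : Summer (1);
      S2 : Summer (2);
   begin
      null;  --  the block does not exit until both tasks terminate
   end;
   Ada.Text_IO.Put_Line ("Total =" & Integer'Image (Part (1) + Part (2)));
end Parallel_Sum;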

In the run-time and compiler designs there is a side effect of 
message passing which can decrease performance. And all designs can 
have the problem of shared memory: in other words, if all cores 
access memory at the same time, which core has its request answered 
first, and which last? In the OS design this is not a major problem, 
but in the other two designs this kind of memory bottleneck may 
result in errors in numerical calculations.
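
In Ada, the standard tool for making such shared-memory updates safe, 
whichever core gets there first, is the protected object, which 
serializes the competing requests. A minimal sketch; the names are 
mine:

with Ada.Text_IO;

procedure Shared_Demo is

   protected Accumulator is
      procedure Add (X : in Integer);
      function Total return Integer;
   private
      Sum : Integer := 0;
   end Accumulator;

   protected body Accumulator is
      procedure Add (X : in Integer) is
      begin
         Sum := Sum + X;  --  calls are serialized, so no update is lost
      end Add;

      function Total return Integer is
      begin
         return Sum;
      end Total;
   end Accumulator;

   task type Adder;
   task body Adder is
   begin
      for I in 1 .. 100 loop
         Accumulator.Add (1);
      end loop;
   end Adder;

begin
   declare
      A, B : Adder;
   begin
      null;  --  wait for both tasks to finish
   end;
   Ada.Text_IO.Put_Line ("Total =" & Integer'Image (Accumulator.Total));
end Shared_Demo;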

And this does not include the parallel paradigms, such as Data, 
Message Passing, and Shared Memory, that must be considered.
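
Of those, message passing is the paradigm Ada has had built in since 
Ada 83, in the form of the rendezvous. A minimal sketch; the names 
are mine:

with Ada.Text_IO;

procedure Message_Demo is

   task Consumer is
      entry Put (Item : in Integer);  --  messages arrive here
   end Consumer;

   task body Consumer is
      V : Integer;
   begin
      accept Put (Item : in Integer) do
         V := Item;  --  sender and receiver meet at the rendezvous
      end Put;
      Ada.Text_IO.Put_Line ("Received" & Integer'Image (V));
   end Consumer;

begin
   Consumer.Put (42);  --  message passed, no shared variable involved
end Message_Demo;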



Note: An exception to the disappearing parallel languages is UC 
Berkeley's Unified Parallel C (UPC), with the current version working 
on Microsoft Windows and Apple OS X. It has been tested on other OSes 
like Linux, but only Windows and Apple are fully supported with 
binaries. The problem with UPC starts when a simple two-statement 
"Hello World" C program is converted into 120 lines of source code 
for parallelism before being compiled and linked. But if you look at 
the original code below, you can see that the compiler should have 
defaulted to a non-parallel design.



#include <upc.h>   /* UPC run-time declarations (unused by this
                      sequential code) */
#include <stdio.h>

int main ( void )
  {
    printf ( "Hello parallel World\n" ) ;
    return 0;
  }
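
(For what it is worth, the Berkeley UPC distribution compiles a file 
like this with its upcc driver and runs the result with upcrun, with 
the number of UPC threads fixed either at compile time or at launch; 
the exact options depend on the installation.)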





--  A final note: lowering yourself to name calling just proves that 
--  you do not know what you are talking about concerning "Parallel".



In <30585369.219.1336470732142.JavaMail.geo-discussion-forums@ynbq3>, Ludovic Brenta <ludovic@ludovic-brenta.org> writes:
>anon wrote on comp.lang.ada:
>> Where is your proof!!! That I am wrong!!! You have none!!!
>> The proof must be credible. That is, someone not associated 
>> with AdaCore or the ARG. 
>> Someone like the DoD, IBM, SUN, SGI. And must be posted on 
>> their web site.
>
>You must have a case of schizophrenia.  This is not a joke;
>I know first-hand what schizophrenia looks like and I know it
>when I see it.  Get medical counsel.  If you are mentally ill,
>*this is not your fault*.
>
>Symptoms: you live in a fantasy world where your word must be
>taken for granted but AdaCore and the ARG are liars.  This
>world does not exist. In the real world, *you* must prove that
>you are correct; the ARG is the final authority on the
>language definition and AdaCore is the final authority on the
>implementation of this language in GNAT (and AdaCore is a
>member of the ARG, too).  In the real world, the DoD has
>nothing to do with Ada anymore except as a customer of AdaCore
>and IBM; IBM is, just like AdaCore, the final authority on
>their implementations of the language, and SUN and SGI have
>both delegated all their Ada activities to... AdaCore.
>
>-- 
>Ludovic Brenta.



