From: anon@att.net
Newsgroups: comp.lang.ada
Subject: Re: basic question on Ada tasks and running on different cores
Date: Wed, 9 May 2012 12:28:00 +0000 (UTC)
Organization: Aioe.org NNTP Server
Reply-To: anon@anon.org
References: <30585369.219.1336470732142.JavaMail.geo-discussion-forums@ynbq3>

I am talking about standard languages, aka all languages including Ada.
IBM maintains a number of languages, not just their version of Ada, and
all hardware/software companies like IBM, SUN, and SGI have been using
parallel for a number of years. Also, IBM, SUN, SGI, and we might as
well include Microsoft, are not easily updated the way Wikipedia is just
to prove a point.

Now, most of the languages that were parallelized in the late 1980s and
90s disappeared about the time that Intel introduced the Pentium D and
Core 2. And ever since then programmers and some professors have been
looking for one or more parallel languages.

For GCC, the deal is that just downloading and compiling all of the
languages in the GCC suite will not give you a set of parallel
languages. You must modify the existing compiler code to add parallel
constructs and controls and/or link the code to special run-time
libraries (compiled for parallel). Some of these special libraries have
links that can either temporarily lock or replace the OS's existing job
scheduler, which a normal Home version of Windows does not allow.

Then there are three major current designs for parallel.

First is the OS design, where the OS loads each program but does
nothing to allow one's program to execute in parallel. Also, the OS is
still hogging one or more cores with its normal system functions such
as I/O and video updates, as well as the job scheduling. The OS may
also at times limit the resources a program needs to complete its job.
So, to many programmers and algorithms, this design is not even
parallel.

Second is the run-time design, which uses library routines to allow
multiple executions. In GNAT, as of GNAT 2011, this is still in the
initial stages of getting these types of routines working. A major
problem here is using the standard linker and system job loader, which
can kill any performance increase that could be obtained by using
parallel code. Take a 2 MB program, for example: if the parallel
routine to be executed is only 25 KB, then it is very inefficient to
load a 2 MB program image for each core, as well as to reload it for
every swap. So the answer is to use a special binder and linker, as
well as special parallel support loader routines, to load only the
routine that needs to be executed.
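As an aside for the original question (getting Ada tasks onto
different cores): the standard GNAT run-time already lets a plain
tasking program ask how many CPUs exist and pin a task to a given
core. A minimal sketch, assuming a GNAT recent enough to support Ada
2012's System.Multiprocessors and the CPU aspect (the names Cores_Demo
and Worker are only for illustration, and it assumes the machine
really has at least two cores):

   with Ada.Text_IO;             use Ada.Text_IO;
   with System.Multiprocessors;  use System.Multiprocessors;

   procedure Cores_Demo is

      --  Each Worker is pinned to the core named by its discriminant
      --  (Ada 2012 aspect CPU, RM D.16).
      task type Worker (Core : CPU) with CPU => Core;

      task body Worker is
      begin
         Put_Line ("Worker pinned to CPU" & CPU'Image (Core));
      end Worker;

      W1 : Worker (Core => 1);
      W2 : Worker (Core => 2);

   begin
      Put_Line ("Run-time reports" & CPU'Image (Number_Of_CPUs) & " CPUs");
   end Cores_Demo;

Even with the pinning, everything below the run-time is still the OS
scheduler's business, which is exactly the point about the OS design
above.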
Also, routines like the Ada system finalization routines need to be
preloaded as part of this parallel support system. This would decrease
the load time and increase performance. But in the case of GNAT, the
run-time system requires too many soft_links, which basically forces
the whole program to be loaded each time. Also, for the programmer
this version is a little too much work, because they still have to
make the algorithm parallel and take the time to make the code
efficient.

Third is the compiler design. In this case, Ada would either have to
be expanded to include a pragma like "pragma Parallel ( ) ;", with a
modified compiler then building a parallel version of the routine, or
have a "Parallelizer" which would rewrite the source code into a more
parallel design before using the normal compiler with some additional
libraries. Most programmers like this design, but there are major
problems, debugging the algorithm for one. Also, this design requires
system utilization to be stable for code efficiency. (A hand-written
sketch of the kind of code such a pragma or Parallelizer would have to
produce appears at the end of this post.)

In the run-time and compiler designs there exists a side effect of
message passing which can cause a decrease in performance. And all
designs can have the problem of shared memory; in other words, if all
cores access memory at the same time, which core has its request
answered first or last? In the OS design this is not a major problem,
but in the other two designs this type of memory bottleneck may, for
numerical calculations, result in calculation errors. And this does
not include the parallel paradigms, like Data, Message Passing, and
Shared Memory, that must be considered.

Note: An exception to the disappearing parallel languages is UC
Berkeley's Unified Parallel C (UPC), with the current version working
on Microsoft Windows and Apple OS X. It has been tested on other OSes
like Linux, but only Windows and Apple are fully supported with
binaries. The problem with UPC starts when a simple two-statement
"Hello World" C program is converted into 120 lines of source code for
parallel execution, to be compiled and linked. But if you look at the
original code you can see the compiler should have defaulted to a
non-parallel design.

   #include <stdio.h>

   int main ( )
   {
      printf ( "Hello parallel World" ) ;
      return 0 ;
   }

-- A final note: lowering yourself to name calling just proves that
-- you do not know what you're talking about concerning "Parallel".

In <30585369.219.1336470732142.JavaMail.geo-discussion-forums@ynbq3>, Ludovic Brenta writes:
>anon wrote on comp.lang.ada:
>> Where you proof!!! That I am wrong!!! You have none!!!
>> The proof must be creditable. That is someone not associate
>> with Adacore, ARG.
>> Someone like the DoD, IBM, SUN, SGI. And must be posted on
>> their web site.
>
>You must have a case of schizophrenia. This is not a joke;
>I know first-hand what schizophrenia looks like and I know it
>when I see it. Get medical counsel. If you are mentally ill,
>*this is not your fault*.
>
>Symptoms: you live in a fantasy world where your word must be
>taken for granted but AdaCore and the ARG are liars. This
>world does not exist. In the real world, *you* must prove that
>you are correct; the ARG is the final authority on the
>language definition and AdaCore is the final authority on the
>implementation of this language in GNAT (and AdaCore is a
>member of the ARG, too).
>In the real world, the DoD has
>nothing to do with Ada anymore except as a customer of AdaCore
>and IBM; IBM is, just like AdaCore, the final authority on
>their implementations of the language, and SUN and SGI have
>both delegated all their Ada activities to... AdaCore.
>
>--
>Ludovic Brenta.
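As promised above, here is a small hand-written sketch, in plain Ada
2012 with no language extensions, of the kind of code a "Parallelizer"
or a hypothetical "pragma Parallel" would have to generate from an
ordinary summation loop: one worker task per core reported by the
run-time, each summing its own slice of the data into its own slot of
a partial-results array, so the workers never contend for the same
memory. The names Parallel_Sum and Summer are only for illustration.

   with Ada.Text_IO;             use Ada.Text_IO;
   with System.Multiprocessors;  use System.Multiprocessors;

   procedure Parallel_Sum is

      Data : constant array (1 .. 10_000) of Long_Float :=
               (others => 1.0);

      Workers : constant Positive := Positive (Number_Of_CPUs);

      --  Each worker writes only its own slot, so no locking is needed.
      Partial : array (1 .. Workers) of Long_Float := (others => 0.0);

      task type Summer is
         entry Start (Id : Positive);
      end Summer;

      task body Summer is
         My_Id : Positive := 1;
         Sum   : Long_Float := 0.0;
         Chunk : constant Positive := Data'Length / Workers;
         First, Last : Positive;
      begin
         accept Start (Id : Positive) do
            My_Id := Id;
         end Start;

         --  This worker's contiguous slice; the last worker takes the
         --  leftover elements when the length does not divide evenly.
         First := (My_Id - 1) * Chunk + 1;
         Last  := (if My_Id = Workers then Data'Last else My_Id * Chunk);

         for I in First .. Last loop
            Sum := Sum + Data (I);
         end loop;
         Partial (My_Id) := Sum;
      end Summer;

      Total : Long_Float := 0.0;

   begin
      declare
         Pool : array (1 .. Workers) of Summer;
      begin
         --  Hand out the slice numbers ...
         for I in Pool'Range loop
            Pool (I).Start (I);
         end loop;
         --  ... and the block does not exit until every task in Pool
         --  has terminated, so Partial is complete past this point.
      end;

      for P of Partial loop
         Total := Total + P;
      end loop;
      Put_Line ("Total =" & Long_Float'Image (Total));
   end Parallel_Sum;

Even in this toy example, someone still had to pick the chunking and
decide how the partial results get combined; that is exactly the work
the compiler design is supposed to hide from the programmer.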