From: "Randy Brukardt"
Newsgroups: comp.lang.ada
Subject: Re: basic question on Ada tasks and running on different cores
Date: Wed, 16 May 2012 19:11:31 -0500
Organization: Jacob Sparre Andersen Research & Innovation
References: <30585369.219.1336470732142.JavaMail.geo-discussion-forums@ynbq3>

"NatarovVI" <4KCheshireCat@gmail.com> wrote in message
news:jp0ikh$b7q$1@speranza.aioe.org...
>> Sure, but I'm skeptical that the vast majority of programmers can deal
>> with any programming language that has the level of non-determinism
>> needed to support useful parallelism. Functional programming languages
>> make this worse, if anything.
>
> and again - "parallelism is not concurrency"
> if first-course CMU students can write correct parallel programs,
> everybody can.

Baloney. First-course students don't have to worry about any real-world
constraints. It's easy to write any kind of program in that environment
(it was much easier to write programs of all sorts when I was a student).

> right abstractions is the key.
> read Robert Harper experience at existentialtypes.wordpress.com

I'm dubious of the pronouncements of anyone whose income (in this case,
research grants) depends on people believing those pronouncements. In any
case, those who ignore history are doomed to repeat it, and there has been
research into these areas for 50 years. No one yet has found the "holy
grail" of "free parallelism", and nothing here is new.

Besides, 90% of most applications is I/O. The computation part of most
programs is quite small. If all you can do is speed those things up, it's
not going to make a significant difference to most programs. There are, of
course, a few parts of programs that do extensive computations, but those
are often in libraries, where it makes sense to spend special effort.

>> Secondly, I'm skeptical that any language attempting fine-grained
>> parallelism can ever perform anywhere near as well as a language using
>> coarse parallelism (like Ada) and deterministic sequential semantics for
>> the rest. Any parallelism requires scheduling overhead, and that
>> overhead is going to be a lot higher for the fine-grained case, simply
>> because there is a lot more of it needed (even if it is conceptually
>> simpler).
>
> you really need 100% of performance?

Of course, for the most critical computations. If you don't need 100% of
the performance, then you surely don't need parallel execution -- what
could possibly be the point? Using two cores instead of one to do a
computation that is not critical is insane -- you end up using more power
to do the same work in the same time, meaning more heat, lower battery
life, etc.

> maybe you like write in asm?))

If necessary. But high-quality compilers can (or should) generate better
code than you can write by hand, because they can take into account many
more variables than a hand-programmer can. So the vast majority of the
time, writing critical code in Ada is enough. (But, yes, Janus/Ada does
have a few hand-coded ASM lookup loops; those cut compilation time by 25%
when they were implemented. [No idea if they're still needed, of course;
the CPU and I/O balance has changed a lot in the last 30 years.])

The problem is, if you're trying to implement fine-grained parallelism,
you have to surround that code with some sort of scheduling mechanism, and
that overhead means you aren't going to get anywhere near 100% of the CPU
at any point. That means you'll have to find a lot of extra parallelism
just to break even.

> seriously, SISAL and NESL can automatically get good part of data
> parallelism, no magic here. and this will be enought for most programmers.

Most programmers need no parallelism at all; they just need a good graphics
library that uses parallelism to do their drawing. (And possibly a few
other libraries.) The point is, most programmers are writing I/O-intensive
applications that do little computation (think of the apps on your phone,
for example).

> (high order functions will be requirement here, extending data
> parallelism to operations on functions)

Certain special cases surely exist, and those can be implemented in a
compiler for almost any language. (At the very least, by well-defined
libraries.) These things surely could be done in Ada, either via libraries
(look at Brad Moore's work) or via language-defined primitives. As I've
said all along, I'm very skeptical that anything further will prove
practical.
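To make the coarse-grained, library-style approach concrete, here's a
rough, untested sketch of the sort of thing I mean: a fixed handful of
tasks, each summing one slice of an array, combining their partial results
through a protected object. (The names are mine, invented for the example;
this is not Brad Moore's actual interface, and a real library would be
generic over the element type and deal with chunk remainders.)

   with Ada.Text_IO;

   procedure Chunked_Sum is

      Worker_Count : constant := 4;

      Data : array (1 .. 100_000) of Long_Float := (others => 1.0);

      --  100_000 divides evenly by 4; a real version would handle the
      --  remainder (and pick Worker_Count from the number of cores).
      Chunk : constant Integer := Data'Length / Worker_Count;

      protected Total is
         procedure Add (X : in Long_Float);
         function Value return Long_Float;
      private
         Sum : Long_Float := 0.0;
      end Total;

      protected body Total is
         procedure Add (X : in Long_Float) is
         begin
            Sum := Sum + X;
         end Add;
         function Value return Long_Float is
         begin
            return Sum;
         end Value;
      end Total;

      task type Worker is
         entry Start (First, Last : in Positive);
      end Worker;

      task body Worker is
         Lo, Hi  : Positive;
         Partial : Long_Float := 0.0;
      begin
         accept Start (First, Last : in Positive) do
            Lo := First;
            Hi := Last;
         end Start;
         for I in Lo .. Hi loop
            Partial := Partial + Data (I);
         end loop;
         Total.Add (Partial);  --  one protected call per task, not per element
      end Worker;

   begin
      declare
         Workers : array (1 .. Worker_Count) of Worker;
      begin
         for W in Workers'Range loop
            Workers (W).Start (First => (W - 1) * Chunk + 1,
                               Last  => W * Chunk);
         end loop;
      end;  --  this block doesn't finish until every worker has terminated
      Ada.Text_IO.Put_Line (Long_Float'Image (Total.Value));
   end Chunked_Sum;

The scheduling cost here is a handful of task creations and four protected
calls for the entire job -- that's the sort of coarse granularity that can
actually pay for itself.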
> p.s. best scheduling is no scheduling...

Right. But that's only possible using vector instructions and the like,
and that's a *very* small part of all computing. (I doubt that I have ever
written a program -- in 30+ years -- that could have been helped by a
vector instruction.) Any other sort of parallelism will have to be
implemented by some sort of concurrency, and that requires some sort of
scheduling.

>> going on today :-), I don't see it happening. I wouldn't mind being
>
> it will happen)) and Ada - not only language without need for debugging.
> Standard ML also.

I can believe that; no one has a clue what an ML program means, so there
is no need to debug it -- discarding makes more sense. ;-)

Ada is the only true syntax. :-)

                          Randy.
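P.S. Since this thread started from the basic question of getting Ada
tasks onto different cores: Ada 2012 has System.Multiprocessors and the
CPU aspect for exactly that. A rough, untested sketch of "one task pinned
to each core" (the task and procedure names are made up for the example):

   with Ada.Text_IO;
   with System.Multiprocessors;  use System.Multiprocessors;

   procedure One_Per_Core is

      --  Each object of this type runs on the core given by its
      --  discriminant.
      task type Pinned (Core : CPU) with CPU => Core;

      task body Pinned is
      begin
         null;  --  the real per-core work would go here
      end Pinned;

      type Pinned_Access is access Pinned;

      Workers : array (1 .. Number_Of_CPUs) of Pinned_Access;

   begin
      Ada.Text_IO.Put_Line ("Cores:" & CPU'Image (Number_Of_CPUs));
      for C in Workers'Range loop
         Workers (C) := new Pinned (Core => C);
      end loop;
      --  The procedure doesn't return until all of the allocated tasks
      --  have terminated.
   end One_Per_Core;

If you don't care which core each task lands on, you don't need any of
this; just declare the tasks and let the runtime spread them out.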