From: "Randy Brukardt" <randy@rrsoftware.com>
Subject: Re: basic question on Ada tasks and running on different cores
Date: Wed, 16 May 2012 19:11:31 -0500
Message-ID: <jp1fnm$v56$1@munin.nbi.dk>
In-Reply-To: jp0ikh$b7q$1@speranza.aioe.org
"NatarovVI" <4KCheshireCat@gmail.com> wrote in message
news:jp0ikh$b7q$1@speranza.aioe.org...
>> Sure, but I'm skeptical that the vast majority of programmers can deal
>> with any programming language that has the level of non-determinism
>> needed to support useful parallelism. Functional programming languages
>> make this worse, if anything.
>
> and again - "parallelism is not concurrency"
> if first-course CMU students can write correct parallel programs,
> everybody can.
Baloney. First-course students don't have to worry about any real-world
constraints. It's easy to write any kind of program in that environment (it
was much easier to write programs of all sorts when I was a student).
> the right abstractions are the key.
> read about Robert Harper's experience at existentialtypes.wordpress.com
I'm dubious of the pronouncements of anyone whose income (in this case,
research grants) depends on people believing those pronouncements. In any
case, those who ignore history are doomed to repeat it, and there has been
research into these areas for 50 years. No one yet has found the "holy
grail" of "free parallelism", and moreover nothing here is new.
Besides, 90% of most applications is I/O. The computation part of most
programs is quite small. If all you can do is speed those things up, it's
not going to make a significant difference to most programs. There are, of
course, a few parts of programs that do extensive computations, but often
these are in libraries where it makes sense to spend special efforts.
>> Secondly, I'm skeptical that any language attempting fine-grained
>> parallelism can ever perform anywhere near as well as a language using
>> coarse parallelism (like Ada) and deterministic sequential semantics for
>> the rest. Any parallelism requires scheduling overhead, and that
>> overhead is going to be a lot higher for the fine-grained case, simply
>> because there is a lot more of it needed (even if it is conceptually
>> simpler).
>
> you really need 100% of performance?
Of course, for the most critical computations. If you don't need 100% of
performance, then you surely don't need parallel execution - what could
possibly be the point?? Using two cores instead of one to do a computation
that is not critical is insane -- you end up using more power to do the same
work in the same time -- meaning more heat, lower battery life, etc.
> maybe you'd like to write in asm?))
If necessary. But high-quality compilers can (or should) generate better
code than you can write by hand, because they can take into account many
more variables than a hand-programmer can. So the vast majority of the time,
writing critical code in Ada is enough. (But, yes, Janus/Ada does have a few
hand-coded ASM lookup loops; those cut compilation time by 25% when they
were implemented. [No idea if they're needed anymore, of course; the CPU and
I/O balance has changed a lot in the last 30 years.])
The problem is, if you're trying to implement fine-grained parallelism, you
have to surround that code with some sort of scheduling mechanism, and that
overhead means you aren't going to get anywhere near 100% of the CPU at any
point. That means you'll have to find a lot of extra parallelism just to
break even on that overhead.
> seriously, SISAL and NESL can automatically extract a good part of the data
> parallelism, no magic here. and this will be enough for most programmers.
Most programmers need no parallelism at all; they just need a good graphics
library that uses parallelism to do their drawing. (And possibly a few
other libraries.) The point is, most programmers are writing I/O intensive
applications that do little computation (think of the apps on your phone,
for example).
> (higher-order functions will be a requirement here, extending data
> parallelism to operations on functions)
Certain special cases surely exist, and those can be implemented in a
compiler for almost any language. (At the very least, by well-defined
libraries.) These things surely could be done in Ada, either via libraries
(look at Brad Moore's work) or via language-defined primitives. As I've said
all along, I'm very skeptical that anything further will prove practical.
> p.s. best scheduling is no scheduling...
Right. But that's only possible using vector instructions and the like;
that's a *very* small set of all computing. (I doubt that I have ever
written a program -- in 30+ years -- that could have been helped by a vector
instruction.) Any other sort of parallelism will have to be implemented by
some sort of concurrency, and that requires some sort of scheduling.
>> going on today :-), I don't see it happening. I wouldn't mind being
>
> it will happen)) and Ada is not the only language without need for
> debugging. Standard ML also.
I can believe that; no one has a clue what an ML program means, so there is
no need to debug it - discarding makes more sense. ;-) Ada is the only true
syntax. :-)
Randy.