comp.lang.ada
* Re: Best language for parallel architectures [was: autoparallelize]
       [not found]     ` <3e9465bc$1@news.ucsc.edu>
@ 2003-04-10 23:14       ` Peter Hermann
  2003-04-11  8:40         ` Eugene Miya
  0 siblings, 1 reply; 2+ messages in thread
From: Peter Hermann @ 2003-04-10 23:14 UTC (permalink / raw)



[c.p. moderator: note the poster's crosspost.]

somebody (no insult intended) wrote:
> Some argue that those features further complicate handling parallelism.
                                 ^^^^^^^^^^^^^^^^^^
This one made me break my silent observation of a helpless discussion:
Ada has comfortable parallelism (working since 1983!) with automatic
dispatching of parallel tasks to different CPUs easily done by the compiler.
I am wondering why the so-called High-Performance-Fortran community
does not realize that fact.  Incredible.
This said by a Fortran veteran with ample experience in 2 very large
Fortran Finite Element systems.
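
A minimal Ada 95 sketch of the idea (illustrative names; which CPU runs
which task is entirely the business of the compiler and run time):

with Ada.Text_IO;

procedure Tasking_Demo is

   --  Each Worker object is an independent thread of control.  Nothing
   --  below says which CPU runs which task; the Ada run time decides.
   task type Worker (Id : Positive);

   task body Worker is
   begin
      Ada.Text_IO.Put_Line ("Worker" & Positive'Image (Id) & " running");
   end Worker;

   W1 : Worker (1);
   W2 : Worker (2);
   W3 : Worker (3);
   W4 : Worker (4);

begin
   null;   --  the main program completes only after all four tasks finish
end Tasking_Demo;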

-- 
--Peter Hermann(49)0711-685-3611 fax3758 ica2ph@csv.ica.uni-stuttgart.de
--Pfaffenwaldring 27 Raum 114, D-70569 Stuttgart Uni Computeranwendungen
--http://www.csv.ica.uni-stuttgart.de/homes/ph/
--Team Ada: "C'mon people let the world begin" (Paul McCartney)





* Re: Best language for parallel architectures [was: autoparallelize]
  2003-04-10 23:14       ` Best language for parallel architectures [was: autoparallelize] Peter Hermann
@ 2003-04-11  8:40         ` Eugene Miya
  0 siblings, 0 replies; 2+ messages in thread
From: Eugene Miya @ 2003-04-11  8:40 UTC (permalink / raw)


[c.p. moderator: note the poster's crosspost.]

In article <b73bno$ovu$1@news.uni-stuttgart.de>,
Peter Hermann  <ica2ph@sinus.csv.ica.uni-stuttgart.de> wrote:
>somebody (no insult intended) wrote:
>> Some argue that those features further complicate handling parallelism.
>                                 ^^^^^^^^^^^^^^^^^^
>This one made me break my silent observation of a helpless discussion:
>Ada has comfortable parallelism (working since 1983!) with automatic
>dispatching of parallel tasks to different CPUs easily done by the compiler.
>I am wondering why the so-called High-Performance-Fortran community
>does not realize that fact.  Incredible.
>This said by a Fortran veteran with ample experience in 2 very large
>Fortran Finite Element systems.

Some of this appears as a c.p. FAQ.
It's also why MPI and PVM got created when people could have just used TCP.

I recommend this reference: from this point, search forward (in the FAQ
material appended below) for the "Pancake" reference.

There is another reference which answers your Ada-style "why not rendezvous?"
point, but I cannot remember that specific paper.
The basic criticism is the poor scaling of operating-system-style fork-join,
which rendezvous resemble: fine for OS-style tasks or processes, but less
useful on massive machines (say 16K processors), where they take lots
of initialization time.
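
For readers who have not seen the construct, a minimal rendezvous sketch
(illustrative names).  Each accept is a synchronization point between
exactly two tasks, which is precisely the OS-style pairing criticized above:

procedure Rendezvous_Demo is

   --  A server task with one entry.  A caller of Put blocks until the
   --  server reaches a matching accept; the two then meet in a rendezvous.
   task Server is
      entry Put (Item : in Integer);
   end Server;

   task body Server is
      Total : Integer := 0;
   begin
      for I in 1 .. 2 loop
         accept Put (Item : in Integer) do
            Total := Total + Item;     --  runs while the caller waits
         end Put;
      end loop;
   end Server;

   task Client_A;
   task Client_B;

   task body Client_A is
   begin
      Server.Put (1);
   end Client_A;

   task body Client_B is
   begin
      Server.Put (2);
   end Client_B;

begin
   null;   --  main waits for Server and both clients to terminate
end Rendezvous_Demo;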

Find the Pancake issue of IEEE Computer (the article following her
article is also important).

If you want an Ada example, try:

%A P. G. Hibbard
%A A. Hisgen
%A others
%T Studies in Ada Style
%I Springer-Verlag
%C New York
%D 1981
%P 81-95
%K book, text,
%X Ada on Cm* for a PDE (Laplace) solver with simplistic boundary conditions.


Oh, here's another one:

%A Kenneth W. Dritz
%T Ada Solutions to the Salishan Problems
%B A Comparative Study of Parallel Programming Languages: The Salishan Problems
%E John T. Feo
%S Special Topics in Supercomputing
%V 6
%I North Holland
%C Amsterdam
%D 1992
%P 9-92
%K Hamming's Problem (primes), paraffins problem, skyline matrix,
doctor's office problem,

I recommend my friend John's book here.  It's a little pricey.
Only the APL community didn't respond.



Newsgroups: comp.parallel,comp.sys.super
Subject: [l/m 7/17/2002] network resources -- comp.parallel (10/28) FAQ
...
Keywords: 

Archive-Name: superpar-faq
Last-modified: 1 Oct 2001

10	Related news groups, archives, test codes, and other references
12	User/developer communities
14	References, biblios
16	
18	Supercomputing and Crayisms
20	IBM and Amdahl
22	Grand challenges and HPCC
24	Suggested (required) readings
26	Dead computer architecture society
28	Dedications
2	Introduction and Table of Contents and justification
4	Comp.parallel news group history
6	parlib
8	comp.parallel group dynamics



....



Where are the parallel applications?
------------------------------------
Where are the parallel codes?
-----------------------------
Where can I find parallel benchmarks?
=====================================

High performance computing has important historical roots with some
"sensititivity:"
1) Remember the first computers were used to calculate the trajectory of
artillery shells, crack enemy codes, and figure out how an atomic bomb
would work.  You are fooling yourself if you think those applications
have disappeared.
2) The newer users, the simulators and analysts, tend to work for industrial
and economic concerns which are highly competitive with one another.
You are fooling yourself if you think someone is going to just place their
industrial strength code here.  Or give it to you.

	So where might I find academic benchmarks?
		parlib@hubcap.clemson.edu
			send index
		netlib@ornl.gov
			send index from benchmark
		nistlib@cmr.ncsl.nist.gov
			send index

	See also:
		Why is this news group so quiet?

		Other news groups:
			sci.military.moderated

We also tend to have many "chicken-and-egg" problems.
	"We need a big computer."
	"We can design one for you.  Can you give us a sample code?"
	"No."
	...


Be extremely mindful of the sensitive nature of collecting benchmarks.


Obit quips:
	MIP: Meaningless Indicators of Performance

	Parallel MFLOPS: The "Guaranteed not to exceed speed."





Is parallel computing easier or harder than "normal, serial" programming?
=========================================================================

Ha.  Take your pick.  Jones says no harder.  Grit and many others say yes,
harder.  It's subjective.  Jones took "programming" to include
"systems programming."

In 1994, Don Knuth, in a "Fireside Chat" session at a conference, was asked
(not by me):
	"Will you write an 'Art of Parallel Programming'?"
He replied:
	"No."
And he has not.

One group of comp.parallel people holds that "parallel algorithm" is an
oxymoron: an algorithm is inherently serial by definition.


Knuth measures:
	bit	(1 bit)
	nyp	(2 bits)
	nybble	(4 bits)
	byte	(8 bits)
	wyde	(16 bits)
	tetrabyte	(32 bits)
	octobyte	(64 bits)



How can you scope out a supercomputing/parallel processing firm?
================================================================

Lack of software.
	What's your ratio of hardware to software people?
Lack of technical rather than marketing documentation.
	When will you have architecture and programming manuals?
Excessive claims about automatic parallelization.
	What languages are you targeting?




See Also: What's holding back parallel computer development?
	  ==================================================


	"I do not know what the language of the year 2000 will look like
	but it will be called FORTRAN."
				--Attributed to many people including
				Dan McCracken, Seymour Cray, John Backus...

All the Perlis Epigrams on this language:

42. You can measure a programmer's perspective by noting his
attitude on the continuing vitality of FORTRAN.
			--Alan Perlis (Epigrams)

70. Over the centuries the Indians developed sign language
for communicating phenomena of interest.  Programmers from
different tribes (FORTRAN, LISP, ALGOL, SNOBOL, etc.) could
use one that doesn't require them to carry a blackboard on
their ponies.
			--Alan Perlis (Epigrams)

85. Though the Chinese should adore APL, it's FORTRAN they
put their money on.
			--Alan Perlis (Epigrams)

See also #68 and #9.


FORTRAN | C | C++ | etc.
------------------------
Why don't you guys grow up and use real languages?
==================================================

The best way to answer this question is first to determine which languages
the questioner has in mind (such people are sometimes called 'language bigots').
What's a 'real' language?
This is a topic guaranteed to get yawns from the experienced folk;
you will only argue with newbies.

In two words, many of the existing application programs are:
	"Dusty decks."
You remember what a 'card deck' was, right?  These programs are non-trivial:
thousands and sometimes millions of lines of code whose authors have sometimes
retired and are not kept on retainer.

A missing key concept is "conversion."  Users don't want to convert their
programs (rewrite, etc.) to use other languages.

Incentives.

See also: Statement: Supercomputers are too important to run
        interactive operating systems,
        text editors, etc.


Don't language Converters like f2c help?
----------------------------------------

No.

Problems fall into several categories:
        1) Implementation-specific features:
                you have a software architecture to take advantage of certain
                hardware-specific features (doesn't have to be vectors,
                it could be I/O for instance).  A delicate tradeoff
                exists between using said features vs. not using them,
                for reasons of things like portability and long-term
                program life.  E.g., Control Data Q8xxxxxx-based
                subprogram calls, while having proper legal FORTRAN syntax,
                involved calls to hardware and software which didn't
                exist on other systems.  Some of these calls could be
                replaced with non-vector code, but why?  You impulse-purchased
                the machine for its speed to solve immediate problems.
        2) Some language features don't have precisely matching/
                corresponding semantics.  E.g., dynamic vs. static memory use.
        3) Etc.
These little "gotchas" are very annoying and frequently compound into
serious labor.



What's wrong with FORTRAN?  What are its problems for parallel computing?
--------------------------------------------------------------------------

The best non-language specific explanation of the parallel computing problem
was written in 1980 by Anita Jones on the Cm* Project.

Paraphrasing:
1) Lack of facilities to protect and ensure the consistency of results.
	[Determinism and consistency.]
2) Lack of adequate communication facilities.
	[What's wrong with READ and WRITE?]
3) Lack of synchronization (explicit or implicit) facilities.
	[Locks, barriers, and all those things.]
4) Exception handling (miscellaneous things).

The problems she cited were consistency, deadlock, and starvation.

FORTRAN's (from 1966 to current) problems:
	Side effects (mixed blessing: re: random numbers)
	GOTOs (the classic software engineering reason)
	Relatively rigid poor data structures
	Relatively static run time environment semantics

68. If we believe in data structures, we must believe in
independent (hence simultaneous) processing.  For why else
would we collect items within a structure?  Why do we
tolerate languages that give us the one without the other?
			--Alan Perlis (Epigrams)

9. It is better to have 100 functions operate on one data
structure than 10 functions on 10 data structures.
			--Alan Perlis (Epigrams)




A few people (Don Knuth included) would argue that the definition of an
algorithm contradicts certain aspects regarding parallelism.  Fine.
We can speak of parallel (replicated) data structures, but the problem of
programming languages and architectures covers more than education and math.
 
Programming language types (people) tend either to develop specialized
languages for parallelism or to add operating system features.
The issue is assuming determinism and consistency during a computation.
If you don't mind the odd inconsistent error, then you are lucky.
Such a person must clearly write perfect code every time.  The rest of
us must debug.
 
"Drop in" parallel speed-up is the Holy Grail of high performance computing.
The Holy Grail of programming and software engineering has been
"automatic programming."  If you believe we have either, then I have a
big bridge to sell you.
 
Attempts to write parallel languages fall into two categories:
completely new languages: with new semantics in some cases,
        e.g., APL, VAL, ID, SISAL, etc.
add-ons to old languages: with new semantics and hacked-on syntax.
        The latter fall into two types:
                OS-like constructs such as semaphores, monitors, etc.,
                        which tend not to scale ("Oh, yeah, you want
                        concurrency?  Well, let me help you with these....").
                        Starting with Concurrent Pascal, Modula, etc.
                        (a sketch of one such construct follows below).
                Constructs for message passing or barriers thought up
                        by numerical analysts (actually these are two
                        vastly different subtypes, oversimplified here).
                        Starting with "meta-adjective" FORTRAN.
 
Compilers and architectures ARE an issue (can be different):
One issue is programmability or ease of programming:
Two camps:
        parallel programming is no different than any other programming.
        [Jones is an early ref.]
and
        Bull shit! It's at least comparable in difficulty to
        "systems" programming.
        [Grit and McGraw is an early ref.]
 
 
Take a look at the use of the full-empty bit on Denelcor HEP memory
(and Tera).  This stuff is weird if you have never encountered it.
I'm going to use this as one example feature, but keep in mind that
other features exist.  You can find "trenches" war stories (mine fields for
Tera to avoid [they know it]).  Why?  Because the programmers are very
confident they (we) know what they (we) are doing.  BUZZT!
We (I mean Murphy) screw up.
 
The difficulty comes (side effects) when you deal with global storage
(to varying degrees if you have ever seen TASK COMMON).  You have
difficulty tracing the scope.  Architecture issues.
I like to see serial codes which have deadlock and other problems.
I think we should collect examples (including error messages) and put them
on display as warnings (tell that to the govt., ha!).
 
The use of atomic full-empty bits might be the parity bits of the future
(noting that the early supercomputers didn't have parity).
How consistent do you like your data?  Debug any lately?
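
A rough model of the full-empty idea, written as an Ada protected object
(an illustrative sketch of the blocking semantics only, not HEP or Tera
hardware, syntax, or performance): Write waits until the cell is empty,
Read waits until it is full and marks it empty again.

generic
   type Item is private;
package Full_Empty_Cells is

   --  One memory cell with a full/empty bit.  Readers and writers
   --  block until the bit is in the state they need.
   protected type Cell is
      entry Write (Value : in Item);   --  waits while the cell is full
      entry Read  (Value : out Item);  --  waits while the cell is empty
   private
      Contents : Item;
      Full     : Boolean := False;
   end Cell;

end Full_Empty_Cells;

package body Full_Empty_Cells is

   protected body Cell is

      entry Write (Value : in Item) when not Full is
      begin
         Contents := Value;
         Full     := True;
      end Write;

      entry Read (Value : out Item) when Full is
      begin
         Value := Contents;
         Full  := False;               --  consuming read, HEP style
      end Read;

   end Cell;

end Full_Empty_Cells;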
 
Don't get fooled that message passing is any safer.
See the Latest Word on Message Passing.
You can get just as confused.
 
Ideally, the programmer would LOVE to have all this stuff hidden.
I wonder when that will happen?
 
What makes us think that as we scale up processors, that we won't make
changes in our memory systems?  Probably because von Neumann memories
are so easily made.
 
Communication: moving data around consistently is tougher than most
people give credit, and it's not parallelism.  Floating point gets
too much attention.
 
Solutions (heuristic): education.  I think we need to make emulations of
older machines like the HEP available (public domain, for schools).
The problem is that I don't trust some of those emulators,
because I think we really need to run them on parallel machines,
and many are proprietary and written for sequential machines.
The schools potentially have a complex porting job.
I fear that old emulations have timing gotchas which never
got updated as the designs moved into hardware.
 
Just as PC software got hacked, some of these emulators could use some hacking.
 
Another thing I thought was cool: free compilers.  Tim Bubb made his APL
compiler available.  I'm not necessarily a fan of APL, but I moved it
over to Convex for people to play with during a trip back East.  I doubt
people (Steve: [don't fall off those holds with Dan belaying]) had time to
bring APL up on the Convex and get vector code generation working.
The information learned from that kind of experience needs to get fed
back to compiler writers.  That's not happening now.
 
Patrick and I spent an amusing Dec. evening standing outside the Convex HQ
pondering what it might be like raising a generation of APL (or parallel
language) hacker kids:
        either they will be very good, or
        they will be confused as hell.
 
 
For a while the ASPLOS conferences (architecture support for PLs and OSes)
were pretty great.  I have not been to one lately.
 
Theory alone won't cut it.  You want a Wozniak.  This is debatable, of course.
He needs the right tools.  They don't exist now.  Maybe.
 
 
Math won't be enough.
To quote Knuth: mathematicians don't understand the cost of operations.
(AMM)
 
Perlis had that too (for the LISP guys, not directly for parallelism, in
the FAQ).
 
Beware the Turing tarpit.
 
 
Compilers reflect the architecture, and there is some influence on
architecture by compilers, but that vicious circle doesn't have enough
components to be worth considering.  Blind men attempting to describe
an elephant.


>   The computer languages field seems to have no Dawkins, no Gould, no
>   popularizers and not even any very good text books. Every one of the
>   books I have tried has come across as boring, poorly structured, making
>   no obvious attempt to justify things and completely unwilling to stand by
>   itself.
 
That's largely because Dawkins and Gould are making observations.
They are not attempting to construct things which have never existed before.
I say that after reading Gould's textbook (O&P, very good),
not his popular books (like Mismeasure of Man), which are enjoyable.
 
Wirth and Ichbiah are pretty bright guys, but they are not above
making mistakes.  Niklaus himself wrote a letter/article when he forgot
an important piece of syntax in Pascal (the catch-all case in the multiway
branch: a.k.a. "OTHERWISE" in some compilers, "ELSE" in other compilers,
and a slew of other keywords having identical semantics).  It was almost
impossible to get this simple (ha!) fix added to the language.
Ritchie is pretty bright, too.
 
I recommend The History of Programming Languages II (HOPL-II) published
by ACM Press/Addison-Wesley.
The Sunnyvale public library has a copy (I was impressed).
 
Backus is also bright.  Bill Wulf, in conversation with me, suggested that
the Griswolds are also bright.  Oh, a LISP cross-post: I occasionally
see John McCarthy at Stanford and Printers Inc.  John is also quite bright.
I signed his petition against Computing the Future.
All bright guys, and they all learned (made mistakes along the way).
 
The brightest, most inspired language designers I can think of might be
Alan Kay and Adele Goldberg and their work on Smalltalk-80.  If you are
using a windowing system, you are most likely using a system inspired by
them.  There is a very impressive chapter in HOPL-II about them (see the
paragraphs referring to "management").
 
You merely need a decent library, or a 6-year IEEE membership, to get the gist.
Two articles stand out (one comes from the MIT AI Lab [and Stanford]).
The two articles stand as an interesting contrast: one is a perfect example
of the problems cited by the other:
 
The order in which you read these articles might highly influence your
perception, so I will cite them in page order.  Fair enough?
        [The annotations are NOT all mine (collected over time).
        In particular see the last sentence of the first annotation
        to the first article.]
 
%A Cherri M. Pancake
%A Donna Bergmark
%T Do Parallel Languages Respond to the Needs of Scientific Programmers?
%J Computer
%I IEEE
%V 23
%N 12
%D December 1990
%P 13-23
%K fortran, shared memory, concurrency,
%X This article is a must read about the problems of designing, programming,
and "marketing" parallel programming languages.
It does not present definitive solutions but is a descriptive
"state-of-the-art" survey of the semantic problem.  The paper reads like
the "war of the sexes."  Computer scientist versus computational scientist,
some subtle topics (like shared memory models) are mentioned.  An
excellent table summarizes the article, but I think there is one format error.
[e.g. of barriers versus subroutines.]
It is ironically followed by an article from computer scientists typifying
the authors' thesis.
%X Points out a hierarchical (4-level) model of "model-making"
very similar to Rodrigue's (LLNL) parallelism model (real world ->
math theory -> numerical algorithms -> code).
%X Table 1:
Category        For scientific researcher       For computer scientist
*
Convenience
                Fortran 77 syntax               Structured syntax and abstract
                                                  data types
                Minimal number of new           Extensible constructs
                  constructs to learn
                Structures that provide         Less need for fine-grain
                 low-overhead parallelism        parallelism
Reliability
                Minimal number of changes to    Changes that provide
                  familiar constructs             clarification
                No conflict with Fortran models Support for nested scoping
                  of data storage and use         and packages
                Provision of deterministic      Provision of non-deterministic
                  high-level constructs           high-level constructs
                  (like critical sections,        (like parallel sections,
                   barriers)                       subroutine invocations)
                Syntax that clearly             Syntax distinctions less
                 distinguishes parallel from     critical
                  serial constructs
Expressiveness
                Conceptual models that support  Conceptual models adaptable to
                  common scientific programming   wide range of programming
                  strategies                      strategies
                High-level features for         High-level features for
                  distributing data across        distributing work across
                  processors                      processors
                Parallel operators for array/   Parallel operators for abstract
                  vector operands                 data types
                Operators for regular patterns  Operators for irregular
                  of process interaction          patterns of process
                                                  interaction
Compatibility
                Portability across range of     Vendor specificity or
                  vendors, product lines          portability to related
                                                 machine models
                Conversion/upgrading of         Conversion less important
                  existing Fortran code           (formal maintenance
                                                   procedures available)
                Reasonable efficiency on most   Tailorability to a variety of
                  machine models                  machine models
                Interfacing with visualization  Minimal visualization
                  support routines                support
                Compatibility with parallel     Little need for "canned"
                  subroutine libraries            routines
 
%A Andrew Berlin
%A Daniel Weise
%T Compiling Scientific Code Using Partial Evaluation
%J Computer
%I IEEE
%V 23
%N 12
%D December 1990
%P 25-37
%r AIM 1145
%i MIT
%d July 1989
%O pages 21 $3.25
%Z Computer Systems Lab, Stanford, University, Stanford, CA
%d March 1990
%O 31 pages......$5.20
%K partial evaluation, scientific computation, parallel architectures,
parallelizing compilers,
%K scheme, LISP,
%X Scientists are faced with a dilemma: Either they can write abstract
programs that express their understanding of a problem, but which do
not execute efficiently; or they can write programs that computers can
execute efficiently, but which are difficult to write and difficult to
understand.  We have developed a compiler that uses partial evaluation
and scheduling techniques to provide a solution to this dilemma.
%X Partial evaluation converts a high-level program into a low-level program
that is specialized for a particular application. We describe a compiler that
uses partial evaluation to dramatically speed up programs. We have measured
speedups over conventionally compiled code that range from seven times faster
to ninety one times faster. Further experiments have also shown that by
eliminating inherently sequential data structure references and their
associated conditional branches, partial evaluation exposes the
low-level parallelism inherent in a computation. By coupling partial evaluation
with parallel scheduling techniques, this parallelism can be exploited for
use on heavily pipelined or parallel architectures. We have demonstrated this
approach by applying a parallel scheduler to a partially evaluated
program that simulates the motion of a nine body solar system.
 
....


Why does the computer science community insist upon writing these
==================================================================
esoteric papers on theory which no one uses anyway?
===================================================
Why don't the computer engineers just throw the chips together?
===============================================================

It's the communications, stupid!

CREW, EREW, etc.etc.


Over the years, many pieces of email were exchanged in private by
applications people complaining about the parallel processing community.
Topics which appear especially to irk applications people include:

	Operating systems.
	New programming languages.
	Multistage interconnection networks.
	Load balancing.

This is a short list; I know that I will remember other topics
and other people will remind me (anonymously).
Of course the applications people would like "drop-in" automatic
parallelization (it will come right after we get drop-in "automatic
programming").
A.k.a. the "anything to get my program to run faster" crowd.
	Short of added cost.

One noted early paper paraphrased:
	If a cook can parallel process, why can't computer people?


Boy, you guys sure argue a lot.
===============================

It's academic.
The bark so far is worse than the bite.
The name calling can be found in various parts of the literature
(e.g., "Polo players of science...").

Many misunderstandings have evolved:

``An exciting thing was happening at Livermore.  They were building a
supercomputer [the LLNL/USN S-1], and I will certainly confess to being
a cycle junkie.
Computers are never big enough or fast enough.  I have no patience at
all with these damned PC's.  What I didn't realize when I went over to
Livermore was that as long as physicists are running the show you're
never going to get any software.  And if you don't get any software,
you're never going to get anywhere.  Physicists have the most abysmal
taste in programming environments.  It's the software equivalent of a
junk-strewn lab with plug boards, bare wires and alligator clips.
They also seem to think that computers (and programmers for that
matter) are the sorts of things to which you submit punched card decks
like you did in the mid-sixties.''

--- Bill Gosper, in ``More Mathematical People: Contemporary Conversations''
               (Donald J. Albers et. al., Eds.; Harcourt Brace Jovanovich)
	[Gosper is a well-known LISPer.]

Computing the future: a broader agenda for computer science and engineering /
Juris Hartmanis and Herbert Lin, editors ;
Committee to Assess the Scope and Direction of Computer Science
and Technology, Computer Science and Telecommunications Board,
Commission on Physical Sciences, Mathematics, and Applications,
National Research Council. Washington, D.C. : National Academy Press, 1992

Petition by John McCarthy, John Backus, Don Knuth, Marvin Minsky,
Bob Boyer, Barbara Grosz, Jack Minker, and Nils Nilsson rebutting
"Computing the future," 1992.
	http://www-formal.stanford.edu/jmc/petition.html

The FAQ maintainer is one of the signatories of this petition and one of
the few people apparently to read the Report.  The Report has a good set of
references further citing problems.

Some of these problems go away when checkbooks are brought out.

Don't let anyone tell you there isn't a gulf between CS and other activities;
in some cases it is better than this, and in other cases it's worse.


Perhaps here's another FAQ? ;)
Q: How do experts debug parallel code?
======================================
A1: Debug? Just don't make any programming mistakes.
A2: Use print statements. 

Debugging
Yep, we discuss it.  No good ways.
People who say they have good ways have heavy context.

What does the FAQ recommend?

Long timers point to A2.  I know people who pride themselves on A1,
they really believe they do it.
One reader so far recommends Fox's textbooks (1994 in particular).

This section needs more thrashing, but time is what I don't have.
You can gauge a person's perspective by how advanced they believe
debugging technology is.
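
For what it's worth, a tiny sketch of A2 (illustrative names;
Ada.Task_Identification is the standard Ada 95 way to find out which task
is doing the printing):

with Ada.Text_IO;
with Ada.Task_Identification;

procedure Debug_Demo is

   --  "Use print statements": tag each trace line with the emitting task
   --  so interleaved output from parallel tasks can be untangled later.
   --  (In real code you would funnel Trace through a protected object to
   --  keep lines from different tasks from interleaving mid-line.)
   procedure Trace (Msg : String) is
      use Ada.Task_Identification;
   begin
      Ada.Text_IO.Put_Line (Image (Current_Task) & ": " & Msg);
   end Trace;

   task type Worker;

   task body Worker is
   begin
      Trace ("starting");
      --  real work would go here
      Trace ("done");
   end Worker;

   Crew : array (1 .. 3) of Worker;

begin
   Trace ("main waiting for workers");
end Debug_Demo;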


What's holding back parallel computer development?
==================================================

Software, in one word.

Don't hold your breath for "automatic parallelization."
[Yes, we have a tendency to butcher language in this field.
I have a Grail to sell you.]

See my buzzword: "computational break-even" (like break-even in
controlled fusion research).  We have yet to reach computational break-even.

......

Articles: comp.parallel
Administrative: eugene@cse.ucsc.edu.SNIP
Archive: http://groups.google.com/groups?hl=en&group=comp.parallel





end of thread, other threads:[~2003-04-11  8:40 UTC | newest]

Thread overview: 2+ messages
-- links below jump to the message on this page --
     [not found] <3E92DFBE.F4920623@imm.rwth-aachen.de>
     [not found] ` <b6vfuu$313u$1@news2.engin.umich.edu>
     [not found]   ` <Q2DNfLAeK+k+EwB3@mooremusic.org.uk>
     [not found]     ` <3e9465bc$1@news.ucsc.edu>
2003-04-10 23:14       ` Best language for parallel architectures [was: autoparallelize] Peter Hermann
2003-04-11  8:40         ` Eugene Miya
