comp.lang.ada
* Ada-friendly MPI
@ 2005-06-22  4:48 *-> That Guy Downstairs <-*
  2005-06-22  8:28 ` Pascal Obry
  2005-06-23  8:03 ` jtg
  0 siblings, 2 replies; 11+ messages in thread
From: *-> That Guy Downstairs <-* @ 2005-06-22  4:48 UTC (permalink / raw)


Hi all,
I am looking at getting into a LAM-MPI system running Ada95 on Linux.
Does anyone have any learning / reference info suggestions?
Books? PDFs? etc?

Thanks.






* Re: Ada-friendly MPI
  2005-06-22  4:48 Ada-friendly MPI *-> That Guy Downstairs <-*
@ 2005-06-22  8:28 ` Pascal Obry
  2005-06-23 13:08   ` jtg
  2005-06-23  8:03 ` jtg
  1 sibling, 1 reply; 11+ messages in thread
From: Pascal Obry @ 2005-06-22  8:28 UTC (permalink / raw)



"*-> That Guy Downstairs <-*" <jtjammer@hotmail.com> writes:

> Hi all,
> I am looking at getting into a LAM-MPI system running Ada95 on Linux.

Why not use the Distributed Systems Annex (Annex E) and Ada's built-in
parallelism support?

Pascal.

-- 

--|------------------------------------------------------
--| Pascal Obry                           Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--|              http://www.obry.net
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver wwwkeys.pgp.net --recv-key C1082595




* Re: Ada-friendly MPI
  2005-06-22  4:48 Ada-friendly MPI *-> That Guy Downstairs <-*
  2005-06-22  8:28 ` Pascal Obry
@ 2005-06-23  8:03 ` jtg
  1 sibling, 0 replies; 11+ messages in thread
From: jtg @ 2005-06-23  8:03 UTC (permalink / raw)


*-> That Guy Downstairs <-* wrote:
> Hi all,
> I am looking at getting into a LAM-MPI system running Ada95 on Linux.
> Does anyone have any learning / reference info suggestions?
> Books? PDFs? etc?
> 

I was looking for such a solution for a long time
until the obvious came to mind: MPI calls can be executed from C code,
and everything else can be in Ada.
You can write a one-line function in C for every MPI call you want to use,
gather these functions in a separate .c file, and call them
from Ada code. Simple, quick, and portable (as long as you maintain the right
.c file for each platform).
What is more, you can use all the abundant learning/reference material
intended for C users.
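A minimal sketch of what such a thin binding might look like. This is not
from any existing library: the wrapper names (c_mpi_init, c_mpi_comm_rank,
etc.) and the package name MPI_Thin are made up for illustration; only the
underlying MPI_Init/MPI_Comm_rank/MPI_Comm_size/MPI_Finalize calls are real
MPI API.

```ada
--  Hypothetical thin Ada binding over one-line C wrappers.
--  The matching C file would contain, e.g.:
--    void c_mpi_init (void)      { MPI_Init (NULL, NULL); }
--    int  c_mpi_comm_rank (void) { int r;
--                                  MPI_Comm_rank (MPI_COMM_WORLD, &r);
--                                  return r; }
with Interfaces.C;

package MPI_Thin is

   procedure Init;
   pragma Import (C, Init, "c_mpi_init");

   function Comm_Rank return Interfaces.C.int;
   pragma Import (C, Comm_Rank, "c_mpi_comm_rank");

   function Comm_Size return Interfaces.C.int;
   pragma Import (C, Comm_Size, "c_mpi_comm_size");

   procedure Finalize;
   pragma Import (C, Finalize, "c_mpi_finalize");

end MPI_Thin;
```

Compile the .c file with mpicc, the Ada side with gnatmake, and link them
together; the Ada code then never touches mpi.h directly.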




* Re: Ada-friendly MPI
  2005-06-22  8:28 ` Pascal Obry
@ 2005-06-23 13:08   ` jtg
  2005-06-23 16:37     ` Pascal Obry
  0 siblings, 1 reply; 11+ messages in thread
From: jtg @ 2005-06-23 13:08 UTC (permalink / raw)


Pascal Obry wrote:
> "*-> That Guy Downstairs <-*" <jtjammer@hotmail.com> writes:
> 
> 
>>Hi all,
>>I am looking at getting into a LAM-MPI system running Ada95 on Linux.
> 
> 
> Why not use the Distributed Annex (Annex-E) and the Ada's built-in
> parallelism support ?
> 

As far as I am concerned Ada Distributed Annex is non-standard. :-(
All three supercomputers that I had the opportunity to use supported MPI
only, or MPI/PVM. I had no clue how to use Ada's built-in parallelism
in these environments.
But I am interested in the subject. Do you know of environments
where the Ada Distributed Annex is supported (and used)? Do you use it
yourself?




* Re: Ada-friendly MPI
  2005-06-23 13:08   ` jtg
@ 2005-06-23 16:37     ` Pascal Obry
  2005-06-23 20:26       ` jtg
  0 siblings, 1 reply; 11+ messages in thread
From: Pascal Obry @ 2005-06-23 16:37 UTC (permalink / raw)



jtg <jtg77@poczta.onet.pl> writes:

> As far as I am concerned Ada Distributed Annex is non-standard. :-(

That depends on what you mean by standard. Ada 95 is an ISO standard and the
distributed annex is part of it :)

> All three supercomputers, that I had opportunity to use, supported MPI

What are your supercomputers? Are they clusters?

> only or MPI/PVM. I had no clue how to use Ada built-in parallelism
> in these environments.
> But I am interested in the subject. Do you know about environments
> where Ada Distributed Annex is supported (and used)? Do you use it
> yourself?

I've used it myself for experimentation on Windows, GNU/Linux and Solaris.

If your supercomputers are running GNU/Linux I don't see why this wouldn't
work. What compiler are you using?

Pascal.





* Re: Ada-friendly MPI
  2005-06-23 16:37     ` Pascal Obry
@ 2005-06-23 20:26       ` jtg
  2005-06-23 20:59         ` Pascal Obry
  0 siblings, 1 reply; 11+ messages in thread
From: jtg @ 2005-06-23 20:26 UTC (permalink / raw)


Pascal Obry wrote:
> jtg <jtg77@poczta.onet.pl> writes:
> 
> 
>>As far as I am concerned Ada Distributed Annex is non-standard. :-(
> 
> 
> Depending on what you mean by standard. Ada 95 is an ISO standard and the
> distributed annex is part of it :)
> 
So it's a pun: a standard that is not standard :-)
(An ISO standard that is not a standard feature in supercomputer environments.)

> 
>>All three supercomputers, that I had opportunity to use, supported MPI
> 
> 
> What are you supercomputers ? Is that a cluster ?
> 
1. Cluster 15 x  POWER2 RS/6000
2. Cluster 128 x PIII Xeon
3. I don't remember.
But all that was several years ago :-)

> 
> I've used it myself for experimentation on Windows, GNU/Linux and Solaris.
> 
> If your supercomputers is running GNU/Linux I don't see why this won't be
> working. What compiler are you using ?
> 
In a supercomputer environment many problems arise when you want to use Ada.
There are very fast communication links between processors, and the MPI libraries
are specially optimized for the hardware. You would need Ada libraries tuned for
that hardware, and I haven't seen any such library even mentioned.
Another problem is infrastructure. You need libraries, you need host processes
running at every node (for starting your subtasks), you need a job spooling
system, etc. You need proper configuration. But everything is suited to MPI.
For example: when you put your parallel job in the queue, the spooler automatically
starts subtasks on some nodes as they become available,
establishes MPI communication between them, and assigns an MPI number (rank) to every
subtask. Now, how do you begin computations using Ada's built-in parallelism?
I don't even know how to establish Ada-style communication between the nodes.
And all these problems vanish when you use a simple C binding for the MPI library...




* Re: Ada-friendly MPI
  2005-06-23 20:26       ` jtg
@ 2005-06-23 20:59         ` Pascal Obry
  2005-06-23 22:14           ` jtg
  0 siblings, 1 reply; 11+ messages in thread
From: Pascal Obry @ 2005-06-23 20:59 UTC (permalink / raw)



jtg <jtg77@poczta.onet.pl> writes:

> In supercomputer environment many problems arise when you want to use Ada.
> There are very fast communication links between processors and MPI libraries
> are especially optimized for hardware. You should have Ada libraries for
> the hardware and I haven't seen any such library even mentioned.

Have you done any testing? I know some people using the Ada distributed
annex on an embedded platform with something like a hundred nodes for HPC
(neutronics) without problems.

> Another problem is infrastructure. You need libraries, you need running host
> processes at every node (for starting your subtasks), you need task spooling
> system etc. You need proper configuration. But everything is suited for MPI.
> For example: When you put your parallel task on queue, the spooler
> automatically

What do you call a parallel task? A process, a thread? Are you talking about
an executable (process) run with some batch language, LSF/PBM? If so I don't
see the difference from Ada.

> starts subtasks on some nodes as they become available,
> establishes MPI communication between them and assigns MPI number to every
> subtask. Now, how to begin computations using Ada built-in parallelism?

I'm not talking about parallelism, which is more like OpenMP (shared memory)
than MPI (distributed memory). I'm talking about the Ada distributed annex.

> I don't even know how to establish Ada-style communication between them.

That's automatic. A program is a set of partitions (executables); each partition
communicates with the others using Ada's built-in Partition Communication
Subsystem.
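For context, the remote-callable side of this needs only a categorization
pragma. A rough Annex E sketch (the package name Worker and its procedure
are hypothetical, not from any real code in this thread):

```ada
--  Any subprogram declared in a package marked Remote_Call_Interface
--  can be called from another partition; the Partition Communication
--  Subsystem handles marshalling and transport behind the scenes.
package Worker is
   pragma Remote_Call_Interface;

   procedure Compute (Input : in Integer; Result : out Integer);

end Worker;
```

A client partition simply `with`s Worker and calls Worker.Compute; whether
that call crosses the network depends on how the partitions were configured
at build time, not on the source code.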

> And all the problems vanish when you use simple C binding for MPI library...

Then use the same simple binding for Ada. If the binding is simple you should
be able to create the Ada counterpart without problem.

You did not answer my question, which OS are you using ?

Pascal.





* Re: Ada-friendly MPI
  2005-06-23 20:59         ` Pascal Obry
@ 2005-06-23 22:14           ` jtg
  2005-06-24  6:39             ` Pascal Obry
  0 siblings, 1 reply; 11+ messages in thread
From: jtg @ 2005-06-23 22:14 UTC (permalink / raw)


Pascal Obry wrote:

> jtg <jtg77@poczta.onet.pl> writes:
> 
> 
>>In supercomputer environment many problems arise when you want to use Ada.
>>There are very fast communication links between processors and MPI libraries
>>are especially optimized for hardware. You should have Ada libraries for
>>the hardware and I haven't seen any such library even mentioned.
> 
> 
> Have you done some testing ? I know some people using the Ada distributed
> annex on some embedded platform with something like hundred of nodes for HPC
> (neutronics) without problem.
> 
No performance testing. I didn't even know how to run it.

> 
> 
> What do you call a parallel task ? A process a thread ?
Errr... I should have said: job.

> Are you talking about
> an executable (process) and using some batch langauge LSF/PBM ? If so I don't
> see the difference with Ada.
LSF/PBM

> 
> 
>>starts subtasks on some nodes as they become available,
>>establishes MPI communication between them and assigns MPI number to every
>>subtask. Now, how to begin computations using Ada built-in parallelism?
> 
> 
> I'm not talking about parallelism with is more like OpenMP (shared memory)
> than MPI (distributed memory). I'm talking about Ada distributed annex.
OK... If you write your program for MPI, you must run it with the "mpirun"
command in an MPI-enabled environment. If you write your program for PVM,
you must run it with the "pvmrun" command in a PVM-enabled environment. If you
use the Ada Distributed Annex, there must be a command that dispatches tasks
and establishes communication between them, and the environment itself
must be configured for the Ada Distributed Annex.
On the 15-processor cluster I could pass a parameter, either MPI or PVM.
Funny thing: PVM was available but not properly configured, so you had
to use MPI.
On the 128-processor cluster all jobs were MPI tasks by default and I could
not find a way to change that. But even if I had, I'm pretty sure
the system was not configured for the Ada Distributed Annex.

> 
> 
>>I don't even know how to establish Ada-style communication between them.
> 
> That's automatic. A program is a set of partition (executable), each partition
> will communicate with others using the Ada built-in Partition Communication
> Subsystem.
That's automatic only if you run it in the proper way and the environment is
configured. For instance, if you run an MPI task with "pvmrun" or vice versa,
nothing will be done automatically.
How do you run Ada distributed programs? How are they dispatched to other
nodes (processors/computers)? Is there a command like "adarun" (analogous
to mpirun and pvmrun)?

> Then use the same simple binding for Ada. If the binding is simple you should
> be able to create the Ada counterpart without problem.
I'm afraid I don't understand this.


> 
> You did not answer my question, which OS are you using ?
Now I use Windows/Debian on my personal computer, both with Ada installed.
Previously:
15-node cluster - AIX
128-node cluster - AFAIR TurboLinux (no Ada installed, I had to install gnat
                                      in my home directory)
                    Now they have Debian on it.




* Re: Ada-friendly MPI
  2005-06-23 22:14           ` jtg
@ 2005-06-24  6:39             ` Pascal Obry
  2005-06-24 10:31               ` Ludovic Brenta
  2005-06-25  6:33               ` jtg
  0 siblings, 2 replies; 11+ messages in thread
From: Pascal Obry @ 2005-06-24  6:39 UTC (permalink / raw)



jtg <jtg77@poczta.onet.pl> writes:

> Errr... I should have said: job.

Ok.

> OK... If you write your program for MPI, you must run it with "mpirun"
> command in mpi-enabled environment. If you write your program for PVM,
> you must run it with "pvmrun" command in pvm-enabled environment. If you
> use Ada Distributed Annex, there must be a command that dispatches tasks
                                                                     ^^^^^
                                                                 job/partitions
> and establishes communication between them, and the environment itself
> must be configured for Ada Distribution Annex.

No. Job dispatching on the different nodes is handled by GLADE (the
implementation of Annex E for GNAT). In fact, gnatdist (the equivalent of
gnatmake for building a distributed program) can generate a script or an Ada main
that will run/dispatch the different jobs/partitions on the different nodes.

Then the boot partition (a module in one or more partitions) is used to
initialize the distributed program. As you see, MPI is far more low-level
than the Ada solution. And the good news is that you can run your Ada program
on a single computer (built with gnatmake) or in a distributed environment
(built with gnatdist) without changing a single line of Ada code.

Looks like magic, but it is not. It is, in my opinion, the right level of
abstraction, letting you concentrate on your problem instead of low-level stuff.
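To make this concrete, gnatdist takes a small configuration file that maps
packages to partitions and partitions to hosts. The sketch below is written
from memory of GLADE's configuration language, so take the exact syntax with
a grain of salt; the names (Simple, Server, Client, the host names) are all
hypothetical:

```ada
--  Hypothetical gnatdist configuration: two partitions, the server
--  package on one node, the client main on another.  gnatdist builds
--  both executables and can generate the startup script.
configuration Simple is

   P1 : Partition := (Server);          --  Server is an RCI package
   for P1'Host use "node1";

   P2 : Partition;
   for P2'Host use "node2";

   procedure Client is in P2;           --  main subprogram of P2
   for P2'Main use Client;

end Simple;
```

Running the generated starter then plays the role that mpirun plays for MPI:
it launches each partition on its node and lets the PCS wire them together.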

> That's automatic only if you run it in the proper way and the environment is
> configured. For instance if you run MPI task with "pvmrun" or vice versa
> nothing will be done automatic.

As I said, GLADE takes care of that.

> How do you run Ada distributed programs? How are they dispatched to other
> nodes (processors/computers)? Is there a command like "adarun" (analogous
> to mpirun and pvmrun)?

As described above this is done by a generated script (shell or Ada).

> > Then use the same simple binding for Ada. If the binding is simple you
> > should be able to create the Ada counterpart without problem.
> I'm afraid I don't understand this.

I mean you can always create a binding to the MPI API for Ada.

> > You did not answer my question, which OS are you using ?
> Now I use Windows/Debian on my personal computer, both with Ada installed.
> Previously:
> 15-node cluster - AIX
> 128-node cluster - AFAIR TurboLinux (no Ada installed, I had to install gnat
>                                       in my home directory)
>                     Now they have Debian on it.

So I don't see a problem there. GNU/Linux Debian is well supported by GNAT.
GNAT may already be on your system, as it is installed along with the other
GCC compilers.

Pascal.





* Re: Ada-friendly MPI
  2005-06-24  6:39             ` Pascal Obry
@ 2005-06-24 10:31               ` Ludovic Brenta
  2005-06-25  6:33               ` jtg
  1 sibling, 0 replies; 11+ messages in thread
From: Ludovic Brenta @ 2005-06-24 10:31 UTC (permalink / raw)


Pascal Obry wrote ...
> jtg writes:
> > > You did not answer my question, which OS are you using ?
> > Now I use Windows/Debian on my personal computer, both with Ada installed.
> > Previously:
> > 15-node cluster - AIX
> > 128-node cluster - AFAIR TurboLinux (no Ada installed, I had to install gnat
> >                                       in my home directory)
> >                     Now they have Debian on it.
> 
> So I don't see a problem there. GNU/Linux Debian is well supported by GNAT.
> GNAT is maybe already on your system as it is installed along with the other
> GCC's compilers.

That's not entirely true.  In sarge, there are 3 Ada compilers
(gnat, gnat-3.3 and gnat-3.4), only one of which (gnat) supports
GLADE.  No Ada compiler is installed by default, and GLADE
requires an extra package.  To get started with GLADE, you need:

apt-get install gnat gnat-glade

and optionally:

apt-get install gnat-doc gnat-glade-doc

But I agree with you on one count: Debian supports GNAT well :)

-- 
Ludovic Brenta.




* Re: Ada-friendly MPI
  2005-06-24  6:39             ` Pascal Obry
  2005-06-24 10:31               ` Ludovic Brenta
@ 2005-06-25  6:33               ` jtg
  1 sibling, 0 replies; 11+ messages in thread
From: jtg @ 2005-06-25  6:33 UTC (permalink / raw)


Pascal Obry wrote:
>>and establishes communication between them, and the environment itself
>>must be configured for Ada Distribution Annex.
> 
> 
> No. Job dispatching on the different nodes are handled by GLADE (the
> implementation of Annex-E for GNAT). In fact, gnatdist (the equivalent to
> gnatmake for building distributed program) can generate a script or Ada main
> that will run/dispatch the different job/partitions on the different nodes.
> 
I am still not convinced. All jobs must be submitted through the job
management system, and users must not use other software for job dispatching.
I think this dispute can be settled only by a test, and I have no access to
any supercomputer now.



