comp.lang.ada
* Distributed Systems Annex, data sharing between programs
@ 2012-06-04 18:49 Adam Beneschan
  2012-06-05  7:36 ` Maciej Sobczak
  0 siblings, 1 reply; 30+ messages in thread
From: Adam Beneschan @ 2012-06-04 18:49 UTC (permalink / raw)


I haven't had much need to look at the Distributed Systems Annex until recently.  I'm finding that the answer to a simple question is eluding me: Can the execution of a single partition be part of the executions of two or more Ada program executions (whether executions of the same program, or of different programs)?  To explain further: Suppose a program P WITH's a Remote_Call_Interface package R, and R has global data in its body, and R provides operations that modify and retrieve this data.  Suppose that R (and everything it needs) is put into its own partition, while the rest of P is put into another partition. Now, if the partition containing R is run just once, and the other partition is executed multiple times, do they share the same data in R's body?  That is, if one execution of P calls a routine that modifies the global data, and another execution calls a routine that retrieves it, will the second execution retrieve the data that was set by the first execution?  Similarly, suppose that two different programs WITH R, but the partition containing R is executed just once; do the two programs share the data in R's body?
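For concreteness, here is a minimal sketch of the setup described above (all names are hypothetical):

package R is
   pragma Remote_Call_Interface;
   procedure Set (Value : in Integer);
   function  Get return Integer;
end R;

package body R is
   --  Global data held in the body of the RCI unit.  The question is
   --  whether separate executions of the other partition(s) all see this
   --  single copy of Data.
   Data : Integer := 0;

   procedure Set (Value : in Integer) is
   begin
      Data := Value;
   end Set;

   function Get return Integer is
   begin
      return Data;
   end Get;
end R;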

My first impression is that the answer is "no".  The introduction to Annex E talks about "multiple partitions working cooperatively as part of a single Ada program" and other similar wording, which seems to preclude the possibility of partitions being part of more than one program.  Also, the note in E(7) says that the resulting execution is semantically equivalent if the program is partitioned differently; and since two executions of the same program obviously can't share data like this if the entire program is put into one partition, it wouldn't be semantically equivalent if they shared data when R is put in its own partition. Still, 10.2 is somewhat vague about the different ways partitions can be executed, and I'm confused by examples such as 
http://www.adacore.com/adaanswers/gems/gem-111-the-distributed-systems-annex-part-5-embedded-name-server
that appears to be set up so that multiple executions of the same "client" partition cause data to accumulate in a server partition. This strikes me as being outside of the paradigm defined by the RM--but is it, or am I just confused?

If my understanding is correct, my next question is: is there any language-defined support for sharing data between programs (besides file I/O), or does this require using an outside library for (say) socket communication or memory sharing? 

                                -- thanks, Adam




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-04 18:49 Distributed Systems Annex, data sharing between programs Adam Beneschan
@ 2012-06-05  7:36 ` Maciej Sobczak
  2012-06-05 16:02   ` Adam Beneschan
  0 siblings, 1 reply; 30+ messages in thread
From: Maciej Sobczak @ 2012-06-05  7:36 UTC (permalink / raw)


On 4 Cze, 20:49, Adam Beneschan <a...@irvine.com> wrote:

> I haven't had much need to look at the Distributed Systems Annex until recently.
[...]

These are very interesting observations.
I would like to add my two cents, as they relate in particular to this:

> Also, the note in E(7) says that the resulting execution is semantically equivalent if the program is partitioned differently;

DSA is intended to provide transparent "distributability", where
transparent means that the physicality of distribution is as hidden as
possible (ideally it should be undetectable). The problem that I see
is that Ada distinguishes potentially blocking operations from those
that never block, and allows only the latter in protected bodies.
Now, if we repartition the program in such a way that a given
operation (Do_Something_For_Me) ends up in a separate partition, it
might become potentially blocking by virtue of the hidden physicality
of the remote call. In short, it might become an I/O operation even
though that is not visible anywhere in the code. If such an operation
is used within a protected body, then the program can be legal or
illegal depending on the partitioning scheme, which is not expressed
in the code.

That is just contrary to what I understand as the spirit of Ada.

Things like this lead me to the conclusion that the DSA is essentially
broken and is not a proper way to handle distribution in a system.

> I'm confused by examples such as http://www.adacore.com/...

A worse observation is that AdaCore might be confused too, which only
confirms that the DSA is broken.

> If my understanding is correct, my next question is: is there any language-defined support for sharing data between programs

Do you expect it to be language-defined? Why? Ada is not the only
language (not even the only ISO-stamped one) and heterogeneity is a
natural property of systems composed of multiple programs. Defining
data sharing between programs written in the same language would be a
wasted effort (better spent elsewhere in the standard). Instead, allow
others (third-parties) to define it so that programs written in
different languages will be able to share data - this will have much
bigger benefits.

--
Maciej Sobczak * http://www.msobczak.com * http://www.inspirel.com




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-05  7:36 ` Maciej Sobczak
@ 2012-06-05 16:02   ` Adam Beneschan
  2012-06-05 18:35     ` tmoran
                       ` (2 more replies)
  0 siblings, 3 replies; 30+ messages in thread
From: Adam Beneschan @ 2012-06-05 16:02 UTC (permalink / raw)


On Tuesday, June 5, 2012 12:36:01 AM UTC-7, Maciej Sobczak wrote:

> > If my understanding is correct, my next question is: is there any language-defined support for sharing data between programs
> 
> Do you expect it to be language-defined? Why? Ada is not the only
> language (not even the only ISO-stamped one) and heterogeneity is a
> natural property of systems composed of multiple programs. Defining
> data sharing between programs written in the same language would be a
> wasted effort (better spent elsewhere in the standard). Instead, allow
> others (third-parties) to define it so that programs written in
> different languages will be able to share data - this will have much
> bigger benefits.

My reasoning here is that the DSA requires mechanisms to pass objects of Ada types between partitions (of the same program?) transparently, and also to share data in Shared_Passive partitions transparently, without users having to worry about the representation.  So it would seem like a natural extension to provide this sort of communication between "programs".  The mechanisms would already exist.  On the other hand, you have a point that writing a server that uses a feature like this would limit the programs that could use the service to those written in Ada.  I still think this would be useful in some situations--for example, if the server and clients were expected to be used within one company (or one division of a company), so that it might be reasonable to expect that all programs be written in a common language.
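For reference, the Shared_Passive mechanism mentioned above looks roughly like this (a minimal sketch; the package name is hypothetical):

package Shared_Data is
   pragma Shared_Passive;

   --  This variable lives in the passive partition to which the unit is
   --  assigned; every active partition that WITH's Shared_Data reads and
   --  writes the same copy.
   Count : Integer := 0;

end Shared_Data;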

                              -- Adam






* Re: Distributed Systems Annex, data sharing between programs
  2012-06-05 16:02   ` Adam Beneschan
@ 2012-06-05 18:35     ` tmoran
  2012-06-06  7:14     ` Jacob Sparre Andersen
  2012-06-06  7:39     ` Maciej Sobczak
  2 siblings, 0 replies; 30+ messages in thread
From: tmoran @ 2012-06-05 18:35 UTC (permalink / raw)


> .. between partitions (of the same program?) transparently, and also to share
> data in Shared_Passive partitions transparently, without users having to
> worry about the representation.  So it would seem like a natural extension
> to provide this sort of communication between "programs".  The mechanisms
> would already exist.

    If program A consists of Procedure Main_A and a separate partition
Server, while program B is Procedure Main_B and also a separate Server,
you can write a small program AB which simply starts up two tasks.  One of
those tasks calls Main_A and the other Main_B and there you have
effectively the two programs running as a single, multitasking, program.
Separate it into partitions and you have the original two programs running
as a distributed program.  (Server would of course have to be re-entrant
whether it's called by two tasks in a traditional single physical
partition, or by two different active partitions.)
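    A minimal sketch of that wrapper, assuming Main_A and Main_B are
available as library-level procedures (names hypothetical):

with Main_A;
with Main_B;

procedure AB is
   --  Each task runs one of the original main procedures; together they
   --  behave like a single multitasking program, which can later be
   --  split back into separate partitions.
   task Run_A;
   task Run_B;

   task body Run_A is
   begin
      Main_A;
   end Run_A;

   task body Run_B is
   begin
      Main_B;
   end Run_B;
begin
   null;  --  AB completes when both tasks have finished.
end AB;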

> On the other hand, you have a point that writing a server that uses a
> feature like this would limit the programs that could use the service to
> those written in Ada.
   Or any program with the needed Ada interface layer.

> .. now, if we repartition the program in such a way that a given
> operation (Do_Something_For_Me) ends up in a separate partition, it
> might become potentially blocking by virtue of the hidden physicality
> of the remote call. In short, it might become an I/O operation even though
> it is not visible anywhere. If such operation is used within a
> protected body, then the program can be legal or illegal depending on
> the partitioning scheme, which is not expressed in code.
    "All forms of remote subprogram calls are potentially blocking
operations."  So the final physical partitioning may decide whether a
particular call is actually blocking, but if it's a remote subprogram
call, even if the whole program is physically in one partition, then
the call is "potentially blocking" and thus illegal from a protected type.
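    A sketch of the situation under discussion, with hypothetical names;
the problematic part is the call in the protected body, whether or not
Remote_Ops ends up in a separate partition:

package Remote_Ops is
   pragma Remote_Call_Interface;
   procedure Do_Something_For_Me;
end Remote_Ops;

with Remote_Ops;
package Guard is
   protected Lock is
      procedure Update;
   end Lock;
end Guard;

package body Guard is
   protected body Lock is
      procedure Update is
      begin
         --  A remote subprogram call is a potentially blocking operation,
         --  so invoking it during a protected action is a bounded error
         --  (RM 9.5.1), regardless of the chosen partitioning.
         Remote_Ops.Do_Something_For_Me;
      end Update;
   end Lock;
end Guard;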




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-05 16:02   ` Adam Beneschan
  2012-06-05 18:35     ` tmoran
@ 2012-06-06  7:14     ` Jacob Sparre Andersen
  2012-06-06  7:39     ` Maciej Sobczak
  2 siblings, 0 replies; 30+ messages in thread
From: Jacob Sparre Andersen @ 2012-06-06  7:14 UTC (permalink / raw)


Adam Beneschan wrote:

> [...]  On the other hand, you have a point that writing a server that
> uses a feature like this would limit the programs that could use the
> service to those written in Ada.

... and compiled with the same version of the same Ada compiler.

Greetings,

Jacob
-- 
"It is very easy to get ridiculously confused about the
 tenses of time travel, but most things can be resolved
 by a sufficiently large ego."




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-05 16:02   ` Adam Beneschan
  2012-06-05 18:35     ` tmoran
  2012-06-06  7:14     ` Jacob Sparre Andersen
@ 2012-06-06  7:39     ` Maciej Sobczak
  2012-06-06  8:07       ` Dmitry A. Kazakov
  2012-06-06 10:09       ` Niklas Holsti
  2 siblings, 2 replies; 30+ messages in thread
From: Maciej Sobczak @ 2012-06-06  7:39 UTC (permalink / raw)


On 5 Cze, 18:02, Adam Beneschan <a...@irvine.com> wrote:

> I still think this would be useful in some situations--for example, if the server and clients were expected to be used within one company (or one division of a company), so that it might be reasonable to expect that all programs be written in a common language.

I'm afraid not even then. My work consists mostly of implementing
middleware solutions for multi-language (mostly C++/Java) systems that
are developed within the same division of a single company. I seem to
see this pattern more and more often, wherever I go.

Interestingly, even if you focus on programs written by the same
person (you cannot get more control than that, right?), there is still
no guarantee that they will be written in the same language, as
different languages have different features and tradeoffs that justify
their use in different contexts. Java-based GUI displays for Ada-based
backends that are configured by Python-based scripts, all using C++-
based databases? Better get used to that.

This is what makes single-language-distributed-systems solutions kind
of pointless.

--
Maciej Sobczak * http://www.msobczak.com * http://www.inspirel.com




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-06  7:39     ` Maciej Sobczak
@ 2012-06-06  8:07       ` Dmitry A. Kazakov
  2012-06-06 10:09       ` Niklas Holsti
  1 sibling, 0 replies; 30+ messages in thread
From: Dmitry A. Kazakov @ 2012-06-06  8:07 UTC (permalink / raw)


On Wed, 6 Jun 2012 00:39:18 -0700 (PDT), Maciej Sobczak wrote:

> This is what makes single-language-distributed-systems solutions kind
> of pointless.

True. The common denominator must be the middleware itself. The language
should provide an equivalent of the pragma Convention for objects and
operations living in the middleware. The interfaces of these things should
be publicly protected.

I agree with your assertion that the DSA concept is wrong. But it would be
very difficult to improve, because things are quite unsettled. Interfacing
the states of distributed objects is just the tip of the iceberg of the
services a middleware provides.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-06  7:39     ` Maciej Sobczak
  2012-06-06  8:07       ` Dmitry A. Kazakov
@ 2012-06-06 10:09       ` Niklas Holsti
  2012-06-06 11:40         ` Maciej Sobczak
  1 sibling, 1 reply; 30+ messages in thread
From: Niklas Holsti @ 2012-06-06 10:09 UTC (permalink / raw)


On 12-06-06 09:39 , Maciej Sobczak wrote:
> On 5 Cze, 18:02, Adam Beneschan<a...@irvine.com>  wrote:
>
>>   I still think this would be useful in some situations--for example, if the server and clients were expected to be used within one company (or one division of a company), so that it might be reasonable to expect that all programs be written in a common language.
>
> I'm afraid not even then. My work consists mostly of implementing
> middleware solutions for multi-language (mostly C++/Java) systems that
> are developed within the same division of a single company. I seem to
> see this pattern more and more often, wherever I go.
>
> Interestingly, even if you focus on programs written by the same
> person (you cannot get more control than that, right?), there is still
> no guarantee that they will be written in the same language, as
> different languages have different features and tradeoffs that justify
> their use in different contexts. Java-based GUI displays for Ada-based
> backends that are configured by Python-based scripts, all using C++-
> based databases? Better get used to that.
>
> This is what makes single-language-distributed-systems solutions kind
> of pointless.

The fact that *some* distributed systems are multi-language does not 
mean that a single-language solution is pointless. Insufficient, perhaps.

Multi-language distributed systems tend to be built with 
language-independent middleware. There is not much to discuss about 
them, from a language-centered point of view (as in c.l.a.). 
Language-specific bindings to IDLs come closest. In your experience, are 
IDLs like CORBA used today? Or just sockets with ad-hoc protocols?

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-06 10:09       ` Niklas Holsti
@ 2012-06-06 11:40         ` Maciej Sobczak
  2012-06-06 12:08           ` Dmitry A. Kazakov
                             ` (3 more replies)
  0 siblings, 4 replies; 30+ messages in thread
From: Maciej Sobczak @ 2012-06-06 11:40 UTC (permalink / raw)


On 6 Cze, 12:09, Niklas Holsti <niklas.hol...@tidorum.invalid> wrote:

> > This is what makes single-language-distributed-systems solutions kind
> > of pointless.
>
> The fact that *some* distributed systems are multi-language does not
> mean that a single-language solution is pointless. Insufficient, perhaps.

Single-language systems (A) are a subset of multi-language systems
(B). This means that if you need the B solution anyway (and you really
need it) and it solves problem A as well, then having a separate A
solution is pointless.

You might still ask for it for performance reasons (it is easier to
achieve good performance if you are not constrained by artificial
common denominators), but this is a luxury that can be afforded only
when you have the other burning problems sorted out already.
Unfortunately, this is not the case for Ada, and that is why I argue
that this effort is better spent elsewhere.

> In your experience, are
> IDLs like CORBA used today? Or just sockets with ad-hoc protocols?

I think that plain sockets are not very widely used by mature
development teams; instead, some ready-made protocols are adopted with
varying degrees of completeness. CORBA is complete but huge and it
seems to be less and less popular, but there are plenty of other
solutions somewhere in this spectrum.
One of the small, but potentially useful things I have found recently
is this:

http://msgpack.org/

It is not complete in the sense that it covers only the serialization
part (and even there it has many limitations), but for a number of
problems it seems like a nice and easy, off-hand solution. Yes, the Ada
binding is missing there.

And of course, I will take the opportunity to shamelessly
mention this:

http://inspirel.com/yami4/

--
Maciej Sobczak * http://www.msobczak.com * http://www.inspirel.com




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-06 11:40         ` Maciej Sobczak
@ 2012-06-06 12:08           ` Dmitry A. Kazakov
  2012-06-06 19:17           ` Simon Wright
                             ` (2 subsequent siblings)
  3 siblings, 0 replies; 30+ messages in thread
From: Dmitry A. Kazakov @ 2012-06-06 12:08 UTC (permalink / raw)


On Wed, 6 Jun 2012 04:40:41 -0700 (PDT), Maciej Sobczak wrote:

> On 6 Cze, 12:09, Niklas Holsti <niklas.hol...@tidorum.invalid> wrote:
> 
>>> This is what makes single-language-distributed-systems solutions kind
>>> of pointless.
>>
>> The fact that *some* distributed systems are multi-language does not
>> mean that a single-language solution is pointless. Insufficient, perhaps.
> 
> Single-language systems (A) are a subset of multi-language systems
> (B).

Code distribution is only one role of the middleware, at least in process
automation. Integration of components and devices is usually more
important. It is not even B; the other side might not be a programmable
system at all, and is almost always just a black box.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-06 11:40         ` Maciej Sobczak
  2012-06-06 12:08           ` Dmitry A. Kazakov
@ 2012-06-06 19:17           ` Simon Wright
  2012-06-08 11:38             ` Peter C. Chapin
  2012-06-06 20:02           ` Niklas Holsti
  2012-06-07  0:55           ` BrianG
  3 siblings, 1 reply; 30+ messages in thread
From: Simon Wright @ 2012-06-06 19:17 UTC (permalink / raw)


Maciej Sobczak <see.my.homepage@gmail.com> writes:

> One of the small, but potentially useful things I have found recently
> is this:
>
> http://msgpack.org/
>
> It is not complete in the sense that it covers only the serialization
> part (and even there has many limitations), but for a number of
> problems seems like a nice and easy, off-hand solution. Yes, the Ada
> binding is missing there.

I'm thinking I need a project; would anyone be interested in an Ada
binding?




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-06 11:40         ` Maciej Sobczak
  2012-06-06 12:08           ` Dmitry A. Kazakov
  2012-06-06 19:17           ` Simon Wright
@ 2012-06-06 20:02           ` Niklas Holsti
  2012-06-07 10:37             ` Maciej Sobczak
  2012-06-07  0:55           ` BrianG
  3 siblings, 1 reply; 30+ messages in thread
From: Niklas Holsti @ 2012-06-06 20:02 UTC (permalink / raw)


On 12-06-06 13:40 , Maciej Sobczak wrote:
> On 6 Cze, 12:09, Niklas Holsti<niklas.hol...@tidorum.invalid>  wrote:
>
>>> This is what makes single-language-distributed-systems solutions kind
>>> of pointless.
>>
>> The fact that *some* distributed systems are multi-language does not
>> mean that a single-language solution is pointless. Insufficient, perhaps.
>
> Single-language systems (A) are a subset of multi-language systems
> (B). This means that if you need the B solution anyway (and you really
> need it) and it solve problem A as well, then having a separate A
> solution is pointless.

They don't solve the "same" problem, or not equally well. A 
single-language solution can offer better integration, more compile-time 
checks, and less effort on middleware tools (less effort by the 
application developer, at least).

I don't agree that the Ada DSA is "pointless", although it is not used 
in as many applications as the multi-language interfaces.

> You might still ask for it for performance reasons (it is easier to
> achieve good performance if you are not constrained by artificial
> common denominators),

It can also offer a better degree of integration, safety, and logical consistency.

> but this is a luxury that can be afforded only
> when you have other burning problems sorted out already.
> Unfortunately, this is not the case for Ada and that is why I argue
> that this is an effort that is better spent elsewhere.

Is a lot of effort (still) needed for the DSA? Perhaps it is, since only 
AdaCore supports it -- or are there others?

>> In your experience, are
>> IDLs like CORBA used today? Or just sockets with ad-hoc protocols?
>
> I think that plain sockets are not very widely used by mature
> development teams and instead some ready protocols are adopted with
> varying degrees of completeness. CORBA is complete but huge and it
> seems to be less and less popular, but there are plenty of other
> solutions somewhere in this spectrum.
> One of the small, but potentially useful things I have found recently
> is this:
>
> http://msgpack.org/
>
> It is not complete in the sense that it covers only the serialization
> part (and even there has many limitations), but for a number of
> problems seems like a nice and easy, off-hand solution. Yes, the Ada
> binding is missing there.

MessagePack looks potentially useful, but I also think it illustrates 
the drawbacks of a multi-language middleware compared to the Ada DSA, 
such as a limitation to low-level "physical" types.

> And of course, I will benefit from the opportunity to shamelessly
> mention this:
>
> http://inspirel.com/yami4/

Nice tutorial video. But here, too, I get the impression of working one 
layer below the DSA level, mostly. Usable, and good for cross-language 
work, but still missing some of the high-level integrity of the DSA level.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-06 11:40         ` Maciej Sobczak
                             ` (2 preceding siblings ...)
  2012-06-06 20:02           ` Niklas Holsti
@ 2012-06-07  0:55           ` BrianG
  3 siblings, 0 replies; 30+ messages in thread
From: BrianG @ 2012-06-07  0:55 UTC (permalink / raw)


On 06/06/2012 07:40 AM, Maciej Sobczak wrote:
> On 6 Cze, 12:09, Niklas Holsti<niklas.hol...@tidorum.invalid>  wrote:
>
>>> This is what makes single-language-distributed-systems solutions kind
>>> of pointless.
>>
>> The fact that *some* distributed systems are multi-language does not
>> mean that a single-language solution is pointless. Insufficient, perhaps.
>
> Single-language systems (A) are a subset of multi-language systems
> (B). This means that if you need the B solution anyway (and you really
> need it) and it solve problem A as well, then having a separate A
> solution is pointless.
>
> You might still ask for it for performance reasons...

You might also ask for it for other reasons, since there are other 
reasons behind Ada, such as maintainability.

-- 
---
BrianG
000
@[Google's email domain]
.com




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-06 20:02           ` Niklas Holsti
@ 2012-06-07 10:37             ` Maciej Sobczak
  2012-06-08  2:11               ` Shark8
  0 siblings, 1 reply; 30+ messages in thread
From: Maciej Sobczak @ 2012-06-07 10:37 UTC (permalink / raw)


On 6 Cze, 22:02, Niklas Holsti <niklas.hol...@tidorum.invalid> wrote:

> MessagePack looks potentially useful, but I also think it illustrates
> the drawbacks of a multi-language middleware compared to the Ada DSA,
> such as a limitation to low-level "physical" types.

There is no limitation to low-level types, as MessagePack supports
user-defined types as well. Of course, they all have to be based on some
set of fundamental types, but this is also true in Ada.
I can imagine an ASIS->MessagePack serializer generator.

> >http://inspirel.com/yami4/
>
> Nice tutorial video. But here, too, I get the impression of working one
> layer below the DSA level, mostly.

No, it is neither below nor above. YAMI4 is an asynchronous messaging
system and as such has no direct analogy in a synchronous RPC system
such as the DSA. At the very high conceptual level both can be used to
"communicate" (and this is where they compete), but from the paradigm
point of view they offer different sets of patterns and idioms (and
this is where they don't compete).
In other words, some problems have very direct solutions with the DSA,
due to its synchronous nature and integration with the language, but
some other problems have easier solutions with YAMI4. This means that
they cannot be compared in terms of being higher- or lower-level.

--
Maciej Sobczak * http://www.msobczak.com * http://www.inspirel.com




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-07 10:37             ` Maciej Sobczak
@ 2012-06-08  2:11               ` Shark8
  2012-06-08  6:31                 ` Pascal Obry
  2012-06-08 21:26                 ` Maciej Sobczak
  0 siblings, 2 replies; 30+ messages in thread
From: Shark8 @ 2012-06-08  2:11 UTC (permalink / raw)


On Thursday, June 7, 2012 5:37:43 AM UTC-5, Maciej Sobczak wrote:
> In other words, some problems have very direct solutions with DSA, due
> to its synchronous nature and integration with the language, but some
> other problems have easier solutions with YAMI4.

Could you give examples of what sorts of tasks are better-suited/easier in DSA and YAMI4?




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-08  2:11               ` Shark8
@ 2012-06-08  6:31                 ` Pascal Obry
  2012-06-08 21:26                 ` Maciej Sobczak
  1 sibling, 0 replies; 30+ messages in thread
From: Pascal Obry @ 2012-06-08  6:31 UTC (permalink / raw)



Shark8,

> Could you give examples of what sorts of tasks are better-suited/easier in DSA and YAMI4?

with Ada.Containers.Vectors;
package Flt_Vector is new Ada.Containers.Vectors (Positive, Float);
pragma Remote_Types (Flt_Vector);

with Flt_Vector; use Flt_Vector;

package A is
   pragma Remote_Call_Interface;

   procedure Call (V : in Flt_Vector.Vector);
end A;


Then using this from another partition across the network:

with A;
with Flt_Vector;

procedure Main is
   V : Flt_Vector.Vector;
begin
   V.Append (1.0);
   V.Append (4.5);
   A.Call (V);
end Main;

That is, the vector is passed across the network, and this just looks like
a procedure call in Ada. This is possible because the distributed
support is directly in Ada. With "foreign" distributed support (MPI for
example) you'll only get support for basic types. And of course, what is
done with Ada.Containers.Vectors is not magic: you can define your own
type and have serialization by specifying the 'Write and 'Read attributes
for the type.
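A minimal sketch of that last point, with hypothetical names; the stream
attributes are what give the compiler the marshalling it needs when values
of the type cross partitions (the bodies of Read and Write go in the
package body):

with Ada.Streams;
package Sensor_Data is
   pragma Remote_Types;

   type Reading is private;

private
   type Reading is record
      Id    : Natural := 0;
      Value : Float   := 0.0;
   end record;

   --  User-defined marshalling for Reading.
   procedure Read
     (Stream : not null access Ada.Streams.Root_Stream_Type'Class;
      Item   : out Reading);
   procedure Write
     (Stream : not null access Ada.Streams.Root_Stream_Type'Class;
      Item   : in Reading);

   for Reading'Read  use Read;
   for Reading'Write use Write;
end Sensor_Data;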

Another very good point with the DSA is that the application can be
partitioned differently. With the same code base, you just provide multiple
partitionings (different xyz.cfg files passed to po_gnatdist). Where the
partition communication is hard-coded with MPI, with the DSA you can create
a new way to partition your application in minutes and rebuild everything
without changing a line of code.

And if the code actually is in the same partition, nothing is serialized;
the call is direct, as for a non-distributed application. This gives the
best performance for any partitioning.

Frankly I have used the DSA (I have also used MPI and RMI) and the Ada
solution is way better than any alternative I have used.

Note finally that AdaCore's DSA support is implemented via PolyORB. This
means that you can communicate with your DSA application using CORBA or
SOAP, for example. So you are not stuck with a proprietary protocol if you
need to open your application to other languages.

Pascal.

-- 

--|------------------------------------------------------
--| Pascal Obry                           Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--|    http://www.obry.net  -  http://v2p.fr.eu.org
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver keys.gnupg.net --recv-key F949BD3B





* Re: Distributed Systems Annex, data sharing between programs
  2012-06-06 19:17           ` Simon Wright
@ 2012-06-08 11:38             ` Peter C. Chapin
  2012-06-08 16:29               ` Simon Wright
  0 siblings, 1 reply; 30+ messages in thread
From: Peter C. Chapin @ 2012-06-08 11:38 UTC (permalink / raw)


On 2012-06-06 15:17, Simon Wright wrote:

>> http://msgpack.org/
>>
>> It is not complete in the sense that it covers only the serialization
>> part (and even there has many limitations), but for a number of
>> problems seems like a nice and easy, off-hand solution. Yes, the Ada
>> binding is missing there.
>
> I'm thinking I need a project; would anyone be interested in an Ada
> binding?

One middleware system I've used before is Ice (http://www.zeroc.com). 
I'd love to see an Ada binding to it.

Peter




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-08 11:38             ` Peter C. Chapin
@ 2012-06-08 16:29               ` Simon Wright
  0 siblings, 0 replies; 30+ messages in thread
From: Simon Wright @ 2012-06-08 16:29 UTC (permalink / raw)


"Peter C. Chapin" <PChapin@vtc.vsc.edu> writes:

> On 2012-06-06 15:17, Simon Wright wrote:
>
>>> http://msgpack.org/
>>>
>>> It is not complete in the sense that it covers only the serialization
>>> part (and even there has many limitations), but for a number of
>>> problems seems like a nice and easy, off-hand solution. Yes, the Ada
>>> binding is missing there.
>>
>> I'm thinking I need a project; would anyone be interested in an Ada
>> binding?
>
> One middleware system I've used before is Ice
> (http://www.zeroc.com). I'd love to see an Ada binding to it.

That looks like a large-ish project. As of Feb 2011 there was no
documentation on how to approach such a project[1].

MessagePack seems to be similarly lacking. There's a singular lack of
comments in the source, I guess it's all obvious if you're a Ruby
guru. But at least the object format is defined!

[1] http://www.zeroc.com/forums/projects/5267-d-language-bindings.html




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-08  2:11               ` Shark8
  2012-06-08  6:31                 ` Pascal Obry
@ 2012-06-08 21:26                 ` Maciej Sobczak
  2012-06-09  1:10                   ` tmoran
                                     ` (3 more replies)
  1 sibling, 4 replies; 30+ messages in thread
From: Maciej Sobczak @ 2012-06-08 21:26 UTC (permalink / raw)


On 8 Cze, 04:11, Shark8 <onewingedsh...@gmail.com> wrote:

> Could you give examples of what sorts of tasks are better-suited/easier in DSA and YAMI4?

Pascal already provided an example where the DSA is attractive due to its
integration with the language. Note the pattern: Ada is a synchronous
sequential[*] language, so if your distributed problem is of the same
nature, then the DSA solution will look natural.

[*] this means that your problem can be described as a sequence of
steps where each step has to finish before the next step begins.

Now, the contrary example.

You have a system with several thousand machines and you want to send
them a new configuration or something. You know the locations of those
systems, but the little problem is that at this scale some of them
are not working at all, some are hanging, and some are overloaded and
therefore process everything very slowly. And you cannot do much about
it. And you have to send them the new data within a very short time
frame. How short? Let's say several seconds.
The synchronous-sequential solution has a loop over the targets and a
remote call of the kind that Pascal has shown. This does not work. It
would not work even if all the targets were in perfect shape.
A naive extension of this solution involves creating some tasks to
make things go in parallel. This is bad design, as the parallelism
already exists in huge amounts - remember, there are thousands of
machines over there, right? So why do we have to create tasks locally,
if the parallel resources are elsewhere? OK, let's shove this
conceptual issue under the carpet, but then - how many tasks should we
create? This is where the DSA (or CORBA or Ice or similar solutions from
the same paradigm) proves to be actually very low-level instead of
high-level and forces us to think in terms that are very distant from
the actual problem at hand.

The solution is to stop thinking in terms of RPC (remote procedure
calls), and start thinking in terms of messages. If you lift the
concept of the message to the level of a design entity (yes,
distribution is *not* transparent and it *cannot* be an afterthought),
then some cool things are possible - just create thousands of messages
and let them go. No need to create additional tasks and no need to
wait for consecutive steps, as everything can happen in parallel
naturally. The original problem becomes feasible even in the presence
of partial failures, because separate messages do not have to share
each other's delays and individual communication problems.

Another example of this paradigm shift is a trading system, where you
want to inform all interested participants about a price change. Say,
there is a new price of the IBM share and you want to send it to
everybody. The difference from the above example is that this time you
don't really care (and therefore don't need to know) who is
interested. But you still have to publish this data somehow.
The DSA (CORBA/ICE/etc.) solution is lots of coding. Lots of low-level
code that has very little connection to the original, high-level
problem ("publish new price").
The messaging system allows you to solve it differently, exactly thanks to
the fact that a message becomes a design-level entity. Create a new
message and give it to the broker - it will take care of it.
Hey, if the message is a first-class entity, then you might try to do
even more funny things with it - why not store it? Why not "record"
the stream of messages and "replay" them later? Why not attach some
security-related stuff to it, like digital signatures or access
control lists? Or maybe route tracing? This is just the tip of the
iceberg.

YAMI4 is a messaging system, where the programmer deals with messages.
This is the fundamental difference from DSA, where the programmer
deals with calls. Ironically, the lack of language integration is both
a disadvantage and a big advantage of messaging systems - the
advantage being that if there is no integration, then the language
rules do not limit what you can do.

More here:

http://www.inspirel.com/articles/RPC_vs_Messaging.html

Also, in the last section titled "Message-oriented middleware":

http://www.inspirel.com/articles/Types_Of_Middleware.html

--
Maciej Sobczak * http://www.msobczak.com * http://www.inspirel.com




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-08 21:26                 ` Maciej Sobczak
@ 2012-06-09  1:10                   ` tmoran
  2012-06-09 12:02                     ` Maciej Sobczak
  2012-06-09  6:59                   ` Dmitry A. Kazakov
                                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 30+ messages in thread
From: tmoran @ 2012-06-09  1:10 UTC (permalink / raw)


>The solution is to stop thinking in terms of RPC (remote procedure
>calls), and start thinking in terms of messages.
  How about asynchronous remote procedure calls?  They just
fire and forget the parameters/message to a remote procedure.  Or in the
example of possibly dead targets, use synchronous RPC that handles
Communication_Error.
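  A sketch of that second suggestion for a single hypothetical target
(Config_Service is an assumed RCI unit on the target); a dead or
unreachable target surfaces as System.RPC.Communication_Error, which
the caller can log and move past:

with System.RPC;
with Config_Service;   --  hypothetical Remote_Call_Interface package
with Ada.Text_IO;

procedure Push_Config is
begin
   begin
      Config_Service.Set_Configuration ("new settings");
   exception
      when System.RPC.Communication_Error =>
         --  The target is down or unreachable; record the failure and
         --  carry on with the remaining targets.
         Ada.Text_IO.Put_Line ("target unreachable, will retry later");
   end;
end Push_Config;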
  But setting up the partition mapping for thousands of partitions might
indeed be inappropriate.  ;)




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-08 21:26                 ` Maciej Sobczak
  2012-06-09  1:10                   ` tmoran
@ 2012-06-09  6:59                   ` Dmitry A. Kazakov
  2012-06-13 10:55                     ` Marius Amado-Alves
  2012-06-09  9:59                   ` Pascal Obry
  2012-06-09 15:14                   ` Robert A Duff
  3 siblings, 1 reply; 30+ messages in thread
From: Dmitry A. Kazakov @ 2012-06-09  6:59 UTC (permalink / raw)


On Fri, 8 Jun 2012 14:26:55 -0700 (PDT), Maciej Sobczak wrote:

> Now, the contrary example.
> 
> You have a system with several thousand machines and you want to send
> them new configuration or something. You know the locations of those
> systems, but the little problem is that with this scale some of them
> are not working at all, some are hanging and some are overloaded and
> therefore process everything very slowly.

In our case the locations are unknown. The middleware supports discovery
and identification.

Another typical case in process automation is when there are many
thousands of process variables distributed over not so many hosts. The
variables are not all well known in advance, and some may be unavailable
at times. Others are optional and created dynamically on request.

Anyway, synchronous communication is not an option in any realistic process
automation setup, even a real-time one. It is not worth considering,
especially because some data distribution layers are logically and
physically unidirectional, with various 1-to-N schemes.

Our middleware supports synchronous calls, but they are implemented on top
of an asynchronous layer as send/request + wait.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-08 21:26                 ` Maciej Sobczak
  2012-06-09  1:10                   ` tmoran
  2012-06-09  6:59                   ` Dmitry A. Kazakov
@ 2012-06-09  9:59                   ` Pascal Obry
  2012-06-09 15:14                   ` Robert A Duff
  3 siblings, 0 replies; 30+ messages in thread
From: Pascal Obry @ 2012-06-09  9:59 UTC (permalink / raw)
  To: Maciej Sobczak


Maciej,

> The solution is to stop thinking in terms of RPC (remote procedure

Right.

> calls), and start thinking in terms of messages. If you lift the
> concept of the message to the level of a design entity (yes,
> distribution is *not* transparent and it *cannot* be an afterthought),
> then some cool things are possible - just create thousands of messages
> and let them go. No need to create additional tasks and no need to
> wait for consecutive steps, as everything can happen in parallel
> naturally. The original problem becomes feasible even in the presence
> of partial failures, because separate message do not have to see their
> delays and their individual communication problems.

I don't agree; I've done that.

One partition keeps a list of "jobs" (what you call messages) in a
protected object.

   package Job is
      pragma Remote_Types;

      type Object is tagged...

   package Job_Queue is
      pragma Remote_Call_Interface;

      function Get return Job.Object;

      ...


And then each remote partition gets a new job when needed. This ensures
proper use of the resources, as it naturally gives good load balancing. The
speed of the remote machine or network is not an issue in this design.
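A sketch of the worker side under that scheme (Process is a hypothetical
operation declared for Job.Object):

with Job;
with Job_Queue;

procedure Worker is
   Current : Job.Object;
begin
   loop
      --  Each call goes to the partition holding the queue; slow and
      --  fast workers simply pull jobs at their own pace.
      Current := Job_Queue.Get;
      Job.Process (Current);   --  hypothetical operation on Job.Object
   end loop;
end Worker;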

Again, I find the Ada solution really good.

The only point is that the jobs above must be large-grained ones. But
this is true for every distributed application, where the cost of the
network transfer must not negate the computing speed-up.

Also note that with the design above you don't have to go distributed
only. You can also create some tasks in a single application to process
multiple jobs on the same machine (which is surely multicore); this also
eases debugging.

Pascal.

-- 

--|------------------------------------------------------
--| Pascal Obry                           Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--|    http://www.obry.net  -  http://v2p.fr.eu.org
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver keys.gnupg.net --recv-key F949BD3B





* Re: Distributed Systems Annex, data sharing between programs
  2012-06-09  1:10                   ` tmoran
@ 2012-06-09 12:02                     ` Maciej Sobczak
  2012-06-09 12:25                       ` Pascal Obry
  0 siblings, 1 reply; 30+ messages in thread
From: Maciej Sobczak @ 2012-06-09 12:02 UTC (permalink / raw)


On 9 Cze, 03:10, tmo...@acm.org wrote:
> >The solution is to stop thinking in terms of RPC (remote procedure
> >calls), and start thinking in terms of messages.
>
>   How about asynchronous remote procedure calls.  They just
> fire-and-forget

Exactly. The difference is that with messages you do not have to
*forget*.
Firing and forgetting is easy - but firing and getting results later
on or just checking what has succeeded and what has not is an entirely
different use case. Messaging provides a valid solution to this
problem, while RPC not so much.

> Or in the
> example of possibly dead targets, use synchronous RPC that handles
> Communication_Error.

That can take several hundred milliseconds (or even many seconds) for
a failing call. Remember - you have *thousands* of targets.

>   But setting up the partition mapping for thousands of partitions might
> indeed be inappropriate.  ;)

Yep. The point is - this is no longer a single program. Just as our
communication here is not between separate partitions of a single
human entity, but rather between separate and autonomous entities.
Look, we don't even need to be awake at the same time for this
communication to happen. Why? How would you do that with "calls"
instead of "messages"?
Our posts here are messages, not calls.

--
Maciej Sobczak * http://www.msobczak.com * http://www.inspirel.com




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-09 12:02                     ` Maciej Sobczak
@ 2012-06-09 12:25                       ` Pascal Obry
  2012-06-09 20:29                         ` Maciej Sobczak
  0 siblings, 1 reply; 30+ messages in thread
From: Pascal Obry @ 2012-06-09 12:25 UTC (permalink / raw)



Maciej,

> Yep. The point is - this is no longer a single program. Just as our
> communication here is not between separate partitions of a single
> human entity, but rather between separate and autonomous entities.
> Look, we don't even need to be awake at the same time for this
> communication to happen. Why? How would you do that with "calls"
> instead of "messages"?

As I said in my previous message: with a queue. I have used this scheme,
and it works perfectly well. You can even launch new message consumers
dynamically or stop some. You can also start new message producers, and
of course, to avoid bottlenecks, you can have multiple queues on the network.

All this is just easy with the Ada DSA, and safe, as with all other Ada features.

Pascal.

-- 

--|------------------------------------------------------
--| Pascal Obry                           Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--|    http://www.obry.net  -  http://v2p.fr.eu.org
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver keys.gnupg.net --recv-key F949BD3B





* Re: Distributed Systems Annex, data sharing between programs
  2012-06-08 21:26                 ` Maciej Sobczak
                                     ` (2 preceding siblings ...)
  2012-06-09  9:59                   ` Pascal Obry
@ 2012-06-09 15:14                   ` Robert A Duff
  2012-06-09 20:40                     ` Maciej Sobczak
  3 siblings, 1 reply; 30+ messages in thread
From: Robert A Duff @ 2012-06-09 15:14 UTC (permalink / raw)


Maciej Sobczak <see.my.homepage@gmail.com> writes:

> Now, the contrary example.
>
> You have a system with several thousand machines and you want to send
> them new configuration or something.

Do you know about pragma Asynchronous (aspect Asynchronous in Ada 2012)?
It causes an "RPC" to behave like a message rather than a call
(so it's not really an "RPC" anymore).

So with DSA, you can loop through those thousand machines,
and do an asynchronous "call" to each.  This just sends
a thousand messages -- no waiting for replies, no dealing
with failed nodes, etc.
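A minimal sketch of what that looks like (names are hypothetical); the
procedure must have only 'in' parameters, and a call to it returns
without waiting for the remote side:

package Config_Push is
   pragma Remote_Call_Interface;

   --  Asynchronous: the call returns as soon as the request has been
   --  sent; results and remote exceptions are not reported back.
   procedure Set_Configuration (Settings : in String);
   pragma Asynchronous (Set_Configuration);
end Config_Push;

A caller in another partition invokes Config_Push.Set_Configuration and
continues immediately.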

- Bob




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-09 12:25                       ` Pascal Obry
@ 2012-06-09 20:29                         ` Maciej Sobczak
  0 siblings, 0 replies; 30+ messages in thread
From: Maciej Sobczak @ 2012-06-09 20:29 UTC (permalink / raw)


On 9 Cze, 14:25, Pascal Obry <pas...@obry.net> wrote:

> > How would you do that with "calls"
> > instead of "messages"?
>
> As said in my previous message. With a queue. I have used this scheme,
> it works perfectly well.

Yes - you have just implemented a message-oriented middleware on top
of some low-level primitives known as "calls". If you have a queue,
then you have some items in there - these are exactly the messages
lifted to the level of design entities, as I've described earlier. A
push-pull queue was easy to implement on top of RPC, but what about
push-push queues? What about N:M publish-subscribe? This is where
messaging appears to be a higher-level concept.

Actually, I'm pretty convinced that RPC and messaging are
complementary and can be used to implement one another. As Dmitry
said, RPC can be achieved by send+wait on a message and as you say,
passing messages around is actually executing possibly remote calls on
some intermediary entities like queues or brokers. The question is in
which direction you will have to do more coding (and have more impedance
mismatch) to get from one paradigm to the other.

> You can even launch new message consumers
> dynamically or stop some.

Uhm... remember the original post in this thread, by Adam? It is not
actually clear whether this is kosher or not. ;-)

--
Maciej Sobczak * http://www.msobczak.com * http://www.inspirel.com




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-09 15:14                   ` Robert A Duff
@ 2012-06-09 20:40                     ` Maciej Sobczak
  0 siblings, 0 replies; 30+ messages in thread
From: Maciej Sobczak @ 2012-06-09 20:40 UTC (permalink / raw)


On 9 Cze, 17:14, Robert A Duff <bobd...@shell01.TheWorld.com> wrote:

> Do you know about pragma Asynchronous (aspect Asynchronous in Ada 2012)?
> It causes an "RPC" to behave like a message rather than a call

Not really... It is still a call, just without the second part of it.

> So with DSA, you can loop through those thousand machines,
> and do an asynchronous "call" to each.  This just sends
> a thousand messages -- no waiting for replies, no dealing
> with failed nodes, etc.

I want to know which have failed. And I want to get responses or
rejections from those targets that got the message.

In YAMI4 I would create a container of message objects and send them
in a loop. Sending in YAMI4 is asynchronous and therefore non-
blocking, so the loop will execute in a fraction of a second. The
messages will get queued for transmission, but since their
destinations are distinct, they will be transmitted over independent
channels. The internal mechanics are implemented in a way that allows
multiple messages to be in various stages of processing, and this is
what will allow them to be physically transmitted in parallel.
Coming back to my application code - after the loop I would do another
loop over the same container of messages, gathering the results,
statuses, etc. I can do that because I have a message that is a
tangible entity and that still allows me to interact with it. This is
what "asynchronous RPC" (that's an oxymoron) cannot do, exactly due to
the integration with a language that is sequential in nature.

--
Maciej Sobczak * http://www.msobczak.com * http://www.inspirel.com




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-09  6:59                   ` Dmitry A. Kazakov
@ 2012-06-13 10:55                     ` Marius Amado-Alves
  2012-06-13 13:26                       ` Dmitry A. Kazakov
  2012-06-14 20:29                       ` tmoran
  0 siblings, 2 replies; 30+ messages in thread
From: Marius Amado-Alves @ 2012-06-13 10:55 UTC (permalink / raw)


> Our middleware supports synchronous calls, but they are implemented on top
> of an asynchronous layer as send/request + wait. (Kazakov)

(Secret technology?)

This thread is fascinating.

Make RPC based on messages (Kazakov and others), or vice versa (Obry and others)?

FWIW, I tend towards the latter. It's clearly the Ada way:

Ada offers asynchronous control already. Chapter 9. So it's just a matter of wrapping the RPC with the desired asynchronous logic (a task, the timeout idiom, ATC...)
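For example, a minimal sketch of the timeout idiom around a remote call (Config_Service.Set_Configuration stands for some RCI operation; the name is hypothetical) -- plain Chapter 9 Ada wrapped around an Annex E call:

with Ada.Text_IO;
with Config_Service;   --  hypothetical Remote_Call_Interface package

procedure Call_With_Timeout is
begin
   select
      delay 5.0;
      --  Give up waiting locally; the request may still reach the target.
      Ada.Text_IO.Put_Line ("remote call timed out");
   then abort
      Config_Service.Set_Configuration ("new settings");
   end select;
end Call_With_Timeout;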

Distribution and synchronicity are orthogonal dimensions. Probably Ada has been designed with this in mind. Distribution => Annex E. Synchronicity => chapter 9.

Yes, an implementation of Annex E may be based on messages, e.g. sockets, and therefore asynchronous RPCs in Ada (let the oxymoron pass this time) will compile to an asynchronous-synchronous-asynchronous tower. Maybe there is a performance penalty here. Maybe for some projects climbing this tower would take too long, and so they use a message protocol directly for the message abstraction. I cannot think of any other reason to depart from the Ada way.

(Is it really true that PolyORB is the only Annex E implementation that exists? Or just the only libre one?)




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-13 10:55                     ` Marius Amado-Alves
@ 2012-06-13 13:26                       ` Dmitry A. Kazakov
  2012-06-14 20:29                       ` tmoran
  1 sibling, 0 replies; 30+ messages in thread
From: Dmitry A. Kazakov @ 2012-06-13 13:26 UTC (permalink / raw)


On Wed, 13 Jun 2012 03:55:47 -0700 (PDT), Marius Amado-Alves wrote:

>> Our middleware supports synchronous calls, but they are implemented on top
>> of an asynchronous layer as send/request + wait. (Kasakov)
> 
> (Secret technology?)

Just proprietary

> This thread is fascinating.
> 
> Make RPC based on messages (Kasakov and others) or vice versa (Obry and others)?

No, our middleware is not message-oriented and generally not client-server.
It uses process variables instead and has the topology of a bus. (Messaging
could be a transport.)
 
> FWIW, I tend towards the latter. It's clearly the Ada way:
> 
> Ada offers asyncronous control already.

(which most likely does not work the way a naive reader would expect)

> Distribution and synchronicity are orthogonal dimensions.

Yes.

With distribution come issues like data quality and other attributes of
values that are missing or handled differently in monolithic, coherent
applications. There you would simply handle poor quality as exceptional
(or even as a bug), raising Data_Error or Constraint_Error. In a
distributed application you usually have competing sources of data and
errors related to the availability of those sources. You would rather
store quality, timestamp, etc. as additional attributes instead of raising
exceptions, which cannot be handled meaningfully right here and now. It is
a different mindset.
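A sketch of the kind of representation that mindset leads to (all names
are hypothetical):

with Ada.Calendar;

package Process_Variables is

   type Quality_Kind is (Good, Uncertain, Bad, Unavailable);

   --  Instead of raising an exception when a source is missing or stale,
   --  the value carries its own quality and timestamp attributes, to be
   --  interpreted wherever the value is actually used.
   type Process_Value is record
      Value     : Long_Float        := 0.0;
      Quality   : Quality_Kind      := Unavailable;
      Timestamp : Ada.Calendar.Time := Ada.Calendar.Clock;
   end record;

end Process_Variables;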

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Distributed Systems Annex, data sharing between programs
  2012-06-13 10:55                     ` Marius Amado-Alves
  2012-06-13 13:26                       ` Dmitry A. Kazakov
@ 2012-06-14 20:29                       ` tmoran
  1 sibling, 0 replies; 30+ messages in thread
From: tmoran @ 2012-06-14 20:29 UTC (permalink / raw)


> (Is it really true that PolyORB is the only Annex E implementation that
> exists? Or just the only libre one?)
  Several years ago I made, for a project, a package body for System.RPC
and some text-processing tools to create stubs, etc.  But without specific
compiler support, that approach leaves a lot to be done manually.



