comp.lang.ada
* I'm a noob here.
@ 2014-05-05 11:28 sdalemorrey
  2014-05-05 12:43 ` Dan'l Miller
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: sdalemorrey @ 2014-05-05 11:28 UTC (permalink / raw)


Hello,

I've recently been placed at the head of an effort to completely retool a forex engine. The original developers wrote the thing in PHP (frontend) and C (backend).

There have been issues with code sanity mostly related to the size of the numbers being dealt with.  It is not uncommon for a single pass through the order match engine to generate numbers in excess of 10 Billion, and we're tracking up to 16 decimal places internally on some trades.  Obviously float would be a bad idea.  The initial Java prototype was using BigDecimal and that helped a lot, but there seems to be a bit of information loss when you do math on numbers which don't have the same scale.  Other issues were encountered as well so a new project was launched to uncover a language that could handle this sort of work more easily.

After a lot of research, I selected Ada as the platform upon which to build the next generation of this engine. My general feeling is that Ada has the best approach to minimize bugs long term, even if there is a learning curve upfront.

What I'm having a hard time wrapping my head around though, is what is the difference between Ada and Spark?  

It seems to me that Spark (at least Spark 2014) is a variant of Ada which targets the needs of systems where a code mistake might cost millions of dollars or even lives.  

What I don't understand just yet is HOW does it do that?  I've tried to follow the Ada Gems articles and the best I've been able to glean is that Spark appears to not allow assignment during compare i.e. none of C's if(a = b) garbage.  Which is great, and all, but why a whole different profile just for that? 

Or is Spark a derived language, much as C++ is derived from C, and I'm just not really grokking it properly?

I'm sure I'll have tons of other questions going forward, but I'm excited about learning a new language as I undertake this project. 

My initial role will be Architect but then I'll also be the development lead.  

My own background is C/C++/Java and I have about 15 years experience in software development with the last 5 years of that in the Architect & Project Manager role.  

This is actually the first time in half a decade where I'll need to be "the guy" who functions as an SME on a new language, so I need learning resources to come up to speed as quickly as possible. I'm glad this newsgroup exists and I hope you guys/gals don't mind me leaning on you for advice from time to time.

Any caveats with the language might be good to know about as soon as possible.  I took Pascal classes in high school and Ada looks like it derives some of its syntax from that. It's also possible that Pascal derives its syntax from Ada, but either way they seem to be in the same family.

Thanks in advance for any info you might have!



* I'm a noob here.
  2014-05-05 11:28 I'm a noob here sdalemorrey
@ 2014-05-05 12:43 ` Dan'l Miller
  2014-05-05 13:52   ` sdalemorrey
  2014-05-06 10:47   ` Brian Drummond
  2014-05-05 13:57 ` roderick.chapman
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 13+ messages in thread
From: Dan'l Miller @ 2014-05-05 12:43 UTC (permalink / raw)


In brief, what the OP is asking is:  precisely how does a SPARK compiler actually utilize the meta-information annotations?  E.g., precisely how does a SPARK compiler make logical inferences regarding correctness of Ada-subset source code to verify that the imperative code does what the annotations have declared?


* Re: I'm a noob here.
  2014-05-05 12:43 ` Dan'l Miller
@ 2014-05-05 13:52   ` sdalemorrey
  2014-05-05 13:59     ` roderick.chapman
                       ` (3 more replies)
  2014-05-06 10:47   ` Brian Drummond
  1 sibling, 4 replies; 13+ messages in thread
From: sdalemorrey @ 2014-05-05 13:52 UTC (permalink / raw)


On Monday, May 5, 2014 6:43:50 AM UTC-6, Dan'l Miller wrote:
> In brief, what the OP is asking is:  precisely how does a SPARK compiler actually utilize the meta-information annotations?  E.g., precisely how does a SPARK compiler make logical inferences regarding correctness of Ada-subset source code to verify that the imperative code does what the annotations have declared?

Well, there is that, and then there is the more general question of "What is the relationship of Ada to Spark?"  Is one a subset or superset of the other?

Is every valid Spark program a valid Ada program, or is every valid Ada program also a valid Spark program?

Ada documentation seems to be a bit thin, but Spark documentation has been nearly impossible to track down.  To my mind this means I'm either asking the question wrong, or asking the wrong question.

A good primer on Spark, as well as what tools I would need to see Spark in its full glory, would be a good place for me to start, I believe.



* Re: I'm a noob here.
  2014-05-05 11:28 I'm a noob here sdalemorrey
  2014-05-05 12:43 ` Dan'l Miller
@ 2014-05-05 13:57 ` roderick.chapman
  2014-05-05 17:34 ` Shark8
  2014-05-05 17:46 ` Jeffrey Carter
  3 siblings, 0 replies; 13+ messages in thread
From: roderick.chapman @ 2014-05-05 13:57 UTC (permalink / raw)


I suggest you drop a note to sales@adacore.com and
we'll have a proper chance to explain the goals
of SPARK.

 Rod Chapman, SPARK Team, Altran UK



* Re: I'm a noob here.
  2014-05-05 13:52   ` sdalemorrey
@ 2014-05-05 13:59     ` roderick.chapman
  2014-05-05 16:40     ` G.B.
                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 13+ messages in thread
From: roderick.chapman @ 2014-05-05 13:59 UTC (permalink / raw)


> Ada documentation seems to be a bit thin, but Spark documentation has been nearly impossible to track down.

See www.spark-2014.org

But, as I said, much better to explain this in person.
 - Rod




* Re: I'm a noob here.
  2014-05-05 13:52   ` sdalemorrey
  2014-05-05 13:59     ` roderick.chapman
@ 2014-05-05 16:40     ` G.B.
  2014-05-05 16:47     ` G.B.
  2014-05-05 17:35     ` Shark8
  3 siblings, 0 replies; 13+ messages in thread
From: G.B. @ 2014-05-05 16:40 UTC (permalink / raw)


On 05.05.14 15:52, sdalemorrey@gmail.com wrote:
> On Monday, May 5, 2014 6:43:50 AM UTC-6, Dan'l Miller wrote:
>> In brief, what the OP is asking is:  precisely how does a SPARK compiler actually utilize the meta-information annotations?  E.g., precisely how does a SPARK compiler make logical inferences regarding correctness of Ada-subset source code to verify that the imperative code does what the annotations have declared?
>
> Well, there is that, and then there is the more general question of "What is the relationship of Ada to Spark?"  Is one a subset or superset of the other?

To those unfortunate ones who cannot get this information
about Ada vs SPARK from AdaCore directly,

    SPARK (originally) defines a subset of the Ada language
    in such a way that the absence of a class of errors can
    be proved mathematically, with the help of SPARK tools.

Think "lint on formal steroids".

SPARK's analysis needs more information than is available in a
typical Ada program, and this information is added in formal
comment syntax. Being comments, an Ada compiler will ignore them.
Being formal comments, SPARK tools will interpret the formal
comments as they apply to the program text proper.
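
For instance, a classic (pre-2014) SPARK specification might look
roughly like this -- only a sketch with made-up names, but the --#
lines are the formal comments that the SPARK tools read and an Ada
compiler ignores:

   type Money is range 0 .. 1_000_000_000;

   procedure Add (Balance : in out Money; Amount : in Money);
   --# derives Balance from Balance, Amount;
   --# pre  Balance <= Money'Last - Amount;
   --# post Balance = Balance~ + Amount;

(Balance~ denotes the initial value of Balance in the postcondition.)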

More recently, AdaCore has integrated SPARK with Ada 2012,
in such a way that their recent Ada toolset can provide the
same functionality as SPARK, thereby removing the need
to have SPARK tools plus any Ada compiler, basically letting you
do the work with AdaCore's recent GNAT.
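
As a rough illustration (again only a sketch with made-up names), the
same contract expressed as Ada 2012 aspects, which GNAT compiles and
the SPARK 2014 tools analyse:

   type Money is range 0 .. 1_000_000_000;

   procedure Add (Balance : in out Money; Amount : in Money)
     with Depends => (Balance => (Balance, Amount)),
          Pre     => Balance <= Money'Last - Amount,
          Post    => Balance = Balance'Old + Amount;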

> Is every valid Spark program a valid Ada program,

Yes.

> or is every valid Ada program also a valid Spark program?

No, since SPARK restricts the Ada language.

> Ada documentation seems to be a bit thin, but Spark documentation has been nearly impossible to track down.  To my mind this means I'm either asking the question wrong, or asking the wrong question.
>
> A good primer on Spark, as well as what tools I would need to see Spark in its full glory, would be a good place for me to start, I believe.

There is quite a bit of documentation, including examples,
in the distributions themselves, which are freely available.



* Re: I'm a noob here.
  2014-05-05 11:28 I'm a noob here sdalemorrey
  2014-05-05 12:43 ` Dan'l Miller
  2014-05-05 13:57 ` roderick.chapman
@ 2014-05-05 17:34 ` Shark8
  2014-05-05 17:46 ` Jeffrey Carter
  3 siblings, 0 replies; 13+ messages in thread
From: Shark8 @ 2014-05-05 17:34 UTC (permalink / raw)


On 05-May-14 05:28, sdalemorrey@gmail.com wrote:
> Hello,
>
> I've recently been placed at the head of an effort to completely
> retool a forex engine. The original developers wrote the thing in
> PHP (frontend) and C (backend).
>
> There have been issues with code sanity mostly related to the size
> of the numbers being dealt with.  It is not uncommon for a single
> pass through the order match engine to generate numbers in excess
> of 10 Billion, and we're tracking up to 16 decimal places internally
> on some trades.  Obviously float would be a bad idea.  The initial
> Java prototype was using BigDecimal and that helped a lot, but there
> seems to be a bit of information loss when you do math on numbers
> which don't have the same scale.  Other issues were encountered as
> well so a new project was launched to uncover a language that could
> handle this sort of work more easily.

Given it was PHP, I'm actually surprised you didn't list PHP's 'helpful'
automatic methods for string/numeric conversion as one of your
trouble-points.

> After a lot of research, I selected Ada as the platform upon which to
> build the next generation of this engine. My general feeling is that
> Ada has the best approach to minimize bugs long term, even if there
> is a learning curve upfront.

I don't think you'll be disappointed.
One technique you could use is separating out user input (or other data
which is possibly 'tainted' or non-conformant to your expected format)
via a private type and having functions that return the proper, safe,
internally used type(s).
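
A minimal sketch of that idea -- hypothetical names, spec only:

   package Safe_Input is
      type Money is delta 0.000001 digits 18;  -- trusted internal type
      type Raw_Amount is private;              -- possibly tainted data
      function From_Wire (Text : String) return Raw_Amount;
      function Validate (Item : Raw_Amount) return Money;
      -- Validate checks format and range before converting, so raw
      -- text only reaches the rest of the program as a checked value.
   private
      type Raw_Amount is record
         Text : String (1 .. 64) := (others => ' ');
         Last : Natural := 0;
      end record;
   end Safe_Input;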

> What I'm having a hard time wrapping my head around though, is what
> is the difference between Ada and Spark?

SPARK is a subset of Ada -- the only one.
Its goal was to provide for (and facilitate) safety-critical
applications by excluding the Ada features that are likely to generate
nonsense and the constructs that are impossible to check/prove.

> What I don't understand just yet is HOW does it do that?

SPARK is also a tool: it integrates a prover, so that you can more
easily use formal methods in software construction.

Ironsides is a DNS server written in SPARK, and the authors have written
a couple of scholarly papers on it showing (a) the strengths of formal
methods, (b) that formal methods are ready for mainstream industry
usage, and (c) that safe software needn't be slower than unsafe
software.


Read these two papers and you should get an idea of /how/ SPARK works:
1 - http://ironsides.martincarlisle.com/globecom_2012.pdf
2 - http://ironsides.martincarlisle.com/ICRST2013.pdf

> I've tried to follow the Ada Gems articles and the best I've been
> able to glean is that Spark appears to not allow assignment during
> compare i.e. none of C's if(a = b) garbage.  Which is great, and
> all, but why a whole different profile just for that?
>
> Or is Spark a derived language, much as C++ is derived from C, and
> I'm just not really grokking it properly?

You can't do that in Ada either.
SPARK is a proper subset, not a derived language.
The SPARK profile/subset is designed to reduce the Ada language to a set
on which it is feasible to run a static prover -- not to eliminate
[strictly speaking] common mistakes in the language.

> I'm sure I'll have tons of other questions going forward, but I'm
> excited about learning a new language as I undertake this project.

Great, ask away -- c.l.a has some great people.

> My initial role will be Architect but then I'll also be the
> development lead.
>
> My own background is C/C++/Java and I have about 15 years experience
> in software development with the last 5 years of that in the
> Architect & Project Manager role.
>
> This is actually the first time in half a decade where I'll need to
> be "the guy" who functions as an SME on a new language, so I need
> learning resources to come up to speed as quickly as possible.

Ah, tutorials and learning resources aren't exactly prevalent --
compared to, say, Java or C or C++ -- but there are a few around.

I taught myself mainly from the John Barnes *Ada 2005* book; the *Ada
2012* book should be available soon. See:
http://www.amazon.com/Programming-Ada-2012-John-Barnes/dp/110742481X/ref=sr_1_1?ie=UTF8&qid=1399310596&sr=8-1&keywords=Ada+2012

> I'm glad this newsgroup exists and I hope you guys/gals don't mind
> me leaning on you for advice from time to time.
>
> Any caveats with the language might be good to know about as soon
> as possible.  I took Pascal classes in high school and Ada looks
> like it derives some of its syntax from that. It's also possible
> that Pascal derives its syntax from Ada, but either way they seem
> to be in the same family.

Ada was based on Pascal because its design as a learning language gave 
it many desirable features that addressed the 
strawman/woodman/ironman/steelman requirements.

You don't really have to "watch out" for gotchas like you do
in C/C++/PHP, because the language is remarkably free of them -- you can
depend on the compiler to catch a *lot* of what you're used to having to
do 'manually' -- so feel free to "play around" with it some.

The only big 'gotcha' that comes immediately to mind is misunderstanding
tasks (a small sketch follows below):
1 -- Inside an accept/do block there is no concurrency; this is inside
the rendezvous, so the tasks [caller & callee] are bound together.
2 -- A task which isn't finished will keep its declaring parent from
exiting, so you may have to force/allow the close yourself w/ a "done"
entry if, for some reason, you cannot use an OR TERMINATE on a select
statement.
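
A minimal sketch showing both points (made-up names, nothing from your
project):

   with Ada.Text_IO;

   procedure Task_Demo is

      task Logger is
         entry Put (Item : in Integer);
      end Logger;

      task body Logger is
      begin
         loop
            select
               accept Put (Item : in Integer) do
                  -- Point 1: no concurrency here; the caller is blocked
                  -- until this rendezvous body finishes.
                  Ada.Text_IO.Put_Line (Integer'Image (Item));
               end Put;
            or
               -- Point 2: the terminate alternative lets Task_Demo
               -- finish without an explicit "done" entry.
               terminate;
            end select;
         end loop;
      end Logger;

   begin
      for I in 1 .. 3 loop
         Logger.Put (I);
      end loop;
   end Task_Demo;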



* Re: I'm a noob here.
  2014-05-05 13:52   ` sdalemorrey
                       ` (2 preceding siblings ...)
  2014-05-05 16:47     ` G.B.
@ 2014-05-05 17:35     ` Shark8
  3 siblings, 0 replies; 13+ messages in thread
From: Shark8 @ 2014-05-05 17:35 UTC (permalink / raw)


On 05-May-14 07:52, sdalemorrey@gmail.com wrote:
> Is every valid Spark program a valid Ada program, […] ?

Yes -- precisely that.


* Re: I'm a noob here.
  2014-05-05 11:28 I'm a noob here sdalemorrey
                   ` (2 preceding siblings ...)
  2014-05-05 17:34 ` Shark8
@ 2014-05-05 17:46 ` Jeffrey Carter
  2014-05-05 18:07   ` Nasser M. Abbasi
  3 siblings, 1 reply; 13+ messages in thread
From: Jeffrey Carter @ 2014-05-05 17:46 UTC (permalink / raw)


On 05/05/2014 04:28 AM, sdalemorrey@gmail.com wrote:
>
> I've recently been placed at the head of an effort to completely retool a
> forex engine. The original developers wrote the thing in PHP (frontend) and C
> (backend).
>
> There have been issues with code sanity mostly related to the size of the
> numbers being dealt with.  It is not uncommon for a single pass through the
> order match engine to generate numbers in excess of 10 Billion, and we're
> tracking up to 16 decimal places internally on some trades.  Obviously float
> would be a bad idea.  The initial Java prototype was using BigDecimal and
> that helped a lot, but there seems to be a bit of information loss when you
> do math on numbers which don't have the same scale.  Other issues were
> encountered as well so a new project was launched to uncover a language that
> could handle this sort of work more easily.
>
> After a lot of research, I selected Ada as the platform upon which to build
> the next generation of this engine. My general feeling is that Ada has the
> best approach to minimize bugs long term, even if there is a learning curve
> upfront.

This is a good decision. Ada is the language of choice for software that has to 
be correct. There are hard data that show that using Ada results in less 
expensive development and fewer deployed errors compared to C/++, both by about 
a factor of 2. And Ada's decimal fixed-point types are specifically intended for 
financial applications with very large, exact values. You'll want a compiler 
that supports the Information Systems Annex, which will provide values with at 
least 18 decimal digits.
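
For instance, with an Annex F compiler you could declare something like
the following (names and scale are only illustrative). Note the
trade-off: 18 significant digits gives you, say, 12 before the decimal
point and 6 after, but not 10-billion magnitudes *and* 16 decimal
places in one type -- that would need 27 digits -- so you may have to
split scales or rescale internally.

   type Trade_Amount is delta 1.0E-6 digits 18;  -- exact decimal arithmetic
   Fee : constant Trade_Amount := 0.000_250;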

> What I'm having a hard time wrapping my head around though, is what is the
> difference between Ada and Spark?
>
> It seems to me that Spark (at least Spark 2014) is a variant of Ada which
> targets the needs of systems where a code mistake might cost millions of
> dollars or even lives.
>
> What I don't understand just yet is HOW does it do that?  I've tried to
> follow the Ada Gems articles and the best I've been able to glean is that
> Spark appears to not allow assignment during compare i.e. none of C's if(a =
> b) garbage.  Which is great, and all, but why a whole different profile just
> for that?

Ada disallows assignment during comparisons, so that's not SPARK.

> Or is Spark a derived language, much as C++ is derived from C, and I'm just
> not really grokking it properly?

SPARK is a subset of Ada which is fully defined (nothing is "implementation 
defined") plus a set of annotations in the form of Ada comments. The result can 
be processed by SPARK tools to formally prove the absence of some kinds of 
errors, and can be compiled by an Ada compiler. Equivalents of some of the SPARK 
annotations have recently been added to Ada in the 2012 revision of the 
standard, so SPARK is evolving accordingly.

Using SPARK requires that you have a good understanding of what the software is 
supposed to do up front. This information ends up in the annotations that the 
tools use. It's a formal, mathematical approach to S/W development that doesn't 
appeal to most developers (in other words, it's the opposite of C). There are 
data showing that it can reduce errors by an order of magnitude compared to 
plain Ada.

> Any caveats with the language might be good to know about as soon as
> possible.  I took Pascal classes in high school and Ada looks like it derives
> some of its syntax from that. It's also possible that Pascal derives its
> syntax from Ada, but either way they seem to be in the same family.

The initial design of Ada used Pascal as a starting point. One lesson from 
McCormick's experiences is to embrace user-defined numeric types, rather than 
using Integer and Float everywhere as most C people forced to use Ada do.
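
For instance (illustrative declarations only):

   type Price    is delta 0.01 digits 15;     -- exact money
   type Quantity is range 0 .. 1_000_000;     -- units traded
   type Order_Id is range 1 .. 1_000_000_000;

   -- The compiler then rejects accidental mixing, e.g. passing a
   -- Quantity where an Order_Id is expected, unless you convert
   -- explicitly.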

I recommend Riehle's /Ada Distilled/ for learning the language:

http://www.adaic.org/wp-content/uploads/2010/05/Ada-Distilled-24-January-2011-Ada-2005-Version.pdf

SPARK is covered in detail in Barnes's /High Integrity Software/.

adaic.org is a good resource.

-- 
Jeff Carter
"Ada has made you lazy and careless. You can write programs in C that
are just as safe by the simple application of super-human diligence."
E. Robert Tisdale
72


* Re: I'm a noob here.
  2014-05-05 17:46 ` Jeffrey Carter
@ 2014-05-05 18:07   ` Nasser M. Abbasi
  2014-05-05 19:02     ` Georg Bauhaus
  0 siblings, 1 reply; 13+ messages in thread
From: Nasser M. Abbasi @ 2014-05-05 18:07 UTC (permalink / raw)


On 5/5/2014 12:46 PM, Jeffrey Carter wrote:

> The initial design of Ada used Pascal as a starting point. One lesson from
> McCormick's experiences is to embrace user-defined numeric types, rather than
> using Integer and Float everywhere as most C people forced to use Ada do.
>

This is one of the most important features in Ada for me:
being able to define numerical types that match the problem
exactly, so that the compiler can help find my errors.

One would expect this feature to make Ada widely used in
numerical and scientific computations, but it is almost
non-existent in that field, due, I suppose, to the missing
libraries.

Again, Ada is a great language, but it is missing so many
libraries compared to other languages. I look at the new
language Julia: after 2 years it already has 100 times more
useful and practical packages than Ada has.

http://julia.readthedocs.org/en/latest/packages/packagelist/

But I am going off the subject of this thread, sorry...

--Nasser


* Re: I'm a noob here.
  2014-05-05 18:07   ` Nasser M. Abbasi
@ 2014-05-05 19:02     ` Georg Bauhaus
  0 siblings, 0 replies; 13+ messages in thread
From: Georg Bauhaus @ 2014-05-05 19:02 UTC (permalink / raw)


On 05/05/14 20:07, Nasser M. Abbasi wrote:
> Again, Ada is great Language, but missing so many libraries
> compared to other languages. I look at the new language Julia,
> after 2 years, it already has 100 times more useful and
> practical packages than Ada has.

Yet again, C, Ada, and C++ are practical, great, and flexible
languages, but there does not seem to be a GitHub full of the many
little helpers that Matlab/Julia/Java have... PAL is old, Boost
is incomplete, etc. Really, these languages are incomparable,
I think, since they are for rather different use cases.

There are programs whose best expression is not a
1-, 2-, 3-, ...-dimensional matrix computation.




* Re: I'm a noob here.
  2014-05-05 12:43 ` Dan'l Miller
  2014-05-05 13:52   ` sdalemorrey
@ 2014-05-06 10:47   ` Brian Drummond
  1 sibling, 0 replies; 13+ messages in thread
From: Brian Drummond @ 2014-05-06 10:47 UTC (permalink / raw)


On Mon, 05 May 2014 05:43:50 -0700, Dan'l Miller wrote:

> In brief, what the OP is asking is:  precisely how does a SPARK compiler
> actually utilize the meta-information annotations?  E.g., precisely how
> does a SPARK compiler make logical inferences regarding correctness of
> Ada-subset source code to verify that the imperative code does what the
> annotations have declared?

There is no such thing as a "SPARK compiler" (at least yet). Every SPARK
program is a valid Ada program (but not vice versa), so SPARK is simply
compiled with an Ada compiler.

What the SPARK tools do is not compilation but proving the
"correctness" of the program -- for example, proving that errors that
could cause integer overflow are simply not present in the source code.

Or, failing that, they point out the unprovable parts so that either the
proof can be completed or the error preventing proof can be eliminated.

A very large system is unlikely to be suitable for SPARK, but a good 
candidate for Ada. 

Where SPARK comes in is if you can design the system such that a 
relatively small kernel handles the most critical information (security, 
encryption, or arbitrary precision fixed point arithmetic) and cleanly 
interfaces to the rest of the system. That kernel can be formally proven 
using SPARK; the rest of the system is implemented in conventional Ada.
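
With the SPARK 2014 toolset, that boundary can be drawn per unit with
the SPARK_Mode aspect -- roughly like this (a hypothetical kernel spec;
the rest of the system simply omits the aspect):

   package Matching_Kernel
     with SPARK_Mode => On
   is
      type Amount is range 0 .. 10_000_000_000_000;
      -- a scaled-integer representation of money (hypothetical)

      procedure Match (Bid, Ask : in Amount; Filled : out Amount)
        with Post => Filled <= Bid and Filled <= Ask;
   end Matching_Kernel;

The proof tools then try to show the postcondition and the absence of
run-time errors (overflow, range violations) inside the kernel, while
ordinary Ada code elsewhere calls it normally.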

- Brian

