On Mon, 8 Jul 2013, Randy Brukardt wrote:

> I think existing Spark is evil, in that it funnels off effort best applied
> to the entire Ada language (or at least, a much larger subset). Computers
> are now powerful enough that the excuses for avoiding proofs of exceptions,
> dynamic memory allocation, and the like no longer are valid.

Agreed! Well, "evil" is a bit harsh -- existing SPARK fills a gap that Ada
until recently had left wide open, which was a good thing in its time. But
Ada 2012 has narrowed that gap a lot, and it is a good thing that Ada and
SPARK are now evolving together.

>> Spark 2005:
>> procedure Swap;
>> --# global in out X, Y;
>> --# derives X from Y &
>> --#         Y from X;
>
> You're certainly right that this is clumsy and makes little sense. :-) A
> routine that modifies global variables in this way shouldn't pass any code
> review, so it's pretty much irrelevant what the annotations would look like.

Randy, here you are really throwing out the baby with the bathwater. Sure, a
procedure to swap two globals is a silly example -- but the topic here is the
syntax for information flow analysis, and the example has the benefit of
being very simple. The following doesn't use globals and might pass a code
review:

SPARK 2005:

   procedure Swap (X, Y : in out T);
   --# derives X from Y &
   --#         Y from X;

The same specification in SPARK 2014:

   procedure Swap (X, Y : in out T)
     with Depends => (X => Y,   -- to be read as "X depends on Y"
                      Y => X);  -- to be read as "Y depends on X"

>> How are we supposed to guess that "X => Y" means X depends on Y, if the
>> arrow goes from X to Y? In the literature, flow analysis always uses
>> information flow where the arrows follow the real movement of
>> information. [...]
>
> I don't know, nor care, because I don't see any reason or value to "Depends"
> information in the first place.

So you think information flow analysis is completely worthless? (I mean, for
building and verifying correct systems -- as a compiler writer you will not
question the use of information flow analysis inside the compiler or
optimiser.)

> Routines shouldn't be modifying globals in this sort of way, IMHO, so there
> is no need to proof them. (Any globals should be hidden in package bodies
> and not exposed to clients.)

Agreed. The OP used an example that modifies globals and mine doesn't, but
the issue the OP raised remains.

>> - the syntax makes a complete confusion between a precondition and a
>> test: Preconditions are SPECIFICATIONS that ensure that the runtime
>> tests that the Ada compiler puts in the code will pass. What's the use
>> of testing at the entry of a procedure what will be tested in the
>> procedure? Doing the test twice won't improve correctness! That is why
>> preconditions and postconditions must stay as comments.
>
> If you're doing the test twice, you are completely misusing Ada 2012 (not to
> mention Spark!). The whole point is that the *only* test is in the
> precondition (or predicates). If a proof tool (or better yet, the compiler -
> I believe these proofs need to be part of normal compilation) has proven
> that the precondition can't fail, the compiler is not going to generate any
> test anywhere.

Agreed! Many people, like the OP, seem to confuse this: Ada 2012 is about
*specifying* contracts. It *allows* proving them, or checking them at run
time when a proof has not been possible. A good toolset would not generate
code to check conditions that have already been proven.

> Spark users are on a certain dead-end, IMHO.

That is a bit unfair, IMHO.
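To make that concrete -- this little sketch is mine, not from the thread, and
the package name and the choice of Integer for T are only illustrative
assumptions -- here is the Swap example once more, with the flow contract and
a functional postcondition combined in SPARK 2014 aspect syntax:

   --  Illustrative sketch only: package name and subtype are my choice.
   package Swapping with SPARK_Mode is

      subtype T is Integer;

      procedure Swap (X, Y : in out T)
        with Global  => null,              --  no globals are read or written
             Depends => (X => Y, Y => X),  --  the flow contract from above
             Post    => X = Y'Old and Y = X'Old;

   end Swapping;

   package body Swapping with SPARK_Mode is

      procedure Swap (X, Y : in out T) is
         Tmp : constant T := X;
      begin
         X := Y;
         Y := Tmp;
      end Swap;

   end Swapping;

A tool such as GNATprove can try to discharge both the Depends relation and
the Post; where the proof succeeds, no run-time test is needed, and where it
does not, the very same Post can be checked at run time -- exactly the
division of labour described above.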
SPARK users have a real head start when it comes to applying the really
innovative features of Ada 2012.

> It sounds like this is the direction that Spark 2014 is taking, which means
> IMHO that we'll all benefit, not just a few that can live with draconian
> limitations that turns Ada into a form of Fortran 77.

Agreed!

------ I love the taste of Cryptanalysis in the morning! ------
--Stefan.Lucks (at) uni-weimar.de, Bauhaus-Universität Weimar, Germany--