From: Stefan.Lucks@uni-weimar.de
Newsgroups: comp.lang.ada
Subject: Re: The future of Spark . Spark 2014 : a wreckage
Date: Tue, 9 Jul 2013 09:57:00 +0200
Organization: Bauhaus-Universitaet Weimar

On Mon, 8 Jul 2013, Randy Brukardt wrote:

> I think existing Spark is evil, in that it funnels off effort best applied
> to the entire Ada language (or at least, a much larger subset). Computers
> are now powerful enough that the excuses for avoiding proofs of exceptions,
> dynamic memory allocation, and the like no longer are valid.

Agreed!
Well, "evil" is a bit harsh -- existing SPARK fills a gap that Ada
until recently had left wide open, which was a good thing at its time. But
Ada 2012 has narrowed that gap a lot, and it is a good thing that Ada and
SPARK are now evolving together.

>> Spark 2005 :
>> procedure Swap;
>> --# global in out X, Y;
>> --# derives X from Y &
>> --#         Y from X;
>
> You're certainly right that this is clumsy and makes little sense. :-) A
> routine that modifies global variables in this way shouldn't pass any code
> review, so it's pretty much irrelevant what the annotations would look like.

Randy, here you are really throwing out the baby with the bathwater. Sure, a
procedure to swap two globals is a silly example -- but the topic here
is the syntax for information flow analysis, and the example has the
benefit of being very simple. The following doesn't use globals and might
pass a code review:

Spark 2005:

   procedure Swap(X, Y: in out T);
   --# derives X from Y &
   --#         Y from X;

Same specification in Spark 2014:

   procedure Swap(X, Y: in out T)
     with Depends => (X => Y,   -- to be read as "X depends on Y"
                      Y => X);  -- to be read as "Y depends on X"

>> How are we supposed to guess that "X => Y" means X depends on Y, if the
>> arrow goes from X to Y? In the literature, flow analysis always uses
>> information flow, where arrows follow the real movement of information.
[...]
> I don't know, nor care, because I don't see any reason or value to "Depends"
> information in the first place.

So you think information flow analysis is completely worthless? (I mean,
for building and verifying correct systems -- as a compiler writer you
will not question the use of information flow analysis inside the
compiler or optimiser.)

> Routines shouldn't be modifying globals in
> this sort of way, IMHO, so there is no need to prove them. (Any globals
> should be hidden in package bodies and not exposed to clients.)

Agreed.
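To make the flow-analysis point concrete, here is a sketch of what a complete SPARK 2014 unit for Swap might look like. The package name Swapping and the choice of Integer for T are my assumptions for illustration, not from the original posts:

```ada
--  A minimal sketch: specification and body of Swap with a Depends
--  aspect.  Package name and the Integer subtype are assumptions.
package Swapping with SPARK_Mode is

   subtype T is Integer;

   procedure Swap (X, Y : in out T)
     with Depends => (X => Y,   --  the new value of X depends (only) on Y
                      Y => X);  --  the new value of Y depends (only) on X

end Swapping;

package body Swapping with SPARK_Mode is

   procedure Swap (X, Y : in out T) is
      Tmp : constant T := X;
   begin
      X := Y;
      Y := Tmp;
   end Swap;

end Swapping;
```

The value of the annotation shows up when the body is wrong: if the body accidentally read `X := Y; Y := Y;`, a flow-analysis tool such as GNATprove should report that the stated dependency of Y on X is not realised, catching the bug without any test case.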
The OP has been using an example with modifying globals; mine doesn't, and
the issue raised by the OP remains.

>> - the syntax makes a complete confusion between a precondition and a
>> test: Preconditions are SPECIFICATIONS that ensure that the runtime
>> tests that the Ada compiler puts in the code will pass. What's the use
>> of testing at the entry of a procedure what will be tested in the
>> procedure? Doing the test twice won't improve correctness! That is why
>> preconditions and postconditions must stay as comments.
>
> If you're doing the test twice, you are completely misusing Ada 2012 (not to
> mention Spark!). The whole point is that the *only* test is in the
> precondition (or predicates). If a proof tool (or better yet, the compiler -
> I believe these proofs need to be part of normal compilation), has proven
> that the precondition can't fail, the compiler is not going to generate any
> test anywhere.

Agreed! Many people seem to confuse this, like the OP: Ada 2012 is about
*specifying* contracts. It *allows* one to prove them, or to check them at run
time if proving them has not been possible. A good toolset would not
generate code checking conditions that have already been proven.

> Spark users are on a certain dead-end, IMHO.

That is a bit unfair, IMHO. SPARK users have a real head start when it
comes to applying the really innovative features of Ada 2012.

> It sounds like this is the direction that Spark 2014 is taking, which means
> IMHO that we'll all benefit, not just a few that can live with draconian
> limitations that turn Ada into a form of Fortran 77.

Agreed!

------ I love the taste of Cryptanalysis in the morning! ------
--Stefan.Lucks (at) uni-weimar.de, Bauhaus-Universität Weimar, Germany--
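P.S.: The point about contracts being specified once, then either proven or checked, can be illustrated with a small Ada 2012 sketch. The function name and types are invented for illustration:

```ada
--  Sketch of an Ada 2012 contract; Divide and its profile are
--  hypothetical examples, not from the original discussion.
function Divide (A, B : Integer) return Integer
  with Pre  => B /= 0,
       Post => Divide'Result = A / B;

function Divide (A, B : Integer) return Integer is
begin
   --  Note: no defensive "if B = 0 then raise ..." here.  The
   --  precondition is the only test.  With assertions enabled
   --  (pragma Assertion_Policy (Check)) it is evaluated at the call;
   --  once a proof tool has shown it cannot fail, no check need be
   --  generated anywhere.
   return A / B;
end Divide;
```

The contract is written exactly once, in the specification; whether it becomes a run-time check or a discharged proof obligation is a tooling decision, not a duplication of tests.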