From: "Randy Brukardt"
Newsgroups: comp.lang.ada
Subject: Re: The future of Spark . Spark 2014 : a wreckage
Date: Mon, 8 Jul 2013 16:32:21 -0500
Organization: Jacob Sparre Andersen Research & Innovation

Vincent DIEMUNSCH wrote in message
news:d84b2eb2-eab3-4dec-917c-0498473c1e93@googlegroups.com...

> Have you heard of the new Spark 2014 initiative by AdaCore?
> http://docs.adacore.com/spark2014-docs/html/ug/language_subset.html
>
> Spark 2014 aims at replacing the Spark annotations with Ada 2012 aspects,
> at replacing proof functions with expression functions, and at using the
> Frama-C tools (Alt-Ergo and the Why3 language) for automatic proof. At the
> beginning I thought this was a good idea, but the more I get into the
> subject, the more it seems to me that they are going wrong.

I have nothing to do with AdaCore or this Spark update, so what follows is
personal opinion.

I think existing Spark is evil, in that it funnels off effort best applied
to the entire Ada language (or at least, a much larger subset). Computers
are now powerful enough that the excuses for avoiding proofs of exceptions,
dynamic memory allocation, and the like are no longer valid. Thus I like
what AdaCore is trying to do, because they are bringing Spark closer to Ada
itself, which makes it much more likely that the effort that goes into
Spark can be applied to the full Ada language as well.

> First, ASPECTS. Let's take an example that is given in the Spark
> Reference Manual.
>
> - their syntax is clumsy and error prone:
>
> Spark 2005:
>    procedure Swap;
>    --# global in out X, Y;
>    --# derives X from Y &
>    --#         Y from X;

You're certainly right that this is clumsy and makes little sense. :-)
A routine that modifies global variables in this way shouldn't pass any
code review, so it's pretty much irrelevant what the annotations would
look like.

> Same specification in Spark 2014:
>    procedure Swap
>      with Global  => (In_Out => (X, Y)),
>           Depends => (X => Y,   -- to be read as "X depends on Y"
>                       Y => X);  -- to be read as "Y depends on X"
>
> How are we supposed to guess that "X => Y" means X depends on Y, if the
> arrow goes from X to Y? In the literature, flow analysis always uses
> information flow, where the arrows follow the real movement of the
> information. See for instance "SPARK: The Proven Approach to High
> Integrity Software" by John Barnes, page 311, or the "Dragon Book"
> (Compilers: Principles, Techniques and Tools).

I don't know, nor care, because I don't see any reason or value to
"Depends" information in the first place. Routines shouldn't be modifying
globals in this sort of way, IMHO, so there is no need to prove anything
about them. (Any globals should be hidden in package bodies and not
exposed to clients.)

The Globals notation is much more important. We tried to design such a
notation for Ada 2012, but it became too much work to complete and there
was a lot of concern about getting it wrong. So we left this for the
"experts". It appears that the experts are working on it, which is good,
because we still want this capability in the next version of Ada (whenever
that will be). And for that to be the case, we'll need it to have proper
Ada syntax using aspects.
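Just to illustrate what I mean by "hidden in package bodies" together with
a Globals aspect, here's a rough sketch in the Spark 2014 aspect style (the
package and every name in it are invented, and I haven't pushed it through
any tool): the actual variable lives in the body, the specification names
the state only abstractly, and the Global contracts refer to that abstract
name.

   package Counter
     with Abstract_State => State
   is
      procedure Increment
        with Global => (In_Out => State);

      function Value return Natural
        with Global => (Input => State);
   end Counter;

   package body Counter
     with Refined_State => (State => Count)
   is
      Count : Natural := 0;   -- the actual global, invisible to clients

      procedure Increment is
      begin
         Count := Count + 1;
      end Increment;

      function Value return Natural is (Count);
   end Counter;

Clients can only name the abstract State in their own contracts; the
refinement onto Count stays entirely inside the body.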
> - the syntax makes a complete confusion between a precondition and a
> test: preconditions are SPECIFICATIONS that ensure that the runtime tests
> that the Ada compiler puts in the code will pass. What's the use of
> testing at the entry of a procedure what will be tested in the procedure?
> Doing the test twice won't improve correctness! That is why preconditions
> and postconditions must stay as comments.

If you're doing the test twice, you are completely misusing Ada 2012 (not
to mention Spark!). The whole point is that the *only* test is in the
precondition (or predicates). If a proof tool (or better yet, the compiler
- I believe these proofs need to be part of normal compilation) has proven
that the precondition can't fail, the compiler is not going to generate
any test anywhere.

I suppose Spark (which is a separate tool) won't be able to tell the
compiler not to make the check. But since there will only be one check in
the code, you certainly won't be getting redundant checks. (The entire
point of adding raise expressions was that we had failed to provide proper
mechanisms for setting the exception raised by these checks; that's
necessary to be able to eliminate the existing hand-coded checks in
existing code. That's why raise expressions were added as a "bug fix" to
Ada 2012 and thus are required of all Ada 2012 implementations.)

> Second, EXPRESSION FUNCTIONS:
>
> Expression functions break the Ada specification by allowing the
> programmer to give the body of a function in the specification of a
> package. For what purpose? To allow the runtime testing of pre- and
> postconditions...

No, to allow *compile-time* analysis of pre- and postconditions. An Ada
compiler cannot look into the body of a package to see what is there, and
without expression functions, almost no compile-time analysis could be
accomplished. (We also need globals annotations for similar reasons.) The
real end-game here ought to be compile-time proof (that is, proofs that
are created as part of the normal compilation of Ada code).
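To make those two points concrete, here's a minimal Ada 2012 sketch (the
package, the bound, and the exception name are invented for illustration):
the precondition is the only check, the raise expression names the
exception raised if the check isn't proven away, and the completion of
Is_Full is an expression function in the private part of the
specification, so the compiler and a proof tool can reason about calls to
Push without ever opening the package body.

   package Stacks is

      Overflow : exception;

      type Stack is private;

      function Is_Full (S : Stack) return Boolean;

      --  The only check is the precondition; if it is proven to hold at
      --  every call site, no test is generated anywhere.  The raise
      --  expression says which exception is raised when it isn't proven.
      procedure Push (S : in out Stack; Item : Integer)
        with Pre => not Is_Full (S) or else raise Overflow;

   private

      Max : constant := 100;

      type Element_Array is array (1 .. Max) of Integer;

      type Stack is record
         Top  : Natural := 0;
         Data : Element_Array;
      end record;

      --  Completion as an expression function in the private part:
      --  clients see only the declaration above, but the compiler and
      --  prover can see the expression without needing the package body.
      function Is_Full (S : Stack) return Boolean is (S.Top = Max);

   end Stacks;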
> But Spark had logical functions that could be declared in the package
> specification, for expressing pre/postconditions, for instance:
>
>    --# type StackType is abstract;
>    --# function Count_of(S: StackType) return Natural;
>    --# function Substack(S1: StackType; S2: StackType) return Boolean;
>
> The problem with Spark was that one had to use an old language, FDL, to
> express what these functions meant. So the only thing that is required
> here for an improvement of Spark is to be able to write IN THE PACKAGE
> BODY the logical function as, for instance:

Proofs of any kind should never, ever depend on anything in the package
body (other than the one being proved, of course):

(1) Body dependence greatly increases coupling.

(2) Proofs should be able to be created for partially created programs
(doing proofs as an afterthought will never work); no body ought to be
required, or even to exist. (This is also important so that proprietary
code can be used and still included in proofs.)

(3) If body dependencies are eliminated, proofs remain valid so long as no
specification is changed. The only unit that needs reproving when a body
is changed is that body itself.

(2) is necessary to use proof tools on large systems. Full source is
almost never available, and if it is, it isn't available until very late
in a project. (3) is necessary so that work doesn't have to be done over
and over and over.

...

> - it allows a refinement, in the body, of the abstract type StackType
>   that represents the package state, which is not possible with
>   expression functions in the specification. It seems that the people of
>   Spark 2014 didn't get that point.

The expression functions go in the private part, where you can have
visibility onto anything that you need, without exposing it to clients.
Perhaps you missed that part.

> I really hope that Spark users will say to AdaCore that they are going
> wrong and that AdaCore will really try to make a better future for Spark.
>
> Vincent DIEMUNSCH

Spark users are on a certain dead end, IMHO. The future is getting proof
into the hands of all programmers, as a normal part of their day-to-day
programming. That requires that proofs aren't necessarily complete, that
they use whatever information is available, and most of all, that they
depend only on the specifications of packages (never, ever, on the
contents of a body). It sounds like this is the direction that Spark 2014
is taking, which means IMHO that we'll all benefit, not just the few who
can live with draconian limitations that turn Ada into a form of Fortran
77.

Just my $1 worth (inflation, you know).

                           Randy.