From: vincent.diemunsch@gmail.com
Newsgroups: comp.lang.ada
Subject: The future of Spark . Spark 2014 : a wreckage
Date: Mon, 8 Jul 2013 10:56:09 -0700 (PDT)

Have you heard of the new Spark 2014 initiative by AdaCore?
http://docs.adacore.com/spark2014-docs/html/ug/language_subset.html

Spark 2014 aims at replacing the Spark annotations with Ada 2012 aspects,
at replacing proof functions with expression functions, and at using the
Frama-C tool family (the Why3 language and the Alt-Ergo prover) for
automatic proof. At the beginning I thought this was a good idea, but the
more I get into the subject, the more it seems to me that they are going
wrong.

First, ASPECTS. Let's take an example that is given in the Spark Reference
Manual.

- Their syntax is clumsy and error prone:

Spark 2005:

   procedure Swap;
   --# global in out X, Y;
   --# derives X from Y &
   --#         Y from X;

The same specification in Spark 2014:

   procedure Swap
     with Global  => (In_Out => (X, Y)),
          Depends => (X => Y,   -- to be read as "X depends on Y"
                      Y => X);  -- to be read as "Y depends on X"

How are we supposed to guess that "X => Y" means "X depends on Y", when the
arrow goes from X to Y? In the literature, flow analysis always uses
information flow, where the arrows follow the real movement of the
information. See for instance "SPARK: The Proven Approach to High Integrity
Software" by John Barnes, page 311, or the "Dragon Book" (Compilers:
Principles, Techniques, and Tools).

- The syntax makes a complete confusion between a precondition and a test.

Preconditions are SPECIFICATIONS that ensure that the runtime tests the Ada
compiler puts in the code will pass. What is the use of testing, at the
entry of a procedure, what will be tested inside the procedure anyway?
Doing the test twice won't improve correctness! That is why preconditions
and postconditions must stay as comments.

Second, EXPRESSION FUNCTIONS.

Expression functions break the Ada separation between specification and
body, by allowing the programmer to put the body of a function into the
specification of a package. For what purpose? To allow the runtime testing
of pre- and postconditions...
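To make that last point concrete, here is a small sketch of my own (an
invented Stacks package, not an example taken from AdaCore) showing how an
Ada 2012 expression function puts a full, compilable body straight into the
package specification, where the contract aspects can then call it at run
time:

   package Stacks is
      type Stack is private;

      function Is_Empty (S : Stack) return Boolean;
      function Count    (S : Stack) return Natural;

      procedure Pop (S : in out Stack)           -- body not shown
        with Pre  => not Is_Empty (S),           -- checked on every call
             Post => Count (S) = Count (S'Old) - 1;

   private
      Max_Size : constant := 100;
      type Vectors is array (1 .. Max_Size) of Integer;

      type Stack is record
         Ptr    : Natural := 0;
         Vector : Vectors := (others => 0);
      end record;

      -- Expression functions: full bodies, yet they live in the
      -- specification and are compiled into ordinary executable code.
      function Is_Empty (S : Stack) return Boolean is (S.Ptr = 0);
      function Count    (S : Stack) return Natural is (S.Ptr);
   end Stacks;

The Pre and Post aspects here compile into checks executed on every call of
Pop: the "specification" has become a test, which is exactly the confusion
I am denouncing.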
But Spark had logical functions that could be declared in the package
specification precisely for expressing pre/postconditions, for instance:

   --# type StackType is abstract;
   --# function Count_of (S : StackType) return Natural;
   --# function Substack (S1 : StackType; S2 : StackType) return Boolean;

The problem with Spark was that one had to use an old language, FDL, to
express what these functions meant. So the only thing required here to
improve Spark is the ability to write the logical functions IN THE PACKAGE
BODY, for instance:

   package body Stack
   --# own State is Ptr, Vector;
   is
      type Ptrs is range 0 .. MaxStackSize;
      subtype Indexes is Ptrs range 1 .. Ptrs'Last;
      type Vectors is array (Indexes) of Integer;
      Ptr    : Ptrs    := 0;
      Vector : Vectors := Vectors'(Indexes => 0);

      --# function Count_of (S : StackType) return Natural is Ptr;
      --# function Substack (S1 : StackType; S2 : StackType) return Boolean is
      --#   (for all X in 1 .. S1.Ptr => S1.Vector (X) = S2.Vector (X));

This is much, much better than the Spark 2014 solution, because:

- A proof function like Substack is not supposed to be called at runtime:
it compares two states of the same unique stack, not two real stacks! So it
should not appear in the compilable code. But Spark 2014 would transform it
into a runtime comparison...

- It allows a refinement, in the body, of the abstract type StackType that
represents the package state, which is not possible with expression
functions in the specification. It seems that the people behind Spark 2014
didn't get that point.

I really hope that Spark users will tell AdaCore that they are going wrong,
and that AdaCore will really try to make a better future for Spark.

Vincent DIEMUNSCH
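P.S. For comparison, here is roughly what one is forced to write in Spark
2014 if Substack is to be usable in contracts. This is my own sketch, and
it assumes the state is wrapped in a record type made visible in the
specification, since an expression function there cannot refine an
abstract own variable:

   package Stack is
      Max_Stack_Size : constant := 100;
      type Ptrs is range 0 .. Max_Stack_Size;
      subtype Indexes is Ptrs range 1 .. Ptrs'Last;
      type Vectors is array (Indexes) of Integer;

      -- The state can no longer stay abstract: the expression function
      -- below needs to see the record components in the specification.
      type StackType is record
         Ptr    : Ptrs    := 0;
         Vector : Vectors := (others => 0);
      end record;

      -- The former proof function is now ordinary compiled code that
      -- really compares two stack values at run time.
      function Substack (S1, S2 : StackType) return Boolean is
        (for all X in 1 .. S1.Ptr => S1.Vector (X) = S2.Vector (X));
   end Stack;

Note what has been lost: Substack has become a callable runtime function,
and StackType had to be made a concrete record, so both the abstraction of
the package state and the purely logical status of the proof function are
gone.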