Newsgroups: comp.lang.ada
Date: Thu, 11 Jul 2013 03:10:51 -0700 (PDT)
References: <7ebe0439-fa4c-4374-a6c7-365ac78e6e39@googlegroups.com>
Subject: Re: The future of Spark . Spark 2014 : a wreckage
From: vincent.diemunsch@gmail.com
Xref: news.eternal-september.org comp.lang.ada:16291

> The idea of "total correctness" is exactly what I object to. It's a
> false promise that only works on tiny, toy programs.
> Real Spark programs almost always interface to "regular" Ada code to do
> stuff that Spark can't handle, and once you've done that, you've lost
> your "total correctness".

Ok, one can't prove everything, only a subset of the logic, an
abstraction of the system. But total correctness is vital for:
- distributed or concurrent programs, which can't be "debugged" the
  usual way because they are not deterministic;
- embedded systems, because errors mean loss of lives, and it is simply
  not possible to test everything: the number of tests is too high.

> In any case, static memory allocation is not necessary to prove
> correctness.

Maybe, but it is easier to prove with it :-).

> It's not remotely trivial, because you have to deal with global
> variables that you don't have visibility onto and thus must not name
> (those in package bodies, especially), you have to deal intelligently
> with storage pools and access types, and you have to deal with generic
> units. Spark ignores almost all of these issues by not allowing the
> constructs, which is a non-starter for a construct for the full Ada
> language. We also required checking that the annotations were
> accurate, and that too is a fairly complex issue.

> No annotation means "uses anything", of course. You have to specify
> that a routine does NOT use globals. Perhaps that's backwards of what
> we would do if starting from scratch, but we're not starting from
> scratch. This is the same way we dealt with "overriding" and "limited"
> and lots of other things.

It is easy to specify that: simply use pragma Pure.

But what I like with Spark is that they took the problem from scratch
and created their own annotations, with abstract states and inherit
clauses. It won't be possible to do the same in Ada without these
abstractions, and I don't see the point of adding them without a deep
rethinking of Ada encapsulation...
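To illustrate the pragma Pure point above, here is a minimal sketch
(the package name is hypothetical): a Pure package may not declare
variables nor depend on non-pure units, so the compiler itself
guarantees that its subprograms touch no global state.

```ada
--  Hypothetical example: pragma Pure forbids package-level variables
--  and dependences on impure units, so Normalize provably has no
--  global side effects, checked by the compiler, not by annotations.
package Angles is
   pragma Pure;

   type Degrees is digits 6 range 0.0 .. 360.0;

   function Normalize (A : Degrees) return Degrees;
end Angles;
```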
Because visibility of state and data flow is nice, but it is a heavy
burden that not everyone wants to deal with; hence the very existence of
global variables.

There is another point on which I disagree: the claim that compilers
make proofs for optimisation. Compilers do symbolic computation, and
this computation includes propositional calculus. Sometimes the result
of a symbolic computation is called a proof, but it is not a
mathematical proof in fact. A proof is the very essence of mathematics,
and it is a human activity. Proving programs is a human activity,
exactly like design.

Spark syntax needs to be different from Ada syntax because they
manipulate different things: Ada manipulates types like Boolean and
Integer; Spark annotations manipulate propositions based on abstractions
of types (which I call categories). For instance, where Ada sees:

   X : Positive;

Spark sees:

   X : root_integer;  X >= 1;  X <= Integer'Last;

So it is nonsense to keep the Ada syntax for annotations and to allow
them to be transformed into tests, because in doing so one loses
abstraction.

> If you don't believe that Ada syntax is far superior to any other
> language's syntax, go back to curly bracket hell. :-)

I like the Ada syntax, and also the Spark syntax :-)

> Wow, what an elitist attitude!

Ada must be better than C or Java, otherwise it will disappear.

> Moreover, only a very small number of people/projects are willing to
> spend extra time to prove a working program. Schedules and the like
> make those sorts of extras the first thing to go.

Right. Spark is for the elite; trying to sell it to everybody will
result in a big disappointment.

> Too late, Ada 2012 was standardized in December and couldn't be
> rescinded if we wanted to. Far too much code already exists. You
> should have come to the ARG and made these comments 3 years ago.

I wish I could.
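The separate annotation language defended above can be sketched in
SPARK 2005 notation (hypothetical package and names): the "--#"
comments carry the abstract state and data-flow propositions in their
own language, alongside but distinct from the executable Ada.

```ada
--  Hypothetical SPARK 2005 sketch: annotations live in "--#" comments,
--  a separate language from Ada itself, as argued above.
package Counter
--# own State;                             -- abstract state annotation
is
   procedure Increment (Amount : in Positive);
   --# global in out State;                -- declared, checked data flow
   --# derives State from State, Amount;   -- information-flow relation
end Counter;
```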
> Of course, it's unlikely that they would have changed anything -- we
> had comments from members of the formal methods community and they
> were taken very seriously. And we decided to go with aspects,
> expression functions and the like. Now that the path has been set,
> it's likely that most will follow along.

> You of course don't have to follow the path -- you can stay with Ada
> 2005 and Spark 2005 (which I'm sure will continue to be supported for
> a long time to come). But if you want to stay at the bleeding edge,
> then you're going to have to adopt Ada 2012, and everything that comes
> with it.

I will use Ada 2012, except annotations and expression functions, and
Spark 2005. I think that Spark 2014 will be a dilemma for many people,
and it will divide the little community in two: those who follow and
those who stay with comment annotations. Since all the tools are open
source, the second group may try to make an evolution of the
annotations, as I proposed.

> Anyway, good luck, no matter how you decide to proceed with Ada. I
> hope of course, that you will get over your initial reaction to Ada
> 2012 and join us in a better future way of programming, but if not,
> good luck wasting time. :-)

I hope that it won't be a total waste of time, because I think the error
of AdaCore is that they thought they could use the tools and techniques
from Frama-C. According to some researchers, the Frama-C tools are
better than the Spark tools (more advanced, but more reliable?), but the
ACSL language is not at all as good as Spark. The trick is that while
improving the tools, which was their job, they thought they could change
the language and make it like ACSL (E-ACSL, the new version they
created). But in doing so they lost the very specificity of Spark: not
only its clear syntax, but its higher abstraction level, just as Ada has
a higher abstraction level than C or Java.

Regards,

Vincent