From: "Randy Brukardt"
Newsgroups: comp.lang.ada
Subject: Re: The future of Spark . Spark 2014 : a wreckage
Date: Thu, 11 Jul 2013 19:00:35 -0500
Organization: Jacob Sparre Andersen Research & Innovation

wrote in message
news:a20ef22c-36f0-4c04-8d79-6aaadabb8fa6@googlegroups.com, replying to me:

>> The idea of "total correctness" is exactly what I object to. It's a
>> false promise that only works on tiny, toy programs. Real Spark
>> programs almost always interface to "regular" Ada code to do stuff
>> that Spark can't handle, and once you've done that, you've lost your
>> "total correctness".
>
>OK, one can't prove everything, only a subset of the logic, an
>abstraction of the system. But total correctness is vital for:
>- distributed or concurrent programs, which can't be "debugged" the
>  usual way because they are not deterministic;
>- embedded systems, because errors mean loss of lives, and it is simply
>  not possible to test everything: the number of tests is too high.

I'm pretty amazed to find out that I can't debug concurrent programs,
because I've been doing it for decades. On top of that, unit tests almost
never involve any concurrency (just like proofs rarely do), so that is
irrelevant for the vast majority of debugging (and for the sorts of
things that you would prove with Spark).

...

>> No annotation means "uses anything", of course. You have to specify
>> that a routine does NOT use globals. Perhaps that's backwards of what
>> we would do if starting from scratch, but we're not starting from
>> scratch. This is the same way we dealt with "overriding" and "limited"
>> and lots of other things.
>
>It is easy to specify that: simply use pragma Pure.

Nope. First of all, pragma Pure allows the use of lots of kinds of
globals - just do a dereference somewhere in the package. Secondly,
pragma Pure only works on packages. It's extremely difficult to find any
package that could have pragma Pure applied, because that means there is
no I/O, no access types, no clocks/timers. I've never found any package
of mine that could use it (almost all of my private types include a way
to trace themselves, which you can't do in a pure package). Moreover, did
you know that adding pragma Pure to a package could make it erroneous
(even if it is legal and compiles correctly)?
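To make the dereference hole concrete, here is a minimal sketch (the
names Pure_Hole, Peek, and Demo are invented for illustration): a
function in a pure package can still depend on state outside the package
through an access parameter, and the permission in RM 10.2.1 to reuse
the result of an earlier call with the same actuals then becomes a trap.

   package Pure_Hole is
      pragma Pure;
      -- Legal in a pure package, yet the result depends on whatever
      -- P designates, which can change between calls.
      function Peek (P : access constant Integer) return Integer;
   end Pure_Hole;

   package body Pure_Hole is
      function Peek (P : access constant Integer) return Integer is
      begin
         return P.all;  -- reaches state outside the package
      end Peek;
   end Pure_Hole;

   with Pure_Hole;
   procedure Demo is
      X : aliased Integer := 1;
      A, B : Integer;
   begin
      A := Pure_Hole.Peek (X'Access);
      X := 2;
      B := Pure_Hole.Peek (X'Access);
      -- Both calls pass the same access value, so RM 10.2.1 lets the
      -- compiler reuse A's result for B: B can legitimately be 1 here.
   end Demo;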
Pure has a number of holes that make a program using it potentially less
safe than one that does not. Because of this, I'm surprised that a
safety-critical program would allow it to be used at all.

GNAT has a pragma Pure_Function, which can be applied to a single
function. It isn't in Ada because it is just an assumption; there are no
checks at all that it is true. Any mistake in the function makes a
program calling it erroneous (because the compiler is allowed to assume
the function is pure, even if it isn't). It can only be made safe if it
is combined with a proof that there are no side effects, in which case it
surely should never be separated from that proof -- but pragmas are
*always* separated from the thing they apply to. As such, they are never
appropriate for setting properties of single entities.

>But what I like with Spark is that they took the problem from scratch
>and created their own annotations with abstract states and inherit
>clauses. It won't be possible to do it in Ada without these
>abstractions, and I don't see the point of adding these without a deep
>rethinking of Ada encapsulation...

The point is very simple: Ada *compilers* can generate better code if
they have this additional information. It isn't just useful for proofs.
(And that has nothing to do with other annotations, either; it's
unrelated to assertions like preconditions.) And there is no way that
we're ever going to put anything significant to Ada compilers (and Ada
programmers) into comments. Ada's encapsulation is one of the things Ada
does better than other languages -- there is no reason to rethink
anything. There is a need to allow people who want to do so to give
additional information to both readers and the Ada compiler.

>Visibility of state and data flow is nice, but it is a heavy burden
>that not everyone wants to deal with; thus the very existence of global
>variables.

Quite right, which is why the default has to be to live without that
information. If you want to add it, you'll get more error detection at
compile time, possibly better code (even with all checking turned off),
and more information for tools like Spark. A win-win in my mind.
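For concreteness, here is roughly what that extra information looks like
in the two notations under discussion. Flow_Sketch, Counter, Bump, and
Add are invented names; the --# lines are Spark 2005 annotation syntax,
Global is a Spark 2014 aspect, and Pre is core Ada 2012. The two styles
are shown side by side only for comparison; a real unit would use one or
the other.

   package Flow_Sketch
   --# own Counter;
   is
      Counter : Natural := 0;

      -- Spark 2005 style: the data-flow contract lives in a comment,
      -- invisible to an Ada compiler.
      procedure Bump;
      --# global in out Counter;

      -- Spark 2014 style: the same fact as an aspect, visible to the
      -- prover *and* to an Ada compiler.
      procedure Bump_2014
        with Global => (In_Out => Counter);

      -- The "routine does NOT use globals" case: the result depends
      -- only on the parameters, enabling better code and checking.
      function Add (X, Y : Natural) return Natural
        with Global => null,
             Pre    => X <= Natural'Last - Y;
   end Flow_Sketch;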
>There is another point on which I disagree: the fact that compilers
>make proofs for optimisation. Compilers do symbolic computation, and
>this computation includes propositional calculus. Sometimes the result
>of a symbolic computation is called a proof, but it is not a
>mathematical proof in fact. A proof is the very essence of mathematics,
>and it is a human activity. Proving programs is a human activity,
>exactly like design.

I agree that your definition of proof is wildly different from mine.
What you're interested in I find not only irrelevant but actively
harmful. The last thing we need (in any field!) is more human
activities. Humans make mistakes; the more of the checking the computer
does, the better. And we absolutely do not need any additional tasks in
programming; the pressure to take short-cuts is already very high, and
anything "extra" is very liable to be cut.

>Spark syntax needs to be different from Ada syntax because they
>manipulate different things: Ada manipulates types like Boolean and
>Integer. Spark annotations manipulate propositions based on
>abstractions of types (that I call categories). For instance, where Ada
>sees:
>   X : Positive;
>Spark sees:
>   X : root_integer;
>   X >= 0;

I hope Spark sees X >= 1, else it isn't even proving the right thing.
;-)

>   X < Integer'Last;

Well, an Ada compiler sees:
   X : root_integer;
   Check X in 1 .. Integer'Last.
This is not materially different.

>So it is nonsense to keep the Ada syntax for annotations and to allow
>them to be transformed into tests, because in doing so one loses
>abstraction.

Only if you don't understand Ada semantics. Having two different ways to
write the same thing doesn't add any abstraction!

>> If you don't believe that Ada syntax is far superior to any other
>> language's syntax, go back to curly bracket hell. :-)
>
>I like the Ada syntax, and also the Spark syntax :-)

Fine, then stick with it.

>> Wow, what an elitist attitude!
>
>Ada must be better than C or Java, otherwise it will disappear.

It already is. And this has nothing to do with Spark.

>> Moreover, only a very small number of people/projects are willing to
>> spend extra time to prove a working program. Schedules and the like
>> make those sorts of extras the first thing to go.
>
>Right. Spark is for the elite; trying to sell it to everybody will
>result in a big disappointment.

Then, IMHO, it's not worth having. We need more customers for Ada as a
whole, not a series of fractionated subsets. And to do that, we need
*Ada* to be better.

...

>I will use Ada 2012, except annotations and expression functions, and
>Spark 2005.

Since aspects (and especially the precondition and postcondition
aspects) are pretty much the only significant new feature in Ada 2012
(pretty much everything else was designed to support them, other than
some of the container-related things), you'll end up using Ada 2005 plus
if expressions. That hardly counts as using Ada 2012. I certainly would
not call any code that uses an entity-specific pragma Ada 2012 code.
(I'd also call it evil, but that's just me.)

...

Randy.