From: "Randy Brukardt"
Newsgroups: comp.lang.ada
Subject: Re: {Pre,Post}conditions and side effects
Date: Mon, 11 May 2015 20:03:52 -0500
Organization: Jacob Sparre Andersen Research & Innovation

wrote in message news:alpine.DEB.2.11.1505111113020.12062@debian...
On Fri, 8 May 2015, Randy Brukardt wrote:

>>>>> Turning off the checks just hides the problem.
>>>
>>> *IF* there is a problem at all.
>>
>> See above. There is *always* a problem; the only question is whether
>> you are willing to defend against it or not.
>
>Yes, and no. Having worked a bit with SPARK, my experience is mixed. When
>the tool fails to verify my program, the investigation sometimes actually
>reveals a subtle bug, and I am happy to have used the tool.
That's the value of those tools: to prove that something is not correct.
It's just like testing in that way; you can't really prove that the
program is correct, but you surely can prove that it is not correct.

> But quite as often, the verifier just fails at some simple and
> indisputable facts, such as proving that ((p-1) + 1) mod p is zero.
> Recently, when trying to write a sorting procedure in SPARK, the prover
> has been unable to figure out that, given an array A and two indices
> i /= j in the proper range, the new array I get from A by swapping A(i)
> and A(j) (without changing any of the values A(k) for k not in {i,j}),
> is a permutation of A. I haven't yet solved the problem.

These sorts of problems would come up in proving postconditions, but I
don't see that happening for preconditions.

>So the ability of any of the tools we have now (and, I would argue, in
>the foreseeable future) to prove program correctness is very limited. If
>a compiler with such limited abilities turns my logarithmic-time search
>routine into a linear-time routine, just because it couldn't prove that
>the input to the routine is always sorted, then the compiler is broken.

No, I'd still argue your code is broken. If *you* know that some object is
always sorted, then *you* should tell the compiler that with an
appropriate predicate:

    subtype Sorted_Array is Some_Array
       with Dynamic_Predicate => Is_Sorted (Sorted_Array);

    My_Array : Sorted_Array := ...;

Now, whenever My_Array is assigned (as a whole) or passed as a parameter,
it will be checked for whether it is sorted. That pushes the check to
whenever the array is created/initialized/modified, which is not going to
have any effect on your sort routine. On top of which, the routines that
do the creation/initialization/modification probably ought to have
postconditions that the array is sorted as well. In which case, the object
also will have been previously checked, so the cost will be at the end of
those routines.
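Spelled out a little more fully, the approach might look like this (a
minimal sketch only; the element type and the names Binary_Search and Sort
are illustrative, not from any particular library):

```ada
--  Declarations as they might appear in a package spec (Ada 2012).
type Int_Array is array (Positive range <>) of Integer;

function Is_Sorted (A : Int_Array) return Boolean is
   (for all I in A'First .. A'Last - 1 => A (I) <= A (I + 1));

--  The "sorted" property lives in a predicate, not in a precondition,
--  so it is checked where a Sorted_Array is created or converted.
subtype Sorted_Array is Int_Array
   with Dynamic_Predicate => Is_Sorted (Sorted_Array);

--  Callers pay no per-call Is_Sorted check inside the search itself.
function Binary_Search (A : Sorted_Array; Key : Integer) return Natural;

--  A routine that establishes the property advertises it, so later
--  predicate checks can in principle be proved away.
procedure Sort (A : in out Int_Array)
   with Post => Is_Sorted (A);
```

The design point is that the O(N) Is_Sorted check runs once, when the
sorted object comes into existence, rather than on every O(log N) search.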
And possibly (although unlikely in this particular case), that check could
be proved away there as well.

> Proving program properties (apparently you don't like the word
> "correctness" in that context)

Right.

> is a *tool* for the programmer. If properly used, it can be an extremely
> useful tool, especially for medium- and high-assurance software. But
> doing foolish things if the proof fails, or strictly requiring that all
> relevant properties actually be proven, would turn this tool from
> something useful into a terrible burden.

No real burden, IMHO. The sorts of properties that should be involved
should be relatively simple to express and thus prove, and not that
expensive to check. Much like null pointer checks or variant checks in
Ada. Turning these sorts of things off is silly.

I can see that there are some cases where the properties are too expensive
to verify at runtime. It would be nice if there were a way to turn off
those (AND ONLY THOSE) properties. But Ada doesn't have that sort of
granularity, so I wouldn't bother writing them in the first place. (At
least not as preconditions; most of my programs have extensive
tracing/self-checking modes that can be enabled unit-by-unit; that's the
place for such expensive things.)

>> For example, in this "Is_Sorted" example, if you are assuming that some
>> object is sorted, then it should have a predicate to that effect. In
>> such a case, the compiler would be able to eliminate all of the
>> precondition checks on calls; the only time the predicate would be
>> checked is when the object is initially created (which certainly should
>> not be on any critical path), or if it is modified by a routine that
>> doesn't include Is_Sorted in its postcondition (which is clearly the
>> source of potential bugs as well).
>
> OK, maybe this would work for Is_Sorted. (I am a bit sceptical, but
> never mind.)

>Here is another example. Assume some Big-Natural-Number-Type Num.
>I need a function that tells me whether a number N of type Num is prime
>or not:
>
>   function Is_Prime (N : Num) return Boolean is
>      (Is_Rough_Prime (N, To_Num (2)));
>
>   function Is_Rough_Prime (N, Lowest_Factor : Num) return Boolean is
>      (if Lowest_Factor = N then True
>       elsif N mod Lowest_Factor = To_Num (0) then False
>       else Is_Rough_Prime (N, Lowest_Factor + 1));
>
>If N is divisible by any of 2, 3, 4, 5, ..., N-1, then N isn't prime.
>If N is divisible by none of them, it is prime.
>
>So the above specifies properly what I want to have. But it is also
>terribly slow. (And, depending on the compiler's ability to optimize for
>storage, it could also overrun the stack.) If N is of a size of interest
>for cryptographic applications, the sun will long have gone dark before
>the program finishes. (Assuming your computer would continue to run for
>that long.) I.e., this is a valid specification, but a useless test.

Correct. If it hurts, don't write that. :-)

I don't begin to believe that all program properties can be proved.
Indeed, there is a major problem in that there is no good way to specify
which properties a subprogram does *not* affect. There is an infinite
number of such properties, so specifying them one by one in the
postcondition:

    Is_Sorted (Arr) = Is_Sorted (Arr)'Old and ...

is madness. (And even in a small system, there are a lot of properties.
Consider just the interesting properties of a Vector container. The
length, capacity, and tampering state all immediately come to mind, and
most routines change none of them. How to communicate that?)

>In practice, people usually call the Miller-Rabin primality test:
>
>   function MR_Is_Prime (N : Num; Repetitions : Natural := 500)
>      return Boolean
>      with Post => MR_Is_Prime'Result = Is_Prime (N);
>
>As a specification, the postcondition is useful.

I disagree, actually. I think any useful postcondition has to be
reasonably executable.
Else there is no difference from a comment, and you would be better served
using a comment, as a comment won't cause all of the easy checks to be
thrown out along with this silly one.

> For testing with tiny numbers, it might be OK. But for testing with
> realistically-sized N, or for production code, this test *must* be
> deactivated. The user cannot wait for Is_Prime to terminate.
>
> By your logic, disabling the test would be bad. Thus, the compiler would
> eventually have to prove the fact that the Miller-Rabin test is
> mathematically correct, and always gives the proper result, right?

No, by my logic, *writing* a test that you can't actually execute is bad,
as it tells no one anything useful. Anyway, I answered this complaint
earlier in this note, so I won't repeat myself.

>But proving such properties is mathematically deep, and way beyond the
>abilities of the theorem provers we actually have. (And I would not hold
>my breath waiting for this to change.)

If you can't execute it, and you can't prove it, what exact good is it
over having a comment

    -- Returns True if N is prime.

??? I can't think of any.

> Even worse, the Miller-Rabin test is a probabilistic test. There is some
> chance that a composite number actually passes the test. The chance is
> very small -- for the default Repetitions, the chance is below
> 2^{-1000}, so neglecting this risk is perfectly OK.

Well, obviously that depends on the application.

> But the usual tools for program correctness (or for program properties)
> are not able to deal with probabilistic results.

For good reason; in some cases, they're simply not acceptable.

>> In the absence of those sorts of things, you are just hiding potential
>> flaws.
>
>Agreed! But at some point, someone (some human, not the compiler!) has to
>make the choice of how to proceed if a relevant property has not been
>formally proven. Neither simply rejecting the program, nor inserting a
>potentially huge ballast of additional tests, is always an option.
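For what it's worth, the probabilistic test under discussion can be
sketched for machine-sized N (a hedged illustration only: real
cryptographic code needs big-number arithmetic, and all names here are
mine, not from any library; the 2^{-1000} figure quoted above is the
standard per-round error bound of 1/4 raised to 500 repetitions):

```ada
--  Illustrative Miller-Rabin test; sound only for N < 2**32, since
--  products are taken in a 64-bit modular type.
with Ada.Numerics.Discrete_Random;

function MR_Is_Prime (N : Positive; Repetitions : Positive := 40)
   return Boolean
is
   subtype Seed_Range is Positive range 1 .. 2**30;
   package Rand is new Ada.Numerics.Discrete_Random (Seed_Range);
   Gen : Rand.Generator;

   type U64 is mod 2**64;

   function Mul_Mod (A, B, M : U64) return U64 is (A * B mod M);
   --  No overflow while M < 2**32; enough for this sketch.

   function Pow_Mod (Base, Exp, M : U64) return U64 is
      Result : U64 := 1;
      B      : U64 := Base mod M;
      E      : U64 := Exp;
   begin
      while E > 0 loop                    --  square-and-multiply
         if E mod 2 = 1 then
            Result := Mul_Mod (Result, B, M);
         end if;
         B := Mul_Mod (B, B, M);
         E := E / 2;
      end loop;
      return Result;
   end Pow_Mod;

   M : constant U64 := U64 (N);
   D : U64;
   S : Natural := 0;
begin
   if N < 2 then return False; end if;
   if N < 4 then return True;  end if;
   if N mod 2 = 0 then return False; end if;

   D := M - 1;                            --  N - 1 = D * 2**S, D odd
   while D mod 2 = 0 loop
      D := D / 2;
      S := S + 1;
   end loop;

   Rand.Reset (Gen);
   for Rep in 1 .. Repetitions loop
      declare
         A : constant U64 := 2 + U64 (Rand.Random (Gen)) mod (M - 3);
         X : U64 := Pow_Mod (A, D, M);
         Witness : Boolean := X /= 1 and then X /= M - 1;
      begin
         for I in 1 .. S - 1 loop
            exit when not Witness;
            X := Mul_Mod (X, X, M);
            Witness := X /= M - 1;
         end loop;
         if Witness then
            return False;                 --  A proves N composite
         end if;
      end;
   end loop;
   return True;  --  probably prime; error <= 4**(-Repetitions)
end MR_Is_Prime;
```

Note that the result is one-sided: a False answer is a proof of
compositeness, while a True answer carries the probabilistic caveat that
the postcondition debate above is about.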
I don't think anyone would ever want a system that *required* proving
everything. The important thing, IMHO, is that you can find out what
wasn't proven, so you can determine whether to fix it (via additional
specifications) or whether to ignore it (the obvious case being that the
unproven check is on a non-executable path).

>The language, the compiler, and other tools may support you in writing
>good software, especially medium- and high-assurance software. Ada has
>been a great tool for that purpose from its very beginning, and Ada 2012
>has made a significant step forward in that direction. But expecting too
>much from the tools will very quickly get in your way, and eventually
>also destroy the support for the tools.

True enough. Expecting proof to be anything more than another way to
detect errors early is the root of the problem. It's useful to know what
the compiler doesn't prove, in order that one can decide to ignore it, but
clearly ignoring it is the default (just as it is for regular Ada runtime
checks). No one should ever be forced to change any code unless a proof
determines that a check *will* fail (or there is a possibility of raising
an uncontracted exception -- but no one would ever be required to use an
exception contract).

I want to bring this tool to people that would not go out of their way to
use it. (I.e., me. :-) That means it has to be in the compiler, so that
they get the benefits automatically, because there is no way I'm going out
and buying (plus learning) a separate tool to do those things. I suspect
that many (perhaps most) programmers are like me.

Like everyone, I want it all, for free, right now. :-) Only Ada comes
close, and I just want to make it closer.

                               Randy.