From: Stefan.Lucks@uni-weimar.de
Newsgroups: comp.lang.ada
Subject: Re: {Pre,Post}conditions and side effects
Date: Mon, 11 May 2015 12:35:47 +0200
Organization: Bauhaus-Universitaet Weimar

On Fri, 8 May 2015, Randy Brukardt wrote:

>>>> Turning off the checks just hides the problem.
>>
>> *IF* there is a problem at all.
>
> See above.
> There is *always* a problem; the only question is whether you are
> willing to defend against it or not.

Yes, and no. Having worked a bit with SPARK, my experience is mixed. When
the tool fails to verify my program, the investigation sometimes actually
reveals a subtle bug, and I am happy to have used the tool. But quite as
often, the verifier simply fails on simple and indisputable facts, such
as proving that ((p-1) + 1) mod p is zero. Recently, when trying to write
a sorting procedure in SPARK, the prover was unable to figure out
that, given an array A and two indices i /= j in the proper range, the new
array I get from A by swapping A(i) and A(j) (without changing any of the
values A(k) for k not in {i,j}) is a permutation of A. I haven't yet
solved the problem.

So the ability of any of the tools we have now (and, I would argue, in the
foreseeable future) to prove program correctness is very limited. If a
compiler with such limited abilities turns my logarithmic-time search
routine into a linear-time routine, just because it couldn't prove that
the input to the routine is always sorted, then the compiler is broken.

Proving program properties (apparently you don't like the word
"correctness" in that context) is a *tool* for the programmer. If properly
used, it can be an extremely useful tool, especially for medium- and
high-assurance software. But doing foolish things if the proof fails, or
strictly requiring that all relevant properties actually be proven, would
turn this tool from something useful into a terrible burden.

> For example, in this "Is_Sorted" example, if you are assuming that some
> object is sorted, then it should have a predicate to that effect.
> In such a
> case, the compiler would be able to eliminate all of the precondition
> checks on calls; the only time the predicate would be checked is when
> the object is initially created (which certainly should not be on any
> critical path), or if it is modified by a routine that doesn't include
> Is_Sorted in its postcondition (which is clearly the source of potential
> bugs as well).

OK, maybe this would work for Is_Sorted. (I am a bit sceptical, but never
mind.)

Here is another example. Assume some big-natural-number type Num. I need a
function that tells me whether a number N of type Num is prime or not:

   function Is_Rough_Prime(N, Lowest_Factor: Num) return Boolean is
     (if    Lowest_Factor = N               then True
      elsif N mod Lowest_Factor = To_Num(0) then False
      else  Is_Rough_Prime(N, Lowest_Factor + To_Num(1)));
      -- assumes N >= 2

   function Is_Prime(N: Num) return Boolean is
     (Is_Rough_Prime(N, To_Num(2)));

If N is divisible by any of 2, 3, 4, 5, ..., N-1, then N isn't prime.
If N is divisible by none of them, it is prime. So the above properly
specifies what I want to have. But it is also terribly slow. (And,
depending on the compiler's ability to optimize for storage, it could
also overrun the stack.) If N is of a size of interest for cryptographic
applications, the sun will long have gone dark before the program
finishes. (Assuming your computer would continue to run for that long.)
I.e., this is a valid specification, but a useless test.

In practice, people usually call the Miller-Rabin primality test:

   function MR_Is_Prime(N: Num; Repetitions: Natural := 500) return Boolean
     with Post => MR_Is_Prime'Result = Is_Prime(N);

As a specification, the postcondition is useful. For testing with tiny
numbers, it might be OK. But for testing with realistically-sized N, or
for production code, this check *must* be deactivated. The user cannot
wait for Is_Prime to terminate.

By your logic, disabling the check would be bad.
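To make the contrast concrete, here is a minimal sketch in Python (not Ada;
just an illustration, with function names of my choosing): trial division
plays the role of the slow specification Is_Prime, and a standard
Miller-Rabin test plays the role of MR_Is_Prime. The postcondition can only
realistically be checked for tiny N.

```python
import random

def is_prime_slow(n):
    """The 'specification': trial division by 2 .. n-1. Hopelessly slow for big n."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, n))

def mr_is_prime(n, repetitions=50):
    """Miller-Rabin: lets a composite n pass with probability at most 4**-repetitions."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 = 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(repetitions):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)          # a**d mod n, via fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a witnesses that n is composite
    return True                   # probably prime

# The postcondition MR_Is_Prime'Result = Is_Prime(N), checkable for tiny N only:
assert all(mr_is_prime(n) == is_prime_slow(n) for n in range(2, 500))
```

For a 2048-bit N, mr_is_prime finishes in well under a second, while
is_prime_slow would never terminate in practice; that is exactly why the
postcondition check has to be switched off outside toy tests.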
Thus, the compiler would eventually have to prove that the Miller-Rabin
test is mathematically correct, and always gives the proper result, right?

But proving such properties is mathematically deep, and way beyond the
abilities of the theorem provers we actually have. (And I would not hold
my breath waiting for this to change.)

Even worse, the Miller-Rabin test is a probabilistic test. There is some
chance that a composite number actually passes the test. The chance is
very small -- each repetition lets a composite N pass with probability at
most 1/4, so for the default of 500 repetitions the chance is below
4^{-500} = 2^{-1000} -- and neglecting this risk is perfectly OK. But the
usual tools for program correctness (or for program properties) are not
able to deal with probabilistic results.

> In the absence of those sorts of things, you are just hiding potential
> flaws.

Agreed! But at some point, someone (some human, not the compiler!) has to
decide how to proceed if a relevant property has not been formally proven.
Neither simply rejecting the program, nor inserting a potentially huge
ballast of additional tests, is always an option.

The language, the compiler, and other tools may support you in writing
good software, especially medium- and high-assurance software. Ada has
been a great tool for that purpose from its very beginning, and Ada 2012
has made a significant step forward in that direction. But expecting too
much from the tools will very quickly get in your way, and eventually
also destroy the support for the tools.

Stefan

------ I love the taste of Cryptanalysis in the morning! ------
uni-weimar.de/de/medien/professuren/mediensicherheit/people/stefan-lucks
--Stefan.Lucks (at) uni-weimar.de, Bauhaus-Universität Weimar, Germany--