On Fri, 8 May 2015, Randy Brukardt wrote:

>>>> Turning off the checks just hides the problem.
>>
>> *IF* there is a problem at all.
>
> See above. There is *always* a problem; the only question is whether you
> are willing to defend against it or not.

Yes, and no. Having worked a bit with SPARK, my experience is mixed. When the
tool fails to verify my program, the investigation sometimes actually reveals
a subtle bug, and then I am happy to have used the tool. But just as often,
the verifier simply fails at some simple and indisputable fact, such as
proving that ((p-1) + 1) mod p is zero.

Recently, when trying to write a sorting procedure in SPARK, the prover was
unable to figure out that, given an array A and two indices i /= j in the
proper range, the new array I get from A by swapping A(i) and A(j) (without
changing any of the values A(k) for k not in {i, j}) is a permutation of A.
I haven't solved that problem yet.

So the ability of any of the tools we have now (and, I would argue, in the
foreseeable future) to prove program correctness is very limited. If a
compiler with such limited abilities turns my logarithmic-time search routine
into a linear-time routine, just because it couldn't prove that the input to
the routine is always sorted, then the compiler is broken.

Proving program properties (apparently you don't like the word "correctness"
in that context) is a *tool* for the programmer. Properly used, it can be an
extremely useful tool, especially for medium- and high-assurance software.
But doing foolish things when a proof fails, or strictly requiring that all
relevant properties actually be proven, would turn this tool from something
useful into a terrible burden.

> For example, in this "Is_Sorted" example, if you are assuming that some
> object is sorted, then it should have a predicate to that effect. In such a
> case, the compiler would be able to eliminate all of the precondition checks
> on calls, the only time the predicate would be checked is when the object is
> initially created (which certainly should not be on any critical path), or
> if it is modified by a routine that doesn't include Is_Sorted in its
> postcondition (which is clearly the source of potential bugs as well).

OK, maybe this would work for Is_Sorted. (I am a bit sceptical, but never
mind.) Here is another example. Assume some big-natural-number type Num. I
need a function that tells me whether a number N of type Num is prime or not:

   function Is_Rough_Prime (N, Lowest_Factor : Num) return Boolean is
     (if Lowest_Factor = N then True
      elsif N mod Lowest_Factor = To_Num (0) then False
      else Is_Rough_Prime (N, Lowest_Factor + To_Num (1)));

   function Is_Prime (N : Num) return Boolean is
     (Is_Rough_Prime (N, To_Num (2)));

If N is divisible by any of 2, 3, 4, 5, ..., N-1, then N isn't prime. If N is
divisible by none of them, it is prime. So the above specifies properly what
I want to have. But it is also terribly slow. (And, depending on the
compiler's ability to optimize for storage, it could also overrun the stack.)
If N is of a size of interest for cryptographic applications, the sun will
long have gone dark before the program finishes. (Assuming your computer
keeps running that long.) I.e., this is a valid specification, but a useless
test.

In practice, people usually call the Miller-Rabin primality test:

   function MR_Is_Prime (N : Num; Repetitions : Natural := 500) return Boolean
     with Post => MR_Is_Prime'Result = Is_Prime (N);

As a specification, the postcondition is useful.
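Whether that postcondition is also *executed* is a separate decision. Below is
a minimal sketch of my own (not from the discussion above) of how the
declarations could be packaged so the check can be switched on for unit tests
and off elsewhere; the package name Primality, the toy definitions of Num and
To_Num, and the omitted Miller-Rabin body are placeholders. In Ada 2012 it is
the Assertion_Policy in effect where MR_Is_Prime is *declared* that decides
whether its Post is checked, so the pragma belongs with the package spec (or
in a project-wide configuration file), not with the callers:

   pragma Assertion_Policy (Post => Ignore);    --  production / large N
   --  pragma Assertion_Policy (Post => Check); --  unit tests with tiny N

   package Primality is

      --  Toy stand-ins, only to keep the sketch self-contained; a real
      --  big-number type would come from a bignum library.
      type Num is range 0 .. 2**31 - 1;
      function To_Num (X : Natural) return Num is (Num (X));

      --  Specification-only functions from above: mathematically precise,
      --  but hopelessly slow to execute for large N.
      function Is_Rough_Prime (N, Lowest_Factor : Num) return Boolean is
        (if Lowest_Factor = N then True
         elsif N mod Lowest_Factor = To_Num (0) then False
         else Is_Rough_Prime (N, Lowest_Factor + To_Num (1)));

      function Is_Prime (N : Num) return Boolean is
        (Is_Rough_Prime (N, To_Num (2)));

      --  The function production code actually calls; its body (the
      --  Miller-Rabin test itself) is not shown here.
      function MR_Is_Prime (N : Num; Repetitions : Natural := 500)
         return Boolean
        with Post => MR_Is_Prime'Result = Is_Prime (N);

   end Primality;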
Executing that check when testing with tiny numbers might be OK. But when
testing with realistically-sized N, or in production code, the check *must*
be deactivated: the user cannot wait for Is_Prime to terminate. By your
logic, disabling the check would be bad. Thus, the compiler would eventually
have to prove that the Miller-Rabin test is mathematically correct and always
gives the proper result, right? But proving such properties is mathematically
deep, and way beyond the abilities of the theorem provers we actually have.
(And I would not hold my breath waiting for this to change.)

Even worse, the Miller-Rabin test is a probabilistic test. There is some
chance that a composite number actually passes the test. That chance is very
small -- for the default Repetitions it is below 2^{-1000}, so neglecting
this risk is perfectly OK -- but the usual tools for program correctness (or
for program properties) are not able to deal with probabilistic results.

> In the absence of those sorts of things, you are just hiding potential
> flaws.

Agreed! But at some point, someone (some human, not the compiler!) has to
choose how to proceed if a relevant property has not been formally proven.
Neither simply rejecting the program, nor inserting a potentially huge
ballast of additional checks, is always an option.

The language, the compiler, and other tools can support you in writing good
software, especially medium- and high-assurance software. Ada has been a
great tool for that purpose from its very beginning, and Ada 2012 made a
significant step forward in that direction. But expecting too much from the
tools will very quickly get in your way, and eventually also destroy the
support for those tools.

Stefan

------ I love the taste of Cryptanalysis in the morning! ------
uni-weimar.de/de/medien/professuren/mediensicherheit/people/stefan-lucks
--Stefan.Lucks (at) uni-weimar.de, Bauhaus-Universität Weimar, Germany--